OpenAI Announces GPT-4o
OpenAI has announced GPT-4o, its new flagship model that can reason across audio, vision, and text in real time.
GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction: it accepts any combination of text, audio, image, and video as input and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. Compared to existing models, GPT-4o is especially strong at vision and audio understanding.
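Since GPT-4o is available through the API, a request looks the same as for earlier chat models, just with the new model name. Below is a minimal sketch of a text-plus-image request using the OpenAI Python SDK; the prompt and image URL are illustrative placeholders, not values from the announcement:

```python
# Minimal sketch: calling GPT-4o via the OpenAI Python SDK (v1.x).
# The prompt text and image URL are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                # Text and image parts can be mixed in a single message.
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```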