OpenAI is launching a faster, cheaper version of the artificial intelligence model that underpins its ChatGPT chatbot, as the startup tries to maintain its lead in an increasingly crowded market.
During a livestream on Monday, OpenAI unveiled GPT-4o, an updated version of its more than year-old GPT-4 model. The new large language model, trained on massive amounts of data from the internet, will be better at handling text, audio and video in real time. The updates will roll out over the coming weeks.
Ask a question aloud and the system can reply with an audio response within milliseconds, allowing for a smoother conversation, the company says. Likewise, give the system a request involving an image and it can respond with an image.
The update will bring free users a number of features previously reserved for paid ChatGPT subscribers, such as the ability to search the web for answers to queries, talk to the chatbot and hear responses in different voices, and tell it to save details that the chatbot will remember in future conversations.
The release of GPT-4o could shake up the rapidly evolving artificial intelligence landscape, where GPT-4 remains the gold standard. A growing number of startups and major tech companies, including Anthropic, Cohere and Alphabet Inc.’s Google, have recently released artificial intelligence models that they say match or beat GPT-4’s performance in certain tests.
The OpenAI announcement also came a day before Google's I/O developer conference. Google, an early leader in artificial intelligence, is expected to use the event to introduce more AI updates as it races to keep up with Microsoft Corp.-backed OpenAI.
Instead of relying on separate AI models to process different kinds of input, GPT-4o (the "o" stands for omni) combines voice, text and vision in a single model, allowing it to run faster than its predecessor.
But the new model hit some snags. The audio cut out repeatedly as the researchers spoke during the demonstration. The AI system also surprised the audience when, after walking a researcher through solving an algebra problem, it chimed in with a flirty voice: "Wow, that's what you're wearing."
OpenAI is beginning to roll out GPT-4o's new text and image capabilities today to some paid ChatGPT Plus and Team users, and will soon offer them to enterprise users. The company will make a new version of its "voice mode" assistant available to ChatGPT Plus users in the coming weeks.
As part of the updates, OpenAI said it is also opening its GPT Store, which offers customizable chatbots created by users, to everyone. Previously, it was available only to paying customers.
In recent weeks, speculation about OpenAI's next release has become a Silicon Valley parlor game. A mysterious new chatbot caused a stir among AI watchers after it appeared on a benchmarking website and seemed to rival GPT-4 in performance. OpenAI CEO Sam Altman coyly alluded to the chatbot on X, fueling rumors that his company was behind it.
The company is working on a wide range of products, including voice technology and video software. OpenAI is also developing a search feature for ChatGPT, Bloomberg previously reported.
On Friday, the company quashed some of the feverish speculation by saying it would not soon launch GPT-5, a long-awaited version of its model that some in the tech world believe will be radically more capable than current AI systems. The company also said it would not introduce a new search product, a tool that would compete with Google. Google shares rose on the news.
But after the event ended, Altman was quick to stoke the rumors again. "We'll have something to share soon," he wrote on X.