OpenAI GPT-4o is Coming - Top 5 New Features You Need to Know

The maker of ChatGPT showed off a number of upgrades during this week's OpenAI Spring Update.

Between a more human-like, natural-sounding voice and Google Lens-style vision capabilities, many of the most impressive features were revealed in a surprisingly fast series of live demos.

A lot has been going on this week, including the debut of the new iPad Pro (2024) and iPad Air (2024), so the OpenAI announcement may have slipped past you. Read on to discover the five biggest updates to ChatGPT that you may have missed.

There is a new model in town, which OpenAI calls GPT-4o. This is not ChatGPT-5, but a significant upgrade to OpenAI's existing model.

During the OpenAI Spring Update, CTO Mira Murati said that GPT-4o can reason across voice, text, and vision. This "omni" model is said to be much faster and more efficient than the current GPT-4.
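For readers who use the OpenAI API, here is a minimal sketch of what a plain text request to the new model could look like. It assumes the official openai Python package (v1.x), an OPENAI_API_KEY environment variable, and the "gpt-4o" model name OpenAI announced:

# Minimal sketch: a text request to GPT-4o via the Chat Completions API.
# Assumes: pip install openai (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # model name announced at the Spring Update
    messages=[
        {"role": "user", "content": "Summarize the OpenAI Spring Update in one sentence."}
    ],
)
print(response.choices[0].message.content)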

Based on some of the live demos, the model certainly seemed fast, especially in conversational voice mode, but more on that below.

GPT-4o is not locked behind the $20-a-month ChatGPT Plus subscription. In fact, OpenAI is making GPT-4o available to all users.

Free users get more than the native tools and updates GPT-4o brings to the table. They also gain access to custom chatbots and the GPT Store, with user-created models and tools.

Free users also get advanced data analysis tools, vision (image analysis), and Memory, which lets ChatGPT remember previous conversations.

You may wonder what paid users get now. According to OpenAI, paid subscribers will continue to get up to five times the capacity and queries that free users do.

The most interesting part of the OpenAI live demo was a voice conversation with ChatGPT.

The new voice assistant works in real time: you can interrupt it mid-response, ask it to change its tone, and have it react to your emotions.

During the live demo, the OpenAI presenters asked the voice assistant to tell a bedtime story. Throughout the demonstration, they repeatedly interrupted it and had it sound not just natural but dramatic and emotional. They also had it use a robotic voice, sing, and tell the story with more intensity.

It was all very impressive.

While many of the voice assistant features on display were impressive, the live translation tool really seemed to take things up a notch.

During the demonstration, Murati spoke to the voice assistant in Italian while Mark Chen asked it to translate from English to Italian and from Italian to English. It seemed to work pretty well, and it could be a real boon for travelers.
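The realtime voice pipeline shown on stage isn't something you can reproduce from the public API yet, but a rough text-only sketch of the same translation idea, again assuming the openai Python package and the gpt-4o model name, might look like this:

# Rough text-only stand-in for the live interpreter demo (hypothetical prompt).
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You are a live interpreter. If the user writes in English, reply only with "
    "the Italian translation; if the user writes in Italian, reply only with the "
    "English translation."
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Where is the nearest train station?"},
    ],
)
print(reply.choices[0].message.content)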

ChatGPT has gained a new native vision feature similar to Google Lens. Basically, it allows ChatGPT to "see" through your phone's camera.

The demo team showed ChatGPT an equation and asked it for help. The AI voice assistant walked through the math problem step by step without simply giving the answer.

It also appeared to see changes as they were made on the page.

Combined with the new desktop app, vision also seems to extend to viewing your desktop. In one demo, ChatGPT was able to look at code, analyze it, and explain what it should do and where potential problems might lie.

Chatting with the GPT-4o model could make it the perfect tutor.
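As a rough illustration of that tutoring scenario, here is a sketch of how an image could be sent to GPT-4o through the Chat Completions API. The image URL and tutoring prompt are hypothetical, and the on-stage demo used the live camera rather than an uploaded photo:

# Sketch: sending a photo of an equation to GPT-4o and asking for tutoring.
# The image URL below is a placeholder, not a real resource.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a patient math tutor. Walk through the steps; do not give the final answer outright.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Help me work through the equation in this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/handwritten-equation.jpg"}},
            ],
        },
    ],
)
print(response.choices[0].message.content)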

More updates and tools were mentioned during the OpenAI Spring Update, including the availability of a desktop app, as well as face detection and emotion recognition.

Note that not all of these features are available as of this writing. They are rolling out gradually over the next few weeks, and we don't know exactly when certain features will arrive.

For example, the new voice assistant does not appear to be available yet; when we tested it, it was still the old version.

The new model will require some hands-on testing, and we are already starting to see what it can do on our end. Check back with us as we put GPT-4o through its paces.
