ChatGPT-4o: 5 Biggest Upgrades You Need to Know

This week, OpenAI launched the latest version of its game-changing ChatGPT AI chatbot.

It's not ChatGPT-5, but the all-important 'o' at the end, which stands for 'omni', emphasizes that ChatGPT-4o is far more comfortable juggling voice, text, and vision in a single interaction.

Here are the 5 most important upgrades over its predecessors.

This is definitely the headline upgrade for casual users. Previously, the smarter GPT-4 was only accessible to those willing to fork out $20 per month for a Plus subscription. Now, thanks to efficiency improvements, OpenAI says GPT-4o is free for all users.

That doesn't mean there's no significant benefit to a paid subscription, however. Not only do paid users get five times more prompts per day (the conversation falls back to the more limited GPT-3.5 when you run out), but the big voice mode improvements will be off limits to free accounts at first (they're not available yet, but based on the demos, the voice and vision features look game-changing).

GPT-4 has a voice mode, but it is quite limited: it can only respond to one prompt at a time, much like Alexa, Google Assistant, or Siri. As the video below shows, that has changed dramatically with GPT-4o.

It's worth watching for yourself, but in summary: ChatGPT conjures up a "bedtime story about robots and love" in jaw-dropping real time. To please its audience, GPT-4o dials up the drama in its voice, switches to a robotic tone, and ends the story with a song, all on demand.

Crucially, it responded to all these changes without losing the main thread of the conversation — something even the best smart speakers can't handle today.

The impressive voice mode presentation was followed by an even more impressive demonstration of the vision capabilities. GPT-4o was able to help solve a handwritten linear equation captured through the phone's camera in real time. Crucially, it did so without simply giving away the answer, as requested.

At the end of the demo, when "I ❤️ ChatGPT" was written down for the AI to "see", it responded with flattered delight. It's not difficult to imagine how this could be used in the real world, such as explaining a piece of code or summarizing foreign-language text in English. And it's not just text: a second demo correctly detected happiness and excitement in the face of a freshly taken selfie.

At present, the improved vision capabilities appear to be aimed at still images. Still, OpenAI believes that in the near future GPT-4o will be able to do things like watch a sporting event over video and explain the rules.

ChatGPT-4 isn't exactly slow, but you can certainly see the gears spinning at times, especially with more complex queries. According to OpenAI, ChatGPT-4o is "much faster", and the difference is certainly noticeable in use.

If you want real numbers, XDA Developers ran some benchmarks.

GPT-4o produced a 488-word answer in under 12 seconds, while a similar answer "under GPT-4 may require nearly one minute to generate". It also generated a CSV file in under a minute, whereas "GPT-4 took almost as long just to generate the cities used in this example".

The web version may be enough for most people, but there's good news for those who crave a desktop app.

OpenAI has released a dedicated Mac app, currently in early access for Plus subscribers. Since this is a staggered rollout, however, you will have to wait until you receive an email from OpenAI with a download link. Even if you track down a legitimate copy of the .dmg file, you won't be able to use it until your account has been granted access.

What about Windows? Well, OpenAI says a Windows app should be ready by the end of 2024. Perhaps the delay is because Microsoft is still pushing Windows 11 users toward its ChatGPT-powered Copilot.
