OpenAI may be moving away from numbers when it names the next generation of its artificial intelligence models. At least, that's the suggestion from a recent presentation in Paris.
During a demonstration of ChatGPT Voice at the VivaTech conference, Romain Huet, Head of Developer Experience at OpenAI, showed a slide charting the potential growth of AI models over the coming years, and GPT-5 was not on it. Instead, it marked "today" as sitting between the GPT-3 era, the GPT-4 era, and GPT-Next.
I doubt the next-generation model will actually carry that moniker, but it is a hint that the company is moving away from GPT-5 as a brand. It also fits with how reluctant CEO Sam Altman has been, in recent interviews, to say when the model will come out.
In the world of artificial intelligence, naming remains a messy business, as companies try to stand out from the crowd while maintaining their hacker credentials. There is Grok, xAI's chatbot, and Groq, a new inference engine that also powers a chatbot. Then you have ChatGPT, Sora, Voice Engine, DALL-E, and more from OpenAI.
OpenAI began to make its mark with the release of GPT-3 and ChatGPT. That model was a step beyond anything we had seen before, especially in conversation, and progress has been exponential ever since. With GPT-4, we saw the first hints of multimodality and improved reasoning, and everyone expected GPT-5 to follow the same path. Then a small team at OpenAI trained GPT-4o, and everything changed.
Until last year, Altman was talking about GPT-5 being in training, but when quizzed on its release over the past few months, he has instead pivoted, hedged, and talked about "many impressive models" coming this year.
During his presentation on Wednesday, Huet suggested that we can expect to see OpenAI models of multiple sizes in the coming months and years. According to the slide he shared, by the end of the year we will see something codenamed GPT-Next, which I suspect is effectively Omni-2: a more sophisticated, better-trained, larger version of GPT-4o.
The graph shows a noticeable jump over what we have today, but it suggests this is not the big breakthrough, with better things still to come over the next few years.
In a recent safety update released in conjunction with the AI Seoul Summit, OpenAI said it spends a lot of time assessing the capabilities of a new model before its release. GPT-4o already represents a significant shift in AI, moving from inferring over text alone to natively understanding text, images, and video. Future models are likely to build on this and may require even more complex safety assessments.