5 New Sora Videos That Take GenAI to the Next Level

OpenAI has released a batch of new videos generated with its AI model Sora; the clips, posted to TikTok, include a roller-skating horse, a bubble dragon, and a mythical teapot.

The AI lab has been teasing Sora since it was first unveiled in February, leading to intense speculation about when the public will finally be able to try it out.

In a recent interview on Marques Brownlee's WVFRM podcast, the Sora team said that public availability is unlikely to happen anytime soon. This appears to be partly due to the need for further safety studies and partly because generating a video takes minutes, not seconds.

For now, we have to make do with the videos produced by the team itself, often in response to quick suggestions from people on social media. One of the new videos came in response to a request to show a "family of cute rabbits eating dinner in a den".

There are now several AI video models and tools on the market: Runway is already approaching a year since its release, and Pika Labs has partnered with ElevenLabs to expand into sound effects and lip-synced dialogue.

None of them, including the strikingly realistic Stable Video Diffusion, comes close to what is possible with Sora, though that may only be a matter of time. Sora is also slow: the team told Brownlee that by the time a video is finally generated, there is enough time to go somewhere, make a cup of coffee, and come back.

The team also took advantage of the vast number of GPUs available at OpenAI to train Sora, and adopted a new type of architecture that blends technology from models like GPT-4 and DALL-E. In addition, Sora was trained on a very diverse dataset covering a variety of sizes, lengths, and resolutions.
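To make that idea a little more concrete: OpenAI's technical write-up describes Sora as a diffusion transformer that, much as GPT models tokenize text, carves videos into "spacetime patches" it can treat as a token sequence, which is part of what lets it train on clips of many different sizes, lengths, and resolutions. The exact implementation has not been published, so the following Python sketch is purely illustrative; the function name, patch sizes, and cropping strategy are all hypothetical.

```python
# Illustrative sketch only: OpenAI has not released Sora's code. This shows the
# general "spacetime patch" idea, where a video of any length and resolution is
# cut into fixed-size blocks that a transformer can process as a token sequence.
import numpy as np

def video_to_spacetime_patches(video: np.ndarray, pt: int = 4, ph: int = 16, pw: int = 16) -> np.ndarray:
    """Split a (frames, height, width, channels) video into flattened spacetime patches.

    Frames, rows, and columns that don't divide evenly are cropped, which is one
    simple way to handle clips of varying sizes, lengths, and resolutions.
    """
    t, h, w, c = video.shape
    t, h, w = (t // pt) * pt, (h // ph) * ph, (w // pw) * pw   # crop to whole patches
    video = video[:t, :h, :w]
    # Reshape into a grid of (pt x ph x pw) blocks, then flatten each block into one token.
    patches = video.reshape(t // pt, pt, h // ph, ph, w // pw, pw, c)
    patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)           # (nt, nh, nw, pt, ph, pw, c)
    return patches.reshape(-1, pt * ph * pw * c)               # (num_tokens, token_dim)

# A 2-second 240x320 clip at 16 fps becomes a sequence of patch tokens:
clip = np.random.rand(32, 240, 320, 3).astype(np.float32)
tokens = video_to_spacetime_patches(clip)
print(tokens.shape)  # (2400, 3072) -> 8*15*20 tokens, each of dimension 4*16*16*3
```

In a real system each patch would then be embedded and fed to the transformer; the point of the sketch is simply that the same machinery accepts portrait, landscape, short, or long clips without resizing everything to one fixed shape.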

One of the more notable videos in this new round of clips is a dragon seemingly made of foam blowing a foam flame. The movement, quality, and physics are all impressively realized.

Currently, the team has minimal control over the output, as all prompting is done via text.

That will likely change by the time Sora is released to the public, as the team works on more granular controls for manipulating lighting, camera movement, and orientation. These are all features already available on other platforms such as Pika and Runway.

Sora's ability to create something amazing from a short prompt is impressive. In one of the new clips, a teapot pours water into a cup, and the cup fills with a swirl of color and movement.

Many of the new videos have been shared to TikTok in a vertical format, demonstrating Sora's ability to create vertical video from nothing but a text prompt.

We all want to play with Sora. It is an impressive tool with use cases in video production, marketing, architecture, and many other areas. One of the new videos shows a walk-through of a rather unusual kitchen with a bed on one side.

The Sora team told Brownlee that there is work to be done before Sora is ready to be turned into a real product or incorporated into ChatGPT.

Tim Brooks, Sora's research lead, said: "Our motivation for wanting to get Sora out in this way before it is ready is to find out what is possible and what safety studies are needed."

"We wanted to show the world that this technology is on the horizon and hear from people how it can help," he said, gathering feedback from safety researchers about the risks it poses

He stated that not only is Sora not a product, but there is not even a timeline for when it will be commercialized.
