Higgsfield is a new artificial intelligence video generation platform, built using the same type of model that OpenAI used to create its impressive Sora engine.
The startup is funded by Menlo Ventures and is focused on providing as many camera and motion controls as possible, especially for human motion. It plans to begin offering access to the system at the end of this month.
According to a spokesperson, it can generate clips of up to 10 seconds, which is nowhere near Sora's minute-long shots but much longer than most existing models, and it is not yet known whether Sora's final release will offer clips that long.
I have not yet been able to try out Higgsfield myself, but from the few videos that have already appeared on social media, the major differentiator seems to be the controls.
The company's website states that "unparalleled personalization and control" are key features, along with "realistic human characters and movement".
Realistic human movement is not the forte of many AI models, whose characters often walk in slow motion, walk backward, or merge into themselves.
That's why Sora was a game changer: the first videos OpenAI shared included realistic human movement, with characters walking naturally down the street. Other AI video tools are also making progress in this area.
Many existing AI video models, including Stable Video Diffusion, are diffusion models, similar to the techniques that drive AI image generators.
Higgsfield, by contrast, is a transformer model, like the ones that power ChatGPT and Google Gemini, but it incorporates a diffusion model as well.
A spokesperson said that by combining these two technologies, "we can output ultra-smooth, realistic video".
This is similar to the approach OpenAI took with Sora and Stability AI took with Stable Diffusion 3 to improve prompt adherence and control.
According to Menlo Ventures, these architectures can be combined to build "world models".
These are "AI models so realistic that they can simulate the physical world, resulting in longer, smoother, more coherent sequences that rival professionally produced content."
The company is gradually rolling out access to Higgsfield, starting with a small group of content creators to test its limits before making it more widely available.
But if you can't wait, there is an iPhone app called Diffuse. It is built on the Higgsfield model and lets users personalize their videos and exercise finer control over movement.
For now, it is only available in Canada, India, the Philippines, South Africa, and some countries in Central Asia.
The long-term goal is to create a studio-grade video marketing platform for creators and businesses, along with a range of consumer products similar to the Diffuse app.