OpenAI just dropped new Sora videos that have to be seen to be believed.

The latest clips from OpenAI's Sora generative video model look closer to a Hollywood production than any AI-generated footage we've seen so far, and each was created from a single prompt.

Sora is still only accessible to OpenAI and a small group of testers, but as they share results on social media we are learning more about what it can do.

The first wave of videos included puppies playing in the snow, a couple in Tokyo, and a flyover of a 19th-century California gold mining town.

Now we are seeing one-minute videos that look like full productions, complete with multiple shots, effects, and smooth motion, all from a single prompt.

What is in the new clips?

The previews we have seen hint at what true generative entertainment could offer. When paired with other AI models for sound and lip syncing, or with production-level tools like LTX Studio, that creative potential really starts to show.

Blaine Brown, a creator on X, shared a music video built by combining the Sora alien posted by OpenAI's Bill Peebles with a song made in Suno AI and lip syncing from Pika Labs.

Tim Brooks' museum fly-through is remarkable for its range of shots and motion control; it looks like indoor drone footage.

Others, such as a couple eating dinner inside a giant fish tank, show how it handles complex action while keeping the motion consistent across the entire clip.

How does Sora compare?

Sora marks a turning point for AI video. It combines the diffusion techniques behind image generators such as Midjourney, Stable Diffusion, and DALL-E with the transformer architecture used in chatbots such as ChatGPT.
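OpenAI has only described this design at a high level, but the core "diffusion transformer" idea can be sketched in a few lines of PyTorch: a video is cut into spacetime patches, each patch becomes a token, and a transformer learns to predict (and remove) the noise in those tokens. Everything below, from the class name to the sizes, is an illustrative assumption rather than OpenAI's actual code.

```python
# A minimal sketch (not OpenAI's code) of a diffusion transformer for video:
# spacetime patches in, predicted noise out. All dimensions are made up.
import torch
import torch.nn as nn

class TinyVideoDiffusionTransformer(nn.Module):
    def __init__(self, patch_dim=768, n_heads=8, n_layers=4, max_tokens=1024):
        super().__init__()
        self.pos_emb = nn.Parameter(torch.zeros(1, max_tokens, patch_dim))
        self.time_emb = nn.Linear(1, patch_dim)  # embeds the noise level t
        layer = nn.TransformerEncoderLayer(
            d_model=patch_dim, nhead=n_heads, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.to_noise = nn.Linear(patch_dim, patch_dim)  # predicts added noise

    def forward(self, noisy_patches, t):
        # noisy_patches: (batch, tokens, patch_dim) spacetime patch embeddings
        # t: (batch, 1) diffusion timestep in [0, 1]
        x = noisy_patches + self.pos_emb[:, : noisy_patches.size(1)]
        x = x + self.time_emb(t).unsqueeze(1)  # condition every token on t
        return self.to_noise(self.blocks(x))

# One denoising step: subtract a fraction of the predicted noise.
model = TinyVideoDiffusionTransformer()
patches = torch.randn(1, 256, 768)  # e.g. 256 noisy spacetime patches
t = torch.tensor([[0.9]])           # high noise level early in sampling
less_noisy = patches - 0.1 * model(patches, t)
```

Repeating that last step while lowering t is what gradually turns pure noise into a coherent clip.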

Right now it can do things none of the other popular AI video models can, including Runway's Gen-2, Pika Labs' Pika 1.0, and Stability AI's Stable Video Diffusion 1.1.

The current generation of AI video tools can only produce clips of one to four seconds and sometimes struggle with complex motion, although their realism comes close to Sora's.

Other AI companies are paying close attention to Sora's capabilities and how it was built. Stability AI has confirmed that Stable Diffusion 3 will use a similar architecture, and a video model based on it will probably follow at some point.

Runway has already updated its Gen-2 model to make characters more realistic, with noticeably more consistent motion and character rendering, and Pika Labs has announced lip sync as a standout feature.
