
The first music video created using OpenAI's unreleased Sora model is here

OpenAI wowed the tech community and many media and arts professionals earlier this year – while upsetting traditional videographers and artists – by introducing a new AI model called Sora that creates realistic, high-resolution, smooth videos of up to 60 seconds per clip.

The technology remains unreleased to the general public for now – OpenAI said at the time, back in February 2024, that it was making Sora "available to red teamers to assess critical areas for harms or risks" and to a small, select group of "visual artists, designers and filmmakers." But that hasn't stopped some of those early users from using it to create and publish new projects.

Now one of OpenAI's hand-picked Sora early access users, writer/director Paul Trillo, who in March was among the first in the world to demo third-party videos created using the model, has made what is billed as the "first official music video created with Sora by OpenAI."

The video was made for indie chillwave musician Washed Out (Ernest Weatherly Greene Jr.) and his new single "The Hardest Part." It's essentially a four-minute series of quick zoom shots through various scenes, all stitched together to create the illusion of one continuous zoom. Check it out below:

On his account on the social network X, Trillo posted that he had the idea for the video ten years ago but abandoned it. He also responded to questions from his followers, stating that the video was made from 55 individual clips generated by Sora, selected from a pool of 700, and assembled in Adobe Premiere.

Relatedly, Adobe recently announced that it would integrate Sora and other third-party AI video generation models into its subscription software Premiere Pro. However, no timeline has been set for this integration, so in the meantime those wanting to emulate Trillo's workflow would have to generate AI video clips in other third-party software such as Runway or Pika (since Sora remains unavailable to the public) and then save and import them into Premiere. Not the end of the world, but not as seamless as it could be.

In an interview, Washed Out/Greene said: "I'm looking forward to being able to integrate some of these brand new technologies and see how they can shape my results. So if that's groundbreaking, I'd love to be a part of it." The interview also touches on some of the specific prompts used.


Trillo also posted that he used only the text-to-video capabilities of the model, rather than taking still images shot or generated elsewhere and feeding them into the AI to add motion (a popular tactic among artists in the rapidly evolving AI video scene).

This example demonstrates Sora's power for creating media with AI, and is a helpful rejoinder to recently revealed information that another of the early demo videos, "Air Head" by Canadian creative studio Shy Kids, which features a man with a balloon for a head, actually relied heavily on other VFX and video editing tools such as rotoscoping in Adobe After Effects.

It also shows the continued desire of some creatives – in the music and video industries – to use new AI tools to express themselves and tell stories, even as many other creatives criticize the technology, and OpenAI specifically, as exploitative and infringing on the copyright of human artists by scraping their prior work for training without consent or compensation.
