Runway, one of several AI startups developing video generation technology, today announced an API that lets developers and organizations build the company's generative AI models into third-party platforms, apps and services.
Currently available only in limited access (there’s a waitlist), the Runway API offers just one model to choose from – Gen-3 Alpha Turbo, a faster but less capable version of Runway’s flagship Gen-3 Alpha – and two plans, Build (aimed at individuals and teams) and Enterprise. The base price is one cent per credit (a second of video costs five credits), and Runway says that “trusted strategic partners,” including marketing group Omnicom, are already using the API.
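To make that pricing concrete, here is a minimal sketch of the cost math implied by the figures above (one cent per credit, five credits per second of video). The constants and function name are illustrative and not part of any Runway SDK.

```python
# Rough cost math for the Runway API based on the published rates:
# 1 credit = $0.01, and 1 second of generated video = 5 credits.

CREDIT_PRICE_USD = 0.01
CREDITS_PER_SECOND = 5

def estimated_cost_usd(video_seconds: float) -> float:
    """Estimate the dollar cost of generating `video_seconds` of footage."""
    return video_seconds * CREDITS_PER_SECOND * CREDIT_PRICE_USD

if __name__ == "__main__":
    # A 10-second Gen-3 Alpha Turbo clip comes to 50 credits, or $0.50.
    print(f"10s clip: ${estimated_cost_usd(10):.2f}")
```

In other words, a 10-second clip works out to roughly 50 cents at the base rate.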
The Runway API also comes with unusual disclosure requirements. All interfaces that use the API must display a “Powered by Runway” banner “in a distinguished location” linking to Runway's website, the company writes in a blog post, to “help users understand the technology behind [the applications] while complying with our terms of service.”
Runway, which is backed by investors including Salesforce, Google and Nvidia and was most recently valued at $1.5 billion, faces stiff competition in the video generation space from OpenAI, Google and Adobe, among others. OpenAI is expected to release its video generation model, Sora, in some form this fall, while startups like Luma Labs continue to refine their technologies.
Case in point: Luma today, perhaps not coincidentally, launched its own video generation API, which has no waitlist and goes beyond Runway's feature set, including the ability to “steer” the virtual camera in AI-generated scenes.
With the early launch of its API, Runway becomes one of the first AI vendors to offer a video generation model through an API. But while the API could help the company on its path to profitability (or at least help it recoup the high costs of training and running its models), it won't resolve the legal questions still hanging over these models and generative AI technology more broadly.
Runway's video generation models, like all video generation models, were trained on a vast number of video examples to “learn” the patterns in those videos and generate new footage. Where did the training data come from? Runway refuses to say, like many vendors these days – partly out of fear of losing competitive advantage.
But training details are also a potential source of IP-related lawsuits if Runway trained on copyrighted data without permission. There is evidence that this was indeed the case: a 404 Media report in July surfaced a Runway training-data spreadsheet that included links to YouTube channels belonging to Netflix, Disney, Rockstar Games, and creators like Linus Tech Tips and MKBHD.
It's unclear whether Runway ultimately used any of the videos in the spreadsheet to train its video models. In an interview with TechCrunch in June, Runway co-founder Anastasis Germanidis said only that the company uses “curated, internal datasets” for model training. But even if it did, Runway wouldn't be the only AI vendor taking copyright issues lightly.
Earlier this year, OpenAI CTO Mira Murati did not directly deny that Sora was trained on YouTube content. And Nvidia reportedly used YouTube videos to train an internal video generation model called Cosmos.
Many generative AI providers believe that the doctrine known as fair use offers legal cover, and they have said as much in court and in public statements. Others are less willing to take that risk and/or see a more “ethical” approach to model training as a selling point for their services. In developing its Firefly video generation models, for instance, Adobe is said to have offered artists payments in exchange for clips.
In its terms of service, Luma agrees to defend and hold API business customers harmless against damages arising from IP infringement claims. Other vendors, including OpenAI, offer similar indemnification policies; Runway does not, although it announced last December that it would work with stock media library Getty to develop “more commercially protected” versions of its products.
Whatever the outcome of the court cases over the legality of training on copyrighted content, one thing is clear: generative AI video tools threaten to turn the film and TV industry as we know it on its head.
A 2024 study commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, found that 75% of film production companies that adopted AI cut, consolidated or eliminated jobs after doing so. The study also estimates that more than 100,000 jobs in the U.S. entertainment industry could be lost to generative AI by 2026.