OpenAI has called on a judge to dismiss the The New York Times lawsuit against Microsoft and OpenAI, accusing the newspaper of “hacking” their products.
They accuse The New York Times of fabricating copyright infringement through an exhaustive and manipulative process involving “tens of hundreds of attempts” and “deceptive prompts that blatantly violate OpenAI’s terms of use.”
The strongly worded court submission opens, “The allegations within the Times’s Complaint don’t meet its famously rigorous journalistic standards. The truth, which is able to come out in the midst of this case, is that the Times paid someone to hack OpenAI’s products.”
The New York Times is looking for extensive damages from each Microsoft and OpenAI.
While there’s an ever-growing pile of lawsuits involving AI firms from all corners of the creative industries, this one is poised to be a landmark case. It could reshape the landscape of AI development and copyright law.
However, you possibly can make certain Big Tech will fight tooth and nail. “Normal people don’t use OpenAI’s products in this manner,” OpenAI asserted of their recent filing.
The term “prompt engineering” or “red-teaming,” as mentioned by OpenAI in its legal filing, acts as a stress test designed to uncover vulnerabilities in AI systems.
Feeding generative AI systems specifically designed prompts coerces them into negating their guardrails and behaving erratically.
This has led to a spread of strange and potentially dangerous responses, resembling offering help to fabricate bombs or encouraging suicide and other harmful activities.
OpenAI’s submission, available here, is fierce, continuing, “OpenAI and the opposite defendants in these lawsuits will ultimately prevail because nobody—not even the New York Times—gets to monopolize facts or the principles of language.”
It also states, “Contrary to the allegations within the Complaint, nonetheless, ChatGPT is just not in any way an alternative choice to a subscription to The New York Times. In the actual world, people don’t use ChatGPT or every other OpenAI product for that purpose. Nor could they.”
This is crucial, because the NYT has to persuade the judge of economic damages resulting from OpenAI’s infringement.
Copyright: fair use or loophole?
It’s an open secret that generative AI models are readily trained on copyright data, some to a greater extent than others.
OpenAI admitted this in a prior submission to the UK House of Lords, stating, “Because copyright today covers virtually every form of human expression—including blog posts, photographs, forum posts, scraps of software code, and government documents—it might be unattainable to coach today’s leading AI models without using copyrighted materials.”
OpenAI went on in what some viewed as a Freudian slip, “Limiting training data to public domain books and drawings created greater than a century ago might yield an interesting experiment, but wouldn’t provide AI systems that meet the needs of today’s residents.”
There’s little doubt that AI firms intend to make use of copyrighted data. But that doesn’t mean copyright law, a pre-AI construct, isn’t on their side.
During a discussion in Davos, Switzerland, OpenAI’s CEO, Sam Altman, expressed his astonishment on the NYT lawsuit, clarifying a standard misconception in regards to the need for the newspaper’s data for training OpenAI’s models.
“We actually don’t must train on their data,” Altman stated, highlighting the negligible impact of excluding data from any single publisher on ChatGPT’s performance.
Nonetheless, OpenAI acknowledges the potential cumulative effect of multiple publishers withdrawing their content and is securing agreements to make use of content from media houses for AI training purposes.
A recent study from the Reuters Institute for the Study of Journalism on the University of Oxford found that some 48% of major news sites are actually blocking OpenAI’s web crawlers, which could severely limit the corporate’s access to fresh, high-quality data.
OpenAI and other tech firms will probably need to begin paying for data but remain unpenalized for his or her exploits up to now.