Brenda Sharton of Dechert is no stranger to litigation at the cutting edge of technological innovation.
While on maternity leave in the 1990s, she read about the Internet's attraction to millions of users and soon became an authority on its intersection with privacy law.
In recent years, she has had a sense of déjà vu, reading up on an emerging technology and explaining it to the courts while winning the dismissal of two of the first lawsuits against a generative AI company in the United States.
Sharton, managing partner of Dechert's Boston office and chair of the firm's cyber, privacy and AI practice, points out that artificial intelligence is “nothing new” and has been developed over more than a decade, largely behind the scenes.
But since the arrival of the latest wave of generative AI, led by OpenAI's ChatGPT, Sharton and a handful of specialists have had to defend companies that now face sweeping copyright and privacy claims that could hamper the emerging industry.
Sharton's most high-profile AI case was a proposed class action lawsuit against her client Prisma Labs, maker of the popular photo editing tool Lensa. As she put it, the plaintiff effectively claimed that “everyone in Illinois who has ever uploaded a photograph to the Internet” was harmed by the software allegedly being trained on images scraped from the Internet without their express consent.
But a federal judge ruled in August that the plaintiff had failed to establish “particular and specific” infringement and could not prove that their images were included in the extensive data set. “The judges said you have to explain what was wrong,” says Sharton, and also “what was done that violated the law.”
In other cases, the boundaries of what AI companies call “fair use” of copyrighted material have yet to be determined.
Andy Gass, a partner at Latham & Watkins, defends OpenAI in cases filed by publishers including The New York Times and DeviantArt alleging copyright infringement. He also defends rival AI company Anthropic in lawsuits by music publishers alleging illegal copyright infringement.
Gass says the range of cases currently being litigated is “both fascinating and quite important,” though he cautions against reading early decisions as predictors of future AI litigation.
“The problems that we’re seeing and dealing with now are, in a way, fundamental problems,” he says. “But they will be very different from those presented in three, five or ten years.”
Gass and his team, who had been working on generative AI issues long before ChatGPT was released to much fanfare in late 2022, pair lawyers with technologists from some of the companies they represent. They go into detail about how the models are trained so that they can analyze the copyright issues that may arise.
“(AI litigation) involves very novel technology but very well-established legal principles,” Gass says. “As a lawyer, the challenge is to explain this to the judges.”
Sharton says walking through the details with the courts is one of the most difficult aspects of working as an AI lawyer. “You have to educate the judges a lot,” she says. “It’s a big learning curve for them too. And they . . . don’t have the luxury of specializing (in certain areas) like lawyers do.”
Warrington Parker, managing partner of Crowell & Moring's San Francisco office, represents defendant ROSS Intelligence, an AI-based legal tech company, in one of the first generative AI copyright infringement cases, filed by Thomson Reuters in May 2020.
Parker argued this month before Delaware Judge Stephanos Bibas in the still-unresolved lawsuit. He's not sure the judge is “persuaded” by his arguments, including his assertion that the AI training data used by ROSS has a public benefit and should be considered fair use. “But I think he’s interested.”
It is not just the judges; there is also the public to consider. Although none of the current lawsuits has yet reached a jury trial, and some doubt they ever will, given their complexity, some lawyers defending AI clients worry that negative public perceptions of AI could sway a panel's opinion.
For a jury, “the idea that you have taken someone else’s work . . . will be a problem,” says Parker, though he does not accept that characterization.
The question of how the new Trump administration will regulate the technology is particularly relevant for firms with AI clients.
If the new administration decides to give companies more leeway, plaintiffs' lawyers “won't be able to rely on, for example, the Federal Trade Commission's actions, which they usually do,” Sharton says.
Moreover, even if some cases are lost, the outcome of existing litigation may not be enough to limit the sector's growth. “If it's just damages, I think some actors will pay those damages and move on,” Parker says. “In other words, it’s the cost of doing business.”
Meanwhile, there are other anecdotal signs that the judiciary is taking notice of the capabilities of generative AI. During a case management conference earlier this year, 90-year-old Judge Alvin Hellerstein demonstrated his personal interest in the issue. The legendary judge “pulled out his iPad and played a song that had been generated by an AI tool that was somehow about his career as a judge,” Gass says.
Even less adventurous judges will end up with a better understanding of the technology, Gass predicts. Drawing an analogy to the early Internet age, he says: “We're still in the dial-up modem phase of developing these tools.”