OpenAI will remove access to one of its ChatGPT voices after actress Scarlett Johansson objected that it sounded “eerily similar” to her own.
Earlier this week, the company announced it was “working on pausing” the Sky voice, one of a handful of options users can choose when conversing with the app.
Johansson revealed that Sam Altman, CEO of OpenAI, contacted her in September and again in May, asking whether she would agree to her voice being used in the system.
She declined, only to hear a voice assistant that sounded eerily like her own just days after the second request. She was, in her own words, “shocked, angered and in disbelief.” OpenAI responded by saying:
AI voices should not deliberately mimic a celebrity’s distinctive voice – Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.
Johansson is known to have voiced an AI in the past, for a role in the 2013 science-fiction film “Her” – a film Altman has declared himself a fan of. He also recently tweeted the word “her” without further explanation.
Johansson said she had to hire legal counsel to request the removal of Sky’s voice and information about how the company created it.
This dispute is a prescient warning of the identity harms AI will bring – harms that could affect any of us at any time.
Identity is a sensitive issue
Artificial intelligence is evolving at an incredible pace, with OpenAI’s ChatGPT leading the way. It’s very likely that AI assistants will soon be able to hold meaningful conversations with users and even form all sorts of “relationships” with them. This may well be why Johansson is concerned.
One thing is now irrefutably clear: we cannot legislate AI away. Instead, we need a right to identity, and with it a right to demand the removal or deletion of content that damages that identity.
But what exactly is “identity”? It’s a complex idea. Our identity may say nothing about our specific personal characteristics or qualities, but it is fundamental to who we are: something we build through lifelong choices.
But it’s also about more than just how we see ourselves, as celebrities show. It’s tied to our image. It’s a collective project – cultivated and shaped by the way others see us. And in this way, it’s linked to our personal characteristics, such as our voice, our facial expressions or the way we dress.
Minor attacks on our identity may have limited impact on their own, but they can add up like death by a thousand cuts.
Legal defense
As AI democratizes access to technologies that can manipulate images, audio and video, our identities become increasingly vulnerable to harms not covered by existing legal protections.
In Australia, students are already using generative AI to create sexually explicit deepfakes to bully other students.
But unlike such deepfakes, most identity breaches don’t violate criminal law or attract the attention of the eSafety Commissioner. Most legal remedies offer inadequate and piecemeal solutions that can’t repair the harm done. And in many Western democracies, these solutions require legal action that is more expensive than most can afford.
AI can be used to manipulate or create content that shows “you” doing things you haven’t done (or would never do). It can make you appear less competent or otherwise damage your reputation.
For example, it could make you appear drunk in a professional setting, as happened to former Speaker of the US House of Representatives Nancy Pelosi. It could show “you” vaping when that would get you kicked off your sports team, or place you in a pornographic deepfake video.
Australian law is lagging behind
The United States Congress recently proposed an enforceable right to privacy. But even without it, US protections exceed those offered in Australia.
In the United States, privacy is defended by a mix of legal claims, including claims against the publication of private facts, the portrayal of a subject in a false light, and the misappropriation of likeness (that is, co-opting part of another person’s identity and using it for your own purposes).
Based on the few facts available, US case law suggests Johansson could succeed in a claim for misappropriation of likeness.
One key case from 1988 involved American singer Bette Midler and the Ford Motor Company. Ford wanted to feature the singer’s voice in an advertising campaign. When Midler declined, Ford hired a “sound-alike” to sing one of Midler’s most famous songs so that it sounded “as much like her as possible.”
Midler won, with the court comparing Ford’s conduct to that of “the average thief” who simply takes what he cannot buy.
Australian public figures have no comparable recourse. In Australia and the UK, the law intervenes when a party tries to profit by passing off inferior imitations or sound-alikes as “the original.” However, this only applies if consumers are misled or the original suffers a loss.
A claim of misrepresentation may also be available, but only if the consumer believes there is a connection or endorsement.
Australia needs a rights-based approach, modelled on that of the European Union, which would serve a very specific goal.
Identity or “personality” rights empower those affected and bind those who publish digital content. Those affected can obtain compensation or seek injunctive relief to stop the display or distribution of material that violates their dignity, privacy or self-determination.
Johansson herself has successfully sued a writer in France on the basis of these protections (although that victory was ultimately more symbolic than lucrative).
Thanks to artificial intelligence, it is now easy to assume another person’s identity. That makes identity rights immensely important. Even if these rights include protections for free speech, their very existence allows people to protect their image, name and privacy.