
Woman scammed out of €800k by an AI deep fake of Brad Pitt

What began as a ski holiday Instagram post led to bankruptcy for a French interior designer after scammers used AI to persuade her she was in a relationship with Brad Pitt.

The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Jane Etta Pitt, Brad’s mother, claiming her son “needed a girl such as you.” 

Not long after, Anne began talking to what she believed was the Hollywood star himself, complete with AI-generated photos and videos.

“We’re talking about Brad Pitt here and I was stunned,” Anne told French media. “At first, I thought it was fake, but I didn’t really understand what was happening to me.” 

“There are so few men who write to you like that,” Anne said. “I loved the person I was talking to. He knew how to talk to women and it was always very well put together.”

The scammers’ tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.

After building rapport, the scammers began extracting money with a modest request – €9,000 for supposed customs fees on luxury gifts. It escalated when the impersonator claimed to need cancer treatment while his accounts were frozen due to his divorce from Angelina Jolie. 

A fabricated doctor’s message about Pitt’s condition prompted Anne to transfer €800,000 to a Turkish account.

Scammers requested money for fake Brad Pitt’s cancer treatment

“It cost me to do it, but I thought that I might be saving a man’s life,” she said. When her daughter recognized the scam, Anne refused to believe it: “You’ll see when he’s here in person, then you’ll apologize.”

Her illusions were shattered upon seeing news coverage of the real Brad Pitt with his partner Inés de Ramon in summer 2024. 

Even then, the scammers tried to maintain control, sending fake news alerts dismissing these reports and claiming Pitt was actually dating an unnamed “very special person.” In a final roll of the dice, someone posing as an FBI agent extracted another €5,000 by offering to help her escape the scheme.

The aftermath proved devastating – three suicide attempts led to hospitalization for depression. 

Anne opened up about her experience to French broadcaster TF1, but the interview was later removed after she faced intense cyber-bullying.

Now living with a friend after selling her furniture, she has filed criminal complaints and launched a crowdfunding campaign for legal help.

A tragic situation – though Anne is certainly not alone. Her story parallels a wider surge in AI-powered fraud worldwide. 

Spanish authorities recently arrested five individuals who stole €325,000 from two women through similar Brad Pitt impersonations. 

Speaking about AI fraud last year, McAfee’s Chief Technology Officer Steve Grobman explained why these scams succeed: “Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require a lot more sophistication.”

It’s not only individuals who are in the scammers’ crosshairs, but businesses, too. In Hong Kong last year, fraudsters stole $25.6 million from a multinational company using AI-generated executive impersonators in video calls. 

Superintendent Baron Chan Shun-ching described how “the employee was lured into a video conference that was said to have many participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts.”

Would you be able to spot an AI scam?

Most people would fancy their chances of spotting an AI scam, but research says otherwise. 

Studies show humans struggle to distinguish real faces from AI creations, and synthetic voices fool roughly a quarter of listeners. That evidence came from last year – AI image, voice, and video synthesis have evolved considerably since. 

Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages, now backed by Nvidia, recently doubled its valuation to $2.1 billion. Video and voice synthesis platforms like Synthesia and Elevenlabs are among the tools that fraudsters use to launch deep fake scams.

Synthesia itself acknowledges this, recently demonstrating its commitment to stopping misuse through a rigorous public red team test, which showed how its compliance controls successfully block attempts to create non-consensual deepfakes or use avatars for harmful content such as promoting suicide or gambling.

Whether such measures are truly effective at stopping misuse – the jury is still out.

As companies and individuals wrestle with compellingly real AI-generated media, the human cost – illustrated by Anne’s devastating experience – will likely rise. 
