
So you've been scammed by a deepfake. What can you do?

Earlier this month, a Hong Kong company lost HK$200 million (A$40 million) in a single deepfake fraud. An employee transferred the money after a video conference with scammers who looked and sounded like senior company officials.

Generative AI tools can create image, video and voice replicas of real people saying and doing things they never said or did. And these tools are becoming ever easier to access and use.

This can perpetuate the abuse of intimate images (including things like “revenge porn”) and disrupt democratic processes. Currently, many jurisdictions are grappling with how to regulate AI deepfakes.

But if you are the victim of a deepfake scam, can you obtain compensation or redress for your losses? The law has not yet caught up.

Who is responsible?

In most cases of deepfake fraud, fraudsters avoid trying to outsmart banks and security systems and instead opt for so-called “push payment” scams, in which victims are tricked into instructing their bank to pay the fraudster.

So if you are looking for a remedy, there are at least four possible targets:

  1. the fraudster (who has often disappeared)

  2. the social media platform that hosted the fake

  3. the bank that paid out the money on the instructions of the fraud victim

  4. the provider of the AI tool that created the fake.

The short answer is that once the scammer disappears, it is currently unclear whether you will be entitled to any redress from any of these other parties (although that may change in the future).

Let's see why.



The social media platform

In principle, you can claim damages from a social media platform if it hosts a deepfake intended to defraud you. But there are hurdles to overcome.

Platforms often present themselves as mere conduits of content, meaning they are not legally responsible for it. In the United States, platforms are explicitly shielded from this kind of liability. However, in most other common law countries, including Australia, no such protection exists.



The Australian Competition and Consumer Commission (ACCC) is taking Meta (Facebook's parent company) to court. The case tests the possibility of holding digital platforms directly liable for deepfake crypto scams if they actively target the ads at potential victims.

The ACCC also argues that Meta should be held liable as an accessory to the fraud, since it failed to promptly remove the misleading ads after being notified of the problem.

At a minimum, platforms should be responsible for promptly removing deepfake content used for fraudulent purposes. They may already claim to do this, but it could soon become a legal requirement.

The ACCC is suing Meta (Facebook's parent company) to test whether Facebook can be held liable for targeting victims with scam ads.
Jeff Chiu/AP


The bank

In Australia, whether a bank must reimburse you for losses from a deepfake fraud is not clearly regulated.

The issue was recently considered by the Supreme Court of the United Kingdom, in a case that is likely to be influential in Australia. It held that banks are not obliged to refuse a customer's payment instructions merely because the recipient is suspected of being a (deepfake) fraudster, although they are generally obliged to act promptly once the fraud is discovered.

The United Kingdom is, however, introducing a mandatory scheme that requires banks to compensate victims of push payment fraud, at least in some circumstances.

In Australia it’s ACCC and others have recommend proposals for an analogous scheme, although none are currently available.

Australian banks are unlikely to be held liable for customer losses caused by fraud, but new schemes could require them to compensate victims.
TK Kurikawa/Shutterstock


The provider of AI tools

Providers of generative AI tools are not currently under any legal obligation to make their tools unusable for fraud or deception. In law, there is no general duty of care to the world at large to prevent fraud by others.

However, generative AI providers can use technology to reduce the likelihood of deepfakes being created. Like banks and social media platforms, they may soon be required to do so, at least in some jurisdictions.



The recently proposed EU AI Act requires providers of generative AI tools to design them in such a way that synthetic or fake content can be detected.

It is currently assumed this could be done with digital watermarking, although its effectiveness is still debated. Other measures include deadlines, digital IDs to verify a person's identity, and more education about the signs of deepfakes.
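To make the idea of a digital watermark concrete, here is a minimal sketch (in Python, with hypothetical function names and a made-up payload) of hiding a known bit pattern in an image's least significant bits so a detector can later check for it. Real generative-AI watermarking schemes are far more robust to cropping, compression and editing than this toy example.

```python
# Toy sketch of invisible image watermarking via least-significant-bit (LSB)
# embedding. Purely illustrative: the payload string and function names are
# invented for this example, not taken from any real provider's scheme.
import numpy as np

PAYLOAD = "AI-GENERATED"  # hypothetical marker string

def embed_watermark(pixels: np.ndarray, payload: str = PAYLOAD) -> np.ndarray:
    """Hide the payload's bits in the LSBs of a uint8 image array."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = pixels.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    # Clear each target pixel's lowest bit, then write one payload bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray, payload: str = PAYLOAD) -> bool:
    """Check whether the image's LSBs start with the known payload bits."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    extracted = pixels.ravel()[: bits.size] & 1
    return bool(np.array_equal(extracted, bits))

# Usage: mark a synthetic image, then verify the mark is detectable.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_watermark(image)
print(detect_watermark(marked))  # True
print(detect_watermark(image))   # almost certainly False
```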

Can we stop deepfake scams completely?

None of these legal or technical protections is likely to be fully effective in stemming the tide of deepfake frauds, scams and deceptions, particularly as generative AI technology continues to advance.

However, the response does not need to be perfect: slowing down AI-generated fakes and scams can still reduce the harm. We also need to keep pressure on platforms, banks and technology providers to stay on top of the risks.

While you may never be able to completely avoid becoming the victim of a deepfake scam, with all of these new legal and technical developments, you may soon be able to seek compensation when something goes wrong.

As audio, video and image deepfakes become more realistic, we need multi-layered strategies for prevention, education and compensation.

