The danger of deepfakes is different than you think


One of our loudest moral panics today is the fear that deepfakes enabled by artificial intelligence will weaken democracy. Half of the world's population votes in 70 countries this year. According to a World Economic Forum survey of some 1,500 experts at the end of 2023, misinformation and disinformation will be the biggest global risk over the next two years. Even extreme weather risks and interstate armed conflicts are seen as less threatening.

But, to put it mildly, these fears seem exaggerated. Not for the first time, the Davos consensus may be flawed.

Fraud has been a feature of human nature ever since the Greeks dumped a wooden horse outside Troy's walls. More recently, the Daily Mail's publication of the Zinoviev letter – a forged document purporting to have come from the Soviet head of the Comintern – had a significant impact on the 1924 British general election.

Of course, this was before the internet age. Today, there is concern that the power of AI could industrialise such disinformation. The internet has reduced the cost of distributing content to zero. Generative AI is reducing the cost of creating content to zero. The result may be an overwhelming amount of information that will, as the US political strategist Steve Bannon memorably put it, "flood the zone with shit".

Deepfakes – realistic AI-generated audio, image or video imitations – pose a particular threat. The latest avatars generated by leading AI companies are so good that they are almost indistinguishable from real people. In such a world of "fake people", as the late philosopher Daniel Dennett called them, who can you trust online? The danger is not so much that voters will trust the untrustworthy, but that they will distrust the trustworthy.

But so far, at least, deepfakes are not causing as much political damage as feared. Some generative AI startups argue that the problem is one of distribution rather than generation, and place the blame on the big platform companies. At the Munich Security Conference in February, 20 of these major tech companies, including Google, Meta and TikTok, pledged to stop deepfakes that are designed to mislead. To what extent the companies are keeping their promises is hard to say, but the relative lack of scandals is encouraging.

The open source intelligence movement, which includes numerous cyber detectives, has also been successful in exposing disinformation. US researchers have created a database of political deepfake incidents to track the phenomenon, which had registered 114 cases as of January. And it may be that the increasing use of AI tools by hundreds of thousands of users will itself deepen public understanding of the technology and immunise people against deepfakes.

Tech-savvy India, which just held the world's largest democratic election, in which 642 million people voted, was an interesting test case. There, AI tools were used extensively to impersonate candidates and celebrities, generate endorsements from deceased politicians, and sling mud at opponents in the political whirlwind that is Indian democracy. Yet the election did not appear to have been marred by digital manipulation.

Two experts from the Harvard Kennedy School, Vandinika Shukla and Bruce Schneier, who studied the use of AI in election campaigns, concluded that the technology was predominantly used constructively.

For example, some politicians used the official Bhashini platform and AI apps to dub their speeches into India's 22 official languages, strengthening their connections with voters. "The technology's ability to create involuntary deepfakes of anyone may make it harder to distinguish truth from fiction, but its consensual use is likely to make democracy more accessible," they write.

That doesn't mean that using deepfakes is always harmless. They have been used to cause criminal harm and personal suffering. Earlier this year, British engineering firm Arup was defrauded of $25 million in Hong Kong after fraudsters used digitally cloned videos of a senior manager to order a money transfer. This month, explicit deepfakes of 50 girls from Bacchus Marsh Grammar School in Australia were circulated online. It appeared that photos of the girls had been taken from social media posts and manipulated to create the images.

Criminals are often among the first adopters of new technologies. It is their sinister use of deepfakes to target private citizens that should worry us most. Public use of the technology for nefarious purposes is more likely to be quickly exposed and combatted. We should be more concerned about politicians spouting authentic nonsense than about fake AI avatars producing inauthentic gibberish.
