Deepfakes will cost $40 billion by 2027 as adversarial AI gains momentum

Deepfakes are one of the fastest-growing forms of adversarial AI today. Losses tied to deepfakes are expected to grow from $12.3 billion in 2023 to $40 billion by 2027, an astonishing 32% compound annual growth rate. Deloitte expects deepfakes to proliferate in the coming years, with banking and financial services a primary target.

Deepfakes typify the cutting edge of adversarial AI attacks, achieving a 3,000% increase last year alone. Deepfake incidents are predicted to rise by 50% to 60% in 2024, with 140,000 to 150,000 cases expected globally this year.

The latest generation of generative AI apps, tools and platforms gives attackers everything they need to create deepfake videos, cloned voices and fraudulent documents quickly and at very low cost. Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers costs an estimated $5 billion annually. The report underscores how serious a threat deepfake technology poses to banks and financial services.

Bloomberg reported last year that “there is already a cottage industry on the dark web that sells scamming software from $20 to thousands of dollars.” A recent infographic based on Sumsub's Identity Fraud Report 2023 provides a global view of the rapid growth of AI-powered fraud.

Companies are not prepared for deepfakes and adversarial AI

Adversarial AI creates new, unexpected attack vectors and a more complex, nuanced threat landscape that prioritizes identity-based attacks.

Unsurprisingly, one in three companies has no strategy for addressing the risks of an adversarial AI attack, which would most likely begin with deepfakes of their key executives. Ivanti's latest research shows that 30% of companies have no plans to detect and defend against adversarial AI attacks.

The Ivanti 2024 State of Cybersecurity Report found that 74% of organizations surveyed are already seeing signs of AI-powered threats. The overwhelming majority, 89%, believe AI-powered threats are just getting started. Of the CISOs, CIOs and IT leaders Ivanti surveyed, 60% fear their organizations are not prepared to defend against AI-powered threats and attacks. Using a deepfake as part of an orchestrated strategy that includes phishing, software vulnerabilities, ransomware and API-related vulnerabilities is becoming increasingly common. This aligns with the threats security professionals expect to become more dangerous with the new generation of AI.

Source: Ivanti 2024 State of Cybersecurity Report

Attackers focus deepfake efforts on CEOs

VentureBeat regularly hears from enterprise software cybersecurity CEOs, who prefer to remain anonymous, about how deepfakes have evolved from easily detected fakes to convincing videos that look legitimate. Voice and video deepfakes impersonating industry executives appear to be a favored attack strategy for defrauding their companies of millions of dollars. Compounding the threat is how aggressively nation-states and large cybercriminal organizations are doubling down on developing, hiring and growing expertise in generative adversarial network (GAN) technologies. Of the thousands of CEO deepfake attempts that have occurred this year alone, the one targeting the CEO of the world's largest advertising agency shows how sophisticated attackers have become.
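For readers unfamiliar with the GAN technique the paragraph above refers to, the following is a toy illustration (not from the article) of the core adversarial training loop: a generator learns to imitate real data while a discriminator learns to tell real from fake, each improving against the other. This minimal sketch uses a 1-D Gaussian as the "real" data and hand-derived gradients; all parameter names and values are illustrative assumptions, and production deepfake models are vastly larger neural networks trained on media data.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# "Real" data: samples from N(4.0, 0.5) -- the distribution to imitate.
def real_sample():
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0  # generator: g(z) = a*z + b, maps noise z to a fake sample
w, c = 0.0, 0.0  # discriminator: d(x) = sigmoid(w*x + c), P(x is real)

lr = 0.02
for step in range(4000):
    # Discriminator step: gradient ascent on log d(real) + log(1 - d(fake)).
    z = random.gauss(0.0, 1.0)
    xr, xf = real_sample(), a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * ((1 - dr) * xr - df * xf)
    c += lr * ((1 - dr) - df)

    # Generator step: gradient ascent on log d(fake), i.e. fool the
    # discriminator into scoring fakes as real (non-saturating GAN loss).
    z = random.gauss(0.0, 1.0)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

# After training, generated samples should cluster near the real mean of 4.0.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))
```

The adversarial dynamic is the point: neither network is ever "finished," which is why detection keeps getting harder as generators improve against ever-better discriminators.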

In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity professionals defend systems, and commented on how attackers are using it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election, and threats from China and Russia.

“The deepfake technology today is so good. I think it's one of the areas that you really worry about. I mean, in 2016, we were tracking this, and you saw people actually having conversations with bots, and that was 2016. And they're literally arguing, or they're promoting their cause, and they're having an interactive conversation, and it's like there's nobody behind it at all. So I think it's pretty easy for people to get sucked into something that's real, or there's a narrative that we want to get behind, but a lot of this can be pushed by other nation-states and has been pushed by them,” Kurtz said.

CrowdStrike's intelligence team has invested a lot of time in understanding the nuances of what makes a deepfake convincing and in determining where the technology is heading to achieve maximum impact on viewers.

Kurtz continued, “And what we've seen in the past — we've spent a lot of time researching this with our CrowdStrike intelligence team — is it's a little bit like a pebble in a pond. You take a topic, or you hear a topic, anything related to the geopolitical environment, and the pebble gets dropped in the pond, and then all these ripples spread out. And it's this amplification that takes place.”

CrowdStrike is known for its deep expertise in AI and machine learning (ML) and its unique single-agent model, which has proven effective in executing its platform strategy. With such deep expertise in the company, it's understandable that its teams are experimenting with deepfake technologies.

“And if there's the ability to create deepfakes now, in 2024 — and some of our internal folks have made some fun parody videos of me just to show me how scary it is — you wouldn't be able to tell it wasn't me in the video. So I think that's one of the areas that I'm really concerned about,” Kurtz said. “There's always concern about infrastructure and things like that. A lot of those areas are still about ballots and things like that. Some of it isn't, but how you create the false narrative to get people to do things that a nation-state wants them to do, that's the area that I'm really concerned about.”

Companies must face the challenge

Companies risk losing the AI war if they don't keep pace with attackers' rapid weaponization of AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so commonplace that the Department of Homeland Security has published a guide, Increasing Threat of Deepfake Identities.
