
Why adversarial AI is the cyber threat nobody sees coming

According to a recent report, security leaders' intentions to secure AI and MLOps are misaligned with their actions.

An overwhelming majority of IT leaders, 97%, say securing AI and safeguarding systems is critical, yet only 61% are confident they'll get the funding they need. Although 77% of the IT leaders surveyed reported having experienced some form of AI-related breach (not specifically to models), only 30% have deployed a manual defense against adversarial attacks in their existing AI development, including MLOps pipelines.

Only 14% are planning and testing for such attacks. Amazon Web Services defines MLOps as "a set of practices that automate and simplify machine learning (ML) workflows and deployments."

IT leaders are increasingly relying on AI models, making them an attractive target for a wide variety of adversarial AI attacks.

On average, IT leaders' companies have 1,689 models in production, and 98% of IT leaders consider some of their AI models crucial to their success. Eighty-three percent see widespread use of AI across all teams within their organizations. "The industry is working hard to accelerate AI adoption without having the proper security measures in place," the report's analysts write.

HiddenLayer's AI Threat Landscape Report provides a critical assessment of the risks AI-based systems face and of the progress being made in securing AI and MLOps pipelines.

Defining adversarial AI

The goal of adversarial AI is to deliberately mislead AI and machine learning (ML) systems so they become worthless for the use cases they were designed for. Adversarial AI refers to "the use of artificial intelligence techniques to manipulate or deceive AI systems. It's like a cunning chess player who exploits the vulnerabilities of its opponent. These intelligent adversaries can bypass traditional cyber defense systems, using sophisticated algorithms and techniques to evade detection and launch targeted attacks."

HiddenLayer's report describes three broad classes of adversarial AI, outlined below:

Adversarial machine learning attacks. Aimed at exploiting vulnerabilities in underlying algorithms, the goals of this type of attack range from modifying the behavior of a broader AI application or system, to evading detection by AI-based detection and response systems, to stealing the underlying technology. Nation-states practice espionage for financial and political gain, seeking to reverse-engineer models to obtain model data while also weaponizing the model for their own purposes.
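To make the evasion idea concrete, the short sketch below (an illustration only, not an example from the HiddenLayer report) shows how a small, deliberately chosen perturbation can push a toy classifier's decision in the attacker's favor. The synthetic data, the tiny logistic-regression model, and the perturbation size are all assumptions made for the example.

```python
# Illustrative sketch of an evasion-style adversarial attack: a small input
# perturbation that shifts a trained classifier's decision. Synthetic data,
# NumPy only; not taken from any vendor report.
import numpy as np

rng = np.random.default_rng(0)

# Toy "class 0 vs. class 1" feature vectors: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 10)), rng.normal(1.0, 0.5, (200, 10))])
y = np.hstack([np.zeros(200), np.ones(200)])

# Train a tiny logistic-regression model with plain gradient descent.
w, b = np.zeros(10), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.1 * np.mean(p - y)                # gradient step on bias

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Take a sample the model confidently assigns to class 1.
x = X[-1].copy()
print("original score:", predict(x))

# FGSM-style perturbation: step each feature against the gradient of the
# class-1 score, keeping the change small so the input still looks plausible.
grad = predict(x) * (1 - predict(x)) * w     # d(score)/dx for a logistic model
x_adv = x - 0.8 * np.sign(grad)              # epsilon = 0.8, chosen for the demo
print("adversarial score:", predict(x_adv))  # pushed toward class 0
```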

Generative AI system attacks. These attacks typically target the filters, guardrails, and restrictions designed to protect generative AI models, including every data source and large language model (LLM) they rely on. VentureBeat has learned that LLMs continue to be weaponized in nation-state attacks.

Attackers aim to bypass content restrictions so they can freely create prohibited content the model would otherwise block, including deepfakes, misinformation, and other types of harmful digital media. Attacks on generative AI systems are also popular with nation-states seeking to influence elections in the United States and other democratic countries around the world. The U.S. Intelligence Community 2024 Annual Threat Assessment notes that "China is demonstrating a higher degree of sophistication in its influence activity, including experimenting with generative AI," and that the People's Republic of China (PRC) may attempt to influence the 2024 U.S. elections at some level because of its desire to sideline critics of China and magnify U.S. societal divisions.

MLOps and software supply chain attacks. These are most often nation-state and large e-crime syndicate operations aimed at taking down the frameworks, networks, and platforms used to build and deploy AI systems. Attack strategies include targeting the components used in MLOps pipelines to introduce malicious code into the AI system. Poisoned datasets are delivered through software packages, arbitrary code execution, and malware delivery techniques.
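One common mitigation against tampered pipeline components is to verify every model artifact against a pinned checksum before it is ever deserialized. The sketch below is a minimal illustration of that kind of gate; the manifest file name, artifact path, and pipeline wiring are hypothetical rather than details from the report.

```python
# Minimal sketch of a supply-chain integrity gate for model artifacts:
# compare each artifact's SHA-256 digest to a pinned manifest before loading,
# so a swapped or poisoned file fails closed instead of being deserialized.
# File names and manifest layout are hypothetical.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("model_manifest.json")  # e.g. {"models/classifier-v3.onnx": "<sha256 hex>"}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model artifacts never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact: Path) -> None:
    expected = json.loads(MANIFEST.read_text()).get(str(artifact))
    if expected is None:
        raise RuntimeError(f"{artifact}: not listed in the pinned manifest")
    if sha256_of(artifact) != expected:
        raise RuntimeError(f"{artifact}: digest mismatch, refusing to load")

if __name__ == "__main__":
    # Hypothetical artifact; in practice this runs as a pipeline step
    # before any pickle/joblib/ONNX load.
    verify_artifact(Path("models/classifier-v3.onnx"))
    print("artifact verified, safe to hand off to the loading step")
```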

Four ways to defend against an adversarial AI attack

The wider the gaps between DevOps and CI/CD pipelines, the more vulnerable AI and ML model development becomes. Protecting models remains an elusive, moving target, made even harder by the weaponization of AI.

The following are just a few of the many steps companies can take to defend against an adversarial AI attack:

Make red teaming and risk assessment part of the organization's muscle memory or DNA. Don't settle for doing red teaming sporadically, or worse, only when an attack triggers a renewed sense of urgency and vigilance. Red teaming needs to be part of the DNA of any DevSecOps practice supporting MLOps from now on. The goal is to preemptively identify weaknesses in systems and pipelines and to prioritize and harden any attack vectors that surface as part of MLOps' System Development Lifecycle (SDLC) workflows.
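One way to make that continuous is to fold a lightweight adversarial check into the pipeline's own test suite, so every build re-evaluates model robustness instead of waiting for a scheduled exercise. The sketch below is a minimal illustration assuming pytest and scikit-learn; the dataset, thresholds, and perturbation budget are placeholders, not recommendations from the report.

```python
# Minimal sketch of an automated robustness gate run in CI: fail the build if
# a small, bounded input perturbation sharply degrades accuracy. Tooling
# (pytest, scikit-learn) and thresholds are assumptions for the example.
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


@pytest.fixture(scope="module")
def model_and_data():
    # Stand-in for loading the pipeline's real training data and model.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model, X, y


def test_accuracy_under_bounded_perturbation(model_and_data):
    """Accuracy should not collapse under small random input perturbations."""
    model, X, y = model_and_data
    rng = np.random.default_rng(1)
    clean_acc = model.score(X, y)
    # Worst case over a handful of random perturbations within epsilon = 0.3.
    perturbed_acc = min(
        model.score(X + rng.uniform(-0.3, 0.3, X.shape), y) for _ in range(5)
    )
    assert clean_acc - perturbed_acc < 0.10, "model is brittle to small perturbations"
```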

Stay current on and adopt the defensive framework for AI that works best for your business. Have a member of the DevSecOps team stay up to date on the many defensive frameworks available today. Knowing which one best fits an organization's goals helps secure MLOps, saving time while securing the broader SDLC and CI/CD pipeline. Examples include the NIST AI Risk Management Framework and the OWASP AI Security and Privacy Guide.

Reduce the threat of synthetic data-based attacks by integrating biometric modalities and passwordless authentication techniques into every identity access management system. VentureBeat has learned that synthetic data is increasingly being used to impersonate identities and gain access to source code and model repositories. Consider using a combination of biometric modalities, including facial recognition, fingerprint scanning, and voice recognition, together with passwordless access technologies to secure the systems used across MLOps. Gen AI has proven capable of helping create synthetic data, and MLOps teams are increasingly battling deepfake threats, so a layered approach to securing access is quickly becoming a must.

Audit verification systems randomly and frequently to keep access privileges current. With synthetic identity attacks becoming among the most difficult threats to contain, it is critical to keep verification systems patched, current, and audited. VentureBeat expects the next generation of identity attacks to rely heavily on synthetic data aggregated to create the appearance of legitimacy.
