Zscaler finds that enterprise AI adoption is up nearly 600% in less than a year, putting data at risk

Enterprise reliance on AI/machine learning (ML) tools has grown nearly 600%, climbing from 521 million monthly transactions in April 2023 to 3.1 billion in January 2024. Reflecting heightened security concerns, enterprises blocked 18.5% of all AI/ML transactions, a 577% increase in just nine months.
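As a quick sanity check on those figures (using the article's own numbers, and assuming both are monthly totals): the "nearly 600%" headline reads as a roughly six-fold multiple, which works out to about a 495% increase over the starting volume.

```python
# Sanity-check the growth figures cited in the report (assumed monthly totals).
april_2023 = 521_000_000      # AI/ML transactions, April 2023
january_2024 = 3_100_000_000  # AI/ML transactions, January 2024

multiple = january_2024 / april_2023  # ~5.95x, i.e. roughly six-fold
pct_increase = (multiple - 1) * 100   # ~495% increase over the base month

print(f"{multiple:.2f}x growth, {pct_increase:.0f}% increase")
```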

CISOs and the businesses they protect have good reason to be cautious and block record numbers of AI/ML transactions. Attackers have refined their craft and are now weaponizing LLMs to attack organizations without their knowledge. Adversarial AI is a growing threat precisely because it is the cyber threat nobody sees coming.

Zscaler's ThreatLabz 2024 AI Security Report, released today, quantifies why organizations need a scalable cybersecurity strategy to protect the many AI/ML tools they are adopting. Data protection, managing the quality of AI data, and privacy concerns dominate the survey results. Based on more than 18 billion transactions on the Zscaler Zero Trust Exchange from April 2023 to January 2024, ThreatLabz analyzed how enterprises are using AI and ML tools today.

The adoption of AI/ML tools in the healthcare, finance and insurance, services, technology, and manufacturing industries, combined with their exposure to cyberattacks, paints a sobering picture of how unprepared these industries are for AI-based attacks. Manufacturing generates the most AI traffic, with 20.9% of all AI/ML transactions, followed by finance and insurance (19.9%) and services (16.8%).

Blocking transactions is a fast, temporary win

CISOs and their security teams are choosing to block a record number of AI/ML tool transactions to protect against potential cyberattacks. It is a blunt measure, but one that shields the most vulnerable industries from an onslaught of cyberattacks.

ChatGPT is both the most used and the most blocked AI tool today, followed by OpenAI, Fraud.net, Forethought, and Hugging Face. The most frequently blocked domains are Bing.com, Divo.ai, Drift.com, and Quillbot.com.

Manufacturing blocks only 15.65% of AI transactions, which is low given the industry's vulnerability to cyberattacks, particularly ransomware. The finance and insurance sector blocks the largest share of AI transactions at 37.16%, reflecting heightened concerns about data security and privacy risks. Worryingly, despite processing sensitive health data and personally identifiable information (PII), the healthcare industry blocks a below-average 17.23% of AI transactions, suggesting it may be lagging in efforts to protect the data flowing into AI tools.

Wreaking havoc on time- and life-critical businesses like healthcare and manufacturing yields ransomware payouts many times higher than in other industries. The recent United Healthcare ransomware attack is an example of how an orchestrated attack can cripple an entire supply chain.

Blocking is a short-term solution to a much larger problem

Making better use of all available telemetry data, and deciphering the vast amounts of information that cybersecurity platforms collect, is a first step beyond blocking. CrowdStrike, Palo Alto Networks, and Zscaler are all advancing their ability to gain new insights from telemetry.

George Kurtz, co-founder and CEO of CrowdStrike, told the keynote audience at the company's annual Fal.Con event last year: “One of the areas where we have really pioneered is in recognizing weak signals from various endpoints. And we can link them together to find novel detections. We are now extending this to our third-party partners so we can investigate other weak signals, not only across endpoints but also across domains, and develop novel detections.”
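The weak-signal correlation Kurtz describes can be sketched in a few lines. The signals, sources, and threshold below are entirely hypothetical, and real platforms use far richer models; this simply shows the core idea of combining independent low-confidence detections per host into one composite score that can cross an alerting threshold none of the signals would reach alone.

```python
from collections import defaultdict

# Hypothetical weak signals: (host, source, confidence 0-1). None of these
# would trigger an alert on its own; correlated per host, they might.
signals = [
    ("host-a", "endpoint", 0.2),
    ("host-a", "identity", 0.3),
    ("host-a", "network",  0.35),
    ("host-b", "endpoint", 0.25),
]

def correlate(signals, threshold=0.6):
    """Flag hosts whose combined signal score crosses a threshold.

    Treats signals as independent and combines them as 1 - prod(1 - c_i),
    so several weak signals compound into one stronger composite score.
    """
    by_host = defaultdict(list)
    for host, _source, conf in signals:
        by_host[host].append(conf)

    flagged = {}
    for host, confs in by_host.items():
        miss = 1.0
        for c in confs:
            miss *= (1.0 - c)          # probability every signal is noise
        score = 1.0 - miss             # composite detection score
        if score >= threshold:
            flagged[host] = round(score, 3)
    return flagged

print(correlate(signals))  # host-a crosses the threshold; host-b does not
```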

Leading cybersecurity vendors with deep AI expertise, many with decades of ML experience, include BlackBerry Persona, Broadcom, Cisco Security, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Sophos, and VMware Carbon Black. Expect these vendors to train their LLMs on AI-driven attack intelligence to keep pace with attackers' increasing use of adversarial AI.

A new, deadlier AI threat landscape is here

“For enterprises, AI-driven risks and threats fall into two broad categories: the privacy and security risks associated with enabling enterprise AI tools, and the risks of a new cyber threat landscape driven by generative AI tools and automation,” Zscaler writes in the report.

CISOs and their teams face the daunting challenge of defending their organizations against the onslaught of AI attack techniques outlined in the report. Guarding against employee negligence when using ChatGPT, and ensuring confidential data is not inadvertently shared, should be a board-level issue. Boards should make risk management the core of their cybersecurity strategies.

Protecting intellectual property from leaking out of an organization via ChatGPT, mitigating shadow AI, and ensuring privacy and security are central to an effective AI/ML tools strategy.

Last year, EnterpriseBeat spoke with Alex Philips, CIO at National Oilwell Varco (NOV), about his company's approach to generative AI. Philips told EnterpriseBeat he is tasked with educating his board on the advantages and risks of ChatGPT and generative AI in general. He regularly provides the board with updates on the current state of GenAI technologies. This ongoing educational process helps set expectations for the technology and clarify how NOV can take protective measures to ensure Samsung-like leaks never occur. He pointed out how powerful ChatGPT is as a productivity tool and the importance of getting security right while keeping shadow AI under control.

Balancing productivity and security is critical to meeting the challenges of the new, unknown AI threat landscape. Zscaler's CEO was targeted in a vishing and smishing scheme in which threat actors impersonated Zscaler CEO Jay Chaudhry in WhatsApp messages, attempting to trick an employee into purchasing gift cards and revealing additional information. Zscaler was able to thwart the attack using its own systems. EnterpriseBeat has learned that this is a well-known attack pattern targeting top CEOs and technology leaders across the cybersecurity industry.

Attackers are relying on AI to launch ransomware attacks at larger scale and greater speed than ever before. Zscaler notes that AI-driven ransomware attacks are now part of nation-state attackers' arsenals, and their frequency is increasing. Attackers use generative AI prompts to create tables of known vulnerabilities for all of the firewalls and VPNs in an organization they are targeting. They then use the LLM to generate or optimize exploit code for those vulnerabilities, with payloads customized for the target environment.

Zscaler notes that generative AI can also be used to identify vulnerabilities among partners in a company's supply chain while mapping optimal routes into the company's core network. Even when organizations maintain a strong security posture, downstream vulnerabilities often pose the greatest risks. Attackers continually experiment with generative AI, building feedback loops to refine their results into more complex, targeted attacks that are even harder to detect.

Attackers aim to use generative AI across the entire ransomware attack chain, from automating reconnaissance and exploiting specific vulnerabilities to generating polymorphic malware and ransomware. By automating critical parts of the attack chain, threat actors can launch faster, more sophisticated, and more precisely targeted attacks against organizations.
