Artificial intelligence technology presents new risks – and new opportunities – for financial institutions seeking to improve their cybersecurity and reduce fraud.
Banks and financial services groups have had to guard against cyber attacks for many years, as their financial assets and vast customer databases make them prime targets for hackers.
But now they are contending with criminals using generative AI – which can be trained on images and videos of real customers or executives to produce audio and video clips impersonating them. Experts warn that these have the potential to outsmart cybersecurity systems. According to a report from identity verification platform Sumsub, the number of "deepfake" incidents in the financial technology sector increased by 700 percent year on year in 2023.
At the same time, criminal gangs are using generative AI technologies to spread malware. In one experiment, cybersecurity researchers used an AI large language model (LLM) to develop a harmless form of malware designed to collect personal information such as usernames, passwords and bank card numbers. The researchers found that, by constantly changing its own code, the malware managed to bypass IT security systems.
To counter the threat, financial services firms – which are among the biggest technology investors – are deploying AI in their cyber defenses. For at least a decade, banks have used various forms of AI, such as machine learning, to detect fraud by identifying patterns in transactions and flagging anomalies.
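The pattern-and-anomaly approach described here can be sketched in a few lines: compare each incoming transaction against a customer's historical spending and flag outliers. This is a minimal, illustrative example of the general technique, not any bank's actual system; the function name and the three-standard-deviation threshold are assumptions.

```python
# Minimal sketch of anomaly-based fraud flagging: a transaction is
# suspicious if its amount deviates sharply from the historical baseline.
from statistics import mean, stdev

def flag_anomalies(history, incoming, z_threshold=3.0):
    """Return incoming transaction amounts that look anomalous
    relative to the customer's historical amounts."""
    mu, sigma = mean(history), stdev(history)
    return [amt for amt in incoming if abs(amt - mu) > z_threshold * sigma]

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0]
incoming = [49.0, 5000.0, 53.0]
print(flag_anomalies(history, incoming))  # [5000.0]
```

Production systems learn far richer patterns (merchant, location, timing), but the principle is the same: model normal behavior, then surface deviations.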
The difficulty is keeping up with cybercriminals who have access to the latest AI tools. Many banks find this hard to do, according to a report released by the US Treasury Department in March. It concluded that financial firms should consider greater use of AI to combat tech-savvy cybercriminals, and should share more information about AI security threats.
However, using AI in this way could also pose other risks. One concern is that criminals could attempt to inject false data into the LLMs that underpin generative AI systems, such as OpenAI's ChatGPT, which are used by financial services firms.
"If the attacker injects normal [financial] transactions as fraudulent, or vice versa, then the [AI] model would learn to misclassify these activities," warns Andrew Schwartz, senior analyst at Celent, a consulting firm specializing in financial services.
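The label-flipping attack Schwartz describes can be shown with a toy model. Here the "model" is just a decision threshold learned from labeled transaction amounts; everything in this sketch (the function, the amounts, the training rule) is invented for illustration.

```python
# Illustrative data-poisoning sketch: mislabeled training examples
# shift the decision boundary of a toy fraud "model".

def train_threshold(transactions):
    """Learn a flag-above threshold: the midpoint between the largest
    amount labeled legitimate and the smallest labeled fraudulent."""
    max_legit = max(amt for amt, is_fraud in transactions if not is_fraud)
    min_fraud = min(amt for amt, is_fraud in transactions if is_fraud)
    return (max_legit + min_fraud) / 2

clean = [(20, False), (35, False), (50, False), (1200, True), (1500, True)]
print(train_threshold(clean))  # 625.0 -- a $1,000 payment would be flagged

# Attacker injects a fraudulent-sized amount labeled as legitimate:
poisoned = clean + [(1100, False)]
print(train_threshold(poisoned))  # 1150.0 -- the $1,000 payment now slips through
```

A single mislabeled record moves the boundary enough to let real fraud pass, which is why the integrity of training data matters as much as the model itself.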
Nevertheless, some financial services firms are pressing ahead with generative AI systems. In February, Mastercard, the payments technology company, previewed its own generative AI software, which it says could help banks detect fraud better. The software, which analyzes transactions on the Mastercard network, will be able to scan 1 trillion data points to predict whether a transaction is genuine.
According to Mastercard, the technology could increase banks' fraud detection rates by an average of 20 percent, and in some cases by as much as 300 percent.
AI-powered transaction monitoring can deliver another major benefit, according to Mastercard: a reduction of more than 85 percent in reported "false positives" – cases where a bank incorrectly flags a legitimate transaction as fraudulent. Mastercard plans to make the AI feature commercially available later this year.
"[Our AI] really helps provide consumers with a better experience while accurately identifying the right fraud cases," says Johan Gerber, executive vice-president of cybersecurity and innovation at Mastercard.
Other cybersecurity functions, such as analyzing threats in real time and coordinating faster responses to them, can also be automated.
For example, Irish company FBD Insurance uses Smarttech247's AI-based security software to analyze up to 15,000 IT "events" per second across its network for potential security threats. Such events might include an employee accessing prohibited IT or email systems, or firewall violations.
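Event triage of this kind can be sketched as a stream of events filtered against rules, with matches escalated as alerts. The rules, event fields and system names below are invented for illustration; real platforms apply far more sophisticated, learned detection logic.

```python
# Hedged sketch of real-time security event triage: each event is
# checked against simple rules as it arrives; matches become alerts.

PROHIBITED_SYSTEMS = {"payroll-db", "hr-mail"}  # illustrative names

def triage(events):
    """Return the subset of events that should raise an alert."""
    alerts = []
    for event in events:
        if event["type"] == "access" and event["target"] in PROHIBITED_SYSTEMS:
            alerts.append(event)  # employee touched a prohibited system
        elif event["type"] == "firewall" and event["action"] == "violation":
            alerts.append(event)  # firewall rule breached
    return alerts

stream = [
    {"type": "access", "user": "jdoe", "target": "payroll-db"},
    {"type": "access", "user": "asmith", "target": "crm"},
    {"type": "firewall", "action": "violation", "source": "10.0.0.7"},
]
print(len(triage(stream)))  # 2
```

The point Kyne makes below is about timing: checks like these run as events arrive, rather than in an after-the-fact batch review.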
"A big change with our AI is that we interpret and investigate things as they happen," says Enda Kyne, chief technology and operations officer at FBD. Traditional cybersecurity technologies take longer to detect threats and do so "after the fact," he explains.
However, experts emphasize that AI-powered cyber defenses will not replace financial groups' IT and risk management experts in the foreseeable future. Known flaws in generative AI – such as fabricating facts, or "hallucinations" – mean the technology still requires careful oversight.
Yashin Ahmed, head of cybersecurity services at tech giant IBM's financial services division, has mixed views on financial firms' use of AI for cybersecurity. While AI can create "tremendous efficiencies" in information security, he says financial services firms are "struggling" to keep up with the technology's growing use.
"They don't know all the places where the company necessarily uses AI," he stresses. "And they don't know whether the company secured the AI during the development process, and whether the company has tools in place to secure the AI once it's deployed in a customer-facing role."
Recruiting staff with the right mix of AI and cyber skills can help minimize unintended consequences. But, given a global cybersecurity workforce shortage stretching back more than a decade, and intense competition for AI experts from major tech companies, finding such staff can be difficult.
A "very small" number of candidates "have the level of understanding and experience" that financial services firms want, says Giancarlo Hirsch, managing director at Glocomms, a technology recruiter. "So it's a much smaller pool of candidates."
Demand for AI-powered cybersecurity is also expected to boost sales of off-the-shelf software. According to data provider Statista, the global market for AI cybersecurity products and services is expected to grow from around $24 billion in 2023 to almost $134 billion by 2030.
"Attackers will use AI more and more in the coming years," says Rom Eliahou, director of business development at BlueVoyant, a cybersecurity company. "And you simply can't combat that scale of activity without using AI and machine learning yourself. There will be too many threats out there."