
Why cyber risk managers need to fight AI with AI

Artificial intelligence technology is presenting new risks — and new opportunities — to financial institutions hoping to improve their cyber security and reduce fraud.

Banks and financial services groups have had to grapple with cyber attacks for decades, as their financial assets and huge customer databases make them prime targets for hackers.

Now, though, they are up against criminals using generative AI, which can be trained on images and videos of real customers or executives to produce audio and video clips impersonating them. These have the potential to fool cyber security systems, experts warn. According to a report by identity verification platform Sumsub, the number of “deepfake” incidents in the financial technology sector increased by 700 per cent in 2023, year on year.

At the same time, criminal gangs are using generative AI technologies to spread malicious software, or malware. In one experiment, cyber security researchers used an AI large language model (LLM) to develop a benign, proof-of-concept form of malware that could collect personal information, such as usernames, passwords and credit card numbers. By constantly changing its code, the malware was able to evade IT security systems, the researchers found.
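A simple way to see why constant code mutation defeats traditional defences is to look at signature-based scanning, which blocklists the hashes of known malicious files. The sketch below is a benign, entirely hypothetical illustration, not the researchers' experiment: two functionally identical snippets differ by a single renamed variable, so only one matches the blocklist.

```python
import hashlib

# Two functionally identical snippets; the second only renames a variable.
# A benign stand-in for self-modifying malware: the behaviour is unchanged,
# but every byte-level "signature" differs.
variant_a = b"total = 1 + 2\nprint(total)\n"
variant_b = b"result = 1 + 2\nprint(result)\n"

# Hypothetical signature database of known-bad file hashes.
known_bad_hashes = {hashlib.sha256(variant_a).hexdigest()}

def signature_scan(payload: bytes) -> bool:
    """Return True if the payload matches a known-bad signature."""
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

print(signature_scan(variant_a))  # True  -- the original is caught
print(signature_scan(variant_b))  # False -- the mutated variant slips through
```

Behaviour-based and AI-driven detection exists precisely because the mutated variant, though byte-different, does exactly the same thing.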

Keeping up with the Clones: banks need to stop ‘deepfakes’ impersonating their customers

To counter the threat, financial services companies — which are among the biggest spenders on technology — are deploying AI in their cyber defences. For at least a decade now, banks have been using different types of AI, such as machine learning, to detect fraud by spotting patterns in transactions and flagging anomalies.
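As a rough illustration of that pattern-spotting approach, and not any particular bank's system, the sketch below runs an isolation forest, one common anomaly detection technique, over made-up transaction features. The feature choices and thresholds are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount, hour_of_day].
# Most activity is small daytime purchases; the final row is a large
# 3 a.m. transfer of the kind an anomaly detector should flag.
rng = np.random.default_rng(0)
normal = np.column_stack([rng.gamma(2.0, 30.0, 500),    # typical amounts
                          rng.integers(8, 22, 500)])    # business hours
transactions = np.vstack([normal, [[9500.0, 3]]])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

# Index 500 (the odd transfer) should appear among the flagged rows.
print("flagged rows:", np.where(labels == -1)[0])
```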

The difficulty lies in keeping up with cyber criminals who have access to the latest AI tools. Many banks are struggling to do so, according to a report published in March by the US Treasury department. It concluded that finance companies should consider greater use of AI to counter tech-savvy cyber criminals, and share more information about AI security threats.

Deploying AI in this way, however, could create other risks. One concern is that criminals could attempt to inject false data into the LLMs that underpin the generative AI systems, such as OpenAI’s ChatGPT, used by financial services companies.

“If the attacker injects normal [finance] transactions as fraudulent, or vice versa, then the [AI] model would learn to classify these activities incorrectly,” warns Andrew Schwartz, senior analyst at Celent, a consultancy specialising in financial services.
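A minimal sketch of the attack Schwartz describes, using invented data and a simple classifier rather than any real bank's model: relabelling fraudulent training examples as legitimate teaches the model to wave genuine fraud through.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Hypothetical 2-feature transactions; class 1 = fraud, 0 = legitimate.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

# Poisoning: the attacker relabels most training fraud as legitimate,
# teaching the model that fraudulent patterns are normal.
y_poisoned = y_tr.copy()
fraud_idx = np.where(y_tr == 1)[0]
flipped = rng.choice(fraud_idx, size=int(0.8 * len(fraud_idx)), replace=False)
y_poisoned[flipped] = 0
poisoned = LogisticRegression().fit(X_tr, y_poisoned)

print("fraud recall, clean model:   ", recall_score(y_te, clean.predict(X_te)))
print("fraud recall, poisoned model:", recall_score(y_te, poisoned.predict(X_te)))
```

The poisoned model's recall on real fraud collapses, which is why the integrity of training data is itself a security concern.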

Some financial services companies, however, are pressing on with generative AI systems. In February, Mastercard, the payments technology company, previewed its own generative AI software, which it said can help banks better detect fraud. This software, which analyses transactions on the Mastercard network, will be able to scan 1tn data points to predict whether a transaction is genuine.

Mastercard says the technology might be able to increase banks’ fraud detection rates by, on average, 20 per cent and, in some cases, by as much as 300 per cent.

AI-enhanced transaction monitoring can deliver another big benefit: a more than 85 per cent reduction in the “false positives” reported, according to Mastercard. These are instances where a bank mistakenly flags a legitimate transaction as a fraudulent one. Mastercard plans to make the AI feature commercially available later this year.
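Mastercard's figures reflect the basic trade-off in transaction scoring. The toy sketch below, using made-up model scores rather than anything from the Mastercard network, shows how the alert threshold trades the share of fraud detected against the share of legitimate transactions falsely flagged; a model that separates the two populations more cleanly does better at every threshold.

```python
import numpy as np

# Hypothetical model scores (probability of fraud) for labelled transactions:
# legitimate scores cluster low, fraudulent scores cluster high.
rng = np.random.default_rng(2)
legit_scores = rng.beta(2, 8, 10000)
fraud_scores = rng.beta(8, 2, 100)

def rates(threshold: float):
    detection_rate = np.mean(fraud_scores >= threshold)
    false_positive_rate = np.mean(legit_scores >= threshold)
    return detection_rate, false_positive_rate

for t in (0.3, 0.5, 0.7):
    d, fp = rates(t)
    print(f"threshold {t:.1f}: detects {d:.0%} of fraud, "
          f"falsely flags {fp:.2%} of legitimate transactions")
```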

“[Our AI] is really helping to give a better experience to consumers, while still accurately detecting the right frauds,” says Johan Gerber, Mastercard’s executive vice-president of cyber security and innovation.


Other cyber security functions, such as analysing threats in real time and coordinating swifter action against them, can be automated, too.

For example, Irish company FBD Insurance uses AI-based security software from Smarttech247 to analyse up to 15,000 IT “events” per second on its network for potential security threats. Such events could include an employee accessing prohibited IT or email systems, or firewall breaches.

“A big change in our AI is that we’re interpreting and inspecting things as they happen,” says Enda Kyne, chief technology and operations officer at FBD. Traditional cyber security technology takes longer to spot threats, doing so “after the fact”, he explains.
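A minimal sketch of that “as it happens” idea, with hypothetical event names and thresholds (the FBD and Smarttech247 systems are, of course, far more sophisticated): a sliding-window rule that flags a burst of failed logins the moment it occurs, rather than in a later log review.

```python
from collections import defaultdict, deque

# Assumed parameters for illustration only.
WINDOW_SECONDS = 60
BURST_THRESHOLD = 5

windows = defaultdict(deque)  # user -> timestamps of recent failures

def on_event(user: str, timestamp: float, event_type: str) -> None:
    """Inspect each event as it arrives, not after the fact."""
    if event_type != "login_failed":
        return
    window = windows[user]
    window.append(timestamp)
    # Drop failures that fell out of the sliding window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= BURST_THRESHOLD:
        print(f"ALERT t={timestamp}: {len(window)} failed logins "
              f"for {user!r} within {WINDOW_SECONDS}s")

# Simulated feed: a credential-stuffing burst against one account
# triggers an alert on the fifth failure, in real time.
for t in range(10):
    on_event("alice", float(t), "login_failed")
```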

Experts stress, however, that AI-powered cyber defences will not replace financial groups’ IT and risk management professionals in the foreseeable future. Emerging flaws in generative AI, such as the fabrication of facts, or “hallucination”, mean the technology still needs careful oversight.

Yashin Ahmed, who leads cyber security services in financial services at tech group IBM, has mixed views on finance companies’ use of AI for cyber security. AI can create “tremendous efficiencies” for information security, he says, but financial services companies are “struggling” to keep track of the growth in its use.

“They don’t know all the places where the business is necessarily using the AI,” he points out. “And they don’t know if the business has secured the AI during the development process and if the business has tools in place to secure the AI once it’s deployed in a customer-facing type role.”

Recruiting staff with the right mix of AI and cyber skills can help to minimise the unintended consequences. But finding them can be challenging: cyber security skills have been in short supply globally for more than a decade, and large tech companies compete intensely for AI experts.

A “very small” number of candidates “have the level of understanding and experience” financial services companies want, says Giancarlo Hirsch, managing director at Glocomms, a technology recruiter. “So it’s a much nicher candidate pool.”

Demand for AI-enhanced cyber security is also likely to boost sales of off-the-shelf software. The global market for AI cyber security products and services is forecast to grow from about $24bn in 2023 to nearly $134bn by 2030, according to data provider Statista.
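For context, the Statista forecast implies a compound annual growth rate of roughly 28 per cent, as the short calculation below shows.

```python
# Implied compound annual growth rate for the Statista forecast:
# from about $24bn (2023) to nearly $134bn (2030), i.e. over seven years.
start, end, years = 24.0, 134.0, 7
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 28% a year
```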

“Attackers are going to use AI more and more in upcoming years,” says Rom Eliahou, director of business development at BlueVoyant, a cyber security company. “And you simply can’t combat the scale of activity without using AI and machine learning yourself. There’s going to be too many threats out there.”

 
