
Artificial intelligence helps banks flag fraud

CLEAR LAKE — Clear Lake Bank & Trust combines AI with behavioral analytics for fraud prevention and protection. Behavioral analytics flags possible fraudulent activity by looking for actions that fall outside a customer’s typical behavior.
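As a rough illustration of the idea, a minimal anomaly check might compare a new transaction against a customer’s spending history. The threshold and the dollar figures below are invented for illustration; real systems weigh many more signals than transaction amount alone.

```python
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply
    (more than `threshold` standard deviations) from the
    customer's historical spending."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A customer who normally spends $20-60 suddenly spends $5,000.
history = [25, 40, 30, 55, 20, 45, 35, 50]
print(is_anomalous(history, 5000))  # -> True (flagged for review)
print(is_anomalous(history, 38))    # -> False (typical behavior)
```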

Matt Ritter, chief operating officer at Clear Lake Bank & Trust, said the bank works with vendors that provide those services to “stay ahead of criminals and what they are trying to do, whether it be direct financial fraud transactions with the bank or attempting to scam a customer.”

Clear Lake Bank & Trust also uses AI to automate routine tasks and eliminate human error, according to Ritter.

He acknowledged AI is a relatively new technology, which increases the risks involved with using it.

“It is relatively immature in nature in its current state, so the risk can evolve quickly,” Ritter said. “Machine learning in AI can underestimate or overestimate the importance of certain data, so it can create inaccurate decisions. You have to be mindful of that. It’s only as good as the people who created the artificial intelligence.”


Clear Lake Bank & Trust approached the decision to use AI the same way it approaches any other type of technology, according to Ritter.

“Banking is really about managing risk, so whenever we employ a new service or product, a new technology or a new process, we go through a pretty robust risk assessment to identify what are the risks we see with this, what are some of the things we could do to mitigate that risk, and kind of determine what is the level of risk and if it is acceptable or not,” he said.

Ritter educates customers and the community about how criminals are using AI to commit fraud.

“What’s really scary right now is they are using artificial intelligence to collect consumer information and commit identity fraud,” he said. “They bring the data together to create a fake identity and then apply for a loan using somebody else’s Social Security number.”

AI can even be used to clone someone’s voice, making it easier for fraudsters to pull off scams in which they pretend to be someone’s grandchild on the phone to convince senior citizens to send money, according to Ritter.

Cybersecurity and fraud busting are also important uses for AI at Waterloo-headquartered Veridian Credit Union, according to Brett Engstrom, chief information officer.

“The things we like to use AI for are things that don’t add friction for members’ lives, but it does add security,” he said. “An example would be ‘behavioral biometrics.’ I know that’s a mouthful. It’s based on how you use your device, how you interact with that device. Are you right-handed or left-handed? Which thumb do you use? How fast do you scroll? I know it sounds crazy — but when you combine those things — there’s all these details that kind of combine together to form almost a fingerprint, if you will.”

Deviation from those user habits can indicate a device has fallen into the wrong hands and someone other than the customer is trying to access their account, Engstrom said. “With behavioral biometrics, I can tell it’s not you. It’s behavioral analytics. This is one of those instances where we can add security without making it harder for the member.”
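A sketch of how such signals might be combined into the “fingerprint” Engstrom describes follows. The feature names, typical values, and tolerances are hypothetical stand-ins; production behavioral-biometrics systems draw on far richer models and many more signals.

```python
import math

# Hypothetical per-user profile: typical value and tolerance per signal.
PROFILE = {"scroll_speed": 1.8, "tap_interval": 0.42, "swipe_angle": 78.0}
TOLERANCE = {"scroll_speed": 0.5, "tap_interval": 0.15, "swipe_angle": 12.0}

def looks_like_owner(session, max_score=2.0):
    """Combine per-signal deviations into one score: small drifts on
    every signal still pass, but a session that differs sharply
    across signals exceeds the threshold and gets challenged."""
    score = 0.0
    for name, typical in PROFILE.items():
        deviation = abs(session[name] - typical) / TOLERANCE[name]
        score += deviation ** 2
    return math.sqrt(score) <= max_score

owner = {"scroll_speed": 1.7, "tap_interval": 0.45, "swipe_angle": 80.0}
impostor = {"scroll_speed": 3.5, "tap_interval": 0.9, "swipe_angle": 40.0}
print(looks_like_owner(owner))     # -> True
print(looks_like_owner(impostor))  # -> False
```

The design point is the one Engstrom makes: the legitimate owner never notices the check, so security is added without friction.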

The downside, Engstrom said, is that “the bad guys can use AI too.”

Fake “phishing” sites try to rip off members or commit fraud. But AI also helps keep financial institutions a step ahead, detecting when someone is attempting to create such a site and conveying that to regulators and law enforcement.

“Our manager of IT security works directly with me,” Engstrom said. “We work hand in hand together with (financial institution) examiners and everything else. It keeps us busy, making sure we’re one step ahead all the time. And so far, we have been.”

To those who view AI warily as a step toward “Big Brotherism,” Engstrom said, “That’s a good question. I would describe AI in general as a very polarizing topic. Some people’s ‘cool’ is other people’s ‘Big Brother.’”

But there’s an egalitarian aspect to that as well.

“As a CDFI — Community Development Financial Institution — we have a special interest in serving the underserved, making sure we get products in people’s hands in some of those lower-income communities,” Engstrom said. “Well, believe it or not, AI helps us do that. AI-driven (loan) underwriting is different and more powerful than traditional underwriting.”

Instead of simply looking at credit score, Engstrom said, “Solutions based on AI take maybe 20 or 50 or 1,000 different lines of data from anonymized data from reports, and they can say, based on this machine-learned model, ‘I can look at this single applicant and say they’re very unlikely to have a problem with their loan, even though maybe their credit score is a little lower.’ It actually lets you make better loans, more loans, and less risky loans than the things you can just eyeball.

“In that regard, it’s kind of the opposite of Big Brother,” Engstrom said.
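A toy sketch of the multi-signal scoring Engstrom describes might look like the following. The weights, feature names, and threshold are hypothetical stand-ins for a model actually trained on historical loan outcomes; the point is only that several signals are weighed together rather than credit score alone.

```python
import math

# Hypothetical feature weights standing in for a trained model;
# a real underwriting model is fit on historical loan outcomes.
WEIGHTS = {
    "credit_score": 0.004,            # still a signal, not the only one
    "income_stability_years": 0.15,   # years at steady income
    "deposit_regularity": 0.8,        # share of months with steady deposits
    "overdrafts_last_year": -0.3,     # negative signal
}
BIAS = -3.0

def repay_probability(applicant):
    """Logistic score over several signals, not just credit score."""
    z = BIAS + sum(w * applicant[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def approve(applicant, threshold=0.7):
    return repay_probability(applicant) >= threshold

# A lower credit score can still clear the bar when other signals are strong.
steady_saver = {"credit_score": 640, "income_stability_years": 6,
                "deposit_regularity": 0.95, "overdrafts_last_year": 0}
print(approve(steady_saver))  # -> True
```

This mirrors the claim in the quote above: an applicant with a slightly lower credit score but strong behavior elsewhere can still be judged a safe borrower.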
