Visa Utilizes AI to Detect $40 Billion in Fraud as Scammers Adopt AI Techniques

  • Visa says it is using AI and machine learning, including real-time risk scoring, to combat fraud.
  • James Mirfin, global head of risk and identity solutions, said the company's AI model scores over 500 different attributes for each transaction to produce a risk score. Visa handles approximately 300 billion transactions annually.
  • A Visa report reveals that fraudsters are increasingly using generative AI to create more convincing scams, resulting in significant losses for consumers.

Visa's global head of risk and identity solutions, James Mirfin, revealed to CNBC that the payments giant is employing artificial intelligence and machine learning to combat fraud.

The company says it prevented $40 billion in fraudulent activity from October 2022 to September 2023, nearly double the figure from a year earlier.

Mirfin said scammers use AI to generate and repeatedly test primary account numbers (PANs), the card identifiers found on payment cards, which usually comprise 16 digits but can run up to 19 digits in certain cases.

Criminals use AI bots to repeatedly submit online transactions by combining primary account numbers, CVVs, and expiration dates until they receive an approval response.

These enumeration attacks cause $1.1 billion in fraud losses annually, a substantial share of global fraud losses, according to Visa.
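The enumeration pattern described above, automated retries across guessed card parameters, is typically caught by velocity checks on the stream of declined attempts. The following is a minimal illustrative sketch; the class name, sliding window, and thresholds are invented for this example and are not Visa's actual system or parameters.

```python
from collections import defaultdict, deque

# Hypothetical thresholds for illustration only.
WINDOW_SECONDS = 60   # sliding window length
MAX_DECLINES = 5      # declines tolerated per merchant in the window

class EnumerationDetector:
    """Flag a merchant when too many card-number guesses are
    declined within a short sliding window."""

    def __init__(self):
        self._declines = defaultdict(deque)  # merchant_id -> decline timestamps

    def record(self, merchant_id, timestamp, approved):
        """Record a transaction result; return True if the merchant's
        recent decline rate resembles an enumeration attack."""
        if approved:
            return False
        window = self._declines[merchant_id]
        window.append(timestamp)
        # Drop declines that have aged out of the sliding window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_DECLINES

detector = EnumerationDetector()
# Simulate a bot hammering one merchant with bad card numbers,
# one attempt per second.
flags = [detector.record("merchant-42", t, approved=False) for t in range(10)]
print(flags)  # early attempts pass quietly; later ones trip the threshold
```

A real deployment would key on more dimensions than merchant ID (card BIN, IP range, device fingerprint) and feed the signal into a broader risk model rather than a hard cutoff.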

Mirfin told CNBC that the AI model analyzes over 500 attributes for each transaction and assigns it a risk score, across the roughly 300 billion transactions Visa processes annually.

Each transaction is assigned a real-time risk score, which helps detect and prevent enumeration attacks in remote purchases, where no physical card passes through a card reader or terminal.

Mirfin said every one of those transactions is processed by AI, which examines a range of attributes and evaluates each one.

"Our model will detect new types of fraud, score them as high risk, and allow our customers to decide whether to approve or reject those transactions."

Visa also uses AI to evaluate the probability of fraud in token provisioning requests, targeting fraudsters who use social engineering and other deceptive tactics to illegitimately provision tokens and carry out fraudulent transactions.

Over the past five years, the company has spent $10 billion on technology aimed at decreasing fraud and enhancing network security.

Generative AI-enabled fraud

Scammers are increasingly using generative AI, voice cloning, and deepfakes to deceive people, according to Mirfin.

AI is being used in romance scams, investment scams, and pig butchering, according to him.

Criminals use a tactic called pig butchering to build relationships with victims and convince them to invest in fake cryptocurrency trading or investment platforms.

These criminals are no longer working one victim at a time over the phone, Mirfin said; they are using artificial intelligence in some form, whether voice cloning, deepfakes, or social engineering, to run their schemes at scale.

AI tools like ChatGPT can help scammers create more convincing phishing messages to trick people.

With just three seconds of audio, cybercriminals can use generative AI to clone a voice, according to Okta, a U.S.-based identity and access management company. The cloned voice can then be used to convince family members that a loved one is in danger, or to trick banking employees into transferring funds out of a victim's account.

Okta stated that celebrity deepfakes, which are created using generative AI tools, have been used to deceive fans.

Visa's chief risk and client services officer, Paul Fabara, stated in the company's biannual threats report that scams are becoming increasingly convincing due to the use of Generative AI and other emerging technologies, resulting in significant losses for consumers.

AI could make financial scams a 'growth industry'

A report from Deloitte's Center for Financial Services stated that generative AI lets cybercriminals commit fraud more cheaply, targeting many victims at once with fewer resources.

As bad actors continue to find and deploy sophisticated yet affordable generative AI, the report says, such incidents are likely to multiply in the years ahead, with fraud losses to U.S. banks and their customers projected to reach $40 billion by 2027, up from $12.3 billion in 2023.

An employee at a Hong Kong-based firm unwittingly sent $25 million to a fraudster who had deepfaked his chief financial officer and instructed him to make the transfer.

In 2021, a case similar to this was reported in Shanxi province, where an employee was tricked into transferring 1.86 million yuan ($262,000) to a fraudster who used a deepfake of their boss in a video call.

by Sheila Chiang

Business News