Financial scammers are becoming increasingly skilled at using work email to deceive people.
- Despite banning the use of generative artificial intelligence by employees, companies are still falling victim to financial scams that use the technology to enhance traditional phishing tactics.
- With tools like ChatGPT or FraudGPT, criminals can easily create realistic fake financial statements, IDs and identities, or even deepfakes of a company executive using their voice and image.
- A $25 million scam that recently targeted a Hong Kong-based company highlights the sophistication of modern crimes and the challenges of detection.
While over a quarter of companies prohibit their workers from using generative AI, such bans do little to stop criminals from exploiting the technology to trick employees into divulging confidential information or paying fraudulent bills.
With the help of ChatGPT or its dark web equivalent, FraudGPT, criminals can easily produce fake financial statements, IDs and identities, or even a convincing deepfake of a company executive, cloning their voice and image.
A recent survey by the Association for Financial Professionals found that 65% of respondents' organizations had experienced attempted or actual payments fraud in 2022. Of those that lost money, 71% were compromised through email. Larger organizations, with annual revenue of $1 billion, were the most vulnerable to email scams, the survey found.
Phishing emails are one of the most common types of email scams. These fraudulent messages appear to come from a trusted source, such as Chase or eBay, and ask the recipient to click a link leading to a fake but convincing-looking website. There, the victim is asked to log in and provide personal information, which criminals can use to access bank accounts or commit identity theft.
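One classic phishing tell that automated filters look for is a link whose visible text names one domain while the underlying href actually points somewhere else. The sketch below, using only Python's standard library, illustrates that check; the example email and the simple matching rule are hypothetical, not a production filter.

```python
# Toy phishing check: flag links whose visible text looks like a domain
# that the real destination host does not contain.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

DOMAIN_LIKE = re.compile(r"(?:www\.)?[\w-]+(?:\.[\w-]+)+")

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body):
    """Return links whose anchor text names a domain that the actual
    destination host does not contain, e.g. text 'chase.com' while the
    href points at 'login.evil.example'."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        host = urlparse(href).netloc.lower()
        match = DOMAIN_LIKE.fullmatch(text.lower())
        if match and match.group(0).replace("www.", "") not in host:
            flagged.append((href, text))
    return flagged

email_body = ('<p>Please verify your account at '
              '<a href="http://login.evil.example/chase">chase.com</a></p>')
print(suspicious_links(email_body))
# -> [('http://login.evil.example/chase', 'chase.com')]
```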
Targeted phishing attacks are more specific and personalized than generic ones. They are tailored to an individual or a particular organization, with criminals often conducting research on job titles, colleagues, and supervisors before sending emails.
Old scams are getting bigger and better
Generative AI has made it far harder to distinguish real from fake, and scams are becoming increasingly sophisticated. Previously, giveaways such as odd fonts, writing or grammar were easy to spot. With tools like ChatGPT and FraudGPT, criminals can now craft convincing phishing and spear phishing emails, impersonate a CEO or other manager in a company, and even hijack a person's voice for a fake phone call or their likeness for a video call.
Recently in Hong Kong, a finance employee believed he had received a message from the company's UK-based chief financial officer requesting a $25.6 million transfer. Although he initially suspected a phishing email, his fears were assuaged after a video call with the CFO and other colleagues he recognized. It was later discovered that everyone on the call had been deepfaked. The employee uncovered the deception only after checking with head office, but by then the money had already been transferred.
The effort put into making these scams credible is quite impressive, according to Christopher Budd, a director at cybersecurity firm Sophos.
Deepfakes featuring prominent public figures have emerged, demonstrating the rapid advancement of technology. In the summer, a fake investment scheme depicted a deepfaked Elon Musk endorsing a non-existent platform. Additionally, deepfaked videos of Gayle King, Tucker Carlson, and Bill Maher discussing Musk's new investment platform circulated on social media platforms such as TikTok, Facebook, and YouTube.
The process of creating synthetic identities is becoming increasingly simple, as people can either use stolen information or generate new identities using AI technology, according to Andrew Davies, the global head of regulatory affairs at ComplyAdvantage.
Criminals can use the vast amount of information available online to create highly convincing phishing emails, as large language models are trained on the internet and are familiar with the company, CEO, and CFO, according to Cyril Noel-Tagoe, principal security researcher at Netacea, a cybersecurity firm that specializes in automated threats.
Larger companies at risk in world of APIs, payment apps
Generative AI makes these threats more credible, but automation and the proliferation of websites and apps that handle financial transactions are also expanding their scale.
According to Davies, the evolution of fraud and financial crime has been significantly influenced by the transformation of financial services. A decade ago, electronic money transfer options were limited, primarily relying on traditional banks. However, the proliferation of payment solutions such as PayPal, Zelle, Venmo, Wise, and others has expanded the opportunities for criminals to commit fraud. Traditional banks are increasingly using APIs, which connect apps and platforms, providing another potential avenue for attack.
Davies said criminals use generative AI to produce convincing messages rapidly, then rely on automation to amplify their efforts. The goal is volume: send 1,000 spear phishing emails or CEO fraud attacks, and even a one-in-ten success rate could yield millions of dollars.
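The arithmetic behind that claim is simple enough to sketch. In the back-of-envelope below, only the volume and the one-in-ten rate come from the quote; the average loss per successful attack is an assumed figure for illustration.

```python
# Back-of-envelope for the economics Davies describes: send many
# AI-generated spear phishing emails, profit from the few that land.
emails_sent = 1_000
success_rate = 0.10           # "one in ten" from the quote
avg_loss_per_victim = 50_000  # assumed figure; real losses vary widely

victims = emails_sent * success_rate
expected_take = victims * avg_loss_per_victim
print(f"{victims:.0f} victims -> ${expected_take:,.0f}")
# -> 100 victims -> $5,000,000
```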
A survey by Netacea found that 22% of companies surveyed had been attacked by a fake account creation bot; in the financial services industry, the figure rose to 27%. Of companies that detected an automated bot attack, 99% said they saw attacks increase in 2022. Larger companies were most likely to see a significant rise, with 66% of companies with $5 billion or more in revenue reporting a "significant" or "moderate" increase. While all industries reported some fake account registrations, financial services was the most targeted, with 30% of attacked financial services businesses saying 6% to 10% of their new accounts are fake.
The financial industry is combating AI-driven fraud with its own AI systems. Mastercard has developed a new AI model to detect fraudulent transactions by identifying "mule accounts" used by criminals to transfer illicit funds.
Impersonation tactics are increasingly used by criminals to deceive victims into believing that the transfer is legitimate and intended for a real person or company. Banks have found these scams challenging to detect, with customers passing all necessary checks and sending money themselves, without the need for criminals to breach any security measures. Mastercard estimates that its algorithm can help banks reduce the costs associated with identifying and preventing fraudulent transactions.
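Mastercard has not published the details of its model, but the intuition behind mule detection can be illustrated with a toy heuristic: an account that collects inbound transfers from many distinct senders and quickly forwards nearly all of the money on looks mule-like. Everything in the sketch below, from the thresholds to the sample transfers, is invented for illustration; real systems use trained models over far richer features.

```python
# Toy heuristic for spotting mule-like accounts: many inbound
# transfers from distinct senders, almost all forwarded out again.
from collections import defaultdict

transfers = [
    # (sender, receiver, amount)
    ("victim1", "mule", 9_000),
    ("victim2", "mule", 8_500),
    ("victim3", "mule", 9_200),
    ("mule", "offshore", 26_000),
    ("alice", "bob", 120),
]

inbound = defaultdict(list)
outbound = defaultdict(float)
for sender, receiver, amount in transfers:
    inbound[receiver].append((sender, amount))
    outbound[sender] += amount

def looks_like_mule(account, min_senders=3, forward_ratio=0.9):
    """Flag accounts fed by several distinct senders that forward
    nearly everything they receive."""
    senders = {s for s, _ in inbound[account]}
    total_in = sum(a for _, a in inbound[account])
    return (len(senders) >= min_senders
            and total_in > 0
            and outbound[account] >= forward_ratio * total_in)

for account in set(inbound) | set(outbound):
    if looks_like_mule(account):
        print(account, "flagged as possible mule")  # -> mule flagged
```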
More detailed identity analysis is needed
Criminals have become extremely advanced, but short of a few highly motivated attackers with insider knowledge, they cannot fully replicate a company's internal operations, according to Noel-Tagoe.
To verify that a money transfer request supposedly from the CEO or CFO is legitimate, employees should follow the company's established procedures. If such requests normally come through an invoicing platform rather than email or Slack, a request arriving by email or Slack should prompt the employee to contact the requester through another channel and verify it.
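That advice amounts to a simple channel policy: never act on a transfer request over the same channel it arrived on. The channel names and approved list in the sketch below are hypothetical.

```python
# Sketch of an out-of-band verification rule: requests arriving
# outside the normal workflow require a callback before any transfer.
APPROVED_REQUEST_CHANNELS = {"invoicing_platform"}  # hypothetical

def requires_out_of_band_check(request_channel: str) -> bool:
    """Flag transfer requests arriving outside the normal workflow."""
    return request_channel not in APPROVED_REQUEST_CHANNELS

for channel in ["invoicing_platform", "email", "slack", "video_call"]:
    if requires_out_of_band_check(channel):
        print(f"{channel}: call the requester back on a known number first")
    else:
        print(f"{channel}: proceed through the standard approval workflow")
```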
To distinguish between real and fake identities, companies are exploring a more comprehensive authentication process. Currently, digital identity verification involves presenting an ID and a real-time selfie. In the future, companies may require individuals to perform additional actions, such as blinking or speaking their name, to differentiate between live video and pre-recorded content.
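The blink-or-speak idea is a liveness challenge: the verifier issues a random prompt that a pre-recorded or replayed video could not have anticipated. The flow can be sketched schematically; detect_action below is a stub standing in for the computer vision and audio models a real system would run.

```python
# Schematic liveness-challenge flow: the server picks an unpredictable
# action, so a pre-recorded deepfake video cannot comply.
import secrets
import time

CHALLENGES = ["blink twice", "turn head left", "say your full name"]

def issue_challenge():
    """Pick an unpredictable challenge and note when it was issued."""
    return secrets.choice(CHALLENGES), time.monotonic()

def detect_action(video_stream, challenge):
    """Stub: pretend to analyze the live capture for the requested action."""
    return video_stream.get(challenge, False)

def verify_liveness(video_stream, max_response_seconds=10):
    challenge, issued_at = issue_challenge()
    performed = detect_action(video_stream, challenge)
    in_time = (time.monotonic() - issued_at) <= max_response_seconds
    # A recording made earlier cannot perform a challenge chosen after
    # the fact, which is what defeats simple replayed deepfakes.
    return performed and in_time

# Simulated live session that can perform any requested challenge:
live_session = {c: True for c in CHALLENGES}
print(verify_liveness(live_session))  # True for a live, cooperative user
```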