The AI frontier has become a breeding ground for cybercrime, but the good guys are closing in on them.

  • AI is being used by cybercriminals to carry out highly targeted attacks at scale, resulting in people unknowingly sending money and sensitive information or becoming vulnerable to theft.
  • Criminals can now rent out AI language models developed in the underground community to create text-based scams.
  • As social engineering attacks become more sophisticated and widespread, generative AI is providing defenders with new tools to combat them.

In the age of artificial intelligence, cybercriminals are leveraging AI to execute highly targeted attacks at scale, tricking people into unknowingly sending money and sensitive information, often through methods victims didn't even know to watch for.

An employee of a Hong Kong IT firm recently transferred over $25 million to a criminal after being tricked by a deepfake video call impersonating the company's chief financial officer. Similarly, a deepfake Taylor Swift video was used to scam Swifties with a fake Le Creuset cookware giveaway. Believable emails, social media posts, and advertisements, complete with perfect grammar and accounts that appear legitimate, are also common ways scammers trick people.

In 2023, business email compromise (BEC) attacks increased from 1% to 18.6% of all threats, representing a growth rate of 1760%, according to Perception Point's latest cybersecurity trends report. The rise in BEC attacks is driven by the use of generative AI tools.

Cybercriminals do not use plain old ChatGPT to create text-based scams, but rather rely on services from the underground cybercrime community. According to Steve Grobman, senior vice president and chief technology officer at McAfee, the cybercrime ecosystem has removed all guardrails.


In 2023, brand impersonation was a common cyberattack method, with over half (55%) of instances involving organizations' own brands, according to a Perception Point report. Cybercriminals can impersonate brands through account takeovers on social media or email, or they can employ malvertising: planting a malicious ad on Google that mimics a legitimate brand and redirects visitors to a fake site.

Perception Point's chief technology officer, Tal Zamir, revealed that criminals can now produce polymorphic malware at scale using AI and automation, and they are also receiving assistance in vulnerability research to make the malware more harmful.

As cybercriminals use AI to enhance and scale their social engineering attacks, defenders are benefiting from the same technology. Grobman says society's ability to rely on digital resources depends on the cyber defense industry playing an effective cat-and-mouse game with cybercriminals.

How AI-generated email scams are being stopped

Mimecast's senior manager for product management, Kiri Addison, explains that AI can now analyze the sentiment of messages beyond just flagging specific keywords. This process can be automated for maximum effectiveness. Additionally, defenders can use AI to defend against a broader range of issues by feeding data into their existing models or creating new data sets.

Addison, whose firm Mimecast specializes in email security, said that while attackers can now craft impressive emails, the crucial task is still preventing those emails from ever reaching the user's inbox.

McAfee is one of the companies developing AI-detection tools to counter the erosion of trust caused by deepfakes. The company unveiled Project Mockingbird at CES 2024, which it claims can detect and expose AI-altered audio within video. However, Grobman, the company's CTO, compares AI detection to weather forecasting, noting that "When you're working in the world of AI, things are a lot less deterministic."

Perception Point reported that quishing, phishing via malicious QR codes, accounted for 2% of all threats in 2023. In response, the firm prioritizes detecting QR codes as soon as one arrives on a device. However, Zamir, the firm's CTO, admitted that traditional security systems are not equipped to detect and follow up on malicious QR codes, meaning quishing remains prevalent and could be propelled further by AI and automation.

Cybercrime is a business

Public education is a proactive way to stop threats before they succeed: people can recalibrate their trust in what they see, hear, and read, much as many parents did after the latchkey kid era.

Grobman advises asking questions such as: "Is this deal too good to be true?" and "Can I verify it through a credible news source or a trustworthy individual?"

At the organization level, Addison suggests adopting a risk-based approach. She advises asking: What are your valuable assets? What are the potential reasons for an attacker to target you? Additionally, she recommends keeping one eye on current threats and another on future threats, such as quantum computing attacks, which she predicts will become a significant concern.

Addison stated that providing real examples of such attacks aids in putting things into perspective.

Cybersecurity experts remain optimistic despite ongoing and evolving threats. As Zamir put it, "Defenders have an advantage that attackers just cannot have, and we know the organization from the inside."

Both sides have reached a new level of efficiency. "It's crucial to view cybercrime as a business," Grobman said. Just as legitimate businesses are using AI to enhance their productivity and effectiveness, cybercriminals are employing similar strategies.

by Rachel Curry
