Malicious phishing emails are on the rise due to the use of AI tools like ChatGPT.
- A new report from cybersecurity firm SlashNext documents a 1,265% surge in malicious phishing emails and a 967% rise in credential phishing since the fourth quarter of 2022.
- Cybercriminals are crafting business email compromise (BEC) and other phishing messages with the help of sophisticated tools such as ChatGPT.
- AI-based threats are growing rapidly in speed, volume, and sophistication, the report finds.
A recent report by cybersecurity firm SlashNext reveals a 1,265% increase in malicious phishing emails and a 967% rise in credential phishing since the fourth quarter of 2022.
Cybercriminals are using generative artificial intelligence tools such as ChatGPT to write complex, targeted business email compromise (BEC) and phishing messages, according to a report based on the company's threat intelligence and a survey of over 300 North American cybersecurity professionals.
On average, 31,000 phishing attacks were sent daily, the research found. Nearly half of the cybersecurity professionals surveyed reported receiving a BEC attack, and 77% said they had been targeted by phishing.
Patrick Harr, CEO of SlashNext, said the research confirms fears that generative AI is fueling a rapid increase in phishing. The technology lets attackers speed up and diversify their campaigns, modifying malware code or spinning out countless variations of a social engineering lure to improve their odds of success.
AI-based threats are rapidly increasing in speed, volume, and sophistication, according to Harr.
It is no coincidence, Harr argued, that the exponential growth in malicious phishing emails began with the launch of ChatGPT at the end of last year. Generative AI chatbots have made it easier for novice bad actors to launch targeted spear-phishing attacks at scale, while handing skilled, experienced attackers even more powerful tools.
Billions of dollars in losses
Phishing attacks pay off, Harr said, citing the FBI's Internet Crime Report: BEC alone accounted for $2.7 billion in losses in 2022, with another $52 million lost to other types of phishing.
Those rewards are exactly why phishing and BEC attempts keep growing in popularity among cybercriminals, according to Harr.
"Our research shows that threat actors are using tools like ChatGPT to quickly deliver cyber threats and write targeted phishing messages, including BEC attacks," said Harr.
In July, SlashNext researchers analyzed a BEC attack that used ChatGPT alongside WormGPT, a cybercrime tool promoted as a malicious alternative to GPT models and designed specifically for criminal activities such as launching BEC attacks.
Reports also surfaced of another malicious chatbot, FraudGPT, marketed as an exclusive tool for fraudsters, hackers, spammers, and similar actors and boasting an extensive list of features, Harr said.
SlashNext researchers have also flagged a growing threat from AI "jailbreaks," in which hackers strip the legitimate-use guardrails from gen AI chatbots, turning tools like ChatGPT into weapons that deceive victims into giving away personal data or login credentials and open the door to further damaging incursions.
Chris Steffen, research director at analyst and consulting firm Enterprise Management Associates, said cybercriminals are using generative AI tools such as ChatGPT and natural language processing models to craft more convincing phishing messages, including BEC attacks.
The "Prince of Nigeria" emails, which were once broken and nearly unreadable, have been replaced by extremely convincing and legitimate-sounding messages that mimic the styles of those being impersonated or official correspondence from trusted sources, such as government agencies and financial services providers, Steffen stated.
By analyzing past writings and publicly available information, Steffen said, attackers can use AI to make emails highly convincing. A cybercriminal might, for example, send an AI-generated email impersonating a supervisor or boss that references a company event or a personal detail to make the message appear legitimate.
To combat the escalating attacks, Steffen advised cybersecurity leaders to give end users ongoing training.
A one-time reminder is not enough, he stressed: cybersecurity professionals must instill a culture of security awareness in which end users treat security as a business priority and feel comfortable reporting suspicious emails and other security-related activity.
Steffen also called for email filtering tools that use machine learning and AI to detect and block phishing emails. These solutions must be regularly updated and fine-tuned to keep pace with evolving threats and advances in AI technology.
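As a minimal sketch of the kind of filtering Steffen describes, the example below trains a simple text classifier to score incoming messages. The tiny inline dataset, the model choice, and the quarantine threshold are all illustrative assumptions; production filters combine many more signals, such as headers, URLs, and sender reputation.

```python
# Minimal sketch of an ML-based phishing filter, assuming scikit-learn
# is installed. The inline dataset is purely illustrative; a real filter
# would train on a large labeled corpus plus many non-text signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: email bodies labeled 1 (phishing) or 0 (benign).
emails = [
    "Urgent: your account is suspended, verify your password here",
    "Wire transfer needed today, reply with the banking details",
    "Team lunch is moved to Thursday at noon",
    "Attached is the Q3 report we discussed in Monday's meeting",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word unigrams and bigrams feed a logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(emails, labels)

# Score an incoming message and quarantine it above a tuned threshold.
incoming = "Please verify your password immediately to avoid suspension"
score = model.predict_proba([incoming])[0][1]  # probability of the phishing class
action = "quarantine" if score > 0.5 else "deliver"
print(f"{action} (phishing score = {score:.2f})")
```

The 0.5 cutoff stands in for a threshold tuned on held-out data, and the model would be retrained as new campaigns emerge, which is the "regularly updated and fine-tuned" maintenance Steffen describes.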
Organizations also need regular testing and security audits of potentially exploitable systems to identify vulnerabilities and weaknesses in their defenses, along with employee training and prompt remediation of known issues to minimize the attack surface, Steffen said.
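One small, scriptable slice of such an audit is checking that the organization's domains publish SPF and DMARC records, which make it harder for attackers to spoof those domains in BEC emails. The sketch below assumes the third-party dnspython package, and the domain is a placeholder.

```python
# Narrow audit check: does a domain publish SPF and DMARC records?
# Assumes the dnspython package (pip install dnspython); "example.com"
# below is a placeholder for a real domain.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return the TXT records for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def audit_spoofing_defenses(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself; DMARC under _dmarc.<domain>.
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")

audit_spoofing_defenses("example.com")  # placeholder domain
```

A fuller audit would also inspect the DMARC policy itself (p=none versus p=quarantine or p=reject) and verify DKIM selectors, but even a presence check like this catches a common gap.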
Finally, to mitigate control gaps and provide defense in depth, most organizations should adopt a zero trust strategy.