Identifying AI imposters in video, audio, and text: A guide to deepfake technology's mainstream use

  • OpenAI recently launched its Sora video generation tool and Voice Engine, an audio tool that can replicate an individual's voice from just 15 seconds of recorded audio; citing potential risks, it has released Voice Engine only to a limited group of users.
  • The number of dark web sites that use AI to create deepfakes is growing rapidly, and they are producing increasingly realistic video and audio content.
  • CEOs and other boardroom executives are expected to be frequent deepfake targets, and the steps companies are taking to identify AI imposters are good tips for any individual to learn for video, audio, and text communications.

For over two decades, Carl Froggett served as Citibank's chief information security officer, safeguarding the bank's infrastructure against increasingly complex cyberattacks. Although traditional fraud tactics, such as paper forgery and email scams, have long been a threat to banking and businesses, the emergence of deepfake technology powered by generative AI represents a new and unprecedented challenge.

Froggett, now the CIO of cybersecurity firm Deep Instinct, expressed concern about deepfakes being used in business.

Cybercriminals are increasingly using deepfake technology to steal millions from companies, and boardrooms and office cubicles have become popular targets. That makes these workplaces a proving ground for learning to identify AI imposters before scams succeed.

Froggett stated that the challenge lies in the fact that generative AI is so lifelike.

AI video and audio tools are rapidly improving and being deployed. OpenAI unveiled its video generation tool Sora in February and introduced an audio tool called Voice Engine in March that can accurately replicate an individual's voice from a 15-second soundbite. Due to the potential risks, OpenAI has released Voice Engine only to a limited group of users.

Froggett, a native of the United Kingdom, offers his own regional British accent as an example.

He said that although he uses complex language and uncommon words, generative AI has consumed everything he has made public; he is confident a speech he has given is posted somewhere, and from it the technology can generate hyper-realistic voicemail, email, and video in his voice.


The case of a multinational corporation employee being tricked into transferring $25 million to a fraudulent account after a Zoom call with deepfakes of her colleagues is a warning of what's to come, according to experts.

"These bad guys have just obtained access to these tools, and they are only getting started," Froggett said.

Rupal Hollenbeck, president of Check Point Software, said it takes only a brief snippet of someone talking to create a flawless audio deepfake, and cybercriminals can now access AI-driven deepfake tools at low cost. And that is only the audio side; with the advent of video deepfakes, she said, the game has changed.

The steps that corporations are taking to prevent successful deepfakes can serve as a guide for individuals on how to conduct their lives in a gen AI world and interact with friends, family, and coworkers.

How to identify an AI video imposter

There are many ways to spot an AI imposter, some relatively simple.

If there is any doubt about whether the person on a video call is real, Hollenbeck suggests asking them to turn their head to the right or left, or to turn around. If the person complies but their head disappears from the screen, end the call immediately, she says.

She said she is telling everyone she knows to use the head-turn test, emphasizing that AI cannot render what it has not seen. Deepfakes are flat right now, she added, but very powerful.

But there's no telling how long that will last.

BlackCloak CEO Chris Pierson believes deepfakes with 3D capability are imminent, as the underlying models are improving at an alarming rate.

He advises asking the other party for "proof of life" evidence of authenticity on video, such as holding up a company report or a newspaper, to verify their identity. Failure to comply with such basic requests is a warning sign.

How code words and QR codes can help

Hollenbeck and Pierson advise executive teams to create a new code word each month and store it in an encrypted password vault. If there is any uncertainty about the person on the other end of a communication, ask for the code word to be sent by text. They also suggest setting a threshold for deploying the tactic: any request for a transaction over $100,000, for example, should trigger the code-word check.
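For illustration, here is a minimal Python sketch of how such a code-word check might work, assuming the plaintext word lives in the team's encrypted vault and the verification side stores only a salted hash; the function names and threshold handling are hypothetical, not drawn from any specific product.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: the month's code word is kept in an encrypted
# vault; the verification side stores only a salted PBKDF2 hash of it.

THRESHOLD_USD = 100_000  # requests above this trigger the code-word check

def store_code_word(word: str) -> tuple[bytes, bytes]:
    """Derive a salted hash of the code word for later verification."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", word.encode(), salt, 100_000)
    return salt, digest

def verify_code_word(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Check a texted code word in constant time to avoid timing leaks."""
    test = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(test, digest)

def requires_challenge(amount_usd: float) -> bool:
    """Apply the tactic only above the agreed transaction threshold."""
    return amount_usd > THRESHOLD_USD

# Example: a $250,000 wire request arrives over video.
salt, digest = store_code_word("word-of-the-month-from-vault")
if requires_challenge(250_000):
    print(verify_code_word("word-of-the-month-from-vault", salt, digest))  # True
```

Storing only a salted hash means that even if the verification system is compromised, the month's code word itself is not exposed.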

By limiting corporate calls to approved company channels, businesses can significantly minimize the risk of falling victim to deepfakes.

The issue, Pierson said, arises when communications venture beyond those approved channels.

The use of deepfakes in business is on the rise, according to Nirupam Roy, an assistant professor of computer science at the University of Maryland, and they are not limited to fraudulent bank transfers. They can also be used for targeted defamation to damage the reputation of a product or a company.

Roy and his team have created a system called TalkLock that can detect both deepfakes and shallowfakes, which he describes as relying on "less intricate editing methods and more on linking fragments of truth to minor falsehoods."

TalkLock takes the form of an app that embeds a QR code into audiovisual media, such as live public appearances, social media posts, advertisements, and news. The code can prove authenticity even for unofficial recordings, such as those taken by audience members at events, which metadata alone cannot verify.
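TalkLock's internal protocol isn't described here, so the following stdlib-only Python sketch shows only the general idea of QR-backed media authentication: hash short segments of the media, tag the hash with a key held by the speaker's device, and publish the result as a QR payload that a verifier app can check. All names and fields are hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch only; this is not TalkLock's actual protocol.
SPEAKER_KEY = b"speaker-device-secret"  # stand-in for a real signing key

def qr_payload_for_segment(segment: bytes, speaker_id: str) -> str:
    """Build the JSON payload that would be rendered as a QR code."""
    digest = hashlib.sha256(segment).hexdigest()
    tag = hmac.new(SPEAKER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"speaker": speaker_id, "ts": int(time.time()),
                       "segment_sha256": digest, "tag": tag})

def verify_segment(segment: bytes, payload_json: str) -> bool:
    """A verifier app re-hashes the recording and checks the tag."""
    payload = json.loads(payload_json)
    digest = hashlib.sha256(segment).hexdigest()
    expected = hmac.new(SPEAKER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == payload["segment_sha256"]
            and hmac.compare_digest(expected, payload["tag"]))

clip = b"two seconds of audio samples"
payload = qr_payload_for_segment(clip, "speaker-001")
print(verify_segment(clip, payload))               # True: clip matches the tag
print(verify_segment(b"tampered audio", payload))  # False: mismatch detected
```

A real system would use public-key signatures rather than a shared secret, so anyone could verify a clip without also being able to forge tags.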

How to live a multi-factor authentication life offline

Even with added safeguards, experts anticipate an escalating cycle between deepfakes and the technology for detecting them. Companies can adopt specific protocols to blunt the damage, but those measures are harder to replicate in individual situations.

Ironscales CEO Eyal Benishti said organizations will increasingly implement segregation of duties so that no single person can cause harm to a company. That means dividing the labor processes for handling sensitive data and assets, such as requiring two people to change the bank account information used to pay invoices or payroll. Even if one employee falls for a social engineering attack, there are stop-gaps, because other stakeholders must fulfill their roles in the chain of command.
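As a rough sketch of that segregation-of-duties pattern, the Python below requires sign-off from two distinct employees before a sensitive banking change can take effect; the class and field names are illustrative assumptions, not drawn from any particular workflow product.

```python
from dataclasses import dataclass, field

# Illustrative sketch of segregation of duties: a sensitive change only
# takes effect after two different employees approve it.

@dataclass
class BankDetailChange:
    vendor: str
    new_account: str
    approvals: set[str] = field(default_factory=set)

    REQUIRED_APPROVERS = 2  # class attribute, not a dataclass field

    def approve(self, employee_id: str) -> None:
        self.approvals.add(employee_id)  # duplicate approvals collapse in the set

    def can_apply(self) -> bool:
        """True only once two distinct people have signed off."""
        return len(self.approvals) >= self.REQUIRED_APPROVERS

change = BankDetailChange(vendor="Example Vendor", new_account="000-111")
change.approve("alice")
change.approve("alice")   # a phished employee approving twice won't help
print(change.can_apply())  # False
change.approve("bob")
print(change.can_apply())  # True
```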

According to Hollenbeck, organizations and their people should adopt a multi-factor approach to verifying reality, layering several methods of authentication. Old-school methods, such as physically visiting the boss in person, still work and cannot be easily deepfaked.

"Nowadays, it's not as easy to believe what you see as it used to be," Hollenbeck remarked.

Deepfakes are the latest in a long line of scams that exploit human vulnerabilities by creating a false sense of urgency. The best way to combat deepfakes, according to Pierson, is to slow down. This tactic is easier for individuals to implement in their personal lives than for employees in their work lives.

"Pausing often results in a clear resolution. Companies should establish a safe harbor policy that allows employees to decline decisions under pressure, contact security, and be protected from consequences," Pierson advised. Often, corporate culture doesn't respect employees' opinions.

"Giving people the ability to stop and say no is crucial. If they don't feel comfortable saying no, it can lead to mistakes, and everyone may struggle to say no," Pierson stated.
