Misinformation campaigns during election years are a concern in Asia, with deepfakes posing a significant challenge.

  • This year, over 60 countries and more than four billion individuals will cast their ballots for their leaders and representatives.
  • The number of deepfakes worldwide increased tenfold from 2022 to 2023, with APAC experiencing a 1,530% surge, according to identity verification firm Sumsub.
  • CrowdStrike warned that nation-state actors could launch misinformation or disinformation campaigns to disrupt the upcoming elections.
  • Simon Chesterman, senior director of AI governance at AI Singapore, stated that Asia is not adequately prepared to address deepfakes in elections through regulation, technology, and education.

In the lead-up to the Indonesian elections on February 14, a video of the late Indonesian president Suharto endorsing the political party he previously led went viral online.

On X, a deepfake video featuring his likeness and voice garnered 4.7 million views.

This was not a one-off incident.

In Pakistan, a deepfake of former prime minister Imran Khan was released during the national elections, stating that his party would be boycotting them. Simultaneously, in the U.S., New Hampshire voters heard a deepfake of President Joe Biden urging them not to vote in the presidential primary.

Deepfakes of politicians have become increasingly common, especially as 2024 shapes up to be a major global election year.

This year, with at least 60 countries and over four billion people casting their votes, the issue of deepfakes becomes increasingly concerning.

Rise of election deepfake risks

The number of deepfakes worldwide increased tenfold from 2022 to 2023, with APAC experiencing a 1,530% surge in deepfakes during the same period, according to a Sumsub report from November.

Identity fraud rates increased by 274% between 2021 and 2023, with social media and digital advertising seeing the biggest spike. Additionally, professional services, healthcare, transportation, and video gaming were also affected by identity fraud.

Simon Chesterman, senior director of AI governance at AI Singapore, stated that Asia is not adequately prepared to address deepfakes in elections through regulation, technology, and education.

CrowdStrike's 2024 Global Threat Report predicts that nation-state actors, including those from China, Russia, and Iran, are likely to carry out misinformation or disinformation campaigns to disrupt elections this year.

If a major power decided to disrupt a country's election, the impact would likely be far greater than that of political parties dabbling at the margins, Chesterman said.

Still, most deepfakes will be generated by actors within the respective countries, he added.

Carol Soon, principal research fellow and head of the society and culture department at the Institute of Policy Studies in Singapore, said domestic actors may include opposition parties and political opponents, as well as extreme right-wingers and left-wingers.

Deepfake dangers

Deepfakes can make it harder for people to find accurate information and form informed opinions about a party or candidate, according to Soon.

Even where governments have tools to counter online falsehoods, the concern is that a scandalous deepfake will go viral before it can be debunked, turning voters away from a particular candidate.

""Deep fake pornography involving Taylor Swift can spread incredibly quickly, and regulation is often not enough and incredibly hard to enforce," he said, adding that it's often too little too late."

Adam Meyers, head of counter adversary operations at CrowdStrike, stated that deepfakes may cause confirmation bias in individuals: "Even if they understand it's not true, if it aligns with their beliefs and desires, they won't let it go."

Fake footage depicting election misconduct, such as ballot stuffing, could erode public trust in the legitimacy of an election, as stated by Chesterman.

Candidates may deny negative or unflattering truths about themselves and attribute them to deepfakes, said Soon.

Who should be responsible?

Social media platforms have a quasi-public role, and therefore, more responsibility needs to be taken on by them, said Chesterman.

In February, 20 leading tech companies, including Microsoft, Meta, Google, Amazon, IBM, OpenAI, Snap, TikTok, and X, pledged to combat the misuse of AI in elections this year.

The effectiveness of the tech accord signed will depend on implementation and enforcement, said Soon. A multi-prong approach is needed as tech companies adopt different measures across their platforms.

Tech companies will also need to be transparent about the kinds of processes they put in place, Soon said.

Chesterman argued that it is unreasonable to expect private companies to perform what are essentially public functions, such as deciding what content to allow on social media. Such decisions can take companies months to make, he added.

""Establishing regulations and setting expectations for companies is necessary because we cannot solely depend on their good intentions, as Chesterman pointed out," said Chesterman."

The Coalition for Content Provenance and Authenticity (C2PA) has launched digital credentials for content, which will provide verified information such as the creator's details, creation location and time, and whether generative AI was used to create the material.

C2PA member companies include Adobe, Microsoft, Google and Intel.
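
To make the idea concrete, here is a minimal sketch in Python of what a content-provenance record does: bind a creator, a timestamp, and an AI-generation flag to a hash of the content, so later tampering can be detected. The `make_credential` and `verify_credential` functions and all field names are invented for this illustration; the real C2PA specification uses cryptographically signed manifests embedded in the media file, not a bare JSON record.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_credential(content: bytes, creator: str, ai_generated: bool) -> dict:
    """Build a minimal, illustrative provenance record for a piece of content.

    This mirrors the *idea* of content credentials (who made it, when, and
    whether generative AI was involved) but is NOT the real C2PA format.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_generated,
    }

def verify_credential(content: bytes, credential: dict) -> bool:
    """Check that the content still matches the hash recorded in the credential."""
    return hashlib.sha256(content).hexdigest() == credential["content_sha256"]

image = b"...raw image bytes..."
cred = make_credential(image, creator="example-studio", ai_generated=True)
print(json.dumps(cred, indent=2))
print(verify_credential(image, cred))         # True: content untouched
print(verify_credential(image + b"x", cred))  # False: content was altered
```

A real implementation additionally signs the record with the issuer's private key, so a viewer can verify not just that the content is unmodified but also who vouched for it.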

Early this year, OpenAI said it would implement C2PA content credentials for images created with DALL·E 3.

In a January interview at the World Economic Forum, OpenAI founder and CEO Sam Altman stated that the company was highly focused on preventing its technology from being used to influence elections.

"Our role is distinct from that of a distribution platform, such as a social media site or news publisher, he stated. We must collaborate with them, which means that we generate content here and distribute it here. There must be a productive dialogue between us."

Meyers suggested creating a bipartisan, non-profit technical entity with the sole mission of analyzing and identifying deepfakes.

"People can rely on some sort of mechanism to send suspected manipulated content to the public," he said. "However, it's not foolproof."

While technology can contribute to the solution, consumers are not yet prepared, according to Chesterman.

Soon also highlighted the importance of educating the public.

She emphasized the importance of maintaining outreach and engagement initiatives to increase public vigilance and awareness when encountering information.

Users should be more vigilant in fact-checking critical pieces of information before sharing them with others, she said.

"Everyone has something to do," Soon said. "It's all hands on deck."

— CNBC's MacKenzie Sigalos and Ryan Browne contributed to this report.

by Chelsea Ong
