Moody's warns that election deepfakes could harm institutional trust.

  • On Wednesday, Moody's issued a report warning that AI-generated deepfake political content could pose a threat to election integrity and erode the trustworthiness of U.S. institutions.
  • The FCC has proposed a rule requiring political ads on TV, video, and radio, though not on social media, to disclose whether they used AI-generated content, in an effort to combat deepfakes and manipulated content.
  • While platforms such as Meta and Google are taking steps to police AI-manipulated content, there are no clear industry-wide guidelines in place.

As election season progresses and AI technology advances, the use of AI in political advertising is becoming a growing concern for the market and the economy. A recent report from Moody's highlights the risks that generative AI and deepfakes pose to election integrity and, by extension, to the credibility of U.S. institutions.

"The upcoming election is expected to be closely contested, raising concerns that AI deepfakes could be used to deceive voters and intensify divisions, ultimately affecting the legitimacy of U.S. institutions, wrote Gregory Sobel and William Foster, assistant vice president and senior vice president at Moody's."

The government has intensified its efforts to combat deepfakes. Federal Communications Commission Chairwoman Jessica Rosenworcel proposed a new rule on May 22 that would require political TV, video, and radio ads to disclose if they used AI-generated content. The FCC has been concerned about AI use in this election cycle's ads, with Rosenworcel highlighting potential issues with deepfakes and other manipulated content.

The Federal Election Commission is considering AI disclosure rules of its own that would apply to all social media platforms. The FCC, however, has not previously regulated social media, and a letter to Rosenworcel urged the commission to delay its decision until after the elections, arguing that because the rules would not be mandatory across all digital political ads, voters could be misled when online ads that used AI carried no disclosure.

The FCC's proposal may not reach social media directly, but it opens the door for other bodies to regulate digital ads as the U.S. government seeks to establish itself as a strong regulator of AI content, and it could eventually lead to similar rules for other forms of advertising.

"The ruling could revolutionize disclosures and advertisements on traditional media during political campaigns, as stated by Dan Ives, managing director and senior equity analyst at Wedbush Securities. However, there is concern that once the genie is out of the bottle, it may be difficult to put it back, and unintended consequences may arise from this ruling."

Some social media platforms have already implemented AI disclosure ahead of regulations. For instance, Meta requires an AI disclosure for all advertising and has banned new political ads the week before the November elections. Google, on the other hand, requires disclosures for political ads with modified content that "inauthentically depicts real or realistic-looking people or events," but not for all political ads.
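
To illustrate how a platform-level disclosure rule like Meta's or Google's might be enforced at ad submission, here is a minimal sketch in Python. The `PoliticalAd` schema and `validate_ad` check are hypothetical illustrations, not either company's actual API or policy engine.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PoliticalAd:
    """Hypothetical ad record; real platform schemas differ."""
    advertiser: str
    media_url: str
    uses_ai_generated_content: bool
    ai_disclosure_label: Optional[str] = None

def validate_ad(ad: PoliticalAd) -> List[str]:
    """Return policy violations for a submitted ad.

    Mirrors the disclosure rules described above: any ad that uses
    AI-generated or AI-modified content must carry a disclosure label.
    """
    violations = []
    if ad.uses_ai_generated_content and not ad.ai_disclosure_label:
        violations.append("AI-generated content requires a disclosure label")
    return violations

# Example: an ad with synthetic imagery but no disclosure is flagged.
ad = PoliticalAd(
    advertiser="Example PAC",
    media_url="https://example.com/ad.mp4",
    uses_ai_generated_content=True,
)
print(validate_ad(ad))  # ['AI-generated content requires a disclosure label']
```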

The issue of misinformation on social media is a major concern for brands during the upcoming election cycle, and Google and Facebook are expected to capture a significant share of the $306.94 billion projected to be spent on U.S. digital advertising in 2024. According to Ives, this is a complex time for advertising online, and brands must be proactive in addressing AI misinformation.

AI-manipulated content can still slip through the cracks of self-policing platforms because of the sheer volume of content posted daily, from AI-generated spam messages to large quantities of AI-generated imagery, which makes it difficult to detect and remove all of it.

"Tony Adams, Secureworks Counter Threat Unit senior threat researcher, stated that the absence of industry standards and the fast pace of technology development make this task difficult. However, these platforms have reported successes in monitoring the most dangerous content on their sites through technical controls, which are powered by AI."

Moody's warned in May that governments and nongovernmental entities were already using deepfakes as propaganda to sow social unrest and, in severe cases, terrorism.

"Abhi Srivastava, Moody's Ratings assistant vice president, wrote that until recently, creating a convincing deepfake required specialized technical knowledge, computing resources, and time. However, with the advent of affordable Gen AI tools, generating a sophisticated deepfake can be done in minutes. This ease of access, coupled with the limitations of social media's existing safeguards against the propagation of manipulated content, creates a fertile environment for the widespread misuse of deep fakes."

New Hampshire's 2024 presidential primary saw a deepfake in action, when an AI-generated audio imitation of President Joe Biden was distributed to voters by robocall.

One potential silver lining, Moody's says, is that the decentralized structure of the U.S. election system, existing cybersecurity policies, and general awareness of potential cyberthreats offer some protection. States and local governments are also moving to block deepfakes and unlabeled AI content, although free speech protections and concerns about stifling technological progress have slowed those efforts in some state legislatures.

Since January, eight states have enacted laws targeting election interference and deepfakes, and roughly 50 pieces of AI-related legislation are introduced in state legislatures every week, according to Moody's.

The United States ranks 10th out of 192 countries in the United Nations E-Government Development Index, according to Moody's, which says that degree of digitization also heightens the country's exposure to cyber risks.

Even without evidence that a deepfake has swayed an election, the belief that one could is enough to erode public trust in the electoral process and in the legitimacy of government institutions, which Moody's considers a credit risk. As people find it harder to distinguish truth from falsehood, they become more likely to disengage from and distrust the government. That trend would be credit negative, Moody's said, increasing political and social risks and undermining the effectiveness of government institutions.

"While law enforcement and the FCC may discourage domestic actors from using AI to deceive voters, there's no doubt that foreign actors will continue to meddle in American politics through the use of generative AI tools and systems. To voters, the message is to remain vigilant and exercise their right to vote."
