Social media companies are under close watch during the American election.
- On Tuesday, Americans will go to the polls, and social media companies have been readying to combat false information on their platforms.
- Since 2016, social media companies have been under pressure from U.S. lawmakers to curb the dissemination of false information.
- The social media platforms Meta, TikTok, X, YouTube, Snap, and Reddit have taken steps ahead of Election Day.
On Election Day, social media platforms such as Facebook, TikTok, X, and YouTube are facing immense pressure to manage the anticipated influx of misinformation, exacerbated by the increasing use of artificial intelligence.
Since the 2016 presidential election, foreign adversaries have exploited social media to influence the outcome by spreading false information. Notably, Russian operatives used Facebook to disseminate misleading content about Democratic nominee Hillary Clinton.
Since 2016, Meta has invested over $20 billion in safety and security measures for global elections. Recently, the company has shifted its focus to political content on Instagram and Threads. Additionally, Meta has been collaborating with fact-checkers, promoting verified voting resources, and labeling AI-generated content in preparation for Election Day.
Foreign actors from Russia, Iran, and China have successfully launched viral disinformation campaigns, as stated by Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, in an October briefing.
According to a joint statement from CISA, the FBI, and the Office of the Director of National Intelligence, Russia was responsible for creating a fake video of a person tearing up ballots in Pennsylvania last month, which garnered hundreds of thousands of views within hours of being posted on Elon Musk's social media platform X.
This Russian activity is part of Moscow's broader effort to raise unfounded questions about the integrity of the U.S. election and stoke divisions among Americans, the statement said.
In late September, CNBC informed Meta about a series of Facebook posts containing misinformation on voting in North Carolina. That same month, the platform's AI software was spreading voter fraud conspiracy theories through a beta feature in the "explore" section. And in October, TikTok failed to catch ads containing false election information despite its ban on political advertising, according to a report from Global Witness.
Easterly stated during the briefing that there is a vast amount of information available, but unfortunately, much of it is misinformation.
Here's how social media companies have been preparing for Election Day.
Meta
Meta has over 40,000 employees working on election safety and security initiatives, and the company collaborates with 11 independent fact-checking partners in the U.S. These partners include PolitiFact, Reuters, and USA Today. However, Meta is not currently working with The Associated Press, as an AP spokesperson previously informed CNBC that their fact-checking agreement with Meta ended in January.
In 2022, the company dismantled a fact-checking tool that would have enabled news services like Reuters and credible experts to verify the trustworthiness of questionable articles by adding comments at the top, as reported by CNBC last year.
Meta announced that it will add fact-check labels to election content that has been debunked on Facebook and Instagram. The reach of posts that are deemed false, altered, or partly false by fact-checkers will be reduced.
Facebook is using in-app notifications and an official Voting Information Center to connect users with information about voter registration and how to vote.
In May, Instagram chief Adam Mosseri announced that official third-party fact-checkers would be able to review and rate content on the Threads platform.
"Previously, we relied on Facebook and Instagram's fact-checking to match false content on Threads. Now, our fact-checkers can evaluate Threads content independently," Mosseri wrote.
The company announced that it will remove content promoting electoral violence, voter interference, and misinformation about how to vote.
On WhatsApp, users can directly message fact-checking organizations, which helps them recognize when information they receive has not been verified and reduces the spread of false information.
The company announced that it will include both visible and invisible watermarks on content created using its Meta AI feature. If Meta deems that AI-generated content has a high risk of misleading the public, it may add a more prominent label.
Meta's integrity efforts remain industry-leading, and with each election, the company incorporates lessons learned to stay ahead of emerging threats, a Meta spokesperson told CNBC in a statement.
Meta has prohibited political ads that claim victory early, undermine the legitimacy of the election, or discourage Americans from voting. The company also blocks new electoral, political, and social issue ads during the final week before Election Day. Meta announced Monday that the restriction period will be extended "until later this week."
TikTok
TikTok plans to spend over $2 billion on trust and safety in 2024, including election integrity, as stated in a September blog post.
TikTok's Election Center, launched in partnership with Democracy Works in January, has been viewed more than 7 million times as of Sept. 4. The center includes voting FAQs from official sources and directs users to it when they engage with election content and searches.
TikTok is collaborating with fact-checking organizations to label unverified content and has misinformation moderators with specialized training. Additionally, the company is partnering with AP to provide real-time election results within the app.
TikTok prohibits the dissemination of misleading AI-generated content, which includes images of public figures supporting particular political views. TikTok creators must label realistic AI-generated content, and the company has introduced a tool to facilitate this.
TikTok prohibits political advertising, and politicians and political parties are not permitted to monetize their accounts on the platform.
TikTok has been actively working to detect and remove accounts involved in covert influence attempts on the platform. In a May blog post, the company revealed that it had identified 15 influence operations in the first four months of the year and removed over 3,000 associated accounts.
TikTok stated that a significant number of these networks were attempting to sway political discourse among their target audience, particularly in relation to elections.
TikTok's foreign ownership has long been viewed as a national security risk by American lawmakers. Donald Trump, the Republican nominee, attempted to ban the platform through an executive order in 2020, but the effort failed. The issue has gained renewed attention recently as concerns about China's growing power have intensified.
In April, President Biden signed legislation granting TikTok's Chinese parent company, ByteDance, nine months to find a buyer for the app, with the possibility of a three-month extension if negotiations are ongoing. However, TikTok filed a lawsuit against the U.S. government in May, and the app's future in the U.S. remains uncertain as litigation continues.
TikTok declined to comment.
X
The company's global government affairs team stated in an article posted to the platform in September that X has been actively collaborating with election officials, campaigns, law enforcement, and security agencies in preparation for the U.S. elections.
The company's safety team monitors the platform to detect fraudulent accounts and spam, and will continue to rely on Community Notes submissions to clarify or correct misinformation found in posts.
The company stated that it actively works to prevent and disrupt campaigns that threaten the integrity of the platform, whether they come from state-affiliated entities or generic spam networks.
In 2022, Musk bought Twitter for $44 billion and subsequently cut more than 80% of its staff, including the trust and safety team, leaving the team with fewer than 20 full-time employees in January 2023.
Musk has been a vocal supporter of Trump and has donated millions of dollars to PACs supporting the Republican nominee. On X, he frequently shares false election information with his more than 200 million followers, including a claim that Democrats are trying to "import" voters into the U.S.
Under X's civic integrity policy, users are prohibited from using the platform to manipulate or disrupt civic processes, such as elections, by sharing content that incites violence, discourages participation, or misleads people about how to participate.
X does allow political advertising across the platform.
X has said it is "constantly revising" its content policies to "address evolving threats, adversarial practices, and malicious actors," CNBC reported.
YouTube
YouTube has been fighting against AI-generated election misinformation by promoting trustworthy content and offering resources for voter registration throughout the year. Additionally, the company introduced new features before Election Day to help voters access the information and context they need to stay informed.
Users will see a panel above search results for federal election candidates that displays their political party, a link to their YouTube channel, and a clickable link to Google Search. YouTube is owned by Google parent Alphabet.
On YouTube's homepage, users will find information on how and where to vote, as well as a panel directing them to Google if they search for "how to vote" or "how to register to vote." Additionally, during Election Day, users will see a shelf of authoritative news channels on the homepage, available in both English and Spanish, the company stated.
As polls close, YouTube will provide context about election results, including links to follow along in real time, under videos and at the top of election-related searches. The company will also temporarily pause ads relating to the election once the last polls close, as stated in the blog post.
YouTube announced that it will remove any election-related content that goes against its established guidelines, such as videos that deceive voters, incite violence, or promote conspiracy theories.
YouTube marks AI-generated content with a label and gives a more prominent label to synthetic content about sensitive topics, such as elections. Additionally, videos with AI-generated content can be removed if they violate YouTube's guidelines.
The company stated that 2024 has been a busy year for elections and that YouTube is committed to empowering and informing voters while protecting them from harmful misinformation or disinformation campaigns.
Google's Threat Analysis Group is collaborating with YouTube to detect and counter foreign adversaries' attempts to interfere with the platform.
YouTube didn't respond to a request for comment.
Snap
According to an April blog post, Snap has been providing users with in-app resources to educate them about the election and local issues.
During 2020, 1.2 million users registered to vote through the app after the company partnered with Vote.org to provide features such as checking registration status and receiving election reminders.
The blog post stated that Snap has been providing election coverage through its news show, "Good Luck America," and has partnered with various trusted media outlets, including NBC News' "Stay Tuned," to provide comprehensive coverage on the platform.
Snapchat allows political ads, which are reviewed by humans, and partners with the Poynter Institute to fact-check the statements in the ads. Additionally, Snap vets the buyers of political ads through a registration and certification process.
Snap didn't respond to a request for comment.
Reddit
According to a February blog post, Reddit's "u/UpTheVote" account and on-platform notification channels have been sharing information about early voting, voter registration, poll worker recruitment, and other election-related resources.
Reddit prohibits content that aims to prevent people from voting, such as posts with inaccurate polling locations or times, while its search function also displays official voting resources, the company stated.
Reddit has also coordinated AMA sessions with experts, nonprofit organizations, and nonpartisan groups to provide users with accurate election information. To run political ads on the platform, advertisers must have their candidate or an official campaign representative participate in an AMA.
Reddit prohibits political attack ads and AI-generated content that's intended to mislead users, according to the blog post. Political ads that contain AI-generated content must be clearly labeled.
Reddit didn't respond to a request for comment.
— CNBC's Jonathan Vanian contributed to this report.