In the U.K., far-right violence was triggered by the spread of online misinformation.

  • False information about the attacker's name and background was spread online shortly after the knife attack that resulted in the deaths of three young girls.
  • The day after the claims emerged, police debunked them, stating the suspect was born in Britain, but the narrative had already gained traction.
  • Days of riots ensued across the U.K. as far-right groups organized protests against migrants and Islam.

When three young girls were killed in a knife attack in Southport, U.K., in July, false claims quickly emerged on social media.

Within hours, false information about the attacker's name, religion and migration status had spread widely, fueling days of violent riots across the U.K.

The perpetrator was falsely named as 'Ali al-Shakati,' a Muslim migrant, in claims that circulated on LinkedIn and X. By 3 p.m. the next day, the false name had garnered over 30,000 mentions on X alone, Hannah Rose, a hate and extremism analyst at the Institute for Strategic Dialogue (ISD), told CNBC in an email.

The ISD's analysis found that social media posts also falsely claimed the attacker was on an intelligence services watchlist, had arrived in the U.K. on a small boat in 2023, and was known to local mental health services.

The day after the claims emerged, police debunked them, stating the suspect was born in Britain, but the narrative had already gained traction.

Disinformation fueled biases and prejudice

The false claims gained traction because they aligned closely with the rhetoric of the U.K.'s anti-migration movement, according to Joe Ondrak, research and tech lead for the U.K. at tech company Logically, which develops artificial intelligence tools to combat misinformation.

"It's catnip to them really," he told CNBC via video call. "Saying the exact right thing can provoke a much angrier reaction than there likely would have been if disinformation wasn't circulated."

Anti-migrant and anti-Islam protests by far-right groups led to riots in the U.K., with attacks on mosques, immigration centers, and hotels housing asylum seekers.

Ondrak said false reports tend to flourish during emotionally charged moments because they tap into pre-existing biases and prejudice.

"Instead of being a case of a false claim being widely believed, the reports are used to justify and perpetuate pre-existing prejudice and bias before any truth can be established."

It didn't really matter whether the claims were accurate, he added.

Right-wing protesters argue that the high number of migrants in the U.K. leads to an increase in crime and violence, claims that migrant rights groups dispute.

The spread of disinformation online

Social media played a key role in spreading the disinformation, both through algorithms and through large accounts, according to ISD's Rose.

She explained that accounts with hundreds of thousands of followers, including paid-for blue-tick accounts on X, shared the false information, which the platform's algorithms then pushed to other users.

Rose said that when users searched 'Southport' on TikTok, the attacker's false name was promoted in the 'Others Searched For' section for eight hours after police had confirmed the information was incorrect.

ISD's analysis found that the attacker's incorrect name was also highlighted as a trending topic on other platforms.

Elon Musk's controversial comments about the riots drew criticism from the U.K. government, with the country's courts minister urging him to "behave responsibly."

TikTok and X did not immediately respond to CNBC's request for comment.

False claims also spread on Telegram, Ondrak said, where the platform serves to unite disparate narratives and expose more people to "more extreme views."

Many of these claims were channeled through what Ondrak called the "post-Covid milieu" of Telegram: channels that began as anti-vaxx communities and were later co-opted by far-right figures promoting anti-migrant topics.

Telegram denied helping to disseminate false information, saying its moderators were actively monitoring the situation and removing channels and posts calling for violence, which violate its terms of service.

According to analysis by Logically, some of the accounts calling for participation in the protests were linked to the extreme right, including some associated with National Action, a right-wing extremist group banned and designated a terrorist organization in 2016 under the U.K.'s Terrorism Act.

Ondrak observed that several groups that had previously spread false information about the attack were now walking back their claims, saying the story was a hoax.

On Wednesday, thousands of people gathered in cities and towns across the U.K. to protest against racism, outnumbering the anti-immigration protests of recent days.

Content moderation?

The U.K.'s Online Safety Act is intended to combat hate speech, but it does not take effect until early next year, and it may not be sufficient to guard against some types of misinformation.

On Wednesday, British media regulator Ofcom wrote to social media platforms urging them not to wait for the new rules to take effect before tackling harmful content, while the U.K. government called on the companies to do more.

Platforms have terms and conditions and community guidelines covering harmful content, and they enforce them to varying degrees.

Rose said that while companies have a responsibility to prevent hatred and violence on their platforms, they need to do more to enforce their own rules.

She noted that ISD had found content across multiple platforms that violated their terms of service yet remained online.

Riot police officers push back anti-migration protesters in Rotherham, U.K., on Aug. 4, 2024.

Henry Parker, VP of corporate affairs at Logically, said content moderation involves nuances across different platforms and jurisdictions: companies invest varying amounts in these efforts, and they face differing laws and regulations.

"There is a dual role to be played. Platforms must take responsibility, adhere to their own terms and conditions, and collaborate with third-party fact checkers," he stated.

"Government must be transparent about their expectations and the consequences of not meeting them, which we have not yet achieved."

by Sophie Kiderlin
