Online disinformation is becoming a lucrative business opportunity for former Meta employees following the outbreak of two wars.

  • The surge in online disinformation during the Israel-Hamas war has increased the demand for trust and safety technology.
  • New startups founded by former Meta and Google engineers are selling content moderation technology to online platforms.
  • Since top tech companies cut their trust and safety teams earlier this year, these startups have seen a surge of potential recruits.
People using their mobile phones outside the offices of Meta, the parent company of Facebook and Instagram, in King’s Cross, London. (Joshua Bratt | PA Images | Getty Images)

Lauren Wagner, who worked at Meta during the 2020 U.S. presidential election, focused on information integrity and oversaw products designed to ensure content was moderated and fact-checked.

Since Wagner left Facebook parent Meta last year, her work in trust and safety feels like it was from a prior era. She can barely comprehend the constant deluge of misinformation and violent content that has spread across the internet since the war erupted last month between Israel and Hamas.

“How do you manage the overwhelming volume of visual content, such as long video clips and multiple points of view, when you’re in a situation where there’s live-streaming terrorism?” Wagner asked.

The issue has intensified as Meta, Google parent Alphabet, and X, formerly Twitter, have cut jobs related to content moderation and trust and safety as part of broader cost-cutting measures. As a result, these platforms are struggling to keep up with the increasing number of out-of-context videos, fabricated audio in news clips, and graphic videos of terrorist acts being shared by users.

Wagner, the founder of Radium Ventures, is currently raising her first fund, which invests in startups developing trust and safety technologies. She said many platforms previously considered "innocuous" are now recognizing the need to take action.

Wherever user-generated content is hosted, there's a chance for misinformation or charged, damaging material to spread, Wagner pointed out.

The Israel-Hamas war has pushed internet platforms to take precautionary measures, even those not typically known for hosting political discussions. Popular online messaging and discussion channels such as Discord and Telegram could be exploited by terrorist groups and other bad actors, who increasingly spread their propaganda campaigns across multiple communication services.

Neither Discord nor Telegram responded to requests for comment.

Recently, thousands of users on the kids gaming site Roblox participated in pro-Palestinian protests within its virtual world, prompting the company to closely monitor for posts that violate its community standards, a Roblox spokesperson told CNBC.

The spokesperson stated that Roblox has thousands of moderators and automated detection tools to monitor the site, allowing for expressions of solidarity but not content that endorses or promotes violence, terrorism, hatred against individuals or groups, or calls for supporting a specific political party.

In the trust and safety field, there is no shortage of talent. Many of Wagner's former colleagues at Meta, who lost their jobs, remain committed to the cause.

Cove, founded by former Meta trust and safety staffers, was one of her first investments. This startup is among a few emerging companies developing technology to sell to organizations, following an established enterprise software model. Other Meta veterans have recently started Cinder and Sero AI to target the same general market.

The new crop of trust and safety tools, according to Wagner, a senior advisor at the Responsible Innovation Labs nonprofit, adds more coherence to the information ecosystem by providing standardized processes across companies that enable them to manage user-generated content effectively.

‘Brilliant people out there’

It’s not just ex-Meta staffers who recognize the opportunity.

The founding team of TrustLab includes individuals who previously worked at Google, Reddit, and ByteDance, while the founders of Intrinsic have experience in trust and safety-related issues at Discord and other companies.

In July, San Francisco hosted the TrustCon conference, where tech policy experts and industry professionals gathered to discuss the latest trends in online trust and safety, with a focus on the potential societal impacts of layoffs in the tech industry.

Numerous startups showcased their products in the exhibition hall, pitching their services, engaging with potential clients and recruiting talent. Among them were ActiveFence, which sells trust and safety solutions to safeguard online platforms and their users from harmful behavior and content, and Checkstep, a content moderation platform.

Cove also had an exhibit at the event.

Michael Dworsky, CEO of Cove, who founded the company in 2021 after three years at Facebook, stated that the cost-cutting measures have clearly impacted the labor and hiring markets. He emphasized that there are many talented individuals available for hire.

Cove has built a software platform to manage a company's content policy and review process. The platform integrates with various content moderation systems, or classifiers, to detect issues such as harassment, letting businesses protect their users without needing expensive engineering teams to write the code themselves. The company, which counts the anonymous social media apps Yik Yak and Sidechat as customers, states on its website that Cove is the solution its founders wish they had at Meta.
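The pattern described above, in which a moderation platform routes content through pluggable classifiers and then applies per-policy thresholds, can be sketched roughly as follows. This is an illustrative outline only; all names and thresholds are assumptions for the example, not Cove's actual API.

```python
# Hypothetical sketch of a classifier-based moderation pipeline.
# A "classifier" is any function from text to a labeled score; real
# systems would call ML models instead of the toy keyword check here.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    label: str      # harm category, e.g. "harassment"
    score: float    # classifier confidence, 0..1

Classifier = Callable[[str], Verdict]

def keyword_harassment_classifier(text: str) -> Verdict:
    # Toy stand-in for a real harassment model.
    hits = sum(w in text.lower() for w in ("idiot", "loser"))
    return Verdict("harassment", min(1.0, hits * 0.6))

def moderate(text: str, classifiers: list[Classifier],
             thresholds: dict[str, float]) -> str:
    """Return 'remove', 'review', or 'allow' per policy thresholds."""
    for classify in classifiers:
        v = classify(text)
        limit = thresholds.get(v.label, 1.0)
        if v.score >= limit:
            return "remove"
        if v.score >= limit * 0.5:
            return "review"   # route to a human moderator queue
    return "allow"

decision = moderate("you absolute idiot",
                    [keyword_harassment_classifier],
                    {"harassment": 0.5})
print(decision)  # "remove"
```

The appeal of this design for a platform buyer is that policy (the thresholds) stays separate from detection (the classifiers), so a company can tune rules without rebuilding models.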

According to Mason Silber, Cove's chief technology officer, when Facebook began investing heavily in trust and safety, there were no existing tools on the market that it could have purchased. The company built its own out of necessity rather than desire, and in the process created some of the most robust and trusted safety solutions in the world.

A Meta spokesperson declined to comment for this story.


Wagner, who left Meta in mid-2022 after nearly two and a half years at the company, said content moderation used to be more manageable than it is now, particularly amid the ongoing Middle East crisis. In the past, she explained, a trust and safety team member could analyze a picture and determine whether it contained false information with a relatively straightforward scan.

The increasing quantity and speed of photo and video uploads, along with the growing ability to manipulate details through generative AI tools, has resulted in a new set of challenges.

Two ongoing wars, one in the Middle East and another between Russia and Ukraine, are flooding social media platforms with related content. The sites must also prepare for the 2024 U.S. presidential election, less than a year away. Former President Donald Trump, who faces criminal charges in Georgia for allegedly interfering in the 2020 election, is currently leading the race to become the Republican nominee.

Business process services, a sector that encompasses IT-related task outsourcing and call center services, is seeing rapid growth in its trust and safety segment, according to Manu Aggarwal, a partner at research firm Everest Group.

The overall business process services market is projected to reach $300 billion by 2024, with trust and safety accounting for approximately $11 billion of that total. Companies such as Infosys and Genpact, which provide outsourced trust and safety services and contract workers, currently dominate the market. This is largely due to Big Tech companies' practice of building their own tools, according to Aggarwal.

According to Everest Group practice director Abhijnan Dasgupta, spending on trust and safety tools could reach between $750 million and $1 billion by the end of 2024, up from $500 million in 2023. This estimate is influenced by the adoption of AI services, which may necessitate compliance with emerging AI regulations.

Tech investors are showing interest in the opportunity. Venture capital firm Accel is the lead investor in Cinder, a two-year-old startup whose founders helped build Meta's internal trust and safety systems and worked on its counterterrorism efforts.

Accel's Sara Ittelson said in a press release announcing Cinder's financing in December that the team that played a major role in defining Facebook's trust and safety operations is the best one to solve this challenge. She also expects the trust and safety technology market to grow as more platforms see the need for greater protection and as the social media market continues to fragment.

New content policy regulations have also spurred investment in the area.

Under the European Union's Digital Services Act, large online platforms must now document and detail their methods for moderating and removing illegal and violent content, or face fines of up to 6% of their annual revenue. Cinder and Cove are promoting their technologies as ways to help businesses streamline their content moderation procedures and comply with the new rules.

‘Frankenstein’s monster’

Dworsky of Cove said many companies have tried to customize Zendesk, a customer support tool, and Google Sheets to run their trust and safety operations. That approach can be time-consuming and inefficient, he explained, resulting in a "very manual, unscalable process" that he likened to building a "Frankenstein's monster."

Industry experts know that even the most advanced trust and safety technologies are no cure for the widespread problem of violent content and disinformation. A recent Anti-Defamation League survey found that 70% of respondents had encountered at least one type of misinformation or hate related to the Israel-Hamas conflict on social media.

The ongoing issue of determining the boundary between free speech and unlawful or unacceptable content is becoming increasingly challenging for companies as the problem grows.

In addition to maintaining integrity on their sites, companies should be truthful about their content moderation efforts, according to Alex Goldenberg, lead intelligence analyst at the Network Contagion Research Institute.

Striking that balance is challenging but achievable, he said, recommending that social platforms offer transparency, particularly where third-party access to and understanding of large-scale activity are necessary.


Last year, Noam Bardin, the former CEO of Waze, founded the social news-sharing and real-time messaging service Post. Bardin, who's from Israel, said he's been frustrated with the spread of misinformation and disinformation since the war began in October.

Social media shapes the way we perceive events, he said, leading to an overwhelming influx of propaganda, disinformation and AI-generated content that blurs the lines between the conflicts.

Meta and X have faced challenges in managing and removing questionable posts, particularly with the increase in video content.

Since the company's inception, Bardin has relied on a variety of moderation tools, automated systems and processes at Post, which is similar to Twitter. He uses services from ActiveFence and OpenWeb, both of which are based in Israel.

Bardin explained that Post's trust and safety software analyzes every comment and post with AI to gauge the content's potential harm, spanning categories from pornography to violence.
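The AI screening Bardin describes, scoring each post per harm category before it appears, might look roughly like this minimal sketch. The category names, terms, and threshold are illustrative assumptions, not the actual API of ActiveFence or OpenWeb.

```python
# Illustrative per-category harm scoring for user posts.
# A real system would call an ML classification service; the keyword
# lookup here is a toy stand-in to show the control flow.

HARM_TERMS = {
    "violence": ("kill", "attack"),
    "pornography": ("nsfw",),
}

def harm_scores(post: str) -> dict[str, float]:
    """Score a post in each harm category on a 0..1 scale."""
    text = post.lower()
    return {cat: min(1.0, sum(t in text for t in terms) * 0.7)
            for cat, terms in HARM_TERMS.items()}

def screen(post: str, threshold: float = 0.5) -> bool:
    """True if the post may be published, False if held for review."""
    return max(harm_scores(post).values()) < threshold

print(screen("lovely sunset today"))   # True
print(screen("i will attack you"))     # False
```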

Video game sites, online marketplaces, dating apps, and music streaming sites have seen the emergence of active online communities with live-chatting services, which may expose them to harmful content from users.

Cinder co-founder Brian Fishman said militant groups use a variety of services to disseminate propaganda, including Telegram and platforms with less sophisticated technology than Facebook's, such as Rumble and Vimeo.

Representatives from Rumble and Vimeo didn’t respond to requests for comment.

Fishman stated that customers are increasingly viewing trust and safety tools as an integral part of their cybersecurity budgets.

Like insurance, Fishman said, some of that investment doesn't deliver a full return every day; instead, companies pay in ahead of difficult times so they have the necessary capabilities when they truly need them.

WATCH: Lawmakers ask social media and AI companies to crack down on misinformation

by Jonathan Vanian
