Over 250,000 presidential candidate image generations were rejected by ChatGPT before Election Day.

  • In the run-up to Election Day, OpenAI reported that ChatGPT turned down over 250,000 requests for generating images of the 2024 U.S. presidential candidates.
  • OpenAI rejected image-generation requests for President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz, and Vice President-elect JD Vance.
  • The increasing use of generative artificial intelligence has sparked worries about the potential impact of misinformation generated through the technology on upcoming global elections in 2024.


According to Clarity, a machine learning firm, the number of deepfakes has increased 900% year over year. U.S. intelligence officials say some of these videos were created or funded by Russian actors intent on disrupting the U.S. election.

In a 54-page report released in October, OpenAI said it had disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." The threats included AI-generated website articles and social media posts from fake accounts. Despite those attempts, none of the election-related operations managed to attract "viral engagement," the report noted.

In a blog post Friday, OpenAI said it had found no evidence of covert operations using the company's products to influence the outcome of the U.S. election, go viral or build a "sustained audience."

Since ChatGPT's debut in late 2022, lawmakers have grown increasingly worried about misinformation spread by large language models, which, despite their rapid adoption, are known to frequently produce inaccurate and unreliable information.

In a CNBC interview last week, Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, advised voters not to rely on AI chatbots for voting information, citing concerns about accuracy and completeness.

Under a second Trump presidency, AI is likely to be less regulated and more volatile.

by Hayden Field
