ChatGPT rejected over 250,000 requests to generate images of presidential candidates ahead of Election Day.
In the run-up to Election Day, OpenAI reported that ChatGPT turned down more than 250,000 requests to generate images of the 2024 U.S. presidential candidates.
OpenAI rejected image-generation requests for President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz, and Vice President-elect JD Vance.
The growing use of generative artificial intelligence has raised concerns about how misinformation produced with the technology could affect the many elections taking place around the world in 2024.
According to Clarity, a machine learning firm, the number of deepfakes has increased 900% year over year. Some of these videos were created or funded by Russians in an effort to disrupt the U.S. elections, according to U.S. intelligence officials.
In a 54-page report published in October, OpenAI said it had disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." The threats ranged from AI-generated website articles to social media posts by fake accounts. None of the election-related operations managed to attract "viral engagement," the report noted.
In a blog post on Friday, OpenAI said it had seen no evidence that covert operations attempting to influence the outcome of the U.S. election using its products were able to go viral or build a "sustained audience."
Since ChatGPT emerged in late 2022, lawmakers have grown increasingly worried about the spread of misinformation generated by large language models. Though the technology is still relatively new, these models are known to frequently produce inaccurate and unreliable information.
In a CNBC interview last week, Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, advised voters not to rely on AI chatbots for voting information, citing concerns about the accuracy and completeness of their answers.
Under a second Trump presidency, AI is likely to face lighter regulation and could become more volatile.