OpenAI CEO Sam Altman says ChatGPT doesn't need New York Times data amid ongoing lawsuit.
- Before news of the lawsuit broke, OpenAI had been in productive talks with The New York Times, according to Altman.
- OpenAI had aimed to pay the outlet a significant sum to display its content in ChatGPT, he said.
- Last year, The Times sued Microsoft and OpenAI for copyright infringement, alleging that their use of the publication's articles as training data was unlawful.
- OpenAI disputes the Times' accusations, saying that "regurgitation," the repetition of entire "memorized" sections of specific articles, is a rare problem it is actively working to eliminate.
The New York Times' copyright lawsuit caught OpenAI CEO Sam Altman by surprise, he said, adding that the company's artificial intelligence models don't need to be trained on the publisher's data.
OpenAI had been in productive negotiations with the Times before news of the lawsuit came out, according to Altman. He said OpenAI wanted to pay the outlet "a lot of money to display their content" in ChatGPT, the firm's popular AI chatbot.
The OpenAI chief said he, like everyone else, learned of the suit by reading about it in The New York Times, calling it a strange way to find out. He spoke on stage at the World Economic Forum in Davos on Thursday.
Altman said OpenAI isn't overly concerned about the Times' lawsuit, and that training on the publisher's data isn't a priority for the company.
"Training AI on The New York Times is something we're open to, but it's not our top priority," Altman stated to a packed audience at Davos.
He said OpenAI doesn't need to train on the Times' data, because no single source significantly affects the models' performance.
Last year, the newspaper sued both Microsoft and OpenAI for alleged copyright infringement, claiming the companies used its articles as training data for their AI models.
The Times is seeking "billions of dollars in statutory and actual damages" from Microsoft and OpenAI over the "unlawful copying and use of The Times's uniquely valuable works."
The Times presented evidence that ChatGPT produced near-identical versions of its articles; OpenAI has disputed these claims.
Ian Crosby, a partner at Susman Godfrey and lead counsel for The New York Times, said Altman's comments about the lawsuit amount to an acknowledgment that OpenAI used copyrighted content to train its models, effectively profiting from the Times' investment in journalism.
Crosby told CNBC by email on Thursday that OpenAI has admitted it used the Times' copyrighted works to train its models in the past and will continue to do so as it scrapes the internet.
He called that practice “the opposite of fair use.”
The Times' lawsuit has raised the prospect that other media publishers could pursue similar claims, prompting some outlets to partner with OpenAI through licensing agreements rather than fight in court. Axel Springer, for example, has a deal with OpenAI licensing its content.
In response to the Times' lawsuit, OpenAI said instances of "regurgitation," in which a model repeats specific content or articles word-for-word, are rare and that it is working to eliminate the problem.
The AI developer said it collaborates with news organizations to create new revenue and monetization opportunities, maintains that training on such material is fair use, and nonetheless offers publishers an opt-out.
Altman struck a similar note at a Bloomberg event in Davos earlier this week: he said he was not concerned about the Times' lawsuit, denied the publisher's allegations, and predicted there would be numerous ways to monetize news content in the future.
Altman said that while there are downsides to people behaving like this, the upside is that innovative ways to consume and profit from news and other published content are likely to emerge. For every New York Times situation, he said, there are many more productive people who are enthusiastic about building the future and avoiding theatrics.
Altman said OpenAI can tweak its GPT models to prevent them from repeating stories or features found online word-for-word.
"We don't want to simply replicate someone else's content," he said. "However, the issue is not as straightforward as it may seem in a vacuum. I believe we can significantly reduce that number, bringing it to a very low level. And that seems like a reasonable metric to assess our performance."