The U.S. AI Safety Institute will test and evaluate new models created by OpenAI and Anthropic.

  • On Thursday, the U.S. AI Safety Institute reached a testing and evaluation agreement with OpenAI and Anthropic.
  • The agreement gives the institute access to major new models from each company both before and after their public release.
  • As the AI industry becomes increasingly commercialized, some developers and researchers are raising safety and ethics concerns.

OpenAI and Anthropic, the two most richly valued AI startups, have agreed to let the U.S. AI Safety Institute evaluate their new models before public release, a move that comes amid growing industry concerns about safety and ethics in AI.

The institute, housed within the Commerce Department's National Institute of Standards and Technology (NIST), announced that it will obtain "access to major new models from each company before and after their public release."

The institute was established after the Biden-Harris administration issued the U.S. government's first-ever executive order on artificial intelligence in October 2023, which mandated new safety assessments, equity and civil rights guidance, and research on AI's impact on the labor market.

OpenAI CEO Sam Altman announced in a post on X that the company had reached an agreement with the U.S. AI Safety Institute for pre-release testing of its future models. Separately, OpenAI confirmed to CNBC on Thursday that its weekly active users have doubled to 200 million, a figure first reported by Axios.

Reports indicate that OpenAI is in discussions to raise a funding round that could value the company at over $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a source who requested anonymity due to the confidential nature of the information.

Anthropic, founded by ex-OpenAI research executives and employees, was most recently valued at $18.4 billion. Amazon is a leading investor in Anthropic, while OpenAI is heavily backed by Microsoft.

According to a release published Thursday, the collaborative research among the government, OpenAI, and Anthropic will focus on evaluating capabilities and safety risks, as well as developing methods to mitigate those risks.

Jason Kwon, OpenAI's chief strategy officer, stated that the company strongly supports the U.S. AI Safety Institute's mission and is eager to collaborate with the organization to establish safety best practices and standards for AI models.

Jack Clark, co-founder of Anthropic, said the company's collaboration with the U.S. AI Safety Institute lets it rigorously test its models before widespread deployment, which in turn strengthens its ability to identify and mitigate risks, advancing responsible AI development.

AI developers and researchers have raised concerns about safety and ethics in the increasingly profit-driven AI industry. An open letter published on June 4 by current and former OpenAI employees highlighted potential problems with the rapid advancement of AI and the lack of oversight and whistleblower protections.

"The authors argued that AI companies have strong financial motivations to evade effective oversight, and bespoke structures of corporate governance are not enough to change this. They pointed out that AI companies currently have minimal obligations to share information with governments and none with civil society, and they cannot be trusted to share it voluntarily."

The FTC and the Department of Justice are reportedly set to open antitrust investigations into OpenAI, Microsoft, and Nvidia, according to a source familiar with the matter. FTC Chair Lina Khan has described the agency's action as a "market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers."

On Wednesday, the California legislature passed a contentious AI safety bill, sending it to Governor Gavin Newsom, who has until September 30 to veto it or sign it into law. The bill, which mandates safety testing and other safeguards for AI models of a certain cost or computing power, has drawn opposition from some tech companies, who argue it could hinder innovation.

WATCH: Google, OpenAI and others oppose California AI safety bill
