OpenAI announces independent board oversight committee focused on safety

  • OpenAI's Safety and Security Committee will become an independent board oversight committee.
  • Zico Kolter, director of the machine learning department at Carnegie Mellon University, will chair the group.
  • According to sources, OpenAI is currently seeking funding with a valuation of over $150 billion.

On Monday, OpenAI announced that its Safety and Security Committee, which was established in May amid controversy over the company's security procedures, will now function as an independent board oversight committee.

Zico Kolter, director of the machine learning department at Carnegie Mellon University's School of Computer Science, will chair the group. Other members include Adam D'Angelo, an OpenAI board member and co-founder of Quora; Paul Nakasone, former NSA chief and OpenAI board member; and Nicole Seligman, former executive vice president at Sony.

The committee recently completed a 90-day review of OpenAI's processes and safeguards and made recommendations to the board; OpenAI released those findings in a public blog post.

OpenAI, the startup behind ChatGPT and SearchGPT, is reportedly seeking funding at a valuation of more than $150 billion. Thrive Capital is leading the round and plans to invest $1 billion, while Tiger Global is also weighing participation. Microsoft, Nvidia, and Apple are reportedly in talks to invest as well.

The committee's five recommendations on safety and security were: establishing independent governance, enhancing security measures, increasing transparency, collaborating with external organizations, and unifying the company's safety frameworks.

OpenAI released o1, a preview version of its new AI model designed for reasoning and solving complex problems. The company stated that the committee evaluated the safety and security standards used to determine o1's suitability for launch, as well as the results of the safety assessment.

The committee, in conjunction with the full board, will have the authority to delay model launches until safety concerns are resolved.

Since launching ChatGPT in late 2022, OpenAI has experienced rapid growth alongside controversy and high-level employee departures. Some current and former employees have voiced concern that the company is expanding too quickly to operate safely.

In July, Democratic senators sent OpenAI CEO Sam Altman a letter raising safety questions, and in June a group of current and former employees published an open letter describing a lack of oversight and whistleblower protections.

In November, a former OpenAI board member claimed that Altman provided the board with inaccurate information about the safety measures in place at the company.

OpenAI disbanded its team focused on long-term AI risks just a year after announcing it; its leaders, Ilya Sutskever and Jan Leike, left in May. Leike wrote in a post on X that OpenAI's safety culture and processes had been neglected in favor of product development.

by Hayden Field
