OpenAI disbands another safety team as its head advisor for 'AGI Readiness' resigns.
- OpenAI is disbanding its "AGI Readiness" team, which provided advice on the company's ability to handle advanced AI and the world's preparedness to manage the technology.
- Miles Brundage, the team's senior advisor, announced his departure from the company and said he believes his research will have a greater impact outside of the organization.
- In May, OpenAI disbanded its Superalignment team, which focused on the long-term risks of AI, just one year after announcing the group.
The "AGI Readiness" team at OpenAI is being disbanded, as announced by the team's head. This team provided advice to OpenAI on the company's ability to handle increasingly powerful AI and the world's readiness to manage that technology.
Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company in a Substack post. He wrote that his primary reasons were the rising opportunity cost of staying, the chance to have greater impact as an external researcher, a desire to be less biased, and the sense that he had accomplished what he set out to do at OpenAI.
Brundage said that neither OpenAI nor any other frontier lab is ready for AGI, and that the world is not ready either. He plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy, stressing that AI will not be as safe and beneficial as possible without a concerted effort to make it so.
According to the post, former AGI Readiness team members will be transferred to other teams.
"An OpenAI spokesperson stated to CNBC that they fully support Miles' decision to pursue his policy research outside industry and are deeply grateful for his contributions. His plan to focus solely on independent research on AI policy will allow him to have a broader impact, and they are excited to learn from his work and follow its impact. They are confident that in his new role, Miles will continue to set a high standard for policymaking in industry and government."
In May, OpenAI disbanded its Superalignment team, which focused on the long-term risks of AI, just one year after it was announced, a source close to the matter told CNBC.
The disbandment of the AGI Readiness team was announced after the OpenAI board considered restructuring the company as a for-profit business, and after three executives, CTO Mira Murati, research chief Bob McGrew, and research VP Barret Zoph, announced their departures on the same day last month.
In October, OpenAI raised $6.6 billion from a variety of investment firms and tech giants, valuing the company at $157 billion. Additionally, the company secured a $4 billion revolving line of credit, bringing its total liquidity to over $10 billion. Despite this, OpenAI is projected to lose about $5 billion on $3.7 billion in revenue this year, according to a source familiar with the matter.
In September, OpenAI announced that its Safety and Security Committee, established in May to address safety and security concerns, would become an independent board oversight committee. After conducting a 90-day review of OpenAI's processes and safeguards, the committee made recommendations to the board, which the company then released in a public blog post.
The executive departures and board changes follow a summer of mounting safety concerns and controversies at OpenAI, which is racing against Google, Meta, and others to develop generative AI. That market is predicted to reach $1 trillion in revenue within a decade, and companies in every industry are rushing to adopt AI-powered chatbots and agents to stay competitive.
Aleksander Madry, one of OpenAI's top safety executives, was reassigned to a job focused on AI reasoning in July, according to sources familiar with the situation.
Madry was the head of preparedness at OpenAI, a team tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models, according to a Princeton University AI initiative website. Madry will still work on core AI safety efforts in his new role, OpenAI told CNBC at the time.
OpenAI reassigned Madry at the same time that Democratic senators sent a letter to CEO Sam Altman about safety concerns. The letter, which CNBC viewed, requested additional information from OpenAI about its safety commitments, its internal evaluations of progress, and its measures for identifying and mitigating cybersecurity threats.
Microsoft gave up its observer seat on OpenAI's board in July, citing satisfaction with the revamped board's structure after the upheaval that led to Altman's brief ouster and threatened Microsoft's massive investment in the company.
In June, a group of current and former OpenAI employees published an open letter expressing concern about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
The employees wrote that AI companies have strong financial incentives to avoid effective oversight, and that they do not believe bespoke corporate governance structures are sufficient to change this.
A source close to the matter told CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust probes into OpenAI, Microsoft, and Nvidia, focusing on the companies' conduct.
Lina Khan, the FTC Chair, has characterized the agency's investigation as a "market inquiry into the partnerships and investments being formed between AI developers and major cloud service providers."
The June letter from current and former employees also said that AI companies possess significant non-public information about their technology's capabilities, the extent of their safety measures, and the risk levels of different kinds of harm.
"The companies currently have weak obligations to share some information with governments and none with civil society, and we do not believe they can all be relied upon to share it voluntarily, as these technologies pose serious risks."
OpenAI's Superalignment team, announced last year and disbanded in May, aimed to achieve scientific and technical breakthroughs for steering and controlling AI systems far smarter than humans. At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
The team's leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote on X that OpenAI's "safety culture and processes have taken a backseat to shiny products."
Altman said at the time that he was sad to see Leike go and that OpenAI had more work to do. Soon afterward, co-founder Greg Brockman posted a statement attributed to both himself and Altman, asserting that the company had raised awareness of the risks and opportunities of AGI so that the world could better prepare for it.
"I joined OpenAI because I believed it was the best place in the world to conduct research. However, I have been at odds with the company's leadership over its core priorities for some time, until we reached a breaking point."
Leike wrote that he believes the company should dedicate much more of its bandwidth to security, monitoring, preparedness, safety, and societal impact.
"These problems are quite challenging to solve, and I am worried that we are not on track to achieve the desired outcome," he wrote at the time. "Over the past few months, my team has been facing numerous obstacles while trying to complete the research. We often struggled for computing resources, which made it increasingly difficult to make progress."
Leike added that OpenAI must become a "safety-first AGI company."
"The development of machines that surpass human intelligence carries inherent risks, as stated in his writing on X. OpenAI bears a significant responsibility for humanity, but safety considerations have been overshadowed by the pursuit of impressive products in recent years."