Aleksander Madry, a top AI safety executive at OpenAI, has been reassigned to a new role focused on AI reasoning.

  • One of OpenAI's top safety executives, Aleksander Madry, was removed from his role last week and reassigned to a job focused on AI reasoning, sources confirmed to CNBC.
  • Madry served as OpenAI's head of preparedness, responsible for monitoring, assessing, predicting, and safeguarding against potential hazards associated with advanced AI systems, as stated in his bio.
  • A group of Democratic senators wrote to OpenAI CEO Sam Altman about safety concerns just days after the decision was made.

One of OpenAI's top safety executives, Aleksander Madry, was removed from his role last week and reassigned to a job focused on AI reasoning, sources confirmed to CNBC.

Madry served as OpenAI's head of preparedness, responsible for monitoring, assessing, predicting, and safeguarding against potential hazards associated with advanced AI systems, as stated in his bio.

OpenAI informed CNBC that Madry will continue to focus on core AI safety work in his new position.

According to MIT's website, Madry is currently on leave from his roles as director of MIT's Center for Deployable Machine Learning and faculty co-lead of the MIT AI Policy Forum.

Less than a week after the decision to reassign Madry, a group of Democratic senators sent a letter to OpenAI CEO Sam Altman about safety concerns.

The letter, sent Monday and viewed by CNBC, requested additional information from OpenAI regarding the measures the company is taking to fulfill its public safety commitments, the internal evaluation of its progress on those commitments, and the identification and mitigation of cybersecurity threats.

OpenAI did not immediately respond to a request for comment.

Lawmakers requested that OpenAI provide a series of answers regarding its safety protocols and financial obligations by August 13.

OpenAI has faced a growing number of safety concerns and controversies this summer. The company, along with Google, Microsoft, and Meta, is at the forefront of the generative AI arms race, a market predicted to reach $1 trillion in revenue within a decade, and one that has driven companies in every industry to adopt AI-powered chatbots and agents to stay competitive.

Microsoft gave up its observer seat on OpenAI's board, saying it was satisfied with the changes made to the startup's board over the past eight months, following the upheaval that led to the brief ouster of CEO Sam Altman and threatened Microsoft's significant investment in the company.

Last month, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight, and the absence of whistleblower protections for those who wish to speak up.

The employees wrote that AI companies have financial incentives to evade effective oversight, and that they do not believe bespoke corporate governance structures are sufficient to change this.

A source close to the matter told CNBC that the FTC and DOJ were set to open antitrust probes into OpenAI, Microsoft, and Nvidia, focusing on the companies' conduct.

Lina Khan, the FTC Chair, has characterized the agency's investigation as a "market inquiry into the partnerships and investments being formed between AI developers and major cloud service providers."

The June letter written by both current and former employees stated that AI companies possess "significant confidential information" regarding their technology's capabilities, the safety precautions implemented, and the potential risks associated with its use for various forms of harm.

"The companies have weak obligations to share information with governments and none with civil society, and we do not trust them to share it voluntarily, as there are serious risks associated with these technologies."

In May, OpenAI disbanded its team focused on the long-term risks of AI, just one year after it announced the group, a source confirmed to CNBC.

Some team members are being reassigned to other teams within the company, according to a person who spoke on condition of anonymity.

The team's leaders, Ilya Sutskever and Jan Leike, announced their departures from OpenAI in May, with Leike writing on X that the startup's "safety culture and processes have taken a backseat to shiny products."

At the time, CEO Sam Altman expressed sadness upon Leike's departure and acknowledged that the company still had work to do. Following this, OpenAI co-founder Greg Brockman posted a statement attributed to both Brockman and Altman on X, stating that the company had raised awareness of the risks and opportunities of AGI to better prepare the world for it.

"I joined OpenAI because I believed it was the best place in the world to conduct research. However, I have been at odds with the company's leadership over its core priorities for some time, until we reached a breaking point."

The company, he argued, should devote more of its bandwidth to security, monitoring, preparedness, safety, and societal impact.

"These problems are quite challenging to solve, and I am worried that we are not on track to achieve the desired outcome," he wrote. "Over the past few months, my team has been facing numerous obstacles while working on our research. We often struggled for computing resources, which made it increasingly difficult to make progress."

Leike added that OpenAI must become a "safety-first AGI company."

"At the time, he wrote that creating machines more intelligent than humans is inherently risky. He argued that OpenAI bears a significant responsibility for all of humanity. However, safety considerations have been neglected in favor of developing impressive products over the past few years."

The Information first reported Madry's reassignment.

by Hayden Field
