Ex-OpenAI staff members caution about the dangers of AI and the need for greater supervision.

  • An open letter published Tuesday by a group of current and former OpenAI employees raised concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those willing to speak up.
  • In the letter, the employees wrote that AI companies have financial incentives to avoid effective oversight and that customized corporate governance structures are not enough to change this.
  • A $1 trillion market for generative AI is predicted to emerge within a decade, with companies such as OpenAI, Google, and Microsoft leading the charge.

An open letter published by a group of current and former OpenAI employees on Tuesday raised concerns about the rapid advancement of the artificial intelligence industry, which they say is happening without proper oversight and without adequate protections for whistleblowers.

In the letter, the employees wrote that AI companies have financial incentives to avoid effective oversight, and that they do not believe customized corporate governance structures are sufficient to change this.

The generative AI market is predicted to reach $1 trillion in revenue within a decade, and companies in various industries are racing to add AI-powered chatbots and agents to their offerings to stay competitive.

According to the letter, current and former employees of AI companies hold "substantial non-public information" about the capabilities of their companies' technology, the safety measures that have been put in place, and the technology's potential to cause various kinds of harm.

"The companies have weak obligations to share information with governments and none with civil society, and we do not trust them to share it voluntarily, as there are serious risks associated with these technologies."

The letter highlights the concerns of current and former employees in the AI industry regarding insufficient whistleblower protections. They argue that without effective government oversight, employees are uniquely positioned to hold companies accountable.

"The signatories stated that broad confidentiality agreements prevent them from expressing their concerns, except to the companies that may not be addressing these issues. They argued that standard whistleblower protections are inadequate because they primarily focus on illegal activities, while many of the risks they are concerned about are not yet regulated."

The letter requests that AI companies pledge not to implement or enforce non-disparagement agreements; establish confidential procedures for current and former employees to express grievances to the board, regulatory bodies, and other stakeholders; foster a culture of constructive criticism; and refrain from retaliating against public whistleblowers if internal reporting mechanisms fall short.

The letter was signed by four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, and Daniel Ziegler. Additionally, signatories included Ramana Kumar, who previously worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three renowned computer scientists who have made significant contributions to the field of artificial intelligence also endorsed the letter: Geoffrey Hinton, Yoshua Bengio, and Stuart Russell.

An OpenAI spokesperson told CNBC that the company recognizes the importance of rigorous debate about the significance of this technology and will continue to engage with governments, civil society, and other communities worldwide. The spokesperson added that OpenAI maintains an anonymous integrity hotline and a Safety and Security Committee led by members of the board and OpenAI leaders.

Microsoft declined to comment.

Mounting controversy for OpenAI

OpenAI backtracked on a decision that required former employees to choose between signing a non-disparagement agreement that would never expire and keeping their vested equity in the company, according to an internal memo, obtained by CNBC, that was sent to former and current employees.

The memo, sent to each former employee, noted that at the time of their departure from OpenAI, they had been required to sign a general release agreement containing a non-disparagement provision in order to retain their vested units.

At the time, an OpenAI spokesperson told CNBC the company was sorry it had waited so long to change the language, which did not reflect its values or the company it aspired to be.

Last month, OpenAI disbanded its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC.

Some team members are being reassigned to multiple other teams within the company, according to a person who spoke on condition of anonymity.

The team was disbanded following the departures of OpenAI co-founder Ilya Sutskever and Jan Leike, who co-led the group, from the startup last month. In a post on X, Leike wrote that the company's "safety culture and processes" had been neglected in favor of "shiny products."

On X, CEO Sam Altman said he was sad to see Leike go and that the company still had more work to do. Soon afterward, OpenAI co-founder Greg Brockman posted a statement on X, attributed to both himself and Altman, asserting that the company had raised awareness of the risks and opportunities of AGI so that the world could better prepare for it.

"I joined OpenAI because I believed it was the ideal location for conducting this research. However, I have been at odds with the company's leadership over its core priorities for some time, culminating in a breaking point."

Leike wrote that he believes the company should devote much more of its bandwidth to security, monitoring, preparedness, safety, and societal impact.

"These problems are quite challenging to solve, and I am worried that we are not on track to achieve the desired outcome," he wrote. "Over the past few months, my team has been facing numerous obstacles while working on our research. We often struggled for computing resources, which made it increasingly difficult to make progress."

Leike added that OpenAI must become a "safety-first AGI company."

"OpenAI is taking on a massive responsibility for humanity, but safety has been neglected in favor of product development."

The high-profile departures come after OpenAI weathered a leadership crisis involving Altman.

In November, OpenAI's board ousted Altman, saying he had not been consistently candid in his communications with the board.

Sutskever focused on ensuring that AI would not harm humans, while others, including Altman, were more eager to push ahead with new technology, The Wall Street Journal and other outlets reported.

Altman's removal from OpenAI led to a wave of resignations and threats of resignations, including an open letter signed by almost all employees. There was also uproar from investors, including Microsoft. Within a week, Altman was reinstated at the company, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever, who had voted to remove Altman, were ousted. Sutskever remained on staff but no longer as a board member. Adam D'Angelo, who had also voted to remove Altman, remained on the board.

OpenAI recently released a new AI model and a desktop version of ChatGPT, along with an updated user interface and audio capabilities. One week after debuting its range of audio voices, the company announced it would pull one of the viral chatbot's voices, named "Sky."

The "Sky" voice sparked controversy because it sounded similar to Scarlett Johansson, who voiced an AI assistant in the film "Her." Some accused OpenAI of copying Johansson's voice even though she had declined to grant the company permission to use it.

by Hayden Field
