OpenAI Whistleblowers Describe Reckless and Secretive Culture
What specific concerns have the OpenAI insiders raised about the company's approach to developing artificial intelligence?

The OpenAI insiders have raised several specific concerns about the company's approach to developing artificial intelligence. These concerns include:
- Culture of recklessness and secrecy: The insiders claim that OpenAI has a culture that does not prioritize safety and transparency when it comes to the development of their AI systems.
- Lack of priority on safety: The insiders believe that OpenAI is prioritizing profits and growth over ensuring that their AI systems are safe and do not pose a threat to humanity.
- Inadequate protection for whistleblowers: The group alleges that OpenAI has used aggressive tactics, such as restrictive nondisparagement agreements, to prevent employees from voicing their concerns about the potential dangers of the technology.
- Rushing to build powerful AI systems: The insiders are concerned that OpenAI is rushing to build the most powerful AI systems ever created, potentially neglecting crucial safety measures in the process.
These concerns highlight the need for increased transparency, accountability, and ethical considerations in the development of artificial intelligence.
How many current and former OpenAI employees are involved in voicing these concerns?

Nine current and former OpenAI employees are involved in voicing concerns about the company's approach to developing artificial intelligence systems.
What actions have the dissenting group of OpenAI employees taken to address their concerns?

The dissenting group of nine current and former OpenAI employees has taken several actions to address their concerns about the potential risks posed by the company's artificial intelligence systems. They have collectively signed an open letter calling for more transparency within AI companies and better protection for whistleblowers who wish to raise concerns about the power of AI. The letter outlines four key demands that they want OpenAI and other AI companies to adopt in order to mitigate risks such as entrenched inequality and misinformation.
The four demands are:
- The company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit.
- The company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise.
- The company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
- The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.
The letter emphasizes the need for responsible development of AI technology and highlights the serious risks it poses, ranging from the further entrenchment of existing inequalities, to manipulation and misinformation, to the potential loss of control of autonomous AI systems, which could result in human extinction. The letter has also been endorsed by two prominent figures in the AI field, Yoshua Bengio and Geoffrey Hinton, sometimes referred to as "Godfathers of AI."