

Former OpenAI board members Helen Toner and Tasha McCauley argued in The Economist that AI companies like OpenAI cannot be relied on to regulate themselves, citing a toxic work culture under CEO Sam Altman as a key example. They advocate government intervention to ensure AI development remains safe and beneficial for humanity.
They also warned that profit-driven motives undermine self-governance within AI companies. In response, they call for independent policymaking that avoids regulatory capture and does not stifle competition or innovation. Their argument comes amid a broader debate over how to regulate the risks posed by advanced AI systems.

The Artificial Intelligence Safety and Security Board is a 22-member board established by the Department of Homeland Security to provide recommendations for the safe and secure development and deployment of AI across US critical infrastructure. Its composition is significant: it includes major tech CEOs alongside representatives from tech nonprofits.
The inclusion of these prominent tech leaders signals how seriously the government is taking AI's impact on critical infrastructure. Some critics, however, argue that the overrepresentation of profit-motivated companies could lead to policies that prioritize industry interests over human safety.

Toner and McCauley have also expressed concern that poorly designed AI regulations could stifle competition and innovation, particularly for smaller companies. They emphasize that policymakers must act independently of the leading AI companies when crafting new rules. This independence is crucial to avoid creating regulatory "moats" that shield established companies from new competitors, and to prevent regulatory capture, in which regulations are unduly shaped by the very industry they are meant to govern. Their advocacy for independent policymaking aims to ensure that new regulations are fair and do not unduly burden emerging companies, preserving a healthy competitive environment in the AI sector.