
OpenAI employees have raised concerns about the company's safety practices, claiming that its AI systems are not being developed responsibly. They argue that safety has been deprioritized, with the company rushing through safety tests and celebrating product launches before ensuring those products are safe. The employees have called on the startup to adopt better safety and transparency practices.

OpenAI's safety team was dissolved following the departures of co-founder Ilya Sutskever and Jan Leike. The team was then integrated into other research efforts intended to help the company achieve its safety goals. This decision came amid a string of recent departures from OpenAI, reviving questions about how the company balances speed and safety in developing AI products.

OpenAI's charter states that safety is core to its mission: if artificial general intelligence (AGI) is reached by a competitor, the organization commits to assisting that effort's safety work rather than continuing to compete. It is dedicated to solving the safety problems inherent in large, complex AI systems.