SSI has chosen to operate as a for-profit entity to enable swifter decision-making and more efficient resource allocation, both crucial in the competitive AI industry. This structure lets SSI attract investors who share its mission and understand the importance of AI safety. By contrast, OpenAI launched as a non-profit in 2015 but later restructured, citing the need for significant computing resources. The for-profit model allows SSI to stay agile and focused on its goal while also addressing the risks inherent in AI development.
Ilya Sutskever, co-founder and former chief scientist of OpenAI, left the company and started Safe Superintelligence Inc. (SSI) over differences with OpenAI's leadership on the approach to AI safety. Sutskever, along with Jan Leike, who co-led OpenAI's Superalignment team, departed in a dramatic exit in May 2024. Sutskever has long been focused on AI safety, predicting that AI with intelligence exceeding that of humans could arrive within the decade and may not be benevolent. SSI's mission is to create a safe and powerful AI system.
While at OpenAI, Ilya Sutskever focused on improving AI safety in anticipation of "superintelligent" AI systems, co-leading the company's Superalignment team alongside Jan Leike. His new venture, Safe Superintelligence Inc. (SSI), continues this focus, treating safety and capabilities as technical problems to be solved in tandem through revolutionary engineering and scientific breakthroughs. SSI's business model is designed to insulate safety, security, and progress from short-term commercial pressures.