
Following the recent internal turmoil at OpenAI, it has emerged that employees who break non-disclosure agreements may forfeit their rights to vested equity in the company. This could have significant financial consequences for those employees, as the value of their vested equity can far outweigh their salaries.

The signatories of the 'Right to Warn' letter are advocating for several protections for whistleblowers within AI companies. These demands aim to promote transparency, open dialogue, and a culture of open criticism within major generative AI companies, allowing employees to voice their concerns about potential risks associated with AI without fear of retaliation.

The 'Right to Warn' letter suggests that AI companies should handle internal criticisms and concerns about risks by committing to four key principles:

1. Not enforcing agreements that prohibit "disparagement" or criticism of the company over risk-related concerns, and not retaliating by withholding vested economic benefits.
2. Facilitating a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization.
3. Supporting a culture of open criticism that allows current and former employees to raise risk-related concerns publicly, so long as trade secrets and other intellectual property are appropriately protected.
4. Not retaliating against current and former employees who publicly share risk-related confidential information after other processes have failed.
The letter emphasizes the importance of transparency, oversight, and protection for employees who speak out about potential risks. It also calls for a verifiable process that lets employees raise risk-related concerns anonymously, arguing that ordinary whistleblower protections are insufficient because they focus on illegal activity, while many of the risks associated with AI are not yet regulated.