

Global Misuse of AI Technology for Propaganda
State Actors Utilizing AI
Groups from Russia, China, Iran, and Israel have been using OpenAI's technology to influence global political discourse, particularly in the run-up to the 2024 U.S. presidential election.
Efforts to Counteract Misuse
OpenAI has taken actions such as removing accounts linked to known propaganda groups and developing tools to identify AI-generated deepfakes, although these detection tools are still being refined.
Limited Impact So Far
So far, the social media accounts linked to these propaganda campaigns have achieved minimal reach, suggesting a limited impact on public opinion at this stage.
Potential for Future Abuse
While current misuse has not been highly effective, there is ongoing concern that these technologies could evolve, potentially leading to more successful influence operations if not adequately monitored.

The influence operations conducted by the Russian group "Bad Grammar" used OpenAI's technology to automate posting on the messaging app Telegram, generating and disseminating posts and comments in both Russian and English. The content created by "Bad Grammar" primarily promoted the narrative that the United States should not support Ukraine. This operation is part of a broader trend in which various groups exploit AI technologies to enhance the efficiency and reach of their propaganda efforts.

According to Ben Nimmo, the principal investigator on OpenAI's intelligence and investigations team, the social media accounts associated with these influence operations were largely ineffective: they reached few users and had only a handful of followers. Despite the groups' attempts to leverage AI to generate and disseminate content, their operations gained little traction or visibility on social media platforms.