
AI-driven simulations in polling carry significant implications for democratic processes. One major concern is the potential distortion of polling results, which can skew the conclusions citizens draw and influence their decisions in the democratic sphere. Biased polling data, for instance, could sway public opinion or alter voting behavior.
Because AI models rely on past data, they may reinforce existing ideas and preferences, producing feedback loops and echo chambers. The result can be a recursive spiral in which biased information shapes human preferences and decisions in the democratic sphere, such as what people believe or whom they vote for.
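To make this feedback dynamic concrete, the following toy simulation is a minimal sketch with invented parameters and function names (it is not drawn from any real polling or recommendation system): a "model" repeatedly summarizes the previous round of expressed opinions and nudges everyone toward that summary, and the diversity of opinion collapses round after round.

```python
import random

def simulate_feedback_loop(n_people=1000, rounds=20, pull=0.2, seed=0):
    """Toy model: a recommender trained on last round's expressed opinions
    nudges everyone toward the previous average, shrinking opinion diversity."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_people)]  # opinions on a -1..1 spectrum
    spreads = []
    for _ in range(rounds):
        # The "model" summarizes past data as a single dominant viewpoint.
        model_estimate = sum(opinions) / n_people
        # Recommended content pulls each person toward that estimate.
        opinions = [o + pull * (model_estimate - o) for o in opinions]
        # Track how much diversity of opinion remains (standard deviation).
        mean = sum(opinions) / n_people
        spread = (sum((o - mean) ** 2 for o in opinions) / n_people) ** 0.5
        spreads.append(spread)
    return spreads

if __name__ == "__main__":
    spreads = simulate_feedback_loop()
    print("opinion spread per round:", [round(s, 3) for s in spreads])
```

In this simplified setting the spread shrinks geometrically, which is one way to picture how recursive reinforcement could progressively narrow the range of views that get surfaced and expressed.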
Moreover, AI-driven simulations may fail to accurately predict public reactions or election outcomes, especially when the relevant phenomena are poorly represented in the model's training data. This can lead to misinformed decision-making and erode trust in democratic processes.
In conclusion, while AI has the potential to revolutionize various aspects of democracy, it is crucial to understand its limitations and adopt ethical norms for its development and deployment to ensure the integrity and sustainability of democratic processes.

John Wihbey, an associate professor of media innovation and technology, highlights several potential risks of AI in journalism and adjacent information fields:
Reinforcement of Past Ideas and Preferences: AI and large language models are trained on past data about people's values and interests. As a result, they may continually reinforce earlier ideas and preferences, creating feedback loops and echo chambers that are likely to keep recurring.
Over-moderation of Social Media Content: AI moderators conditioned on outdated data may over-moderate, erasing users' posts and commentary on social media platforms. This is a concern because social media is a vital space for modern human deliberation.
Distortion of Polling Results: AI-driven simulations in polling could distort results, leading to inaccurate conclusions about public opinion. This could influence citizens' preferences and decisions in a democratic space, such as their beliefs or voting choices.
Inability to Accurately Predict Public Reaction: AI models inherently cannot predict with accuracy how the public will react to an event or how an election will turn out. This limitation is particularly significant in political and social life, where much of what matters is fundamentally emergent.
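To illustrate why pattern extrapolation struggles with emergent events, here is a small, self-contained sketch; all numbers and the fit_line helper are invented for illustration. A linear trend fitted on ten weeks of steady approval data is extrapolated one week forward, straight across an abrupt shift that the past data gave no hint of.

```python
# Toy illustration: a trend fitted on past data extrapolates poorly
# across an abrupt, emergent shift in public opinion.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Weeks 0-9: slow, steady drift in approval (the "past data" the model sees).
past_weeks = list(range(10))
past_approval = [50 + 0.3 * w for w in past_weeks]

# Week 10: an emergent event (a scandal, a crisis) shifts opinion sharply.
actual_week10 = 41.0

a, b = fit_line(past_weeks, past_approval)
predicted_week10 = a + b * 10

print(f"extrapolated approval at week 10: {predicted_week10:.1f}")
print(f"actual approval after emergent shift: {actual_week10:.1f}")
print(f"error: {abs(predicted_week10 - actual_week10):.1f} points")
```

The fitted trend predicts about 53 percent while the post-shift value sits near 41, a miss of roughly 12 points; no amount of fitting to the pre-shift pattern would have anticipated the break.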
Wihbey emphasizes the importance of understanding AI's limits and adopting ethical norms for its development and deployment to mitigate these risks and ensure that democracy is not jeopardized.

According to John Wihbey, AI-generated content and moderation could have several impacts on democracy. Firstly, AI systems might replace humans in information fields like journalism, social media moderation, and polling. This could lead to a "lock-in" of public knowledge and understanding, as AI systems would continuously reinforce past ideas and preferences, creating feedback loops and echo chambers.
In journalism, the integration of AI might help in discovering and verifying information, categorizing content, and conducting large-scale analysis of social media. However, AI moderators on social media platforms might not align with the latest human preferences and could over-moderate, potentially erasing valuable user posts and commentary.
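One way to picture that misalignment is the hypothetical filter below; the term scores are made-up stand-ins for weights a moderation model might have learned from older data, under the assumption that community usage of those terms has since shifted toward benign contexts.

```python
# Hypothetical sketch: a moderation filter whose term scores reflect older
# usage keeps flagging posts after community norms have shifted.

OUTDATED_TERM_SCORES = {  # invented weights standing in for a trained model
    "protest": 0.6,
    "vaccine": 0.7,
    "riot": 0.9,
    "bake": 0.0,
}
FLAG_THRESHOLD = 0.5

def moderation_score(post: str) -> float:
    """Score a post by its most 'risky' term under the outdated weights."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    return max(OUTDATED_TERM_SCORES.get(w, 0.0) for w in words)

posts = [
    "Join the peaceful protest downtown this weekend",
    "Where can I book a vaccine appointment?",
    "I plan to bake bread on Sunday",
]

for post in posts:
    score = moderation_score(post)
    verdict = "REMOVED" if score >= FLAG_THRESHOLD else "kept"
    print(f"{verdict:7s} ({score:.1f}): {post}")
```

In this contrived example the two benign posts are removed because the scores reflect yesterday's usage, which is the kind of over-moderation of legitimate deliberation the concern points to.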
Furthermore, AI-driven simulations in polling could distort results, skewing citizens' conclusions and influencing their preferences and decisions in the democratic sphere. This could create recursive spirals that shape what people believe and whom they vote for.
Wihbey also emphasizes that AI models cannot accurately predict the public's reaction to an event or an election outcome, as they rely on pattern extrapolation from past data, which may not account for emergent social and political phenomena.