Target groups: Educators & Teachers, Politicians, Students, Youth, Social Organizations, Media Professionals
Democracy in the Age of AI
On April 3, 2025, a diverse group of volunteers, civil society organizations, politicians, youth, and students gathered in Darmstadt to discuss one of the most pressing challenges of our era: the role of artificial intelligence (AI) in the spread of disinformation and its consequences for democracy and society. Over the course of one hour and forty-five minutes, the workshop unfolded across five main themes, each shedding light on a different facet of this complex issue.

1. Introduction to Disinformation
The workshop began by examining why humans are naturally susceptible to disinformation. Our brains tend to fill in gaps, and statements we hear repeatedly can become part of our perceived reality regardless of their truth. This makes everyone vulnerable to manipulation. Understanding the key concepts is crucial for recognizing and resisting these threats: disinformation (deliberately false information), misinformation (incorrect but not intentionally misleading information), and malinformation (genuine information used maliciously). Recent examples, such as the widespread disinformation during the COVID-19 pandemic, illustrated the real-world impact of these phenomena.
2. Artificial Intelligence and Disinformation
AI has become an integral part of daily life, offering both benefits and risks. The workshop highlighted how AI makes it alarmingly easy to create fake images and videos, accelerating the production and dissemination of disinformation. With billions of people active on social media platforms like Facebook, the speed and reach of false content are unprecedented. AI-driven algorithms further complicate the landscape by manipulating what information users see, often reinforcing existing beliefs and biases.
3. Algorithms: Filter Bubbles & Echo Chambers, Deepfakes, and Social Bots
Social media algorithms tend to group users with similar viewpoints, showing them content that aligns with their existing beliefs. This creates “filter bubbles” and “echo chambers,” increasing polarization and undermining democratic values, which thrive on diversity and dialogue. Deepfakes (AI-generated fake videos, images, or audio) can easily deceive users, especially in fast-paced online environments. Social bots, which can interact with thousands of users simultaneously, further accelerate the spread of disinformation.
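To make the filter-bubble mechanism concrete, here is a deliberately simplified Python sketch. It is our own illustration with invented numbers, not any platform’s actual algorithm: a recommender that ranks posts purely by similarity to a user’s current opinion ends up showing each user only a narrow sliver of the available content.

```python
import random

# Toy illustration only: every parameter here is invented. Users and posts
# each carry an "opinion" score in [-1, 1]; a similarity-based recommender
# shows each user the posts closest to their own opinion, so each feed
# covers a far narrower range than the full pool of content.

random.seed(1)
POSTS = [random.uniform(-1.0, 1.0) for _ in range(1000)]
FEED_SIZE = 20

def feed_for(opinion: float) -> list[float]:
    """Return the FEED_SIZE posts most similar to the user's opinion."""
    return sorted(POSTS, key=lambda p: abs(p - opinion))[:FEED_SIZE]

for user_opinion in (-0.8, 0.0, 0.8):
    feed = feed_for(user_opinion)
    print(f"user at {user_opinion:+.1f} sees only posts in "
          f"[{min(feed):+.2f}, {max(feed):+.2f}]")
```

Even in this toy setting, each simulated user’s feed spans only a few hundredths of the full opinion range: the pool contains every viewpoint, but no single user ever sees that diversity.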
4. A Journey Through Misinformation, Disinformation, and Malinformation
The workshop featured real-world cases: deepfake videos, AI-assisted scams, and fake news built on manipulated images. These examples showed how disinformation can cause tangible harm, from stock market losses to public panic and eroded trust in the media. The discussion emphasized that combating disinformation is everyone’s responsibility.
5. Strategies: Prebunking and Debunking
Participants learned practical techniques to recognize and counter disinformation. “Prebunking” builds resistance before false claims take hold: participants practiced critically analyzing the language of news stories, since disinformation often appeals to emotions and lacks neutrality, and checking the source and author for further clues. “Debunking” tackles claims that are already circulating, chiefly by consulting fact-checking websites to verify them. The workshop aimed to equip participants with the skills to identify and resist AI-generated disinformation.
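For the debunking step, parts of the fact-checking routine can even be automated. The sketch below is a minimal example assuming an API key for Google’s Fact Check Tools API; the helper name check_claim and all parameter values are ours, not part of the workshop materials.

```python
import requests

# Sketch of a debunking helper built on Google's Fact Check Tools API
# (v1alpha1 claims:search endpoint). Assumes you have created an API key
# in the Google Cloud console; check_claim is our own illustrative helper.

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def check_claim(claim: str, api_key: str, language: str = "en") -> None:
    """Print published fact-checks matching the given claim text."""
    resp = requests.get(
        API_URL,
        params={"query": claim, "languageCode": language, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json().get("claims", []):
        for review in item.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(f"{publisher}: {review.get('textualRating')} "
                  f"({review.get('url')})")

# Example call (replace YOUR_API_KEY with a real key):
# check_claim("5G towers caused the Covid-19 pandemic", "YOUR_API_KEY")
```

Tools like this complement, rather than replace, the human judgment practiced in the workshop: an empty result does not mean a claim is true, only that no published fact-check matched.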
Interactive Elements and Collective Action
To reinforce learning, participants took a quiz to test their ability to spot disinformation, followed by group discussions on how individuals and governments can raise awareness and fight back. The event concluded with a reflection on the importance of critical thinking, media literacy, and digital literacy in building societal resilience.
This workshop highlighted that while AI poses new challenges for democracy, an informed and vigilant society, equipped with the right educational tools, can effectively counter the spread of disinformation.
