
Navigating the Digital Landscape: Media Literacy & AI-driven Mis/Disinformation

Immune 2 Infodemic WP5 - Building Resilience Against AI-Driven Disinformation

In a digital era where the line between truth and illusion is increasingly blurred, artificial intelligence (AI) stands as both a promise and a peril. It empowers creativity, accelerates knowledge, and enhances problem-solving. Yet, in the wrong hands—or without critical awareness—it also amplifies manipulation, deepens polarization, and weakens public trust.

The Immune 2 Infodemic 2 project, funded by the European Union under the CERV program, was created to strengthen citizens’ capacities to recognize, understand, and counter misinformation and disinformation. The initiative is a collaboration between Beyond the Horizon, Dare to be Grey, Facta Bari, and Januam gUG, who jointly explore how digital literacy, media awareness, and community-based learning can help societies resist the “infodemic” that now spreads faster than any virus.

Within this project, Work Package 5 (WP5) was led and implemented by Januam gUG. The goal was clear: to promote media and AI literacy, enhance critical thinking, and empower citizens to become active defenders of truth. Two key online workshops, conducted in September and October 2025, served as the cornerstone of this effort.

These workshops—“Navigating the Digital Landscape: Media Literacy and AI-Driven Mis/Disinformation” and “AI vs. Reality – How Communities Can Outsmart Disinformation”—brought together participants from sixteen countries, creating a European and global learning space. Through expert input, participatory discussion, and interactive practice, WP5 proved that resilience against AI-driven manipulation can be cultivated through knowledge, empathy, and collaboration.

Immune 2 Infodemic

AI vs. Reality

Raise Your Awareness: Attend the webinar to increase your understanding of AI-generated disinformation and its potential effects on communities.

16 September 2025

15.30 - 17.30 CET

Online

Topics & Guest Speakers

Building Psychological Resilience Against AI-Based Misinformation

👤 Melisa Basol, PhD – Behavioural Scientist
Learn how psychological and cognitive tools can help individuals critically engage with digital content. Discover practical, evidence-based strategies—beyond simple fact-checking—that build lasting mental defenses against manipulation and AI-generated disinformation.
🔗 Melisa Basol – LinkedIn

Social Environments & Community-Based Response

👤 Prof. Dr. Daniel O. Livvarcin – Professor & Social Entrepreneur

Examine the critical role of communities and grassroots leadership in detecting and countering disinformation. Understand how trust, collaboration, and local networks can act as powerful shields against false narratives.

🔗 Daniel O. Livvarcin – LinkedIn

Disinformation, Democracy & Security

👤 Christopher Nehring – Disinformation Expert, Cyberintelligence Institute

Gain insights into how disinformation, amplified by AI technologies, affects democracy, political stability, and security. Explore the latest developments, risks, and responses from a disinformation research and policy perspective.

🔗 Christopher Nehring – LinkedIn

1. Objectives of WP5

The overarching objective of WP5 was to build awareness and capacity among citizens, educators, and community leaders to identify and counter AI-driven misinformation and disinformation. Specific aims included:

  1. Raising awareness about the scale, speed, and psychological mechanisms of AI-enabled manipulation.
  2. Developing practical skills for recognizing deepfakes, synthetic voices, and algorithmic bias.
  3. Promoting psychological resilience—the ability to critically process information and resist emotional manipulation.
  4. Encouraging cross-sector collaboration between academia, civil society, educators, and digital professionals.
  5. Generating actionable recommendations for integrating AI literacy into educational materials and community initiatives.

The WP5 workshops were designed to be interactive, cross-disciplinary, and inclusive, ensuring that both technical and social perspectives were represented.

2. Context: AI, Disinformation, and Society

AI’s capacity to generate text, images, and video has revolutionized communication. Unfortunately, the same technologies now power a new wave of disinformation. Generative models can fabricate news articles, clone voices, and produce deepfakes that deceive millions before verification mechanisms even react.

According to the World Economic Forum’s 2024 Global Risks Report, misinformation and disinformation amplified by AI rank among the most severe short-term global risks. With billions of people voting in the “super-election” cycle spanning 2024 and 2025, the stakes for information integrity could not be higher.

Misinformation has always existed, but AI makes it industrial—faster, cheaper, and more persuasive. It exploits what behavioural scientists call confirmation bias and cognitive depletion, leveraging emotional triggers to bypass rational thought. The danger lies not only in false content but also in the erosion of shared reality.

Januam’s WP5 addressed this challenge head-on. Its philosophy was that the fight against misinformation cannot rely solely on technology or regulation; it requires an informed and empowered public able to navigate complexity with critical empathy.

3. Workshop 1 Overview – Navigating the Digital Landscape

Date: 16 September 2025
Format: Online (Webinar)
Registered: 202 (from 18 countries)

The first workshop, titled “Navigating the Digital Landscape: Media Literacy & AI-Driven Mis/Disinformation,” marked the official implementation of WP5. It gathered community leaders, educators, digital-literacy advocates, researchers, and professionals working to build organizational resilience.

Moderated by Ömer Evrey, cybersecurity consultant and Januam volunteer, the session featured three distinguished speakers:

  • Dr. Melisa Basol, behavioural scientist and Cambridge researcher known for her pioneering work on prebunking and inoculation theory.

  • Dr. Christopher Nehring, disinformation and cyber-intelligence expert associated with the Konrad Adenauer Foundation.

  • Prof. Dr. Daniel O. Livvarcin, Canadian social entrepreneur and founder of Vectors Group, advocating collaborative, ethical AI use in the non-profit sector.

Agenda and Format

The 2-hour online session combined keynote presentations, Q&A discussions, and interactive reflection. Each segment focused on a distinct dimension of digital resilience: behavioural, technological, and social.

Participants joined from sixteen countries across Europe and beyond—including Germany, Turkey, Belgium, Finland, Spain, Italy, Greece, Norway, Switzerland, Malta, Poland, Denmark, the Netherlands, and the United Kingdom—illustrating the cross-border nature of AI-driven challenges.

4. Key Insights from Workshop 1
4.1. Behavioural Science and Psychological Resilience

Dr. Melisa Basol opened the event with a keynote titled “Building Psychological and Technical Resilience Against AI-Based Misinformation.” She explained that the traditional fight against fake news has evolved into a defense of reality itself.

“It’s no longer about distinguishing truth from lies,” she said. “Every image, every sound, every claim may now be synthetic. We are trying to defend reality itself.”


She introduced a three-tier model of manipulation:

  1. Manipulation by Design – algorithmic structures that privilege emotional, polarizing content.
  2. Manipulation by Deployment – intentional misuse of AI to produce and spread falsehoods.
  3. Manipulation by Interaction – psychological trust developed between humans and AI companions.

Basol argued that disinformation spreads through structural bias, deliberate intent, and relational dependency. Therefore, countermeasures must be equally layered: ethical algorithms, AI-based prebunking, and participatory governance through citizens’ councils.

“Disinformation thrives on opacity,” she concluded. “We need transparency, auditing, and empathy—three pillars for a trustworthy digital ecosystem.”


4.2. Real-World Examples of AI Manipulation

The moderator, representing Januam, followed with a presentation titled “AI vs Reality.” He shared viral examples of deepfakes and synthetic news that had influenced public opinion and even financial markets, and emphasized that AI makes disinformation scalable:

“It’s faster to produce, cheaper to distribute, and more convincing than ever before. When false trust spreads, it shakes democracy itself.”


He then introduced the twin strategies of prebunking (building psychological immunity before exposure) and debunking (fact-checking after exposure), encouraging participants to view themselves as digital first responders.

4.3. The Industrialization of Disinformation

Dr. Christopher Nehring expanded the discussion by mapping the industrial structure of global AI disinformation. Drawing on recent research, he revealed networks of automated fake-news websites, synthetic social-media profiles, and translation bots used in coordinated campaigns.

“We identified over a hundred cases across Europe and Africa,” he said. “Some operations generate ninety thousand AI-produced articles per month. This is not a niche phenomenon—it’s a factory.”


Nehring introduced the concept of “cognitive DDoS attacks”—flooding fact-checkers and journalists with fake leads to exhaust their resources. “The objective is not persuasion but paralysis,” he explained. “By overwhelming the system, truth becomes just one voice among noise.”

He advocated multi-layered countermeasures: labeling and watermarking of AI content, legislative enforcement of transparency, and the institutionalization of disinformation management within organizations.

4.4. Community-Based Responses

Prof. Dr. Daniel O. Livvarcin concluded the workshop by offering a human-centered response. Drawing from his book Understanding and Using AI for Non-Profit Leaders, he used a simple metaphor:

“AI is like a smart toddler,” he said. “It learns from everything we show it. We must teach it well before it grows into a teenager that no longer listens.”


He urged NGOs and civil-society groups to cooperate rather than compete:

“If every organization is a vector, alignment matters. When we move together, we amplify our power. When we move apart, we cancel each other out.”


Livvarcin’s motto—“Don’t be afraid, be prepared”—became a recurring phrase among participants. He emphasized that small, consistent actions such as correcting AI systems, fact-checking content, and sharing verified information can collectively shift the digital landscape toward responsibility.

Immune 2 Infodemic

Democracy in the Age of AI: Battling Mis/Disinformation

In today’s digital era, Artificial Intelligence (AI) is transforming how we access, share, and understand information. While AI offers enormous opportunities for innovation and progress, it also brings new challenges for democracy — especially in the fight against misinformation and disinformation.

Join us for an insightful webinar exploring how AI influences public opinion, elections, and civic participation. Together, we will discuss how individuals, educators, journalists, and organizations can strengthen digital resilience and promote informed democratic engagement in the AI age.

🎙️ Key Topics

  • Understanding the role of AI in shaping information ecosystems
  • How misinformation spreads faster with AI-generated content
  • Building public trust and digital literacy
  • Practical tools and strategies to detect and counter disinformation

29 October 2025

18.00 - 19.00 CET

Online

5. Workshop 2 Overview – Democracy in the Age of AI – Battling Mis/Disinformation

Date: 29 October 2025
Format: Online (Google Meet, due to Azure outage)
Registered: 32 (from 3 countries)

The second WP5 workshop transformed the lessons of the first into a hands-on learning experience. Smaller in size but richer in dialogue, it brought together students, educators, journalists, and civil-society representatives.

The sudden shift from Microsoft Teams to Google Meet—caused by a temporary global disruption in Microsoft Azure—ironically illustrated one of the session’s themes: resilience through adaptability.

Objectives

The event aimed to:

  • Reinforce critical-thinking and verification skills;
  • Demonstrate practical tools to detect AI manipulation;
  • Foster inclusive dialogue around democracy and technology;
  • Encourage peer-to-peer learning through guided exercises.

6. Key Insights from Workshop 2

The session combined expert input with hands-on exercises. The facilitator presented real-life examples of AI-generated falsehoods, while participants practiced identifying inconsistencies in images, metadata, and linguistic cues.
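The linguistic-cue side of that exercise can be illustrated with a small heuristic screen. The word list and threshold below are illustrative teaching assumptions, not a tool used in the workshop and certainly not a reliable detector:

```python
import re

# Illustrative red-flag markers often discussed in media-literacy training.
# The word list and threshold are teaching examples, not a validated detector.
URGENCY_WORDS = {"shocking", "urgent", "breaking", "secret", "exposed"}

def linguistic_cues(text: str) -> dict:
    """Count simple stylistic red flags in a piece of text."""
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "exclamations": text.count("!"),
        "all_caps_words": sum(1 for w in words if len(w) > 3 and w.isupper()),
        "urgency_words": sum(1 for w in words if w.lower() in URGENCY_WORDS),
    }

def looks_suspicious(text: str, threshold: int = 3) -> bool:
    """Flag text whose combined cue count meets an illustrative threshold."""
    return sum(linguistic_cues(text).values()) >= threshold

print(linguistic_cues("SHOCKING! The SECRET they EXPOSED is urgent!!!"))
print(looks_suspicious("The committee published its quarterly report."))
```

Heuristics like these only prompt a closer look; the workshop paired them with image and metadata checks precisely because no single cue is conclusive.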

Several participants noted that this interactive approach made the topic more tangible. One participant wrote in the chat:

“AI literacy should be as fundamental as reading or writing. Every classroom needs it.”

Participants also explored community strategies—how local organizations can act as information first-aid centers for vulnerable groups. Discussions emphasized empathy when correcting misinformation, particularly among older audiences or those with limited digital access.

By the end of the session, attendees participated in a short visual quiz prepared by Januam. Using ten illustrated questions, they practiced distinguishing between AI-generated and real images, testing their ability to recognize subtle signs of manipulation.
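Scoring a quiz of that shape takes only a few lines. The answer key below is hypothetical, not the one from Januam’s actual quiz:

```python
# Hypothetical answer key for a ten-question "real or AI-generated?" quiz;
# the real quiz content is not reproduced here.
ANSWER_KEY = ["ai", "real", "ai", "ai", "real", "real", "ai", "real", "ai", "real"]

def score_quiz(responses, key=ANSWER_KEY):
    """Count answers matching the key (shorter response lists score the overlap)."""
    return sum(r == k for r, k in zip(responses, key))

print(score_quiz(ANSWER_KEY))      # perfect score: 10
print(score_quiz(["ai"] * 10))     # matches only the five "ai" entries: 5
```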

7. Cross-Cutting Themes

Across both workshops, several themes emerged as central pillars for defending truth in the age of AI:

7.1. Layers of Manipulation

Participants understood manipulation as multi-layered—embedded in algorithmic design, weaponized by malicious actors, and reinforced by human-AI interactions.

7.2. Societal Impact

AI disinformation corrodes public trust, influences elections, and accelerates social polarization. Participants recognized that defending shared reality is essential for democracy.

7.3. Digital Resilience

Critical thinking, prebunking, and community-based fact-checking were identified as scalable tools to counter manipulation at both personal and systemic levels.

7.4. The Role of Civil Society

NGOs, local networks, and advocacy groups play a crucial role in extending AI literacy to marginalized or linguistically isolated communities.

7.5. Shared Responsibility

Participants concluded that safeguarding information integrity is not solely a government or corporate duty; it is a collective civic responsibility.

8. Outcomes and Impact
8.1. Quantitative Reach

Total Participants: 234 (from 18 countries)

Female: 76; Male: 158

Total Page Views: over 1,200

Registrations: 202 (first event) and 32 (second)

Overall Satisfaction: consistently positive participant feedback across both events

8.2. Qualitative Results

Increased Awareness: Participants reported deeper understanding of AI’s role in shaping information ecosystems.

Enhanced Skills: Attendees gained concrete methods for verifying content and identifying synthetic media.

Cross-Sector Dialogue: Educators, researchers, and civil-society actors established networks for continued cooperation.

Initiative Proposals: The idea of an “AI Literacy for Communities and Good” consortium emerged as a direct outcome.

Curriculum Integration: Several educators committed to embedding AI literacy modules into their teaching materials.

8.3. Emotional and Ethical Impact

Many participants expressed relief in discovering that misinformation can be confronted collectively. They left with a sense of empowerment rather than fear. 

“This was not just a training. It was a reminder that technology must serve people, not control them.”

9. Recommendations and Follow-Up Actions

Based on discussions and evaluations, WP5 formulated a series of recommendations for policymakers, educators, and civil-society actors:

9.1. Educational Integration

Embed AI literacy and media-resilience modules in school curricula and adult-learning programs.

Use multilingual resources to reach diverse audiences, including migrants and vulnerable groups.

9.2. Institutional Cooperation

Establish a European consortium for AI literacy, linking NGOs, universities, and training providers.

Support quarterly network meetings for knowledge exchange and joint advocacy.

9.3. Policy and Governance

Promote transparency standards and independent audits for AI systems.

Encourage participatory oversight via citizen AI councils and ethical-tech observatories.

9.4. Technological Tools

Develop open-source tools for education, watermark detection, and AI-content labeling.

Advocate for interoperability between national fact-checking networks.

9.5. Community Engagement

Support local workshops and online hubs to continue grassroots education.

Foster digital empathy—teaching respectful dialogue when confronting misinformation.

10. Conclusion – Toward a Digitally Resilient Europe

Work Package 5 of Immune 2 Infodemic 2 demonstrated that countering AI-driven disinformation is not solely a technological task—it is a human and civic mission.

Through two workshops, more than two hundred participants from eighteen countries built shared understanding, practical skills, and a collaborative spirit. They learned that while AI can distort truth, it can also defend it—if guided by ethics, transparency, and human values.

Dr. Basol summarized it best:

“If we don’t have shared reality, we don’t have democracy.”

11. Visual Storytelling for Awareness – Two Short Documentaries

As part of the Immune 2 Infodemic 2 (WP5) activities, the Januam team produced two short documentaries designed to make complex topics such as misinformation, disinformation, and AI literacy accessible to wider audiences.
These visual pieces complement the workshops by offering engaging, reflective, and educational narratives that can be shared freely across platforms and learning environments.

1. Infodemic Explained: Misinformation, Disinformation & Malinformation

We live in the age of information—where news, posts, and content spread within seconds. But is every piece of information true? 
This short documentary explores the meaning and impact of the “Infodemic,” unpacking three key concepts that shape our information ecosystem:

  • Misinformation – false but well-intentioned information,

  • Disinformation – deliberately misleading or manipulative information,

  • Malinformation – true information shared with harmful intent.
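The three categories reduce to a two-axis taxonomy: is the content true, and is there intent to harm? As an illustration of the definitions above (the function name and labels are ours, not the film’s), this can be sketched as a small decision function:

```python
def classify(is_true: bool, intent_to_harm: bool) -> str:
    """Map veracity and intent to the mis/dis/malinformation taxonomy."""
    if is_true:
        # True content shared to cause harm is malinformation.
        return "malinformation" if intent_to_harm else "accurate information"
    # False content: the distinction is whether deception is deliberate.
    return "disinformation" if intent_to_harm else "misinformation"

print(classify(is_true=False, intent_to_harm=False))  # misinformation
```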

Through simple visuals and clear narration, Infodemic Explained invites viewers to pause and reflect before sharing. It emphasizes the civic principle behind WP5’s core message:

“Think before you share.”


The film concludes with an invitation to deepen understanding by joining Januam’s educational activities, including the webinar “AI vs. Reality: How Communities Can Outsmart Disinformation.”

🎥 Watch the documentary here

2. Digital Shadows: The Dark Side of Social Media

A screen, billions of connections, and a world full of shining moments… yet behind the glow, hidden shadows wait.
In this short documentary, Digital Shadows, Januam explores the unseen risks of social media and the psychological impact of the digital world.

Key themes include:

  • The rapid spread of misinformation and disinformation,

  • Deepfake videos powered by artificial intelligence,

  • Pressure, anxiety, and depression among young users,

  • Our shared responsibility to protect truth and digital well-being.

The documentary challenges viewers to reflect critically on their daily digital habits. “Social media is more than entertainment—it shapes our thoughts, our identity, and how we see the world.”

Its closing message echoes the guiding philosophy of the Immune 2 Infodemic initiative:

“In the digital age, the greatest danger is not the lie itself—but failing to see it.”


🎬 Watch the documentary here

These two productions serve as accessible educational tools within WP5, helping audiences visualize the abstract mechanisms of disinformation and connect emotionally with the importance of critical thinking, ethics, and community awareness.
Together with the workshops, they complete the broader educational cycle of Immune 2 Infodemic 2 — from learning and dialogue to reflection and action.
