
The Effects of Intergenerational Social Media Usage and AI-Driven Disinformation in the Digital Society

The Central Role of Social Media in a Digitalized Society

With digitalization, social media platforms have assumed a central position in the dissemination of information and the shaping of public perception. As of 2024, more than 60% of the world’s population is connected via social media, creating an unprecedented information ecosystem that fundamentally shapes how societies communicate and understand the world. Particularly among younger generations, social media serves both as an everyday source of information and as a tool for social interaction. These same platforms, however, have become dynamic and rapidly evolving spaces for the spread of disinformation and AI-driven misinformation.

Intergenerational Social Media Usage in Germany

Data from 2024 highlight significant differences in social media usage across generations in Germany:

  • Generation Z (1995-2012): Over 70% engage in daily social media use
  • Millennials (1980-1994): Roughly 65% are active daily
  • Generation X (1965-1979): Usage intensity approaches 60%
  • Baby Boomers (1946-1964): Less than 50% use social media daily, with a noticeable increase in the proportion of “infrequent users”

These intergenerational differences extend beyond mere usage patterns to vulnerability to misinformation. Recent research analyzing 26 million tweets from over 2 million U.S. users reveals that Generation Z shows the highest vulnerability to misinformation, with only 11% achieving high scores on misinformation detection tests compared to 36% of adults aged 65 and older. This paradox suggests that digital nativity does not necessarily translate to digital discernment.

Global Platforms and Trends

The world’s most used social media platforms and their monthly active user counts are as follows:

  • Facebook: 3.07 billion monthly active users
  • YouTube: 2.53 billion
  • Instagram and WhatsApp: 2.0 billion each
  • TikTok: 1.59 billion

This data reflects a significant shift in the information ecosystem, with younger populations gravitating toward short-form video platforms such as TikTok and Instagram, which present unique challenges for information verification and fact-checking due to their visual and ephemeral nature.

The Quantified Public Health Impact of Social Media Disinformation

While enabling rapid information exchange, social media also facilitates the rapid spread of false information (disinformation) in fields such as health, science, and politics, posing direct and measurable threats to public health. Recent modeling studies using empirical Twitter data across 341 U.S. counties demonstrate that misinformation could cause an additional 14% of the U.S. population—approximately 47 million Americans—to become infected during an epidemic scenario.

The financial implications are equally staggering. Healthcare costs attributable to vaccine misinformation alone exceed $143 billion, using COVID-19 cost data as a baseline. In high-misinformation scenarios, peak infection rates can be amplified six-fold and brought forward by two weeks, illustrating how digital misinformation translates directly into physical health outcomes and economic burden.

During the COVID-19 pandemic, disinformation visibly fueled anxiety, fear, and distrust of science within society. Platform algorithms, by showing users primarily content that aligns with their existing viewpoints, intensify “filter bubble” and “echo chamber” effects. Researchers have formalized the resulting dynamics in a “SMIR” model, in which individuals move between Susceptible, Misinformed, Infected, and Recovered states, with exposure to misinformation significantly affecting each transition.
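
To make these SMIR dynamics concrete, the sketch below simulates a minimal compartmental model of this kind in Python. All parameter values (the misinformation exposure rate, the infection rates, the three-fold risk multiplier for misinformed individuals) are illustrative placeholders, not the values fitted in the study; the point is only to show how a misinformed compartment can raise the epidemic peak and pull it forward in time.

```python
# Minimal SMIR-style compartmental model (illustrative parameters, not
# those fitted in DeVerna et al. 2025). Misinformed individuals take
# fewer precautions and are therefore infected at a higher rate.

def simulate(days=300, dt=0.1, exposure=0.002, beta_s=0.20, risk=3.0, gamma=0.10):
    """Euler integration; all compartments are fractions of the population."""
    S, M, I, R = 0.99, 0.0, 0.01, 0.0
    beta_m = risk * beta_s                  # misinformed are infected `risk` times faster
    peak, peak_day = I, 0.0
    for step in range(int(days / dt)):
        inf_s = beta_s * S * I              # new infections among the susceptible
        inf_m = beta_m * M * I              # new infections among the misinformed
        mis = exposure * S                  # susceptible users exposed to misinformation
        rec = gamma * I                     # recoveries
        S += dt * (-mis - inf_s)
        M += dt * (mis - inf_m)
        I += dt * (inf_s + inf_m - rec)
        R += dt * rec
        if I > peak:
            peak, peak_day = I, (step + 1) * dt
    return R, peak, peak_day

# Compare a low- and a high-misinformation scenario.
for label, exposure in [("low misinformation", 0.0002), ("high misinformation", 0.02)]:
    attack, peak, day = simulate(exposure=exposure)
    print(f"{label:20s} attack rate={attack:.2f}  peak={peak:.3f} on day {day:.0f}")
```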

Empowering Youth Against Misinformation

According to data from NewsGuard, an alarming 20% of news content on popular topics contains false or misleading information. The problem is not limited to health: it spans politics, breaking events such as school shootings, and reporting that reinforces prejudices and fuels various forms of discrimination. The situation has become more critical with the emergence of AI-generated content, with NewsGuard reporting a 10-fold increase in AI-generated fake news sites in 2023 alone.

As the data show, a large portion of young people are exposed to such content, especially through social media, and the frequency is far from negligible. Exposure to misinformation is not something we can simply brush off: it undermines trust, reduces voter participation, and drives polarization by reinforcing biases. This affects not only the direction of society but also erodes individuals’ psychological well-being over time.

It often starts with disinformation, but reality begins to shift when people treat others with prejudice on the basis of false stories, or when blanket distrust of political institutions breeds hopelessness. This is particularly dangerous for the youth, as they represent the future of our societies: their thoughts, ideas, and hopes shape what is to come.

This is why it’s crucial to develop media literacy from an early age. Being proficient in navigating social media is not just a personal skill, but a necessity for the health of society as a whole. This is exactly what The IMMUNE 2 INFODEMIC project aims to achieve: empowering youth with the knowledge and skills to discern truthful information, paving the way for a more resilient and hopeful future for all.

The Rise of AI-Driven Disinformation: Detection and Challenges

Advancements in artificial intelligence have made disinformation more potent and dangerous, but they have also provided new tools for detection and analysis. Recent comprehensive research on detection methods shows that machine learning approaches can achieve remarkable accuracy: XGBoost models using TF-IDF vectorization reach 99% accuracy for fake-news detection, while CNN models built on ResNet50 and DenseNet121 backbones reach 98% and 87% accuracy, respectively, for deepfake detection.
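
Neither pipeline is reproduced exactly here, but both follow standard patterns. The first sketch below is a minimal TF-IDF-plus-XGBoost fake-news classifier; the file news.csv and its text/label columns are placeholder assumptions, and the accuracy achieved depends entirely on the corpus used.

```python
# Minimal TF-IDF + XGBoost fake-news classifier in the spirit of the
# approach cited above. "news.csv" and its "text"/"label" columns are
# placeholder assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("news.csv")                      # columns: text, label (0 = real, 1 = fake)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2), stop_words="english")
X_train_vec = vectorizer.fit_transform(X_train)   # fit the vocabulary on training text only
X_test_vec = vectorizer.transform(X_test)

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss")
model.fit(X_train_vec, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test_vec)))
```

A deepfake analogue swaps the text features for a pretrained vision backbone. The sketch below uses a frozen, ImageNet-pretrained ResNet50; the frames/train and frames/val directory layout (with real/ and fake/ subfolders of face crops) is likewise an assumption for illustration.

```python
# Transfer-learning deepfake detector with a frozen ResNet50 backbone.
# The frames/train and frames/val directories (real/ and fake/
# subfolders of extracted frames) are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers

train = tf.keras.utils.image_dataset_from_directory(
    "frames/train", image_size=(224, 224), batch_size=32)
val = tf.keras.utils.image_dataset_from_directory(
    "frames/val", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                                  # keep the pretrained weights fixed

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = layers.Dense(1, activation="sigmoid")(x)      # real (0) vs fake (1)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=3)
```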

However, the sophistication of AI-generated content continues to evolve:

Automated Content Production: AI can swiftly generate deepfake videos, fake images, and convincing but false text. An analysis of over 20,000 articles found that 60% of negative-polarity content was fabricated, versus 53% of positive-polarity content, indicating systematic exploitation of negative emotions (a minimal analysis sketch follows these points).

Bots and Automated Dissemination: Automated accounts are used to maximize the spread and viral impact of misinformation in political or commercial campaigns.

Algorithmic Steering: Recommendation algorithms expose audiences primarily to content matching their existing interests, amplifying polarization and continually reinforcing misinformation.

Economic Incentives and Content Market: Viral disinformation increases the earnings of both content creators and platforms due to high engagement rates.
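
One way to check the polarity pattern described under “Automated Content Production” above is to score each article’s sentiment and compare the share of fabricated items per polarity bucket. The sketch below uses NLTK’s VADER scorer; articles.csv and its text/is_fake columns are placeholder assumptions, and the ±0.05 cutoffs are VADER’s conventional neutrality thresholds.

```python
# Bucket labeled articles by sentiment polarity and compare the share
# of fabricated items per bucket. "articles.csv" and its columns are
# placeholder assumptions.
import nltk
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

df = pd.read_csv("articles.csv")                  # columns: text, is_fake (0/1)
df["polarity"] = df["text"].map(lambda t: sia.polarity_scores(t)["compound"])
df["bucket"] = pd.cut(df["polarity"], bins=[-1.0, -0.05, 0.05, 1.0],
                      labels=["negative", "neutral", "positive"], include_lowest=True)

# Fraction of fabricated content in each polarity bucket; under the
# pattern reported above, the "negative" bucket should be highest.
print(df.groupby("bucket", observed=True)["is_fake"].mean())
```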

Core Mechanisms Fueling Disinformation

Research has identified several key mechanisms that enable disinformation to thrive:

  • The rapid virality of unverified content
  • Platforms’ reluctance to manage and moderate content effectively
  • Human psychology: cognitive biases such as confirmation bias, the illusory-truth effect of repetition, and truth bias (the default tendency to assume messages are honest)
  • The drive to become an influencer
  • Systematic exploitation of negative emotions (fear, anger, sadness) for viral spread

Prevention and Counteraction Strategies

Based on scientific findings and international recommendations, a three-tiered public health approach to combating disinformation has emerged:

1. Tertiary Prevention

Continuous Monitoring and Labeling: Applying warnings, fact-checking, or source tags to misleading content using advanced detection algorithms that can achieve near-perfect accuracy rates.

Active Debunking: Promptly publishing corrective content and informing the public using evidence-based approaches.

2. Secondary Prevention

Nudging and User Awareness: Issuing accuracy prompts prior to sharing and encouraging news literacy, particularly targeting younger demographics who show higher vulnerability to misinformation.

Digital Literacy Education: Widespread media and information literacy training beginning in school-age populations, with evidence-based programs specifically designed for youth digital literacy development.

3. Primary Prevention

International and Local Regulation: Ensuring transparency, accountability, and algorithmic regulation for social media companies. Recent assessments of the EU Code of Practice on Disinformation reveal significant compliance gaps, with platforms averaging only 1.9 out of 3.0 compliance scores.

Systemic and Legal Interventions: Employing measures such as the EU Digital Services Act and forging global agreements. Current regulatory frameworks show substantial deficiencies, with 55% of platform measures missing qualitative information and 64% missing quantitative data in their compliance reports.

Platform Accountability and Regulatory Effectiveness

Recent systematic analysis of major platform compliance with disinformation regulations reveals concerning gaps between stated commitments and actual implementation. Among major platforms (Google, Meta, Microsoft, TikTok, and Twitter), Google performed best with a score of 2.1/3.0, while Twitter scored lowest at 1.0/3.0 before withdrawing from the EU Code of Practice in May 2023.

Particularly troubling are substantial deficiencies in content moderation reporting, with questionable methodologies for calculating fake account prevalence. For instance, TikTok claims only 0.0067% fake accounts while Meta reports approximately 5%, suggesting either vastly different detection capabilities or reporting standards.
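
The scale of that discrepancy is easy to miss when both figures are given as percentages; a quick back-of-the-envelope comparison, using only the numbers quoted above, makes it explicit:

```python
# Self-reported fake-account prevalence, as quoted above, converted to
# fractions. Purely illustrative arithmetic.
reported = {"TikTok": 0.0067 / 100, "Meta": 5.0 / 100}
ratio = reported["Meta"] / reported["TikTok"]
print(f"Meta's reported prevalence is roughly {ratio:.0f}x TikTok's")  # ~746x
```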

The Role of the IMMUNE 2 INFODEMIC Project

The IMMUNE 2 INFODEMIC project offers an integrative model aimed at fortifying society against disinformation in the age of artificial intelligence, focusing on media literacy, critical thinking, and resilience. Through educational panels, workshops, and technical solutions informed by the latest research on detection algorithms and intergenerational vulnerability patterns, this program aims to strengthen society’s resistance to misinformation.

Conclusion and Recommendations

Disinformation and AI-driven misinformation in the digital age threaten both individual and public health as well as democracy itself. The quantified impacts—potentially affecting 47 million Americans during health crises and costing over $143 billion in healthcare expenses—underscore the urgency of evidence-based interventions. Analyzing intergenerational differences and platform preferences is crucial to establishing a resilient information ecosystem, particularly given the paradoxical finding that digital natives are most vulnerable to misinformation.

Recommended actions include:

  • Widespread digital media literacy education targeting younger demographics with evidence-based programs
  • More active content management by platforms with standardized reporting methodologies and compliance monitoring
  • Enhanced regulatory frameworks addressing current compliance gaps and enforcement mechanisms
  • Continuous fact-checking and public awareness campaigns utilizing high-accuracy AI detection tools
  • Investment in detection technologies that can achieve near-perfect accuracy rates for both fake news and deepfake content

Addressing these challenges of the digital era requires multi-disciplinary and sustainable strategies that prioritize public health and are grounded in empirical research rather than theoretical frameworks alone.

References
  • Abraham, T.M., Wen, T., Wu, T. et al. (2025). Leveraging data analytics for detection and impact evaluation of fake news and deepfakes in social networks. Humanities and Social Sciences Communications, 12, 1040. https://doi.org/10.1057/s41599-025-05389-4
  • DeVerna, M.R., Pierri, F., Ahn, Y.Y., et al. (2025). Modeling the amplification of epidemic spread by individuals exposed to misinformation on social media. npj Complexity, 2, 11. https://doi.org/10.1038/s44260-025-00038-y
  • Denniss, E., & Lindberg, R. (2025). Social media and the spread of misinformation: Infectious and a threat to public health. Health Promotion International, 40(2). https://academic.oup.com/heapro/article/40/2/daaf023/8100645
  • Mündges, S., & Park, K. (2024). But did they really? Platforms’ compliance with the Code of Practice on Disinformation in review. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1786
  • Statista. “Social media usage frequency 2024, by generation – Germany.” Statista, July 2025.
  • Statista. “Social media usage 2024 by brand, by generation – Germany.” Statista, July 2025.
  • Statista. “Most popular social networks worldwide as of February 2025, by number of monthly active users (in millions).” Statista, July 2025.
