
When Reality Can Be Fabricated: The New Challenge of Disinformation in the Age of Artificial Intelligence

The rapid spread of artificial intelligence tools capable of generating highly realistic images, audio, and video is transforming the nature of disinformation. The ability to fabricate convincing but entirely false content poses new challenges for governments, companies, and institutions operating in a digital environment where information spreads at unprecedented speed.

Soros Gabinete
March 13, 2026
5 min read

A new technological shift in disinformation

Disinformation is not a new phenomenon. Rumors, propaganda, and manipulated information have existed for centuries. What has changed in recent years is the technological context in which information is produced and distributed.

Advances in generative artificial intelligence now make it possible to create highly convincing images, audio recordings, and videos in a matter of minutes. Content that once required significant technical expertise can now be generated using widely accessible tools. As a result, fabricated statements, manipulated visuals, or entirely staged scenes can circulate online with a level of realism that makes them difficult to distinguish from authentic material.

This has led to the rapid growth of so-called deepfakes, synthetic media designed to imitate real people, events, or situations.

The scale of this phenomenon is expanding rapidly. Estimates suggest that the number of deepfake files circulating online could grow from around 500,000 in 2023 to as many as 8 million by 2025, representing an increase of more than 1,500% in just two years.

The implications go far beyond isolated cases of misinformation: they point to a structural shift in how false narratives can be created and disseminated at scale.

Research illustrates how quickly this phenomenon is evolving. According to the World Economic Forum, misinformation and disinformation are among the most significant global risks in the short term. Earlier analysis by the deepfake detection company Deeptrace found that the number of deepfake videos online doubled in a single year, surpassing 14,000 identified videos as early as 2019. Since then, the rapid development of generative AI models has significantly accelerated the creation of synthetic media.

At the same time, the structure of digital platforms amplifies the potential reach of such content. A study conducted by the Massachusetts Institute of Technology examining the spread of information on Twitter found that false news stories were 70% more likely to be retweeted than true ones, and that they traveled significantly faster across online networks. In practice, this means that misleading narratives can reach millions of users before verification or fact-checking mechanisms have time to respond.

Implications for organizations in a rapidly evolving information environment

Although disinformation is often discussed in relation to politics or public debate, its impact increasingly extends to companies and institutions. In highly interconnected digital ecosystems, a manipulated image, a fabricated video, or a coordinated narrative can rapidly influence public perception and generate reputational risks.

For organizations, this creates a new communication challenge. Monitoring the information environment, identifying emerging narratives, and responding effectively to misleading content are becoming essential components of reputation management and strategic communication.

Given the scale of online content production, manual monitoring alone is no longer sufficient. As a result, technological solutions capable of analyzing large volumes of digital information are becoming increasingly important. Artificial intelligence systems are now being used to detect patterns associated with manipulated narratives, coordinated campaigns, or suspicious sources of information.
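One of the simplest signals such systems look for is coordinated amplification: many accounts posting near-identical text in a short window. The sketch below is a minimal, illustrative example of that idea only, not a description of any vendor's actual system; the function names, the token-set similarity measure, and the 0.8 threshold are all assumptions chosen for clarity.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two post texts (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_coordinated(posts, threshold=0.8):
    """Naive coordination signal: flag pairs of near-identical posts
    made by different accounts.

    posts: list of (account, text) tuples.
    Returns a list of (account_i, account_j) pairs whose texts meet
    or exceed the similarity threshold.
    """
    flagged = []
    for (acc1, t1), (acc2, t2) in combinations(posts, 2):
        if acc1 != acc2 and jaccard(t1, t2) >= threshold:
            flagged.append((acc1, acc2))
    return flagged

# Hypothetical sample data: two accounts pushing the same message.
posts = [
    ("user_a", "Breaking: the video everyone is sharing is real"),
    ("user_b", "Breaking: the video everyone is sharing is real"),
    ("user_c", "I went hiking this weekend and it was great"),
]
print(flag_coordinated(posts))  # → [('user_a', 'user_b')]
```

Real detection platforms combine many weaker signals of this kind (posting timing, account age, network structure, media forensics) rather than relying on text similarity alone; the example is only meant to make the underlying pattern-matching idea concrete.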

A number of companies and research initiatives are developing such tools, including platforms created by organizations like Trueflag, which aim to help identify and monitor emerging disinformation dynamics across digital environments.

A growing challenge for the digital information ecosystem

The ability to generate realistic yet entirely fabricated content, combined with the speed of online information diffusion, has fundamentally altered the nature of the disinformation problem.

The challenge today is no longer limited to correcting inaccurate information. Increasingly, it involves identifying content deliberately designed to appear authentic and capable of spreading rapidly across digital networks.

In this evolving landscape, preserving trust in information and ensuring the integrity of public discourse has become a strategic issue not only for governments and media organizations but also for companies and institutions operating in complex digital environments.
