Disinformation, defined as intentionally misleading or manipulated information, has become a significant factor across domains including politics, business, media, education, religion, and public discourse. Its spread has grown exponentially, facilitated by social media platforms and a global industry dedicated to disseminating unfounded information. Today, anyone can launch a large-scale, sophisticated influence campaign: creating and spreading disinformation is cheap, effective, and often entirely legal.
Digitalization has driven societal change, creating both opportunities and risks. Despite the widespread adoption of social media over the past 15 years, we still struggle to manage the unforeseen challenges it presents. The internet's ability to provide access to vast amounts of information and connect like-minded individuals is a double-edged sword: it also enables antagonists to spread their messages. Serious criminals, for instance, now use social media to "market" their violent activities.
Navigating today's information environment requires patience, systematic approaches, and technical expertise. Source criticism is not a magical solution; even experts struggle to determine the authenticity of videos, images, or texts. In an era where intelligence experts and leading fact-checkers face significant challenges, expecting individuals to accurately identify disinformation is unrealistic.
The challenge lies not in making individuals more critical but in maintaining trust within public and private organizations, media, and between individuals. Trust is a fundamental building block of democratic discourse, a functioning rule of law, and a welfare society.
Encouraging individuals to increase their media and information literacy, with a focus on source trustworthiness, is crucial.
The development of malign information influence shows that it is not just about the "opponent" but also about society and its citizens. Organizations must understand their target audience's needs and interests in order to meet them. Malicious actors exploit vulnerabilities without regard for the target audience's interests; this key distinction separates ethical actors from antagonists.
Disinformation is spread by a diverse range of actors, including states like Russia, China, and Iran, which have extensive disinformation ecosystems. Criminal and commercial actors also play a significant role, often motivated by financial gain. Even Western states, business actors, and politicians sometimes use misleading methods to achieve their goals.
The development of generative AI, such as large language models (LLMs), presents both opportunities and risks. AI enables the creation of synthetic media, which can be used for malicious purposes, including bypassing biometric barriers, manipulating satellite photos, and conducting large-scale deep voice attacks. The line between harmless use of modern technology and security risks is increasingly blurred.
Building resilience against disinformation
While stopping disinformation entirely may be impossible, we can strengthen our resilience through various measures. Increasing general knowledge about information influence, understanding the intertwined nature of physical, digital, and cognitive domains, and utilizing existing frameworks and tools are essential steps. Proactive communication, strategic planning, and collaboration across sectors can help build and maintain trust and resilience.
How should companies think and act regarding these issues?
In recent years, the information landscape has transformed dramatically, driven by digitalization and the rise of social media. This transformation has created a complex, blurred environment where distinguishing legitimate information from malicious disinformation is increasingly challenging. In this blog, I will discuss and give examples of how malign information influence and disinformation have evolved, the risks they present to society, and how we, as individuals and organizations, can enhance our resilience against these threats.
The information environment can be compared to a playing field without lines, referees, or rules. Each year, it becomes more complex and harder to navigate. Legitimate opinion formation (including satire, lobbying, sales and marketing) now coexists with malicious communication like fake grassroots movements, state media propaganda, and proxy actors. Algorithms often exacerbate the issue by promoting harmful content or being manipulated to do so.