Navigating the complex world of malign information and disinformation

Guest Writer: the Lynx


Disinformation, defined as intentionally misleading or manipulated information, has become a significant factor across politics, business, media, education, religion, and public discourse. Its spread has grown rapidly, facilitated by social media platforms and a global industry dedicated to disseminating unfounded information. Today, anyone can launch a large-scale, sophisticated influence campaign: creating and spreading disinformation is cheap, effective, and often entirely legal.


Digitalization has driven societal change, creating both opportunities and risks. Despite the widespread adoption of social media over the past 15 years, we still struggle to manage the unforeseen challenges it presents. The internet's ability to provide access to vast amounts of information and connect like-minded individuals is a double-edged sword: it also enables antagonists to spread their messages. Serious criminals, for instance, now use social media to "market" their violent activities.


Navigating today's information environment requires patience, systematic approaches, and technical expertise. Source criticism is not a magical solution; even experts struggle to determine the authenticity of videos, images, or texts. In an era where intelligence experts and leading fact-checkers face significant challenges, expecting individuals to accurately identify disinformation is unrealistic.


The challenge lies not in making individuals more critical but in maintaining trust within public and private organizations, media, and between individuals. Trust is a fundamental building block of democratic discourse, a functioning rule of law, and a welfare society.


Encouraging individuals to increase their media and information literacy, with a focus on source trustworthiness, is crucial.


The development of malign information influence highlights that it is not just about the "opponent" but also about society and its citizens. Organizations must understand their target audiences' needs and interests to create the right conditions to meet them. Malicious actors exploit vulnerabilities without regard for the target audience's interests, a key distinction that separates ethical actors from antagonists.


Disinformation is spread by a diverse range of actors, including states like Russia, China, and Iran, which have extensive disinformation ecosystems. Criminal and commercial actors also play a significant role, often motivated by financial gain. Even Western states, business actors, and politicians sometimes use misleading methods to achieve their goals.


The development of generative AI, such as large language models (LLMs), presents both opportunities and risks. AI enables the creation of synthetic media, which can be used for malicious purposes, including bypassing biometric barriers, manipulating satellite photos, and conducting large-scale deep voice attacks. The line between harmless use of modern technology and security risks is increasingly blurred.


Building resilience against disinformation

While stopping disinformation entirely may be impossible, we can strengthen our resilience through various measures. Increasing general knowledge about information influence, understanding the intertwined nature of physical, digital, and cognitive domains, and utilizing existing frameworks and tools are essential steps. Proactive communication, strategic planning, and collaboration across sectors can help build and maintain trust and resilience.


How should companies think and act regarding these issues?

  • Organizational resilience is achieved by identifying which values are at risk and to what extent, and by addressing the security chain for each of them. Building resilience is not just a resource issue; it is a matter of structure and understanding. The goal is reached not merely by having more employees but by having employees who understand the challenges.

  • Trust is everything. Identify which target audiences are particularly vulnerable and/or critical for the operation to function. Consider employees, the brand, and trust as cognitive protected objects and build trust within (and outside) your organization before an incident occurs.

  • Work strategically and proactively with communication. Who are our target groups, what are their needs and interests? Which communication arenas should we be active on, and which should we monitor?

  • Ensure analytical capability. Establish, maintain, and further develop your own capability for environmental analysis and threat intelligence, including open-source intelligence (OSINT) investigations, IT forensics, and web analysis.

  • Educate your staff and build personal firewalls. Conduct information and education initiatives to raise awareness, thereby increasing your resilience.

  • Practice often, practice simply, and practice at an advanced level, ranging from short scenario discussions to systematic “Red Teaming”.

  • Collaborate and meet in new constellations. Contribute to creating conditions for dialogue and cooperation between the public sector, business, and universities/colleges, but also with key individuals in civil society, entrepreneurs, startups, non-profit organizations, and enthusiasts. This generates innovation and knowledge. Build resilience together.

In recent years, the information landscape has transformed dramatically, driven by digitalization and the rise of social media. This transformation has created a complex and blurry environment in which distinguishing legitimate information from malicious disinformation is increasingly challenging. In this post, I have discussed and given examples of how malign information influence and disinformation have evolved, the risks they pose to society, and how we, as individuals and organizations, can strengthen our resilience against these threats.


The information environment can be compared to a playing field without lines, referees, or rules, and each year it becomes more complex and harder to navigate. Legitimate opinion formation (including satire, lobbying, sales, and marketing) now coexists with malicious communication such as fake grassroots movements, state media propaganda, and proxy actors. Algorithms often exacerbate the problem by promoting harmful content, or by being manipulated into doing so.