
One of the most revealing aspects of modern conflict is not what happens on the battlefield, but what unfolds in the information space. Every major war today is accompanied by a parallel struggle to shape narratives, control perception, and influence opinion. Social media has become one of the primary battlegrounds in this contest.
Recent tensions involving the United States, Israel, and Iran, together with the ongoing conflict between Russia and Ukraine, once again illustrate how information is weaponised. News feeds, messaging platforms, and online forums are flooded with dramatic footage, confident analysis, and sensational claims. Much of it is unreliable. Some of it is fabricated entirely by artificial intelligence (AI).
Propaganda itself is nothing new. Warfare has always been accompanied by attempts to influence public perception. Governments historically used newspapers, radio broadcasts, leaflets, and rumours to shape how events were understood. What has changed is the speed and reach of modern communication. Social media allows narratives to reach millions of people within minutes. Once a story gains traction online, correcting it becomes extremely difficult.
A large part of the problem lies in human psychology. People are naturally inclined to accept information that confirms what they already believe. Psychologists refer to this as confirmation bias. When individuals encounter a story that aligns with their political views or emotional preferences, they are less likely to question it. Instead, they accept it and pass it along.
During conflict this tendency becomes particularly powerful. Supporters of one side readily circulate stories that portray their side as competent, moral, and successful while depicting the opponent as incompetent or malicious. Even questionable claims gain traction because they fit the narrative people want to believe.
Emotion further amplifies the process. Content that provokes anger, fear, outrage, or moral indignation spreads far more rapidly than careful and verified reporting. Social media platforms reward engagement. Posts that generate reactions are pushed further by algorithms, which means the most emotionally charged messages often receive the greatest visibility regardless of their accuracy.
Repetition also plays an important role. When people encounter the same claim across multiple posts, accounts, and platforms, it begins to feel credible. Psychologists describe this as the illusory truth effect. Familiarity creates the impression of accuracy. In the digital environment this process accelerates because coordinated networks of AI-driven or human-operated accounts can repeat identical narratives simultaneously. The result is the appearance of consensus, even when none exists.
Group identity reinforces these dynamics. Information is rarely evaluated purely on evidence. It is filtered through the lens of belonging. If a particular narrative becomes widely accepted within a political or ideological group, rejecting it can feel like rejecting the group itself. People therefore defend claims not necessarily because they have verified them, but because those claims signal loyalty.
Social media also creates an illusion of expertise. A confident tone, some military terminology, and a few maps or satellite images can create the appearance of authority. During any conflict the online space quickly fills with self-appointed analysts who confidently discuss strategy, operations, and intentions. In reality, very few people outside government or military planning circles possess reliable insight into what is actually unfolding.
AI is now accelerating this dynamic in ways that were unimaginable only a few years ago. AI tools make it possible to generate convincing text, images, audio, and video within seconds. What once required teams of specialists can now be done in minutes by a single individual with access to the right software.
This has several important implications for propaganda. Firstly, AI allows disinformation to be produced at scale. Thousands of posts, comments, and articles can be generated rapidly and distributed across platforms. The sheer volume creates the illusion of widespread agreement or confirmation.
Secondly, AI-generated images and video make deception far more convincing. Photographs of battlefield scenes, destroyed equipment, or political figures can be fabricated with remarkable realism. A recent example was the circulation of photos claiming that US Delta Force soldiers had been captured in Iran. It was obvious to the trained eye that the photos were AI-generated, yet the claim was believed and spread like wildfire on social media.
Related to this is the increasing misuse of footage from highly realistic digital war games. Modern titles such as Arma 3 are designed with detailed environments, weapon systems, and visual effects that closely resemble real combat footage. As a result, gameplay clips are sometimes presented online as authentic battlefield recordings. A recent example involved footage circulated on social media of what appeared to be a United States aircraft carrier engulfed in flames. The video was later traced back to gameplay from Arma 3. The combination of cinematic camera angles, realistic explosions, and convincing audio effects can easily mislead those unfamiliar with the game.
For the average viewer, distinguishing between authentic material and manipulated or digitally generated content is becoming increasingly difficult.
Thirdly, AI enables the automation of influence. Networks of automated accounts can circulate narratives, amplify claims, and respond to criticism continuously. These systems operate without pause and create the impression that certain viewpoints are far more widely held than they are.
The result is an information environment where verifying authenticity becomes increasingly challenging. As AI-generated material becomes more sophisticated, even genuine evidence can be dismissed as fabrication. In such a climate the distinction between truth and narrative becomes blurred.
It is also worth remembering that the primary target of wartime propaganda is not always the enemy. Much of it is directed at domestic audiences and neutral observers. Confusion, doubt, and polarisation can be strategic objectives. If enough contradictory narratives circulate, the public becomes uncertain about what is true and chooses to remain passive rather than take any action. In that environment, manipulation becomes easier.
South Africa is not immune to these dynamics. False claims, misleading videos, and sensational stories regularly circulate through WhatsApp groups, TikTok clips, and Facebook posts. Even after something has been proven false, it often continues to spread because people share it without verification, seeking both a sense of empowerment and the reassurance of belonging within their social group.
In the end, the greatest vulnerability in the modern information environment is not technology. It is human behaviour. Social media did not invent propaganda. It simply created the most efficient delivery system propaganda has ever had.
Until people become more sceptical consumers of information and more disciplined about verifying what they share, the information battlefield will remain crowded with rumours, exaggerations, and deliberate deception. In modern conflict, the struggle for truth is often as important as the struggle for territory.
Prof Dewald Venter from the Vaal University of Technology is Associate Professor in Tourism Management and Military Heritage Researcher.








