In this environment, truth is the first casualty, long before any human life is lost. What unites modern wars is not merely the existence of fake news but the speed at which it spreads. Clout is a new kind of weapon. Viral moments can build international pressure, provide a pretext for retaliation, or rally domestic support. Hashtags become rallying cries and shape public opinion. TikTok videos set to dramatic music and gut-wrenching visuals emerge as recruitment tools. Social media influencers and activists, knowingly or unknowingly, act as digital foot soldiers, garnering millions of views with each individual post. In a digital war, the battlefield is the algorithm. Emotion carries the argument, because the most shared story is the most powerful, regardless of how detached it is from reality.
Artificial intelligence is another catalyst for algorithmic dominance in the digital war. Today, AI algorithms determine not just what people see but what they believe, because echo chambers reinforce certain opinions and reshape perceived reality. Engagement-maximising recommendation systems have a disturbing tendency: the more polarising or inflammatory the content, the more likely it is to be pushed to the top. In wartime, AI does not verify; it amplifies. It directs attention to the most audacious claim, the most lurid image, the loudest voice, not the most accurate one. In war, that means the story most likely to inflame passions wins; truth spreads far less virally on social media than lies do.
AI-driven bots are far more dangerous: smart agents that impersonate real users. These bots can hold conversations, stoke emotions, and inject false information into real-time discussions. Amid recent flare-ups of conflict in South Asia and the Middle East, thousands of bot accounts were observed pushing trending hashtags as part of systematic disinformation campaigns. They comment like humans, retweet propaganda, and pile in to create the impression of consensus. Unlike traditional propaganda, which required manpower and tactics, these bots are tireless, fast, and nearly impossible to trace. And unlike conventional wars, there is no Geneva Convention for this fight: no established rules on who may be targeted, how civilians are to be protected, or how prisoners are to be treated. There are no uniforms or flags in the digital war, only shadows, code, and fake identities.
As the boundaries between fiction and fact, offence and defence, war and propaganda blur, the international order is thrust into a place for which it is ill-suited. Rules are haphazard, responses are often reactive, and citizens are increasingly unsure whom, or what, to trust. Media organisations lag behind the surge of viral content, and fact-checkers cannot keep pace with the lies. When the enemy may not wear fatigues or carry arms but instead hides behind firewalls, VPNs, and anonymous accounts, the real challenge is to design counter-strategies and chart a way forward. To navigate the evolving digital battlefield, there are several measures we need to sound out and actually take up:
The first firewall is mindfulness. Individuals need tools to distinguish fact from propaganda and fiction. Building digital literacy from the ground up is crucial for a resilient and informed society. Media literacy campaigns must be embedded in school curricula, in public outreach for those outside formal education, and in journalism ethics training, teaching people to think critically, evaluate sources, and identify manipulation. There are many instances in which AI itself gives wrong information, whether because of the sources available to it or because of the limitations of English-language content. So it is crucial to make people aware that they should check the sources behind information compiled by AI tools. Governments and institutions must respond to fake narratives quickly, without resorting to censorship. Open-data initiatives, AI-assisted fact-checking, and vigilant, credible digital spokespersons can help defuse false narratives immediately. In cross-border conflicts, silence is invariably turned into a weapon.
If AI bots are becoming dangerous digital soldiers, the world needs rules of engagement for machine agents as well. Civil society and international researchers must come forward to propose ethical frameworks that limit how AI can be used in cyber warfare, especially against civilians.
As with traditional wars, cyber conventions and treaties are necessary to delineate digital war crimes, limit the targeting of civilian infrastructure online, and hold states liable for backing disinformation campaigns.
It may be an ideal, but imagine a cross-border collaboration and digital peacekeeping force, composed of forensic analysts, fact-checkers, and ethical hackers, operating globally during wars and conflicts to monitor, evaluate, debunk, and defend against a borderless information war. It would certainly require an alliance that goes beyond national interests.
Until the world adopts new rules and defences for this digital battleground, fake news, disinformation, deepfakes, cyber offensives, and algorithmic manipulation will keep shaping not just how wars are fought but why they are fought in the first place. Let us help each other navigate these emerging threats and mitigate the impact of digital warfare.
Dr. Ayesha Ashfaq
The writer is the Chairperson and Associate Professor at the Department of Media and Development Communication at the University of the Punjab.
Provided by SyndiGate Media Inc. (Syndigate.info).