In a Europe marked by real fears and social tensions, the new information war is fought through the invisible amplification of our own anxieties.
(Photo by Steve Johnson on Unsplash)
In March 2025, a German Green Party politician received death threats after voting for continued Ukraine aid. The threats didn’t come from Russian trolls or obvious foreign accounts.
They came from her own constituents: university students and pensioners who’d spent months marinating in Telegram channels that looked locally run but traced back to servers in Moscow. She had no idea that the “grassroots opposition” to her vote had been carefully cultivated, amplified, and directed by foreign operations that never once posted in Russian.
This is what modern information warfare looks like: your neighbor’s genuine anger, weaponized by someone else’s algorithm.
Welcome to 2025, where the biggest threat to European democracy isn’t tanks or hackers; it’s the invisible hand that decides which of your anxieties get amplified until they tear your society apart. Russia, China, Iran, and Qatar have figured out something crucial: they don’t need to convince you of anything. They just need to find what you already believe, make it louder, and ensure you never hear the other side.
And they’re doing it through platforms you trust, causes you care about, and people you know.
Here’s something intelligence analysts started noticing in 2024 that went mainstream in 2025: Russian and Chinese state media don’t coordinate their lies, yet somehow they sound identical. When CGTN publishes a piece about “NATO expansion provoking the Ukraine war,” within hours Russian channels are sharing it.
When RT frames Western human rights criticism as hypocrisy, Chinese diplomats amplify it on X. It’s like watching two pianists play the same song without sheet music.
This “narrative convergence” hit peak sophistication during the Czech elections in early 2025. Pro-Russian content claiming NATO was planning to draft Czech citizens for Ukraine appeared simultaneously on Chinese-language forums and Russian Telegram channels, then got translated and spread by accounts that looked Czech but showed coordination patterns linking back to both Moscow and Beijing operations.
Czech voters trying to research the claims found what looked like global consensus from independent sources, when really they were seeing the same lie refracted through different mirrors.

The mechanism is elegant in its simplicity. AI tools now generate localized versions of the same core narrative, adjusted for different European audiences.
A story about “NATO aggression” gets framed as a sovereignty issue in France, a financial burden in Germany, and American manipulation in Poland. The content feels native to each country because it’s designed to exploit that country’s specific anxieties.
By the time fact-checkers debunk the original claim, seventeen versions are already circulating in local languages, shared by people who have no idea they’re spreading foreign propaganda.
What makes this particularly effective is how it exploits Europe’s linguistic diversity. A debunking published in English rarely reaches the Romanian or Swedish audiences already convinced by localized versions of the lie. Russian and Chinese operations don’t need Europeans to speak their languages; they’ve learned to speak ours, all of them, simultaneously, saying slightly different versions of the same thing until it sounds like truth through repetition.
The Telegram Problem Nobody Wants to Solve
You might be thinking: surely people recognize Russian propaganda when they see it? That’s the thing: they don’t see it. Not anymore. While everyone watched Meta and X tighten moderation, Russian and Chinese operations migrated to platforms that feel safer, more independent, more authentic. Telegram became ground zero.
Consider how information actually flows now.
In September 2024, Telegram channels with names like “Prague Truth” or “Berlin Real News” started posting content that looked locally produced: complaints about energy prices, immigration concerns, frustration with EU bureaucracy. Nothing obviously foreign. These channels grew to tens of thousands of followers, people who’d left mainstream platforms because they felt censored or manipulated. The content felt refreshingly honest, unfiltered, real.
Then, gradually, these channels started mixing in content from banned Russian sources. Not labeled as Russian, just shared as “what they don’t want you to know.” An RT documentary about Ukraine becomes “independent investigation mainstream media ignores.” A Sputnik article about NATO becomes “military analyst explains real agenda.” The followers, thinking they’ve found authentic alternative news, share it with friends and family. Within weeks, content that would be instantly flagged on Facebook spreads through European networks like wildfire, amplified by people who genuinely believe they’re sharing truth.
Telegram reported 50 million EU users in 2024, but consistently underreports to dodge Digital Services Act requirements. The real number is likely double that. And here’s what makes it dangerous: the platform’s encryption and loose moderation mean that by the time anyone notices a coordinated campaign, it’s already reached millions.
The 2,000% surge in scams on the platform during early 2025 wasn’t coincidental. Financial fraud and political disinformation merged into a toxic ecosystem where people couldn’t tell the difference between a Bitcoin scam and a foreign influence operation, because often they were the same thing.
Your uncle who shares those Telegram posts about refugee crime statistics? He’s not a Russian agent. He’s a retired teacher worried about his neighborhood who found a channel that validates his concerns. He has no idea the channel’s administrator is running fifty similar channels across Europe, all seeded with content that originated in Moscow but looks local enough to feel trustworthy. The operation succeeds because it doesn’t look like an operation.
The Normalization Machine: When Foreign Becomes Local
The really sophisticated stuff doesn’t announce itself at all. It looks like a cultural festival, a climate protest, or a trending app. China spent 2025 doubling its diaspora influence networks across Europe, reaching 72% of European countries. These aren’t spy rings. They’re Lunar New Year celebrations, student associations, and community centers that do genuine cultural work while subtly normalizing Beijing’s positions on Taiwan, Xinjiang, and trade policy.

Walk through any major European city and you’ll find events sponsored by organizations with benign names, offering free cultural programming that draws hundreds of locals. The organizers aren’t lying about the cultural content.
But between the dance performances and calligraphy demonstrations, attendees hear framing about China’s “peaceful rise” or “internal affairs” that shouldn’t concern outsiders. It’s influence that feels like friendship, which is precisely why it works.
Meanwhile, AI bots have evolved past the point where you can spot them. Remember when fake news had obvious tells, like broken English and weird phrasing? Those days are over.
In 2025, AI-generated content arguing that EU climate regulations are “elitist scams hurting working families” reads exactly like a concerned citizen’s blog post. Because the AI learned from millions of real citizen posts, absorbing not just language patterns but argumentative structures, cultural references, and emotional resonance.
France and Germany saw concentrated campaigns of this kind throughout 2025, timed perfectly with debates over the Green Claims Directive. Posts appearing on Mastodon and Bluesky (platforms that feel progressive and independent) argued that climate policy was class warfare disguised as environmentalism.
The arguments were sophisticated, citing real economic data and genuine hardships. They spread through leftist networks, amplified by people who sincerely believed they were fighting elitism. Nobody realized Russian and Chinese operations had identified which economic arguments would resonate, generated hundreds of variations, and seeded them across platforms where they’d be amplified by authentic activists.
China’s version focused on offering “affordable alternatives,” flooding feeds with Shein fast fashion content while lobbying against trade tariffs. The environmental costs stayed hidden behind accessibility messaging: “Why should working families pay more for clothes when cheaper options exist?” It’s a reasonable question that happens to align perfectly with Chinese state interests in avoiding EU trade restrictions.
The people making this argument mostly have no idea where it came from or whose interests it serves.

Gaza: Where Every Playbook Converged

If you want to see how all these tactics work together, look at Gaza.
Since October 2023, the information landscape around the war has revealed something genuinely new: Russian, Iranian, and Qatari sources coordinating without coordinating, each amplifying the others because their interests temporarily aligned around fracturing European consensus.
The mechanism is almost beautiful in its efficiency. Russian sources provide the ideological framing about Western hypocrisy: “Europe condemns our actions in Ukraine while supporting Israeli operations in Gaza, exposing liberal democracy as a sham.” Iranian networks supply the emotional content: accounts posing as independent journalists post genuine footage of Gaza destruction mixed with misleading context, doctored timestamps, and false attribution of responsibility.
These accounts gained millions of European followers throughout 2024, particularly among younger audiences on Instagram and TikTok, by seeming to document atrocities mainstream media ignored.
Then Al Jazeera, backed by Qatari state funding, provides institutional legitimacy and professional presentation that makes the narrative feel credibly diverse.
A Berlin university student scrolling Instagram sees a post from what looks like a Palestinian journalist documenting civilian casualties.
The footage is real, the horror is genuine, the moral outrage is justified. She shares it. What she doesn’t see: Iranian intelligence identified that journalist’s content, amplified it through coordinated accounts, ensured it reached audiences predisposed to share, and timed the amplification to coincide with German parliamentary debates on Middle East arms sales. Her authentic activism becomes infrastructure for foreign influence whether she realizes it or not.
This goes beyond typical disinformation because it exploits real humanitarian catastrophe. European sympathy for Palestinian suffering is genuine, not manufactured. But foreign operations don’t need to create these feelings; they just need to amplify them selectively, strip away complexity, and channel them toward politically useful conclusions.
Every nuanced analysis of Hamas’s military tactics gets buried under coordinated campaigns framing such analysis as “genocide apologia.” Think tanks publishing research on Iranian regional strategy face harassment campaigns that appear to come from student activists but show clear foreign amplification patterns: identical phrasing, simultaneous posting across platforms, targeting that spreads from initial foreign accounts to genuine European users within hours.
The result? European academic institutions become terrified of research that might complicate preferred narratives. Professors acknowledging operational complexity in urban warfare face calls for dismissal.
Politicians who express support for Israel’s security while calling for civilian protection get framed as genocide enablers, not by foreign agents but by genuine activists fed curated content that stripped away every complicating factor. The middle ground where most security professionals actually operate, recognizing legitimate concerns on multiple sides while seeking pragmatic harm reduction, becomes politically radioactive.
You might think this sounds paranoid, like seeing Russian agents behind every protest sign. But that’s not what’s happening. The protesters are real, their concerns are real, their moral outrage is justified. What’s invisible is the infrastructure that decides which protests get amplified, which arguments dominate discourse, and which voices get buried. Iranian operations don’t create anti-war movements; they ensure specific anti-war movements achieve outsized visibility at politically convenient moments.
Russian sources don’t invent criticism of Israeli actions; they make sure that criticism drowns out every other conversation Europe should be having about Ukraine aid, defense integration, or sanctions enforcement.
The Economic Thread That Ties Everything Together
None of this happens in a vacuum. Europe’s economic entanglement with China creates leverage that makes every information operation more effective. When Europeans buy cheap Chinese goods, a trade flow worth hundreds of billions of euros annually, a portion of that economic relationship flows toward sustaining Russia’s war in Ukraine. Beijing supplies 80% of Moscow’s dual-use technology: civilian equipment like drone components that enable military operations.
This creates a vicious cycle. Chinese imports undercut European manufacturing, weakening industrial capacity and making member states more dependent on continued trade. That dependence makes aggressive sanctions politically costly. When German automakers rely on Chinese supply chains, when French retailers depend on Chinese consumer goods, when Spanish ports process Chinese imports, each economic tie becomes leverage that complicates unified European responses to either Russian aggression or Chinese coercion.
The 2025 EU rules requiring Chinese firms to share technology secrets for market access tried to address this, but previous deals had already transferred critical “tacit knowledge,” the unwritten factory expertise that never appears in patents. Midea’s acquisition of Germany’s KUKA robotics firm showed the pattern: Chinese firms gained manufacturing know-how that strengthens their competitive position while creating dependencies that make European pushback on human rights or regional security prohibitively expensive.
The information operations and economic leverage reinforce each other. Disinformation makes European publics oppose sanctions. Economic dependence makes politicians reluctant to enforce them. Together, they create a strategic vulnerability that autocracies exploit whenever European unity matters most.

What Invisible Actually Means

Here’s the thing that keeps intelligence analysts awake at night: most people participating in these campaigns have absolutely no idea they’re doing so.
The German politician receiving death threats wasn’t targeted by obvious Russian trolls. She was targeted by her own constituents whose fears had been identified, cultivated, and amplified until they became uncontrollable. The Berlin student sharing Gaza content isn’t an Iranian agent. She’s a humanities major with genuine humanitarian concerns whose emotional responses are being systematically exploited.
Your uncle on Telegram isn’t compromised. He’s a retiree whose anxieties make him vulnerable to content designed specifically to validate and radicalize those feelings.
The invasion is invisible because it looks like activism, feels like moral clarity, and spreads through people you know and trust. Foreign operations don’t need to convince Europeans to adopt foreign ideas. They just need to identify existing fracture points in European society and amplify the voices already arguing from those positions.
A French farmer genuinely worried about regulations, a Polish worker authentically anxious about cultural change, a Swedish pensioner really concerned about energy costs: none of these people are foreign agents. But when their genuine concerns get amplified by coordinated campaigns at strategic moments, they become unwitting participants in operations designed to fracture European cohesion.
This is what modern information warfare looks like: not armies of obvious trolls but subtle amplification of your own society’s existing conflicts until they become insurmountable. The autocrats are betting that open societies can’t defend themselves without becoming closed, that democracies can’t maintain shared factual reality when everyone’s algorithm feeds them different truths, that Europe will tear itself apart over genuine disagreements that foreign powers merely amplified at the right moments.
Understanding this doesn’t require dismissing anyone’s concerns or questioning activists’ sincerity. It requires recognizing that authentic outrage becomes infrastructure for foreign influence when authoritarian states identify which emotions to amplify, which facts to suppress, and which moments create maximum disruption.
The German politician’s constituents had real concerns about military spending. The Berlin student was right to care about civilian casualties. Your uncle’s neighborhood really did change. Their feelings are valid. What’s invisible is the hand on the volume dial, deciding which feelings become loud enough to drown out everything else.



