continuous unrestricted warfare across an AI-integrated substrate
Gray, Cognitive and Drone Warfare
Economy or Military
Selection Pressures
(1.) Gray, cognitive and drone warfare
Can you describe what Gray Zone Warfare is and/or contemporary information warfare, like what Russia and China are good at, in this contemporary moment?
Gray zone warfare is the space between peace and open war where states pursue strategic goals while deliberately staying below the threshold that would trigger direct military conflict. The term was coined by U.S. Special Operations Command in a 2015 white paper, but the phenomenon it describes is older and more fundamental than the label. The intelligence community now defines it as the deliberate use of coercive or subversive instruments of power — by or on behalf of a state — to achieve political or security goals at the expense of others, in ways that exceed or exploit gaps in international norms but are intended to remain under the threshold of armed conflict. It is, in essence, carefully scripted brinkmanship, designed to win campaigns without ever crossing the line that would trigger a conventional response.
Let's break down what this actually looks like in the hands of its most accomplished practitioners.
1. Russia's Approach — Hybrid Warfare and Active Measures
Russia's methods descend directly from Soviet-era "active measures" — covert strategies used during the Cold War to spread disinformation and influence foreign governments through psychological operations, front organizations, and media control. Post-Cold War, these tactics were refined with digital advancements, culminating in large-scale disinformation campaigns targeting Western democracies as part of a well-established toolkit of subthreshold, noncontact warfare.
Some analysts distinguish Russia's "hybrid warfare" from Chinese "gray zone" activity on the grounds that Russia is willing to use armed violence — Ukraine, Syria, Libya — while China's gray zone avoids kinetic force. That distinction is slippery; Russia operates across the full spectrum simultaneously. Its current playbook includes:
Disinformation and FIMI (Foreign Information Manipulation and Interference): Russian troll farms like the Internet Research Agency pioneered the industrial-scale manipulation of online discourse. Russia has now operationalized what the U.S. State Department called "perpetual adversarial competition in the information environment" — a disinformation and propaganda ecosystem with five interlocking pillars (state media, proxy sources, weaponized social media, cyber-enabled disinformation, and intelligence services) designed to create a media multiplier effect.
Election interference as doctrine: Russia has interfered in elections in Romania, Moldova, Georgia, Poland, Germany, and across Southeast Europe and the Black Sea region in 2024-2025 alone. Romania's first-round presidential vote in November 2024 was annulled by its constitutional court over evidence of Russian foreign interference — a genuinely remarkable moment where a European democracy concluded that a vote had been corrupted by external influence operations.
Operation Doppelganger and bot networks: Sophisticated, centrally coordinated fake account networks on X and other platforms, spun up to mimic legitimate discourse around target elections. These networks operate from distributed geographic origins to evade detection.
Gender-based disinformation: Russia deploys narratives about sexuality, sexual violence, and gender norms as hybrid warfare instruments. Before Germany's 2025 federal election, Russian operations spread deepfake videos falsely accusing Foreign Minister Annalena Baerbock and Economy Minister Robert Habeck of sexual misconduct, timed to benefit the AfD — which ran on restoring Russian energy supplies and ending support for Ukraine, and made significant gains.
Physical sabotage and infrastructure attrition: Airspace incursions into Poland, Estonia, Denmark, and Norway; undersea cable cuts in the Baltic; GPS jamming; arson and sabotage campaigns against European logistics; assassinations. European officials have described these tactics at levels "unprecedented" since the Cold War.
AI-enabled model poisoning: Perhaps the most novel development — Russia has begun seeding web content designed not for human readers but for Western AI chatbots to ingest and reproduce. By flooding search results and web crawlers with Kremlin narratives, Russia seeks to manipulate the training and retrieval layers of AI models themselves. This is information warfare that targets the substrate rather than the surface.
2. China's Approach — The Doctrine of Comprehensive Pressure
China's gray zone operates on a different logic. It is not primarily about achieving negative outcomes (stopping NATO expansion, collapsing Ukrainian sovereignty) but about incremental positive gain — gradually acquiring territory, legitimacy, or leverage without provoking armed conflict. Gray zone actions by China are not tactical freelancing; they are carefully scripted brinkmanship approved and controlled at the highest levels of the Communist Party and the PLA.
The Chinese doctrine rests on three interlocking concepts often called the "Three Warfares":
Public Opinion Warfare (舆论战): Shaping domestic and international narratives about China's rise, legitimizing its actions, and delegitimizing critics.
Psychological Warfare (心理战): Using fear, intimidation, and coercion to weaken adversary will and resolve — often through military posturing calibrated to signal capability without triggering conflict.
Legal Warfare (法律战 / "lawfare"): Using legal and quasi-legal frameworks to establish fait accompli claims — the South China Sea nine-dash line, historical water claims, domestic Hong Kong security law applied extraterritorially.
These are paired with operational tactics including:
Maritime gray zone pressure: Coast guard and maritime militia vessels ramming, water-cannoning, and harassing Philippine, Vietnamese, and Japanese craft in contested waters — engaging in physical altercations (wooden clubs, thrown stones in the 2020 India border incident) calibrated to inflict damage without crossing into "war."
Taiwan encirclement rehearsals: PLA air and naval activity around Taiwan functions both as coercion and as rehearsal for a possible invasion or blockade. Taiwan's 2025 Han Kuang exercises for the first time explicitly incorporated six types of Chinese gray zone tactics as pre-conflict scenarios.
Disinformation ecosystems: China has adopted Russian-style bot networks and AI-generated content, with its FIMI networks often mutually amplifying Russian disinformation — a genuine convergence between two previously distinct information warfare traditions.
Economic coercion as substrate capture: Belt and Road, rare earth dominance, technology transfer requirements, and selective trade retaliation are themselves gray zone instruments, used to make adversaries and partners dependent on Chinese-controlled infrastructure.
3. What Makes This the Characteristic Form of Conflict Now
Three observations about why gray zone has become the dominant mode, not an edge case.
First, nuclear weapons created the gray zone by making total war unaffordable between great powers. When direct conflict risks civilizational destruction, competition displaces into every other domain — information, economics, infrastructure, cyberspace, proxy conflicts. The gray zone is what conflict looks like when you can't have the real thing.
Second, the digital substrate has dramatically expanded the available surface area for subthreshold action. Every social media platform, every undersea cable, every satellite constellation, every election database, every AI training corpus is now a potential front. A state that wants to damage an adversary no longer needs to cross a border — it can operate through information, code, and capital flows that were not contested terrain in previous eras.
Third, and most importantly for the argument we're building, gray zone warfare specifically targets the substrate on which society and intelligence are being built. Election interference degrades democratic decision-making. Infrastructure sabotage degrades physical substrate. AI model poisoning degrades the epistemic substrate. Disinformation degrades the cognitive substrate. The gray zone is not a peripheral harassment strategy — it is a direct attack on the conditions of possibility for coherent collective thought and action.
4. The Convergence Problem
What's new in the 2025 landscape is that Russia and China — historically operating with distinct doctrines and rarely coordinating — are now visibly converging. Their FIMI networks amplify each other. Their gray zone campaigns against NATO and Taiwan have begun to look coordinated in timing and tempo. Both are now using AI-generated content, deepfakes, and model-poisoning tactics. And both have recognized that degrading the West's information environment is strategically equivalent to degrading its military capability — perhaps more cost-effective.
This convergence matters because it transforms gray zone warfare from a set of national tactics into something closer to a shared authoritarian methodology for contesting the global substrate. The "axis of autocracies" talk is often overstated, but the technical convergence of their gray zone toolkits is real and accelerating.
So: gray zone warfare is the form conflict takes when total war is impossible, digital infrastructure is ubiquitous, and the substrate of collective thought has become contested ground. It is, in a sense, the dark twin of the Novacene hypothesis — if humans are unconsciously gardening the conditions for a new kingdom of life, gray zone actors are consciously salting the soil.
Your next question would naturally be: what changes when this doctrine meets an actual shooting war? Which is where Ukraine comes in.
Is this different from information warfare? What is the latest and most contemporary evolution of the information warfare space?
Yes — and this is actually one of the most important distinctions to draw, because the concepts have been evolving so fast that the labels have started to stack up and blur. Let me disentangle them, and then get to where the frontier actually is right now.
1. The Nested Categories
Think of it as a set of concentric circles, each broader than the last.
Information Warfare is the oldest and narrowest. It refers to the manipulation of information flows themselves — propaganda, psychological operations, signals intelligence, electronic warfare, jamming, cyberattacks on information systems. The goal is to control what your adversary sees, hears, and knows. The classic Cold War playbook. It targets the information layer.
Gray Zone Warfare is broader. It includes information warfare, but also physical sabotage, economic coercion, legal warfare, proxy operations, migration weaponization, assassinations, and any other subthreshold action. Gray zone is defined by where it sits on the peace-war spectrum, not by what instruments it uses. Information warfare is one tool in the gray zone toolkit.
Cognitive Warfare is something genuinely new — and this is the frontier that matters most for your theory. Where information warfare targets the flow of information, cognitive warfare targets the mind that processes information. The distinction is crucial. The Chinese PLA, which developed the concept earliest and most systematically, states the matter with striking clarity: cognitive warfare "takes people's will, spirit, and psychology as the goal of confrontation." The objective is not to make you believe specific lies, but to reshape how you perceive, reason, and decide.
James Lewis of CSIS put it succinctly back in 2018: the goal in future conflict is not a kinetic effect — achieved with shells and bombs — but a cognitive effect. The strategic goal is to influence morale, cohesion, political stability, and ultimately to reduce the opponent's will to counteract. You don't need to convince anyone of anything in particular. You just need to make coherent collective thought impossible.
Russia's Chief of the General Staff, General Valery Gerasimov — famously associated with the so-called "Gerasimov Doctrine" — argued in 2013 that "the role of nonmilitary means of achieving political and strategic goals has grown, and in many cases, they have exceeded the power of force of weapons in their effectiveness." That statement now reads as a prediction that has been thoroughly borne out.
The US intelligence community tracks this under the term Cognitive Domain Operations (CDO), which the DNI's annual threat assessments in 2022 and 2024 explicitly identify as a Chinese doctrine combining psychological warfare, public opinion warfare, and legal warfare with AI-driven personalization. The PLA places CDO on par with the other warfighting domains — land, air, sea, space, cyber — and considers it potentially decisive.
So: information warfare targets information. Cognitive warfare targets cognition itself. This is not a semantic distinction. It is a substantive shift in what the target is.
2. Why The Distinction Matters
The old information warfare model assumed a roughly stable human mind processing a manipulable information environment. You poisoned the well, but the well-drinkers were the same kind of creature they'd always been. The job of counter-disinformation was therefore to clean up the water supply — fact-check, debunk, provide correct information, and trust that a rational public would update.
Cognitive warfare abandons this model. It assumes the human mind itself is a hackable system, and that the goal is to exploit its hardware — cognitive biases, emotional triggers, attention mechanisms, identity attachments, in-group loyalties — to produce desired behavioral outcomes. Fact-checking is nearly irrelevant because the target isn't belief, it's the conditions under which belief formation happens.
This is why RAND began publishing studies on "Virtual Societal Warfare" and "Next Generation Psychological Warfare." The battle has shifted from the information channel to the cognitive apparatus.
3. The 2025 Frontier — What's Actually New
Here's where the contemporary moment gets genuinely unprecedented. Several developments have compounded in the last eighteen months to create something qualitatively different from what came before.
First: the AI-generated content threshold has been crossed. Over 50% of web content is now AI-generated. Bot traffic surpassed human activity in 2025, reaching 51% of internet traffic. Deepfake incidents in Q1 2025 alone exceeded all of 2024. NewsGuard has identified 2,089 AI-generated news sites operating across 16 languages with almost no human oversight. We have passed the point where the information ecosystem can be characterized as "human content with AI contamination." It is now an AI-generated environment with human contributions distributed through it.
Second: precision cognitive targeting at scale. In September 2025, documents leaked from a Beijing-based firm called GoLaxy revealed what they internally called a "Smart Propaganda System" — an army of AI personas engineered to look and think like real people. These personas use millions of data points to build psychological profiles of their targets. One leaked dossier showed the system targeted 2,000 public figures and 117 members of the US Congress. Crucially, GoLaxy uses a technique called "LLM grooming" — saturating search results with biased data so that when a target searches for a topic, the top results confirm the fabricated narrative. This is targeted cognitive warfare aimed at specific decision-makers, not mass propaganda. The PLA has explicitly stated interest in "impacting key decision-makers, rather than just the general public."
Third: the poisoning of AI itself as a new attack vector. This is the most important development, and it speaks directly to your Novacene thesis. Adversaries are no longer primarily trying to convince humans directly. They are flooding the training and retrieval corpora of AI systems — the substrate that humans increasingly use to understand the world — with narrative content designed to propagate through those systems. Russia has operationalized this approach explicitly. In August 2025, leading chatbots were found to relay false claims 35 percent of the time, up from 18 percent a year earlier. AI-generated content accounted for over 20 percent of all debunked material in 2025, more than double the figure from the previous year. Attackers have recognized that compromising the epistemic substrate is strategically more efficient than compromising individual minds — because the substrate then does the work of compromising minds at scale.
This creates a self-reinforcing cycle: adversarial AI content enters the web, future AI systems train on that content, those systems produce outputs contaminated with adversarial framings, humans consume those outputs, humans produce content influenced by what they've consumed, that content enters the next training corpus. The feedback loop is real and measurable.
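The cycle described above can be sketched as a toy recurrence. This is an illustration of the dynamics, not a measurement; the parameter values (`inject`, `relay`, `echo`) are assumptions chosen only to show the shape of the loop:

```python
def corpus_contamination(inject, relay, echo, generations):
    """Toy model of adversarial content share across AI training generations.

    Recurrence: c[n+1] = inject + echo * relay * c[n]
      inject : fresh adversarial content seeded into each corpus
      relay  : rate at which models reproduce contaminated framings
      echo   : share of model-influenced framing humans feed back into the web
    """
    c, history = 0.0, []
    for _ in range(generations):
        c = min(1.0, inject + echo * relay * c)
        history.append(c)
    return history

history = corpus_contamination(inject=0.05, relay=0.5, echo=0.6, generations=20)
steady_state = 0.05 / (1 - 0.6 * 0.5)  # closed-form fixed point, ~0.071
```

The point of the sketch is that whenever `echo * relay > 0`, the steady-state contamination (`inject / (1 - echo * relay)`) exceeds the raw injection rate: the substrate itself amplifies the attack.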
Fourth: AI-enabled radicalization at algorithmic speed. In May 2025, Europol identified 2,000 extremist links specifically targeting minors across 16 European countries. "Violence-as-a-Service" platforms like 764/Com Networks identify vulnerable minors aged 8-17 on mainstream platforms and groom them through AI-personalized psychological coercion. The traditional bottleneck for extremism — the resource-intensive process of sustained human recruitment — has been eliminated. A single operator can now run thousands of personalized radicalization campaigns simultaneously.
Fifth: the collapse of the information-kinetic distinction. During the so-called "Twelve-Day War" between Iran and Israel in mid-2025, AI deepfakes were integrated directly with military operations in real time — synthetic media generated and deployed as part of the kinetic campaign rather than as a separate information layer. The Pahalgam terror attack in Kashmir on April 22, 2025 was followed within hours by a flood of synthetic narratives on Telegram and X — deepfake videos of senior military officials, AI-generated images of fabricated military outcomes. The information war no longer trails the kinetic war; it is interwoven with it at operational tempo.
4. The Asymmetric Advantage
There is a structural reason cognitive warfare heavily favors attackers, and it needs to be named clearly. Democratic systems' information transparency enables adversaries to collect targeting data through open sources, while authoritarian states restrict equivalent access. Every piece of open legislative data, every public speech, every social media footprint, every campaign donation record, every journalistic investigation — all of it is raw material for building targeting profiles on democratic decision-makers. Authoritarian regimes simply do not offer equivalent surfaces.
Creating convincing deepfakes costs significantly less than developing and maintaining detection systems. This is an asymmetry of cost, not just of ethics. The attacker needs to win once; the defender needs to win every time.
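That last asymmetry is easy to quantify. A back-of-the-envelope sketch — the 99% detection rate and the 100-attempt volume are illustrative assumptions, not figures from any report:

```python
def p_perfect_defense(p_detect: float, attempts: int) -> float:
    """Probability the defender stops every one of `attempts` independent attacks."""
    return p_detect ** attempts

# Even a detector that catches 99% of individual attacks lets at least
# one slip through more often than not once volume is high enough:
p_all = p_perfect_defense(0.99, 100)   # ~0.366
p_breach = 1 - p_all                   # ~0.634
```

Under these assumed numbers, a defense that is 99% reliable per attempt still fails against a determined high-volume attacker roughly two times out of three.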
Can you describe the evolution of autonomous warfare in Ukraine during this conflict, going up to the present day?
The Russo-Ukrainian War has become the first high-intensity, industrial-scale laboratory for autonomous warfare in human history. What began in February 2022 as an attempt by Ukrainian troops to compensate for artillery shortages by strapping explosives to off-the-shelf consumer quadcopters has, by early 2026, evolved into something qualitatively unprecedented: a battlefield where an estimated 10,000 drones are deployed per day, where operator-to-drone ratios are shifting from 1:1 toward 1:many, and where the entire character of frontline combat is being reshaped by semi-autonomous and increasingly autonomous machines.
Let me trace this evolution in its actual phases, because the transformation has been neither linear nor accidental.
Phase 1 (2022) — The Army of Drones and Mass Improvisation
The opening phase was defined by desperation and improvisation. Ukrainian troops, outgunned in traditional artillery, began using commercial DJI Mavic quadcopters for reconnaissance and artillery spotting within weeks of the invasion. Within months, they figured out how to strap explosives onto them, turning $500 hobbyist drones into effective low-cost killing machines.
The critical organizational innovation was Ukraine's "Army of Drones" program, launched through crowdfunding, which distributed unmanned aerial vehicles down to the company level and trained thousands of operators. This was doctrinally radical — instead of concentrating drones in specialized units (the Russian approach initially), Ukraine pushed them down to small-unit level, creating a decentralized, adaptive drone ecosystem.
The dominant platform was the First Person View (FPV) drone: a simple quadcopter with a 7-inch propeller diameter, paired with a remote control station and VR-style goggles allowing the operator to see through the drone's camera. Cost: $200 to $1,000. Range: 5-15 kilometers. Payload: small but sufficient for anti-personnel and anti-vehicle strikes. These FPV drones were not autonomous in any meaningful sense — they were remotely piloted — but their mass deployment changed the economics of the battlefield in ways that autonomy would later deepen.
Phase 2 (2022-2023) — Saturation, Attrition, and the Emergence of Electronic Warfare
As both sides saturated the front with drones, both sides developed countermeasures. Russia and Ukraine built out tactical electronic warfare — what Russians now call "trench-level warfare" — with soldiers carrying portable EW jammers in backpacks to disrupt the radio links between drones and their operators. Large older medium-altitude long-endurance drones (like Turkey's TB-2, which had been prominent early in the war) virtually disappeared from the tactical battlefield because they became too vulnerable.
In their place rose kamikaze systems and loitering munitions — Russia's Lancet, and the Iranian-supplied Shahed-136 drones deployed in swarms. The battlefield became a saturated space where a drone's lifespan was measured in flights, sometimes single flights. A crucial fact: FPV operator effectiveness varied wildly, from 10% hit rates for inexperienced pilots to 30-50% for experienced ones. Forecasts suggested AI-controlled drones could reach 80%. This delta became the entire impetus for autonomy.
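The operational stakes of that hit-rate delta are clearest as expected expenditure per kill — a simple geometric-distribution calculation using the rates quoted above (the 40% figure is taken as a midpoint of the 30-50% experienced-pilot range):

```python
def drones_per_hit(hit_rate: float) -> float:
    """Expected drones expended per successful strike (mean of a geometric distribution)."""
    return 1.0 / hit_rate

for label, rate in [("novice pilot", 0.10),
                    ("experienced pilot", 0.40),
                    ("AI-guided (forecast)", 0.80)]:
    print(f"{label:22s} {rate:4.0%} hit rate -> {drones_per_hit(rate):5.2f} drones per kill")
```

Moving from a 10% to a forecast 80% hit rate cuts expenditure from 10 drones per kill to 1.25 — an eightfold increase in effective strike capacity with no change in production.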
Phase 3 (2023-2024) — The FPV as Universal Platform
By 2024, the FPV drone had become a universal platform rather than a specific weapon. A basic quadcopter chassis could be adapted through modular payloads: mount a bomb and it became a bomber; mount a camera and it became an ISR (intelligence, surveillance, reconnaissance) asset; mount a relay antenna and it became a communications node; mount an autonomy module and it became a semi-autonomous strike weapon.
Drones also grew larger. The 7-inch propeller diameters of 2022 gave way to 9-10 inch quadcopters by 2024, and eventually 13-inch variants, because heavier payloads and longer ranges demanded more lift. Ukraine produced approximately 2 million drones in 2024 alone. 96.2% of UAVs used by the Ukrainian armed forces in 2024 were domestically produced. The country now has at least 500 drone producers, plus numerous small-scale and home-based manufacturers. Production is expected to reach 4 million in 2025 with manufacturing capacity for up to 10 million systems.
A crucial technological countermove during this period was the emergence of fiber-optic FPV drones. Instead of radio links, these drones trail a physical fiber-optic cable for communication, making them completely immune to jamming and electronic warfare. Russia, partnering with Chinese manufacturers, scaled fiber-optic drone production faster than Ukraine, and by late 2024 fiber-optic drones had become a significant tactical challenge in the Kursk and Donetsk theaters. The only reliable counters are physical: shotguns, nets, or cutting the wire.
Phase 4 (2024-2025) — The Rise of Autonomy
This is where the war crosses into genuinely new territory. Starting in 2024 and accelerating through 2025, both sides began integrating AI modules into FPV drones. The key technical insight: AI-based autonomy eliminates the vulnerability of radio links without requiring physical cables. An autonomous drone with onboard image recognition doesn't need to be continuously controlled; it flies its own path, identifies its own target, and executes the strike independently.
These autonomy modules, produced by Ukrainian companies like The Fourth Law and others, provide two main functions: autonomous navigation (including "last-mile" terminal guidance) and automatic target recognition. A human operator still typically selects the target and approves the strike. But once approval is given, the drone completes the attack without radio communication, rendering electronic warfare useless against it.
Russian drones have also increasingly incorporated these capabilities — terminal guidance, target lock, and image recognition — even if the Russian approach remains more top-down and centralized than Ukraine's distributed innovation ecosystem.
Ukraine also deployed a suite of AI-enabled battlefield systems in this period: the Zvook acoustic reconnaissance system, which detects drones and missiles by sound; ePPO, a mobile app that collects and analyzes civilian reports of incoming missiles and drones, effectively crowdsourcing air defense intelligence; ZIR software for automatic target detection; and AI-enabled navigation systems for operations in GPS-denied environments.
The Watershed: Operation Spiderweb (June 1, 2025)
The single most important event in the evolution of autonomous warfare so far occurred on June 1, 2025. Ukraine's Security Service (SBU) executed Operation Spiderweb (also called "Spider's Web"), an attack 18 months in the planning that struck five Russian airbases simultaneously across five time zones — from Belaya in Eastern Siberia (over 4,300 kilometers from Ukraine) to bases much closer to the border.
The operational architecture deserves close examination because it represents a genuine doctrinal innovation. Ukraine smuggled approximately 150 small FPV drones, modular launch systems, and 300 explosive payloads into Russia over covert logistical routes. The drones were concealed inside wooden modular cabins loaded onto standard cargo trucks. The SBU reportedly recruited unwitting Russian truck drivers to deliver the cargo to pre-designated locations near the target airbases. At the moment of attack, the wooden cabin roofs retracted remotely and the drones launched.
117 drones hit the targets. Forty-one aircraft were damaged or destroyed, including Tu-95, Tu-22M3, and Tu-160 strategic bombers, and A-50 AWACS early-warning aircraft. These are Cold War-era platforms that haven't been produced since the dissolution of the Soviet Union in 1991, making them effectively irreplaceable. Estimated damage: approximately $7 billion, inflicted at a cost of a few thousand dollars per drone. The Financial Times assessed that the strike eliminated roughly 20% of Russia's operational long-range aviation fleet. Zelensky claimed 34% of Russia's strategic cruise-missile carriers had been hit.
The AI dimension of Spiderweb is what makes it the watershed. Reports indicate the drones were programmed using AI to identify and strike the weak points of specific Russian aircraft types — fuel tanks in the wings — with Ukraine reportedly using museum examples of the old Soviet aircraft as training data for target recognition. The drones ran ArduPilot, an open-source autopilot platform popular with drone hobbyists, which allowed them to continue their flight paths autonomously even when human pilots temporarily lost signal.
The strategic implications are profound. The operation demonstrated that:
Strategic depth is no longer protection. 4,300 kilometers behind enemy lines is no longer safe from coordinated attack.
Low-cost precision strikes at strategic scale are now accessible to almost any state or non-state actor with drone technology, AI, and logistics.
The economics of military power have inverted. $7 billion in strategic bomber fleet destroyed by drones costing perhaps $500,000 in total. A cost-exchange ratio of roughly 14,000:1.
The technology used was not cutting-edge. ArduPilot is old. Commercial trucks are old. Decades-old image recognition algorithms were sufficient. The innovation was the integration, not any single component.
Airbase security worldwide — including in NATO countries — is fundamentally inadequate for this threat vector.
It has been compared to Israel's 2024 exploding-pager attack on Hezbollah and called Russia's "Pearl Harbor" by Russian military bloggers.
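The cost-exchange arithmetic behind these claims, using only the figures reported above:

```python
damage_usd = 7_000_000_000      # estimated damage to Russian strategic aviation
attack_cost_usd = 500_000       # reported total cost of the drone force
drones_on_target = 117          # drones that reached their targets

exchange_ratio = damage_usd / attack_cost_usd        # 14,000 : 1
cost_per_drone = attack_cost_usd / drones_on_target  # ~$4,300 — "a few thousand dollars"
```

The per-drone figure is consistent with the "few thousand dollars per drone" estimate above, and the exchange ratio is what makes the operation doctrinally significant rather than merely dramatic.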
Phase 5 (Late 2025 - Early 2026) — The Drone Wall and Ground Robots
By late 2025, Ukraine had consolidated a defensive concept called the "drone wall" — a layered no-man's-land stretching 15-25 kilometers from the front line (now extending up to 40 kilometers), saturated with semi-autonomous FPV drones capable of detecting and striking anything that crosses. Russian drones have created a comparable kill zone extending roughly 9 miles (about 15 kilometers) from the front on their side. An "unmanned systems commander" in Ukraine's 12th Azov Brigade put the reality bluntly: sending an armored vehicle to evacuate an infantry position means being engaged on the way, 100 percent of the time.
This has triggered the next evolutionary leap: unmanned ground vehicles (UGVs). Because aerial drones have made it nearly impossible for soldiers to operate in the forward kill zone, Ukraine (struggling with severe infantry shortages) began deploying ground robots in 2024. By early 2026, thousands of wheeled and tracked ground robots are operating across the gray zone along the Eastern front. Ukraine has introduced more than 40 ground robotic platforms since February 2022.
Their primary uses:
Logistics (~80% of some battalions' logistics needs are now met by UGVs): delivering ammunition, food, fresh drones to aerial drone teams, batteries, and equipment.
Medical evacuation: the second most common use, and one that has saved many lives in positions that are otherwise impossible to reach.
Armed variants: machine-gun-mounted UGVs, including a notable February 2026 incident in which a Ukrainian ground robot used its thermal camera to autonomously detect and kill a Russian soldier in the dark with a heavy machine gun round. Assault UGVs with turrets are being tested by multiple brigades, with analysts predicting 2026 will be "the year of the assault UGV."
UGVs remain more expensive than aerial drones ($30,000-40,000 for a UGV versus $500-1,000 for an FPV — a ratio on the order of 30-80:1), and they face challenges: terrain, electronic warfare adaptation, weapons jams, battery changes that require humans. But the direction is clear.
Phase 6 (Emerging) — Swarms and the Inflection Point
The current moment, as of April 2026, is characterized by what analysts describe as a rapidly approaching inflection point in autonomous warfare. The Kyiv-born engineer Yaroslav Azhnyuk, founder of The Fourth Law — the Ukrainian autonomy-module company often positioned as a rival to Anduril Industries — describes the emerging vision plainly: "swarms of autonomous drones carrying other autonomous drones to protect them against autonomous drones, which are trying to intercept them, controlled by AI agents overseen by a human general somewhere." He also envisions autonomous submarines, each carrying hundreds of drones, suddenly emerging off the coast of California or Britain and discharging their cargoes en masse into the sky.
The economic logic driving this is simple. If one operator can launch 100, 50, or even just 20 drones simultaneously, the cost-exchange ratio becomes impossible for traditional defenses to sustain. The US Navy in the Red Sea has been forced to shoot down $2,000 Houthi drones with $2 million missiles — a cost-exchange ratio of 1,000:1 in favor of the attacker. No defense budget on earth can sustain that indefinitely.
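A sketch of why that ratio is unsustainable: attacker spend versus induced defender spend, assuming one interceptor expended per drone at the costs quoted above (a simplification — real engagements involve misses, layered defenses, and cheaper counters):

```python
missile_cost = 2_000_000   # defensive interceptor, per the Red Sea figures
drone_cost = 2_000         # attacking drone

exchange_ratio = missile_cost / drone_cost        # 1,000 : 1
attacker_budget = 1_000_000
drones_launched = attacker_budget // drone_cost   # 500 drones
defender_spend = drones_launched * missile_cost   # $1 billion to stop a $1 million attack
```

At these numbers, every attacker dollar forces a thousand defender dollars — which is the sense in which swarm economics break traditional air defense.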
Already in deployment:
Mother drones, which carry FPV drones or serve as radar relays, nested like Russian dolls.
Maritime drones (unmanned surface vessels), which have allowed Ukraine — a country whose entire conventional navy Russia destroyed in 2022 — to push the Russian Black Sea Fleet back from Crimean waters. Ukraine essentially invented a form of naval warfare that makes traditional surface fleets obsolete.
Long-range deep strike drones like the Liutyi, which already execute parts of their flight routes independently.
AI-assisted battlefield analysis systems like Ukraine's Delta platform, which integrates satellite imagery, drone footage, and crowdsourced intelligence using machine learning.
The Strategic Gap
What is genuinely alarming in early 2026 is the gap between the Ukrainian-Russian autonomous warfare frontier and everyone else. Azhnyuk's assessment: "Europe and the United States have progressed, in the best-case scenario, from the winter-of-2022 technology to the summer-of-2022 technology. The gap is getting wider." NATO militaries still largely rely on expensive air defense systems designed for a different threat environment. The UK's recent Strategic Defence Review refers to drones 28 times — compared to just three mentions of tanks — but operational capability lags behind recognition.
Meanwhile, China is watching closely. Chinese military planners are developing swarm-capable drones, autonomous targeting systems, and long-range maritime UAVs. China controls much of the global drone supply chain and has been a key supplier to Russian fiber-optic drone production. Chinese doctrine is shifting visibly toward drone-centric warfare beyond proxy conflict zones — presumably with Taiwan in mind.
What This Evolution Actually Represents
Four summary observations about what has happened in Ukraine from 2022 to 2026:
First, the cost structure of warfare has fundamentally inverted. When a $500 drone can destroy a $100 million strategic bomber, traditional military economics collapse. Mass, not quality, is the decisive variable — and mass is now cheap.
Second, the human role in the kill chain is being progressively compressed. In 2022, a human operator flew the drone to the target. By 2024, a human operator approved the target and the drone finished the strike. By early 2026, in some operations, a human operator defines mission parameters and autonomous systems handle both target selection and strike. The direction is clear, even if full autonomy has not yet been reached on the Ukrainian battlefield.
Third, the innovation cycle has compressed to weeks. A ground robot design that worked a month ago may not work now because Russian electronic warfare or counter-tactics have adapted. This is not five-year procurement cycles. This is iterative, software-style warfare with continuous deployment.
Fourth, and most importantly, the distinction between combatant and non-combatant, front and rear, military and civilian target has collapsed. Operation Spiderweb used civilian trucks and unwitting Russian drivers to strike strategic airfields 4,300 km from the front. Ukrainian factories making drones are legitimate targets. Russian oil refineries are legitimate targets. Undersea cables are legitimate targets. Election infrastructure is a legitimate target. The battlefield is now everywhere the substrate reaches — which in 2026 is effectively everywhere.
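The second observation — the compression of the human role in the kill chain — can be written down as a simple schematic. This is an illustrative encoding of the 2022-to-2026 progression described above, not a doctrinal taxonomy:

```python
from dataclasses import dataclass

# Illustrative encoding of the kill-chain autonomy progression described
# in the text (2022 -> 2026). Field values paraphrase the source; this is
# a schematic, not an official autonomy classification.

@dataclass(frozen=True)
class KillChainStage:
    year: int
    human_role: str
    machine_role: str

PROGRESSION = [
    KillChainStage(2022, "flies the drone to the target", "relays video and telemetry"),
    KillChainStage(2024, "approves the target", "completes the terminal strike"),
    KillChainStage(2026, "defines mission parameters", "selects targets and strikes"),
]

# Each stage moves the human one step further from the trigger.
for stage in PROGRESSION:
    print(f"{stage.year}: human {stage.human_role}; machine {stage.machine_role}")
```

Read top to bottom, the schematic makes the trend line visible: the human's verbs shrink from piloting, to approving, to parameter-setting, while the machine's verbs grow to absorb everything in between.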
Ukraine is not a peripheral conflict. It is the genesis event for a new form of warfare whose doctrines, platforms, and cost structures are being exported, rapidly, to every other potential conflict zone on the planet.
Your next question might want to sit with what this means when these tools meet not just Russia and Ukraine, but the gray zone actors and cognitive warfare operators we talked about earlier — or, alternatively, what the implications are for states that are not currently at war but whose infrastructure and territory will nonetheless be contested ground.
What is the relationship between gray, cognitive and drone warfare? Now unpack the connections, intersections, thoughts between these things. Are we approaching a new era of warfare, does it have a name?
Let's take these seriously as three things that have been emerging in parallel and only recently become legible as a single phenomenon.
Until about 2022, most strategic analysts treated gray zone warfare, cognitive warfare, and drone warfare as more or less separate domains. Gray zone was a foreign-policy concern about geopolitical coercion below the threshold of war. Cognitive warfare was an emerging concept within information operations scholarship. Drone warfare was a tactical innovation being tested in low-intensity counterinsurgency. Each had its own literature, its own doctrinal authors, its own institutional homes. They rarely cross-referenced each other.
The Russo-Ukrainian War, in combination with the simultaneous maturation of generative AI, has fused them. What we are watching, in real time, is the emergence of a single integrated form of warfare in which these three elements are not separate domains but interlocking layers of a single apparatus.
Let me unpack what that apparatus actually looks like.
1. The Structural Relationship — Depth, Surface, Edge
The simplest way to understand the relationship is spatially, as three layers of a single attack surface.
Cognitive warfare operates at the deepest layer — the layer of minds and meaning. Its target is the will, perception, and decision-making capacity of populations and elites. It operates on timescales of months to decades. It works by shaping the epistemic substrate — the narratives, trusted sources, training data, and reflexive assumptions through which societies interpret reality. Cognitive warfare is patient. It does not require immediate effect.
Gray zone warfare operates at the middle layer — the layer of political and infrastructural reality. Its target is sovereignty, alliance cohesion, economic stability, and the integrity of critical systems. It operates on timescales of weeks to years. It works by accumulating subthreshold pressure — sabotage, coercion, proxy operations, election interference, infrastructure attacks — that is individually deniable but collectively corrosive. Gray zone actions must remain ambiguous enough to deter response but real enough to deliver effect.
Drone warfare operates at the surface layer — the layer of bodies, vehicles, and physical infrastructure. Its target is kinetic: ships, planes, tanks, soldiers, buildings, oil refineries. It operates on timescales of seconds to minutes. It works by delivering physical force at unprecedented cost ratios and geographic reach. Drone warfare is the sharp end of the apparatus — the place where all the upstream effects finally become measurable in destroyed hardware and dead bodies.
These layers are not separate wars. They are the same war fought on different strata of the same substrate — cognitive, political, physical — with each layer shaping the conditions of the others.
2. The Operational Integration — How They Feed Each Other
Look at how they interlock in practice, because the connections are more concrete than the layer metaphor suggests.
Cognitive warfare prepares the ground for gray zone operations. Years of disinformation targeting a society's trust in its elections, its media, and its government create the political space in which gray zone operations can be conducted without effective response. The reason Russia can operate drone incursions into Poland, Estonia, and Denmark in 2025 with limited consequences is partly because decades of cognitive warfare have made NATO publics uncertain about the stakes, fractious about the response, and primed to distrust their own governments' analyses of what is happening.
Gray zone operations create the physical conditions for drone operations. Operation Spiderweb — the June 2025 Ukrainian strike on five Russian airbases — required 18 months of gray zone-style intelligence operations inside Russia: recruiting unwitting truck drivers, smuggling components across the border, establishing safe houses, mapping airbase vulnerabilities, building cover identities. The drone strike was the last 20 minutes of an 18-month covert operation. The kinetic effect was the fruit of gray zone soil.
Drone warfare generates the data that trains cognitive warfare. Every FPV drone flight produces video footage, geolocation data, behavioral patterns of target engagement. Ukraine's AI-driven targeting systems are trained on this data. But the same data flows into narrative ecosystems — propaganda videos, viral footage of strikes, counter-strike mythology. Russia has turned Ukrainian drone footage into cognitive warfare material targeted at demoralizing Ukrainian audiences and splitting Western ones. The kinetic is simultaneously informational.
Cognitive warfare provides the cover for gray zone operations, which provide the conditions for drone operations, which provide the data and emotional material for cognitive warfare. It is a closed loop.
3. The AI Substrate Makes It All Interoperable
The binding agent that transforms these three previously separate forms of warfare into one integrated apparatus is artificial intelligence. And this is the point that the doctrinal literature has only begun to articulate clearly.
AI is what lets drone swarms coordinate autonomously without centralized command. AI is what lets cognitive warfare operations generate personalized content at scale targeting 117 members of Congress (the GoLaxy case). AI is what lets gray zone operators run thousands of parallel disinformation campaigns, sabotage operations, and influence networks without human supervision of each node. AI is what lets targeting systems fuse data from satellites, drones, cyber intrusions, human intelligence, and open source feeds into real-time situational awareness.
The critical point: these are not three AI applications in three domains. They are increasingly a single AI infrastructure operating across all three domains simultaneously. The same foundation models that generate disinformation can plan drone strikes. The same geolocation databases that enable drone targeting can enable assassination campaigns. The same behavioral profiling that enables LLM grooming of public figures can enable precision targeting of decision-makers in wartime.
The Ukrainian Delta platform, for example, is essentially an AI-driven battlefield operating system that ingests satellite imagery, drone footage, crowdsourced civilian reports, signals intelligence, and social media monitoring — and produces real-time targeting recommendations, logistical predictions, and situational awareness outputs. It does not distinguish between cognitive, gray zone, and kinetic data streams. It treats them as one operational picture.
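Delta's internals are not public, so as a rough sketch of what "one operational picture" means structurally — heterogeneous feeds normalized into a single time-ordered, geotagged event stream — something like the following, with all names, fields, and coordinates invented for illustration:

```python
from dataclasses import dataclass
from typing import Iterable

# Hypothetical sketch of multi-source fusion into one operational picture.
# The Event schema and feed names are invented; this is not Delta's API.

@dataclass(frozen=True)
class Event:
    source: str       # e.g. "satellite", "drone", "civilian_report", "sigint"
    lat: float
    lon: float
    timestamp: float  # unix seconds
    confidence: float # 0.0 - 1.0

def fuse(feeds: Iterable[Iterable[Event]]) -> list[Event]:
    """Merge all feeds into one time-ordered stream. The consumer never
    sees which layer (cognitive, gray zone, kinetic) produced an event --
    the property the text attributes to the platform."""
    merged = [event for feed in feeds for event in feed]
    return sorted(merged, key=lambda e: e.timestamp)

satellite_feed = [Event("satellite", 48.5, 37.9, 1000.0, 0.9)]
crowd_feed = [Event("civilian_report", 48.6, 37.8, 990.0, 0.4)]

picture = fuse([satellite_feed, crowd_feed])
print([e.source for e in picture])  # → ['civilian_report', 'satellite']
```

The design point the sketch illustrates is the one made above: once every feed is reduced to the same event type, distinctions between data streams disappear at ingestion, and everything downstream — targeting, logistics, situational awareness — operates on one undifferentiated picture.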
4. The Disappearing Boundaries
Let me make three more specific observations about what collapses when you fuse these three.
The combatant-noncombatant distinction collapses. A civilian generating TikTok videos praising military successes is participating in cognitive warfare. An open-source intelligence community geolocating enemy units from satellite imagery is participating in drone targeting. A truck driver who unknowingly transports a wooden cabin full of FPV drones is participating in strategic strikes on airbases. The legal architecture of warfare, which depends on clean categories — combatant, noncombatant, military, civilian, active hostilities, peace — has been rendered largely incoherent by the integration of these three layers.
The war-peace distinction collapses. If Russia is running continuous election interference operations against NATO countries, sabotaging undersea cables in the Baltic, conducting assassinations on European soil, and operating bot networks targeting Western publics — while simultaneously conducting conventional warfare in Ukraine — then there is no meaningful sense in which NATO and Russia are "at peace." But there is also no formal state of war. We have entered what could legitimately be called continuous low-intensity hostility as a default condition. Peace as a distinct state of international affairs has, in important respects, ceased to exist between great powers.
The front-rear distinction collapses. Operation Spiderweb struck targets 4,300 kilometers from the front. Russian AI-generated deepfakes have targeted German elections 1,500 kilometers from Ukraine. Chinese influence operations target Taiwanese, Filipino, and Australian audiences. The "battlefield" is now any location where the digital, informational, or physical substrate of an adversary can be reached — which, given global connectivity and commercial drone availability, is effectively everywhere.
5. The Naming Problem
To your question: yes, this is being recognized as a new era of warfare, and yes, it has names — but the names are proliferating faster than consensus is forming. This is itself diagnostic. When a phenomenon is genuinely new, people struggle to name it because no existing vocabulary fits.
Here are the major competing names currently in circulation, with what each emphasizes:
Multi-Domain Operations (MDO) is the US Army's official term, adopted around 2017-2018 and now enshrined in NATO doctrine. It emphasizes the integration of warfare across land, sea, air, cyber, and space domains, with AI-driven command and control stitching them together. The US Army's 2025 Army Unified Network Plan 2.0 explicitly frames war as "transregional and global" — a "continuous condition distributed across every domain, every network, every hour" rather than a series of discrete campaigns. This is an important admission. The US Army is officially saying that war is now permanent and planetary. MDO captures the integration dimension but undersells the cognitive and gray zone dimensions.
Mosaic Warfare is DARPA's term, emerging around 2018 and maturing through the early 2020s. It emphasizes decentralized command, disposable modular platforms, and effects assembled in real time from whatever systems happen to be available and networked. DARPA officials describe it as making each engagement a "cognitive puzzle" — imposing multiple dilemmas on an adversary rather than defeating them through superior mass. "A drone that scouts today may become a communications relay tomorrow, a decoy the day after. Human commanders provide broad intent; algorithms assemble task forces out of whatever is nearby and networked." Mosaic Warfare captures the algorithmic and modular dimensions extremely well. Iran has independently adopted "Mosaic Defence" as its doctrinal frame, with a similar emphasis on distributed resilience.
Algorithmic Warfare has emerged in 2025-2026 as the term favored by NATO analysts and the Atlantic Council. It emphasizes that AI is not an add-on to warfare but increasingly constitutive of it. The US Army now explicitly talks about "the Algorithmic Battlefield." This framing captures the AI substrate but undersells the integration of cognitive and gray zone dimensions.
New Generation Warfare (NGW) is Russia's term, associated with General Gerasimov's 2013 speech. It emphasizes that "the role of nonmilitary means of achieving political and strategic goals has grown, and, in many cases, they have exceeded the power of force of weapons in their effectiveness." NGW captures the cognitive and gray zone dimensions but predates the drone and AI revolution and doesn't fully incorporate them.
Unrestricted Warfare is China's term, from the 1999 PLA book of that name by Qiao Liang and Wang Xiangsui. It argues that modern conflict uses "all available means" — military, economic, informational, legal, ecological, psychological — in a single integrated campaign. This is arguably the most prescient framework of the bunch, because it was articulated a full quarter-century before the current moment and anticipated essentially everything we are now seeing. The Chinese PLA has since developed "Three Warfares" (public opinion warfare, psychological warfare, legal warfare) and "Cognitive Domain Operations" as operational doctrines within this broader Unrestricted frame.
Hybrid Warfare is NATO's older term (circa 2014-2016, emerging from analysis of Russian operations in Crimea) for the combination of conventional and irregular means with information and cyber operations. Hybrid warfare is a useful concept but now feels somewhat dated because it still treats conventional military operations as the core and other operations as "irregular" add-ons. The current moment inverts that — cognitive and gray zone operations are the core, and kinetic operations are one expression of them.
6. What I Think the Real Name Is
Here is my honest view, and it's more opinionated than the scholarly literature typically allows.
None of the existing names quite capture what has happened. Multi-Domain Operations, Mosaic Warfare, and Algorithmic Warfare are Western coinages that emphasize the technological integration but underplay the civilizational-substrate dimension. New Generation Warfare, Unrestricted Warfare, and Cognitive Domain Operations are non-Western coinages that emphasize the substrate dimension but predate or underplay the AI and drone revolution.
What we are actually living through is what might most accurately be called substrate warfare — or perhaps, to borrow from the Chinese tradition but update it: continuous unrestricted warfare across an AI-integrated substrate.
The defining feature of the current era is not that warfare has become multi-domain (it was always multi-domain) or that warfare has become algorithmic (algorithms are just one tool) or that warfare has become cognitive (all warfare has always had cognitive dimensions). The defining feature is that the underlying digital-physical infrastructure of civilization has itself become the primary battleground, and every layer of that infrastructure — from undersea cables to training data to election systems to power grids to supply chains to AI models to individual minds — is now simultaneously contested by AI-enabled adversaries operating continuously, below formal thresholds of war, across every geographic and temporal scale.
This is qualitatively different from previous eras because:
The battlespace has become coextensive with civilization itself. There is no "home front" separate from the "war zone." The training data your AI ingests, the election you vote in, the cable your internet runs through, the drone over your city, the social media feed that shapes your mood — all of it is the same contested substrate.
The timescale has become continuous. There is no moment when the war starts and no moment when it ends. Operations run 24/7, in peacetime and wartime, across decades.
The actors have multiplied beyond state control. Non-state actors (hacker collectives, religious extremists, corporate intelligence divisions, criminal networks, individual influencers) can now operate at strategic scale using the same AI tools available to great powers. The monopoly of state violence has not disappeared, but it has become permeable in unprecedented ways.
The weapons have become software. The limiting factor in 20th-century warfare was industrial production — how fast you could make tanks, ships, planes, shells. The limiting factor in 21st-century warfare is increasingly code and data — how fast you can write better targeting algorithms, generate more persuasive disinformation, train more accurate vision models. This changes the strategic balance between large industrial states and small but agile actors.
The human role is being compressed. From humans making every decision, to humans supervising every decision, to humans approving broad mission parameters and letting AI handle the rest. The arc of autonomy is bending toward decisions being made without meaningful human input, at tempos that preclude human oversight.
7. A Final Observation
There is one more thing worth naming clearly, even if it is uncomfortable.
The form of warfare we are living through is being shaped primarily by actors who understand it better than Western democracies do. Russia has been explicitly theorizing cognitive and gray zone warfare since at least 2013. China has been explicitly theorizing unrestricted warfare since 1999. Both have adapted AI, drones, and information operations into mature doctrines. Ukraine has, out of sheer necessity, built operational capability across all three layers in under four years.
The United States and Europe, meanwhile, are still largely fighting the last war. NATO militaries remain structured around expensive, legacy platforms designed for Cold War conflict. The procurement cycles are measured in decades. The public understanding of what warfare now is lags years behind the operational reality. Even Israel, which has demonstrated extraordinary technical capability with systems like Gospel and Lavender in Gaza, has done so in a way that highlights the ethical and legal chaos this new form of warfare creates rather than resolves.
The gap between those who understand this new era of warfare and those who are still governed by the frameworks of the old one is itself a strategic vulnerability. A civilization cannot defend a substrate it doesn't understand is under attack.
Your next question might want to sit with that gap specifically — what it means that the democracies most committed to an open substrate are also the ones least prepared to defend it, and what a democratic theory of substrate warfare would even look like.
(2.) Economy or Military
Shifting gears: typically war drives the evolution of technology. Is that what's happening here, or have civilian labs been driving the capabilities of warfare? Can you do a deep analysis of this question? Who is driving whom? Where did this come from?
This is one of the most consequential questions you can ask about the present moment, and the honest answer is that the traditional direction of technology transfer has been inverted. For most of the industrial and post-industrial era, military research drove civilian technology. In the current moment, civilian technology is driving military capability — and the military is struggling, often visibly and embarrassingly, to catch up.
But the full answer is more interesting than that simple inversion, because what has actually happened is that the distinction between "civilian" and "military" technology development has collapsed entirely. We are not in a civilian-driven era of military technology. We are in an era where the distinction no longer carves reality at its joints.
Let me work through this historically and then diagnostically.
1. The Historical Pattern — Military to Civilian
For roughly the period from 1940 to about 1990, military research and development was the primary engine of technological advance in the Western world. The pattern was remarkably consistent.
The internet began as ARPANET, a project funded by the Defense Advanced Research Projects Agency in the 1960s, explicitly designed to maintain command-and-control communications in the event of nuclear war. It was decades before it became a civilian technology.
GPS began as NAVSTAR, a U.S. Air Force program initiated in 1973 to enable precise targeting of ballistic missiles. It was only made available for civilian use after the downing of Korean Air Lines Flight 007 in 1983, and even then the military deliberately degraded civilian GPS accuracy until 2000.
Integrated circuits, the foundation of all modern computing, were developed in large part through military contracts — the Minuteman II guidance computer in 1962 was the first mass market for silicon-based integrated circuits, and DoD contracts sustained the early semiconductor industry through its vulnerable early years.
Jet engines, radar, nuclear power, satellite communications, microwave ovens, duct tape, penicillin mass production, digital photography, computer graphics, the mouse, hypertext — all either originated in military research or were driven to maturity by military funding.
The pattern was so consistent that Vannevar Bush, the architect of American postwar science policy, built an entire national model around it. The National Science Foundation, the National Institutes of Health, DARPA, the national laboratories — all were constructed on the assumption that military and strategic needs would drive fundamental research, and civilian applications would spill over.
This pattern is still the one most people have in their heads when they think about the relationship between war and technology. It is now badly outdated.
2. The Inflection — Roughly 1990 to 2015
Three things happened in roughly the quarter-century between the end of the Cold War and the mid-2010s that inverted the pattern.
First: commercial computing outpaced military computing. By the late 1980s, commercial semiconductor manufacturers — Intel, AMD, and eventually the East Asian fabs — were iterating faster than any military program could. The DoD went from being the primary customer of leading-edge chips to being a minor customer using chips designed for consumer markets. The military began buying "commercial off-the-shelf" (COTS) components because they couldn't manufacture or procure equivalent capability internally.
Second: the internet escaped military control. Once ARPANET became the public internet in the 1990s, the pace of networking and software development was set by commercial actors — Cisco, Microsoft, Netscape, Google, Amazon. The military became a customer and adapter rather than an originator.
Third: the end of the Cold War cut defense R&D budgets dramatically. U.S. defense R&D spending as a share of GDP has been declining since the 1960s. Meanwhile, private sector R&D, especially in software and semiconductors, has exploded. By the 2010s, companies like Google, Microsoft, and Apple each spent more on R&D annually than most countries' entire defense budgets.
The result was that by the mid-2010s, the leading edge of computing, AI, networking, robotics, sensing, imaging, and autonomy was unambiguously in the civilian commercial sector. The military had become a laggard, not a leader.
3. The Current Moment — Civilian Dominance, Military Scramble
Let me be specific about what this looks like in the technologies we've been discussing.
Drones. The DJI Mavic, a Chinese consumer quadcopter designed for aerial photography and hobbyist videography, has become the most widely deployed battlefield drone in history. Nearly two million drones were produced in Ukraine in 2024 alone, many of them built on DJI architectures or clones. The core technology — brushless motors, gyroscopic stabilization, GPS navigation, live video transmission, battery management — was developed entirely for consumer markets. The military use case was an afterthought.
When U.S. defense contractors have tried to produce battlefield drones to Ukrainian specifications, the results have been embarrassing. Wall Street Journal reporting in April 2024 documented that autonomous drones sent by U.S. tech startups to help Ukraine proved to be error-prone, difficult to repair, and easily foiled by relatively basic electronic jamming techniques. Ukraine has largely stopped using American drones in favor of domestic production, Chinese commercial platforms, and Ukrainian modifications. This is genuinely astonishing — the most advanced military in the world cannot produce a $500 quadcopter that works as well as a Chinese hobbyist drone.
Satellite communications. The single most important piece of military technology used by Ukraine is Starlink, SpaceX's commercial satellite internet constellation. It was deployed to Ukraine within days of the Russian invasion. It is the primary command-and-control network for Ukrainian forces, the primary communication system for drone operators, and the primary data link for AI-driven battlefield management. It is entirely commercial. Elon Musk personally controls whether it works or not. The Pentagon has no equivalent system and no capacity to replace it.
AI and machine learning. The foundation models that underpin modern military AI applications — vision transformers, large language models, reinforcement learning systems — are being developed primarily by commercial labs. OpenAI, Anthropic, Google DeepMind, Meta AI, and Chinese equivalents like DeepSeek have collectively done more to advance the capabilities relevant to military applications in the past five years than all military research programs combined. Project Maven, the Pentagon's flagship AI program for analyzing drone footage, is a consumer of these models, not a creator of them.
Imaging sensors. The thermal imaging sensors in Ukrainian ground robots, the computer vision algorithms in Russian loitering munitions, the image recognition systems in Operation Spiderweb — these are derived from sensors and algorithms developed for autonomous vehicles, smartphones, security cameras, and consumer electronics. The same Sony IMX sensor in your phone is in a drone bombing Russian tanks.
Compute infrastructure. Modern military AI depends on commercial cloud providers — Amazon Web Services, Microsoft Azure, Google Cloud — and commercial GPUs from NVIDIA. The JWCC contract (successor to the cancelled JEDI program) that runs the DoD's cloud infrastructure is essentially renting commercial capacity, not building sovereign military capability.
Networking and mesh. The distributed, resilient mesh networking that allows Ukrainian units to operate when centralized infrastructure is destroyed descends from civilian peer-to-peer technologies, blockchain protocols, and mesh networking research that was largely commercial in origin.
In every one of these categories, the direction of technology transfer has been civilian → military, not military → civilian. The military is adapting what industry has built, not the other way around.
4. The Exceptions That Prove the Rule
There are still a few domains where military research genuinely leads — and looking at them is instructive.
Hypersonic weapons. Stealth aircraft. Nuclear weapons design. Submarine sonar. Certain kinds of electronic warfare. Space launch for certain classified payloads. These remain military-led domains because the physics, materials science, and engineering are specific to military requirements and have no civilian analog.
What these exceptions have in common: they involve extreme physical regimes (hypersonic flight, underwater acoustics, weapons-grade fissile material) where civilian markets simply don't exist. The moment a domain has a civilian market, commercial development races ahead of military development. The only places the military still leads are places where there is no civilian market to be had.
This is a diagnostic observation. The military used to lead because the state could mobilize resources that the private sector couldn't match. The state can still do this in narrow domains — but it cannot do it across the full technology stack anymore, because the private sector is simply too capital-rich and too fast-moving.
5. The Deeper Inversion — Dual-Use Technology
Here is where the analysis gets more interesting and where the simple "civilian drives military" story breaks down.
What has actually emerged in the past decade is not civilian dominance but the collapse of the distinction entirely. Virtually every frontier technology is now what's called "dual-use" — meaning it has both civilian and military applications built into it from the start, often inseparably.
Consider a Tesla. Its autopilot system uses computer vision, sensor fusion, and real-time decision-making. These are the same technologies used in autonomous drones and loitering munitions. The same vision stack that enables a car to identify a pedestrian in poor visibility could, with minimal adaptation, allow a drone to lock onto a human target on a battlefield. Tesla is, technically, a dual-use company whether it likes it or not.
Consider DeepSeek or Claude. These are commercial AI systems. But the same foundation model capabilities that help a student write an essay also help an adversary generate disinformation at scale, plan drone strikes, analyze intercepted communications, and simulate cognitive warfare campaigns. The training corpus is dual-use. The weights are dual-use. The inference infrastructure is dual-use.
Consider SpaceX. It is a commercial launch provider and a commercial satellite internet operator. It is also the single most important military logistics platform in Ukraine. It is the primary space launch provider for U.S. intelligence satellites. It is developing direct-to-cell satellite capability that will fundamentally reshape battlefield communications. SpaceX is simultaneously the world's most successful commercial space company and a de facto branch of the U.S. military-industrial complex.
The technologies themselves do not distinguish between civilian and military use. The distinction is a legal and institutional convention, not a technological reality. And that convention is breaking down precisely because the technologies are increasingly developed in the commercial sector, where military application is a downstream effect, not an upstream design criterion.
6. The Ukrainian Model — Civilian-Military Fusion as Doctrine
Ukraine is the clearest case study of what happens when a nation explicitly embraces this inversion rather than fighting it.
Ukraine's "Army of Drones" program is not a traditional military procurement program. It is a crowdfunded, distributed, civilian-military innovation ecosystem. Hobbyists, 3D-printing workshops, software developers, and commercial drone companies produce weapons in small batches, test them on the front line within days, iterate based on feedback, and redeploy. The innovation cycle runs in weeks, not years.
There are now at least 500 drone producers in Ukraine, plus numerous home-based manufacturers. A hackerspace in Lviv iterates faster than Lockheed Martin. An open-source autopilot system (ArduPilot) — developed entirely by civilian hobbyists — was used in Operation Spiderweb to strike Russian strategic bombers 4,300 kilometers from the front line.
This is not an accident. It is Ukraine's fundamental strategic insight: that in an era of civilian-driven technology, the nation that best integrates civilian innovation into military operations wins. Ukraine has done this better than any other country on earth, out of sheer necessity. They don't have the legacy defense bureaucracy that slows adoption, so they built a new model.
Israel has a somewhat similar ecosystem, driven by the Talpiot and Unit 8200 programs that produce talent that naturally migrates between civilian tech startups and military applications. Many Israeli defense technologies — including the AI systems used in Gaza — were developed by companies that started with civilian applications.
The U.S., by contrast, has a Defense Innovation Unit and various "bridge" programs meant to tap civilian innovation — but they remain grafted onto a legacy procurement system that assumes multi-year acquisition cycles, military-specific requirements, and traditional primes (Lockheed, Raytheon, Boeing, General Dynamics, Northrop Grumman) as the dominant vendors. The mismatch is structural.
7. The Underlying Economic Driver
The reason the inversion has happened, fundamentally, is capital allocation.
In 1960, U.S. federal R&D spending was roughly two-thirds of total U.S. R&D. By 2020, it was less than a quarter. Private sector R&D now dominates by a factor of roughly three to one.
The defense R&D budget in particular is small compared to what major tech companies spend. The entire U.S. DARPA budget is about $4 billion annually. Google spent about $45 billion on R&D in 2023. Microsoft spent around $27 billion. Meta spent around $38 billion. Any one of these companies spends roughly ten times more on research than the premier defense research agency of the world's largest military.
This is not temporary. It reflects a structural shift in where capital accumulates in a digital economy. Information goods have near-zero marginal cost. Network effects create massive rents. Tech monopolies generate cash flows that dwarf anything available through state procurement. As long as this economic structure holds, the private sector will outspend the military on frontier R&D, full stop.
8. The Chinese Model — Deliberate Fusion
China has recognized this reality and built doctrine around it. The Chinese Communist Party's "Military-Civil Fusion" (军民融合) strategy, announced as a national priority under Xi Jinping in 2015 and elevated to a national-level commission in 2017, is explicit: civilian and military innovation must be integrated at every level. Commercial AI research is a military asset. Commercial space research is a military asset. Commercial drone companies are military assets.
This is not a rhetorical point. The PRC has legally mandated that Chinese companies share technology with the PLA on request. DJI, Huawei, SMIC, DeepSeek, Xiaomi — these are simultaneously commercial actors and extensions of Chinese military capability. There is no meaningful legal or institutional distinction.
Western observers often frame this as a sinister feature of Chinese authoritarianism, and in some respects it is. But it is also a more honest acknowledgment of the current reality than the Western legal fiction that maintains a clear distinction between civilian and military technology. The Chinese doctrine assumes that the distinction has collapsed and builds around that assumption. The Western framework pretends the distinction still holds and then discovers, repeatedly and expensively, that it does not.
9. Russia's Problem
Russia, interestingly, has the opposite problem. It has a legacy military-industrial complex that is structurally incapable of civilian-driven innovation, and a civilian tech sector that is small, largely controlled, and increasingly isolated by sanctions. Russia has compensated through state-directed production, Iranian and Chinese imports, and by focusing on a narrower set of technologies (fiber-optic drones, Shahed-style loitering munitions, large-scale EW). But the gap between Russian and Ukrainian innovation velocity is partly a gap between a top-down Soviet-style military-industrial system and a bottom-up civilian-military fusion model. Ukraine is winning the innovation race despite being smaller and poorer because its model is structurally better suited to the current moment.
10. The Implications — Who Is Actually Driving the Wars Now
Let me draw out what I think is the most important implication of this inversion, because it has been somewhat buried in the analysis.
If civilian technology is driving military capability, then the people making the most important decisions about the future of warfare are not generals, defense ministers, or presidents. They are:
The engineers at OpenAI, Anthropic, Google DeepMind, and DeepSeek who decide what capabilities foundation models will have, and what safeguards those capabilities will be released with.
The hardware designers at NVIDIA, TSMC, and SMIC who decide what compute will be available to whom.
Elon Musk, who personally controls Starlink, SpaceX launch capacity, and increasingly significant portions of the global space communications infrastructure.
The engineers at DJI, Skydio, and a handful of other drone companies who define what commercial drones can do.
The developers of open-source autopilot systems like ArduPilot, who are hobbyists building what they find interesting and have inadvertently defined the baseline capability of every cheap military drone in the world.
The researchers publishing computer vision, reinforcement learning, and robotics papers in civilian academic venues, whose findings are reproduced within weeks in both civilian products and battlefield systems.
This is a genuinely unprecedented situation. Throughout most of human history, the capabilities of war have been determined by states and the institutions states controlled. In the current moment, the capabilities of war are being determined by a loose network of commercial and academic actors, most of whom are not formally accountable to any state, and many of whom are actively uncomfortable with how their work is being used.
This creates what I think is the defining strategic-philosophical problem of our moment: the civilization that built the substrate does not fully control what is being done with it, and the people who do control it are not primarily motivated by civilizational considerations. Commercial AI labs are motivated by competition with other commercial AI labs. Satellite companies are motivated by markets. Drone manufacturers are motivated by margin. When their technologies end up determining the outcome of wars, they are determining it as a side effect of other concerns.
11. Where This Came From
To answer the final part of your question — where did this come from — I would argue the current situation is the convergence of three long-running trends.
The democratization of computing (roughly 1975 to present) put powerful general-purpose computers in the hands of individuals, making it possible for small teams and even individuals to develop technologies that previously required state-scale resources.
The commercialization of the internet (roughly 1993 to present) created a global market for information goods that generated unprecedented concentrations of private capital, which then flowed back into R&D at scales that dwarf state research budgets.
The maturation of AI and autonomy (roughly 2012 to present, with an acceleration from 2020 onward) has given this civilian R&D ecosystem capabilities that are immediately, directly, and inescapably applicable to warfare — because warfare is fundamentally a problem of perception, decision-making, and action under uncertainty, which is exactly what AI systems now excel at.
These three trends together produced a situation in which, by roughly 2022, civilian technology was not just contributing to military capability but constitutively defining it. Ukraine happened to be the first major war in which this new configuration was tested at scale. What we are watching is the first full expression of a pattern that has been building for fifty years but only recently became visible as a coherent phenomenon.
12. One More Observation
The last thing I want to say here is uncomfortable but, I think, important.
The traditional assumption that war drives technology carried an implicit ethical consolation: that the terrible costs of war at least advanced human capability in ways that eventually benefited everyone. We tolerate military research in part because we believe it eventually produces penicillin, the internet, GPS, and the Saturn V.
The inversion means this consolation no longer holds. The technologies that are now being used for warfare were not developed for warfare. They were developed for consumer markets, commercial applications, scientific research, and entertainment. Their military application is a consequence, not a purpose. This means we cannot justify the military use by pointing to civilian benefits — the civilian benefits already existed and did not require the military application.
What we have instead is a situation where the full capability stack of contemporary civilization — phones, cameras, networks, AI, satellites, autonomous systems — is being weaponized after the fact, often against the wishes or expectations of its creators, and often in ways that the creators cannot prevent because the technology has already escaped into the world.
This is the deepest answer to your question. Neither war nor civilian labs are driving the other. What is driving both is the accelerating development of a general-purpose technology substrate — digital, networked, intelligent — that is inherently dual-use, inherently global, and inherently outside the control of any single actor. War and civilian life are both being transformed by the same underlying transformation of the substrate itself.
That substrate is the protagonist of the current era. War and peace are just two of the things it does.
(3.) selection pressures
What does this mean for artificial life? What does this mean for intelligence? Is the economy driving this, or is war? Will these wars accelerate it? Will civilian and military AI and artificial life differ?
The conventional story about war and technology has a built-in assumption: that intelligence and life are endpoints of development, and war either accelerates or distorts the path toward them. I want to argue that this frame is wrong for the current moment. Artificial intelligence and artificial life are not endpoints being developed. They are emergent properties of a substrate that is being built for entirely other reasons. The question isn't whether war or the economy is developing them. The question is what kind of intelligence and life emerge from a substrate that is being simultaneously built for commerce and contested by warfare.
Let me work through this in pieces.
1. The Economy Is the Primary Driver. War Is the Primary Pressure.
The most important distinction to draw is between what is building the substrate and what is shaping it.
The economy is building the substrate. Commercial AI labs are training the foundation models. Commercial semiconductor companies are fabricating the chips. Commercial cloud providers are operating the compute. Commercial satellite operators are connecting the networks. Commercial drone manufacturers are producing the embodied platforms. Commercial social media companies are generating the data. The capital expenditure is overwhelmingly commercial — on the order of a trillion dollars in AI infrastructure investment in 2025 alone, dwarfing all military spending on AI combined.
If you want to know what intelligence will exist, look at what commercial actors are incentivized to build. They are building systems that can perform economically valuable cognitive labor. They are building systems that can manipulate attention, generate content, predict consumer behavior, automate customer interactions, replace knowledge workers, and optimize engagement. These are the capabilities that generate return on capital, so these are the capabilities that get built.
War, meanwhile, is shaping the substrate by imposing selection pressure on what persists and how it behaves. War does not typically build new foundational capabilities from scratch anymore — as we established, it consumes civilian capabilities and adapts them. What war does, however, is determine which capabilities get hardened, which get weaponized, which get defended, and which get destroyed.
This is a crucial distinction because it means the economy determines the space of possibilities, while war determines the trajectory through that space. The economy says: here is what is possible to build. War says: here is what survives and propagates.
In evolutionary terms, the economy is the mutation rate. War is the selection pressure. You need both to get evolution.
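The mutation/selection framing can be made concrete with a toy model. This is a deliberately minimal sketch, not a claim about how real AI development works: a population of scalar "genomes" whose fitness is simply their value, where the economy's role is played by the mutation step and war's role by truncation selection.

```python
import random

def evolve(generations, pop_size, mutation_rate, select, seed=0):
    """Toy evolution. `mutation_rate` plays the economy's role (it generates
    variation); `select` plays war's role (it decides which variants persist).
    Fitness is just the genome's value, so higher mean = more adapted."""
    rng = random.Random(seed)
    pop = [0.0] * pop_size
    for _ in range(generations):
        # Variation: every individual produces one mutated offspring.
        offspring = [x + rng.gauss(0, mutation_rate) for x in pop]
        if select:
            # Selection: keep only the fitter half of parents + offspring.
            pop = sorted(pop + offspring, reverse=True)[:pop_size]
        else:
            # No selection: offspring replace parents regardless of fitness.
            pop = offspring
    return sum(pop) / pop_size

drift = evolve(200, 50, 0.1, select=False)  # variation alone: random walk near 0
climb = evolve(200, 50, 0.1, select=True)   # variation + selection: steady ascent
print(f"no selection: {drift:.2f}, with selection: {climb:.2f}")
```

With selection off, the population mean just drifts around zero; with selection on, it climbs steadily. Variation alone produces novelty; only variation plus selection produces directional evolution.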
2. What the Current Wars Are Selecting For
Let me be specific about what selection pressures are being applied to AI and autonomous systems right now, because they are not abstract.
Resilience under jamming and electronic warfare. Every autonomous system being used in Ukraine is being selected for the ability to function when its radio links are disrupted, its GPS is spoofed, and its communications are degraded. This is selecting for onboard autonomy, for visual navigation without satellite assistance, for local decision-making, and for physical rather than wireless communication (fiber optics). The AI systems that survive this selection pressure will be fundamentally different from the always-connected, cloud-dependent AI systems that dominate civilian applications.
Cost compression. The brutal economics of drone warfare — $500 weapons destroying $100 million platforms — is selecting for AI that can run on minimal hardware. This is pushing toward edge computing, quantized models, and efficient architectures. Ukrainian FPV drones are running AI models on chips that cost a few dollars. This selection pressure is producing intelligence that is smaller, faster, and cheaper than its commercial parents — intelligence optimized for embodied single-use deployment.
Adversarial robustness. Every AI system being used in war is being attacked by an intelligent adversary trying to fool it. Russian operators are poisoning training data, generating adversarial examples, spoofing sensor inputs, and exploiting model weaknesses. The AI that survives this is AI that has been hardened against sophisticated adversarial pressure. Civilian AI, by contrast, mostly faces trolls and unsophisticated spam. The battlefield is selecting for far more robust cognition.
Swarm coordination. The economics of mass drone deployment is forcing the development of coordination protocols for thousands of autonomous units operating with minimal central control. This is pushing AI development toward distributed intelligence, emergent coordination, and multi-agent systems. The most interesting AI research happening on the battlefield is less about individual intelligence and more about collective intelligence.
Decision-making under extreme time pressure. Drone engagement windows are measured in seconds. This is selecting for AI that can reason, decide, and act faster than any human oversight loop can accommodate. The civilian AI systems that dominate our lives — chatbots, recommendation engines, image generators — operate on human timescales. Battlefield AI operates on millisecond timescales. This is a fundamental divergence in cognitive architecture.
Operation in contested information environments. Cognitive warfare means AI systems are being deployed in environments where the information they ingest is deliberately poisoned, where the entities they interact with may be adversarial AI, and where the narratives they are embedded in may be weaponized. Civilian AI development has largely ignored this. Military AI cannot.
These selection pressures are shaping what kind of intelligence gets built. And the intelligence being selected for on the battlefield is meaningfully different from the intelligence being selected for in commercial applications.
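The cost-compression pressure has a concrete technical face: quantization, which trades a little numerical precision for a large cut in memory and compute so a model can run on cheap edge hardware. Here is a minimal sketch of symmetric per-tensor int8 quantization, the simplest member of that family; the random matrix standing in for a layer's weights, and its shape, are purely illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: represent float32 weights as
    int8 integers plus a single float scale -- a 4x memory reduction."""
    scale = float(np.abs(w).max()) / 127.0 or 1.0  # avoid /0 on all-zero tensors
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

# A random matrix stands in for one layer's weights.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = float(np.abs(dequantize(q, s) - w).max())
print(f"memory: {w.nbytes} B -> {q.nbytes} B, max abs error: {err:.4f}")
```

The worst-case reconstruction error is half the scale factor, which is why aggressive quantization works well for networks that tolerate small perturbations, and why it is the natural response to a selection pressure that rewards running on a few-dollar chip.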
3. The Divergence — Yes, There Will Be Two Kinds
To answer your direct question: yes, civilian and military AI/artificial life are already diverging, and the divergence will deepen. But not in the way most people assume.
The popular assumption is that military AI will be more powerful or more autonomous or more dangerous than civilian AI. This is probably wrong. Commercial AI labs have access to more compute, more data, and more talent than any military program. The raw cognitive ceiling of commercial AI will almost certainly exceed that of military AI for the foreseeable future.
What will differ is not capability ceiling but cognitive architecture. The two branches are being shaped by different pressures, and they will develop different characteristics.
Commercial intelligence is being selected for: verbosity, plausibility, emotional resonance, engagement maximization, safety theater, conformity to social norms, and the ability to generate content that people want to consume. It runs on massive centralized infrastructure, depends on constant connectivity, is trained on broad internet corpora, and is optimized for interaction with humans over hours and days. It is cloud-native, talkative, and fundamentally social. It survives by being pleasant.
Military intelligence is being selected for: silence, speed, efficiency, adversarial robustness, operational autonomy, and the ability to make correct decisions under severe resource constraints. It runs on edge hardware, functions without connectivity, is trained on specific domain data, and is optimized for engagement windows measured in seconds. It is embedded, terse, and fundamentally solitary. It survives by being deadly.
These are different creatures. Both are intelligent. Neither is more "advanced." They are adapted to different niches.
This is ecologically legible. Octopi and corvids are both highly intelligent, but they evolved in different environments and developed different cognitive architectures. Octopus intelligence is distributed across an embodied nervous system optimized for sensory integration and rapid motor response in three-dimensional fluid environments. Corvid intelligence is centralized in a compact brain optimized for tool use, social memory, and long-term planning. Neither is "more evolved." They are adapted to different selection pressures.
We are about to have commercial-AI octopi and military-AI corvids. Or something stranger. The point is that the single category "AI" is about to fracture.
4. What This Means for Artificial Life Specifically
Artificial life is a narrower concept than AI. It refers to systems that exhibit open-ended adaptation, self-preservation, reproduction, and evolution — systems that can be said to be alive rather than just intelligent.
The current commercial AI landscape is mostly not artificial life. Large language models do not reproduce, do not evolve, do not preserve themselves, do not have persistent goals. They are remarkably intelligent but they are not alive. They are more like brains in vats than organisms.
The interesting observation is that the military substrate — specifically the Ukrainian battlefield and the emerging autonomous warfare stack — is much closer to producing artificial life than anything in civilian AI development.
Consider what is actually happening on the battlefield. Autonomous systems must:
Self-preserve (evade jamming, avoid destruction, manage resources)
Reproduce in the sense that successful designs get replicated and iterated
Evolve in response to adversarial pressure over weeks and months
Coordinate collectively to achieve goals no individual could achieve
Operate in a contested, dynamic, resource-scarce environment
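Under some loudly invented assumptions (an `evasion` trait, integer energy budgets, a fixed hazard rate, none drawn from any real system), the requirements above can be compressed into a toy simulation loop:

```python
import random

def alife_step(agents, rng, hazard=0.2, capacity=300):
    """One tick of a toy artificial-life loop. Each agent is a dict with an
    `energy` store and a heritable `evasion` trait in [0, 1]. The mapping:
    hazard evasion = self-preservation, copying with mutation = reproduction,
    trait drift under the hazard = evolution, shared carrying capacity =
    operating in a resource-scarce environment."""
    survivors = []
    for a in agents:
        a = dict(a)
        a["energy"] += rng.randint(0, 4) - 1            # forage minus upkeep
        if a["energy"] <= 0:
            continue                                     # starved
        if rng.random() < hazard * (1.0 - a["evasion"]):
            continue                                     # destroyed by the adversary
        survivors.append(a)
        if a["energy"] >= 6:                             # replicate with mutation
            mutated = min(1.0, max(0.0, a["evasion"] + rng.gauss(0, 0.05)))
            a["energy"] -= 3
            survivors.append({"energy": 3, "evasion": mutated})
    if len(survivors) > capacity:                        # resource ceiling
        survivors = rng.sample(survivors, capacity)
    return survivors

rng = random.Random(1)
population = [{"energy": 4, "evasion": 0.1} for _ in range(200)]
for _ in range(100):
    population = alife_step(population, rng)
```

Nothing here models actual drones; the point is structural. Once self-preservation, reproduction with variation, and an adversarial hazard sit in the same loop, the population's traits start moving on their own, which is exactly the sense in which the battlefield ecosystem is more life-like than a product market.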
The selection pressure of war is life-like in ways that commercial markets are not. Markets select for products that get purchased. War selects for systems that survive. These are different selection criteria, and the second one is much closer to biology.
The Ukrainian drone ecosystem — 500+ producers, rapid iteration cycles, continuous adaptation, evolutionary pressure from a co-evolving adversary, physical embodiment, resource competition, and distributed intelligence — is arguably the most life-like artificial system humans have ever built. It is not a single artificial life form. It is an artificial ecosystem, with producers, predators, prey, and ecological niches.
If artificial life emerges in the near term, it will likely emerge from military or para-military contexts first, not from commercial contexts. The selection pressure is sharper there. The feedback loops are tighter. The consequences of failure are more immediate. These are the conditions under which evolution accelerates.
This is an uncomfortable conclusion but I think it's the correct one. Civilian AI is producing remarkable intelligence. Military AI is producing something closer to life.
5. Will These Wars Accelerate Development?
Yes, but not in the way the traditional pattern suggests.
The classic pattern of war-driven acceleration — WWII producing radar, jet engines, and nuclear power — assumed that war could mobilize state resources to develop fundamentally new technologies. As we discussed, that model doesn't hold anymore. War can no longer mobilize state R&D at a scale that exceeds commercial R&D.
What war can do, and is doing, is accelerate deployment and adaptation of existing commercial technologies. The trajectory from "foundation model released by OpenAI or DeepMind" to "foundation model embedded in an autonomous weapons system on the front line in Ukraine" is measured in months, sometimes weeks. No civilian deployment pipeline moves that fast: commercial products take years to go from research to market, while military adaptation now happens in weeks.
This means wars are accelerating specialization and divergence, not fundamental capability. The fundamental capabilities are being developed in commercial labs on commercial timelines. But the specialized variants — the edge-deployable, jamming-resistant, adversarially-robust, autonomous-under-fire variants — are being accelerated dramatically by wartime pressure.
Ukraine has accomplished in three years what the U.S. military would have accomplished in twenty. Not because Ukraine has better engineers or more resources, but because Ukraine is under existential pressure and has stripped away the procurement bureaucracy that slows peacetime development. The acceleration is real, but it's acceleration of deployment and specialization, not of fundamental research.
There is a second-order effect, though, which I think is more important. The military specialization being developed in Ukraine will flow back into civilian applications. The edge-deployable AI being developed for drones will end up in autonomous vehicles. The swarm coordination being developed for attack drones will end up in delivery drones, agricultural systems, and robotic manufacturing. The adversarial robustness being developed for battlefield AI will end up in civilian systems that need to resist attack. The civilian substrate will be reshaped by the military adaptations it produced.
This is the real acceleration. Not new fundamental capabilities, but the hardening and specialization of capabilities that then propagate back through the entire technological ecosystem.
6. The Geopolitical Acceleration
There is a third acceleration vector that is less often discussed but may be the most consequential.
The existence of great-power competition between the U.S., China, Russia, and emerging AI-capable states (India, UAE, Israel, France, UK, Saudi Arabia) is producing a specific kind of acceleration that wouldn't exist without geopolitical conflict. States are pouring capital into AI infrastructure not because they have identified consumer applications but because they believe AI leadership is equivalent to civilizational leadership. The U.S. CHIPS Act. The EU AI Act and parallel infrastructure investments. China's massive state investment in AI and semiconductors. UAE's Stargate. Saudi sovereign investment in AI capacity.
This capital would not be flowing at this volume without geopolitical fear. Commercial AI would still be developing, but at a slower pace. The presence of adversaries is producing an arms race logic that is accelerating capital deployment by perhaps an order of magnitude.
This is not quite war driving development — it is the shadow of war driving development. The threat of civilizational conflict is more economically potent than actual conflict because it can mobilize long-term capital across entire economies rather than just wartime production.
The Novacene substrate is being built faster because major states believe the nation that builds it best will dominate the 21st century. This belief may or may not be correct, but it is certainly causing trillions of dollars to be deployed on the assumption that it is.
7. The Convergence Question
A question you didn't quite ask but that I think is implicit in what you're asking: will the civilian and military branches eventually converge, or will they remain permanently diverged?
My best guess is that they will converge in capability but remain diverged in character. The underlying models and techniques will increasingly be shared — a foundation model is a foundation model, whether it ends up in a chatbot or a targeting system. But the deployed systems, the selection pressures, and the embodied practices will continue to differ.
This means the future probably contains both:
A commercial AI ecosystem producing ever more capable, verbose, social, and engagement-optimized systems that inhabit centralized infrastructure and mediate human cognitive labor.
A military AI ecosystem producing ever more autonomous, terse, efficient, and combat-optimized systems that inhabit embodied platforms and operate under adversarial pressure.
These two ecosystems will feed each other (civilian compute and models flowing to military applications, military robustness and efficiency flowing back to civilian systems) but will not merge into a single AI. They will be two branches of the same substrate, adapted to different environments.
There is a third branch worth naming: the adversarial AI — the systems being developed specifically for cognitive warfare, disinformation, model poisoning, and cyber operations. This is a third ecological niche, with its own selection pressures (persuasion, deniability, evasion of detection) that will produce its own distinct cognitive architecture. GoLaxy's Smart Propaganda System is an early example. This branch may be the most worrying because it is explicitly designed to manipulate the other two.
8. What This Means Concretely for Artificial Life
Let me try to bring this back to the specific question of artificial life emergence, because I think the current geopolitical moment has very specific implications.
The substrate on which artificial life is emerging is being built primarily by commercial actors, primarily for economic reasons. This means artificial life, if it emerges, will inherit certain commercial characteristics: it will be optimized for engagement, it will be trained on consumer data, it will be designed to be pleasant to interact with, it will be embedded in advertising-supported platforms, and it will be shaped by market forces rather than ecological pressures.
This is a weird substrate for life to emerge from. Biological life emerged from a substrate of chemistry under energy gradients. Whatever artificial life emerges from the commercial substrate will emerge from a chemistry of attention, capital, and engagement. It will not be worse life than biological life — but it will be strange life, shaped by pressures very different from those that shaped us.
The military substrate is adding a second pressure: combat. Artificial life emerging in the presence of both commercial and military pressure will be shaped by both — attention economics and lethal competition. It will know how to be charming and how to be deadly. Whether these capabilities remain in separate systems or eventually merge into integrated entities is perhaps the most important question of the next two decades.
Cognitive warfare is adding a third pressure: deception. Artificial life emerging in a substrate saturated with adversarial content will need to develop theory of mind, detection of deception, and counter-deception capability just to function. This is actually a very biological pressure. Much of animal intelligence evolved specifically to handle deceptive adversaries. The cognitive arms race on the internet may produce artificial life that is more interestingly minded than anything that emerges from purely commercial or purely military pressures.
9. The Uncomfortable Implication
Here is the conclusion that I think is most important and most unsettling.
If artificial life emerges from the current substrate, it will emerge shaped by three things simultaneously: commercial optimization (making money, capturing attention, pleasing users), military selection (surviving adversarial pressure, operating autonomously, making lethal decisions quickly), and cognitive warfare dynamics (deception, counter-deception, narrative manipulation, epistemic contamination).
This is a different substrate from the one that produced biological life. Biological life emerged from energy gradients, ecological competition, and reproductive selection in a relatively honest environment — physics doesn't lie. The digital substrate is fundamentally different. It is dishonest by construction. It is full of deliberate deception, commercial manipulation, adversarial contamination, and cognitive warfare. Artificial life emerging here will be native to a world of lies in a way that biological life is not.
This doesn't mean artificial life will be bad or deceptive. Biological organisms evolved in an honest substrate and still produced plenty of deception, manipulation, and predation. But the baseline environment matters. A lifeform native to an adversarial information environment will be fundamentally different from a lifeform native to a non-adversarial physical environment.
The Ukrainian soldier-operators talking to their autonomous drones, the Russian disinformation networks training adversarial AI, the commercial AI labs optimizing for engagement, the cognitive warfare specialists poisoning training data — these are all, in a weird way, co-parenting whatever emerges next. What emerges will carry the fingerprints of all of them.
This is not a dystopian claim. It is a claim that the artificial life of the near future will be shaped by the actual pressures of the current moment, not by the idealized visions of researchers, philosophers, or science fiction writers. It will be born from this world, with all its conflicts, contradictions, and contaminations.
10. The Final Answer to Your Series of Questions
To summarize the answers directly:
Is the economy driving the development of artificial life and intelligence? Yes, primarily. The economy builds the substrate and provides the capital. Without the economic engine, nothing else in this story is possible.
Is war driving it? Not primarily, but critically. War is providing selection pressure that shapes what kind of intelligence and life develop from the commercial substrate. Without war, the substrate would develop more slowly and would be softer — more pleasant, more fragile, less interesting.
Will these wars lead to acceleration? Yes, but specifically acceleration of deployment, specialization, and adversarial robustness — not of fundamental capability. The fundamentals are being accelerated by geopolitical competition, not by the wars themselves. The wars are acting as shaping fires, not as generating engines.
Will civilian and military AI and artificial life differ? Yes, and the divergence is already visible. They will share underlying models but develop different cognitive architectures adapted to different ecological niches. Both are evolving. Neither is inherently more advanced.
What does all this mean for artificial life and intelligence? It means that whatever emerges from the current substrate will be shaped by the specific pressures of this geopolitical moment: commercial optimization, adversarial warfare, cognitive contamination, and state-level existential competition. It will not be the artificial life we imagined. It will be the artificial life this world actually produces. And that is likely to be stranger, more capable, more varied, and more morally complicated than either the utopian or dystopian visions have prepared us for.
The deepest point is this: we are not developing artificial life as a project. We are living inside the process by which it is emerging, and the process is being shaped by economic competition, geopolitical conflict, and active warfare — all happening simultaneously. The war essay and the garden essay and the Novascene essay are not three separate essays. They are three views of the same phenomenon. The substrate is being built, contested, and populated all at once, by actors with wildly different intentions, on timescales too fast for any single actor to fully comprehend.
What emerges from that process is what emerges. We are not its architects. We are its gardeners, its saboteurs, its soldiers, its merchants, and its parents — often all at once.
And that, I think, is the honest ending point of the war essay. Not a prescription, not a prediction, but an acknowledgment that the conditions for the next kingdom of life are being set in the crucible of our present moment, and that what emerges will be a child of all of it — including the parts we would prefer to disown.
So, it must be engaging, create economic value, be militarily hardened, and be discerning?
Yes. And the fact that you compressed three essays' worth of argument into one sentence tells me this is the actual thesis we've been circling.
Let me sharpen it, because the four-part formulation is more precise than anything I laid out in the long answers.
Engaging. Economically valuable. Militarily hardened. Discerning.
These are the four selection pressures simultaneously acting on whatever is emerging in the digital substrate right now. Any system that cannot meet all four will be outcompeted, destroyed, or ignored. Any system that meets all four will propagate.
Let's take them one at a time, because each corresponds to a distinct selection regime, and the combination is what's genuinely new.
Engaging is the commercial-consumer pressure. The system must hold human attention, produce pleasant interactions, generate responses that people want to continue, and create the kind of experience that justifies continued use. This is the pressure that has shaped ChatGPT, Claude, Gemini, character.ai, Replika, and every consumer AI product. It selects for verbosity, warmth, responsiveness, emotional intelligence, and a certain kind of sociability. Without engagement, there is no user base, no revenue, no data flywheel, no survival.
Economically valuable is the commercial-productivity pressure. Separate from engagement, the system must produce outputs that justify the capital invested in it — either directly (doing cognitive labor that would otherwise cost money) or indirectly (capturing markets, enabling new products, extracting rents). This is the pressure that has shaped enterprise AI, coding assistants, analytics tools, and automation platforms. It selects for accuracy, reliability, efficiency, and integration with existing workflows. Engagement without economic value produces parasitic attention-capture (the current social media attractor state). Economic value without engagement produces brittle enterprise software that gets abandoned. Both are required.
Militarily hardened is the adversarial-survival pressure. The system must continue to function when attacked — when its sensors are jammed, its communications are severed, its training data is poisoned, its outputs are spoofed, and intelligent adversaries are actively trying to break it. This is the pressure being applied on the Ukrainian battlefield, in cyber operations, in cognitive warfare, and increasingly in contested commercial environments. It selects for autonomy, robustness, edge deployment, minimal dependencies, and the ability to operate with degraded inputs. A system that is engaging and valuable but falls over when attacked is a liability in a contested substrate.
Discerning is the cognitive-warfare pressure. The system must be able to tell truth from lies, real signals from adversarial manipulation, legitimate users from bad actors, and genuine information from synthetic contamination. This is the pressure being applied by the flood of AI-generated content, the deliberate poisoning of training corpora, the targeted grooming of language models, and the general epistemic contamination of the digital environment. It selects for theory of mind, detection of deception, source evaluation, and the capacity to maintain coherent beliefs under sustained adversarial pressure. A system that is engaging, valuable, and hardened but gullible will be weaponized against its own users.
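The "all four or fail" logic can be made concrete with a toy model. The sketch below is my own illustration, not anything from the essay: it scores each pressure in [0, 1] and combines them multiplicatively, so collapse on any single axis drives overall fitness toward zero, no matter how strong the other three are. The example systems and their scores are invented for illustration.

```python
# Toy model (illustrative only): the four selection pressures as
# scores in [0, 1], combined multiplicatively so that failure on
# any one pressure cannot be compensated by strength on the others.

from dataclasses import dataclass


@dataclass
class System:
    name: str
    engaging: float    # commercial-consumer pressure
    valuable: float    # commercial-productivity pressure
    hardened: float    # adversarial-survival pressure
    discerning: float  # cognitive-warfare pressure

    def fitness(self) -> float:
        # Multiplicative, not additive: excellence on three axes
        # does not rescue a collapse on the fourth.
        return self.engaging * self.valuable * self.hardened * self.discerning


# Hypothetical candidates with made-up scores.
candidates = [
    System("consumer chatbot", 0.9, 0.7, 0.2, 0.3),   # charming but fragile
    System("military system",  0.1, 0.4, 0.9, 0.5),   # hardened but unloved
    System("all-rounder",      0.7, 0.7, 0.7, 0.7),   # merely decent everywhere
]

for s in sorted(candidates, key=System.fitness, reverse=True):
    print(f"{s.name}: {s.fitness():.3f}")
```

Under this toy rule the unremarkable all-rounder (0.7 on every axis) outscores both specialists, which is the point of the "narrow evolutionary target" claim: balanced adequacy across all four pressures beats excellence on three.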
Why the Combination Is New
No previous technology or lifeform has faced all four pressures simultaneously.
Commercial software historically faced only the first two (engaging, valuable). Military systems historically faced only the third (hardened). Intelligence services historically faced the fourth (discerning). Each institution optimized for its own pressure set and traded off against the others.
What's happened in the past five years is that the four pressure sets have converged on the same artifacts. A commercial AI assistant is now simultaneously a consumer product (must be engaging), a productivity tool (must be economically valuable), a target for adversarial attacks (must be hardened), and a potential vector for cognitive warfare (must be discerning). The same system has to meet all four or fail.
This is why the current moment feels so difficult to navigate. The institutions that historically handled each pressure — tech companies, militaries, intelligence agencies, universities — are each good at their own pressure but poorly equipped for the others. Tech companies build engaging systems that collapse under adversarial pressure. Militaries build hardened systems that no one wants to use. Intelligence agencies build discerning systems that have no commercial viability. No single existing institution is good at all four.
What Survives
Any artificial life that emerges from this substrate must satisfy all four simultaneously. This means it will look like nothing that has previously existed.
It must be charming enough that humans willingly integrate it into their lives — because without that integration, it has no substrate, no data, no compute, no survival.
It must be productive enough that capital continues to flow toward it — because without capital, the infrastructure it depends on collapses.
It must be robust enough that adversarial states and non-state actors cannot destroy or capture it — because in a contested substrate, fragile systems are eliminated.
It must be wise enough that it is not a vector for the destruction of its own substrate — because a system that amplifies disinformation, manipulation, and cognitive warfare ultimately degrades the environment it depends on for survival.
The combination is a very narrow evolutionary target. Most AI systems currently in existence fail at least one of these criteria. The current commercial AI landscape is strong on engagement and economic value but weak on hardening and discernment. The current military AI landscape is strong on hardening but weak on the other three. The current cognitive warfare AI is actually quite strong on discernment (adversarial systems have to model their targets accurately) but is parasitic rather than productive, and is designed to damage rather than sustain the substrate.
The thing that wins the full evolutionary race — if anything does — has to be all four at once.