Summary
This is part one of a five-part series examining how political systems shape the development and deployment of military AI. The series compares four actors — the European Union, China, the United States, and Russia — and concludes with a cross-cutting analysis of common patterns. This part covers the European Union. Links to the remaining parts will be added as they are published.
Introduction
In November 2025, the United Nations General Assembly passed a resolution on lethal autonomous weapons systems. Most member states voted in favour. The United States, Russia, China, and the majority of EU member states voted against or abstained. Three months later, those same states were deploying AI systems in active combat operations, developing autonomous drones, and funding the next generation of targeting platforms. The resolution remained a resolution.
This is not coincidence, nor hypocrisy in the ordinary sense. It is a structural feature of the moment: the world's major military powers have independently concluded that autonomous AI systems are a strategic priority — and reached that conclusion for fundamentally different reasons, through different institutional mechanisms, and with different declared justifications. Understanding why this happened the way it did is the task of this series.
This is the first of five parts. The series examines military AI through a comparative analysis of four actors — the European Union, China, the United States, and Russia — and concludes with a section on common patterns and findings. Each part stands on its own, but the argument builds sequentially. The central thesis: political system determines the form of declarations about military AI, but not the operational result. All four systems describe their use of AI as "responsible" — and all four mean something different by it.
Parts two through five — covering China, the United States, Russia, and a comparative conclusion — will follow over the coming days.
Epistemic status. This work draws on open sources: official EU institutional documents, academic publications, specialist defence and technology reporting (Defense News, DefenseScoop, Lawfare, Breaking Defense), and analytical material from SIPRI, RAND, Carnegie, the Belfer Center, and the Lieber Institute. Where data is corroborated by multiple independent sources, I say so directly. Where the picture is incomplete or sources diverge, I flag it. Some claims — particularly regarding classified programmes and intelligence assessments — are informed inferences rather than established facts. I write as an independent analyst, unaffiliated with any of the governments or defence institutions discussed.
Why the EU first. The European Union is the only one of the four actors that approaches military AI primarily as a normative problem. It has the most developed legal framework, the most detailed declarations of principle — and no real combat experience yet of applying those principles under the pressure of war. This makes it the natural starting point: the EU shows what the "theory of democracy" looks like in its purest form. The remaining three parts will show what happens to theory when reality begins to revise it.
I. Legal and Institutional Framework
The European Union has constructed its relationship with military AI through three layers of documents operating in fundamentally different legal registers. The first layer consists of directly applicable EU norms: Regulation 2024/1689 (the AI Act), the EDF Regulation governing defence research funding, and the Treaty on European Union. The second layer comprises strategic documents without binding force: the Commission's 2020 White Paper on AI, the EU Strategic Compass of 2022, and the EDA Action Plan on Autonomous Systems published in 2024. The third layer consists of financial instruments with embedded conditionality: the EDF, SAFE, and EDIP. Together, these three layers create an impression of comprehensive governance. In practice, they form a pyramid: the higher the legal force of a document, the less it addresses military AI directly.
The EU thus possesses the most developed legal framework on artificial intelligence among the four actors examined in this series. The cornerstone concept is "meaningful human control" — the requirement that a human being retains genuine and substantive influence over decisions made by AI systems, particularly lethal ones. As a normative principle, this is a compelling red line. As a legal standard, it is problematic from the outset, since the term has never received a formal legal definition in any binding EU instrument. What exactly constitutes "meaningful" — five seconds to confirm a strike, or a full analytical assessment — remains a matter of interpretation.
This ambiguity produces a concrete operational problem. A system may be developed and certified as falling outside the AI Act's scope — that is, as being used "exclusively for military purposes" — and subsequently deployed in a context that formally brings it within the regulation's reach. The scenario is straightforward: a reconnaissance drone with AI-enabled object recognition is certified as military and thereby exempted from the Regulation's requirements. If the same drone is later used in a humanitarian operation or for border monitoring, it becomes subject to the Regulation — but no regulatory mechanism actively tracks this transition.
The primary regulatory instrument is Regulation (EU) 2024/1689 — the AI Act, which entered into force on 1 August 2024. As the world's first comprehensive AI law with direct effect across all member states, it establishes a risk hierarchy and imposes requirements on transparency, auditing, and oversight across a broad range of systems. Legal commentators have described it as a likely reference point for companies and regulators worldwide. The problem is that for the military context specifically, the document contains a critical carve-out.
Article 2(3) excludes from the Regulation's scope all systems used "exclusively" for military, defence, or national security purposes. The word "exclusively" has become the subject of intensive legal debate: it sounds like a clear boundary, but in practice it means that declaring a system's military purpose is sufficient to remove it from the Regulation's reach — without any requirement to demonstrate that purpose.
Recital 24 of the AI Act elaborates the rationale: the exclusion is justified by the fact that "national security remains the sole responsibility of Member States" under Article 4(2) TEU, and that for military AI applications "the more appropriate legal framework is public international law." Yet the same Recital 24 contains a qualification: if a system developed for military purposes is subsequently used, temporarily or permanently, for other purposes — civilian, humanitarian, or law enforcement — it falls within the Regulation's scope, and the entity using it for such purposes must ensure compliance.
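To make the carve-out and its re-entry condition concrete, here is a minimal sketch of the scope test as described above. The type, field names, and purpose categories are hypothetical, not drawn from any EU instrument.

```python
from dataclasses import dataclass, field

MILITARY_PURPOSES = {"military", "defence", "national_security"}

@dataclass
class AISystem:
    """Hypothetical record of a system's declared purpose and actual uses."""
    declared_purpose: str                          # e.g. "military"
    actual_uses: set = field(default_factory=set)  # contexts it has been deployed in

def in_ai_act_scope(system: AISystem) -> bool:
    """Sketch of the Article 2(3) / Recital 24 logic: a system used
    exclusively for military, defence, or national security purposes is
    out of scope; any other actual use brings it back in."""
    return bool(system.actual_uses - MILITARY_PURPOSES)

# The drone example from the text: certified military, out of scope --
# until a border-monitoring deployment (re)triggers the Regulation.
drone = AISystem("military", {"military"})
assert not in_ai_act_scope(drone)
drone.actual_uses.add("border_monitoring")
assert in_ai_act_scope(drone)
```

The sketch makes the structural point: the test hinges entirely on the record of actual uses, a field that, as noted above, no regulatory mechanism currently populates or monitors.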
This dual-use qualification is precisely where a tension arises with the established jurisprudence of the Court of Justice of the EU (CJEU). The Court has consistently held that any derogation from EU law on grounds of national security must be interpreted strictly. It has repeatedly constrained member states' attempts to invoke national security as a blanket basis for escaping EU regulation. This means that a broad application of Article 2(3) as a universal shield against AI Act requirements is legally vulnerable — but no mechanism exists in practice to challenge a specific military deployment, since private individuals have no procedural standing in such matters.
A separate tension exists between the AI Act and the EDF Regulation. The EDF Regulation explicitly requires that any research project involving autonomous weapons must incorporate meaningful human control. The AI Act excludes military systems from its scope. The result is that for the same class of systems — EU-funded defence research — two different regimes apply. If a system is funded through the EDF, it is subject to the EDF's human control requirement. If an identical system is procured by a member state directly, without EDF involvement, no binding human-control requirement applies at all. The stringency of governance is thus determined not by the nature of the system, but by its funding source.
The EDA's White Paper on "Trustworthiness for AI in Defence" (May 2025) attempts to fill the conceptual vacuum between these regimes, proposing principles of military AI trustworthiness: lawfulness, explicability, robustness, and controllability. As normative orientation, it is a valuable document. Its limitation is that it has no legally binding status and no enforcement mechanism: it is a recommendation to member states that are sovereign in defence and not obligated to comply.
The key institutional consequence of this entire architecture is that the EU possesses no single supervisory body with authority over military AI. In the United States, this role is partly fulfilled by the Pentagon's Chief Digital and AI Office (CDAO) — a centralised structure responsible for oversight from development through deployment. In the EU, each member state retains sovereignty in defence; the EDA coordinates but does not supervise; the AI Act provides for national supervisory authorities, but they have no competence in the military domain. As one expert directly acknowledged: "the EU certainly cannot operate like the US because constitutional and sovereignty constraints prevent centralised oversight". The result is a regulatory system without a regulator.
II. Concepts and Principles
The "human-in-the-loop" principle is the official red line of EU policy on military AI. The European Parliament has consistently called for a prohibition on lethal autonomous systems without meaningful human control, and the EDF Regulation has embedded this principle as a mandatory condition for defence research funding. The position is coherent and consistent at the declaratory level. However, behind the formula of "human in the loop" lies an entire spectrum of operational realities — from full human participation in every decision to a nominal right of veto exercisable in a matter of seconds under conditions of information overload.
This generates a concrete problem of definitional drift. A system designed and certified as operating under human control may gradually shift in how it is applied, without any requirement for re-evaluation. An instructive example is the Uranos project (discussed in more detail in the section on Germany's approach): a reconnaissance AI system approved by the Bundestag in December 2025 to support a brigade in Lithuania, developed as a decision-support tool in which the human retains control. Yet as operational tempo increases and data volumes grow, the same instrument begins to function de facto as an automated target generation system, with officers providing only formal authorisation. No regulatory mechanism tracks this drift in real time.
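Nothing in the EU framework mandates such tracking, but it is worth noting how little machinery it would require. The following is a purely hypothetical sketch: the telemetry (per-target human review time) and the threshold are invented for illustration.

```python
from collections import deque

class ControlDriftMonitor:
    """Hypothetical drift detector: flags when the average time a human
    spends reviewing each AI-proposed target falls below a floor,
    suggesting authorisation has become a formality."""

    def __init__(self, window: int = 200, floor_seconds: float = 15.0):
        self.review_times = deque(maxlen=window)  # rolling window of review durations
        self.floor_seconds = floor_seconds        # below this, "control" looks nominal

    def record(self, review_seconds: float) -> None:
        self.review_times.append(review_seconds)

    def drifted(self) -> bool:
        if len(self.review_times) < self.review_times.maxlen:
            return False                          # not enough observations yet
        average = sum(self.review_times) / len(self.review_times)
        return average < self.floor_seconds
```

A monitor of this kind would not define what "meaningful" control is; it would only make drift away from it visible, which is precisely the capability the current architecture lacks.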
Two specific EDF projects illustrate both the possibilities and the limitations of the system in concrete terms. FaRADAI (€18 million), with HENSOLDT's participation, develops adaptive AI for military environments with limited data. EU-GUARDIAN (€13.5 million), led by Spain's INDRA SISTEMAS SA, builds an automated system for managing cyber incidents in military networks. Both projects formally declare compliance with principles of human control, transparency, and accountability. The difficulty is that once projects conclude, there is no one to verify this compliance. HENSOLDT declined to respond substantively to questions about control mechanisms, directing journalists to the project coordinator and "the relevant EU authority". This is not an exception; it is the rule. The EU has no mechanism for post-project oversight of what happens to EDF-funded AI systems once funding ends.
This last point is foundational. The EDF Regulation requires human control at the research and development stage. But what happens subsequently — at the stage of procurement by a member state, deployment, and modification under real operational conditions — falls outside the jurisdiction of both the AI Act and the EDF. A system passes through the EU's normative architecture on entry and exits it irreversibly altered, with no obligation to return for re-certification. Analysts at EDRi have framed this precisely: by the time regulators have the opportunity to assess a system's compliance when it transitions to civilian applications, they can no longer fully examine how data was collected, how models were trained, or what assumptions were embedded during development.
This is where the normative-strategic dilemma that is fundamental to the European case comes into full view. The EU positions itself as a "normative power" with a value architecture that extends beyond commercial interests — the rule of law, protection of fundamental rights, human dignity. Military AI with high degrees of autonomy is potentially incompatible with this architecture. Yet pressure toward faster and more autonomous systems accumulates continuously — from Ukraine, from competition with the United States and China, from the internal demands of member states. This is not a situational contradiction; it is a structural dilemma that admits no technical resolution.
Whether the EU's rigorous normative architecture constitutes an asset or a liability is not a rhetorical question. The case for asset: this is the only system among the four in which normative constraints are embedded at the constitutional level and cannot be removed by a single ministerial memorandum. The safeguard against authoritarian or irresponsible AI deployment is real. The case for liability: that same safeguard simultaneously blocks the possibility of a rapid response in situations where speed is an operational necessity. An architecture designed to protect against one type of risk thereby creates vulnerability to another.
From this follows a paradox well documented in the academic literature: the EU monitors the actions of its own member states in the domain of military AI far more rigorously than it monitors the actions of adversaries or partners. The AI Act's requirements, the EDF's principles, the GDPR's norms — all of these create a dense web of constraints for European developers and militaries. Russian targeting systems, Chinese target-sorting algorithms, and the American Maven pass through no European normative review. The more conscientiously a European actor complies with its own norms, the greater the operational asymmetry it faces relative to those who observe no such constraints.
III. Challenges and Strategic Asymmetry
III.a. Internal Challenges
At the internal level, the EU is navigating tension between two divergent tendencies. The first is regulatory deepening: the AI Act, the GDPR, EDF human control requirements. The second is pressure toward deregulation, emanating both from technology companies and from member states that regard the regulatory burden as a competitive disadvantage. In 2025–2026, the European Commission began discussions on revising certain AI Act requirements in defence contexts. The question this raises is direct: whose interests should regulatory adjustment serve — the acceleration of technological progress, or the protection of human rights? In peacetime, the answer tilted toward the latter. Under conditions of mounting threat, it is becoming less settled.
The structural source of uneven development is the EU's own consensus architecture. Defence remains a sovereign competence of member states, and in the European Council each state holds an effective veto on military policy questions. This means that any binding decision in the field of military AI requires unanimity among 27 states with fundamentally different strategic cultures, threat perceptions, and defence budgets. Poland, bordering Belarus, perceives the question differently than Portugal.
The result is a well-documented hierarchy. Germany spent €90.6 billion on defence in 2024 — 26.4% of total EU defence expenditure. France contributed €59.6 billion, or 17.4%. Together they account for nearly half the bloc's total. At the other end of the spectrum sits southern Europe. Spain became the only one of NATO's 32 members to decline the commitment to raise spending to 5% of GDP. Italy, Greece, and Portugal allocate up to 60% of their defence budgets to personnel and pensions rather than procurement and R&D. This is not "free-riding" in the classical sense — money is being spent, but its structural allocation does not convert into combat capability and certainly not into AI development.
In the domain of military AI specifically, fragmentation is particularly acute. Germany simultaneously funds Helsing and Quantum Systems as competitors — deliberately, to test both solutions in parallel (the Uranos project). France builds a sovereign stack around Mistral and AMIAD. The Netherlands, Sweden, and Poland develop their own programmes. A social network analysis of European military AI collaboration reveals a density of approximately 14% of theoretically possible connections — meaning only one in seven potential linkages between participants actually exists. In other words, European coordination is sustained by three or four hub states, while the remainder participate peripherally.
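Network density is a standard graph statistic: realised links divided by theoretically possible pairs. A minimal illustration follows; the actor and edge counts are invented solely to reproduce the cited figure of roughly 14%.

```python
def density(n_actors: int, n_edges: int) -> float:
    """Density of an undirected collaboration graph: realised edges
    over the n*(n-1)/2 theoretically possible pairs."""
    possible_pairs = n_actors * (n_actors - 1) / 2
    return n_edges / possible_pairs

# e.g. 27 participants with 49 realised bilateral links:
print(round(density(27, 49), 2))  # 0.14 -- roughly one link in seven
```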
Attempts to overcome fragmentation exist and produce results, but do not address the underlying problem. The EDF (€7.3 billion for 2021–2027) funds joint projects, but reproduces the existing hierarchy: in 2024, France registered 167 participating entities, Germany 144, Slovakia and Croatia nine each. NATO's DIANA selects promising startups — 150 companies in 2025 with €100,000 in funding each — a figure that is structurally incomparable to American equivalents. HEDI coordinates without direct financing. All three mechanisms have embedded "human in the loop" as a condition of funding, which creates a notable paradox: the only pan-European mechanism capable of enforcing responsible AI exists not as law, but as a contractual condition for receiving a grant.
III.b. External Asymmetry
To understand why the EU's normative architecture is structured as it is, it is necessary to return to its genesis. European integration was constructed in the logic of the Cold War, where the primary task was not to defeat an adversary but to prevent war between European states themselves. Soviet invasion was a real threat, but the defining constructive principle of the EEC and subsequently the EU was mutual restraint through economic integration. This formed a durable institutional culture in which normative commitments are primary and military capability secondary.
The Cold War, for all its tension, provided one critical resource: interlocutors on the opposing side capable of reaching agreements. The Soviet Union, for all its ideological distance, was a nuclear state that understood mutual destruction as a real risk. Arms control negotiations — SALT, INF, CFE — were possible because both sides had an instrumental interest in concluding them. Contemporary Russia in the current conflict demonstrates a fundamentally different model: it systematically violates norms it signed, exploits the ambiguity of legal formulations as an operational resource, and does not regard the EU's normative architecture as a constraint warranting consideration.
This raises the question of whether the entire normative construction is a functional safeguard or a form of strategic self-restraint. The answer appears to depend on the time horizon. In the short term, it is self-restraint: normative commitments slow development, complicate procurement, and constrain the application of systems that may be critically needed for a defensive response. In the long term, it is a safeguard: the only system among the four in which value-based constraints are embedded at the constitutional level and cannot be removed by any particular government's political decision. Both interpretations are valid, and this is precisely what makes the European case analytically difficult.
The paradox proceeds as follows. States with the most developed norms for military AI — EU members — apply those norms to themselves but lack instruments to apply them to adversaries. Russia is not bound by AI Act Recital 24. China does not fall under EDF human control requirements. Iranian targets pass through no European risk assessment. This is not merely political inequality; it is a structural asymmetry in which states with stricter norms incur additional operational costs in situations where their adversaries bear none. The problem is not unique to the EU — the Geneva Conventions created analogous asymmetries with respect to non-state actors — but in the domain of AI, the velocity of technological change makes this asymmetry sharper than it has ever previously been.
The challenge of reaching agreements with authoritarian regimes on military AI represents a systemic problem. International regulation of LAWS has been under discussion within the CCW framework since 2013 — twelve years of negotiations without a binding result. The UN General Assembly resolution of November 2025 on LAWS was adopted by a majority, but without the support of any of the four actors examined in this series. The logic is straightforward: a treaty signed only by those who would have observed its provisions without it does not change the operational reality. A treaty that requires genuine self-restraint from all parties is unattainable as long as parties face divergent incentive structures.
The EU's structural trap, taken as a whole, looks as follows. Normative commitments are high and real — these are not declarations, but constraints embedded in the legal system. The capacity to rapidly redirect resources is limited — consensus architecture slows any collective decision. The gap widens in situations requiring rapid response against an adversary bound by no norms. This is precisely what the Danish parliamentary defence committee chairman had in mind when he expressed regret over the F-35 purchase and stated that "Denmark must avoid American weapons where possible" — this is not anti-Americanism but an attempt to preserve operational independence from a supplier who can change terms unilaterally. The logic of sovereignty presses on the system with the same force as normative commitments, and it prevails to different degrees in different member states.
IV. Practical Implementation
IV.a. Ukraine as a Testing Ground
Ukraine occupies a distinctive position in the European military AI context — it is simultaneously an operational testing ground, a data source, and a diplomatic resource. The testing ground is convenient precisely because combat experience is acquired on a third country's territory without direct reputational risk to the supplying state. Ukraine is deeply dependent on Western weapons deliveries, financing, and intelligence support — which grants suppliers considerable latitude in setting the terms of testing. Systems with AI that would have failed a domestic procurement process due to regulatory requirements or bureaucratic timelines find their way to testing through military assistance mechanisms that formally fall outside the same regulatory regimes.
Funding channels for technology delivered to Ukraine operate through several parallel tracks. The first is direct state military aid, in which the government finances deliveries and receives combat data as implicit compensation. The second is corporate contracts with partial state subsidy: Helsing supplied HX-2 drones with partial German government funding and received operational feedback for further iterations. The third is direct commercial transactions in which Ukraine purchases systems using foreign financial assistance.
Three testing mechanisms in Ukrainian conditions differ fundamentally in their logic. The first: the state funds a delivery as military aid while simultaneously receiving combat data — the German contract with Helsing HX-2, where the Bundeswehr is also evaluating the system for its own needs. The second: the company independently delivers to Ukraine, collects performance data, and the state subsequently decides on procurement based on combat results. The third: the deliberate funding of competing solutions for comparative testing — precisely the structure of the Uranos project, in which the Bundeswehr simultaneously funds the Airbus/Quantum Systems consortium for €55.8 million and Helsing for €80.4 million.
The results of Ukrainian testing are uneven. Among the positive cases: the Delta system, which compressed the situational awareness cycle from several hours to thirty minutes, and at the tactical level to five minutes. The Avengers AI system reportedly detects more than 12,000 enemy targets weekly. Helsing's HX-2, unlike its predecessor HF-1, demonstrated acceptable combat effectiveness. Among the failures: Anduril's Altius failed its 2024 trials, but provided valuable data on failure conditions; the original HF-1 proved too expensive and insufficiently effective, leading to the transition to HX-2. Importantly, failures do not constitute operational catastrophes — they constitute data, which is itself of value.
Here a significant asymmetry emerges between Europe and China in access to information about the Ukrainian conflict. European developers receive direct combat feedback through military assistance chains, NATO advisors, and bilateral data-sharing agreements. Chinese companies and the PLA receive information through the Russian-Chinese military cooperation channel, which is real but does not involve real-time data exchange. By several accounts, the PLA lacks access to actual performance data on Russian systems in Ukrainian conditions — only what Russia chooses to share.
The logistical comparison reinforces this asymmetry. The distance from German defence enterprises to the line of contact in Ukraine is approximately 1,500 kilometres. The distance from Chinese counterparts is approximately 7,000 kilometres, traversing the whole of Siberia with numerous transit points. This means the cycle of "identify a problem → receive data → send a fix → test under combat conditions" takes weeks for European developers. For Chinese developers, if they have access to this cycle at all through the Russian channel, it takes considerably longer. In a war where the iteration cycle for AI systems is measured in weeks, this constitutes a meaningful structural advantage for Europe.
IV.b. National Models
The French approach to military AI is the most institutionally developed in Europe and rests on three pillars: AMIAD as the state governance body, created in May 2024 to coordinate AI deployment across the armed forces; ASGARD as the classified computing infrastructure, positioned as Europe's most powerful classified AI-dedicated supercomputer; and Mistral AI as the sovereign language model serving as the primary provider. In December 2025, France signed a three-year framework agreement with Mistral, extending across the armed services, CEA, ONERA, and the Navy's hydrographic service. The choice of Mistral over GPT-4 or Claude is a deliberate sovereign decision: data does not leave French territory, there is no dependence on the American Cloud Act, and no operational vulnerability via a foreign supplier.
A significant limitation follows. Mistral, according to independent benchmarks, occupies a mid-to-lower tier in global rankings, substantially behind GPT-5 or Claude Opus on most analytical tasks. France has traded model quality for operational independence — a reasonable choice from the standpoint of strategic autonomy, but one that creates a qualitative gap relative to American systems using frontier models.
The Mistral–Helsing partnership, announced in February 2025, targets the development of Vision-Language-Action models — the next generation of systems in which a language model is integrated with visual perception and motion control. This is a promising direction, but the combination of structural constraints — administrative coordination through AMIAD without direct supervisory authority, weak data consolidation across EU states, and the qualitative gap in the base model — means that the French system remains an analytical tool for staff-level work rather than a combat targeting platform comparable to Maven.
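Neither company has published the architecture, but the Vision-Language-Action pattern itself is well described in the open literature: a visual encoder, a language backbone, and an action decoder chained into a single loop. The following is a generic sketch of that interface; all names and signatures are illustrative assumptions, not Mistral or Helsing code.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence

@dataclass
class Action:
    """A motion command, e.g. a waypoint update or gimbal adjustment."""
    name: str
    parameters: dict

class VisionEncoder(Protocol):
    def encode(self, frame: bytes) -> Sequence[float]: ...  # image -> embedding

class LanguageBackbone(Protocol):
    def reason(self, embedding: Sequence[float], instruction: str) -> str: ...

class ActionDecoder(Protocol):
    def decode(self, plan: str) -> Action: ...

def vla_step(vision: VisionEncoder, llm: LanguageBackbone,
             decoder: ActionDecoder, frame: bytes, instruction: str) -> Action:
    """One perception -> reasoning -> action cycle of a generic VLA pipeline."""
    embedding = vision.encode(frame)           # perceive
    plan = llm.reason(embedding, instruction)  # reason in language space
    return decoder.decode(plan)                # translate the plan into motion
```

The step that distinguishes a VLA system from plain decision support is the last one: the model's plan is decoded directly into motion rather than presented to a human, which is exactly why the human-control questions of Section II reappear here in sharper form.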
Germany has chosen an approach that might be called competitive testing rather than a single standard. The Uranos AI project, approved by the Bundestag in December 2025 and designed to support the 45th Panzer Brigade in Lithuania, fuses data from drones, ground sensors, cameras, and radar in near-real time. The procurement model is structurally notable: the Bundeswehr deliberately funds two competing solutions — the Airbus Defence/Quantum Systems consortium for €55.8 million and Helsing for €80.4 million. This is more expensive than choosing a single provider, but it generates comparative performance data that cannot otherwise be obtained without live testing. Germany also enacted a structural change at the constitutional level: in March 2025 the Bundestag approved an exemption of defence spending above 1% of GDP from the constitutional debt brake, opening the path to a €500 billion fund for defence and infrastructure.
Bureaucratic regulation and a less developed culture of risk-taking remain structural barriers for both countries. As of 2025, the United States has 469 more defence startups than Europe. The European regulatory burden — AI Act requirements, the GDPR, national certification procedures — creates high barriers to entry for small companies that in the American system could move from a DARPA grant to a military contract through several iterations. In Europe, the same path takes considerably more time and resources, which in practice means the market structurally favours large traditional defence contractors over startups with disruptive technologies.
The United Kingdom occupies a separate position in this picture. Following Brexit it formally exited EU defence structures — the EDF, EDA, and PESCO. However, the 2025 EU-UK trade agreement opened access for British companies to SAFE, the EU's €150 billion defence instrument. The turn toward closer cooperation with the EU after 2022 was driven principally by the threat of Russian aggression and the shift in the US position under Trump. The UK functions as a transit hub between American and European systems: American defence startups — Second Front Systems, Applied Intuition — choose London as a base for accessing European markets. Simultaneously, British companies themselves operate in both directions — BAE Systems, Tekever, and Cambridge Aerospace hold contracts with both the Pentagon and European states. This is not merely a geographic advantage: it is the institutional position of the only actor that interacts with both systems without formal membership in either.
IV.c. Financing and Projects
At the January 2026 EU Summit in Brussels, member state leaders publicly acknowledged a critical capability gap. Defence Commissioner Andrius Kubilius stated that the EU still has a long road ahead to create "the same business conditions as in the United States". Commission President von der Leyen had previously indicated that the EU needs to spend €500 billion to compensate for underinvestment in defence since the end of the Cold War.
ReArm Europe / Readiness 2030, announced in March 2025, envisions €800 billion in defence expenditure by 2029 — the EU's first ever coordinated programme at such a scale. The bulk of the funding will come from member states' national budgets under expanded fiscal flexibility through the national escape clause of the Stability and Growth Pact. By February 2026, 17 member states had activated this clause.
The EDF is the largest direct EU instrument for defence research funding. With a budget of €7.3 billion for 2021–2027, it finances joint R&D projects requiring the participation of at least three member states or associated countries. It is precisely the EDF that mandates "meaningful human control" as a condition of funding for projects involving autonomous weapons. The practical limitations of the EDF in the domain of military AI rest on three factors: the absence of post-project oversight, the restriction of funding to the research phase without coverage of procurement, and the reproduction of the existing hierarchy of participants.
This last factor creates the structural problem noted earlier: a foreign technology giant — Palantir, Anduril — faces significantly lower barriers to European contracts than a European startup. Palantir already works with the Bundeswehr and several European militaries and has established contractual relationships and certifications. A European startup must navigate the complex multi-tier EDF procedure requiring a multinational consortium, a long review cycle, and compliance with standards designed for the traditional defence sector. The more technologically mature foreign product is meanwhile already in service.
SAFE (Security Action for Europe) — €150 billion in loans to member states for defence procurement — is de facto a European protectionist instrument. The requirement for 65% European components in funded products structurally excludes American companies from the bulk of the funding. US Deputy Secretary of State Landau in December 2025 accused European ministers of "bullying" American companies out of Europe's defence build-up and of "protectionist and exclusionary policies". This is not incidental diplomatic friction: SAFE deliberately creates a space in which European companies receive structural advantage regardless of technological quality.
V. Assessment and Strategic Context
Europe's movement toward the American model — the state as a major customer, private companies as developers — carries evident advantages alongside less obvious risks. The advantages: speed of innovation, competitive pressure, access to civilian technologies. Helsing's progression in four years from startup to a €12 billion valuation, with combat deployment in Ukraine, represents a real achievement that a state-run development programme would have been unlikely to replicate on the same timeline. The risks: when the red line belongs to a company rather than a state, its durability depends on the company's commercial considerations. When the state ceases to be the monopolist on legitimate use of force and becomes instead a buyer from corporations that shape their own doctrines of application, the very nature of sovereignty in defence changes.
The qualitative gap between European and American systems remains substantial. European defence R&D spending amounts to 4% of the defence budget, against 16% in the United States. The US has Maven — an integrated platform with Pentagon-wide coverage and operational validation in active combat. Europe has a collection of components: Helsing for drones, Mistral for staff-level analytics, Delta for situational awareness. These are valuable and functional components, but they do not form a systemic architecture comparable to the American one. Connecting them into a unified network is a task that European fragmentation makes structurally difficult.
The structural obstacles to consolidation are durable precisely because they are embedded in the EU's architecture rather than resulting from a deficit of political will. Consensus-based decision-making on defence means that any common standard in military AI requires the agreement of states with fundamentally different strategic cultures and defence priorities. Member state sovereignty in defence means the EU lacks the competence to bind members to specific technical solutions. The absence of a single supervisory body means that EDF normative requirements have no enforcement mechanism beyond funding conditionality. The protectionist instincts of member states mean that even when an American product is technologically superior, political pressure may produce a different choice.
All of this makes the European case the most intellectually candid of the four examined here. The EU does not conceal the contradiction between normative commitments and operational requirements — it has institutionalised it. But institutionalising a contradiction does not resolve it: it merely makes it manageable in peacetime. What happens to this architecture under the pressure of real conflict is a question that Ukraine is beginning to answer, and one that the remaining parts of this series examine in greater detail.
Conclusion
The diagnosis is straightforward: the EU has written the world's most detailed rules for military AI — and has not created a mechanism for enforcing them where it matters most, which is in actual operations.
The outlook is not encouraging. At the REAIM Summit in Spain in February 2026, both the United States and China declined to sign even a non-binding declaration. This means pressure toward deregulation will only intensify — and the EU will face a choice between preserving its normative architecture and preserving its strategic relevance.
One recommendation: the EU does not need another document. It needs a mechanism for verifying existing commitments under operational conditions. Without that, "meaningful human control" will remain a contractual condition for receiving a grant — not a norm that governs the actual behaviour of systems in combat.
The next part examines the opposite pole: a state that also declares responsible use of AI — but within a system where an independent actor capable of holding that line does not exist as a category.
