
By Luisa Rodriguez | Watch on YouTube | Listen on Spotify | Read transcript


Episode summary

There is this vision of a possible future that, as militaries integrate artificial intelligence and autonomy more fully across the force, we might reach some tipping point where the pace of combat action is just too fast for humans to respond, and humans have to be completely out of the loop.

I think what’s scary about that possible vision is that humans are then no longer in control of violence and warfare. And that raises moral questions, but it also raises just really fundamental questions of how do you control escalation in wartime? How do you end a war that’s happening at superhuman speeds? And we don’t have good answers for that. And I think maintaining human control over warfare is absolutely essential to making sure that we can navigate this transition towards more powerful AI in a safe way.

— Paul Scharre

In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” The system told him the United States had fired five nuclear weapons at the Soviet Union. Protocol demanded he report it to superiors, which would almost certainly trigger a retaliatory strike.

Petrov didn’t do it. He had a “funny feeling” in his gut. He reasoned that if the US were actually attacking, they wouldn’t just fire five missiles — they’d empty the silos. He bet the fate of the world on a hunch that the machine was broken. He was right.

Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, asks a terrifying question: What would an AI have done in Petrov’s shoes?

Would an AI system have been flexible and wise enough to make the same judgement? Or would it have launched a counterattack?

Paul joins host Luisa Rodriguez to explain why we are hurtling toward a “battlefield singularity” — a tipping point where AI increasingly replaces humans in much of the military, changing the way war is fought with speed and complexity that outpace humans’ ability to keep up.

Militaries don’t necessarily want to take humans out of the loop. But Paul argues that the competitive pressure of warfare creates a “use it or lose it” dynamic. As former Deputy Secretary of Defense Bob Work put it: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?”

Once that line is crossed, Paul warns we might enter an era of “flash wars” — conflicts that spiral out of control as quickly and inexplicably as a flash crash in the stock market, with no way for humans to call a timeout.

In this episode, Paul and Luisa dissect what this future looks like:

  • Swarming warfare: Why the future isn’t just better drones, but thousands of cheap, autonomous agents coordinating like a hive mind to overwhelm defences.
  • The Gatling gun cautionary tale: The inventor of the Gatling gun thought automating fire would reduce the number of soldiers needed, saving lives. Instead, it made war significantly deadlier. Paul argues AI automation could do the same, increasing lethality rather than creating “bloodless” robot wars.
  • The cyber frontier: While robots have physical limits, Paul argues cyberwarfare is already at the point where AI can act faster than human defenders, leading to intelligent malware that evolves and adapts like a biological virus.
  • The US-China “adoption race”: Paul rejects the idea that the US and China are in a spending arms race (AI is barely 1% of the DoD budget). Instead, it’s a race of organisational adoption — one where the US has massive advantages in talent and chips, but struggles with bureaucratic inertia that might not be a problem for an autocratic country.

Paul also shares a personal story from his time as a sniper in Afghanistan — watching a potential target through his scope — that fundamentally shaped his view on why human judgement, with all its flaws, is the only thing keeping war from losing its humanity entirely.

This episode was recorded on October 23-24, 2025.

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

The interview in a nutshell

Paul Scharre, vice president at the Center for a New American Security and former US Army Ranger, argues that we are inching toward a “battlefield singularity” — a point where the speed of war outpaces human cognition. While he believes this shift will take decades, he outlines the following trajectory and risks:

Incentives for speed will eventually push humans “out of the loop”

While current drones are largely remotely controlled, competitive pressures might force militaries to hand over control to machines.

  • The “battlefield singularity”: Just as high-frequency trading moved too fast for human traders, warfare may reach a tipping point where human reaction times are a liability. This creates the risk of “flash wars” — accidental, rapid escalations similar to stock market flash crashes.
  • Swarming warfare: Future wars will likely involve swarms of thousands of drones that self-coordinate, self-heal their networks, and attack from multiple directions simultaneously.
  • Democratisation of violence and changes in the global balance of power: AI and software scale differently than hardware. This relatively benefits smaller actors, allowing them to challenge major powers with cheap, lethal technology.

AI integration into nuclear systems creates instability

Paul is deeply concerned about the automation of nuclear command, control, and communications, specifically regarding the “always/never dilemma” (you always want weapons to work when authorised, but never when not).

  • The risk of false alarms: Paul cites the 1983 incident in which Stanislav Petrov correctly identified a warning of a US missile launch against the Soviet Union as a false alarm, based on “gut feeling” and context — factors an AI trained on limited data might miss.
  • Destabilising transparency: AI surveillance could theoretically track mobile missiles and submarines, making a “first strike” to disarm an enemy seem plausible. Even if technically difficult, the fear of this vulnerability drives nations like China to expand their nuclear arsenals to ensure survivability.
  • Lack of training data: We cannot reliably train AI for nuclear crises because we lack real-world data on nuclear exchanges, and synthetic data/wargames may not capture the complexity of reality.

Cyberwarfare will likely see the fastest and most dangerous AI adoption

Unlike physical warfare, which is bound by logistics, cyberwarfare occurs in the native environment of AI and is already moving at machine speed.

  • Evolutionary malware: We may transition from static viruses to intelligent, adaptive malware that evolves to counter defences and self-replicates, behaving more like a biological pathogen than traditional software.
  • Offence-defence balance: Currently, cyber favours the attacker (who only needs to find one vulnerability) over the defender (who must patch everything). While AI could help defenders automate patching, Paul worries that risk-tolerant attackers will deploy unpredictable, agentic AI tools more aggressively than cautious defenders.
  • Critical infrastructure vulnerability: As society digitises everything from power grids to water treatment plants, the attack surface grows just as AI tools to exploit it become more accessible.

US-China dynamics: An “adoption competition,” not an arms race

Paul disputes the narrative of a spending “arms race,” noting that AI accounts for roughly 1% of the US Department of Defense budget. The real competition is over which organisational culture can successfully adopt the technology.

  • China’s specific advantages are overrated: While China has massive data collection, US companies have better global reach (and thus global data).
  • The US advantage lies in compute and talent: The US dominates in chip technology (via Nvidia/TSMC controls). Crucially, the world’s top AI talent — including Chinese researchers — overwhelmingly prefer to live and work in the US.
  • The “authoritarian dilemma”: While China can implement surveillance rapidly, Paul argues that authoritarian systems eventually become brittle, whereas democratic “messiness” often leads to better long-term error correction.

Policy solutions: Bans are unlikely, but “circuit breakers” are vital

Paul views a total ban on autonomous weapons as unrealistic given the military incentives, but proposes specific governance measures:

  • Anti-personnel ban: A ban specifically on targeting humans (as opposed to tanks or ships) might be achievable because it has less military utility and higher moral repugnance.
  • Nuclear human control: The US and China have already made initial agreements to keep humans in the loop for nuclear decisions; Paul wants this expanded to the P5 nations with clearer definitions of what “control” means.
  • Independent verification: For high-stakes systems, we should use “dual phenomenology” — cross-checking alerts with completely independent algorithms and datasets before acting. (A rough code sketch of this idea follows this list.)
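
As a rough illustration of the “dual phenomenology” idea Paul describes, here is a minimal sketch in Python. The detector names, scoring functions, and 0.9 threshold are purely hypothetical and not from the episode; the only point is that an alert escalates to humans when two independently built detectors, trained on separate data, both agree, and that disagreement is treated as a probable false alarm.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Detector:
    """One independent warning chain: its own algorithm, trained on its own data."""
    name: str
    score: Callable[[Dict[str, float]], float]  # estimated probability of a real attack

def should_escalate(event: Dict[str, float], a: Detector, b: Detector,
                    threshold: float = 0.9) -> bool:
    """Escalate to human decision-makers only if both independent detectors
    exceed the threshold; a single high score is treated as a likely false alarm."""
    return a.score(event) >= threshold and b.score(event) >= threshold

# Hypothetical usage: two stand-in detectors reading different sensor feeds.
radar = Detector("radar_model", lambda e: e.get("radar_confidence", 0.0))
infrared = Detector("satellite_ir_model", lambda e: e.get("ir_confidence", 0.0))

event = {"radar_confidence": 0.95, "ir_confidence": 0.30}
print(should_escalate(event, radar, infrared))  # False: only one warning chain agrees
```

The design mirrors the long-standing nuclear “dual phenomenology” norm of requiring two independent sensing methods (for example, radar and satellite infrared) to confirm a launch warning before acting on it.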

Highlights

How will AI transform the nature of war?

Paul Scharre: I think one major paradigm shift that could occur, and is probably eventually likely to occur over the next several decades, is towards swarming warfare. Where you can imagine large numbers of autonomous drones in the air, at sea, undersea, on land, that are networked, that are working together cooperatively and autonomously adapting their behaviour on the battlefield to respond to events.

Right now we’re seeing massive numbers of drones deployed in Ukraine, certainly tens of thousands of drones on the front lines. But not only are those drones, for the most part, remotely controlled; they’re also not really working cooperatively in any way. So even if the drones had the ability to autonomously go out and find their own targets, having 10,000 drones that are independently finding targets is very different from 10,000 drones that are working cooperatively together.

You could have much more dramatic effects on the battlefield by having swarms that are able to simultaneously attack from multiple directions, that have self-healing communications networks and self-healing minefields, and that have the ability to react to what humans are doing, to what the enemy is doing, in real time — not only at a faster speed, but also at a greater scale of coordination than is possible with humans.

And I think the real dramatic change here is not actually in the physical technology. I mean, drones are interesting, they could do neat things, but it’s in this sort of cognitive dimension — and in particular here of what the military would call “command and control.”

Militaries today are organised in this very hierarchical fashion: you have teams and squads and platoons and companies and battalions, and you have these organised predominantly because of the limitations of human cognition. So if you put a human commander in charge of 10,000 soldiers, and they were directly issuing orders to each of those 10,000, that would be totally impractical. There’s no way to do that. That’s not how militaries are organised, that’s not how corporations are organised.

If you look at sports, it’s really interesting that a lot of team sports have somewhere between maybe five and a dozen or so players on the field. Now, imagine a game of soccer where you had 100 players on each side and 50 balls: you’d have to have a completely different way of organising that.

But robots or swarms could do that differently. They could perfectly coordinate their behaviour and ensure that they’re optimally using those resources to hit the soccer balls, go up to the enemy targets, whatever it is. So I think that’s a potentially really dramatic shift in how militaries fight in the future.

There is this vision of a possible future that, as militaries integrate artificial intelligence and autonomy more fully across the force, we might reach some tipping point where the pace of combat action is just too fast for humans to respond, and humans have to be completely out of the loop.

I think what’s scary about that possible vision is that humans are then no longer in control of violence and warfare. And that raises moral questions, but it also raises just really fundamental questions of how do you control escalation in wartime? How do you end a war that’s happening at superhuman speeds? And we don’t have good answers for that. I think maintaining human control over warfare is absolutely essential to making sure that we can navigate this transition towards more powerful AI in a safe way.

Why would militaries take humans out of the loop?

Luisa Rodriguez: My sense is that currently the Defense Department and others who are going to be in charge of these decisions do not want to take humans out of the loop. So why does that seem like a likely thing that’s going to happen, and how does that drive things forward faster?

Paul Scharre: Yeah, that’s a great question. I think this push/pull is very common in these types of major revolutions in military affairs, where you have old institutions and ways of fighting that are not necessarily super enthusiastic about the new way of fighting. The cavalry, for example, wasn’t particularly enthusiastic about tanks.

And right now, certainly within the US military, there’s a strong belief that humans should remain “in the loop.” That’s not actually official US policy, but certainly when you hear US senior military officers talk about it, they’ll talk about it that way: that they want humans in control. I think that’s because there’s just a healthy scepticism about these systems, for all the reasons that everyone who’s ever interacted with AI could understand: sometimes they get it wrong, and there’s value in humans making these decisions.

I think the ultimate arbiter is what works on the battlefield. That is what will drive how militaries change. Militaries tend to be often very conservative with these types of changes, in part because you never really know what’s going to work until militaries fight a war.

Luisa Rodriguez: Right. So the idea is there is this conservatism, and maybe it takes decades, but eventually the technology… I mean, it’s my suspicion that the technology just is very likely to improve enough that you’re really disadvantaging yourself if you don’t use it. Does that sound right to you?

Paul Scharre: I think that there’s a trajectory towards greater automation and greater speed and tempo of war. I do think that militaries have choices about exactly how they implement that technology. And the important thing for militaries — this is actually true of most military tactical revolutions — what matters most is not actually getting the technology first, or even having the best technology in some sense; it’s figuring out the best ways of using it. It’s figuring out, like, What do I do with a tank? What do I do with an aeroplane?

And I think there’s value to human cognition. There are lots of types of cognitive problems that at least today are very challenging for AI. And even if AI is cognitively better, there’s probably value in keeping humans in control of warfare. The question is how to maintain that balance in the best possible way. I think that that’s going to be a really important question in the next several decades.

Luisa Rodriguez: But do you think that value is really likely to persist long enough that, at some point at least one country decides that taking the human out of the loop is strategic and then does better? And if they do better, that creates this pressure for their adversaries to take them out of the loop?

Paul Scharre: So former Deputy Secretary of Defense Bob Work, who was really a pioneer in bringing artificial intelligence into the US military, has this quote: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?”

Which is a colourful way for a senior leader to be talking about Terminators, but I think it does highlight this really difficult problem of this potential for an arms race in speed in militaries: that there’s this incentive towards faster reaction times and decision making that might pressure militaries to do the same — even if they don’t want to, if they’re not comfortable with that, they have to go faster to keep up, similar to what we’ve seen in financial markets with high-frequency trading.

And that could lead to a dangerous situation, where you have this dangerous arms race in speed in the military. I’ve heard some people argue we ought to have some limits on that. How do you put a speed limit on warfare? Seems like an appealing idea. I don’t know how you do that in practice to try to put brakes on this tendency, which is I think a big risk as militaries are adopting AI and autonomy.

What does a “battlefield singularity” look like?

Luisa Rodriguez: Just to make sure we get a concrete picture of what this battlefield singularity, or sometimes called hyperwar, would look like: Can you describe what it looks and feels like? What kinds of weapons? How have they been automated? What do conflict engagements look like? Are there any humans in the loop at any level?

Paul Scharre: Let’s start with where we are today, and I want to kind of paint a picture for how that might grow over time. So since at least the 1980s, countries have had automated air and missile defence systems that can shoot down incoming threats when the speed of these incoming missiles or rockets or artillery or aircraft is just too fast for humans to respond.

For example, a US Navy warship has an automated mode on the air and missile defence system that can be activated where there might be missiles coming in — and there’s just so little time for humans to respond, and you might have multiple threats coming from different directions — so that then the machine, once it’s activated by people, can automatically sense all these threats and shoot them down.

Now, we’ve had these systems around for decades. They haven’t really been widely used in conflicts in these automated modes. And there have been a couple examples of accidents: there were a couple fratricide incidents in 2003 with the Patriot air and missile defence system. But that’s something that we have some experience with: there is this very narrow domain today of machine control over warfare where really, humans just can’t be in the loop in this area.

I think what I would envision is that this domain of machine warfare grows over time, and then several decades from now, we end up in a world where something like that exists at a much larger scale along the entire front, where there are swarms of thousands of drones on both sides, and they’re dynamic and responding to enemy behaviour. And there are missiles being launched and striking targets, and there are AI systems identifying new targets which are moving and mobile. And humans can’t possibly be in the loop to respond to that enough: it’s too slow.

And humans are maybe observing this action. Maybe you could think of it the way a coach might on the sidelines, right? Humans could have some degree of direction of, “I’m going to change the higher-level guidance for these systems, or I might try to add new parameters to the operating systems.” But humans can’t really, in real time, intervene.

I think we have a really interesting example of this exact kind of behaviour in financial markets, stock trading — where there’s this whole new domain of high-frequency trading where humans can’t possibly intervene in the milliseconds that these algorithms are responding. And then we’ve seen examples of flash crashes that come with that. So I think the scary analogy there would be, could we have something like a flash war, where the interactions are so fast that they escalate in ways that humans really struggle to control?

I think that’s a really scary proposition. How do you find ways to stop that? In financial markets, they put in place circuit breakers that can take trading offline if we see movements that are too volatile. There’s no good way to do that in warfare. There’s no way to call a timeout. How do you maintain human control over warfare that’s happening at superhuman speeds?

How would we even train these systems?

Paul Scharre: The problem with some system making that kind of determination is, in particular: what training data do you use for a surprise attack? We don’t actually know what that looks like. And we know that AI systems often perform quite badly when pushed outside the bounds of their training data. So if you put one in a novel situation, maybe you get something good, maybe you don’t.

There’s this really interesting example from the ’80s where the Soviets created this intelligence system called VRYAN that was designed to predict the likelihood of a surprise US attack. What it was designed to do was collect data on all of these things that the Soviets thought might be indicators that the US is preparing for a surprise attack on the Soviet Union: things like the US stockpiling blood in blood banks, the locations of senior US political and military leaders. So you could see indicators that maybe it looks like they’re getting ready for something. That actually sounds like a really interesting use of automation.

What happened in practice was KGB agents were basically incentivised to generate reports and feed data into the system. The data that was coming in was just bogus, because people were judged based on going out and getting information and bringing it in, so the whole thing relied on bad data.

I think that’s an example of some of the flaws of these systems. You could imagine some AI intelligence system that’s looking at all of these different indicators — troop activity, the locations of senior political leaders, and where we see their nuclear submarines and bombers and mobile missile launchers being moved — and it sort of comes to some judgement: “OK, this is my probability.”

One of the problems is: how do we verify that that’s accurate? We can verify that a lot of other AI things are good and are performing well because we could test them in their actual operating environment. We can look at image classifiers and we can get to ground truth: What is the thing? Is it accurate? We could take self-driving cars and drive them in the operating environment. In this case, we wouldn’t have any great way to measure the baseline of just like, is it good at this at all?

And then of course, a lot of AI systems are super opaque. Let’s say your AI system says, “I think there’s a 70% probability that there’s an attack.” Why? Maybe it can tell you something, but that doesn’t necessarily mean that the story it’s telling you is accurately reflective of the underlying cognitive processes inside that neural network, of course.

So I think it seems like a really dicey way to use AI. I do think that over time, militaries and intelligence communities are going to start to integrate AI in this fashion. I think they’re likely to be very conservative though, which is probably not a bad thing in this case.

AI warfare and the balance of power

Luisa Rodriguez: It sounds like this might cause us to enter a world where smaller, poorer states, or even non-state actors, can actually threaten much larger militaries by leveraging these cheap automated weapons systems that are offence-dominant. How much is that going to change balance of power dynamics? Are there going to be more wars fought because it’s cheaper to start them, including by small groups that don’t have as many resources?

Paul Scharre: Well, that’s a good question. The economics of it, I feel, are valid: it benefits smaller groups relatively more.

I’ll give another example here. Ukraine basically neutralised Russia’s Black Sea fleet by sinking and damaging several warships worth hundreds of millions of dollars by spending a few tens of millions of dollars on small drone boats laden with explosives that could come in and sink a warship. I think we’re going to see those tactics copied more.

Whether that leads to more wars is tricky, because it depends a lot on what you think the mechanics are that drive wars. I think one mechanic can be if there’s a disagreement between actors about the relative balance of power. And here’s a place where I think you could argue AI on net does one or the other; I could see arguments both ways.

The argument that AI might make conflicts more likely would be that, one, it’s just a disruptive change, so there’s more uncertainty about how this is used and who’s at an advantage here. And some countries might think, “We have the AI, we can win now.” In particular, countries might feel overconfident about AI, because humans often seem to overestimate what AI can do in terms of its abilities. We see this really dramatically, unfortunately, with the early implementation of autopilot in Teslas, where there were a number of fatal accidents with people sort of over-trusting the automation. So that could be one kind of risk.

Another way that AI might make wars more likely is that, as more military capability is embedded in software and algorithms, it’s much harder to measure. You can measure aeroplanes, you can measure ships, you can measure tanks — and we say, “Look, they have three times as many aircraft as we do and twice as many tanks, so maybe we shouldn’t fight a war with them.” But when it’s algorithms, it’s really hard to know. How do we know if our swarming algorithm is better than their swarming algorithm? That’s actually really tricky. Other than like, “We’ll fight them and find out,” that’s going to be really hard. So that might lead to more uncertainty and disagreements.

One way that AI might be more stabilising is if it creates more transparency and greater ability for countries to just see what others are doing, and may make it harder to carry out surprise attacks. I think we’ve actually got really solid evidence of this. We saw this in the runup to Russia’s invasion of Ukraine, where the US, because of just greater intelligence and satellite imagery, was able to have really great visibility into what Russia was doing and then share it — in a really impressive diplomatic move, share that intelligence, declassify it, share it with European allies to get Europeans on board that this was something that was going to occur.

You could see AI just makes it harder to amass forces for surprise attack, and that takes away some of the incentives. We see a little bit of this even tactically on the frontlines in Ukraine. Despite the fact that there’s all these drones, and the drones are kind of hard to defend against, the frontlines are really static — for a lot of reasons, but drones seem to be making the frontlines more static. And one of the things that we’re hearing from people on the frontlines is there’s no way to amass forces for an assault: because they have drones overhead, they can see what you’re doing, so they know that you’re going to make an attack in this area and then they can defend against it. So it contributes to this stasis.

And so all of which is like, I don’t know, you could see arguments on either side. And a lot of it depends upon how the technology is implemented by countries.

Malware that looks like biological threats

Paul Scharre: I think the interesting question is: do we get to the point down the road where malware is much more intelligent and adaptive than today? Today you have malware that spreads on its own, that is self-replicating, that acquires resources — like botnets that acquire computing resources and then can use them for things like distributed denial-of-service attacks, can sort of leverage that. But when there are adaptations to malware, those are done manually.

Conficker, this huge worm that spread across the internet several years ago, is a really interesting case where there were a bunch of variants that evolved over time. So the taskforce that was put together — of law enforcement and intelligence communities and the private sector — to combat this worm was fighting different variants over time, but those were all designed by humans.

So do we get to the point where you have malware that’s actually able to evolve and adapt somewhat? Either it’s more clever when it’s on a computer network and able to maybe hide itself in response to threats, or adapt what it’s doing to the network itself? You can imagine a more capable reasoning model that could assess what’s going on on computer networks and then make some reasonable judgement about what to do or actually change itself over time, which would seem like a much more dangerous kind of threat.

And we’ve seen sort of concerning attempts by language models to engage in behaviour like self-exfiltration: copying themselves in ways that would try to preserve their goals, or copying themselves to overwrite the goals that a human would give a new system. Now, the models aren’t very good at that yet, because they’re just not good enough yet at software engineering. But you sort of have all of the pieces in place: self-replication already exists, the ability to acquire computing resources already exists, and there’s the tendency of models — it’s not common, but it happens — to attempt some of these concerning behaviours like self-exfiltration.

It looks like right now the missing piece is just that they’re not good enough. That’s going to get better. You can really count on this getting better. On what timeline, I don’t know. So I think that that’s a very troubling possibility in the long term: that you could end up with malware that maybe feels more like a biological threat.

Are the US and China in a race to build AI into their military systems?

Luisa Rodriguez: Are the US and China currently in a race to automate and build AI into their military systems?

Paul Scharre: Well, they’re certainly in a competition militarily to maintain an advantage over the other and to adopt AI.

Sometimes it’s characterised as an arms race. It is clearly not an arms race, if you use that term in a precise academic way. The way that academics talk about arms races — and we have historical examples, like the nuclear arms race or the battleship construction arms race in the early 20th century — is to define them as above-normal levels of defence spending driven by two countries competing against one another.

It’s kind of hard to pin down numbers of AI spending inside militaries. Bloomberg had done some really interesting work a couple years ago poring through the Defense Department budget to try to figure this out. Like how much is the Defense Department spending on artificial intelligence? And they don’t have a good answer internally, DoD doesn’t, interestingly.

Bloomberg came up with about 1%. That’s not an arms race. That’s not even a priority. When you have senior defence leaders saying, “AI is our number one priority” — no, it’s not. Your Joint Strike Fighter is your number one priority, when you look at what you’re actually doing.

So I think it’s clearly not an arms race. I do think that there is an adoption competition in AI of how do militaries find ways to import the technology. Both the US and China are going to have access to roughly the same level of AI technology. Whether OpenAI is a couple months ahead of DeepSeek just doesn’t matter, because let’s say that there’s a gap of six to 12 months between leading labs in the United States and China. Well, if the US military is charitably five years behind the frontier of AI in adoption, maybe more like 10, that one-year advantage means nothing. It’s really a contest of adoption.

But the most critical thing becomes figuring out how you use this technology in a way that’s constructive, that actually advantages warfighters. And I think that’s a tricky one. It has a lot to do with how militaries organise themselves and create the right incentives internally for experimentation and reorganisation. And I think it’s just not actually clear who has an advantage there.
