Intelligent Autonomy Is the Key to Decision Dominance and Winning the Next War

Intelligent autonomy is the key to Decision Dominance, enabling faster, coordinated decisions across the battlespace by reducing human burden and orchestrating systems at scale.
Modern warfare is as much a question of decision-making as it is of hardware. There are three capabilities that will determine the outcome of the next peer-level conflict:
Modernizing the industrial base
Scaling autonomous systems
Achieving Decision Dominance
Of these, Decision Dominance is the most crucial — not just because it’s the only one that can be achieved in the immediate term (it will take years to both rebuild the U.S. industrial base and develop autonomous systems that are attritable and deployable at scale), but because it’s the key to achieving local force advantage through the effective use of large-scale autonomous systems.
The side that wins will be the one that can outpace and out-decide its adversary, orchestrating and integrating data from sensors, manned platforms, and unmanned platforms across a modern, resource-constrained theater spanning 100 million square miles.
This is achievable now, if we move quickly.
Unblocking the OODA Loop Bottleneck
In the context of a modern peer conflict, the OODA Loop (Observe, Orient, Decide, Act) is catastrophically vulnerable. There is simply too much sensor data, often corrupted with misinformation, overloading human cognition and making it nearly impossible to orient.
At the same time, commanders are being asked to perform complex time-space calculations across a vast theater using whiteboards and Microsoft 365, all while attempting to navigate decision-making workflows through siloed planning horizons and limited communication to the front lines. This makes it nearly impossible to decide, especially when the cost of a wrong decision is the loss of an aircraft carrier.
Put simply, the problem is that we cannot analyze and make decisions from the existing multimodal data fast or accurately enough. Today, it takes 96 hours to plan the next 96 hours. In that time, a peer adversary like China could land 30,000 troops on Taiwan.
Unblocking this bottleneck will require building technologies that compress the Orient and Decide steps: fusing and analyzing petabytes of sensor data in real time and converting that analysis into decisions across the kill chain. This capability must operate seamlessly across all warfighting functions, all decision horizons, and every echelon from the command center to the tactical edge.
This is what we call Decision Dominance.
Intelligent Autonomy: The Key to Decision Dominance
Our ability to achieve Decision Dominance depends heavily on how effectively we can enable what we call “intelligent autonomy.”
At its core, intelligent autonomy does three things: (1) it alleviates pressure on human decision-making, (2) it enables decentralized command and control, and (3) it maximizes the effectiveness and efficiency of autonomous systems.
Reducing Pressure on Human Decision-Making
Humans can no longer be involved in every step of the planning and decision-making process. The data volume is too high, the variables are too numerous, and the pace is too fast. Decision planning breaks under that kind of pressure.
Instead, systems must handle the continuous analysis, coordination, and optimization required to operate at speed, while allowing the right amount of human-in-the-loop decision-making at the moments where humans add the most value.
Intelligent autonomy allows commanders to set intent, let the system handle the underlying planning and recalculation, and then return to evaluate options and make decisions. They do not need to watch each step unfold, but they do need to understand what the system has done, what it is recommending, and why.
Enabling Decentralized C2
Reducing cognitive load becomes even more important in communications-degraded environments.
In a peer conflict, command and edge will not be able to stay continuously connected. Units at the edge will often have to operate for long periods without guidance from higher headquarters, but their decisions still need to reflect more than the immediate local picture. They need to account for campaign intent, theater-wide resource allocation, and tradeoffs being made elsewhere.
Intelligent autonomy makes that possible. It allows units to act locally and independently without drifting from the larger mission when communications drop out.
Maximizing Autonomous System Effectiveness and Efficiency
Today’s autonomous systems succeed in reducing risk to humans, but they don’t yet meaningfully reduce the burden on them. That’s because operators still have to task, coordinate, and adapt these systems during employment, often one platform at a time.
Such systems are useless operating in a vacuum. Their sensor data and observations need to be integrated with other systems. Their tasking needs to be orchestrated not just within a swarm, but with other swarms and other manned platforms across the theater to achieve campaign objectives.
Intelligent autonomy optimizes autonomous systems in the conduct of both sensing (perception) and effecting (action), resulting in their effective and efficient coordination at scale.
Leveraging Reinforcement Learning to Power Intelligent Autonomy
Delivering intelligent autonomy will require a suite of AI tools and technologies — what we call an “AI village.” Among these, reinforcement learning (RL) will play a major role, building on the AI advancements that have defined the past few years.
Most recent commercial progress has come from large language and vision models trained on labeled datasets. Those models are good at pattern recognition, classification, and prediction, and they are useful in many settings. But peer conflict poses a different kind of problem. It requires planning across multiple time horizons, optimizing under constraints, and reasoning through how actions unfold in physical space over time. And there is no dataset for modern peer-level conflict at the scale we would encounter in the Pacific Theater.
Systems have to be trained in simulated environments instead.
Reinforcement learning provides a way to do that. RL models interact with an environment, make decisions, and improve through repeated exposure to outcomes. In this setup, the environment is as important as the model that learns from it. It has to reflect how platforms realistically behave under different conditions, what choices are actually available to decision-makers, and how those choices shape outcomes across the broader system.
Building such environments is hard. The physics have to be accurate. The decision space has to match real operational choices across functions and echelons. And human domain expertise has to be built into the system to cover what physics alone cannot capture, constrain the problem appropriately, and keep the outputs grounded in operational reality.
The training environment is the data, and the quality of that environment comes from the depth of human knowledge encoded into it.
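As a rough illustration of the loop described above (an agent acting in a simulated environment and improving from the outcomes it observes), the sketch below trains a tabular Q-learning agent in a deliberately toy environment. The `LineWorld` environment, its reward values, and the hyperparameters are hypothetical stand-ins chosen for illustration; a real training environment would need to encode the physics, decision spaces, and domain expertise discussed above.

```python
import random

class LineWorld:
    """Toy 1-D simulated environment: the agent starts at position 0
    and must reach GOAL. A hypothetical stand-in for illustration only."""
    GOAL = 4

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action 0 = move left, action 1 = move right (clamped to bounds)
        self.pos = max(0, min(self.GOAL, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.GOAL
        reward = 1.0 if done else -0.01  # small step cost encourages speed
        return self.pos, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    env = LineWorld()
    q = {(s, a): 0.0 for s in range(env.GOAL + 1) for a in (0, 1)}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            a = random.choice((0, 1)) if random.random() < epsilon \
                else max((0, 1), key=lambda a: q[(s, a)])
            s2, r, done = env.step(a)
            # one-step Q-learning update toward the observed outcome
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy: best action from each non-goal state.
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)}
```

The point of the sketch is not the algorithm (any modern RL method could stand in) but the structure: all of the "data" the agent learns from is generated by the environment's `step` function, which is why the fidelity of that environment, rather than a labeled dataset, determines what the agent can learn.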
TL;DR
Modern warfare has become a decision-making problem. The volume of data, the scale of operations, and the speed required have outpaced our ability to plan and act using today's legacy systems.
Decision Dominance closes this gap. By compressing the Orient and Decide steps, units can operate on intent and decisions can be coordinated rapidly across the entire battlespace.
The key to Decision Dominance is intelligent autonomy, which enables three things: humans focusing only on the decisions that matter, edge units operating independently yet coordinated toward campaign objectives, and autonomous systems acting efficiently and effectively together.
Achieving this now will require a suite of AI technologies including deep reinforcement learning, but it can be done. And it will determine who wins the next global peer-level conflict.




