An algorithm for a wingman: The coming robot revolution in future wars

Last month, one of the Air Force’s top F-16 pilots sat down at the controls of a simulator to battle a newfangled foe in an old-fashioned, one-on-one aerial dogfight.

The expert pilot, call sign Banger, a graduate of the prestigious Air Force Weapons School, was up against a computer program that uses artificial intelligence algorithms to think, react, and shoot with superhuman ability.

The result was as predictable as a chess master losing to a supercomputer.

Banger was shot out of the virtual sky five times in a row by an AI “pilot” developed by Heron Systems in a simulated dogfight contest sponsored by DARPA, the Pentagon’s Defense Advanced Research Projects Agency.

AI-piloted planes won’t be flying themselves anytime soon, and dogfights are largely a relic of a bygone era, but AI will be at the heart, or perhaps more accurately the brains, of future combat.

“This will be one of the great military modernization challenges that will happen on your watch,” Gen. Mark Milley, chairman of the Joint Chiefs of Staff, told graduates at the Naval War College in June, describing a future battlefield where everything happens exponentially faster, requiring sophisticated algorithms to overcome information overload and cut through the fog of war.

“We know that autonomous systems supported by artificial intelligence and high-capacity wireless connectivity are the foundation for future military operations, but how we integrate these systems may be decisive in the next conflict within the context of the changing character of war,” said Milley.

So while humans may still pilot combat aircraft for decades to come, their wingmen will soon be autonomous or semi-autonomous robots.

Take for example the Air Force’s Skyborg program — a name inspired by the fictional Borg of Star Trek, who warn their enemies that “resistance is futile.”

The idea is to create a family of “attritable” drones that are cheap enough to lose in battle but smart enough to adapt to battle conditions on the fly to serve as wingmen for manned aircraft.

“Skyborg is an autonomy-focused capability that will enable the Air Force to operate and sustain low-cost, teamed aircraft that can thwart adversaries with quick, decisive actions in contested environments,” is how the Air Force describes the initiative.

The Navy has a similar plan for a fleet of unmanned surface ships and underwater drones armed with vertically launched missiles that are operated by sailors on manned ships who would oversee targeting and firing decisions.

It’s not the stuff of science fiction. In fact, Defense Secretary Mark Esper, in a speech this month to the Pentagon’s virtual Artificial Intelligence Symposium and Exposition, warned that Russia and China are already aggressively pursuing AI.

“Artificial intelligence is in a league of its own, with the potential to transform nearly every aspect of the battlefield,” Esper said. “In 2017, Russian President Vladimir Putin declared that whichever nation leads in AI will be the ‘ruler of the world.’”

“Since then, Moscow has announced the development of AI-enabled autonomous systems across ground vehicles, aircraft, nuclear submarines, and command and control,” said Esper.

Meanwhile, “the [Chinese] People’s Liberation Army regards AI as a ‘leapfrog’ technology, which could enable low-cost, long-range autonomous vehicles and systems to counter America’s conventional power projection,” he said. “Chinese weapons manufacturers are selling autonomous drones they claim can conduct lethal, targeted strikes.”

While Esper calls the impact of machine learning on the future of warfighting “tectonic,” the vision of a future force augmented by legions of robotic systems is facing some skepticism on Capitol Hill, where lawmakers of both parties are wary of authorizing billions of dollars for technologies that too often fail to perform as advertised.

Embarrassing production delays, cost overruns, and technology failures on the Navy’s Ford-class aircraft carriers and littoral combat ships have made the House and Senate armed services committees gun-shy about writing a blank check for the Navy’s robot fleet.

“I believe this is the way of the future and an area where we need to be investing and learning. I am concerned, however, with the Navy’s approach,” said Rep. Adam Smith, chairman of the House Armed Services Committee, at a February hearing on the Navy’s 2021 budget, which requested $464 million for two prototype large unmanned vessels.

“The current acquisition strategy appears remarkably similar to how the littoral combat ship came into existence. Unclear requirements and unproven technologies are being overlooked in an effort to prioritize speed of acquisition,” Smith said in his prepared opening remarks.

In a July interview, Chief of Naval Operations Adm. Michael Gilday conceded that congressional critics have a point.

“I actually agree with Congress on this,” Gilday told Defense News. “We’ve got a family of unmanned systems we’re working on. Undersea, we’ve got extra-large, large, and medium unmanned underwater vehicles. On the surface, we have small, medium, and large unmanned surface vessels, and in the air, we have a number of programs,” Gilday said, citing what he called a lack of rigor in assessing the prospects of each program.

“I’ve got a bunch of horses in the race, but at some point, I have to put my money down on the thoroughbred that’s going to take me across the finish line.”

One big missing piece is the secure computer network that would link all these autonomous systems together. The project, called the Navy Tactical Grid, isn’t projected to come online until 2035, but Gilday says it is critical to making the whole concept work.

“We’re investing in netted weapons, netted platforms, netted headquarters — but we don’t have a net,” Gilday said. “Without it, I have a bunch of unmanned [vessels] that I shouldn’t be building because I can’t control it very well.”

Two years ago, the Pentagon established the Joint Artificial Intelligence Center to accelerate the integration of artificial intelligence into every aspect of warfighting and to work more closely with private industry, where the most innovative work is being done.

And before the end of the year, the Pentagon is planning to radically revise its fundamental doctrine on waging modern warfare, dubbed the Joint Warfighting Concept.

It envisions AI-enabled forces that can quickly overwhelm any and all adversaries and will prevent war in this century in the same way nuclear weapons deterred major power conflict in the last half of the 20th century.

“If we go into a future where there are no lines on the battlefield and we have ubiquitous, all-domain command and control and logistics that go seamlessly from place to place, from service to service, and it all happens in enormous speed — holy cow, that is the world where an adversary will not challenge us,” said Gen. Paul Selva, vice chairman of the Joint Chiefs, in remarks at a Pentagon conference on AI this month.

“That is deterrence, having a capability that prevents the war from happening,” Selva said. “And goodness knows we never want to have a war with China or a war with Russia or a war with any nuclear-armed adversary, and the only way to avoid that is to have such strength that is demonstrated to our adversaries that they will not challenge us. We can do that.”

[Related: The Pentagon’s $2 billion gamble on artificial intelligence]

Jamie McIntyre is the Washington Examiner’s senior writer on defense and national security. His morning newsletter, “Jamie McIntyre’s Daily on Defense,” is free and available by email subscription at dailyondefense.com.