Artificial intelligence-enabled military platforms can expand U.S. war-fighting capabilities and remove the need for human personnel to conduct certain operations. That makes them a must-have for the 21st-century Pentagon.
Others disagree. As the BBC notes, at a conference in Washington, D.C., last week, 89 scientific, technological, and activist groups again called for a ban on AI, or robotic, weapons. United under the umbrella "Ban Killer Robots" alliance, these groups worry that AI weapons would invite ethical violations and spark new conflicts. On the alliance's website, they argue: "Replacing troops with machines could make the decision to go to war easier and shift the burden of conflict even further on to civilians. Fully autonomous weapons would make tragic mistakes with unanticipated consequences that could inflame tensions."
I disagree. For a start, the history of warfare compels us to seize any opportunity to achieve a sustained or improved operational effect while also mitigating risks to our combatants. It is a lesson proved by the bloody suffering at Antietam, at the Somme, on Guam, at Chosin, in Hue City, and by people such as Robert Kelly and David Greene. If there are new ways to simultaneously save lives and defeat enemies, we have a moral responsibility to grasp them.
Yet there's also a military imperative for developing AI weapons. Consider how AI robots are likely to be used in the future. They will almost certainly operate in groups that penetrate enemy strongholds and identify, then destroy, critical targets. And if we can identify more targets more efficiently, we'll achieve greater tactical effect. The cumulative impact will be strategic: eroding the enemy's will to resist and thus winning the war. The activists say that AI robots may mistake civilians for targets, but this objection is misleading. The risk of misidentification already exists in every conflict, and AI weapons can be designed and employed in ways that mitigate the risk of civilian harm.
As an aside, China and Russia are the most likely major adversaries America would face in any future conflict, and we must assess how they intend to fight us. We know the answer: their weapons procurement and strategies focus on denying us access to their strongholds and then throwing innovative weapons against us. And you can bet that these efforts will include AI-based platforms. Are we supposed to simply sit back and say, "Okay, you have the advantage, but we won't match it because … activists oppose it"?
No way. War is hell, but as Plato reminded us, only the dead have seen its end. We must maximize our means to victory and mitigate the risks to our people. AI robots are instrumental to that end. We must continue their development.