At a conference in Omaha, Nebraska, Air Force General Anthony J. Cotton—the man in charge of America’s nuclear arsenal—played up the importance of artificial intelligence in nuclear decision making. “We are also developing artificial intelligence or AI-enabled, human led, decision support tools to ensure our leaders are able to respond to complex, time sensitive scenarios,” Cotton said in a speech on October 27.
Cotton’s speech reflects a rush towards AI that’s happening in every other industry. Like Silicon Valley, the Pentagon is hyping up its speedy adoption of artificial intelligence. But also like Silicon Valley, the U.S. military is leaning heavily on vague buzzwords and marketing hype without explaining the specifics of how it plans to use this new technology.
Cotton is the head of U.S. Strategic Command (STRATCOM), the portion of the U.S. military that handles the country’s nuclear bombs and intercontinental ballistic missiles. He was speaking at the Department of Defense Intelligence Information System (DoDIIS) Worldwide Conference, a chance for IT nerds and America’s military leaders to gather and talk.
As first reported by Air & Space Forces Magazine, Cotton’s speech happened during the opening night of the conference. America’s nukes are ancient technology by military standards and the Pentagon is set to spend $1.7 trillion over the next few decades upgrading them. A big part of that project will involve modernizing IT infrastructure and, according to Cotton, integrating AI systems into nuclear command and control.
“To retain the competitive edge, we are exploring all possible technologies, techniques, and methods to assist with the modernization of [nuclear command, control and communications] capabilities. STRATCOM and the entire enterprise are upgrading our legacy [nuclear command, control and communications] systems to modern IT infrastructure,” he said.
For Cotton, AI is key to the future of America’s nuclear weapons. “Advanced AI and robust data analytics capabilities provide decision advantage and improve our deterrence posture. IT and AI superiority allows for a more effective integration of conventional and nuclear capabilities, strengthening deterrence,” he said. “AI/ML capabilities offer unique deterrence effects that complement traditional military power.”
Cotton also called AI a “force multiplier” and said several times that a human would always be in the loop as part of any decision-making process. “Advanced systems can inform us faster and more efficiently, but we must always maintain a human decision in the loop,” he said.
The problem with Cotton’s speech is two-fold. One, it’s vague. A lot of how America’s nuclear weapons systems work, especially command and control, is secret. AI is a marketing term that describes a bunch of different systems. The mix of AI hype and nuclear secrecy makes it hard to know what, exactly, Cotton is talking about.
“I think it’s safe to say that they aren’t talking about Skynet, here,” Alex Wellerstein, an expert in nuclear secrecy and professor at the Stevens Institute of Technology, told 404 Media, referring to the Pentagon-funded AI system that attempts to wipe out humanity in the Terminator films.
“He’s being very clear that he is talking about systems that will analyze and give information, not launch missiles. If we take him at his word on that, then we can disregard the more common fears of an AI that is making nuclear targeting decisions. But there are still other fears and critiques.”
Wellerstein pointed out that replacing “AI” in Cotton’s sentences with “computer analysis” renders them mundane. “For example, imagine that it was an algorithm that would sift through a huge amount of satellite data in order to give an analysis of whether missiles were launching, and what their probable destinations were based on their trajectories,” he said. “One wouldn’t be surprised at that, and if techniques like machine learning (e.g., a massively statistical analysis based on a large ‘training’ set of what missiles launching might look like) was used to develop it, that wouldn’t necessarily be any scarier than a similar algorithm based on some other methodology.”
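To see why that swap makes the sentences mundane, here is a minimal, purely illustrative sketch of the kind of machine-learning classifier Wellerstein is hypothesizing about: a model fit to simulated examples of what a launch might look like, which then assigns a probability to a new sensor reading. Every feature name, number, and threshold below is invented; none of it reflects any real early-warning system.

```python
# Illustrative only: a toy version of the "massively statistical analysis"
# Wellerstein describes. Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: each row is [infrared_intensity, track_speed, climb_angle].
# "Launch-like" events (label 1) are hot, fast, and steep; background clutter
# (label 0) -- weather, reflections, aircraft -- mostly isn't.
n = 500
background = rng.normal(loc=[0.2, 0.3, 0.1], scale=0.15, size=(n, 3))
launches = rng.normal(loc=[0.9, 0.8, 0.7], scale=0.15, size=(n, 3))
X = np.vstack([background, launches])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a plain logistic-regression model by gradient descent.
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Score a new sensor reading. The output is a probability for a human analyst
# to weigh against other sources -- not a launch decision.
new_reading = np.array([0.85, 0.75, 0.65])  # hypothetical feature values
print(f"Estimated launch probability: {sigmoid(new_reading @ w + b):.2f}")
```

The statistical machinery itself is ordinary, and, as Wellerstein notes, not necessarily any scarier than an algorithm built some other way.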
But using AI to analyze and collate military data still has its share of problems, and that’s the second issue with Cotton’s speech. We’re talking about nuclear weapons. The most consequential decision Cotton is talking about is a human deciding whether or not to launch a nuke. It’s a decision that could irrevocably alter life on Earth, and though Cotton is clear that a human would make that call, he wants that decision made faster and aided by AI systems.
America has nuclear-equipped stealth submarines, intercontinental ballistic missiles in silos dotting the country, and bombers ready to drop nuclear bombs. "Deterrence" refers to the practice of having a whole bunch of nukes ready to go at a moment's notice. The idea is that China, Russia, or another nuclear-equipped country won't launch a missile at the U.S. because, if they did, we'd launch all of our missiles at them.
But deterrence only works if the people in charge of pressing the button that launches the nukes have reliable intelligence about what's going on. Every time a nuclear power has come close to pressing the button in the past 50 years, it's been because a machine failed; humans understood that failure and decided not to press the button.
In 1956, NORAD misinterpreted a flock of swans flying over Turkey as an unidentified Soviet bomber. In 1960, radar equipment mistook the moon rising over Norway for an all-out Soviet missile attack. A year later, a relay station failed and America's nuclear forces went on high alert, thinking the outage meant an attack. In 1967, a solar flare caused NORAD radars to malfunction; analysts initially thought it was a Soviet jamming attempt.
There are more of these incidents, a lot more. In many of the cases, humans analyzed the data and figured out that the machines had failed. Humans pulled us back from the brink. "They didn’t always know the source of the error, but they understood the fallibility of the systems, and after the fact people were able to diagnose the specific causes of error (e.g., light bouncing off clouds in a funny way, a broken chip, etc.), and that knowledge then became part of the way that later system readouts were interpreted," Wellerstein said.
One of the problems with bringing AI into command and control is that it's a black box. "Will that sort of thing be possible in this context? Potentially—we can describe the problems of machine learning errors (hallucinations, or weird ‘poisoning’ effects that sometimes crop up in image generators where they get ‘obsessed’ with particular forms) even if we don’t totally understand why they happen," he said. "One would hope that in this context that not only would there be rigorous testing for the kinds of possible attribution errors that are possible, but operators and the people who actually make decisions would be made very familiar with them.”
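What might that kind of rigorous testing look like in practice? At minimum, something like the error accounting sketched below: running a detector against a large set of labeled events and counting how often it raises a false alarm versus how often it misses the real thing. The detector, the event set, and its error rate here are all invented for illustration.

```python
# Illustrative only: measuring a hypothetical detector's error rates against
# labeled test events. The detector and its 3% error rate are invented.
import numpy as np

rng = np.random.default_rng(1)

# Ground truth for 10,000 simulated sensor events: 1 = real launch, 0 = benign
# (moonrise, flocks of birds, solar interference, a broken relay...).
truth = rng.integers(0, 2, size=10_000)

# Pretend detector output: right most of the time, wrong 3% of the time.
wrong = rng.random(10_000) < 0.03
predicted = np.where(wrong, 1 - truth, truth)

false_alarms = np.sum((predicted == 1) & (truth == 0))  # benign flagged as launch
misses = np.sum((predicted == 0) & (truth == 1))        # real launch not flagged

print(f"False alarm rate: {false_alarms / np.sum(truth == 0):.2%}")
print(f"Miss rate: {misses / np.sum(truth == 1):.2%}")
```

The historical false alarms above, the swans, the moonrise, the solar flare, were eventually explained; the worry with a black-box system is that the equivalent errors show up in the statistics without anyone being able to say why they happened.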
AI literacy is a huge problem. As these systems are integrated into America's defenses, as Cotton and others have promised, the Pentagon must also make sure that the military professionals handling them are knowledgeable about how they work. They have to know how to look for mistakes.
According to Cotton, finding those people is a harder problem than any other the Pentagon is facing. “Finding people capable of integrating AI/ML into combat today requires unique specialization and currency. And this is vitally more complex than integrating a technological advancement of the past,” he said. “We cannot compete with industry salaries to maintain personnel. But the nature of our work, and having the most advanced capabilities to carry our missions that only the U.S. government can conduct, is in itself a retention tool.”