How Artificial Intelligence Could Increase the Risk of Nuclear War

Could artificial intelligence upend concepts of nuclear deterrence that have helped spare the world from nuclear war since 1945? Stunning advances in AI—coupled with a proliferation of drones, satellites, and other sensors—raise the possibility that countries could find and threaten each other's nuclear forces, escalating tensions.

Lt. Col. Stanislav Petrov settled into the commander's chair in a secret bunker outside Moscow. His job that night was simple: monitor the computers that were sifting through data from satellites and radar, watching the United States for any sign of a missile launch. It was just after midnight, Sept. 26, 1983.

A siren clanged off the bunker walls. A single word flashed on the screen in front of him.

"Launch."

The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world's major nuclear powers. It's not the killer robots of Hollywood blockbusters that we need to worry about; it's how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That's the premise behind a new paper from the RAND Corporation, "How Might Artificial Intelligence Affect the Risk of Nuclear War?" It's part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

"This isn't just a movie scenario," said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. "Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful."

Glitch, or Armageddon?

Petrov would say later that his chair felt like a frying pan. He knew the computer system had glitches. The Soviets, worried that they were falling behind in the arms race with the United States, had rushed it into service only months earlier. Its screen now read "high probability," but Petrov's gut said otherwise.

He picked up the phone to his duty officer. "False alarm," he said. Suddenly, the system flashed with new warnings: another launch, and then another, and then another. The words on the screen glowed red:

"Missile attack."

To understand how intelligent computers could raise the risk of nuclear war, you have to understand a little about why the Cold War never went nuclear. There are many theories, but "assured retaliation" has always been one of the cornerstones. In the simplest terms, it means: If you punch me, I'll punch you back. With nuclear weapons in play, that counterpunch could wipe out whole cities, a loss neither side was ever willing to risk.

That theory leads to some seemingly counterintuitive conclusions. If both sides have weapons that can survive a first strike and hit back, then the situation is stable. Neither side will risk throwing that first punch. The situation gets more dangerous and uncertain if one side loses its ability to strike back or even just thinks it might lose that ability. It might respond by creating new weapons to regain its edge. Or it might decide it needs to throw its punches early, before it gets hit first.
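
To make that logic concrete, here is a toy sketch in Python. The payoff numbers are purely illustrative assumptions, not figures from the RAND paper; they exist only to show how losing a secure second strike flips the rational choice.

```python
# Toy model of first-strike incentives. All payoffs are hypothetical
# values chosen only to illustrate the logic described above.

def best_move(payoff_if_strike_first, payoff_if_wait):
    """Pick whichever option a purely rational actor prefers."""
    return "strike first" if payoff_if_strike_first > payoff_if_wait else "wait"

# Case 1: both sides have survivable forces (assured retaliation).
# Striking first guarantees a devastating counterpunch, so waiting wins.
print(best_move(payoff_if_strike_first=-100, payoff_if_wait=0))
# -> wait

# Case 2: one side believes its retaliatory forces can be found and
# destroyed. Waiting now risks total loss, so a first strike starts
# to look like the "least bad" option.
print(best_move(payoff_if_strike_first=-100, payoff_if_wait=-1000))
# -> strike first
```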

That's where the real danger of AI might lie. Computers can already scan thousands of surveillance photos, looking for patterns that a human eye would never see. It doesn't take much imagination to envision a more advanced system taking in drone feeds, satellite data, and even social media posts to develop a complete picture of an adversary's weapons and defenses.
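
As a rough illustration of that kind of pattern scanning, the sketch below scores a folder of overhead photos with an off-the-shelf pretrained classifier. The model choice and the "imagery" directory are assumptions made for the example; nothing in the paper describes a system built this way.

```python
# A minimal sketch: score a directory of overhead photos with a
# pretrained image classifier. The model ("resnet50") and the
# "imagery/" directory are illustrative assumptions only.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

for path in sorted(Path("imagery").glob("*.jpg")):  # hypothetical photos
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)  # class probabilities
    confidence, label = probs.max(dim=1)
    print(f"{path.name}: class {label.item()} ({confidence.item():.0%})")
```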

A system that can be everywhere and see everything might convince an adversary that it is vulnerable to a disarming first strike—that it might lose its counterpunch. That adversary would scramble to find new ways to level the field again, by whatever means necessary. That road leads closer to nuclear war.

"Autonomous systems don't need to kill people to undermine stability and make catastrophic war more likely," said Edward Geist, an associate policy researcher at RAND, a specialist in nuclear security, and co-author of the new paper. "New AI capabilities might make people think they're going to lose if they hesitate. That could give them itchier trigger fingers. At that point, AI will be making war more likely even though the humans are still quote-unquote in control."

A Gut Feeling

Petrov's computer screen now showed five missiles rocketing toward the Soviet Union. Sirens wailed. Petrov held the phone to the duty officer in one hand, an intercom to the computer room in the other. The technicians there were telling him they could not find the missiles on their radar screens or telescopes.

It didn't make any sense. Why would the United States start a nuclear war with only five missiles? Petrov raised the phone and said again:

"False alarm."

Computers can now teach themselves to walk—stumbling, falling, but learning until they get it right. Their neural networks mimic the architecture of the brain. A computer recently beat the world champion at the ancient strategy game of Go with a move that was so alien, yet so effective, that the champion stood up, left the room, and needed a 15-minute break before he could resume play.

The military potential of such superintelligence has not gone unnoticed by the world's major nuclear powers. The United States has experimented with autonomous boats that could track an enemy submarine for thousands of miles. China has demonstrated "swarm intelligence" algorithms that can enable drones to hunt in packs. And Russia recently announced plans for an underwater doomsday drone that could guide itself across oceans to deliver a nuclear warhead powerful enough to vaporize a major city.

Whoever wins the race for AI superiority, Russian President Vladimir Putin has said, "will become the ruler of the world." Tesla founder Elon Musk had a different take: AI, he warned, is the most likely cause of World War III.

The Moment of Truth

For a few terrifying moments, Stanislav Petrov stood at the precipice of nuclear war. By mid-1983, the Soviet Union was convinced that the United States was preparing a nuclear attack. The computer system flashing red in front of him was its insurance policy, an effort to make sure that if the United States struck, the Soviet Union would have time to strike back.

But on that night, it had misread sunlight glinting off clouds over the American Midwest.

"False alarm." The duty officer didn't ask for an explanation. He relayed Petrov's message up the chain of command.

The next generation of AI will have "significant potential" to undermine the foundations of nuclear security, the researchers concluded. The time for international dialogue is now.

Keeping the nuclear peace in a time of such technological advances will require the cooperation of every nuclear power. It will require new global institutions and agreements; new understandings among rival states; and new technological, diplomatic, and military safeguards.

It's possible that a future AI system could prove so reliable, so coldly rational, that it winds back the hands of the nuclear doomsday clock. To err is human, after all. A machine that makes no mistakes, feels no pressure, and has no personal bias could provide a level of stability that the Atomic Age has never known.

That moment is still far in the future, the researchers concluded, but the years between now and then will be especially dangerous. More nuclear-armed nations and an increased reliance on AI, especially before it is technologically mature, could lead to catastrophic miscalculations. And at that point, it might be too late for a lieutenant colonel working the night shift to stop the machinery of war.

The story of Stanislav Petrov's brush with nuclear disaster puts a new generation on notice about the responsibilities of ushering in profound, and potentially destabilizing, technological change. Petrov, who died in 2017, put it simply: "We are wiser than the computers," he said. "We created them."

What the Future May Hold—Three Perspectives

RAND researchers brought together some of the top experts in AI and nuclear strategy for a series of workshops. They asked the experts to imagine the state of nuclear weapon systems in 2040 and to explore ways that AI might be a stabilizing—or destabilizing—force by that time.

PERSPECTIVE ONE: Skepticism About the Technology

Many of the AI experts were skeptical that the technology would have come far enough by then to play a significant role in nuclear decisions. It would have to overcome its vulnerability to hacking, as well as to adversarial efforts to poison its training data—for example, an adversary behaving in unusual ways to set false precedents.
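
A stripped-down sketch of what that kind of poisoning can do, using stand-in data and a generic classifier (nothing here comes from the workshops): flip a growing fraction of training labels and watch test accuracy fall.

```python
# Label-flipping poisoning on synthetic data. The dataset and model
# are hypothetical stand-ins chosen only to illustrate the effect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Train on data where an adversary has flipped some labels."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # "false precedents"
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of labels flipped -> "
          f"test accuracy {accuracy_after_poisoning(frac):.2f}")
```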

PERSPECTIVE TWO: Nuclear Tensions Will Rise

But an AI system wouldn't need to work perfectly to raise nuclear tensions, the nuclear strategists responded. An adversary would only need to think it does and respond accordingly. The result would be a new era of competition and distrust among nuclear-armed rivals.

PERSPECTIVE THREE: AI Learns the Winning Move Is to Not Play

Some of the experts held out hope that AI could someday, far in the future, become so reliable that it averts the threat of nuclear war. It could be used to track nuclear development and make sure that countries are abiding by nonproliferation agreements, for example. Or it could rescue humans from mistakes and bad decisions made under the pressure of a nuclear standoff. As one expert said, a future AI might conclude, like the computer in the 1983 movie "WarGames," that the only winning move in nuclear war is not to play.