Written by: Stephen Rogers | May 21, 2024

Let’s take a trip down memory lane. In 1983's "WarGames," Matthew Broderick inadvertently hacks into a NORAD supercomputer programmed to run war simulations... but also in control of the entire U.S. nuclear arsenal, with autonomy to launch. Thinking he’s just playing a game, he almost triggers a nuclear holocaust. The movie ends with a profound lesson: some games aren’t meant to be played by machines. Indeed, some games are best not played at all.

Fast forward to today; Congress seems to be channeling at least some of this lesson.

A Robot Pushing the Button

The "Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023" is a legislative effort to prevent AI from having the final say on nuclear launches. This bill aims to ban the use of federal funds for launching a nuclear weapon via an uncontrolled autonomous weapons system.

The bill states that Congress believes the use of autonomous nuclear weapons systems cannot adhere to international humanitarian law, and that decisions about nuclear weapon launches should not be made by AI. It prohibits the use of federal funds for an autonomous weapons system that is not subject to meaningful human control to launch a nuclear weapon, or to select or engage targets for the purpose of launching one.

(As an aside, if you click on the Similar Bills tab on the bill page you'll see a number of other eye-opening bills. Such as S 1186 / HR 669, which aim to ensure a pre-emptive nuclear strike can only be made if Congress has formally declared war on the target nation. That's reassuring; you wouldn't want to kill millions of people without the right paperwork in place. That would be downright rude. Or New Hampshire's HB 1599, which helpfully points out that the Second Amendment allows the use of autonomous AI for personal defense. Yay! Killer drones for everyone!)

Who Do You Trust With That Red Button?

Now, let’s face it: the idea of AI-controlled nukes seems more terrifying than human-controlled ones. But is it? Hollywood has long depicted the dangers of autonomous systems running amok, with suitably catastrophic results. We all remember Skynet destroying the world so that Arnold Schwarzenegger lookalike robots can menacingly prowl the wasteland. The stakes in the real world are not the thrilling climax of a blockbuster but the potential annihilation of humanity. As the bill notes, a large-scale nuclear war could lead to millions of deaths, catastrophic climate effects, and agricultural collapse. It’s not exactly the kind of scenario you want to leave to an algorithm.

At this juncture, I have to confess a certain macabre fascination with the end of the world. From the zombie apocalypse to climate collapse and asteroid impact, I'm in. So when I stumbled across a book entitled Nuclear War: A Scenario by Annie Jacobsen, I couldn't resist. It is a genuinely gripping read, and I'd highly recommend it for anyone who never wants to get a good night's sleep ever again. Jacobsen has researched every millisecond of a thankfully still-theoretical scenario, interspersed with nerdy facts about protocols and procedures.

One of the many disquieting details is that the President and the small group of advisors around them have only six minutes from the first alert of an ICBM in the air to decide whether to push the button and end the world. Six minutes. To establish if it is a technical glitch or a genuine attack. While in a state of shock and panic, and with very limited information. Would you trust [INSERT NAME OF LEAST FAVORITE PRESIDENTIAL CANDIDATE HERE] to make the right call?

We're all familiar with some of the near misses that happened during the Cold War, such as the Cuban Missile Crisis in 1962. We fondly imagine that crisis was the closest we've come to actual nuclear war. Those in the know would say differently (if they were allowed to, which they aren't). There have been a number of near misses over the years of which we are blissfully unaware, but which Jacobsen has dug up to make sure her readers are appropriately terrified all the time. Such as in 1979, when a simulation test tape was mistakenly inserted into a NORAD computer, deceiving analysts into thinking that the U.S. was under attack by Soviet ICBMs. That night, rather than wake President Carter as he should have, the nuclear watch officer on duty dug into it further and realized it was an error. Would an AI have made the same call? Or, if an AI had been in charge, would the error never have happened in the first place?

"Shall We Play a Game?"

In "WarGames," the supercomputer learns that some scenarios have no winners, a lesson Congress doesn't seem to have taken completely to heart. Jacobsen's book, which unsurprisingly takes a rather anti-nuclear weapons stance, reports on the Proud Prophet War Game carried out at the National War College (yes, there's a National War College) in 1983 - the same year War Games was released in cinemas. Coincidence? I think not. For two weeks the government played out a wide range of different nuclear scenarios, from a tactical strike to decapitation events to preemptive strikes by the U.S. Every single war game ended the same way - global annihilation. No one wins a nuclear war, and once a button is pressed whether deliberately or by accident, there is no possibility of de-escalation.

So, rather than making sure there's a human finger on the button, perhaps Congress' time would be better spent figuring out a way to remove the nuclear threat altogether? Until that time, however, in an age where technology races ahead at breakneck speed, this bill serves as a sobering reminder that some responsibilities are too great to delegate entirely to machines. Sometimes you need a human to just stop, break protocol, and double-check before pushing the panic button. But if Matthew Broderick taught us anything, it's that some games are better left unplayed.

How about a nice game of chess?


About BillTrack50 – BillTrack50 offers free tools for citizens to easily research legislators and bills across all 50 states and Congress. BillTrack50 also offers professional tools to help organizations with ongoing legislative and regulatory tracking, as well as easy ways to share information both internally and with the public.