mGrowTech
Will AI start nuclear war? What Netflix movie A House of Dynamite misses.

by Josh
November 8, 2025
in Technology And Software


For as long as AI has existed, humans have harbored fears about AI and nuclear weapons, and movies are a great example of those fears. Skynet from the Terminator franchise becomes sentient and fires nuclear missiles at America. WOPR from WarGames nearly starts a nuclear war because of a miscommunication. Kathryn Bigelow’s recent release, A House of Dynamite, asks whether AI is involved in a nuclear missile strike headed for Chicago.

AI is already in our nuclear enterprise, Vox’s Josh Keating tells Today, Explained co-host Noel King. “Computers have been part of this from the beginning,” he says. “Some of the first digital computers ever developed were used during the building of the atomic bomb in the Manhattan Project.” But we don’t know exactly where or how it’s involved.

So do we need to worry? Well, maybe, Keating argues. But not about AI turning on us.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

There’s a part in A House of Dynamite where they’re trying to figure out what happened and whether AI is involved. Are these movies with these fears onto something?

The interesting thing about movies, when it comes to nuclear war, is: This is a kind of war that’s never been fought. There are no veterans of nuclear wars, other than the two bombs we dropped on Japan, which is a very different scenario. I think that movies have always played a kind of outsize role in debates over nuclear weapons. You can go back to the ’60s, when the Strategic Air Command actually produced its own rebuttal to Dr. Strangelove and Fail Safe. In the ’80s, the TV movie The Day After was a galvanizing force for the nuclear freeze movement. President [Ronald] Reagan apparently was very disturbed when he watched it, and it influenced his thinking on arms control with the Soviet Union.

On the specific topic I’m looking at, which is AI and nuclear weapons, there’s been a surprising number of movies that have that as the plot. And it comes up a lot in the policy debates over this. I’ve had people who are advocates for integrating AI into the nuclear command system saying, “Look, this isn’t going to be Skynet.” General Anthony Cotton, who’s the current commander of Strategic Command — the branch of the military responsible for nuclear weapons — advocates for greater use of AI tools. He referred to the 1983 movie WarGames, saying, “We’re going to have more AI, but there’s not going to be a WOPR in Strategic Command.”

Where I think [the movies] fall a little short is that the fear tends to be that a superintelligent AI is going to take over our nuclear weapons and use them to wipe us out. For now, that’s a theoretical concern. What I think is the more real concern is that as AI gets into more and more parts of the command and control system, do the human beings in charge of decisions about nuclear weapons really understand how the AIs are working? And how is it going to affect the way they make these decisions, which could be — not exaggerating to say — some of the most important decisions ever made in human history?

Do the human beings working on nukes understand the AI?

We don’t know exactly where AI is in the nuclear enterprise. But people will be surprised to know how low-tech the nuclear command and control system really was. Up until 2019, they were using floppy discs for their communication systems. I’m not even talking about the little plastic ones that look like your save icon on Windows. I mean, the old ’80s bendy ones. They want these systems to be secure from outside cyber interference, so they don’t want everything hooked up to the cloud.

But there’s an ongoing multibillion-dollar nuclear modernization process underway, and a big part of that is updating these systems. And multiple commanders of StratCom, including a couple I talked to, said they think AI should be part of this. What they all say is that AI should not be in charge of making the decision as to whether we launch nuclear weapons. They think that AI can analyze massive amounts of information and do it much faster than people can. And if you’ve seen A House of Dynamite, one thing that movie shows really well is how quickly the president and senior advisers are going to have to make some absolutely extraordinary, difficult decisions.

What are the big arguments against getting AI and nukes in bed together?

Even the best AI models that we have available today are still prone to error. Another worry is that there could be outside interference with these systems. It could be hacking or a cyberattack, or foreign governments could come up with ways to seed inaccurate information into the model. There has been reporting that Russian propaganda networks are actively trying to seed disinformation into the training data used by Western consumer AI chatbots. And another is just how people interact with these systems. There is a phenomenon that a lot of researchers have pointed out called automation bias, which is that people tend to trust the information that computer systems are giving them.

There are abundant examples from history of times when technology has actually led to near nuclear disasters, and it’s been humans who’ve stepped in to prevent escalation. There was a case in 1979 when Zbigniew Brzezinski, the US national security adviser, was actually woken up by a phone call in the middle of the night informing him that hundreds of missiles had just been launched from Soviet submarines off the coast of Oregon. And just before he was about to call President Jimmy Carter to tell him America was under attack, there was another call that [the first] had been a false alarm. A few years later, there was a very famous case in the Soviet Union. Colonel Stanislav Petrov, who was working in their missile detection infrastructure, was informed by the computer system that there had been a US nuclear launch. Under the protocols, he was supposed to then inform his superiors, who might’ve ordered immediate retaliation. But it turned out the system had misinterpreted sunlight reflecting off clouds as a missile launch. So it’s very good that Petrov made the decision to wait a few minutes before he called his superiors.

I’m listening through to those examples, and the thing I might take away if I’m thinking about it really simplistically is that human beings pull us back from the brink when technology screws up.

It’s true. And I think there have been some really interesting recent tests in which AI models were given military crisis scenarios, and they actually tend to be more hawkish than human decision makers are. We don’t know exactly why that is. If we look at why we haven’t fought a nuclear war — why, 80 years after Hiroshima, nobody’s dropped another atomic bomb, why there’s never been a nuclear exchange on the battlefield — I think part of it’s just how terrifying it is. Humans understand the destructive potential of these weapons and what escalation can lead to. There are certain steps that may have unintended consequences, and fear is a big part of it.

From my perspective, I think we want to make sure that there’s fear built into the system. That entities that are capable of being absolutely freaked out by the destructive potential of nuclear weapons are the ones who are making the key decisions on whether to use them.

It does sound like, watching A House of Dynamite, you can vividly think that perhaps we should get all of the AI out of this entirely. But it sounds like what you’re saying is: AI is a part of nuclear infrastructure for us and for other nations, and it is likely to stay that way.

One thing an advocate for more automation told me was, “If you don’t think humans can build a trustworthy AI, then humans have no business with nuclear weapons.” But the thing is, I think that’s a statement that people who think we should eliminate all nuclear weapons entirely would also agree with.

I may have gotten into this worried that AI was going to take over nuclear weapons, but I realized that right now I’m worried enough about what people are going to do with nuclear weapons. It’s not that AI is going to kill people with nuclear weapons. It’s that AI might make it more likely that people kill each other with nuclear weapons. To a degree, the AI is the least of our worries. I think the movie shows well just how absurd the scenario in which we’d have to decide whether or not to use them really is.


