Thursday, March 5, 2026
mGrowTech

Iran war: Is the US using AI models like Claude and ChatGPT in combat?

By Josh
March 5, 2026
in Technology And Software


In the week leading up to President Donald Trump’s war in Iran, the Pentagon was waging a different battle: a fight with the AI company Anthropic over its flagship AI model, Claude.

That conflict came to a head on Friday, when Trump said that the federal government would immediately stop using Anthropic’s AI tools. Nonetheless, according to a report in the Wall Street Journal, the Pentagon made use of those tools when it launched strikes against Iran on Saturday morning.

Were experts surprised to see Claude on the front lines?

“Not at all,” Paul Scharre, executive vice president at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, told Vox.

According to Scharre: “We’ve seen, for almost a decade now, the military using narrow AI systems like image classifiers to identify objects in drone and video feeds. What’s newer are large-language models like ChatGPT and Anthropic’s Claude that it’s been reported the military is using in operations in Iran.”

Scharre spoke with Today, Explained co-host Sean Rameswaram about how AI and the military are becoming increasingly intertwined — and what that combination could mean for the future of warfare.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

The people want to know how Claude or ChatGPT might be fighting this war. Do we know?

We don’t know yet. We can make some educated guesses based on what the technology could do. AI technology is really great at processing large amounts of information, and the US military has hit over a thousand targets in Iran.

They need to then find ways to process information about those targets — satellite imagery, for example, of the targets they’ve hit — looking at new potential targets, prioritizing those, processing information, and using AI to do that at machine speed rather than human speed.

Do we know any more about how the military may have used AI in, say, Venezuela on the attack that brought Nicolas Maduro to Brooklyn, of all places? Because we’ve recently found out that AI was used there, too.

What we do know is that Anthropic’s AI tools have been integrated into the US military’s classified networks. They can process classified information to help analyze intelligence and plan operations.

We’ve had this sort of tantalizing detail that these tools were used in the Maduro raid. We don’t know exactly how.

We’ve seen AI technology in a broad sense used in other conflicts, as well — in Ukraine, in Israel’s operations in Gaza, to do a couple different things. One of the ways that AI is being used in Ukraine in a different kind of context is putting autonomy onto drones themselves.

When I was in Ukraine, one of the things that I saw Ukrainian drone operators and engineers demonstrate is a little box, like the size of a pack of cigarettes, that you could put onto a small drone. Once the human locks onto a target, the drone can then carry out the attack all on its own. And that has been used in a small way.

We’re seeing AI begin to creep into all of these aspects of military operations in intelligence, in planning, in logistics, but also right at the edge in terms of being used where drones are completing attacks.

How about with Israel and Gaza?

There’s been some reporting about how the Israel Defense Forces have used AI in Gaza — not necessarily large-language models, but machine-learning systems that can synthesize and fuse large amounts of information: geolocation data, cell phone and connection data, social media data. They process all of that very quickly to develop targeting packages, particularly in the early phases of Israel’s operations.

But it raises thorny questions about human involvement in these decisions. And one of the criticisms that had come up was that humans were still approving these targets, but that the volume of strikes and the amount of information that needed to be processed was such that maybe human oversight in some cases was more of a rubber stamp.

The question is: Where does this go? Are we headed in a trajectory where, over time, humans get pushed out of the loop, and we see, down the road, fully autonomous weapons that are making their own decisions about whom to kill on the battlefield?

That’s the direction things are headed. No one’s unleashing the swarm of killer robots today, but the trajectory is in that direction.

We saw reports that a school was bombed in Iran, where [175 people] were killed — a lot of them young girls, children. Presumably that was a mistake made by a human.

Do we think that autonomous weapons will be capable of making that same mistake, or will they be better at war than we are?

This question of “will autonomous weapons be better than humans” is one of the core issues of the debate surrounding this technology. Proponents of autonomous weapons will say people make mistakes all the time, and machines might be able to do better.

Part of that depends on how hard the militaries using this technology are trying to avoid mistakes. If a military doesn’t care about civilian casualties, AI can simply let it strike targets faster, in some cases even commit atrocities faster, if that’s what it’s trying to do.

I think there is this really important potential here to use the technology to be more precise. And if you look at the long arc of precision-guided weapons, let’s say over the last century or so, it’s pointed towards much more precision.

If you look at the example of the US strikes in Iran right now, it’s worth contrasting this with the widespread aerial bombing campaigns against cities that we saw in World War II, for example, where whole cities were devastated in Europe and Asia because the bombs weren’t precise at all, and air forces dropped massive amounts of ordnance to try to hit even a single factory.

The possibility here is that AI could, over time, make it easier for militaries to hit military targets and avoid civilian casualties. Now, if the data is wrong and they’ve got the wrong target on the list, they’re going to hit the wrong thing very precisely. And AI is not necessarily going to fix that.

On the other hand, I saw a piece of reporting in New Scientist that was rather alarming. The headline was, “AIs can’t stop recommending nuclear strikes in war game simulations.”

They wrote about a study in which models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 percent of cases, which I think is slightly more than we humans typically resort to nuclear weapons. Should that be freaking us out?

It’s a little concerning. Happily, as near as I could tell, no one is connecting large-language models to decisions about using nuclear weapons. But I think it points to some of the strange failure modes of AI systems.

They tend toward sycophancy. They tend to simply agree with everything that you say. They can do it to the point of absurdity sometimes where, you know, “that’s brilliant,” the model will tell you, “that’s a genius thing.” And you’re like, “I don’t think so.” And that’s a real problem when you’re talking about intelligence analysis.

Do we think ChatGPT is telling Pete Hegseth that right now?

I hope not, but his people might be telling him that.

You start with this ultimate “yes men” phenomenon with these tools. It’s not just that they’re prone to hallucinations, which is a fancy way of saying they sometimes make things up; the models can also be used in ways that reinforce existing human biases, reinforce biases in the data, or that people just trust too readily.

There’s this veneer of, “the AI said this, so it must be the right thing to do.” And people put faith in it, and we really shouldn’t. We should be more skeptical.


