HONOLULU—“What happens when you concentrate on one [AI] model and all of a sudden that model isn’t available to you?” That’s the reality that U.S. Indo-Pacific Command is living right now, its resources and requirements director said here Monday.
The audience, after a beat, laughed cautiously at the realization that Bob Stephenson was likely referring to Anthropic’s Claude model.
“It happens,” Stephenson said Monday at the Pacific Operational Science & Technology conference. “You know, I actually started thinking about this last September. We were working on a plan to be more model-neutral in our workforce. Now we’re just going faster.”
More than a year ago, INDOPACOM integrated AI throughout its headquarters. Less than two weeks ago, President Trump directed federal agencies to stop using Anthropic tools. And on Monday, the company sued the Pentagon, Defense Secretary Pete Hegseth, and others, claiming illegal retaliation.
Stephenson, moderating a panel focused on advanced partnerships for multi-domain command and control, described his own “AI journey.”
“My challenge right now is: I’m trying—if you understand the seven functions of joint warfare…those things all happen simultaneously.” (Joint doctrine counts seven joint functions: command and control, intelligence, fires, movement and maneuver, protection, sustainment, and information.)
“If you’re going to send a ship into position to launch a missile…you have to worry about, does it have enough fuel to get there? Is it going to have to be refueled when it gets back? What about reloading? What’s the status of the launcher? What’s the status of the weapon? And so on and so forth. And so these things all interact. So we’re trying to use AI to create agentic workflows to allow us to do this at scale.”
On the other side of the world, in Central Command, he said, “They’re executing about 1,000 fires a day. That’s a lot. That’s what we think modern warfare looks like. They’re working really hard to try to stay up with this, and they’re using some AI tools that actually worked well for us.”
Panelist Paul Gaertner, project leader for integrated command, control, communications and computing for the Australian Department of Defence, told the audience that he is worried about both under-trusting and over-trusting AI.
Stephenson said he shares that concern. But when asked about allowing autonomous systems to manage themselves and mitigate their own risk, he said the answer is “sort of.”
“My boss tells us that in offensive weapons, there must be human agency,” Stephenson said, referring to commander Adm. Sam Paparo. But for defensive weapons, “the criteria varies. If somebody is shooting at you, there’s much more latitude” for systems that automatically defend against the threat.
Stephenson, who retired from the Navy in 2003 after 30 years of service, noted that the U.S. has had autonomous weapons systems since he was a captain.
“There is a need for autonomy. There is a desire for autonomy at the edge…every weapon we have has a failsafe. We obviously don’t want to unleash a swarm that’s just going to fly around and go after the wrong thing. So there will be limits,” he said. But “we have these things called torpedoes that we have shot for, you know, a year or two; they worked out this thing called anti-circular run that kept the torpedo from zigzagging around” and coming back to “attack the thing that shot it. So think of a similar constraint for autonomous systems.”