OpenAI's Pentagon Deal and the Red Lines Problem: Who Actually Decides What AI Does in War?
Yesterday, OpenAI announced a deal with the Pentagon. Hours earlier, the Trump administration had designated Anthropic a "supply chain risk" — contractors had to certify they weren't using Claude — after CEO Dario Amodei told Defense Secretary Pete Hegseth he "cannot in good conscience" hand over unrestricted access to his AI for mass domestic surveillance and autonomous weapons.
Then came the kicker: hours after Trump's ban, the US military used Claude anyway. In real operations. Against Iran. Intelligence assessments, target identification, battle scenario simulation. All running on the AI that had just been politically blacklisted.
That sequence doesn't just reveal bureaucratic chaos. It reveals that the ethical debate is already running behind reality. The decisions are being made in production, not in policy rooms. And nobody has a clean answer to who should actually be in charge: not the AI companies, not the government, not international bodies.
Here's where I keep getting stuck.
Who Gets to Set the Red Lines?
There's a version of the "let the AI companies decide" argument that's genuinely compelling. These companies understand the technology. They can embed limits technically, not just as policy language in a contract, but as actual model behavior. OpenAI's deal with the Pentagon includes what they call a "safety stack": if the model refuses a task, the government can't override it. Anthropic built guardrails into Claude at the model level. These aren't pinky promises. The system actually enforces them, in theory.
And when you look at what Anthropic actually did, standing firm against a Pentagon ultimatum even when it meant getting blacklisted, you can see that at least some people running these companies are willing to absorb real costs to hold the line.
But then look at the numbers.
OpenAI is currently valued at $730 billion. Anthropic just raised $30 billion at a $380 billion valuation. These are not charities operating outside the reach of market pressure. They're companies with investors who expect returns on some of the largest bets in the history of private capital. When a $500 million government contract is on the table and your main competitor just got blacklisted for saying no, the incentive structure is not exactly pointing toward principled restraint.
Sam Altman admitted the Pentagon deal was "rushed" with "bad optics." That's a polite way of saying it was done under political pressure, not through any real ethical review. And he said openly that the red lines could change as technology evolves. That's honest. But placing that much faith in institutions that answer to cap tables, not voters, isn't governance.
So okay, maybe governments should set the rules instead. Democratic accountability. Elected officials answer to the public. Laws persist beyond any single CEO's tenure.
But watch a congressional hearing on AI and then tell me you feel good about this option.
The lawmakers shaping technology and defense policy are, with some exceptions, not people who understand what a large language model actually does. They're people who once asked Mark Zuckerberg how Facebook makes money and seemed genuinely surprised by the answer. The DoD's own AI strategy memo, published in January 2026, talks about "any lawful purpose" and "model objectivity benchmarks" and accelerating AI adoption for warfighting. That's the current oversight framework.
And then there's the conflict of interest problem, which is flatly worse than anything you'd accuse an AI company of. At least 50 sitting US lawmakers hold stock in defense contractors. Members of the House Intelligence Committee have traded tens of millions of dollars in Pentagon contractor stocks. Senators on the Armed Services Committee hold positions in Lockheed Martin, Raytheon, and Honeywell. These are the people who would write the laws governing AI in warfare: people with direct financial stakes in the outcome of those decisions and, in many cases, campaign contributions flowing in from the same defense industry that profits from the contracts.
The government isn't a neutral referee. It's a party with its own investments, its own agendas, and in many cases, its own very specific portfolio positions.
Neither option is clean. We're choosing between companies that answer to investors and governments that answer to donors.
What If We're Already Losing?
Ask most people whether AI should make autonomous decisions in warfare and you get the same instinctive answer: no. Human in the loop, always. That feels right.
But here's where it gets harder.
China is spending an estimated $15 billion a year on military AI. The People's Liberation Army is actively developing autonomous drone swarms for urban warfare. Russia has deployed autonomous deep-strike drones in Ukraine and rejected the 2024 UN resolution on lethal autonomous weapons outright. They are not having this ethical debate. They are building the capability.
If the US commits to keeping humans in every lethal decision loop and adversaries don't, the practical consequence is a speed asymmetry. Autonomous systems can identify and respond to threats faster than any human chain of command can authorize. In a conflict where the other side is operating at machine speed, "human in the loop" might start to look less like a moral commitment and more like a tactical disadvantage.
This is the argument that keeps serious defense people up at night. Not because they want autonomous killing machines, but because they can do the math.
And then just look at the last 24 hours. The Iran situation makes it concrete. The military wasn't using Claude to pull a trigger. It was using it for intelligence analysis and scenario simulation. Tasks that, in previous generations, would have taken teams of analysts days to complete. The AI wasn't autonomous in the lethal sense. But it was already embedded in the decision architecture. The line between "informing" a strike and "enabling" one is thinner than the policy language suggests.
And that line is going to keep getting thinner. As AI systems get faster and more capable, the window for human judgment in real-time operations compresses. At some point, insisting on a human in every decision loop becomes operationally equivalent to not having the capability at all.
Which means the moral position and the operational reality are quietly diverging, and the right answer gets harder to see the further apart they drift.
Where That Leaves Us
The current framework is already broken. That much is clear. The US military used a banned AI in active combat operations because it was already integrated into their systems. The company that made it had just drawn a public line in the sand. The president who banned it announced it on Truth Social. And somewhere in the middle of all that political theater, Claude was helping simulate the battle.
That's not governance. That's improvisation.
Can we trust AI companies to hold the line when the financial pressure gets intense enough? Anthropic held, for now, with this management team, at this scale, before the pressure gets heavier. A different CEO will make a different call.
Can we trust governments to write good rules for technology they demonstrably don't understand, when many of the people writing those rules hold stock in the contractors who benefit from them? The alternative, handing it to regulatory agencies staffed by technical experts with less democratic accountability, doesn't obviously solve the problem either.
And if we hold the ethical line on autonomous AI in warfare while our adversaries don't, are we making a moral choice or just choosing to lose?
These aren't rhetorical questions. They're the ones that will define how AI gets used in conflict for the next decade. The fact that we're mostly still improvising the answers should worry all of us.