"Shall we play a game?" Is the only winning move in AI arms control not to play by someone else's rules? Image via WarGames (1983)

Although Telemetry has been quiet for a while, I hope you'll welcome my notes 'n news back into your inbox.

Plenty of you have kept in touch professionally over the past year, so you know I've been busy within our ecosystem: leading two popular MIT programs on advanced tech and national security while managing projects in new medical device development and defense microelectronics. It's been tricky sitting still long enough to tell you about it.

I've been preparing The End Effector as a public-benefit endeavor: operator engagement, education, and embedded work from an independent publisher, for those building science-advantaged organizations to take on the hardest, most important challenges on Earth and beyond.

Coming this spring: new content, community engagements, and a membership program with interactive features, featured events, and some fantastic (IMHO) deep dives into critical capabilities for lab-to-launch commercialization, national security, and science-advantaged problem solving.

Your replies and feedback are always welcome.

– Jonathan "JMill" Miller

MONITOR

Image by JMill

Who Draws the Red Lines on Dual-Use When There's No Treaty to Draw Them In?

As the clock ticked past 5:01 PM last Friday, we had an answer: Anthropic, the company behind Claude, officially refused to strip two safety constraints from its $200 million Department of War contract. The constraints: 1) no mass domestic surveillance of Americans, and 2) no fully autonomous weapons at current model reliability. CEO Dario Amodei put it plainly: the reliability isn't there yet, and capabilities are "getting ahead of the law." The Pentagon wanted the model for "all lawful purposes," with no carve-outs. (Government procurement has never been accused of subtlety.)

Within hours of the deadline's expiration, President Trump ordered all federal agencies to stop using Anthropic products. Secretary Hegseth slapped the company with a "Supply-Chain Risk to National Security" designation – essentially a kill switch that bars any military contractor from engaging with them. And OpenAI signed the contract the same week, though the signing has reportedly been "rushed," "opportunistic," and ultimately "sloppy."

Most of the coverage frames this as a political standoff: a company defying the government and getting punished for it. But I think that reading misses what's actually interesting here, which is the structural story underneath.

Anthropic isn't a normal company refusing a normal contract term. They're a Public Benefit Corporation with a Long-Term Benefit Trust, a governance structure specifically designed so that commercial or political pressure can't override safety commitments. Their Responsible Scaling Policy evaluates each model against Safety Levels modeled on biosafety standards. (As an aside: when grappling with how to 'make sense' of an emerging field, I'm a fan of examining how other fields and domains structure their investigations and assessments. More on this in a future edition.) It would be easy to misread the Safety Levels as public-relations gestures, but they're load-bearing architecture – a building's fire exits aren't optional just because the landlord wants more floor space!

My line of thinking goes to DARPA – part of the DoW but distinct in its operations – and how that organization has worked through elements of this. Their ELSI program (Ethical, Legal, and Societal Implications) moved ethical analysis from post-hoc review into the core of program design. The focus shifted from "did we break anything?" to "what should we build, given what could break?" Dr. Rebecca Crootof, DARPA's inaugural ELSI Visiting Scholar, has been building this into the agency's programs (here's a relevant Lawfare podcast episode about her work), and I've been intentional about welcoming individuals like Dr. Bart Russell of DARPA's Defense Sciences Office and Susan Winterberg of Reframe Venture to speak at my programs on these topics. Elevating ELSI as a core design strategy rather than a side-car tag-along is a genuinely useful approach, whether for a startup or a Big Co.

Anthropic did something similar in its rapidly expanding corporate form, building the ethical constraints into the load-bearing walls of the company. When the government tried to remove them, the building resisted – and, so far at least, it hasn't crumbled 🚩. That's a fundamentally different animal than Google's 2018 Project Maven walkout, where employees objected after the contract was signed. Here, the constraints were engineered into the business architecture before the first Request for Proposal ever landed.

The assumptions I'm making and where they could break

Three things are doing heavy lifting in my analysis. First, that governance architecture is more durable than stated principles; Anthropic's PBC-plus-Trust structure held under pressure this time, and I'd be willing to bet it holds next time. Second, that Amodei's "the reliability isn't there yet" is an honest engineering assessment rather than a negotiating position. Third, that the absence of an AI arms control framework means companies will keep getting conscripted into quasi-governmental roles whether they want the job or not. If any of those assumptions turns out to be wrong, the next iteration of this story may read very differently.

And so, a point of tension I don't think anyone resolves cleanly: there is no AI arms control treaty. Not between the U.S. and China. Not between anyone. This is an arms race without an armistice, and someone needs to be building national security machine intelligence at frontier scale. Anthropic didn't choose to be an arms control negotiator… they got conscripted into the role, because the governance infrastructure that should exist simply doesn't, and because they are one of the leading organizations pushing the frontier – co-building (and perhaps attempting to co-regulate) with strategic customers.

The Fastest Contract Swap in Pentagon History Tells You Everything About Leverage

Something that makes me uncomfortable is the hastiness of 'the swap', and that hastiness would be a big concern for any entrepreneur in contract negotiations with a prospective partner.

OpenAI CEO Sam Altman said publicly that his company holds the same two red lines: no mass surveillance and no autonomous weapons. Then OpenAI signed the contract hours after Anthropic was blacklisted. And the Pentagon approved those same safety constraints for the OpenAI deal.

I had to read that again – the government accepted from OpenAI the exact constraints it punished Anthropic for insisting on. The difference? Anthropic wanted them in the contract. OpenAI stated them publicly, without contractual enforcement. One company built load-bearing walls; the other hung a poster that says "SAFETY FIRST."

The speed of the substitution (same week! same red lines! different outcomes!) hints that the government's leverage over any single frontier-model provider is near-total right now. Stated principles without contractual teeth are worth approximately… nothing(!?)… under procurement pressure. We'll have to keep an eye on whether OpenAI's red lines survive the first contract amendment. I posit that quiet erosion is most likely.

What this means depends on where you sit

If you're building dual-use tech: Your acceptable-use policy is a contract term now, not a PR document. Build your governance architecture – public-benefit corp, trust structure, published scaling policy, whatever fits – before the first government RFP shows up. If you don't define your red lines, someone will define them for you. Probably at 5:01 PM on a Friday!

If you're investing in it or sitting on a board: The "supply-chain risk" designation is a relatively new and vague tool for the government. Any company with ethical constraints baked into its governance structure could face the same treatment. That's a portfolio risk you need to be underwriting before the next RFP cycle, not after. Ask your companies: do your safety commitments have contractual teeth, or are they posters on the wall?

If you're watching from government or procurement: The Pentagon accepted from OpenAI the exact constraints it punished Anthropic for insisting on. That inconsistency is now public record, and every company will be able to cite it in every future contract negotiation.

BLIPS

DARPA's ELSI program published a podcast episode on integrating ethical analysis at program inception, recorded before the Anthropic drama. Worth the half-hour listen if you think safety constraints and national security capability are inherently opposed. Anthropic's Responsible Scaling Policy v3.0 dropped February 24, three days before the standoff reached its climax… coincidence or deliberate timing? The document reads like it was written by people who knew a fight was coming. And Anthropic says it will challenge the supply-chain risk designation in court, calling it "legally unsound" and a "dangerous precedent for any American company that negotiates with the government."

"Gentlemen, you can't fight in here — this is the War Room!" Same energy as negotiating safety constraints at 5:01 PM on a Friday. Image via Dr. Strangelove (1964)

No One Builds Alone.

/N1BA

Telemetry is written by JMill of The End Effector.

Questions, feedback? Let me know by replying to this email.
