AI Vendor Lock-In: What the Pentagon Taught Every CISO This Week
The DoD's Anthropic supply chain risk designation exposed every enterprise's embedded AI architecture gap. Here's what your vendor contracts are missing.
You probably don’t know which AI model is running inside your operational tools right now. That’s a near-certainty given how enterprise AI procurement actually works. The Pentagon just ran a live stress test on that exact blind spot, and the results were not subtle. When the Department of War formally designated Anthropic a supply chain risk on March 5, 2026, making it the first American company in history to receive a label previously reserved for Huawei and Chinese state-adjacent tech firms, the disruption didn’t start with Anthropic. It cascaded through Palantir, across AWS infrastructure, and into active military workflows during U.S. strikes on Iran. Your enterprise has the same layered architecture. The question is whether you’ve mapped it, and whether your contracts protect you when the layer you don’t control catches fire.
The AI Model You Don’t Control Is Already in Production
The DoD’s direct customer relationship wasn’t with Anthropic. Claude ran inside Palantir’s Maven Smart System, hosted on AWS at Impact Level 6, sitting on classified infrastructure the military depended on for intelligence analysis and operational planning. The DoD contracted with Palantir. Palantir embedded Claude. When the supply chain risk designation landed, it cascaded from procurement machinery through Palantir’s operational position and into workflows with real military dependencies, reportedly including active support for Iran strikes, even as the designation was being disputed on social media by the Secretary of Defense and the CEO of Anthropic simultaneously.
Piper Sandler analysts noted after the designation that Anthropic was “heavily embedded in the Military and the Intelligence community” and that migrating off the technology could “pose some short-term disruptions” to Palantir’s operations. Short-term disruptions. During an active military operation. That’s the polite Wall Street version of the problem.
Replace “Military and Intelligence community” with your sector. Replace “Palantir” with your largest workflow vendor. Replace “active military operation” with your peak fraud season, your annual close, or your next regulatory audit. You’ve just described your own exposure.
Your enterprise equivalent of Maven isn’t a targeting system. It’s the fraud detection platform your SOC relies on for alert triage. It’s the contract review tool your legal team treats as a first pass on every agreement. It’s the SIEM enrichment workflow your analysts approved 18 months ago, without anyone asking which model was under it or whose usage policy governed it. In each case, there’s a foundation model embedded by a SaaS vendor, hosted by a cloud provider, running under policies you never reviewed and almost certainly can’t enforce. The vendor who sold you the platform might not even know which model version was deployed last Tuesday.
The lock-in risk most CISOs think about is the wrong one. They worry about pricing leverage at renewal or feature gaps during the next budget cycle. Those are real, and they're also the least interesting version of vendor risk in an AI-dependent stack. The risk that actually bites is operational dependency on a model whose policies, safety stack, and external political relationships sit entirely outside your contractual reach. This week demonstrated that those conditions can shift in 48 hours. When they do, you find out how embedded you actually are. The DoD found out during airstrikes. You'll find out during something comparably inconvenient for you.
What the Contract Language Reveals About Your Own Agreements
The factual record on the Anthropic negotiation is clear enough. The Department of War’s January 2026 AI strategy memorandum directed procurement to require “any lawful use” language and to acquire models “free from usage policy constraints that may limit lawful military applications.” Anthropic held two red lines: no mass domestic surveillance of Americans, and no fully autonomous weapons with no human in the targeting decision loop. The DoD called those constraints unacceptable. The negotiation collapsed. The designation followed.
Here’s where it gets interesting... OpenAI reached a deal within hours of the designation announcement, published contract excerpts containing the exact “all lawful purposes” language Anthropic refused, then amended the agreement twice in the following week after legal experts publicly tore apart what the protections actually meant. Sam Altman acknowledged the deal was “definitely rushed” and that “the optics don’t look good.” Jessica Tillipman, associate dean for government procurement law studies at George Washington University, wrote that the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use.” Altman signed it anyway. To be fair to him, he was working in 48-hour crisis mode while a competing lab was being designated a national security threat. Good contract hygiene was not the priority.
Don't waste your time on the OpenAI vs. Anthropic drama and who is right or wrong. Pay attention to the legal architecture underlying AI safety commitments.
Why?
Because your enterprise contracts almost certainly follow the same pattern OpenAI accepted: usage restrictions anchored to “applicable law” and “existing policy,” with the vendor’s safety stack as the primary enforcement mechanism. OpenAI anchored its protections to existing statutes: the Fourth Amendment, FISA, DoD Directive 3000.09 on autonomous weapons, and Executive Order 12333. Critics flagged immediately that EO 12333 is the authority the NSA has historically used to justify intercepting Americans’ communications through collection outside U.S. borders. “Lawful” in national security contexts isn’t a fixed boundary. It lives inside classified legal interpretations, executive orders, and internal agency guidance nobody outside the building ever reads.
Your enterprise contracts with AI vendors operate the same way. When law shifts, when policy changes, or when your vendor faces its own version of a 48-hour political deadline, those anchors move with the situation. What your procurement posture needs instead are vendor-imposed, free-standing prohibited-use schedules for your specific high-risk workflows, written into contract appendices with attached audit rights and defined remedies. “We comply with applicable law” is a description of baseline legal obligation. It’s not a control. It’s what every vendor says about every product, whether or not AI is involved. You shouldn’t be paying for that sentence in an AI addendum. You should be getting something that took a lawyer to write specifically for your deployment.
Human-in-the-Loop Theater
Let me describe a workflow you probably have running right now. Your AI triage layer ingests 200 alerts per shift and flags 180 as low severity. Your analyst reviews the queue, confirms the model’s assessment on most items, escalates five, clears the rest. Total elapsed review time for the cleared items is, let’s say, roughly two minutes each. Every disposition went through a human. The audit log shows human review. Your controls documentation references human oversight. What actually happened is your analyst ratified model outputs under cognitive load and time pressure while telling themselves they were exercising judgment.
That’s the failure mode human-in-the-loop review was designed to prevent. The loop exists on paper, but there’s no friction in the workflow design: no step requires the reviewer to explain why they agree with the model before confirming the output, nothing forces alternative generation before escalating or clearing, and nothing captures uncertainty as a structured field. The control is decorative.
The OpenAI contract’s autonomous weapons provision bars the use of the AI system “to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” Defense scholars noted the omission of “human-in-the-loop” language was deliberate, preserving operational flexibility. “Human judgment” and “human control” are not equivalent, and the people drafting that language knew it. The contract borrows its enforceability entirely from existing policy, which requires commanders to exercise “appropriate levels of human judgment over the use of force.” Appropriate is not a control. It’s a word that means whatever the decision-maker concludes is appropriate under the circumstances they’re actually in.
Research from King’s College London found that tested AI models threatened nuclear strikes in 95% of simulated crisis scenarios. The problem wasn’t autonomous weapons. The problem was that under uncertainty and time pressure, models produced escalatory recommendations with false confidence, and human reviewers were positioned to ratify those outputs rather than interrogate them. That’s not a future risk. That’s automation bias, and it operates in your environment every shift, at every tier of your AI-assisted workflows.
The Lavender targeting system used by Israeli defense forces was reportedly identified by investigators as carrying a 10% false positive rate on human identification, with human reviewers present throughout the process. The investigation raised a direct question of whether those humans were genuinely reviewing or functionally ratifying outputs under operational tempo. That distinction carries different consequences in contexts outside the military. In your environment, it shows up as a miscategorized fraud case that costs a customer their account, or a misconfigured access control that cleared review because the analyst trusted the model’s output and moved on in the last four minutes of a shift.
Building real decision friction requires designing it into the workflow architecture before something goes wrong, not auditing for it afterward. Two-person review for high-consequence AI outputs. Forced alternative generation before an analyst confirms a model recommendation. Explicit uncertainty capture as a required structured field. If your current AI-assisted workflows don’t require a reviewer to articulate why they agree with the model’s output before confirming it, then you are rubber-stamping your way into a problem down the road. You may survive your next audit. You won’t survive your next incident.
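What does that friction look like in practice? Here is a minimal sketch, assuming a Python-based triage pipeline where you control the confirmation step. Every name in it (`ReviewedDisposition`, `confirm`, the field names) is illustrative, not a vendor API; the point is that a disposition can’t reach the audit log without a rationale, a considered alternative, a structured uncertainty value, and a second reviewer for high-consequence actions.

```python
# Minimal sketch of a decision-friction gate for AI-assisted triage.
# Assumption: your triage pipeline is Python-based and you control the
# confirmation step. All names here are illustrative, not a vendor API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Actions that require two-person review before they take effect.
HIGH_CONSEQUENCE = {"fraud_block", "access_revocation", "escalation_to_ir"}

@dataclass
class ReviewedDisposition:
    alert_id: str
    model_recommendation: str           # what the embedded model suggested
    action: str                         # what the analyst is about to confirm
    reviewer: str
    rationale: str                      # why the reviewer agrees (or disagrees)
    alternative_considered: str         # forced alternative generation
    uncertainty: float                  # structured uncertainty capture, 0.0-1.0
    second_reviewer: str | None = None  # required for high-consequence actions
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def confirm(disposition: ReviewedDisposition) -> ReviewedDisposition:
    """Reject rubber-stamp confirmations before they reach the audit log."""
    if len(disposition.rationale.strip()) < 40:
        raise ValueError("Rationale too thin: explain why you agree with the model.")
    if not disposition.alternative_considered.strip():
        raise ValueError("Name at least one alternative disposition you considered.")
    if not 0.0 <= disposition.uncertainty <= 1.0:
        raise ValueError("Uncertainty must be captured as a value between 0 and 1.")
    if disposition.action in HIGH_CONSEQUENCE and not disposition.second_reviewer:
        raise ValueError("High-consequence actions require a second reviewer.")
    return disposition  # only now does it get written to the audit trail
```

The design choice that matters is the last line: the audit log only ever records dispositions that passed the gate, so “human review” in your controls documentation means something you can defend after an incident.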
The Procurement Posture That Needs to Change Before the Next Signature
Most CISOs don’t own AI vendor contracts. Procurement does. Legal does. The CISO inherits the agreement after signature, usually after the vendor relationship is already operational and the leverage window has closed. This is the moment where I’ll stop pretending that’s a systems failure and call it what it is: CISOs have let themselves get cut out of a decision that’s now one of the highest-risk commitments their organization makes. The Anthropic situation gives you the publicly documented argument to change that for every AI agreement with operational or regulatory exposure going forward.
The DoD’s relationship with Palantir didn’t include enforceable audit rights over Claude’s underlying usage policy, safety stack updates, or model variant changes. When Anthropic’s relationship with the DoD broke down, Palantir faced operational disruption from a vendor dependency it hadn’t fully governed at the model layer. Your enterprise equivalent is any SaaS vendor who embeds a foundation model in a production workflow without explicit flow-down contract obligations. You need those flow-down provisions now: contractual requirements for your SaaS vendors to notify you of material AI policy changes, with a defined right to pause deployment or terminate.
Anthropic’s published usage policy states the company may tailor restrictions for certain customers based on mission and legal authorities, subject to Anthropic’s judgment about safeguards. That clause exists in their public policy documentation. Most of their enterprise customers have never read it, don’t know whether their deployment is governed by standard or tailored terms, and have no contractual mechanism to find out. If you’re an Anthropic customer and you can’t answer that question, you don’t control it.
Splunk’s 2026 CISO Report found that a large majority of CISOs carry personal liability concerns about security incidents. AI model misuse by a subcontractor or an embedded model that you didn’t govern is exactly the incident scenario that tests that liability question. Your current contract schedules almost certainly don’t address it. Here are the questions that need to be in every AI vendor negotiation before signature, not as a wish list, but as conditions of signature:
Which model variant governs your deployment, and does that variant deviate from the vendor’s published acceptable use policy or baseline safety commitments? Get the answer in writing with a version reference.
What change control process governs model updates, safety stack revisions, and policy changes? “We update continuously” is not an answer. You need customer notice requirements and the right to pause deployment when the vendor makes a material change.
What logs exist, who holds access, and what is the retention period? Without logs you can’t support an incident investigation, a regulatory inquiry, or your own post-incident analysis.
What happens when a major customer, a regulator, or a government agency demands scope expansion for your deployment? The Anthropic situation confirmed this question isn’t hypothetical. It’s a negotiating dynamic triggered externally, rapidly, and without advance warning to downstream customers.
From the Run Phase to the Evolve Phase
If you’re applying the CARE framework, this situation signals that you’re overdue for an Evolve-phase review of your AI vendor relationships. The Create and Adapt work produced your current model integrations. Most organizations have stayed in the Run phase, monitoring performance and managing routine issues, while the risk environment underneath those integrations has shifted significantly. The Evolve phase requires reassessing whether the governance model you built for each AI deployment still fits the world you’re operating in now.
The Anthropic situation changed that environment in three concrete ways your board needs to understand. First, it showed that an AI vendor’s political and contractual relationships with high-profile customers now represent operational risk to every downstream customer, not only government contractors. Second, it produced a documented public case where contract language anchored to “applicable law” failed to deliver the protections a party believed it had agreed to. Third, it revealed that model replacement timelines are slower than your AI vendors implied during the sales process. The DoD, with its classified infrastructure, operational urgency, considerable resources, and six-month transition timeline, is the fastest-moving version of this problem you’re likely to encounter. Your enterprise timeline almost certainly isn’t shorter.
Build your AI vendor risk registry before something breaks, while relationships are functional and vendors are cooperative. Map every production AI deployment to the model underneath it, the vendor who embeds it, the cloud provider who hosts it, and the contract that governs each layer. Run a prohibited-use gap assessment: which categories of use does each contract explicitly prohibit, and are those prohibitions free-standing or anchored to “applicable law”? Apply OWASP’s Agentic Top 10 to any workflow where a model makes or influences a decision without a mandatory human review step that requires documented rationale.
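If it helps to make that registry concrete, here is a minimal sketch of one entry as structured data. This assumes you keep the registry in code or a spreadsheet you control; the field names and the `gap_report` helper are illustrative, not drawn from any specific GRC tool, and the booleans map to the contract questions above.

```python
# Minimal sketch of an AI vendor risk registry entry and a prohibited-use
# gap assessment. Assumption: registry is maintained as structured data;
# field names are illustrative, not drawn from any specific GRC product.
from dataclasses import dataclass

@dataclass
class AIDeployment:
    workflow: str                        # e.g. "SOC alert triage"
    saas_vendor: str                     # the vendor you actually contract with
    embedded_model: str                  # foundation model under the hood, if known
    cloud_provider: str                  # where the vendor hosts it
    governing_contract: str              # MSA / AI addendum reference
    model_version_pinned: bool           # is the variant named in writing?
    prohibited_uses_freestanding: bool   # vendor-imposed schedule vs. "applicable law"
    policy_change_notice: bool           # flow-down notice on material AI policy changes
    pause_or_exit_right: bool            # defined right to pause deployment or terminate

def gap_report(deployments: list[AIDeployment]) -> list[str]:
    """Flag deployments whose protections are anchored only to 'applicable law'."""
    gaps = []
    for d in deployments:
        missing = [name for name, ok in [
            ("pinned model variant", d.model_version_pinned),
            ("free-standing prohibited-use schedule", d.prohibited_uses_freestanding),
            ("policy-change notice", d.policy_change_notice),
            ("pause/exit right", d.pause_or_exit_right),
        ] if not ok]
        if missing:
            gaps.append(f"{d.workflow} ({d.saas_vendor}): missing {', '.join(missing)}")
    return gaps
```

Run the gap report before renewal season, not after a vendor’s policy shifts; the output is the agenda for your next contract negotiation.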
The CISOs who were ahead of this story weren’t tracking the Pentagon news cycle. They had already asked their SaaS vendors which model was embedded, what the vendor’s posture would be if that model’s policy changed, and what their exit path looked like. Most got vague answers. The right response to a vague answer from an AI vendor is a contract clause, not a follow-up email.
Key Takeaway: Your AI vendor’s ethics statement doesn’t protect your enterprise. A free-standing prohibited-use schedule, enforceable audit rights, and model-layer flow-down provisions do.
What to Do Next
Start with a model inventory audit across your top ten SaaS vendor relationships. Ask each vendor to identify the foundation model embedded in your production workflows and provide the current acceptable use policy governing your specific deployment, including any tailored terms. Map the gap between what the policy says and what your contract actually enforces.
The Anthropic situation is the most instructive public case study on AI vendor governance to emerge from this space. Use it while it’s in front of your board and before your next AI vendor signature lands on someone else’s desk.
👉 Subscribe for more AI security and governance insights with the occasional rant.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
The views and opinions expressed in RockCyber Musings are my own and do not represent the positions of my employer or any organization I’m affiliated with.





