Weekly Musings Top 10 AI Security Wrap-Up: Issue 32, March 27–April 2, 2026
Anthropic's Worst Week, CISA's Busiest Friday, and the EU Still Wasn't Ready
Anthropic had a week that should be a case study in operational security failure for years to come. On March 31, a routine release packaging error exposed 500,000 lines of Claude Code source across roughly 2,000 files. Five days earlier, a CMS misconfiguration had already put nearly 3,000 unpublished internal documents into a public search index, including draft material describing their most capable model as posing “unprecedented cybersecurity risk.” By April 1, they were firing DMCA takedowns at 8,000 GitHub repositories, most unrelated to them, trying to unsee what the internet had already seen. By April 2, a congressman was writing to the CEO about national security.
That would have been enough for any week. It was not the only thing that happened. On March 26 and 27, CISA added two exploited AI infrastructure vulnerabilities to its KEV catalog; on March 27, three LangChain and LangGraph CVEs hit disclosure, with 84 million downloads in scope, and the European Commission confirmed attackers had been inside its AWS account for three days. The thread connecting all of it is the same one it always is: AI deployment speed running ahead of the operational security discipline required to sustain it. This week was not an anomaly. It was a pattern. Patterns do not self-correct.
As a bonus, check out my AI Cyber Magazine Podcast with Confidence Staveley during RSA.
1. Anthropic Leaked 500,000 Lines of Claude Code Source, Then Panicked on GitHub
On March 31, a debugging file accidentally bundled into a routine Claude Code update exposed approximately 500,000 lines of source code across nearly 2,000 files (CNBC, Axios, Fortune). The codebase was mirrored across GitHub within hours. Leaked feature flags revealed unreleased capabilities: a persistent background agent, cross-device remote control, and session-to-session learning. Anthropic attributed the incident to “a release packaging issue caused by human error” and stated no customer data was exposed. On April 1, attempting to scrub the code from GitHub, Anthropic sent DMCA takedowns that hit approximately 8,000 repositories, most unrelated to the leak (TechCrunch, Bloomberg).
Why it matters
Competitors received Anthropic’s unreleased feature roadmap. That strategic damage compounds the fact that this happened five days after the Mythos content leak. Coincidence? I’ll let you decide.
The persistent background agent and remote control capabilities in the leaked code require explicit security design review before deployment. They were in development without prior public disclosure of the capability direction.
The DMCA sweep that caught 8,000 unrelated repositories shows what reactive incident response without a playbook looks like. Every remediation attempt created a new problem.
What to do about it
If you deploy Claude Code in your enterprise environment, review what access it holds to production systems and rotate any associated credentials until the full scope of the leak is confirmed.
Require software composition analysis (SCA) and release integrity verification as contractual terms with your AI vendors.
Develop a pre-incident legal response playbook that covers IP exposure scenarios, including proportional DMCA procedures that require scope confirmation before submission.
Rock’s Musings
Two major operational security failures from the same company in five days. The first was a CMS misconfiguration. The second was a packaging error. Both are basic controls that mature security operations have solved. Anthropic markets itself on safety and trustworthiness, and that positioning is now doing work it was not designed to carry. The DMCA overcorrection made it worse: you leak 500,000 lines of source code, then fire automated takedown requests at 8,000 repositories, most of them unrelated to you. Every IP attorney will tell you DMCA takedowns require good faith and specificity. Have a process before the fire starts.
2. Anthropic Accidentally Confirmed Its Most Capable Model Poses Unprecedented Cybersecurity Risk
A configuration error in Anthropic’s content management system made nearly 3,000 unpublished assets publicly searchable starting around March 26, including draft blog posts for a model called Claude Mythos (Fortune, CoinDesk). Internal documents describe Mythos as capable of rapidly finding and exploiting software vulnerabilities at an unprecedented scale. Anthropic confirmed the model exists and is in testing with early-access customers, calling it “a step change” in capability. The company described the exposure as caused by a configuration error and stated the data store was secured after discovery.
Why it matters
Anthropic’s own internal documentation, not a researcher’s estimate, describes this model as posing cybersecurity risks the industry has not seen before. That is the company’s self-assessment.
Early-access customer deployments were already underway before any public discussion of the risk profile occurred. The model shipped before the security conversation started.
A frontier model capable of autonomously finding and exploiting vulnerabilities at scale invalidates current vulnerability management timelines. That conversation needs to happen now.
What to do about it
Update your AI threat model to account for AI-assisted offensive operations at scale. This is not a future scenario. It is a current deployment.
Ask your AI vendors direct questions about internal capability assessments before your next contract renewal. What have they assessed, and when?
Document board and leadership awareness of frontier AI capability risk as a governance record item. Regulatory scrutiny on this topic will increase.
Rock’s Musings
The model is called Mythos. The leaked internal docs describe the cybersecurity risk as unprecedented. Anthropic was already deploying it with customers before any of this became public. This happened not because of an attack but because someone left a CMS misconfigured. Anthropic has historically been conservative in capability claims. When their own internal documentation describes a model as different in kind from what came before, the security community should take that seriously, not because the word “unprecedented” is alarming on its own, but because the source is the organization that built the thing. They know what it does.
3. ShinyHunters Breached the European Commission’s AWS Account
The European Commission confirmed on March 27 that attackers accessed the AWS account hosting its Europa.eu websites, with the intrusion first detected on March 24 (TechCrunch, Bloomberg). Threat actor ShinyHunters claimed responsibility and alleged theft of more than 350GB of data including mail server exports, databases, confidential documents, and contracts. The Commission’s statement noted internal systems were unaffected and mitigation measures were applied quickly. Affected EU entities received notification.
Why it matters
ShinyHunters has a documented history of monetizing stolen data through dark market sales. Even if the 350GB claim is exaggerated for leverage, policy documents and procurement contracts from the Commission’s web infrastructure are a counterintelligence asset.
The Commission enforces GDPR and is building the AI Act enforcement apparatus. Getting breached while standing up that apparatus is not a good governance signal.
AWS account-level compromise is full infrastructure compromise in practice. A managed cloud provider does not neutralize cloud account security failures.
What to do about it
Audit your AWS account permission boundaries and review CloudTrail logs for anomalous patterns this week, not next quarter.
Ensure your incident response plan explicitly covers cloud account compromise. Traditional endpoint-focused plans miss this scenario entirely.
If any of your vendors are EU institutions or Commission contractors, treat procurement data exposure as a downstream supply chain risk and assess your exposure now.
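The CloudTrail review in the first bullet can start as a simple triage script rather than a console session. This is a minimal sketch, not a detection product: it assumes you have already pulled events down (for example via `aws cloudtrail lookup-events`), and the sensitive-API set and allowlisted ARN below are illustrative assumptions, not a complete detection list.

```python
# Flag CloudTrail events worth a human look: calls to sensitive IAM or
# logging APIs, or calls from principals outside an expected allowlist.
SENSITIVE_ACTIONS = {
    "CreateAccessKey", "AttachUserPolicy", "PutUserPolicy",
    "UpdateAssumeRolePolicy", "DeleteTrail", "StopLogging",
}

def flag_events(events, allowed_principals):
    """Return (eventName, principalArn) pairs that merit review.
    `events` uses CloudTrail's JSON field names (eventName, userIdentity)."""
    flagged = []
    for ev in events:
        name = ev.get("eventName", "")
        principal = ev.get("userIdentity", {}).get("arn", "")
        if name in SENSITIVE_ACTIONS or principal not in allowed_principals:
            flagged.append((name, principal))
    return flagged
```

The value is not the allowlist itself but forcing the question of what your expected principal set actually is; most teams discover they cannot enumerate it.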
Rock’s Musings
The body enforcing Europe’s data protection framework had its AWS account cracked. Governance credentials do not equal security maturity. Write the most thorough AI regulation in the world. Your cloud IAM configuration remains a disaster until someone fixes it. The ShinyHunters 350GB claim needs forensic verification before anyone draws conclusions about scope, but three days of undetected access to the official Commission infrastructure doesn’t need verification. The institutions asking private sector organizations to demonstrate AI security maturity owe the market some transparency on their own failures. Name it, fix it, move on.
4. Your AI Workflow Tool Got CISA’s Attention: Langflow CVE-2026-33017
CISA added CVE-2026-33017, a critical remote code execution flaw in Langflow, to its Known Exploited Vulnerabilities catalog on March 26. Attackers began scanning for exposed instances roughly 20 hours after the advisory publication, with exploitation scripts appearing within 21 hours and active .env and .db file harvesting beginning within 24 hours (Sysdig, BleepingComputer, Help Net Security). The vulnerability carries a CVSS score of 9.3 and allows unauthenticated attackers to inject arbitrary Python code through the public flow build endpoint with no sandboxing applied. Federal agencies face an April 8 remediation deadline. Upgrade to Langflow version 1.9.0 or later.
Why it matters
Langflow is used to build and deploy LLM pipelines. Remote code execution in a workflow orchestration tool gives an attacker control over the AI’s inputs, outputs, and the credentials it holds.
The 20-hour exploitation window is increasingly standard for high-severity flaws. The concept of a patch window measured in days is no longer realistic for internet-exposed AI infrastructure.
.env file harvesting is the attacker’s first move because those files contain API keys for LLMs, vector databases, and cloud services the workflow connects to.
What to do about it
If Langflow runs on any internet-accessible host, treat the environment as potentially compromised and rotate all associated credentials before patching.
Segment AI workflow orchestration platforms behind authentication and network controls. These tools have no business being directly internet-accessible.
Verify Langflow version across your environment immediately. Anything prior to 1.9.0 is an open liability.
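The version check in the last bullet is scriptable across a fleet. A minimal sketch, assuming plain numeric dot-separated version strings and the 1.9.0 fixed floor named above:

```python
# The fixed release floor from the advisory; anything below is exposed.
FIXED = (1, 9, 0)

def is_vulnerable(version: str) -> bool:
    """Naive numeric compare: assumes dot-separated versions like
    '1.8.3' (no rc/dev suffixes)."""
    parts = [int(p) for p in version.split(".")[:3]]
    parts += [0] * (3 - len(parts))  # pad short versions like '1.9'
    return tuple(parts) < FIXED
```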
Rock’s Musings
The 20-hour exploitation timeline should reframe your vulnerability management program. That program was designed when you had days or weeks to act. That era closed. CISA’s KEV catalog is now your minimum viable patch priority list, and if you are not at sub-72-hour remediation SLAs for critical AI infrastructure, you are already behind. Organizations still describing AI workflow platforms as “internal tools” need a rethink. Internal tools with LLM API keys, cloud credentials, and production data connections are not internal in any meaningful threat model. An attacker who executes code in your Langflow environment has lateral movement access to every system that environment touches.
5. LangChain and LangGraph: Three CVEs, 84 Million Downloads Exposed
Cyera security researcher Vladimir Tokarev disclosed three vulnerabilities in LangChain and LangGraph on March 27, each covering a different attack path against the same enterprise AI framework (The Hacker News). CVE-2026-34070 (CVSS 7.5) enables path traversal to arbitrary files through manipulated prompt templates. CVE-2025-68664 (CVSS 9.3) allows extraction of API keys and environment secrets through unsafe deserialization. CVE-2025-67644 (CVSS 7.3) enables SQL injection in LangGraph’s SQLite checkpoint layer. LangChain, LangChain-Core, and LangGraph collectively logged over 84 million downloads. Patches are available: LangChain 1.2.22+, LangChain-Core 0.3.81+ or 1.2.5+, and langgraph-checkpoint-sqlite 3.0.1+.
Why it matters
These three CVEs cover filesystem data, environment secrets, and conversation history in combination. Together, they represent near-total information exposure for any application built on these frameworks.
The 84 million download count means a significant portion of enterprise AI applications are affected. Most organizations do not know which AI frameworks their development teams selected.
CVE-2025-68664 with its 9.3 CVSS is the most critical. Unsafe deserialization is a well-understood, pervasive, and reliably exploitable class of vulnerability.
What to do about it
Inventory every AI framework in your environment, including those embedded in third-party tools. Do not rely on developers to self-report what they are using.
Apply the three patches and validate versions before the end of the business week.
Assess what data your LangChain-based applications can access and treat those data stores as potentially exposed pending patch confirmation.
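For Python environments you control, the inventory step does not have to wait on developer self-reporting. A sketch that sweeps an environment with `importlib.metadata` against the patch floors listed above; the distribution names are assumptions, so verify them against your own lockfiles:

```python
from importlib import metadata

# Patch floors as described in the disclosure writeup above. The
# langchain-core fix landed in two release lines, so check per-major-line.
PATCH_FLOORS = {
    "langchain": [(1, 2, 22)],
    "langchain-core": [(0, 3, 81), (1, 2, 5)],
    "langgraph-checkpoint-sqlite": [(3, 0, 1)],
}

def parse(v: str) -> tuple:
    """Naive parse: numeric dot-separated versions only."""
    parts = [int(p) for p in v.split(".")[:3]]
    return tuple(parts + [0] * (3 - len(parts)))

def is_patched(version: str, floors: list) -> bool:
    v = parse(version)
    line = [f for f in floors if f[0] == v[0]]  # floor for this major line
    return v >= min(line) if line else v >= max(floors)

def audit() -> dict:
    """Map each installed-but-unpatched distribution to its version."""
    findings = {}
    for pkg, floors in PATCH_FLOORS.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # not installed in this environment
        if not is_patched(installed, floors):
            findings[pkg] = installed
    return findings
```

Run it per virtual environment and per container image; a single run on a developer laptop tells you nothing about production.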
Rock’s Musings
Three vulnerability classes in the same framework, covering three categories of sensitive enterprise data, were disclosed in one report. That’s what happens when you build for speed and bolt on security later. AI framework developers made that choice repeatedly, and this week’s CVE list is the invoice. LangChain is the jQuery of AI development right now. It is in everything, often without explicit organizational approval. Your AI security posture includes every dependency your developers pulled in without telling you. Get ahead of that inventory problem before the next disclosure.
6. A Congressman Put Anthropic on Notice Over National Security
Rep. Josh Gottheimer (D-N.J.) sent a letter to Anthropic CEO Dario Amodei on April 2, citing national security concerns arising from the source code leak (Axios, The Hill). Gottheimer’s letter noted that Claude is embedded in defense and intelligence operations, raised the prior CCP-backed group intrusion against Claude, and expressed concern that Mythos could enable more sophisticated cyberattacks against the United States. The letter also flagged Anthropic’s decision in late February to remove its binding commitment to halt model development if safety capabilities fall behind, replacing it with “nonbinding but publicly-declared” goals.
Why it matters
Federal agencies and defense contractors use Claude operationally. A source code leak followed by a congressional inquiry is a vendor risk event, not a PR problem. Your GRC process should treat it as such.
Removing the binding safety commitment is a substantive policy change that the congressional record now documents. The enforceability question will follow Anthropic through every future regulatory discussion.
Gottheimer sits on the House Intelligence Committee. This is not a throwaway letter. It is a first-stage oversight action that signals more to come.
What to do about it
Review your vendor risk assessment for any AI provider with confirmed government contracts. Congressional inquiries are material third-party risk events.
Establish a direct communication channel with your AI vendors’ enterprise security teams and request formal notification procedures for any government inquiries affecting their products.
Track the congressional record regarding Anthropic’s rollback of its safety commitment. It will surface again in budget and procurement cycles.
Rock’s Musings
The safety commitment rollback from February is the most substantive issue in that letter. Anthropic replaced a binding pledge to pause development if safety fell behind with goals they grade themselves on. That is not a small change. That is the foundational accountability mechanism that distinguished their positioning from competitors, and they quietly removed it. Congressional scrutiny was predictable the moment they became embedded in national security operations. The question I would ask directly is how many federal agency customers received notification about the source code exposure before it hit the press. I would guess the answer is uncomfortable.
7. Your Security Scanner Was the Supply Chain Attack: Trivy CVE-2026-33634
CISA added CVE-2026-33634 to its Known Exploited Vulnerabilities catalog on March 27 (Help Net Security, Aquasecurity GitHub advisory). Attackers compromised the Trivy container security scanner on March 19, using stolen credentials to publish a malicious v0.69.4 release and force-push 76 of 77 version tags in the trivy-action repository with credential-stealing malware. The attack triggered a downstream LiteLLM supply chain compromise via poisoned PyPI packages. Federal agencies face an April 9 deadline. The root cause was a non-atomic credential rotation on March 1 that left a valid token exposed during the rotation window.
Why it matters
Trivy is a default security tool in CI/CD pipelines across the industry. Compromising the scanner means attackers access the same environment credentials the security scan was meant to protect.
Force-pushing 76 version tags is a comprehensive compromise. Any pipeline that pins to mutable major or minor version tags rather than specific commit hashes was exposed.
The downstream LiteLLM PyPI compromise extends the blast radius into Python environments running LLM application code. The supply chain damage propagated well beyond the initial tool compromise.
What to do about it
Audit every CI/CD pipeline for trivy-action or setup-trivy at mutable version tags and pin to specific commit hashes immediately.
Treat any environment that ran a compromised Trivy version since March 19 as potentially credential-compromised. Rotate all associated tokens, SSH keys, and cloud credentials.
Apply this lesson to every security tool in your pipeline. Security tooling supply chains are higher-value targets than application code supply chains.
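The first bullet's audit can be automated with a small workflow scan. A sketch, assuming GitHub Actions workflows under `.github/workflows/` and treating anything other than a full 40-character commit SHA as a mutable reference:

```python
import re
from pathlib import Path

# Match `uses:` lines referencing the two affected actions and capture the ref.
PIN_RE = re.compile(r"uses:\s*(aquasecurity/(?:trivy-action|setup-trivy))@([^\s#]+)")
SHA_RE = re.compile(r"[0-9a-f]{40}")

def unpinned_uses(workflow_text: str) -> list:
    """Return (action, ref) pairs pinned to a mutable tag, not a commit SHA."""
    return [(action, ref) for action, ref in PIN_RE.findall(workflow_text)
            if not SHA_RE.fullmatch(ref)]

def scan_repo(root: str = ".") -> dict:
    """Map each workflow file to its mutable-tag findings."""
    hits = {}
    for wf in Path(root).glob(".github/workflows/*.y*ml"):
        bad = unpinned_uses(wf.read_text())
        if bad:
            hits[str(wf)] = bad
    return hits
```

Extending `PIN_RE` to flag every third-party action, not just the two named here, is the version worth actually deploying.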
Rock’s Musings
The attacker turned the vulnerability scanner into the vulnerability. That is the platonic ideal of a supply chain attack: targeting organizations that care about security and embed security tooling in their build pipelines. The more security-conscious your culture, the higher your Trivy adoption, and the more exposed you were. The non-atomic credential rotation is the root cause. Aquasecurity rotated credentials on March 1 but did not revoke all tokens simultaneously. The attacker grabbed freshly rotated secrets during the window between invalidation and deployment. If your own rotation procedures have a gap between “revoke old” and “confirm new is live,” that gap is your exposure. Run your playbooks against that question this week.
8. The State AI Chatbot Safety Wave Is Not Waiting for Washington
Georgia’s state senate voted to concur in the House-amended version of SB 540 during the week of March 27, sending the chatbot disclosure and minor-protection bill to Governor Kemp’s desk (Troutman Privacy, Transparency Coalition). Idaho’s S 1297 passed its full legislature and advanced to Governor Little. Both are chatbot safety measures. Georgia’s bill requires disclosure every three hours for adult users and every hour for minors, along with explicit suicide and self-harm response protocols for conversational AI services. The Future of Privacy Forum’s tracker now counts 78 AI chatbot safety bills moving across 27 states in 2026.
Why it matters
Disclosure, minor safety, and mental health response requirements are becoming the regulatory floor across state jurisdictions. Organizations operating consumer-facing AI products need a 50-state tracking capability, not a wait-and-see approach.
Hourly disclosure requirements for minors are not trivial to implement for many chatbot architectures. The compliance engineering work should start now.
Seventy-eight bills across 27 states mean that any federal preemption framework, if one ever arrives, faces an already established patchwork of state obligations to reconcile.
What to do about it
Map your consumer AI products against chatbot disclosure requirements in every state where users reside. Georgia and Idaho represent the floor, not the ceiling.
Assess your chatbot’s existing mental health response protocols against the Georgia requirement specifics. A disclaimer is not compliant.
Assign someone accountable for multi-state AI governance tracking. This is not a future compliance problem.
Rock’s Musings
Washington cannot pass a federal AI framework. States can. Fifty legislatures with different requirements and different timelines is the compliance nightmare that preemption was supposed to prevent. It didn’t. Georgia’s hourly minor disclosure requirement is specific, implementable, and enforceable. State legislatures are producing more actionable compliance requirements than most federal guidance I have seen this year. If you deploy consumer AI products and you don’t have someone accountable for multi-state AI governance tracking today, that gap closes before Q3 or it closes you.
9. The EU AI Act Has an Enforcement Problem, and Nobody Is Talking About It Honestly
As of late March, only 8 of 27 EU member states had designated the single contact points required for national enforcement coordination under the AI Act, according to the European Parliament Think Tank’s enforcement analysis (Tech Policy Press, IAPP). The Digital Omnibus proposal, with negotiating positions adopted by Parliament’s IMCO and LIBE committees on March 18, would push high-risk AI compliance deadlines to December 2027 for Annex III systems and to August 2028 for Annex I systems, compared with the original August 2026 deadline. The European Commission also missed its own deadline for issuing guidance on high-risk AI systems. Trilogue negotiations between Council, Parliament, and Commission are now underway.
Why it matters
Approximately 70% of EU member states are not operationally ready for AI Act enforcement. Regulations without enforcement infrastructure are aspirational documents.
The 16-month delay in high-risk requirements gives organizations breathing room on paper while creating uncertainty about what compliance standard they are being held to during the gap.
The Commission missing its own implementation guidance deadline sets a poor precedent for holding private sector organizations to their compliance timelines.
What to do about it
Do not use the delay as a license to defer governance program work. The underlying obligations have not changed in substance. Build the program now and own it.
Review the Digital Omnibus amendments specifically for changes to the high-risk AI system definition. Legislative simplification sometimes reclassifies systems in ways that alter the scope of compliance.
Subscribe to IAPP’s EU AI Act tracker for updates on the trilogue outcome. The final text will differ from both Council and Parliament positions.
Rock’s Musings
Eight out of 27 enforcement bodies are operational as the Act’s first major deadlines approach. The Commission missed its own implementation guidance deadline. The most substantive AI governance framework on the planet is running on infrastructure that is not ready to enforce it. The delay does not invalidate the regulation. Organizations that build genuine AI risk management programs now will be positioned for whatever enforcement timeline materializes. Organizations that chase the deadline and treat compliance as documentation will be exposed when the enforcement machinery catches up. That gap grows wider every quarter.
The One Thing You Won’t Hear About But You Need To
NVIDIA and Johns Hopkins Gave You a Blueprint for Defending AI Agents Against Prompt Injection
Researchers from NVIDIA and Johns Hopkins University published “Architecting Secure AI Agents: Perspectives on System-Level Defenses Against Indirect Prompt Injection Attacks” on March 31 (ArXiv 2603.30016). The paper addresses how AI agents are vulnerable not to direct attacks on the model but to malicious instructions embedded in data the agent processes during task execution. The authors articulate three architectural positions. First, agents in dynamic environments need dynamic replanning with security policy updates built into the replanning loop. Second, security decisions requiring contextual judgment should still involve LLMs, but only within system designs that strictly constrain what the model can observe and decide. Third, ambiguous situations should treat human interaction as a core design consideration, not an edge case to minimize.
Why it matters
This paper frames indirect prompt injection as an architectural problem, not a model alignment problem. You cannot align your way out of it. You design it out or you accept the risk.
The principle of strictly constraining what the model can observe and decide has immediate practical application as your primary defense lever, more effective than filtering or detection approaches.
The human oversight design principle directly contradicts how most agentic deployments are being built, with human review treated as friction to reduce rather than a security control to preserve.
What to do about it
Read the paper. At 12 pages, it is short enough to share with your AI architects and security engineers before the next deployment review meeting.
Audit any agentic AI system currently in your environment against the observation scope and decision authority questions. Broad scope plus broad authority equals your highest-risk deployment.
Make human oversight an explicit design requirement in your AI agent security standards. Document the specific conditions under which an agent must pause and request human authorization.
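The pause-and-request-authorization condition in the last bullet can be made concrete as a default-deny policy object. An illustrative sketch with hypothetical action names, not the paper's implementation; a real deployment would scope by tool, arguments, and data sensitivity rather than bare action strings:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed_actions: set = field(default_factory=set)  # agent acts autonomously
    gated_actions: set = field(default_factory=set)    # agent pauses for a human

    def decide(self, action: str) -> str:
        if action in self.allowed_actions:
            return "execute"
        if action in self.gated_actions:
            return "ask_human"
        return "deny"  # default-deny: anything outside both scopes fails closed
```

The point of the default-deny branch is that an injected instruction naming an action the designer never enumerated fails closed instead of executing, which is exactly the constrained-authority posture the paper argues for.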
Rock’s Musings
Nobody outside the AI security research community covered this paper. That is precisely why it belongs here. The breach reports get attention. The architecture guidance that would prevent the next breach sits on ArXiv with a few hundred downloads. I have been arguing at RockCyber for two years that agentic AI security is an architecture problem. You do not solve it with better prompts or stronger models. You solve it with privilege constraints, observation scope limits, and honest human oversight design. NVIDIA and Johns Hopkins gave you a 12-page framework for that conversation. If your next AI agent deployment review does not address these three principles, you are building exposure, not capability.
👉 For ongoing analysis of agentic AI governance frameworks, the conversation continues at RockCyber Musings.
👉 Visit RockCyber.com to learn more about how we can help you on your traditional cybersecurity and AI security and governance journey.
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
The views and opinions expressed in RockCyber Musings are my own and do not represent the positions of my employer or any organization I’m affiliated with.
References
Axios. (2026, March 31). Anthropic leaked its own Claude source code. https://www.axios.com/2026/03/31/anthropic-leaked-source-code-ai
Axios. (2026, April 2). Exclusive: Gottheimer presses Anthropic on source code leaks and safety protocols. https://www.axios.com/2026/04/02/gottheimer-anthropic-source-code-leaks
BleepingComputer. (2026, March 27). CISA: New Langflow flaw actively exploited to hijack AI workflows. https://www.bleepingcomputer.com/news/security/cisa-new-langflow-flaw-actively-exploited-to-hijack-ai-workflows/
Bloomberg. (2026, March 27). European Commission’s data stolen in hack on AWS account. https://www.bloomberg.com/news/articles/2026-03-27/european-commission-s-data-stolen-in-hack-on-aws-account
Bloomberg. (2026, April 1). Anthropic takes down thousands of GitHub repos trying to yank its leaked source code. https://www.bloomberg.com/news/articles/2026-04-01/anthropic-scrambles-to-address-leak-of-claude-code-source-code
CNBC. (2026, March 31). Anthropic leaks part of Claude Code’s internal source code. https://www.cnbc.com/2026/03/31/anthropic-leak-claude-code-internal-source.html
CoinDesk. (2026, March 27). Anthropic’s massive Claude Mythos leak reveals a new AI model that could be a cybersecurity nightmare. https://www.coindesk.com/markets/2026/03/27/anthropic-s-massive-claude-mythos-leak-reveals-a-new-ai-model-that-could-be-a-cybersecurity-nightmare
Fortune. (2026, March 27). Anthropic accidentally leaked details of a new AI model that poses unprecedented cybersecurity risks. https://fortune.com/2026/03/27/anthropic-leaked-ai-mythos-cybersecurity-risk/
Fortune. (2026, March 31). Anthropic leaks its own AI coding tool’s source code in second major security breach. https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/
Help Net Security. (2026, March 27). CISA sounds alarm on Langflow RCE, Trivy supply chain compromise after rapid exploitation. https://www.helpnetsecurity.com/2026/03/27/cve-2026-33017-cve-2026-33634-exploited/
Help Net Security. (2026, March 30). Second data breach at European Commission this year leaves open questions over resilience. https://www.helpnetsecurity.com/2026/03/30/european-commission-cyberattack-cloud-infrastructure-website/
IAPP. (2026). European Commission misses deadline for AI Act guidance on high-risk systems. https://iapp.org/news/a/european-commission-misses-deadline-for-ai-act-guidance-on-high-risk-systems
IAPP. (2026, March). EU Digital Omnibus: Analysis of key changes. https://iapp.org/news/a/eu-digital-omnibus-analysis-of-key-changes
Qualys ThreatPROTECT. (2026, March 26). CISA Added Langflow Vulnerability to its Known Exploited Vulnerabilities Catalog (CVE-2026-33017). https://threatprotect.qualys.com/2026/03/26/cisa-added-langflow-vulnerability-to-its-known-exploited-vulnerabilities-catalog-cve-2026-33017/
SecurityAffairs. (2026, March 27). The European Commission confirmed a cyberattack affecting part of its cloud systems. https://securityaffairs.com/190067/data-breach/the-european-commission-confirmed-a-cyberattack-affecting-part-of-its-cloud-systems.html
Sysdig. (2026, March 27). CVE-2026-33017: How attackers compromised Langflow AI pipelines in 20 hours. https://www.sysdig.com/blog/cve-2026-33017-how-attackers-compromised-langflow-ai-pipelines-in-20-hours
TechCrunch. (2026, March 27). European Commission confirms cyberattack after hackers claim data breach. https://techcrunch.com/2026/03/27/european-commission-confirms-cyberattack-after-hackers-claim-data-breach/
TechCrunch. (2026, April 1). Anthropic took down thousands of GitHub repos trying to yank its leaked source code. https://techcrunch.com/2026/04/01/anthropic-took-down-thousands-of-github-repos-trying-to-yank-its-leaked-source-code-a-move-the-company-says-was-an-accident/
The Hacker News. (2026, March 27). LangChain, LangGraph flaws expose files, secrets, databases in widely used AI frameworks. https://thehackernews.com/2026/03/langchain-langgraph-flaws-expose-files.html
The Hill. (2026, April 2). House Democrat pushes Anthropic on safety protocols, source code leak. https://thehill.com/policy/technology/5812881-gottheimer-presses-anthropic-ai-safety/
Tech Policy Press. (2026). EU’s AI Act delays let high-risk systems dodge oversight. https://www.techpolicy.press/eus-ai-act-delays-let-highrisk-systems-dodge-oversight/
Transparency Coalition. (2026, March 27). AI legislative update: March 27, 2026. https://www.transparencycoalition.ai/news/ai-legislative-update-march27-2026
Troutman Pepper Locke. (2026, March 30). Proposed state AI law update: March 30, 2026. https://www.troutmanprivacy.com/2026/03/proposed-state-ai-law-update-march-30-2026/
Aquasecurity. (2026). Trivy ecosystem supply chain temporarily compromised [GitHub Security Advisory GHSA-69fq-xp46-6x23]. https://github.com/aquasecurity/trivy/security/advisories/GHSA-69fq-xp46-6x23
European Parliament Think Tank. (2026, March 18). Enforcement of the AI Act. https://epthinktank.eu/2026/03/18/enforcement-of-the-ai-act/
Jiang, Z., et al. (2026, March 31). Architecting secure AI agents: Perspectives on system-level defenses against indirect prompt injection attacks [Preprint]. ArXiv. https://arxiv.org/abs/2603.30016



