Weekly Musings Top 10 AI Security Wrapup: Issue 29 March 6, 2026 - March 12, 2026
When AI Companies Sue the Government and OpenAI Enters the Security Market
The week of March 6-12, 2026, handed us a story that was coming... Anthropic filed suit against the Pentagon for blacklisting it as a national security risk. In the same week, the White House released a new cyber strategy, OpenAI launched a vulnerability-scanning agent aimed squarely at the enterprise security market, and two major federal regulatory deadlines expired. This is that week.
AI Security and AI governance collided this week in federal court, in congressional briefings, and in the server rooms of every organization running an AI agent they don’t fully understand. The governance frameworks that were supposed to provide clarity are instead amplifying uncertainty, and attackers are exploiting the gap in real time. Here’s what happened, what it means, and what to do about it, from someone who’s watched this industry long enough to be appropriately paranoid about all of it.
1. Anthropic sues the Pentagon for blacklisting it as a national security risk
Anthropic filed two federal lawsuits against the Trump administration after the Department of Defense designated the company a supply chain risk. That designation, typically reserved for foreign adversaries, bars Anthropic from federal contracts and requires defense contractors to certify they don’t use Claude in any DoD work. The root cause is Anthropic’s refusal to allow Claude for autonomous weapons or mass surveillance of American citizens. CEO Dario Amodei drew two red lines in contract negotiations, the Pentagon walked, and then labeled the company a national security threat (Fortune, Defense One). Anthropic warns the financial exposure runs to hundreds of millions of dollars.
Why it matters
This is the first time a U.S.-headquartered AI company has received the supply chain risk designation, a label previously applied only to foreign adversaries.
The case tests whether the executive branch can use procurement leverage to override AI developers’ safety commitments, a precedent that extends far beyond Anthropic.
Every CISO advising on AI vendor selection now has to factor whether a vendor’s ethics commitments make it a federal liability.
What to do about it
Map your Claude and Anthropic API dependencies now. Know which workflows break if this escalates.
Brief your board on what a supply chain risk designation means in federal contracting terms if your organization touches government work.
Watch for similar scrutiny applied to other AI vendors with published safety policies. This may not be a one-off.
Rock’s Musings
Anthropic drew a line in the sand (no autonomous weapons, no mass surveillance), and the government responded by calling them a threat. Think about what that signals to every AI developer watching. If you have safety principles that conflict with defense procurement, you get punished for them. The First Amendment angle is interesting, but the real issue is that the executive branch just discovered that supply chain risk designation is a very effective stick, and they used it on a domestic company for the first time. AI safety as a business value just became a liability under the current administration. Read that sentence twice.
2. Trump’s Cyber Strategy for America lands in five pages
On March 6, the White House released “President Trump’s Cyber Strategy for America” alongside an executive order on cybercrime (White House, Forrester). The document covers six pillars: offensive cyber operations to shape adversary behavior, regulatory streamlining, federal network modernization, critical infrastructure security, technological superiority, and cyber workforce development. At five pages, it’s the shortest national cybersecurity strategy in a decade. The strategy explicitly calls for more aggressive offensive operations, “unprecedented coordination” between the public and private sectors, and the building of a talent base fluent in autonomous systems and AI-enabled defense.
Why it matters
Five pages are either a vision document or a placeholder. For practical CISO purposes, it signals direction but provides almost no implementation guidance.
The offensive posture language has legal and escalation implications for any organization with a government nexus.
Workforce development framed as a national strategic asset means the government will be competing for the same AI security talent you’re trying to hire.
What to do about it
Map existing compliance obligations against the six pillars. Where regulations get streamlined, understand which requirements might disappear and which you need to maintain voluntarily.
Engage your federal liaison if you’re in a critical infrastructure sector. The public-private coordination language means more government asks are coming.
Start building for AI-fluent security talent now. The window before this becomes a serious hiring crunch is closing.
Rock’s Musings
Five pages tells you something: either there’s a lot more in the classified annex, or this is aspirational language waiting for someone to actually build the plumbing. The workforce section is the sleeper story. AI-enabled defense needs people who understand both AI failure modes and adversarial tradecraft simultaneously. That combination doesn’t exist at scale anywhere, and we’re being asked to build it at the same time AI is accelerating attacks. The gap between those two curves is where the next major breach lives.
3. OpenAI launches Codex Security and walks into the vulnerability scanning market
OpenAI released Codex Security as a research preview, a context-aware AI vulnerability scanning agent that evolved from Aardvark, an internal security research tool OpenAI had tested in private beta since October 2025 (Bloomberg, SecurityWeek). Codex Security analyzes code repositories, pressure-tests suspected vulnerabilities in sandboxed environments, generates proof-of-concept exploits to confirm impact, and proposes fixes. OpenAI’s own data shows it scanned 1.2 million commits over the preceding 30 days, surfacing 10,561 high-severity issues and approximately 800 critical vulnerabilities. The tool is available free for the next month to ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI says it can “identify complex vulnerabilities that other agentic tools miss” (TechRadar).
Why it matters
A free, frontier-model-powered vulnerability scanner from OpenAI immediately changes the competitive math for established AppSec vendors whose pricing models depend on the difficulty of this problem.
Generating proof-of-concept exploits to confirm vulnerability impact is a significant capability. In the wrong hands, or with a compromised account, this is an exploit generation service.
Organizations deploying Codex Security are giving OpenAI’s systems read access to their codebases. That data handling relationship deserves the same scrutiny as any privileged third-party tool.
What to do about it
Before enabling Codex Security on production repositories, review OpenAI’s data retention and training policies. Understand whether your code becomes training data.
Evaluate Codex Security against your existing SAST tooling on a representative code sample before replacing anything. “Better than other agentic tools” is a marketing claim until your team validates it.
The proof-of-concept exploit generation feature needs access controls. Restrict which engineers can trigger full exploit confirmation scans.
Rock’s Musings
OpenAI entering the vulnerability scanner market is not a product launch. It’s a statement about where AI is heading in security operations. The incumbents in SAST and DAST have been selling the same scan-and-report workflow for a decade. An agent that generates a proof-of-concept exploit to confirm a real finding changes the value proposition significantly. I’m not surprised OpenAI built this. I’m watching carefully how they handle the fact that generating exploit code is exactly the capability defenders need and attackers want. The account compromise scenario alone should give your red team ideas.
4. NIST AI Agent Standards RFI closes with 932 comments
The comment period for NIST’s Center for AI Standards and Innovation (CAISI) Request for Information on securing AI agent systems closed March 9 with 932 responses (Federal Register, NIST). The RFI, published in January 2026, sought input from industry, academia, and the security community on securing agentic AI development and deployment. The OpenID Foundation submitted a response addressing AI agent identity and authorization. A second comment period focused specifically on identity and authorization for AI agents remains open until April 2.
Why it matters
932 responses signals broad industry recognition of the problem. The quality of those comments determines whether the resulting standards have operational teeth.
Identity and authorization for AI agents is the structural gap behind most agent security failures. If NIST gets this right, it reshapes the risk calculus for enterprise agent deployment.
The listening sessions starting in April give practitioners a direct channel to shape what these standards require.
What to do about it
If your organization skipped the first RFI, submit to the identity and authorization comment period before April 2. Your implementation experience is exactly what NIST needs.
Start building your AI agent identity architecture now using OAuth 2.0 On-Behalf-Of flows with proper scope constraints. This is the emerging standard pattern.
Assign someone to track the AI Agent Standards Initiative. When draft standards publish later this year, you want your red-team comments in front of NIST before they finalize.
Rock’s Musings
Standards processes are slow by design, and the slowness here is appropriate because the identity and authorization problem for AI agents is genuinely hard. An agent acting on behalf of a user needs to carry that user’s permissions, not escalate to system-level access, and current tooling doesn’t enforce this reliably. The OIDF response to NIST gets the framing right: agent identity needs cryptographic binding, not just policy. If an agent claims to act on your behalf without a verifiable credential, you don’t have identity management. You have a trust-me system. You can read about the comments I submitted at “NIST AI Agent RFI (2025-0035): Human Oversight Is the Wrong Fix.”
5. Commerce and FTC hit their AI regulatory deadlines, and nothing changed yet
Two major deliverables from the December 2025 executive order on AI preemption came due on March 11. The Commerce Department submitted its review of state AI laws, identifying which ones the administration considers overly burdensome or in conflict with federal objectives. The FTC delivered a policy statement on how Section 5 of the FTC Act applies to AI and when state laws requiring alteration of model outputs are preempted by federal deceptive practices law (Mondaq, Digital Applied). Neither document invalidates any state law on its own. They are ammunition for the DOJ’s AI Litigation Task Force, established in January and yet to file any lawsuits. The administration is also conditioning $42 billion in BEAD broadband funding on states repealing AI regulations it deems onerous.
Why it matters
Organizations operating AI in multiple states face genuine legal uncertainty. State laws remain on the books. The federal government plans to fight them in court, and that litigation takes years.
The FTC’s Section 5 application to AI bias-mitigation requirements is legally untested territory.
The BEAD funding leverage is the most concrete near-term enforcement tool. Which states hold firm versus which fold will tell you a lot about regulatory durability.
What to do about it
Do not assume any state AI compliance requirement is going away. Build compliance architecture that can be toggled by jurisdiction as the legal landscape shifts.
Get legal counsel read into the Commerce Department report. Knowing which of your state compliance obligations are on the federal target list helps you prioritize risk posture.
Prepare for a two-to-three year period of overlapping requirements. Companies with modular, jurisdiction-aware compliance programs will weather this better.
Rock’s Musings
The administration created a fog of legal uncertainty and called it reducing regulatory burden. For most enterprises deploying AI, this makes compliance harder. You now have to track active federal litigation against state laws while still complying with those laws until courts rule otherwise. The FTC theory is worth watching closely: if the argument that requiring AI bias mitigation compels “deceptive output” holds, it guts a large category of state AI fairness requirements. If it fails, it sets a precedent limiting federal deceptive practices law’s reach into AI output governance. Either outcome reshapes the field.
6. OpenAI publishes its prompt injection defense playbook
On March 12, OpenAI published research and engineering guidance on defending AI agents against prompt-injection attacks (OpenAI, PrismNews). The guidance covers training techniques that help models treat different input channels with varying skepticism, architectural decisions that constrain privilege and limit blast radius, and layered verification to catch anomalous behavior. OpenAI also disclosed that it built a reinforcement learning-trained automated attacker to discover injection vulnerabilities internally, capable of steering agents through harmful multi-step workflows. The decision to publish openly reflects recognition that injection attacks threaten the entire developer ecosystem building on top of large language models.
Why it matters
Publishing the automated attacker methodology gives defenders a concrete model of what they’re fighting. Multi-step RL-trained attacks won’t be stopped with static guardrails.
The channel-skepticism approach, which trains models to treat external web content differently from system instructions, is an architectural fix that operates at inference time.
OpenAI’s disclosure accelerates industry defenses while giving attackers a clearer picture of which countermeasures to route around.
What to do about it
Apply privilege minimization immediately: agents should hold only permissions required for the specific task, expiring at task completion.
For agents consuming external content, validate that content before the agent ingests it. Treat external web data as untrusted input, period.
Build a prompt injection test suite and run it against production agents before every deployment. What you don’t test, you don’t know.
Rock’s Musings
OpenAI built an RL-trained machine to find injection vulnerabilities in their own systems. That machine now exists, and the same architecture will run on the offensive side of this problem within months, if it isn’t already. The deeper issue is architectural: language models cannot reliably distinguish instructions from data. That’s a fundamental property of how these systems process text, not a fixable bug. Any defense assuming the model will eventually learn to make that distinction is building on sand. The real fix is external. Don’t give agents access to resources they don’t need, and verify every external input before it reaches the model.
7. Google Cloud Threat Horizons reveals software exploits overtaking stolen credentials
Google Cloud’s Office of the CISO published its H1 2026 Threat Horizons Report on March 9, covering the second half of 2025 (Help Net Security, Security Boulevard). The headline finding is that exploitation of third-party software vulnerabilities jumped from 2.9% to 44.5% of initial cloud entry vectors in a single half-year period. The exploitation window has collapsed to days, with the React2Shell case showing crypto miners deployed within 48 hours of public vulnerability disclosure. North Korean threat group UNC4899 abused DevOps workflows and container breakout to steal millions in cryptocurrency. Threat actors also used LLMs to automate credential harvesting and accelerate the path from local developer access to full cloud admin privileges.
Why it matters
A jump from 2.9% to 44.5% in software exploitation isn’t an incremental change. Something shifted structurally in attacker methodology during H2 2025.
A 48-hour exploitation window means patch prioritization SLAs have to account for attacker speed, not just team capacity.
LLM-assisted credential harvesting is now in a major incident response dataset, no longer just theoretical research.
What to do about it
Reduce your vulnerability exposure window to 48 hours or less for critical and high-severity findings on internet-facing systems. Build the automation to get there.
Audit DevOps pipeline permissions. The UNC4899 vector targets the privilege elevation that happens when developers hold broad cloud access from local workstations.
Review whether AI coding tools introduce dependencies with unreviewed third-party code. Supply chain hygiene is now tier-one.
Rock’s Musings
For years, the orthodoxy was “credential hygiene is job one in the cloud.” Attackers just told you that orthodoxy is obsolete. They shifted to software exploitation because credential defenses got good enough. That’s how this works: defenders get strong on one vector, attackers rotate to the next. The current answer is patching speed. The LLM-assisted credential harvesting detail is quietly significant. It’s been in theoretical papers for two years, and now it’s in operational incident data from nation-state actors. Adjust your threat model accordingly.
8. AI agents are now helping criminals manage attack infrastructure
On March 8, The Register reported on Microsoft Threat Intelligence findings showing that North Korea’s Coral Sleet group is using AI and development platforms to rapidly build and manage attack infrastructure at scale. AI agents automate the creation of phishing infrastructure, manage C2 systems, and accelerate campaign tempo. The Unit 42 2026 Global Incident Response Report, published in February and drawing on 750 major incidents, showed the fastest 25% of attackers reaching data exfiltration in 72 minutes, down from 285 minutes the previous year. Identity weaknesses played a material role in almost 90% of investigations.
Why it matters
AI is now a documented operational capability in nation-state attack campaigns, not just an enterprise productivity tool.
The 4x speed increase in attack timelines means detection and response programs calibrated to last year’s data are already outdated.
87% of incidents unfolded across multiple attack surfaces, making correlation harder for defenders.
What to do about it
Review detection and response SLAs against the new attacker timeline. 72 minutes from initial access to exfiltration is shorter than most IR playbook trigger times.
Run tabletops assuming an AI-assisted attack infrastructure. Stress-test whether your team can detect and contain within the compressed timeline.
Identity controls remain the highest-leverage investment. 90% material involvement in incidents makes this your budget priority.
Rock’s Musings
The debate about whether attackers would use AI is over. It’s all about the economics. If you’re running persistent operations against multiple targets, automating the operational overhead with AI is exactly what you’d do. The 72-minute exfiltration timeline is the number that should break your IR program’s assumptions. Most enterprise programs are built around detection metrics measured in hours or days. You need automated detection with automated response triggers, not a playbook that assumes a human analyst will catch the initial alert.
9. Amazon pushes back on data linking AI coding to infrastructure outages
On March 10, The Register reported leaked briefing notes from an Amazon internal operations meeting flagging a “trend of incidents” characterized by “high blast radius” and “Gen-AI assisted changes.” The implication was that AI-assisted coding has made infrastructure changes more fragile. Amazon responded, saying they “have not seen compelling evidence that incidents are more common with AI tools.” The Veracode 2026 State of Software Security report, published February 24, found 82% of organizations carry security debt, a 36% year-over-year spike in high-risk vulnerabilities, and that more vulnerabilities are being created than fixed, with AI development velocity outstripping remediation capacity as a contributing factor.
Why it matters
Amazon’s internal concern, even disputed, comes from one of the largest cloud operators in the world. Internal friction at that scale is a signal worth tracking.
The Veracode data shows a systemic pattern. AI tools accelerate feature shipping and the introduction of vulnerabilities simultaneously, while remediation capacity doesn’t scale at the same rate.
82% of organizations carry security debt, with 60% classified as critical, which should be a material risk disclosure issue for most boards (materiality is another conversation for another time).
What to do about it
Require AI coding tools to integrate with static analysis before code reaches production. Velocity gains without security gates just accelerate debt accumulation.
Measure remediation rate alongside development velocity. If the gap is widening, you have a governance problem, not just a tooling problem.
Brief your board on the Veracode numbers. This is a material risk disclosure issue.
Rock’s Musings
Amazon’s denial matters. One leaked briefing note does not make a causal case. What it tells you is that someone inside one of the world’s largest cloud operators thought the correlation was worth flagging in an internal ops review. That’s a signal, not proof. The Veracode data is where I’m more confident: if your AI coding tools help developers write code 40% faster and that code contains the same flaw density as human-written code, you’ve just increased your vulnerability production rate by 40%. The only way this works in your favor is if you accelerate the remediation side at the same rate. Almost nobody is doing that.
10. Microsoft Patch Tuesday drops 77 CVEs
Microsoft pushed its March Patch Tuesday on March 11, fixing at least 77 vulnerabilities across Windows and other software (Kaseya, Check Point Research). This update cycle lands in an environment where, per the Google Cloud Threat Horizons data released the same week, exploitation windows for critical vulnerabilities have collapsed to 48 hours from public disclosure. AI-assisted exploit development is further compressing the time between CVE publication and the availability of weaponized exploits.
Why it matters
77 CVEs in one month means your patch management team works against a sprint clock every Patch Tuesday. Prioritization methodology matters more than ever.
Critical Microsoft CVEs are being probed within 48 hours of this disclosure per current attacker timelines. Your patch SLA has to account for that.
AI-assisted exploit development means the gap between disclosure and exploitation continues to narrow.
What to do about it
Build risk-tiered patching protocols: critical internet-facing systems within 24-48 hours, critical internal systems within 72 hours, high severity within a week.
Prioritize remote code execution vulnerabilities from the March 11 batch first. Review the Microsoft advisory for specific critical CVEs.
Apply compensating controls like network segmentation and least-privilege configurations for systems where immediate patching isn’t operationally feasible.
Rock’s Musings
Patch Tuesday used to feel routine. It isn’t anymore because the time between a CVE being added to the NVD and an attacker scanning for it has gone from weeks to hours. If your patch SLA is still “30 days for critical,” you’re operating with a policy written for a threat environment that no longer exists. That’s not a patch management problem. That’s a governance problem. Fix the policy first.
The One Thing You Won’t Hear About But You Need To
CISA adds an actively exploited n8n RCE to its known exploited list, and 24,700 instances are still unpatched
On March 12, CISA added CVE-2025-68613 to its Known Exploited Vulnerabilities catalog, a critical expression-injection vulnerability in the n8n workflow automation platform with a CVSS score of 9.9 (The Hacker News, The Register). The flaw was patched three months ago in the December 2025 versions. Federal agencies have until March 25 to patch. The problem: Shadowserver data shows 24,700 instances remain unpatched online, with 12,300 in North America and 7,800 in Europe. This matters beyond the CVE itself because n8n is one of the most widely used platforms for building AI automation workflows and AI agent pipelines. Organizations deploying AI agents frequently use n8n as the orchestration layer connecting those agents to enterprise data sources.
Why it matters
An unpatched RCE in the orchestration layer of an AI workflow means that an attacker who owns the n8n instance can access every connected system the AI agents touch, including credentials, APIs, and data stores.
24,700 exposed instances three months after a publicly known critical patch represents a systemic patching failure in a category of software organizations that have not been treated as critical infrastructure.
CISA’s KEV addition triggers mandatory remediation timelines for federal agencies, but most n8n deployments are in private enterprise environments with no equivalent enforcement mechanism.
What to do about it
Search your environment for n8n now. It is frequently deployed by individual teams or developers outside formal IT procurement, so your asset inventory may not show it.
If you find unpatched instances, treat them as compromised until proven otherwise. Rotate every credential and API key the n8n instance had access to.
Apply the same logic to every workflow automation tool in your environment: Zapier, Make, and similar platforms are potential RCE targets and connect to the same sensitive data sources.
Rock’s Musings
This story isn’t getting the attention it deserves because nobody considers workflow automation as critical security infrastructure. It’s where developers wire things together quickly, connect AI agents to Slack, Salesforce, and internal APIs, and then move on to the next problem. The security team doesn’t own it. The AI team doesn’t think they need to patch it. The result is a critical RCE sitting at the center of your AI agent architecture, exposed to the internet, with a patch that’s been available for three months. CISA flagging active exploitation on March 12 means this is not theoretical. Someone is using this right now. Go find your n8n instances.
If you found this analysis useful, subscribe at rockcybermusings.com for weekly intelligence on AI security developments.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
References
Axios. (2026, March 6). OpenAI rolls out Codex Security to automate code security reviews. https://www.axios.com/2026/03/06/openai-codex-security-ai-cyber
Baker Botts. (2026, March). March 2026: Federal deadlines that will reshape the AI regulatory landscape. MONDAQ. https://www.mondaq.com/unitedstates/new-technology/1755166/march-2026-federal-deadlines-that-will-reshape-the-ai-regulatory-landscape
Bloomberg. (2026, March 6). OpenAI unveils Codex Security tool to detect database vulnerabilities. https://www.bloomberg.com/news/articles/2026-03-06/openai-releases-ai-agent-security-tool-for-research-preview
Check Point Research. (2026, March 9). 9th March: Threat Intelligence Report. https://research.checkpoint.com/2026/9th-march-threat-intelligence-report/
CISA. (2026, March 12). CISA adds one known exploited vulnerability to catalog. https://www.cisa.gov/known-exploited-vulnerabilities-catalog
CNBC. (2026, March 10). Amazon convenes ‘deep dive’ internal meeting to address outages. https://www.cnbc.com/2026/03/10/amazon-plans-deep-dive-internal-meeting-address-ai-related-outages.html
Defense One. (2026, March 9). Anthropic sues over a dozen federal agencies and government leaders. https://www.defenseone.com/business/2026/03/anthropic-sues-over-dozen-federal-agencies-and-government-leaders/411997/
Digital Applied. (2026, March). FTC AI policy deadline March 11: Compliance guide. https://www.digitalapplied.com/blog/ftc-ai-policy-deadline-march-11-compliance-readiness
Forrester. (2026, March). White House announces the 2026 cyber strategy for America. https://www.forrester.com/blogs/white-house-announces-the-2026-cyber-strategy-for-america/
Fortune. (2026, March 9). Anthropic sues Pentagon after being labeled a threat to national security. https://fortune.com/2026/03/09/anthropic-sues-pentagon-ai-supply-chain-risk-trump-administration/
Google Cloud. (2026, March 9). Cloud threat horizons report H1 2026. https://cloud.google.com/security/report/resources/cloud-threat-horizons-report-h1-2026
Help Net Security. (2026, March 11). Software vulnerabilities push credential abuse aside in cloud intrusions. https://www.helpnetsecurity.com/2026/03/11/google-cloud-environments-cyber-threats-report/
Kaseya. (2026, March 11). The week in breach news: March 11, 2026.
https://www.kaseya.com/?post_type=post&p=26754
Microsoft Security Blog. (2026, March 6). AI as tradecraft: How threat actors operationalize AI. https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/
National Institute of Standards and Technology. (2026, January). CAISI issues request for information about securing AI agent systems. https://www.nist.gov/news-events/news/2026/01/caisi-issues-request-information-about-securing-ai-agent-systems
National Institute of Standards and Technology. (2026, February). Announcing the AI agent standards initiative for interoperable and secure innovation. https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure
OpenAI. (2026, March 6). Codex Security: Now in research preview. https://openai.com/index/codex-security-now-in-research-preview/
OpenAI. (2026, March 12). Understanding prompt injections: A frontier security challenge. https://openai.com/index/prompt-injections/
OpenAI. (2026). Continuously hardening ChatGPT Atlas against prompt injection attacks. https://openai.com/index/hardening-atlas-against-prompt-injection/
OpenID Foundation. (2026). OIDF responds to NIST on AI agent security. https://openid.net/oidf-responds-to-nist-on-ai-agent-security/
Palo Alto Networks. (2026, February). 2026 Unit 42 global incident response report: Attacks now 4x faster. https://www.paloaltonetworks.com/blog/2026/02/unit-42-global-ir-report/
PrismNews. (2026, March). OpenAI releases engineering playbook to shield AI agents from prompt injection. https://www.prismnews.com/news/openai-releases-engineering-playbook-to-shield-ai-agents
Security Boulevard. (2026, March). 83% of cloud breaches start with identity, AI agents are about to make it worse. https://securityboulevard.com/2026/03/83-of-cloud-breaches-start-with-identity-ai-agents-are-about-to-make-it-worse/
SecurityWeek. (2026, March 6). OpenAI rolls out Codex Security vulnerability scanner. https://www.securityweek.com/openai-rolls-out-codex-security-vulnerability-scanner/
TechRadar. (2026, March 6). OpenAI releases Codex Security to spot the next big cyber risks to your company. https://www.techradar.com/pro/security/openai-releases-codex-security-to-spot-the-next-big-cyber-risks-to-your-company-promises-to-identify-complex-vulnerabilities-that-other-agentic-tools-miss
The Hacker News. (2026, March 12). CISA flags actively exploited n8n RCE bug as 24,700 instances remain exposed. https://thehackernews.com/2026/03/cisa-flags-actively-exploited-n8n-rce.html
The Register. (2026, March 6). Anthropic sues US over national security blacklist. https://www.theregister.com/2026/03/06/anthropic_left_with_no_other/
The Register. (2026, March 8). Manage attack infrastructure? AI agents can now help. https://www.theregister.com/2026/03/08/deploy_and_manage_attack_infrastructure/
The Register. (2026, March 10). Amazon insists AI coding isn’t source of outages. https://www.theregister.com/2026/03/10/amazon_ai_coding_outages/
The Register. (2026, March 12). CISA says n8n critical bug exploited in real-world attacks. https://www.theregister.com/2026/03/12/cisa_n8n_rce/
U.S. Federal Register. (2026, January 8). Request for information regarding security considerations for artificial intelligence agents. https://www.federalregister.gov/documents/2026/01/08/2026-00206/request-for-information-regarding-security-considerations-for-artificial-intelligence-agents
Veracode. (2026, February 24). 2026 state of software security report. BusinessWire. https://www.businesswire.com/news/home/20260224526703/en/Veracode-2026-State-of-Software-Security-Report-Reveals-Four-Out-of-Five-Organizations-Are-Drowning-in-Security-Debt
White House. (2026, March). White House unveils President Trump’s cyber strategy for America. https://www.whitehouse.gov/articles/2026/03/white-house-unveils-president-trumps-cyber-strategy-for-america/



