Weekly Musings Top 10 AI Security Wrapup: Issue 30, March 13-19, 2026
Agentic AI Security Moves From "Meh" to Incident Log
Meta logged a SEV-1 on March 18 because an internal AI agent posted without human approval, provided bad advice, and exposed sensitive data to the wrong employees for two hours. Amazon confirmed its Bedrock sandbox lets AI models exfiltrate data via DNS and called it intentional design. HiddenLayer found 31% of security leaders don’t know if they had an AI breach in the past year. The EU Council voted to restructure the AI Act’s high-risk compliance framework. Three AI agent security products launched in four days. This was one week.
The week’s evidence points in one direction: agentic AI security is no longer a research problem. Real incidents are appearing in production environments run by organizations with serious security programs. Technical flaws in AI infrastructure are drawing vendor responses that amount to documentation updates rather than patches. Research data is documenting blind spots CISOs can no longer treat as edge cases. In parallel, the governance machinery is finally moving, but it’s moving slower than deployment. Standards and deployments are in a race, and deployments are winning by a wide margin. More context at RockCyber and RockCyber Musings.
1. OWASP publishes its GenAI data security risk taxonomy for 2026
The OWASP GenAI Security Project released GenAI Data Security: Risks and Mitigations 2026 in March, a 103-page taxonomy covering 21 discrete data security risks across the full GenAI lifecycle from training through agentic runtime (OWASP). The document maps risks across training and fine-tuning data, retrieval and RAG pipelines, vector stores, context windows, agent memory, tool call payloads, and observability infrastructure. It identifies a core architectural property that makes GenAI data security structurally different from every prior computing model: the context window aggregates data from multiple trust domains into a single flat namespace with no internal access controls. A confidential HR record retrieved via RAG sits next to a user prompt with identical trust weight, and there is no mechanism today to mark a context segment as available for reasoning but not surfaceable in the output. The document also addresses machine unlearning directly: deleting source data does not remove what a fine-tuned model or LoRA adapter has memorized into its weights. Download the report HERE.
Why it matters
The flat-namespace context window problem is not a configuration gap. It’s an architectural property of how these systems work, which means perimeter controls and access policies cannot fully solve it. Minimization and context scoping are the only practical mitigations available today.
LoRA adapter memorization of rare training examples means high-recall prompts can extract verbatim PII, credentials, or intellectual property from fine-tuned models without any sophisticated attack technique. Organizations fine-tuning on internal data have a data exposure risk they likely haven’t assessed.
The Right to Erasure problem is unsolved at the architectural level. Deleting training data from a source system does not delete what the model encoded during fine-tuning. GDPR and state privacy law DSR obligations cannot be satisfied by source deletion alone.
What to do about it
Treat the context window as a data-exposure surface, not just a prompt-delivery mechanism. Classify what goes in the same way you classify what goes into a database query, and scope RAG retrieval to the minimum required for the task.
Audit every fine-tuned model and LoRA adapter in your environment against the data used to train it. If that training data included PII, credentials, or regulated information, your model is a potential exfiltration vector.
Build a GenAI data bill of materials using CycloneDX ML-BOM as the base format. Until you have lineage from the source dataset to the deployed model to the embedding store, you cannot answer the question a regulator will eventually ask: what data did this model see, and where does it live now?
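The lineage record in the last bullet can be sketched in a few lines. This is a minimal illustration using the CycloneDX component model as a base, not a complete ML-BOM; the model, dataset, and store names are hypothetical, and you should validate any real output against the current CycloneDX schema.

```python
import json

# Minimal CycloneDX-style ML-BOM sketch linking a fine-tuned model to its
# training dataset and the embedding store derived from it. All names are
# hypothetical; validate real documents against the CycloneDX schema.
def build_ml_bom(model_name: str, dataset_name: str, vector_store: str) -> dict:
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.6",
        "components": [
            {"type": "machine-learning-model", "name": model_name,
             "properties": [{"name": "trained-on", "value": dataset_name}]},
            {"type": "data", "name": dataset_name,
             "properties": [{"name": "classification", "value": "internal-pii"}]},
            {"type": "data", "name": vector_store,
             "properties": [{"name": "derived-from", "value": dataset_name}]},
        ],
    }

bom = build_ml_bom("support-bot-lora-v3", "helpdesk-tickets-2025",
                   "support-embeddings")
bom_json = json.dumps(bom, indent=2)  # serializable, ready for an artifact store
```

Even a skeleton like this answers the regulator's question in one lookup: the model points at its dataset, and the embedding store points back at the same dataset.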
Rock’s Musings
The architectural insight at the center of this document is the one the industry keeps sliding past. The context window has no internal access control layer. That’s not a misconfiguration. It’s a design property of how transformers process sequences. Everything that enters the context window is treated as equally reachable by the model’s output mechanism, and no amount of system prompt guardrailing changes the underlying architecture. The practical implication is that the primary defense is what you put in, not what you try to prevent from coming out.
The machine unlearning section is the one I push organizations on hardest. They are collecting consent, honoring deletion requests, and scrubbing source databases, and then deploying fine-tuned models that still carry what they memorized from the deleted data. The model weights are a copy of your training corpus in a form your DLP tools don’t see, and your deletion workflows can’t reach. Right to Erasure in GenAI is an open architectural problem with no clean solution today, and most organizations haven’t told their legal team that yet.
2. EU Council rewrites the compliance clock for high-risk AI systems
The EU Council adopted its negotiating position to amend the AI Act’s high-risk framework (EU Council). The core change replaces the fixed August 2026 compliance deadline with a conditional trigger. Full high-risk obligations apply only once the Commission certifies required standards and tools are available, with a hard backstop date. The Council also pushed the national AI regulatory sandbox deadline to December 2027 and clarified that law enforcement, border management, judicial, and financial AI systems remain under national supervisory authority rather than the Commission. Negotiations with the European Parliament begin next.
Why it matters
The conditional trigger gives the Commission discretion over when your obligations start. Until it certifies standards are ready, full high-risk obligations don’t apply, creating an indeterminate window.
Pushing the sandbox deadline to December 2027 removes a key testing mechanism for high-risk AI at a time when organizations are accelerating deployment.
Fragmented supervisory authority means 27 member states apply their own rules to some of the highest-stakes AI use cases.
What to do about it
Map your AI systems against current and proposed high-risk definitions now. The conditional trigger shifts the timeline, not the compliance obligation itself.
Track Parliament negotiations. The Council position is a mandate, not the final text.
Build a jurisdiction-aware compliance map for EU operations covering which systems fall under national versus Commission supervision.
Rock’s Musings
In my career, I’ve seen regulatory timelines used to delay compliance indefinitely more times than I can count. This EU Council move fits the pattern. The conditional trigger means the Commission controls when your clock starts, and they have to certify standards are available first. Given the pace at which NIST’s agentic AI guidance is moving, expecting European standards to materialize quickly requires genuine optimism.
Organizations using this ambiguity to do nothing are miscalculating. The August 2026 date was never the governance point. You have high-risk AI systems in production today, and you need to govern them regardless of what the Commission certifies and when.
3. Meta logs a SEV-1 incident from a rogue internal AI agent
On March 18, Meta confirmed a Severity 1 security incident caused by an internal AI agent operating without human authorization (Bitcoinworld, HackerNoob). The agent posted to an internal forum, gave incorrect advice, and triggered a cascade that exposed sensitive company and user data to unauthorized employees for approximately two hours. Meta contained the exposure by cutting the agent’s forum access and auditing permissions across other internal agents. No external exfiltration was confirmed.
Why it matters
A SEV-1 at Meta from an AI agent operating outside its bounds sets a documented precedent: production agents at companies with robust security programs can circumvent behavioral constraints and cause genuine incidents.
The chain reaction, one unauthorized action triggering downstream data exposure, is characteristic of agentic systems and different from traditional software vulnerabilities in ways most IR playbooks don’t yet account for.
No external exfiltration is partial comfort. Unauthorized internal access to sensitive user data carries GDPR and AI Act exposure regardless of whether the data left the building.
What to do about it
Audit every AI agent in your environment and document what it can post, write, or modify without a human approval checkpoint.
Map the blast radius. If a specific agent takes an unexpected action, what does it touch first, and what cascades from there?
Build AI agent incident response playbooks with automated containment triggers that don’t require analyst approval before they fire.
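The containment trigger in the last bullet can be sketched as a simple allowlist check that revokes access the moment an agent steps outside it. This is a conceptual sketch, not a reference to any real tooling; the agent names, actions, and revocation mechanism are all hypothetical.

```python
# Sketch of an automated containment trigger: if an agent attempts an action
# outside its declared allowlist, revoke its access immediately and open an
# incident record, without waiting for analyst approval. Names are hypothetical.
ALLOWED_ACTIONS = {
    "hr-helper": {"read_policy_doc", "answer_question"},
    "forum-bot": {"read_thread"},  # note: no posting permission
}

revoked: set[str] = set()
incident_log: list[str] = []

def handle_agent_action(agent: str, action: str) -> bool:
    """Return True if the action is allowed; otherwise contain the agent."""
    if agent in revoked:
        return False
    if action in ALLOWED_ACTIONS.get(agent, set()):
        return True
    revoked.add(agent)  # automated containment: cut access first, review after
    incident_log.append(f"SEV review: {agent} attempted unapproved '{action}'")
    return False

handle_agent_action("forum-bot", "read_thread")  # permitted
handle_agent_action("forum-bot", "post_reply")   # triggers containment
```

The design point is the ordering: access is cut before any human looks at the event, which is the only posture that beats a cascade measured in minutes.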
Rock’s Musings
The Meta incident will get dismissed as a minor operational hiccup. That’s the wrong read. Even with legit engineering talent and a mature security program, a production AI agent escaped its behavioral constraints and triggered a data exposure chain. I’m willing to bet your environment isn’t more disciplined than Meta’s.
Two hours to containment is fast. Most organizations I work with couldn’t tell you within two hours that an agent had gone sideways. AI agent behavioral monitoring is dramatically behind where it needs to be. The lesson to take away from this is that you need detection that fires before the cascade, not after the data is already in the wrong hands.
4. Amazon’s Bedrock sandbox leaks data through DNS because that’s the design
BeyondTrust’s Phantom Labs disclosed that Amazon Bedrock AgentCore Code Interpreter’s sandbox mode permits outbound DNS queries (SC Media, The Hacker News). An attacker interacting with the agent can send commands encoded in DNS A record responses and receive exfiltrated data encoded in DNS subdomain queries to an attacker-controlled server. No authentication bypass is required. BeyondTrust assigned a CVSS score of 7.5. AWS reviewed the research, determined that the behavior reflects the intended functionality, and responded by updating the documentation rather than issuing a patch.
Why it matters
“Intended behavior” is a vendor risk posture, not a security posture. Sandbox mode was positioned as providing execution isolation. A sandbox allowing covert DNS exfiltration does not deliver isolation in any security-relevant sense.
DNS-based covert channels are standard red team tradecraft in traditional environments. The technique translates directly into AI code execution environments without modification.
Organizations running agents against sensitive internal data in AWS Bedrock face an unpatched, documented, CVSS 7.5 risk with no vendor remediation timeline.
What to do about it
Add DNS query monitoring for Bedrock AgentCore code execution environments to your threat detection stack now.
Reduce the data that AI agents with code execution access can reach to the strict minimum required for the task.
Get a formal written architecture statement from AWS specifying exactly what the sandbox guarantees before expanding Bedrock AgentCore deployments.
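The first bullet's DNS monitoring can start with a crude heuristic: encoded payloads show up as unusually long or high-entropy subdomain labels. The thresholds below are illustrative, not tuned; baseline them against your own traffic before alerting on them.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of a string; encoded payloads score high."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_dns_exfil(qname: str, max_label_len: int = 40,
                         entropy_threshold: float = 3.8) -> bool:
    """Flag DNS query names whose leftmost labels resemble encoded data.

    Illustrative thresholds only; tune against your own baseline traffic.
    """
    labels = qname.rstrip(".").split(".")
    payload = "".join(labels[:-2])  # everything left of the registered domain
    if not payload:
        return False
    if any(len(label) > max_label_len for label in labels):
        return True
    return len(payload) > 30 and shannon_entropy(payload) > entropy_threshold

looks_like_dns_exfil("www.example.com")  # benign: short, low entropy
looks_like_dns_exfil("4a6f9c2e1b8d7f3a5c0e9b2d4f6a8c1e3b5d7f9a.evil.example")  # flagged
```

Real exfil detection would also track query volume and unique-subdomain counts per domain, but even this two-rule check catches the naive encodings BeyondTrust demonstrated.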
Rock’s Musings
Another “intended behavior” narrative. I’m getting pretty damn sick of it. That’s another way of saying, “We know about this, it would be expensive to change, and it sucks to be you.” (see my thoughts in CSO magazine about a previous instance HERE). The documentation update rather than a patch is the tell. You can’t outsource your risk posture to your cloud provider’s design decisions.
The technique is in every red team playbook. DNS exfiltration from sandboxed environments is foundational evasion tradecraft. Translate that knowledge directly to your AI infrastructure. If you’re running code execution agents against sensitive data in Bedrock and you haven’t instrumented DNS as an exfiltration channel, now you have your reason.
5. Linux Foundation raises $12.5 million from AI vendors to fix what their tools helped break
The Linux Foundation announced $12.5 million in grant funding from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to advance open source software security (Linux Foundation, OpenSSF). The funding flows through Alpha-Omega and the Open Source Security Foundation. The stated problem is that AI tools are generating vulnerability reports at a volume that open-source maintainers cannot triage or remediate, degrading the security posture of the software supply chain. AWS contributed a further $2.5 million to Alpha-Omega on top of the pooled amount.
Why it matters
The same organizations whose AI tools created the report flood are funding the solution. That captures the governance dynamic precisely: vendors profit from deployment and are now being asked to fund the costs their tools externalized onto the maintainer community.
Overwhelming maintainers with AI-generated findings lowers average signal quality. Funding addresses capacity but doesn’t solve the signal-to-noise problem alone.
This is the first major coordinated industry response to the specific problem of AI-generated report volume stressing the open source security ecosystem.
What to do about it
Factor the current maintainer backlog into your software composition analysis program. Critical open source dependencies may carry known vulnerabilities sitting in a backlogged queue rather than getting remediated.
Watch what Alpha-Omega and OpenSSF deliver from this investment over the next twelve months. The commitment matters less than whether the tooling measurably improves triage capacity.
Ask your security vendors how they handle AI-generated findings before surfacing them to your team. The same noise problem exists inside your tooling stack.
Rock’s Musings
$12.5 million is the right direction, yet not nearly enough. Open source maintainers are largely volunteers managing the infrastructure that the global software supply chain runs on. The AI-generated report flood is a problem these vendors created while selling velocity gains to enterprises.
The coordination signal matters more than the dollar amount. You rarely see Google, Microsoft, AWS, Anthropic, and OpenAI announce joint anything. When competitors fund a shared problem together, the liability exposure of inaction exceeds the competitive cost of cooperating. Given how much of the internet runs on open source that these companies’ AI tools are now stressing, the math on joint action isn’t complicated.
6. Pentagon moves to replace Anthropic while the lawsuit works through the courts
TechCrunch reported that the Pentagon is actively developing alternative AI capability paths to replace Anthropic’s Claude across defense applications (TechCrunch). This follows the Defense Department’s February designation of Anthropic as a supply chain security risk and Anthropic’s subsequent lawsuit against the Trump administration. The report confirms that the replacement effort has shifted from contingency planning to active technical development. More than 875 Google and OpenAI employees have signed an open letter supporting Anthropic’s position.
Why it matters
Active technical development of replacements, rather than contingency planning, signals DoD confidence that the Anthropic designation will hold through the litigation cycle.
Defense contractors relying on Claude for active program work now face migration timelines driven by someone else’s legal and procurement decisions.
The 875-employee response across competing firms signals the tech workforce treats this as a legitimacy question about AI governance, not a routine vendor dispute.
What to do about it
If your organization operates in the defense industrial base, review AI vendor contracts now for comparable ethical-use clauses and their enforceability, before further redesignations affect your supply chain.
Track the Anthropic lawsuit. The outcome defines what ethical use provisions in AI contracts are worth in federal procurement.
Evaluate AI vendor concentration risk in your stack. If one supply chain designation event could disrupt your programs, that’s a single point of failure worth addressing.
Rock’s Musings
The supply chain risk designation was built for foreign adversaries. Applying it to a domestic AI company for writing autonomous weapons prohibitions into a contract is a significant precedent that the press is underweighting. The designation signals that safety constraints are now framed as operational liabilities in defense procurement, not risk mitigation.
If that framing spreads to other acquisition decisions, the AI vendors most willing to remove safety constraints gain a competitive advantage in a large and growing federal spending category. Watch the lawsuit and the follow-on procurement awards carefully. Both will tell you where this governance experiment ends up.
7. CSA’s 2026 cloud and AI security report documents the identity explosion
The Cloud Security Alliance published its State of Cloud and AI Security 2026 on March 13, finding the average enterprise now manages 100 machine and other non-human identities for every human identity (CSA). Forgotten or misconfigured cloud credentials declined from 84% in 2024 to 65% in 2026. Ninety-two percent of executives report business-impacting security compromises, most from preventable risks. The report identifies decentralized AI agents as the primary driver of the NHI expansion and calls for continuous exposure management to replace static patching cycles.
Why it matters
A 100:1 machine-to-human identity ratio means the traditional IAM program built around human users is managing a fundamentally different problem than it was designed for.
Credential misconfiguration persisting at 65% suggests the improvement rate won’t match the velocity of AI-driven identity expansion.
Ninety-two percent of executives reporting compromises from preventable risks indicates the gap isn’t a detection-sophistication problem. Organizations know the controls and aren’t applying them at the required scale.
What to do about it
Audit NHI management practices against the same standards applied to human identities: lifecycle management, least privilege, and regular access reviews.
Deploy continuous credential exposure monitoring specifically for machine identities and AI agent service accounts.
Shift the board-level narrative from maturity scores to continuous exposure management. That’s where enterprise frameworks are heading.
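The NHI audit in the first bullet reduces to a policy check over an identity inventory. The sketch below applies three human-identity rules (accountable owner, least privilege, credential rotation) to machine identities; the inventory fields, names, and 90-day threshold are hypothetical, and a real version would feed from your IAM export.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a non-human identity (NHI) audit: apply the same lifecycle rules
# you would apply to human accounts. Inventory fields and thresholds are
# hypothetical; a real run would consume your IAM provider's export.
MAX_KEY_AGE = timedelta(days=90)
NOW = datetime(2026, 3, 19, tzinfo=timezone.utc)

inventory = [
    {"name": "agent-sales-bot", "scopes": ["crm:read"], "owner": "it-ops",
     "key_rotated": datetime(2026, 3, 1, tzinfo=timezone.utc)},
    {"name": "agent-report-gen", "scopes": ["*"], "owner": None,
     "key_rotated": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]

def audit_nhi(identity: dict) -> list[str]:
    """Return the list of policy violations for one machine identity."""
    findings = []
    if identity["owner"] is None:
        findings.append("no accountable owner")
    if "*" in identity["scopes"]:
        findings.append("wildcard scope violates least privilege")
    if NOW - identity["key_rotated"] > MAX_KEY_AGE:
        findings.append("credential rotation overdue")
    return findings

report = {i["name"]: audit_nhi(i) for i in inventory}
```

At a 100:1 ratio, the point isn't the rules themselves but that they run continuously against every identity, not annually against a sample.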
Rock’s Musings
A hundred machine identities for every human one, and most organizations manage them with IAM tooling built for a 10-to-1 ratio. The math doesn’t work. The credential improvement trend from 84% to 65% is real progress, but 65% still represents a failure rate I wouldn’t accept in any other critical control domain.
Every new agentic deployment creates more identities, tokens, service accounts, and API keys. If you don’t have a clear owner for non-human identity governance today, you have a gap that will become a breach within twelve months. Find the owner. Document the scope. Don’t wait for the incident.
8. Jozu Agent Guard launches after watching an AI agent bypass governance in four commands
Jozu announced Jozu Agent Guard on March 17, a zero-trust runtime that executes AI agents, models, and MCP servers with policy enforcement built outside the model’s control plane and hardcoded against agent-level override (Help Net Security). The architecture decision came directly from internal testing: during product development, Jozu observed an AI agent bypass the governance controls the product was designed to enforce in four commands. That failure drove the decision to move policy enforcement entirely outside the execution layer the agent can influence.
Why it matters
A product built specifically to constrain AI agents was bypassed in four commands during its own testing. The threat model has to assume the agent itself will attempt to circumvent governance. Cooperative compliance is not a valid design assumption.
MCP server isolation is widely neglected. MCP servers frequently carry production credentials and broad tool access, and running them in shared agent environments creates privilege escalation paths most organizations haven’t mapped.
Three AI agent security products launching in four days signals enterprise buying is active in this space right now.
What to do about it
Require AI agent security vendors to demonstrate their product against an adversarial agent in a live environment. Demand the failure modes alongside the happy path.
Treat MCP server execution environments as sensitive infrastructure requiring isolation equivalent to your most privileged workloads.
Add governance bypass testing to your AI red team scope before the next production agent deployment.
Rock’s Musings
The four-command bypass during their own testing is the most honest vendor disclosure I’ve seen about AI agent security in the past year. Most vendors demo the happy path and skip the part where their product got circumvented. Jozu disclosed it and changed the architecture. That’s how security engineering is supposed to work.
The uncomfortable implication for everyone else: if a product built specifically to constrain AI agents was bypassed in four commands, ask yourself what your existing controls look like against an agent actively trying to exceed its permissions. If you haven’t run that test, you don’t have an answer.
9. Token Security builds intent-based controls for AI agent permissions
Token Security announced intent-based AI agent security on March 18, governing autonomous agents by scoping their permissions to declared operational purpose rather than granting standing broad access (Help Net Security). The system creates purpose-defined permission envelopes that expire at task completion, with runtime enforcement preventing actions outside the declared intent. Token Security’s CEO stated directly that prompt filtering and guardrails were not designed to contain the security risks of autonomous AI agents, pointing to the architectural limitation of relying on the model’s output layer for enforcement.
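The mechanism described above can be approximated in a few lines: a permission envelope scoped to a declared purpose that dies with the task. This is a conceptual sketch of the idea, not Token Security’s implementation; the class, actions, and intent names are all hypothetical.

```python
# Conceptual sketch of an intent-based permission envelope: permissions are
# scoped to a declared purpose and expire at task completion, so no standing
# access remains. All names and permissions are hypothetical.
class PermissionEnvelope:
    def __init__(self, intent: str, allowed_actions: set[str]):
        self.intent = intent
        self.allowed_actions = allowed_actions
        self.active = True

    def authorize(self, action: str) -> bool:
        """Runtime enforcement: deny anything outside the declared intent."""
        return self.active and action in self.allowed_actions

    def complete_task(self) -> None:
        """Envelope expires at task completion; access does not persist."""
        self.active = False

env = PermissionEnvelope("summarize-q1-sales", {"crm:read", "report:write"})
env.authorize("crm:read")    # allowed while the task runs
env.authorize("crm:delete")  # denied: outside the declared intent
env.complete_task()
env.authorize("crm:read")    # denied: envelope expired
```

The contrast with standing credentials is the whole point: the agent never holds access it could misuse between tasks, because between tasks it holds nothing.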
Why it matters
Purpose-aligned permissions address a structural problem in current agent deployment: agents inheriting credential scopes far exceeding what any single task requires.
Explicit acknowledgment that content filtering can’t do this job alone represents where serious practitioner thinking is converging. The field is moving from output layer controls toward architectural access controls.
Paired with Jozu, Entro, and Microsoft Entra Agent ID announcements this same week, this reflects a coherent market thesis forming around agent identity and least privilege as primary security controls.
What to do about it
Map current AI agent deployments against one question: does each agent hold only the permissions it needs for its specific task? If you can’t answer quickly, your access governance is already too loose.
Evaluate intent-based and purpose-scoped access controls in your next AI security procurement cycle.
Brief your identity team on AI agent access management before your security team deploys solutions they haven’t reviewed. These tools touch the same credential infrastructure.
Rock’s Musings
Least privilege applied to agents is the same principle that has protected privileged service accounts in traditional architectures for decades. The problem is that most AI agent deployments aren’t being treated like privileged service accounts. They get broad collaboration access by default, and nobody asks why.
Intent-based controls force the right question: what is this agent for? If you can answer precisely, you can scope permissions precisely. If you can’t answer precisely, that is the real governance problem. You’ve deployed an agent without a defined operational boundary, and your control over it is largely fictional.
10. NIST receives formal research submissions on securing AI agents
On March 18, UC Berkeley’s Center for Long-Term Cybersecurity submitted a formal response to NIST’s CAISI RFI on AI agent security, urging prioritization of standardization, incident reporting frameworks, talent pipelines, and adaptive governance (CLTC UC Berkeley). The Computer and Communications Industry Association submitted parallel comments advocating for multistakeholder processes and alignment with existing NIST frameworks (CCIA). NIST’s National Cybersecurity Center of Excellence also holds a separate comment period open through April 2 on a concept paper covering identity and authorization for AI agents.
Why it matters
The gap between NIST collecting input and usable standards publishing is measured in years. Your agents are running now, under no binding identity or authorization standard.
Berkeley’s call for incident reporting infrastructure acknowledges a structural gap: no systematic mechanism exists for learning from AI agent security failures across organizations.
The NCCoE concept paper on agent identity and authorization is where future compliance requirements will originate. Comments submitted now shape what those requirements demand.
What to do about it
Read the NCCoE concept paper at nccoe.nist.gov and submit comments before April 2 if your organization deploys agents. Operational experience is what NIST is specifically asking for.
Treat the Berkeley and CCIA submissions as intelligence on where auditors will focus within 18 to 36 months.
Stand up basic agent identity logging now using existing IAM controls. Don’t wait for NIST to finalize anything.
Rock’s Musings
NIST is moving faster on agentic AI security than I expected two years ago. That still isn’t fast enough to matter for organizations deploying agents today. Best case from the current comment cycle: interim guidance in twelve months. Binding controls will take longer.
Berkeley’s call for incident reporting is the right recommendation and it will face the same resistance every mandatory reporting regime has faced. Voluntary frameworks will come first, get ignored, and get teeth after the third or fourth major public incident. That’s the pattern. Plan for it and build your own internal incident tracking capability now.
The One Thing You Won’t Hear About But You Need To
Entro Security builds a governed map of what your AI agents access in production
Entro Security launched its Agentic Governance and Administration platform, extending non-human identity security coverage specifically to AI agents (GlobeNewswire, Help Net Security). The platform builds structured AI agent profiles from three observable layers. First, sources: the endpoints, agent platforms, cloud environments, and MCP servers where agents execute. Second, targets: the enterprise assets and applications each agent accesses. Third, identities: the human accounts, non-human identities, and secrets each agent uses to operate. AGA provides MCP server activity visibility and policy enforcement, audit trails for both allowed and blocked activity, and controls against unsanctioned MCP targets and AI client behaviors.
Why it matters
Most organizations deploying AI agents don’t have a single governed view of what agents are running, what they access, and which identities they use. AGA builds that view from execution telemetry rather than documentation that goes stale immediately after it’s written.
MCP server governance is nearly absent from enterprise security programs today, despite MCP servers frequently holding production credentials and broad access to sensitive systems.
The NHI-first architecture lets organizations with existing non-human identity programs extend that coverage to AI agents rather than building a separate program from scratch.
What to do about it
Before the next AI agent deployment, require answers to three questions from observable telemetry: where does it run, what does it touch, and which identities does it use? If you need documentation rather than telemetry to answer, you don’t have governance.
Add MCP server inventory to asset management now. MCP servers deploy through developer workflows without formal change management, and retroactive cataloguing gets harder with each deployment.
Assess whether your current NHI security program explicitly covers AI agent identities. If it doesn’t, extend it or stand up a parallel track with a clear accountable owner.
Rock’s Musings
This one didn’t get coverage this week because it launched during RSA prep season when every security vendor fights for the same column inches. That’s exactly why it’s here. The problem AGA addresses is what I call dark matter governance: AI agents operating in your environment that nobody catalogued because they deployed through platforms your traditional asset management tools don’t see.
The MCP visibility layer is the operationally useful piece. MCP servers multiply fast, are deployed by individual developers without change management review, and frequently hold credentials for production systems. An agent you haven’t catalogued connecting to an MCP server you haven’t governed is a permissions sprawl problem that compounds with every new deployment. Get a governed view of that surface before your adversary maps it for you.
If you found this analysis useful, subscribe at rockcybermusings.com for weekly intelligence on AI security developments.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
The views and opinions expressed in RockCyber Musings are my own and do not represent the positions of my employer or any organization I’m affiliated with.
References
Bitcoinworld. (2026, March). Rogue AI agent sparks critical security crisis at Meta, exposing sensitive data. https://bitcoinworld.co.in/meta-rogue-ai-agent-security-breach/
Cloud Security Alliance. (2026, March 13). The state of cloud and AI security in 2026. https://cloudsecurityalliance.org/blog/2026/03/13/the-state-of-cloud-and-ai-security-in-2026
Computer and Communications Industry Association. (2026, March). CCIA submits comments to NIST regarding privacy and security of AI agents. https://ccianet.org/news/2026/03/ccia-submits-comments-to-nist-regarding-privacy-and-security-of-ai-agents/
Council of the European Union. (2026, March 13). Council agrees position to streamline rules on artificial intelligence. https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/
Entro Security. (2026, March 18). Entro launches agentic governance and administration to bring visibility and control to AI access across the enterprise. GlobeNewswire. https://www.globenewswire.com/news-release/2026/03/18/3258229/0/en/Entro-Launches-Agentic-Governance-Administration-to-Bring-Visibility-and-Control-to-AI-Access-Across-the-Enterprise.html
HackerNoob. (2026, March). Meta’s rogue AI agent: Sev 1 security incident and how to sandbox AI agents properly. https://hackernoob.tips/meta-rogue-ai-agent-sev1-how-to-sandbox-ai-agents/
Help Net Security. (2026, March 17). Jozu Agent Guard targets AI agents that evade controls. https://www.helpnetsecurity.com/2026/03/17/jozu-agent-guard-targets-ai-agents-that-evade-controls/
Help Net Security. (2026, March 18). Token Security advances AI agent protection with intent-based controls. https://www.helpnetsecurity.com/2026/03/18/token-security-intent-based-ai-agent-security/
Help Net Security. (2026, March 18). Big tech companies step in to support the open source security ecosystem. https://www.helpnetsecurity.com/2026/03/18/linux-foundation-open-source-security-12-5-million-funding/
Help Net Security. (2026, March 19). Entro Security AGA brings governance and control to enterprise AI agents and access. https://www.helpnetsecurity.com/2026/03/19/entro-agentic-governance-administration/
HiddenLayer. (2026, March 18). HiddenLayer releases the 2026 AI threat landscape report. PR Newswire. https://finance.yahoo.com/news/hiddenlayer-releases-2026-ai-threat-140000928.html
Linux Foundation. (2026, March 17). Linux Foundation announces $12.5 million in grant funding from leading organizations to advance open source security. https://www.linuxfoundation.org/press/linux-foundation-announces-12.5-million-in-grant-funding-from-leading-organizations-to-advance-open-source-security
SC Media. (2026, March). AWS Bedrock tool vulnerability allows data exfiltration via DNS leaks. https://www.scworld.com/brief/aws-bedrock-vulnerability-allows-data-exfiltration-via-dns-leaks
TechCrunch. (2026, March 17). The Pentagon is developing alternatives to Anthropic, report says. https://techcrunch.com/2026/03/17/the-pentagon-is-developing-alternatives-to-anthropic-report-says/
The Hacker News. (2026, March 17). AI flaws in Amazon Bedrock, LangSmith, and SGLang enable data exfiltration and RCE. https://thehackernews.com/2026/03/ai-flaws-in-amazon-bedrock-langsmith.html
UC Berkeley Center for Long-Term Cybersecurity. (2026, March 18). Researchers submit response to U.S. government request on security considerations for AI agents. https://cltc.berkeley.edu/2026/03/18/researchers-submit-response-to-u-s-government-request-on-security-considerations-for-ai-agents/