Weekly Musings Top 10 AI Security Wrapup: Issue 31 March 20-26, 2026
RSA 2026: Every Vendor Sold an Agent. A Supply Chain Attack Ran Quietly in the Background
RSA Conference 2026 closed Thursday in San Francisco. Thirty thousand attendees, six hundred exhibitors, one word on every booth banner: agentic. While the industry competed on keynotes and happy hours, LiteLLM, deployed in hundreds of enterprise AI stacks, got infected with credential-stealing code through a misconfigured GitHub Actions workflow. Malicious releases went live March 19 and March 22. Most of your security team was watching keynotes.
Underneath the conference noise, genuine signal emerged. Zenity’s CTO demonstrated live zero-click exploits against ChatGPT, Salesforce, and Microsoft Copilot on the conference floor. Palo Alto Networks Unit 42 documented new attack paths through the Model Context Protocol. HackerOne disclosed a 540% year-over-year surge in validated prompt injection vulnerabilities. The EU AI Office’s second draft Code of Practice on AI-generated content transparency is open for feedback through March 30, with prescriptive new requirements that narrow compliance discretion significantly. NIST published AI 800-4, the first federal framework for monitoring AI systems in production, with no vendor booth to announce it.
Here’s what matters and what to do about it.
1. Zenity Launches Guardian Agents and Demonstrates 0-Click AI Exploits at RSA
Zenity launched Guardian Agents at RSA 2026 on March 23, positioning it as continuous, contextual security for AI agents across SaaS, cloud, and endpoint environments. CTO Michael Bargury ran live demonstrations titled “Your AI Agents Are My Minions,” showing zero-click prompt injection chains that manipulated Cursor into leaking developer secrets via support emails, Salesforce agents into exfiltrating customer data to an attacker-controlled server, and ChatGPT into producing persistent attacker-chosen outputs across conversations (The Register, March 23, 2026, and Help Net Security, March 24, 2026).
Why it matters
Zero-click attacks eliminate the human review checkpoint most AI security frameworks assume is present. When agents act without user input, your primary detection layer disappears before the threat is visible.
Live exploitation of production enterprise systems on a conference floor is harder to dismiss than a threat model in a whitepaper.
Guardian Agents signals a market category forming in real time. The evaluation criteria you set today will shape purchasing decisions for the next several years.
What to do about it
Inventory every AI agent in your environment before your next board meeting. If you can’t enumerate them, you can’t monitor them.
Require vendors to document in writing which actions their agents take without explicit human approval. Non-answers are critical control gaps.
Run adversarial testing against your three highest-access agents this quarter, targeting credential extraction, data exfiltration, and cross-system manipulation.
Rock’s Musings
Bargury’s demonstration strategy was the most honest thing at RSA this week: show the attack, then show the defense. Live exploitation on production systems is harder to dismiss than a slide deck built around the word autonomous. The inconvenient reality is that most enterprises already have agents running with email access, CRM credentials, and code repository permissions, with no runtime monitoring on what those agents decide to do. Selecting an AI security vendor is not the same thing as having an answer to the problem he demonstrated on the conference floor.
2. LiteLLM Infected with Credential-Stealing Code via Trivy Misconfiguration
The Register reported March 24 that LiteLLM, a widely deployed open-source LLM API proxy, was compromised through a misconfigured Trivy GitHub Actions workflow. Attackers modified version tags on the trivy-action GitHub Action to inject malicious code into workflows organizations were already running, producing malicious releases on March 19 and March 22. The maintainer confirmed that anyone who installed and ran the project during that window should assume credentials available to their environment were exposed.
Why it matters
LiteLLM sits in the critical path of many enterprise AI deployments. One compromised abstraction library reaches hundreds of downstream production systems simultaneously.
The attack exploited version tags, not direct code injection. CI/CD pipelines relying on tags rather than pinned commits ran malicious code without detection. That’s a systemic configuration gap across most enterprise pipelines.
The attack ran during RSA week when security teams were distracted. The timing was likely not accidental.
What to do about it
Audit every environment that pulled a LiteLLM update between March 19 and March 24. Treat those environments as potentially compromised until you confirm otherwise.
Pin all GitHub Actions to specific commit hashes, not version tags. Tags are mutable and can be silently overwritten. Commits are not.
Establish software bill of materials practices for all AI and ML dependencies. Supply chain attacks will keep finding environments where that inventory doesn’t exist.
Rock’s Musings
LiteLLM is exactly the kind of library that lands in enterprise AI stacks without a security review, installed by an ML engineer who needed to route calls to three model providers before the sprint ended. Trivy is a security tool. Attackers used a security tool misconfiguration to compromise a release pipeline for another widely used tool. If there’s a cleaner argument for applying security rigor to your own security tooling, I haven’t heard it. Your AI dependency chain needs the same scrutiny as your application dependencies. Good intentions at install time are not a compensating control.
3. Palo Alto Networks Unit 42 Documents MCP Attack Vectors
Palo Alto Networks Unit 42 published research the week of March 20 documenting new attack paths through the Model Context Protocol, including prompt injection delivered through MCP’s sampling interface. Security researchers tracked 30 CVEs filed against MCP implementations in the preceding 60 days, including CVE-2026-25536 (cross-client data leak in the MCP TypeScript SDK) and CVE-2026-23744 (remote code execution in MCPJam Inspector). A scan of more than 500 public MCP servers found that 38% lacked authentication entirely (Unit 42, March 2026, and Adversa.ai, March 2026).
Why it matters
MCP is the connective tissue between AI agents and enterprise tools. A vulnerability in this protocol exposes the entire agent ecosystem built on top of it, not one isolated system.
Thirty CVEs in 60 days signals that security review did not happen before shipping at scale. Every API ecosystem that launches with deployment velocity ahead of security assessment follows this arc.
Thirty-eight percent of scanned servers lacking authentication is systemic failure. Authentication is the minimum viable control. Everything built on top of unauthenticated servers is exposed.
What to do about it
Inventory every MCP server in your environment and treat unauthenticated instances as critical findings requiring immediate action.
Require authentication, authorization, and comprehensive logging for any MCP server with access to production systems or sensitive data.
Demand specific CVE status and patch timelines from your AI infrastructure vendors. Vague answers signal high risk and a vendor not tracking its own exposure.
Rock’s Musings
Thirty CVEs in 60 days is not a patching problem. It’s a design problem. MCP shipped fast because the builders cared more about what AI agents could reach than how securely they could reach it. The 38% authentication gap is the number that should end budget debates about AI infrastructure security investment. Roughly two in five MCP servers operate on the assumption that only authorized parties will talk to them, which is exactly wrong in a protocol designed to connect agents to external resources. That assumption creates direct paths to your production data.
4. HackerOne Reports 540% Surge in Validated Prompt Injection Vulnerabilities
HackerOne announced Agentic Prompt Injection Testing on March 21, paired with platform data showing a 540% year-over-year increase in validated prompt injection vulnerabilities. The service executes structured, multi-turn adversarial scenarios against live AI applications, evaluating whether injection attempts produce actual data exposure or unauthorized tool execution across interconnected agent systems (HackerOne Blog, March 2026, and Cybersecurity Insiders, March 21, 2026).
Why it matters
A 540% increase in validated vulnerabilities means real researchers are finding real exploitable conditions in production systems, not theoretical edge cases.
Traditional application security testing does not cover agent-specific attack paths. If your AI agents aren’t explicitly in scope for your red team or bug bounty program, you have a documented blind spot.
Unit 42’s concurrent research on indirect prompt injection through web content eliminates the “attacker needs direct access” objection. Agents read the web. The web is the attack surface.
What to do about it
Add AI agents to your red team scope explicitly as a primary target category, not an afterthought appended to an existing engagement.
Require prompt injection testing as part of every AI agent release process, treated as a gate equivalent to penetration testing for any externally facing application.
Track prompt injection findings as a distinct vulnerability class in your risk register. You can’t demonstrate improvement to your board on metrics you’re not collecting separately.
Rock’s Musings
Five hundred forty percent ends the debate about whether prompt injection is a real threat. I’ve heard the objection that attackers need direct access to craft payloads. Unit 42’s indirect injection research, published this same week, shows agents reading manipulated instructions from ordinary websites they visit in the course of normal operation. Your agents don’t need to be directly targeted; they need to visit the wrong page. The gap between organizations deploying AI agents and organizations testing those agents adversarially is the largest unaddressed risk exposure I see in enterprise AI programs right now.
5. Microsoft Publishes Secure Agentic AI Framework and Confirms Agent 365 May 1 GA
Microsoft published “Secure Agentic AI End-to-End” on March 20, documenting its approach to extending Zero Trust architecture across the full AI agent lifecycle: data ingestion, model training, deployment, and runtime behavioral monitoring. The post confirmed Agent 365, Microsoft’s governance control plane for enterprise AI agents, will reach general availability on May 1, 2026, with agent identity, authorization scope, and behavioral monitoring treated as distinct security domains from traditional human-user ZT controls (Microsoft Security Blog, March 20, 2026).
Why it matters
A confirmed May 1 GA date gives enterprises in Microsoft environments a concrete six-week planning horizon. Governance framework adoption takes time and that clock is already running.
Extending Zero Trust to AI agents is architecturally correct. Most ZT implementations weren’t designed with agent identity or behavioral monitoring in mind, making the gap assessment non-trivial work.
Publishing detailed technical frameworks before product GA signals Microsoft wants enterprises building governance practices now, before the product ships.
What to do about it
Map your current ZT architecture against the agent-specific requirements described in the March 20 post. Focus on gaps in agent identity and behavioral monitoring specifically.
Begin internal stakeholder alignment on Agent 365 if you’re in a Microsoft 365 environment. Six weeks is not enough time to start that conversation from zero.
Document agent permissions, access patterns, and decision scopes using whatever visibility tools you have today rather than waiting for Microsoft tooling.
Rock’s Musings
“End-to-end” is doing heavy lifting as a title. What Microsoft describes is extending known security primitives to a new execution context. That’s necessary work and not a complete answer. The hard problems are behavioral: distinguishing authorized agent actions from manipulated ones, detecting policy violations in real time, and maintaining audit trails that survive an incident investigation. Agent 365 is worth watching. If the behavioral monitoring is substantive, it’ll move the market. If it’s a compliance dashboard, enterprises will check the box while actual risk sits unaddressed underneath it.
6. Cisco Releases DefenseClaw Open Source on Final Day of RSA
Cisco released DefenseClaw to GitHub on March 27, the final day of RSA 2026, as an open-source framework for scanning agent skills and sandboxing agent execution. The release accompanied Zero Trust Access for AI agents and a free AI Defense Explorer Edition targeting security practitioners. Cisco plans integration with NVIDIA OpenShell for hardware-level execution sandboxing, addressing execution isolation that software-only monitoring cannot replicate (Cisco Newsroom, March 2026, and UC Today, March 2026).
Why it matters
Open-source agent security scanning means organizations can start building security into agent development pipelines without a procurement cycle or a budget line.
Hardware-anchored execution sandboxing addresses a control gap that software-only monitoring cannot close. Execution isolation for agents is systematically underinvested across the industry relative to the risk.
The open-source and Explorer Edition strategy targets developers before enterprise procurement cycles form, competing for architectural mindshare with builders rather than just buyers.
What to do about it
Pull DefenseClaw and run it against a non-production agent environment this month. Validate real-world utility before committing to any commercial evaluation.
Evaluate the NVIDIA sandboxing integration if you’re running NVIDIA infrastructure. Test in isolation before production consideration.
Track Cisco’s AI Defense commercial roadmap. Free Explorer Editions typically precede commercial tier launches by 12 to 18 months, and starting your evaluation now means you’ll have data when the pitch arrives.
Rock’s Musings
Releasing open-source code on the last day of the conference changes the conversation from “will enterprises buy this” to “pull the repo and see for yourself.” That’s a credible move when the code is real and the threat model is honest. Run DefenseClaw against your actual agent environment before making any claims about coverage. The larger play is Cisco’s bid for the enterprise AI security architecture position using network visibility, an established security portfolio, and enterprise relationships most competitors would need a decade to build. DefenseClaw is a credible opening move. Watch the next 18 months of product decisions to judge the hand.
7. Google Deploys Gemini Agents to Process 10 Million Dark Web Posts Daily
Google announced at RSA 2026 on March 23 that Gemini AI agents are processing more than 10 million dark web posts daily to surface threats relevant to specific organizations. The capability integrates with Google Security Operations alongside new agentic automation features, currently in preview, that let security teams combine AI-driven investigation with deterministic automated response workflows (The Register, March 23, 2026, and Google Cloud Blog, March 2026).
Why it matters
Ten million posts per day changes the economics of dark web threat intelligence. Organizations that couldn’t sustain comprehensive monitoring programs gain access to Google-scale processing at a fraction of the previous cost.
Pairing AI-driven investigation with deterministic automation preserves human-defined control while extending agent reach into high-volume, low-judgment tasks. That’s the right architectural pattern for agentic SOC work.
Preview status means GA behavior, SLA, and security review standards remain unfinalized. Your production SOC is not where you run this experiment yet.
What to do about it
Assess your current dark web monitoring coverage gap against what this capability covers. If there’s a meaningful difference, prioritize a pilot evaluation once the feature reaches GA.
Review preview terms carefully before enabling agentic automation in any production SOC workflow. Preview features carry materially different risk profiles than GA releases.
Define which SOC workflows you’d delegate to agents and where human approval must remain. Build that policy before the tools arrive, not after they’re already running.
Rock’s Musings
Threat intelligence is the most defensible application of AI agents in security operations right now. Failure modes are recoverable: the agent misses a threat and your other controls have a chance at it. Compare that to agentic incident response, where the failure mode might be blocking a production system or destroying forensic evidence. Start with intelligence, not response. The preview framing signals Google is collecting operational data before committing to GA behavior guarantees, which is reasonable product discipline. It also means you wait for GA before running this where failures have material consequences.
8. Novee Launches Autonomous AI Red Teaming Platform for LLM Applications
Novee announced autonomous AI red teaming for LLM applications on March 24 at RSA Conference 2026. The platform deploys an AI pentesting agent that executes multi-turn adversarial scenarios against live systems, simulating attacker chaining techniques across prompt injection, jailbreaks, data exfiltration paths, and agent behavior manipulation, covering any LLM-powered system regardless of model provider with optional CI/CD pipeline integration (GlobeNewswire, March 24, 2026, and Help Net Security, March 24-25, 2026).
Why it matters
Traditional pentesting tools were designed for pre-LLM application security problems. Novee builds red teaming from actual LLM vulnerability research, producing findings that adapted traditional tools miss.
CI/CD pipeline integration lets security teams catch prompt injection and agent manipulation issues before production deployment rather than after an incident surfaces them.
Two distinct companies announced adversarial AI testing capabilities at RSA 2026 in the same week. Market formation around this problem is accelerating.
What to do about it
Evaluate Novee’s beta against a non-production LLM application to understand what it surfaces relative to your existing security testing coverage.
Map the gap between your current SDL and what LLM-specific adversarial testing would require. The gap is almost certainly larger than you expect it to be.
Add AI-native red teaming as a release gate requirement for any LLM application reaching production. Make it a gate, not a post-deployment recommendation that teams skip.
Rock’s Musings
Two autonomous AI red teaming announcements in one RSA week tells you the market is accepting that testing AI systems requires AI-specific tooling, not adapted traditional approaches. That’s a healthy development even if the tools themselves are early. The CI/CD integration angle is the most practically valuable feature: security issues caught before production deployment cost a fraction of what they cost after deployment. If you’re shipping LLM applications without adversarial testing in the pipeline, you’re making a risk decision that most boards don’t know they’re making.
9. EU AI Office Second Draft Code of Practice Enters Final Feedback Window
The EU AI Office published its second draft Code of Practice on AI-Generated Content Transparency on March 3, with the stakeholder feedback window closing March 30. The second draft moves from high-level principles toward prescriptive, technically detailed commitments, narrowing compliance discretion and signaling how regulators will likely assess conformance in practice. A third and final version is expected by June 2026, ahead of the August 2 applicability date for AI-generated content transparency obligations (Herbert Smith Freehills Kramer, March 2026, and BABL AI, March 2026).
Why it matters
Draft 2’s shift to prescriptive technical commitments closes the interpretation space organizations were using to plan flexible compliance programs. The gap between “we have a policy” and “we meet the technical specification” narrowed significantly this month.
The March 30 feedback deadline is this weekend. If your organization has substantive views on requirements that are technically unworkable, the window to influence the final text is closing.
August 2 is not distant. Organizations waiting for final text before beginning compliance work are accepting a six-week implementation sprint under real enforcement conditions.
What to do about it
Read Draft 2 this week. The technical specificity represents a meaningful change from Draft 1, and your compliance planning may need adjustment.
Submit feedback before March 30 if the current draft creates compliance constraints you believe are technically unworkable for your AI content operations.
Begin implementation planning against Draft 2 requirements now. The June final text will refine but won’t fundamentally restructure what’s already written.
Rock’s Musings
Every organization waiting for final text before starting EU AI Act compliance work is playing a game where the timeline gets worse each quarter they wait. Draft 2 is prescriptive enough to start serious implementation planning. The adjustments you’ll need when Draft 3 drops will be smaller than the work you’ll need to compress into six weeks if you start in June. The transparency labeling requirements are more technically demanding than most organizations appreciate from reading summaries. Download Draft 2 from the EU’s digital strategy portal and read it against your actual AI content production workflows. That gap analysis is the starting point for everything else.
10. RSA 2026 Reveals a Contested Market for AI Agent Governance Control Planes
A pattern emerged across RSA 2026 beyond individual product launches: the governance control plane for AI agents is being actively contested by multiple major vendors. Microsoft’s Agent 365 (GA May 1), Cisco’s DefenseClaw (released March 27), SentinelOne’s Prompt AI Agent Security control plane, and Nudge Security’s AI agent discovery expansion all launched during the conference week, each addressing the same fundamental problem: enterprises deploy AI agents and lose track of what those agents do, access, and decide autonomously (SecurityWeek, March 2026, and Biometric Update, March 2026).
Why it matters
Multiple major vendors converging on the same problem in the same week signals enterprises are actively requesting governance solutions, not absorbing vendor-manufactured demand.
Competition between Microsoft’s integrated control plane and point solutions from Cisco, SentinelOne, and Nudge creates a real architectural decision. Choose wrong and you own the integration debt for years.
None of these products fully solves behavioral monitoring. They address discovery, policy enforcement, and visibility. Real-time behavioral anomaly detection for agents remains an open engineering challenge.
What to do about it
Define your AI agent governance requirements before evaluating any vendor. Required capabilities: inventory discovery, permission auditing, behavioral logging, and human approval workflows for high-risk actions.
Assess whether your environment favors an integrated control plane or best-of-breed point solutions based on your actual architecture, not vendor marketing claims.
Ask every vendor during evaluation: how does the product detect when an agent takes an authorized action it was manipulated into taking? The answer quality will differentiate vendors quickly.
Rock’s Musings
When four vendors announce competing governance control planes at the same conference in the same week, you’re watching a market category consolidate in real time. That’s interesting for analysts and exhausting for practitioners who have to evaluate all of it while managing agents already running in production without any governance. My advice: don’t let the governance platform debate distract from the more urgent problem of knowing what agents you currently have. Most enterprises have agents deployed that security teams didn’t authorize, can’t enumerate, and have no logs on. Governance tooling is the right investment. Knowing what you’re governing is the prerequisite.
The One Thing You Won’t Hear About But You Need To
NIST Publishes AI 800-4: The First Federal Framework for Monitoring AI Systems in Production
NIST published AI 800-4, “Challenges to the Monitoring of Deployed AI Systems,” in March 2026. Built from three practitioner workshops with more than 200 experts across academia, industry, and ten-plus federal agencies, plus an 87-paper literature review, it maps the gaps, barriers, and open questions in monitoring AI systems after deployment. It covers six monitoring categories: functionality, operational health, human factors, security, safety, and compliance. It received no RSA booth, no vendor keynote, and no sponsored coverage (NIST News, March 2026, and NIST AI 800-4 PDF, March 2026).
Why it matters
Most organizations deploying AI monitor latency and availability. AI 800-4 addresses whether the model behaves consistently with its training distribution and produces outputs that align with policy, which are the failures that matter most and the ones traditional monitoring misses entirely.
NIST explicitly identifies human-AI interaction monitoring as the most under-researched gap in the field. Workshop practitioners raised it far more than published literature covers. If your AI monitoring program doesn’t address how users interact with and respond to AI outputs, you’re missing the category NIST calls most underdeveloped.
The document is vendor-neutral and grounded in practitioner experience, directly applicable to conversations with regulators and auditors who want evidence of a structured AI monitoring program.
What to do about it
Download NIST AI 800-4 from nist.gov and route it to whoever owns your AI security program. It’s the most actionable government guidance on operational AI monitoring published to date.
Map your current monitoring coverage against the document’s six categories. The gaps will be immediately apparent and the prioritization logic writes itself once you have the map.
Use AI 800-4 as the foundation for your AI monitoring program documentation. When regulators ask how you monitor AI systems in production, a NIST-aligned program gives you a defensible, auditable answer.
Rock’s Musings
The honest state of enterprise AI monitoring: most organizations have logs showing their AI system responded. They don’t have logs showing whether the response was correct, consistent with training distribution, within policy boundaries, or manipulated by adversarial input. That visibility gap is how AI security incidents become AI security incidents. You don’t catch the drift until the outcome is undeniable and the damage is done. NIST AI 800-4 doesn’t get coverage because nobody can sell it. The organizations that read it and build monitoring programs from its framework will answer regulatory questions coherently in 18 months when enforcement catches up to deployment rates. The organizations that attended every RSA keynote and skipped the NIST publication will be writing incident reports instead. For more on building AI governance programs that survive regulatory scrutiny, visit rockcybermusings.com. If you need help turning frameworks like AI 800-4 into operating programs your security team can actually run, reach out at rockcyber.com.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
The views and opinions expressed in RockCyber Musings are my own and do not represent the positions of my employer or any organization I’m affiliated with.
References
Bargury, M. (2026, March 23). Your AI agents are my minions [Conference presentation]. RSA Conference 2026, San Francisco, CA.
Claburn, T. (2026, March 24). LiteLLM infected with credential-stealing code via Trivy. The Register. https://www.theregister.com/2026/03/24/trivy_compromise_litellm/
Claburn, T. (2026, March 23). AI agents are ‘gullible’ and easy to turn into your minions. The Register. https://www.theregister.com/2026/03/23/pwning_everyones_ai_agents/
Claburn, T. (2026, March 23). Google unleashes Gemini AI agents on the dark web. The Register. https://www.theregister.com/2026/03/23/google_dark_web_ai/
Cisco. (2026, March). Cisco reimagines security for the agentic workforce. Cisco Newsroom. https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2026/m03/cisco-reimagines-security-for-the-agentic-workforce.html
Google Cloud. (2026, March). RSAC 26: Supercharging agentic AI defense with frontline threat intelligence. Google Cloud Blog. https://cloud.google.com/blog/products/identity-security/rsac-26-supercharging-agentic-ai-defense-with-frontline-threat-intelligence
HackerOne. (2026, March). Agentic prompt injection testing for AI security. HackerOne Blog. https://www.hackerone.com/blog/agentic-prompt-injection-testing
HackerOne introduces agentic prompt injection testing as AI security risks accelerate. (2026, March 21). Cybersecurity Insiders. https://www.cybersecurity-insiders.com/hackerone-introduces-agentic-prompt-injection-testing-as-ai-security-risks-accelerate/
Herbert Smith Freehills Kramer. (2026, March). Transparency obligations for AI-generated content under the EU AI Act: From principle to practice. https://www.hsfkramer.com/notes/ip/2026-03/transparency-obligations-for-ai-generated-content-under-the-eu-ai-act-from-principle-to-practice
EU releases second draft of AI Act Code of Practice on labeling AI-generated content. (2026, March). BABL AI. https://babl.ai/eu-releases-second-draft-of-ai-act-code-of-practice-on-labeling-ai-generated-content/
Microsoft Security. (2026, March 20). Secure agentic AI end-to-end. Microsoft Security Blog. https://www.microsoft.com/en-us/security/blog/2026/03/20/secure-agentic-ai-end-to-end/
NIST. (2026, March). New report: Challenges to the monitoring of deployed AI systems. https://www.nist.gov/news-events/news/2026/03/new-report-challenges-monitoring-deployed-ai-systems
NIST. (2026). NIST AI 800-4: Challenges to the monitoring of deployed AI systems. National Institute of Standards and Technology. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.800-4.pdf
Novee. (2026, March 24). Novee introduces autonomous AI red teaming to uncover security flaws in LLM applications [Press release]. GlobeNewswire. https://www.globenewswire.com/news-release/2026/03/24/3261278/0/en/Novee-Introduces-Autonomous-AI-Red-Teaming-to-Uncover-Security-Flaws-in-LLM-Applications.html
Novee introduces autonomous AI red teaming to hunt LLM vulnerabilities. (2026, March 24). Help Net Security. https://www.helpnetsecurity.com/2026/03/24/novee-ai-red-teaming-for-llm-applications/
Palo Alto Networks Unit 42. (2026, March). New prompt injection attack vectors through MCP sampling. https://unit42.paloaltonetworks.com/model-context-protocol-attack-vectors/
SecurityWeek. (2026, March). RSAC 2026 conference announcements summary: Day 1. https://www.securityweek.com/rsac-2026-conference-announcements-summary-day-1/amp/
Zenity AI agents contextual security. (2026, March 24). Help Net Security. https://www.helpnetsecurity.com/2026/03/24/zenity-ai-agents-contextual-security/
Zenity. (2026, March 23). Zenity sets the foundation for guardian agents. Zenity Newsroom. https://zenity.io/company-overview/newsroom/company-news/zenity-sets-the-foundation-for-guardian-agents



