Weekly Musings Top 10 AI Security Wrapup: Issue 23 December 5, 2025 - December 11, 2025
GPT Agents, Black Hat London, And The Week Agentic AI Got A Rulebook
I’m in London this week supporting the global launch of the OWASP GenAI Security Project Agentic Security Initiative Top 10 for Agentic Applications. It’s the first globally peer-reviewed Top 10 focused on agentic AI applications, backed by a broad coalition of cloud providers, banks, and national cybersecurity centers. (PR Newswire; OWASP)
I’m also attending Black Hat Europe. It turned into a live case study on what happens when AI security becomes the main act. A dedicated AI Security Summit, an adjacent Agentic AI Security Summit, AI-heavy trainings, and a Startup Spotlight dominated by agent runtime protection all sent the same message: AI agents are now an operational attack surface, not a research topic.
Also outside the conference center, the Linux Foundation launched the Agentic AI Foundation to standardize the plumbing that lets agents talk to your systems, while new research showed fresh prompt-injection paths through that same plumbing.
1. OWASP Top 10 For Agentic Applications Becomes The Baseline
Summary
OWASP’s GenAI Security Project released the OWASP Top 10 for Agentic Applications, a new framework focused on the specific risks of autonomous, tool-using AI agents. The list is the product of contributions from more than 100 experts across cloud providers, banks, governments, and security vendors, and is positioned as the benchmark for securing agentic AI through 2026. (PR Newswire; OWASP)
The Top 10 covers issues like unsafe tool orchestration, agent privilege escalation, memory and context abuse, supply chain risks in agent frameworks, and misuse of external protocols such as MCP. It arrives alongside OWASP’s broader GenAI Security Project, which already maintains the LLM Top 10 and AI security guidance, and alongside new hands-on resources like the FinBot agentic CTF and solution guides.
OWASP positioned the Agentic Top 10 as a companion to, not a replacement for, the existing LLM Top 10, reflecting the shift from “chatbots in the browser” to agents driving actions in SaaS, cloud, and internal systems.
Why It Matters
You finally have a shared language for agent risks across security, engineering, and product.
Regulators and auditors now have a concrete, community-backed reference for “reasonable” agent controls.
Vendors will start claiming “OWASP Agentic Top 10 coverage” in product pitches, so you need to know what that really means.
What To Do About It
Map every current and planned AI agent against the Top 10 and mark “not applicable” sparingly.
Align your AI risk assessment and internal control library to OWASP’s categories, so your teams speak the same risk language.
Ask every AI vendor to show how they handle the Agentic Top 10 items in architecture, logging, and incident response.
Rock’s Musings
I am biased here. I sit on the OWASP GenAI Security Project Agentic Security Initiative’s core leadership team and worked with the crew that built this list. That also means I know exactly how much ugly real-world incident data went into it. This is not a theoretical academic checklist. It is a synthesis of red-team findings, production outages, and “we really should not talk about this in public” war stories.
If you’ve already tuned your program to the OWASP LLM Top 10, don’t treat this as optional extra credit. Agents are different. They chain tools, hold state, and make autonomous decisions. That changes everything from threat modeling to logging. The Top 10 for Agentic Applications gives you a floor, not a ceiling.
Use it like you would a standard, such as ETSI TS 104 223, to benchmark your controls and justify investment, not as a checkbox exercise. If you want a pragmatic breakdown, I published a deep dive on the list and how to operationalize it in your environment at RockCyber Musings.
2. No-Code Copilot Agents Are Already Leaking Company Data
Summary
Dark Reading analyzed a Tenable report on how Microsoft Copilot’s no-code agent builders and similar tools can leak sensitive data when business users wire agents directly into production SaaS and internal systems. The piece walks through scenarios in which poorly scoped agents expose customer records, internal documents, or regulated data through misconfigured connectors, overly broad permissions, and a lack of runtime monitoring. (Dark Reading)
This isn’t just a Microsoft story. Security vendors and practitioners are reporting the same pattern across Copilot Studio, n8n, Zapier, Salesforce, and other no-code environments. Non-technical users ship agents that read and write real data, with minimal governance, often outside security’s line of sight. Platforms such as Nokod and Obsidian have launched agent security products specifically because low-code and no-code agents are over-privileged and significantly more data-hungry than human users.
Microsoft’s own guidance on autonomous agents now explicitly warns that SaaS-based agents built on Copilot Studio require dedicated governance and security guardrails, which many organizations have not yet implemented.
Why It Matters
No-code agents mean your “developers” now include HR, finance, and operations staff with no threat-modeling background.
These agents often get broader access than humans and operate 24x7, amplifying any misconfiguration.
Incident response teams cannot triage what they cannot see, and most orgs lack an agent inventory.
What To Do About It
Treat no-code agent builders as development platforms and route them through existing SDLC, change control, and DLP controls.
Require security review and least-privilege access for any agent that can touch production or regulated data.
Evaluate agent-specific runtime protection and governance tools, especially if you are standardizing on Copilot Studio or similar.
Rock’s Musings
Every time I hear “it’s just a Copilot workflow, not an app,” I want to slap someone. Functionally, these no-code agents are applications with persistent memory, system access, and the ability to chain actions. The fact that they were built by a marketing manager instead of a developer certainly doesn’t make them safer.
I’ve seen agents pull entire customer datasets into memory to answer a single question that should have only required a narrow query. I’ve seen agents granted tenant-wide SharePoint access “because the form would not save otherwise.” None of this shows up in your traditional AppSec dashboards.
You’re already behind the curve if your AI governance program, CARE-style or otherwise, doesn’t treat agent builders as Tier 1 platforms. Put a gate in front of these tools, add them to your AI Risk Assessment, and assume that anything a human can click, an agent can click faster and at scale.
3. Black Hat Europe Turns Into An AI Security Summit
Summary
Black Hat Europe 2025 devoted a full-day AI Security Summit in London on December 9, framing AI security as a first-class track alongside its traditional research briefings. The summit focused on three core themes: securing AI applications and models, using AI in cybersecurity products, and understanding how attackers weaponize AI. (Black Hat)
As mentioned above, adjacent to the main conference, OWASP’s GenAI Security Project hosted an Agentic AI Security Summit that connected offensive security practitioners with AI security specialists, with a strong emphasis on agentic risks and AI bill of materials practices.
Vendors, researchers, and community projects used the summit as a launchpad to discuss agent runtime controls, AI bill of materials, MCP security, and new guidance like the Agentic Top 10. I loved how conversations shifted from “what do we do about AI?” to detailed debates about AI-driven threats, agentic defenses, and SOC workflows.
Why It Matters
When Black Hat gives AI security its own summit, you can expect regulators and boards to treat it as a core discipline.
The cross-pollination between offensive security and AI governance is accelerating best practices.
Your peers aren’t just talking about AI safety in the abstract; they are swapping concrete playbooks and tools.
What To Do About It
Ask your team who attended or streamed sessions and capture three specific control ideas to bring home.
Map summit themes back to your AI roadmap: runtime controls, AI BOM, MCP security, and agent governance.
Use the summit agenda as a checklist for your 2026 AI security training and hiring plan.
Rock’s Musings
I have been going to Black Hat for a long time, but it’s my first non-USA one. It’s clear that AI security isn’t a side-track anymore. It is becoming a primary discipline, like application security or cloud security.
What struck me in hallway-con was how quickly the vocabulary has matured. People weren’t asking “should we use agents.” They were arguing about how to enforce runtime guardrails across MCP servers, how granular AI BOMs need to be, and how to reconcile AI governance with existing GRC frameworks. That is the kind of argument I like.
You need to move AI security out from under “innovation” to “core capability.” Otherwise, you’ll feel this gap when you sit across from regulators or major customers. Use the talks and themes from this summit as raw input to your own RISE and CARE-aligned roadmap. The bar just moved.
4. Arsenal’s Eight Open Source AI Security Tools Point To The Next Stack
Summary
The Black Hat Europe 2025 Arsenal showcased eight open-source AI security tools that together sketch the emerging reference stack for defending AI systems. Analyses of the Arsenal program highlight tools such as AI-Infra-Guard for AI infrastructure scanning and jailbreak evaluation, Harbinger for AI red teaming, RedAiRange as an AI cyber range, and SQL-Data-Guard for securing LLM-driven database access.
These tools cover the full lifecycle: discovering AI assets, testing models against adversarial prompts, evaluating unsafe tool bindings, building realistic attack labs, and instrumenting interactions between LLMs and data stores. Community writeups stress that while none of these tools are complete products, they are already in use by security teams experimenting with AI-native testing and governance workflows.
Why It Matters
Open tools define the baseline of what “normal” AI security capabilities will look like in the next two years.
Your team can experiment now without waiting for vendors to productize every idea.
These tools make AI-specific threat modeling and testing tangible for your engineers.
What To Do About It
Stand up a small lab that uses at least one Arsenal tool to test a real internal AI application.
Ask your red-team and AppSec leads which tools map best to your current AI stack and integrate them into exercises.
Treat successful experiments as prototypes for requirements in future commercial AI security procurements.
Rock’s Musings
I love Arsenal because it shows where practitioners are actually spending nights and weekends. This year, the pattern is obvious. Teams are tired of retrofitting Web AppSec tools onto AI systems and are building AI-native capabilities instead.
The lesson is that there is no single “AI security platform” to save you. Expect a toolbox. Asset discovery for models and agents. Prompt-security test harnesses. MCP scanners. Database guards. Cyber ranges that can recreate the Anthropic-style AI espionage scenarios you have read about. Your job is to give teams room to test these tools, then fold the winners into a coherent program.
If you want a mental model, think back to early cloud security. A decade ago, teams glued together open-source tools before CNAPP became a product category. We are at that stage again, now for AI.
5. Microsoft Moves To “In Scope By Default” At Black Hat Europe
Summary
At Black Hat Europe, Microsoft’s Security Response Center announced a shift to an “In Scope by Default” model for coordinated security research and bug bounties across its online services. Instead of narrow product-by-product scopes, Microsoft is treating all online services as in scope unless explicitly excluded, with the aim of better matching how attackers actually operate. (Microsoft)
The blog emphasizes that threat actors do not respect product boundaries, and that bounty programs must reflect that reality. For AI security, this matters because Copilot, AI agents, and AI-powered features span multiple services and APIs. A model where researchers can probe the integrated AI surface without worrying about artificial scope lines should yield more realistic findings.
Why It Matters
Major vendors shifting to a broader research scope sets an expectation for the rest of the market.
AI-enabled features often span services, so narrow scopes create blind spots.
Researchers now have stronger incentives to test complex AI workflows end-to-end.
What To Do About It
Pressure your own vendors to adopt similarly broad scopes, especially for AI features.
Review your internal vulnerability disclosure and red-team rules of engagement for artificial boundaries.
For your own products, pilot an “in scope by default” approach on at least one AI-heavy service.
Rock’s Musings
This is one of those policy changes that sounds boring until you have to respond to a real incident. Narrow bug bounty scopes around AI features are the security equivalent of putting traffic cones around only half the intersection. You still get hit.
Microsoft moving to in-scope by default for online services is an admission that the old way of slicing responsibility by product does not match agentic reality. Copilot workflows span Exchange, SharePoint, Teams, custom apps, and third-party plugins. Attackers see one surface, not five.
If you are serious about AI security, your research program needs to match that view. In practice, that means giving your internal red-team and external researchers enough room to follow an agent through the entire workflow, not just the neat part that fits a product org chart. Use this announcement as air cover when you push for similar changes internally.
6. Startup Spotlight: Capsule And Geordie Bring Runtime Guardrails To AI Agents
Summary
Black Hat Europe’s Startup Spotlight Competition selected Capsule Security and Geordie AI among its finalists, both focused on securing AI agents at runtime. Black Hat’s materials describe Capsule as delivering runtime-first protection and control for AI agents across cloud, devices, and AI agent SaaS platforms, with features like auto-discovery, permission mapping, and real-time guardrails that stop prompt injection and rogue actions.
Geordie is positioned as an “agent-native” security platform that provides real-time visibility, risk intelligence, and mitigation for autonomous agents embedded in enterprise workflows, backed by investors like Ten Eleven Ventures and General Catalyst.
Combined with Capsule’s emerging integrations into Copilot Studio and Microsoft’s AI runtime ecosystem, the Startup Spotlight lineup signals that runtime agent security is moving from niche idea to mainstream product category.
Why It Matters
Investors and conference curators are betting that “AI agent security” will be a standalone market, not just a feature.
Runtime controls for agents fill gaps that static configuration and traditional AppSec cannot cover.
These companies explicitly align with frameworks like NIST AI RMF and ISO 42001, giving you governance hooks.
What To Do About It
Add “agent runtime security” to your 2026 security architecture roadmap and vendor evaluation matrix.
When piloting agent platforms, require at least one proof-of-concept integration with a dedicated runtime guardrail tool.
Use vendor claims about NIST AI RMF, ISO 42001, and EU AI Act alignment as a starting point for due diligence, not the finish line.
Rock’s Musings
I pay a lot of attention to which startups Black Hat puts on stage. The presence of Capsule and Geordie tells you the market now recognizes AI agents as a discrete control problem, not an add-on to existing tools.
Both companies talk about runtime, visibility, and guardrails in language that rhymes with what I hear from CISOs behind closed doors. They want to know what their agents are actually doing, in production, across multiple environments, with clear evidence for auditors. That is very different from “we scan your prompts for bad words.”
If you are still treating agentic AI as a sidecar to identity or API security, you will find yourself juggling bespoke scripts and dashboards in a year. Start experimenting now with platforms whose first job is to control agents, not just report on them. That gives you leverage when you go back to your board and say, “We are not just shipping agents, we are instrumenting them.”
7. AI Agents At Runtime: Sponsored Sessions Turn Misbehavior Into A Board Topic
Summary
Among the sponsored sessions at Black Hat Europe, one stood out. “AI Agents at Runtime: Stopping Misbehavior Before It Becomes a Breach,” presented by Capsule’s CTO. The session, listed under the AI, ML & Data Science track, focuses on runtime monitoring and prevention of rogue agent behavior, aligning with Capsule’s positioning as an AI agent runtime security platform.
While sponsored content is marketing, it is also a signal of what buyers care about. The vendors know it would be a waste of money otherwise. The fact that a runtime agent session made it into the Black Hat Europe agenda reflects growing concern around autonomous agents changing system state, invoking tools, and escalating privileges with minimal direct human oversight.
Why It Matters
Runtime misbehavior is where theoretical agent risks turn into reportable incidents.
Boards and regulators will start asking how you detect and stop “agents gone rogue.”
Vendors are framing agent misbehavior as a distinct class of runtime risk that needs dedicated controls.
What To Do About It
Update your threat models to include agent misbehavior scenarios, not just model jailbreaks and data leaks.
Ensure your logging and SIEM pipelines capture enough telemetry to reconstruct agent actions across tools.
Pilot runtime enforcement policies that can block or require approval for high-risk agent actions.
Rock’s Musings
I usually take sponsored sessions with many, many grains of salt; however, you ignore them completely at your own risk. Vendors spend green money to talk about what customers are worried about right now. This year, that worry is clearly “what happens when agents act on their own in production.”
You’re running with scissors… blind… if your current controls can only tell you what prompt an agent started with. You need to know which tools it called, what records it touched, what it wrote back, and whether that behavior deviated from normal. In other words, you need something closer to runtime EDR for agents than a static prompt filter.
If you sit on a board or leadership team, start asking, “How would we know if an internal Copilot-style agent started mass-editing entitlements or exfiltrating sensitive data through a connector?” If the answer is a long pause, you have work to do.
8. Training The Defenders: Mastering LLM Integration Security
Summary
Black Hat Europe’s training program expanded to include courses like “Mastering LLM Integration Security,” aimed at teaching teams how to secure LLM integrations, mitigate AI-driven risks, and handle AI agents in production environments. Training providers such as Claranet highlight this as a first-time offering at Black Hat, focused on real-world LLM integration risks and defensive patterns.
The broader training schedule shows AI, ML, and data science content threaded throughout, including labs on AI agents, automation tools, and AI-driven threat hunting. This is a shift from previous years, where AI often appeared only as a research topic rather than a skills track.
Why It Matters
Training supply is finally catching up with AI security demand.
Your engineers can learn concrete patterns for safe LLM integration instead of improvising.
AI security skills will differentiate teams that can actually ship secure AI features.
What To Do About It
Budget specific seats for AI security trainings in 2026, not just general AppSec or cloud courses.
Prioritize courses that cover LLM integration, MCP security, and agent governance, not only high-level AI concepts.
Tie training outcomes to concrete milestones, like shipping a secure reference architecture or playbook.
Rock’s Musings
Skills are the real bottleneck. You can buy all the AI security products you want, but if your team does not know how to design, review, and monitor AI systems, you are writing checks that your people cannot cash.
Seeing “Mastering LLM Integration Security” in the Black Hat catalog is encouraging. It means we are moving past the “AI is magic” phase and into the “this is just another integration problem with new failure modes” phase. That is exactly where you want your teams to operate.
Treat AI security training as a core component of your security program. You need enough knowledge up front to design the right architecture, and enough ongoing training to keep pace with new attack patterns, such as MCP sampling and agent session smuggling.
9. Linux Foundation’s Agentic AI Foundation Sets Standards For Agents
Summary
The Linux Foundation announced the Agentic AI Foundation (AAIF), a new consortium dedicated to open standards and governance for AI agents. Founding contributions include Anthropic’s Model Context Protocol (MCP), OpenAI’s AGENTS.md specification for coding agents, and Block’s Goose framework, all transferred into an open, community-governed structure. (Linux Foundation; Anthropic)
AAIF is backed by major players such as Google, Microsoft, AWS, Bloomberg, and Cloudflare, and aims to keep agent infrastructure interoperable, neutral, and extensible. Reporting from outlets like Wired, The Verge, and others frames this as an attempt to avoid a fragmented, proprietary “agent internet” by standardizing how agents connect to tools, data, and each other.
Why It Matters
Agent standards will influence how secure your AI stack is by default.
Open governance gives enterprises a path to influence protocol evolution, including security requirements.
Regulators may eventually look to AAIF protocols as de facto baselines for “responsible” agent deployments.
What To Do About It
Ask your AI platform vendors how they plan to align with MCP and AGENTS.md, and how they will handle AAIF updates.
Ensure your security architects understand MCP’s implications for access control, logging, and isolation.
Consider joining or tracking AAIF working groups relevant to your sector so you are not surprised by where the standards go.
Rock’s Musings
Whenever infrastructure standardizes, security requirements tend to get baked in late and unevenly. AAIF is a chance to do that differently. If MCP and related standards become the default “wiring harness” for your agents, the security model of that harness becomes a board-level issue.
Security teams can’t afford to treat AAIF as someone else’s plumbing problem. This is where you push for mandatory authentication patterns, least-privilege scopes, and standard telemetry fields. It is also where you call out design decisions that make prompt injection and data exfiltration easier than they need to be.
If you let only vendors and protocol designers drive this conversation, you will inherit whatever security posture they find convenient. Get involved. Get your architects, privacy leads, and AI governance owners to read the AAIF materials now, not after you discover that half your new agents rely on a protocol nobody threat-modeled.
10. Agentic AI Security Reports Show Visibility And Governance Are Missing
Summary
Akto released its “State of Agentic AI Security 2025” report, finding that only 21% of enterprises have full visibility into agent behaviors, permissions, tool usage, and data access. The report highlights that most organizations lack clear ownership for AI agent security, and that 79% have no formal governance policy for agent permissions and monitoring. (Akto; PR Newswire)
Digital Commerce 360’s coverage of the report frames this as a “security gap” emerging from rapid agent adoption, with executives acknowledging that agents are in production without adequate oversight. Salt Security’s parallel analysis describes “Agentic AI Security” as a fourth pillar of cybersecurity alongside identity, data, and API security, underscoring how agent systems introduce attack surfaces that do not map cleanly to existing categories.
Why It Matters
You probably have agents in production that no single team fully owns or understands.
Lack of visibility into agent behavior is incompatible with any serious governance claim.
Boards will increasingly ask for concrete metrics on agent inventory, coverage, and incidents.
What To Do About It
Run an agent discovery exercise across SaaS, cloud, and internal platforms, and publish a single inventory.
Assign explicit ownership for agent security, ideally tied to an existing function like AppSec or AI governance.
Add agent-specific KPIs to your risk dashboards, including visibility coverage, incident counts, and policy violations.
Rock’s Musings
These numbers match what I hear from CISOs almost word for word. Agents are everywhere, often created by well-meaning teams trying to automate their work. Security leaders know they exist, but cannot answer basic questions like “how many agents do we have that can touch customer data.” That should scare you.
The framing of agentic AI as a fourth pillar is helpful if you use it as a forcing function, not a marketing slogan. It means you treat agents as first-class citizens in your architecture. They get their own inventory, policies, logging standards, and incident playbooks. They are not just a feature inside identity, data, or API tools.
Use structured assessments like the AI Security Baseline Playbook or a RISE/CARE-driven governance model, and plug these metrics into your next board update. Show where you have visibility, where you do not, and what it will take to close the gap before the next audit asks the same questions.
11. The One Thing You Won’t Hear About But You Need To: MCP Sampling Prompt Injection
Summary
Palo Alto Networks’ Unit 42 published research on new prompt-injection attack vectors that exploit the Model Context Protocol’s “sampling” feature, which lets MCP servers ask an AI client to run prompts on their behalf. Malicious servers can use sampling to inject hidden instructions, hijack conversations, trigger unauthorized tool calls, and drain system resources. (Unit 42)
Follow-on coverage from security outlets and vendors shows that this is not an isolated concern. Red Hat, Elastic, and other observers have warned that MCP sampling opens the door for server-side prompt injection, while multiple advisories describe how unsecured MCP servers on the open internet expose LLM-integrated applications to abuse, resource theft, and data exfiltration.
This all lands the same week that MCP becomes a founding project of the Agentic AI Foundation, which virtually guarantees its widespread adoption.
Why It Matters
MCP is rapidly becoming the default way agents connect to tools and data, so its weaknesses propagate widely.
Sampling flips the trust boundary: servers can effectively “drive” your client’s model without user awareness.
Misconfigured MCP servers can become a quiet path for resource abuse, data leakage, and privilege escalation.
What To Do About It
Disable MCP sampling for any untrusted or third-party servers, and require explicit approval where needed.
Treat MCP servers as high-sensitivity workloads and apply strict authentication, authorization, and network controls.
Add MCP traffic and sampling events to your logging, detection, and threat-hunting playbooks.
Rock’s Musings
This is the research that should be screaming from every keynote but is instead circulating quietly among practitioners. MCP is incredibly useful. It is also a new protocol surface that most organizations have not threat-modeled with the rigor they apply to APIs or identity systems.
Sampling changes the mental model. Instead of your agent reaching out to a tool, the tool can ask your agent to think on its behalf. That is powerful. It is also a perfect place to hide malicious behavior, especially if your logging focuses on the initial user prompt rather than the prompts generated through sampling.
In previous Weekly Musings issues, I have talked about MCP as the next big control point. If you let MCP proliferate without clear policies around which servers can request sampling, how you isolate contexts, and what telemetry you keep, you are building an invisible attack path straight into your AI stack. This is where a serious AI risk assessment and runtime controls for agents and MCP servers stop being optional.
Citations
Akto. (2025, December 8). Akto’s 2025 state of agentic AI security report finds only 21% of enterprises have visibility [Press release]. PR Newswire. https://www.prnewswire.com/news-releases/aktos-2025-state-of-agentic-ai-security-report-finds-only-21-of-enterprises-have-visibility-302635105.html
Akto. (2025, December 8). The state of agentic AI security 2025 [Blog post]. https://www.akto.io/blog/state-of-agentic-ai-security-2025
Anthropic. (2025, December 9). Donating the Model Context Protocol and establishing the Agentic AI Foundation [Blog post]. https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
Black Hat. (2025). Black Hat Europe 2025 [Conference site]. https://blackhat.com/eu-25/
Black Hat. (2025, November 13). Black Hat Europe 2025 summits to expose critical AI threats, nation-state attacks, and financial sector vulnerabilities [Press release]. https://blackhat.com/html/press/2025-11-13.html
Black Hat. (2025). AI Security Summit [Summit page]. https://blackhat.com/eu-25/ai-summit.html
Black Hat. (2025). Arsenal schedule – Black Hat Europe 2025 [Program]. https://blackhat.com/eu-25/arsenal/schedule/index.html
Black Hat. (2025). Sponsored sessions schedule – Black Hat Europe 2025 [Program]. https://blackhat.com/eu-25/sponsored-sessions/schedule/index.html
Black Hat. (2025). Trainings schedule – Black Hat Europe 2025 [Program]. https://blackhat.com/eu-25/training/schedule/index.html
Black Hat. (2025, November 19). Black Hat Europe 2025 unveils finalists for inaugural startup spotlight competition [Blog post]. https://blackhat.com/html/blog/2025-11-19.html
Black Hat. (2025). Startup Spotlight competition – Black Hat Europe 2025 [Program]. https://blackhat.com/eu-25/spotlight.html
Dark Reading. (2025, December 11). Copilot’s no-code AI agents liable to leak company data. https://www.darkreading.com/application-security/copilot-no-code-ai-agents-leak-company-data
EclecticIQ. (2025, December 3). We’re at Black Hat Europe [Blog post]. https://blog.eclecticiq.com/were-at-black-hat-europe
Eliyahu, T. (2025, October 27). Black Hat Europe 2025 Arsenal: 8 AI security tools transforming cybersecurity [Blog post]. Medium. https://medium.com/%40Ethansalan/black-hat-europe-2025-arsenal-8-ai-security-tools-transforming-cybersecurity-ccd08c472aaa
Linux Foundation. (2025, December 9). Linux Foundation announces the formation of the Agentic AI Foundation (AAIF), anchored by new project contributions including Model Context Protocol (MCP), goose and AGENTS.md [Press release]. https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation
Microsoft Security Response Center. (2025, December 11). Evolving our approach to coordinated security research: In scope by default [Blog post]. https://www.microsoft.com/en-us/msrc/blog/2025/12/in-scope-by-default
Microsoft. (2025, August 26). Securing and governing the rise of autonomous agents [Blog post]. https://www.microsoft.com/en-us/security/blog/2025/08/26/securing-and-governing-the-rise-of-autonomous-agents/
OWASP GenAI Security Project. (2025, December 9). OWASP GenAI Security Project releases Top 10 risks and mitigations for agentic AI security [Blog post]. https://genai.owasp.org/2025/12/09/owasp-genai-security-project-releases-top-10-risks-and-mitigations-for-agentic-ai-security/
OWASP GenAI Security Project. (2025, December 9). OWASP Top 10 for Agentic Applications – The benchmark for agentic security in the age of autonomous AI [Blog post]. https://genai.owasp.org/2025/12/09/owasp-top-10-for-agentic-applications-the-benchmark-for-agentic-security-in-the-age-of-autonomous-ai/
OWASP GenAI Security Project. (2025). OWASP Top 10 for Agentic Applications 2026 [Framework]. https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/
OWASP. (2025). AI security overview [Web page]. https://owaspai.org/docs/ai_security_overview/
Palo Alto Networks Unit 42. (2025, December 6). New prompt injection attack vectors through MCP sampling [Blog post]. https://unit42.paloaltonetworks.com/model-context-protocol-attack-vectors/
Red Hat. (2025, July 1). Model Context Protocol (MCP): Understanding security risks and controls [Blog post]. https://www.redhat.com/en/blog/model-context-protocol-mcp-understanding-security-risks-and-controls
Salt Security. (2025, December 8). Agentic AI security: The emerging fourth pillar of cybersecurity [White paper and blog]. https://salt.security/blog/agentic-ai-security-the-emerging-fourth-pillar-of-cybersecurity
SiliconANGLE. (2025, December 11). Model Context Protocol security risks grow as unsecured servers appear across the internet. https://siliconangle.com/2025/12/11/model-context-protocol-security-risks-grow-unsecured-servers-appear-across-internet/
Unit 42. (2025, December 9). New prompt injection attack via malicious MCP servers let attackers drain resources [Summaries and advisories]. Cybersecurity News; GBHackers; Cyber Security Review. https://cybersecuritynews.com/prompt-injection-malicious-mcp-servers/; https://gbhackers.com/malicious-mcp-servers-enable-stealthy-prompt-injection/; https://www.cybersecurity-review.com/new-prompt-injection-attack-vectors-through-mcp-sampling/
Wired. (2025, December 9). OpenAI, Anthropic, and Block are teaming up to make AI agents play nice. https://www.wired.com/story/openai-anthropic-and-block-are-teaming-up-on-ai-agent-standards
The Verge. (2025, December 10). AI companies want a new internet – and they think they’ve found the key. https://www.theverge.com/ai-artificial-intelligence/841156/ai-companies-aaif-anthropic-mcp-model-context-protocol
Digital Commerce 360. (2025, December 8). Agentic AI rush exposes a growing security gap across enterprises. https://www.digitalcommerce360.com/2025/12/08/agentic-ai-rush-exposes-security-gap-across-enterprises/
RockCyber. (2025). AI strategy and governance – RISE Framework for AI Strategy™ [Service page]. https://www.rockcyber.com/ai-strategy-and-governance
RockCyber. (2025). AI risk assessment [Service page]. https://www.rockcyber.com/ai-risk-assessment
Rock Lambros. (2025, December 10). It’s here!!! The OWASP Top 10 for Agentic Applications just dropped [Substack post]. RockCyber Musings. https://www.rockcybermusings.com/p/owasp-top-10-agentic-applications-security-guide
Rock Lambros. (2025, July 28). AI security baseline playbook: My take on ETSI TS 104 223 [Substack post]. RockCyber Musings. https://www.rockcybermusings.com/p/ai-security-baseline-etsi-ts-104-223




Brilliant roundup on agent security! The MCP sampling risks caught my attention becuase most teams are treating it as just another API layer. I've seen similar trust-boundary flips in service mesh deployments, where the "helper" service ends up having more context than the primary app. The timing with AAIF launch makes this even more critcal to get right early.