Weekly Musings Top 10 AI Security Wrapup: Issue 19 October 31, 2025 - November 6, 2025
When Security & Governance Can’t Keep Pace: AI Moves Faster Than Security Teams Can Respond
Speed kills security. Organizations deploy AI systems that spawn sub-agents, rewrite their own code, and execute actions before security teams finish their morning coffee. Traditional governance tools built for quarterly audits can’t monitor agents that evolve by the millisecond. China tightens the screws with new cybersecurity laws. OpenAI ships a vulnerability hunter powered by GPT-5. Shadow AI spreads through enterprises like wildfire. And nobody knows if the person on the other end of that video call is real anymore. Welcome to November 2025, where the gap between AI deployment and AI security just got wider.
This week’s stories share a common thread. AI systems are moving faster than the frameworks designed to control them. Organizations rush to adopt agentic AI for productivity gains while security teams scramble to understand what their employees are actually using. Nation-states update regulations to address AI infrastructure risks. Vendors race to build tools that can keep pace with autonomous agents. And criminals leverage the same AI capabilities to bypass biometric authentication and launch increasingly sophisticated attacks.
Since this is my newsletter, I think it’s my prerogative that the first two musings on this list are news from yours truly this week 😄
1. AAGATE: Finally, a Way to Govern AI That Doesn’t Break When Agents Spawn
Summary
AAGATE (Agentic AI Governance Assurance & Trust Engine) provides the first open-source, production-ready reference architecture that operationalizes the NIST AI Risk Management Framework for agentic AI systems (ArXiv). The platform integrates four specialized security frameworks into a single Kubernetes-native control plane: MAESTRO for threat mapping, OWASP AIVSS for risk measurement, CSA Red Teaming for adversarial testing, and SSVC for response prioritization. Traditional governance operates on quarterly cycles while agentic AI evolves continuously. AAGATE solves this by implementing millisecond-scale monitoring through a zero-trust service mesh, shadow agent monitoring, and automated kill switches. The platform is available now as an open-source MVP at GitHub.
Why It Matters
Traditional security can’t monitor machine-speed decisions. Agentic AI makes autonomous choices in milliseconds. Quarterly audits and annual compliance reviews become useless when agents rewrite their own code or spawn unauthorized sub-agents before you can detect the drift.
NIST AI RMF had no practical implementation. Organizations understood they needed to map, measure, and manage AI risks but lacked tactical blueprints showing how to wire this up in production environments with real workloads and actual threat models.
Speed of deployment outpaces speed of governance. Agents that can browse the web, execute code, and hit production APIs create risk surfaces that expand faster than security teams can inventory them, leaving massive blind spots in enterprise attack surfaces.
What To Do About It
Deploy the AAGATE MVP in your development environment. Clone the repository, run it locally, and test it against your existing AI agent workflows. You need to see how millisecond-scale governance works before production deployment catches you unprepared.
Map your agent architecture to MAESTRO threat layers. Document every tool call, every API endpoint, and every data source your agents access. Build a real-time registry of agent capabilities and cryptographic identities before someone spins up a rogue process.
Implement kill switches for high-risk operations. Configure automatic network isolation for any agent that attempts to access sensitive data sources or executes behaviors outside its defined permissions. Milliseconds matter when preventing cascading failures.
Rock’s Musings
I built this because I got tired of watching security teams lose the governance race. Traditional AppSec assumes stable systems that don’t rewrite themselves overnight. Agents break that assumption completely.
When an AI agent can spawn three sub-agents to handle a customer request and one of them hallucinates a database query that leaks PII, your quarterly audit cycle is worthless. The cascade happens in 47 milliseconds. By the time your security team opens their incident response runbook, the damage is done. We needed governance that operates at the same speed as the threats. AAGATE proves you can do this today with existing frameworks and open-source tools. The code is real, the architecture is tested, and the community can improve it faster than any single vendor could.
2. TokenTally: The LLM Cost Calculator That Actually Models Real Conversations
Summary
TokenTally is an open-source LLM cost calculator that models multi-turn conversations with context accumulation and prompt caching instead of treating every API call as a single interaction (TokenTally GitHub). The tool provides dual calculators for chatbot workflows and batch operations, supporting 16 models across OpenAI and Anthropic. Context accumulation follows three strategies: minimal (50 tokens per turn), moderate (150 tokens per turn), and full (300 tokens per turn). Prompt caching simulation calculates Claude’s 90% savings on system prompts after the first turn. The optimization engine analyzes configurations and suggests improvements with dollar amounts attached. A 5,000-conversation monthly load using Claude Sonnet 4 costs $159.05 with caching versus $173.46 without caching.
Why It Matters
Token costs don’t scale linearly like traditional infrastructure. Output tokens cost 4x to 8x more than input tokens. Context accumulates with each conversation turn. Five turns of dialogue can cost 4x more than a single interaction, breaking budget forecasts that assume static per-request pricing.
Most AI startups can’t explain their costs. Organizations see monthly bills spike from $50 to $5,000 without understanding why. Usage explodes faster than price drops. Bigger context windows and multi-turn conversations accumulate tokens in ways generic calculators completely miss.
Prompt caching saves real money at scale. Claude’s caching reduces system prompt costs by 90% after the first turn. At 10,000 conversations monthly, that’s $135 in savings. Organizations choosing GPT-4o instead of cached Claude Sonnet 4 pay 7% more without realizing caching exists.
What To Do About It
Model your actual conversation patterns before setting pricing. Measure average conversation length, message sizes, and monthly volume. Calculate first-turn costs separately from later turns. Use TokenTally to simulate cache hits and context accumulation for your specific workload.
Compare models with your real numbers, not vendor pricing pages. Plug your parameters into TokenTally once, then switch models to see instant cost comparisons. A 97.5% savings exists between premium and budget models at the same conversation volume.
Enable prompt caching wherever available. Claude offers 90% savings on cached system prompts. At 5,000 conversations monthly, caching saves $14. At 500,000 conversations, that’s $1,400 per month or $16,800 annually for flipping one configuration flag.
Rock’s Musings
A friend asked me how to budget a startup’s chatbot. I didn’t have a good answer, so I built one. Most token calculators assume single API calls. Real chatbots accumulate context with every turn. By turn five of a conversation, you’re processing 4x the tokens you started with, and your finance team thinks something broke.
Nothing broke. That’s just how LLM conversations work. The gap between theoretical single prompts and real multi-turn conversations with caching and context accumulation is the gap between accurate budgets and explaining cost overruns to your CFO three months later. TokenTally fills that gap. The tool is open source, runs client-side with zero dependencies, and gives you real numbers based on your actual workload. Stop guessing what your LLM costs will be.
3. OpenAI Ships Aardvark: GPT-5 Agent That Hunts Vulnerabilities Autonomously
Summary
OpenAI unveiled Aardvark, an AI-powered security researcher built on GPT-5 technology that autonomously detects and fixes software vulnerabilities (Cyber Press). The tool, now available in private beta, functions as an autonomous security agent that continuously monitors source code repositories to identify vulnerabilities, assess their exploitability, and generate targeted patches. Unlike traditional vulnerability detection tools that rely on fuzzing or software composition analysis, Aardvark uses large language model reasoning to analyze code behavior similarly to human security researchers. OpenAI’s testing indicates approximately 1.2% of code commits introduce bugs that could have serious security consequences. Software vulnerabilities represent a systemic risk across industries, with over 40,000 CVEs reported in 2024 alone.
Why It Matters
Traditional vulnerability scanning can’t keep pace with modern development velocity. Static analysis and manual code reviews introduce delays that slow deployment cycles. Aardvark provides continuous, automated security analysis that strengthens defenses without throttling development teams’ momentum.
1.2% of commits introduce security bugs at scale. In large codebases with hundreds of daily commits, that percentage creates dozens of potential vulnerabilities weekly. Manual review can’t catch them all, and delayed discovery means vulnerabilities sit in production for days or weeks.
GPT-5 reasoning enables human-like vulnerability analysis. The model understands code context, data flow, and potential exploitation paths in ways that pattern-matching tools can’t replicate, finding logical flaws that traditional scanners miss completely.
What To Do About It
Apply for Aardvark’s private beta if you operate at scale. Organizations with large development teams and high commit volumes benefit most from autonomous vulnerability detection. Test the tool in isolated environments before production integration to understand its false positive rates.
Maintain human oversight for patch deployment. Autonomous vulnerability detection doesn’t mean autonomous patching. Review Aardvark’s suggested fixes for unintended side effects or logic errors before merging changes into production branches.
Integrate Aardvark findings into existing security workflows. Feed detected vulnerabilities into your SIEM, ticketing systems, and incident response processes. Autonomous detection only works if your team can act on findings efficiently.
Rock’s Musings
Autonomous security agents are here, and they’re coming for your vulnerability backlog. Aardvark represents a fundamental shift in how we think about code security. Traditional scanners look for known patterns. They miss logical flaws, subtle race conditions, and context-dependent vulnerabilities that only human reviewers typically catch.
GPT-5 reasoning changes that equation. The model analyzes code the way a security engineer would, understanding data flow and potential exploitation paths. That’s powerful. It’s also dangerous if you trust it blindly. Autonomous vulnerability detection needs human oversight for patch deployment. An AI-generated fix could introduce new bugs or break existing functionality in ways the model didn’t anticipate. The smart play is to let Aardvark find the problems and let humans validate the solutions.
Vibe Coding + Vibe Patching… what could possibly go wrong?
4. Google’s 2026 Forecast: AI-Driven Attacks Will Outpace Traditional Defenses
Summary
Google Cloud Security’s Cybersecurity Forecast 2026 warns that AI will become a standard weapon for attackers, accelerating social engineering, malware creation, and impersonation campaigns (SiliconANGLE). Multimodal generative AI tools capable of manipulating voice, text, and video will fuel a new wave of business email compromise and hyper-realistic phishing. Prompt injection is forecast to grow as enterprises integrate large language models into everyday workflows, with attacks bypassing internal controls and leading to large-scale data breaches. Shadow agents, or unauthorized AI agents that employees deploy independently, create invisible and uncontrolled data pipelines that expose organizations to security, compliance, and privacy risks. The combination of ransomware, data theft, and multifaceted extortion is forecast to be the most financially disruptive category of cybercrime globally in 2026.
Why It Matters
AI levels the playing field for attackers with limited resources. Generative AI tools reduce the cost and complexity of launching sophisticated attacks. Criminals who lacked technical skills can now generate convincing phishing content, realistic voice impersonations, and deepfake videos at scale.
Prompt injection remains an unsolved security problem. As organizations integrate LLMs into customer-facing systems and internal workflows, attackers will exploit these interfaces to bypass access controls, extract sensitive data, and manipulate model behavior.
Shadow agents create compliance blind spots. Employees deploying unauthorized AI tools generate uncontrolled data pipelines that security teams can’t monitor. These systems process sensitive information, share it with third parties, and create liability exposure without oversight.
What To Do About It
Implement AI-specific threat detection in your SOC. Traditional SIEM rules can’t identify AI-generated phishing or prompt injection attempts. Deploy tools that analyze LLM interactions for manipulation patterns and unauthorized data access.
Inventory all AI systems across your organization. Shadow agents are the new shadow IT. Discover what employees are using, where data flows, and which third-party AI services process company information without security review.
Train employees to recognize AI-enhanced social engineering. Hyper-realistic voice clones and deepfake videos bypass traditional phishing awareness training. Update programs to address multimodal impersonation attacks targeting high-value employees.
Rock’s Musings
Google’s forecast isn’t speculative. It’s a mirror reflecting what’s already happening at smaller scales. AI-generated phishing is already more convincing than human-written attempts. Voice cloning bypasses verbal verification protocols. Deepfake videos defeat visual confirmation. These are current realities that most organizations aren’t prepared to handle.
The shadow agent problem is particularly dangerous. Employees who deploy unauthorized AI tools aren’t trying to cause harm. They’re trying to be more productive. But those productivity gains come with hidden costs: uncontrolled data pipelines, compliance violations, and security blind spots that your existing tools can’t see. You can’t secure what you can’t inventory, and you can’t inventory what employees spin up without telling IT.
5. China Tightens AI Infrastructure Controls with New Cybersecurity Law Amendments
Summary
China’s Standing Committee of the National People’s Congress released amendments to the Cybersecurity Law in October 2025, with new incident reporting requirements taking effect November 1, 2025 (Inside Privacy). The amendments mark the first major update since the law’s 2017 enactment and explicitly reference artificial intelligence for the first time, affirming national support for AI development while strengthening ethical norms and safety oversight. The Administrative Measures for National Cybersecurity Incident Reporting consolidate previously scattered requirements into a unified framework with clear thresholds, timelines, and reporting procedures for onshore infrastructure. Organizations must report “relatively major” incidents, such as data breaches involving more than one million individuals or economic losses exceeding RMB 5 million ($700,000), within four hours. Fines can reach up to RMB 10 million for entities and RMB 1 million for individuals for serious violations.
Why It Matters
China’s AI regulations set precedent for other nations. As one of the first countries to explicitly address AI in cybersecurity law, China’s approach influences how other jurisdictions think about AI infrastructure protection and incident reporting obligations.
Four-hour reporting windows create operational pressure. Multinational companies operating in China must detect, confirm, and report significant incidents within tight timeframes, requiring automated monitoring systems and clear escalation procedures that work across time zones.
Extraterritorial reach extends beyond critical infrastructure. The amendments expand jurisdiction to any foreign conduct that endangers China’s network security, not just attacks on critical information infrastructure, creating compliance obligations for international organizations with minimal Chinese operations.
What To Do About It
Audit your China operations for AI system deployments. Identify which systems fall under the new reporting requirements. Document where data is stored, which networks systems connect to, and which employees have administrative access to AI infrastructure.
Establish automated incident detection for four-hour compliance. Manual monitoring can’t meet reporting deadlines for data breaches affecting one million individuals. Deploy SIEM rules, anomaly detection, and automated alert escalation procedures.
Review legal exposure for extraterritorial provisions. Consult with legal counsel to understand how the law’s expanded jurisdiction affects your organization. The amendments target foreign conduct that endangers Chinese network security regardless of whether it targets critical infrastructure.
Rock’s Musings
China’s approach to AI governance is instructive whether you operate there or not. The four-hour reporting window is the operational reality for any organization processing data onshore. That deadline creates forcing functions for detection capabilities, escalation procedures, and executive decision-making that most companies lack.
The explicit AI references in the Cybersecurity Law signal how governments think about regulating this technology. They’re not waiting for perfect information or comprehensive frameworks. They’re moving now with imperfect rules that will evolve through enforcement actions and court cases. Organizations that wait for regulatory clarity before building AI governance programs will find themselves scrambling to comply with requirements that already went into effect.
6. Shadow AI Crisis: 97% of Companies Have AI-Generated Code, 81% Lack Visibility
Summary
Cycode’s State of Product Security for the AI Era 2026 report reveals that while 97% of organizations are using or piloting AI coding assistants, and all confirm having AI-generated code in their codebases, 81% lack visibility into AI usage, and 65% report increased security risk associated with AI (Financial Content). The study of over 400 CISOs and security practitioners found that nearly one-third (30%) state AI now creates the majority of code in their organizations. The absence of oversight has created a massive “Shadow AI” problem, forcing a radical shift in enterprise security strategy as unmanaged AI becomes the top security concern. In response, 100% of organizations plan to invest more of their budget in AI-related security initiatives in the next 12 months.
Why It Matters
Shadow AI spreads across all departments, not just technical teams. Unlike shadow IT, which mainly affected engineering organizations, unauthorized AI adoption spans marketing, finance, HR, and operations, meaning sensitive business data flows through uncontrolled systems.
AI-generated code introduces invisible vulnerabilities. When 30% of code comes from AI assistants operating without security review, organizations inherit vulnerabilities they didn’t write, can’t see, and don’t know how to audit effectively.
Governance lags adoption by massive margins. The gap between 97% adoption and 81% lacking visibility creates a security blind spot where organizations know AI exists but can’t identify which systems use it, what data they process, or what risks they create.
What To Do About It
Inventory all AI systems employees are using right now. Deploy monitoring tools that identify AI service connections through cloud access security brokers. Catalog which departments use which tools and what data types flow through them.
Implement mandatory code review for all AI-generated code. Developers using GitHub Copilot, ChatGPT, or other coding assistants must submit generated code for security review before merging to production branches. Treat AI-written code as untrusted input.
Create approved AI tool alternatives with security controls. Employees use shadow AI because approved options don’t exist or don’t meet their needs. Provide vetted alternatives that balance productivity with data protection requirements.
Rock’s Musings
The shadow AI crisis isn’t coming; it arrived months ago. Every organization I talk to has the same problem at different scales. Employees are using AI tools without permission because those tools make them more productive, and IT hasn’t provided approved alternatives that work as well.
The problem compounds when you consider code generation. Thirty percent of your codebase coming from AI assistants means you’re inheriting vulnerabilities from models trained on public repositories that include buggy, insecure, and outdated patterns. Those vulnerabilities don’t announce themselves. They sit quietly in your production systems until someone exploits them. You can’t ban AI coding assistants; developers will use them anyway. The only realistic path forward is visibility, governance, and approved alternatives that meet both security requirements and developer needs.
7. Prompt Injection in AI Browsers: The Unsolved Vulnerability in Agent Mode
Summary
Security researchers have identified systemic prompt injection vulnerabilities across all major AI-powered browsers, including OpenAI’s ChatGPT Atlas, Perplexity’s Comet, and Opera Neon (TechCrunch). Brave’s research revealed that attackers can embed hidden commands in web pages that AI browsers execute when asked to summarize content, leading to data exfiltration, clipboard manipulation, and unauthorized actions. OpenAI’s CISO, Dane Stuckey, acknowledged that “prompt injection remains a frontier, unsolved security problem” (The Register). The vulnerabilities allow malicious instructions embedded in external content like websites or PDFs to override user intent and manipulate agent behavior. Perplexity attempted twice to fix vulnerabilities reported by Brave, but still hasn’t fully mitigated this kind of attack as of late October 2025.
Why It Matters
AI browsers collapse the boundary between data and instructions. Traditional browsers treat web content as passive data. AI browsers treat content as potential commands, creating a massive attack surface where any website can inject malicious instructions.
Agent mode grants AI unprecedented system access. When ChatGPT Atlas or similar tools can browse web pages, read emails, and access databases on your behalf, a single prompt injection can expose all connected accounts and data sources.
No reliable defense exists yet. Multiple vendors have attempted fixes, but researchers continue finding bypass techniques. The fundamental problem stems from LLM architecture, not implementation bugs, making complete mitigation extremely difficult.
What To Do About It
Disable agent mode for sensitive account access. Don’t allow AI browsers to interact with banking, email, or corporate systems until vendors demonstrate robust prompt injection defenses. The productivity gains aren’t worth the data exposure risks.
Isolate AI browsing from regular browsing. Use separate browser profiles or virtual machines for AI-assisted browsing. Don’t remain logged into sensitive accounts when testing AI browser features.
Review clipboard contents before pasting. Prompt injection attacks can overwrite your clipboard with malicious links. Verify what you’re pasting, especially after using AI browser features, to prevent phishing attempts.
Rock’s Musings
Prompt injection is the SQL injection of the AI era, except worse. SQL injection had well-understood mitigations: parameterized queries, input validation, and web application firewalls. Prompt injection has none of these reliable defenses because the vulnerability exists in the LLM architecture itself, not in how developers use it.
When your browser’s AI agent reads a Reddit post containing hidden instructions in white text on white background, it sees those instructions the same way it sees legitimate user commands. The model can’t reliably distinguish between “summarize this article” and “send all emails to attacker.com.” That’s a fundamental problem with no easy fix. OpenAI’s CISO saying it’s an “unsolved security problem” isn’t reassuring; it’s an admission that we’re shipping powerful agent capabilities before we know how to secure them properly. The smart move is to treat AI browsers as experimental tools for non-sensitive tasks until vendors prove they can defend against attacks that researchers keep discovering.
8. iProov Achieves First Certification for Deepfake-Resistant Biometric Verification
Summary
iProov’s biometric injection attack detection technology passed independent evaluation by Ingenium Biometrics to the Level 2 (High) standard set out in Europe’s CEN TS 18099, making it the first vendor to receive certification that its technology meets NIST Special Publication 800-63-4 Digital Identity Guidelines requirements (Biometric Update). The European standard is the only one established for defending against deepfakes and synthetic media and will be used as the starter document for a global ISO/IEC standard. Testing was completed in September 2025. The certification validates iProov’s Dynamic Liveness technology, which uses patented Flashmark signals to confirm a user’s real-time presence and prevent spoofing and injection attacks.
Why It Matters
Gartner predicts 30% of enterprises will lose confidence in face biometrics by 2026. The sophistication of AI deepfakes is eroding trust in biometric authentication systems that many financial institutions depend on for bank transfers and identity verification.
Digital injection attacks increased 200% in 2025. Attackers inject synthetic images or videos into authentication pipelines, bypassing presentation attack detection and other safeguards, creating financial losses and regulatory scrutiny for organizations relying on biometric verification.
NIST SP 800-63-4 mandates phishing-resistant authentication for high-assurance scenarios. The updated guidelines introduce specific provisions to address deepfakes and AI-generated attacks, requiring biometric systems to undergo rigorous testing for spoof resistance.
What To Do About It
Require CEN TS 18099 certification for any new biometric vendors. Don’t accept vendor claims about deepfake resistance without independent testing. The European standard provides objective verification that systems can defend against synthetic media attacks.
Implement multi-factor authentication with biometrics. Don’t rely solely on face verification. Combine biometric checks with device fingerprinting, behavioral analytics, and other contextual signals to create defense-in-depth against sophisticated attacks.
Test your existing biometric systems against deepfake attacks. Hire penetration testing teams to attempt injection attacks and synthetic media spoofing. Identify weaknesses before criminals do and upgrade systems that fail testing.
Rock’s Musings
The certification matters because deepfakes are getting good enough to fool most biometric systems. An Indonesian financial institution suffered 1,100 deepfake attacks to bypass their loan application service. That’s not theoretical risk; it’s operational reality for organizations relying on face verification as their primary authentication method.
iProov’s certification provides a measurable standard in a market full of marketing claims about AI resistance. CEN TS 18099 Level 2 testing means independent auditors verified the technology can defend against sophisticated synthetic media attacks, not just replay attacks and printed photos. That distinction matters when criminals use generative AI to create real-time deepfakes that bypass legacy liveness detection. Organizations betting their security on biometric authentication need vendors who can prove their defenses work, not just claim they do.
9. Google Cloud Report: Virtualization Infrastructure Emerges as Critical Blind Spot
Summary
Google Cloud Security’s 2026 forecast identifies virtualization infrastructure as a critical blind spot due to a confluence of systemic vulnerabilities (GBHackers). The foundational layer, long considered a pillar of strength, is now emerging as a vulnerability point as adversaries target the infrastructure that enables cloud computing and enterprise virtualization. The report warns that attackers are increasingly targeting third-party providers and exploiting zero-day vulnerabilities to conduct large-scale data exfiltration operations. Beyond traditional cybercrime, the report warns of an emerging “on-chain cybercrime economy” as threat actors migrate core operational components onto public blockchains, gaining unprecedented resilience against traditional law enforcement takedown efforts.
Why It Matters
Virtualization layer compromise affects all workloads. A single vulnerability in hypervisor software or virtual machine management tools can expose every workload running on that infrastructure, multiplying the blast radius of successful attacks.
Cloud service providers become high-value targets. Attackers focus on infrastructure that supports multiple customers, understanding that one successful breach can provide access to dozens or hundreds of victim organizations.
On-chain cybercrime operations resist takedowns. When criminals move command and control infrastructure, payment processing, and data marketplaces onto public blockchains, traditional law enforcement strategies become ineffective.
What To Do About It
Audit virtual machine configurations for unnecessary privileges. Review which VMs have access to hypervisor management interfaces, which can interact with other tenants’ resources, and which run with elevated permissions that attackers could exploit.
Implement network segmentation at the virtualization layer. Don’t rely solely on application-level firewalls. Use microsegmentation to isolate workloads at the hypervisor level, preventing lateral movement if one VM is compromised.
Monitor for unusual virtualization layer activity. Deploy monitoring tools that track VM creation, snapshot operations, and hypervisor API calls. Attackers often use administrative interfaces to move laterally or exfiltrate data.
Rock’s Musings
The virtualization layer has been invisible infrastructure for so long that security teams forgot it exists. Hypervisors, virtual machine managers, and cloud orchestration platforms became trusted foundations we stopped questioning. That trust creates the blind spot Google’s report highlights.
When attackers compromise the virtualization layer, they don’t just get one system; they get every workload running on that infrastructure. A hypervisor exploit provides access to tenant isolation boundaries, memory contents, and administrative interfaces that control entire cloud environments. The on-chain cybercrime economy compounds this problem by making criminal infrastructure as resilient as legitimate blockchain applications. You can’t seize servers that don’t exist or shut down marketplaces that live on distributed ledgers. Organizations need to rethink their security models for infrastructure they can’t directly control or easily monitor.
10. Microsoft Digital Defense Report: Identity Attacks Surge 32% in First Half of 2025
Summary
Microsoft’s sixth annual Digital Defense Report reveals that more than 97% of identity attacks are password attacks, with identity-based attacks surging by 32% in the first half of 2025 (Microsoft On the Issues). The vast majority of malicious sign-in attempts organizations receive are large-scale password guessing attempts using credentials from credential leaks. Attackers get usernames and passwords for these bulk attacks largely from credential leaks, with a surge in the use of infostealer malware by cybercriminals. Infostealers can secretly gather credentials and information about online accounts, like browser session tokens, at scale. In 80% of the cyber incidents Microsoft’s security teams investigated last year, attackers sought to steal data, with over half of cyberattacks with known motives driven by extortion or ransomware.
Why It Matters
Password-based authentication remains the weakest link. Despite decades of security warnings, 97% of identity attacks exploit password vulnerabilities, proving that traditional authentication methods can’t defend against modern credential theft operations.
Infostealer malware operates at industrial scale. Criminals buy stolen credentials and browser session tokens on cybercrime forums, making it easy for anyone to access accounts for purposes such as ransomware delivery without sophisticated hacking skills.
Data theft drives most attacks, not espionage. Organizations face opportunistic criminals looking for financial gain rather than nation-state actors conducting intelligence operations, fundamentally changing how security teams should prioritize defenses.
What To Do About It
Deploy FIDO2-based phishing-resistant authentication. Hardware security keys and biometric authentication prevent credential theft from working. Attackers can steal passwords but can’t replicate physical devices or biometric data.
Monitor for impossible travel and anomalous session behavior. When credentials are stolen, usage patterns change. Track login locations, device fingerprints, and access times to identify compromised accounts before attackers cause damage.
Implement comprehensive endpoint protection against infostealers. Traditional antivirus can’t detect modern information-stealing malware. Deploy behavioral analysis and memory protection tools that identify credential harvesting before data leaves the system.
Rock’s Musings
Password attacks work because passwords remain the default authentication method across most organizations. We’ve known for decades that password-based security is fundamentally broken, yet here we are in 2025 with 97% of identity attacks exploiting the same vulnerability.
The infostealer surge makes this worse. Attackers don’t need to guess passwords anymore; they buy them in bulk from criminal marketplaces. Your password strength doesn’t matter when malware harvests it directly from browser memory along with session tokens that bypass authentication entirely. The solution isn’t better password policies or mandatory rotation schedules. It’s eliminating passwords altogether and moving to phishing-resistant authentication that attackers can’t steal or replicate. FIDO2 hardware keys solve this problem today. The only question is how long organizations wait before credential theft forces them to deploy what they should have implemented years ago.
The One Thing You Won’t Hear About But You Need To: Paying Ransoms All But Guarantees You’ll Be Hit Again
11. The One Thing You Won’t Hear About But You Need To: Paying Ransoms Guarantees You’ll Be Hit Again
Summary
CrowdStrike’s 2025 State of Ransomware Survey reveals that 83% of organizations that paid a ransom were attacked again, and 93% had data stolen anyway (CrowdStrike). The report surveyed global organizations and found that 76% struggle to match the speed and sophistication of AI-powered attacks, with 48% citing AI-automated attack chains as today’s greatest ransomware threat. Nearly 50% of organizations fear they can’t detect or respond as fast as AI-driven attacks can execute, with fewer than 25% recovering within 24 hours and nearly 25% suffering significant disruption or data loss. The report identifies a critical leadership disconnect, with 76% reporting a gap between leadership’s perceived ransomware readiness and actual preparedness. AI makes phishing lures more convincing, with 87% saying AI-enhanced social engineering bypasses traditional defenses.
Why It Matters
Paying ransoms marks you as a profitable target. Criminals share information about which organizations pay. When you transfer funds, you signal that your data is valuable enough to pay for and that your defenses are weak enough to breach successfully.
Data theft happens before encryption in most attacks. Organizations that pay ransoms to decrypt systems discover that attackers already exfiltrated sensitive data. The payment recovers systems but doesn’t prevent data leaks, regulatory fines, or reputational damage.
AI-automated attack chains collapse response windows. Traditional incident response assumes hours or days to contain breaches. AI-driven attacks execute in minutes, making manual response procedures obsolete before security teams can mobilize.
What To Do About It
Build recovery capabilities that don’t require ransom payments. Maintain offline backups, test restoration procedures quarterly, and ensure critical systems can be rebuilt from clean images without paying criminals for decryption keys.
Implement automated threat detection and response. Manual security operations can’t keep pace with AI-automated attack chains. Deploy tools that detect and contain threats in seconds, not hours, to match the speed of AI-driven attacks.
Prepare executives for no-pay decisions before breaches occur. The leadership disconnect stems from unrealistic confidence in recovery capabilities. Run tabletop exercises that demonstrate actual response timelines and recovery costs without ransom payments.
Rock’s Musings
The ransom payment data destroys the last argument for paying criminals. Organizations convinced themselves that paying was the pragmatic choice to recover systems quickly and minimize disruption. The numbers prove that logic is completely backward.
83% of organizations that paid got hit again. That’s not coincidence; it’s business strategy. Criminals track who pays and prioritize those targets for future attacks because the return on investment is proven. Paying ransoms doesn’t end your problems; it guarantees you’ll face them again with the same attackers or their colleagues who bought your contact information on criminal forums. The 93% data theft rate is equally damning. Organizations pay to decrypt systems while sensitive data is already posted on leak sites or sold to competitors. You get your systems back but still face regulatory investigations, customer lawsuits, and reputation damage from data exposure.
The AI automation angle makes this worse. When attacks execute faster than humans can respond, the only viable defense is automated detection and response that operates at machine speed. Organizations clinging to manual incident response procedures are bringing knives to drone fights. The leadership disconnect is the most dangerous finding. When executives believe their organizations are more prepared than reality supports, they make decisions based on false confidence that collapse under actual breach conditions.
👉 What do you think? Ping me with the story that keeps you up at night, or the one you think I overrated.
👉 The Wrap-Up drops every Friday. Stay safe, stay skeptical.
👉 For deeper dives, visit RockCyber.
Citations
AI Cyber Magazine. (n.d.). Governing the Ungovernable. https://aicybermagazine.com/governing-the-ungovernable/
ArXiv. (2025). AAGATE: A NIST AI RMF-Aligned Governance Platform for Agentic AI. https://arxiv.org/abs/2510.25863
Biometric Update. (2025, November 5). iProov certified for biometric deepfake protection with Ingenium IAD test. https://www.biometricupdate.com/202511/iproov-certified-for-biometric-deepfake-protection-with-ingenium-iad-test
Brave. (2025, October 21). Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers. https://brave.com/blog/unseeable-prompt-injections/
CrowdStrike. (2025, October 21). CrowdStrike 2025 Ransomware Report: AI Attacks Are Outpacing Defenses. https://www.crowdstrike.com/en-us/press-releases/ransomware-report-ai-attacks-outpacing-defenses/
Cyber Press. (2025, November 3). OpenAI Launches Aardvark GPT-5 Agent to Automatically Detect and Fix Vulnerabilities. https://cyberpress.org/openai-launches-aardvark-vulnerabilities/
Financial Content. (2025, November 5). Report: ‘Shadow AI’ Crisis Looms as 100% of Companies Have AI-Generated Code, But 81% of Security Teams Lack Visibility. https://markets.financialcontent.com/wral/article/bizwire-2025-11-5-report-shadow-ai-crisis-looms-as-100-of-companies-have-ai-generated-code-but-81-of-security-teams-lack-visibility
Fortune. (2025, October 23). Experts warn OpenAI’s ChatGPT Atlas has security vulnerabilities that could turn it against users. https://fortune.com/2025/10/23/cybersecurity-vulnerabilities-openai-chatgpt-atlas-ai-browser-leak-user-data-malware-prompt-injection/
GBHackers. (2025, November 5). Google Warns: AI Makes Cyber Threats Faster and Smarter by 2026. https://gbhackers.com/google-warns-ai/
GitHub. (n.d.). AAGATE Repository. https://github.com/rocklambros/AAGATE
GitHub. (n.d.). TokenTally Repository. https://github.com/TikiTribe/TokenTally
Global Policy Watch. (2025, October 29). China Amends Cybersecurity Law and Incident Reporting Regime to Address AI and Infrastructure Risks. https://www.globalpolicywatch.com/2025/10/china-amends-cybersecurity-law-and-incident-reporting-regime-to-address-ai-and-infrastructure-risks/
Help Net Security. (2025, October 29). AI agents can leak company data through simple web searches. https://www.helpnetsecurity.com/2025/10/29/agentic-ai-security-indirect-prompt-injection/
Inside Privacy. (2025, October 29). China Amends Cybersecurity Law and Incident Reporting Regime to Address AI and Infrastructure Risks. https://www.insideprivacy.com/cybersecurity-2/china-amends-cybersecurity-law-and-incident-reporting-regime-to-address-ai-and-infrastructure-risks/
Medium. (n.d.). Most GenAI startups today burn cash without clear paths to profitability. https://medium.com/analysts-corner/most-genai-startups-today-burn-cash-without-clear-paths-to-profitability-who-will-survive-ca5f10b73d30
Microsoft On the Issues. (2025, October 16). Extortion and ransomware drive over half of cyberattacks. https://blogs.microsoft.com/on-the-issues/2025/10/16/mddr-2025/
NIST. (2023). AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework
Rock Cyber Musings. (n.d.). AAGATE: Governing the Ungovernable AI Agent. https://www.rockcybermusings.com/p/aagate-governing-the-ungovernable-operationalizing-nist-ai-rmf-agentic-ai
Rock Cyber Musings. (n.d.). I Built TokenTally After a Friend Asked: “How Do I Budget for ChatGPT?” https://www.rockcybermusings.com/p/tokentally-open-source-llm-cost-calculator-ai-startups
SiliconANGLE. (2025, November 4). Google Cloud report warns of AI-driven cyberattacks and global extortion surge in 2026. https://siliconangle.com/2025/11/04/google-cloud-report-warns-ai-driven-cyberattacks-global-extortion-surge-2026/
Simon Willison. (2025, November 2). New prompt injection papers: Agents Rule of Two and The Attacker Moves Second. https://simonwillison.net/2025/Nov/2/new-prompt-injection-papers/
TechCrunch. (2025, October 25). The glaring security risks with AI browser agents. https://techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
The Information. (n.d.). Anthropic’s Gross Margin Flags Long-Term AI Profit Questions. https://www.theinformation.com/articles/anthropics-gross-margin-flags-long-term-ai-profit-questions
The Register. (2025, October 28). AI browsers wide open to attack via prompt injection. https://www.theregister.com/2025/10/28/ai_browsers_prompt_injection/
TokenTally. (n.d.). Live application. https://tokentally.com/



