Weekly Musings Top 10 AI Security Wrapup: Issue 27 | January 9 - January 15, 2026
Deepfakes are Front and Center. Agentic AI Rewrites the Threat Model: 87% See AI Vulnerabilities as Fastest-Growing Risk
87% of cybersecurity leaders now identify AI vulnerabilities as the fastest-growing risk in their organizations. That number comes from the World Economic Forum’s Global Cybersecurity Outlook 2026, released this week ahead of Davos. When nearly nine out of ten security professionals agree on anything, you pay attention. The data confirms what we’ve been tracking in these pages: your AI tools aren’t just assets anymore. They’re attack surfaces.
This week crystallized a pattern that will define enterprise security for the next several years. Agentic AI systems are rewriting the threat model. The autonomous capabilities that make these tools valuable are also the ones that make them dangerous when compromised. ServiceNow’s Now Assist was caught with a critical flaw that allowed unauthenticated attackers to impersonate any user with nothing but an email address. NIST opened public comment on how to secure AI agents. And multiple governments launched formal investigations into Grok after it enabled mass creation of nonconsensual intimate imagery.
AI agents are production systems that process sensitive data, make decisions, and execute actions. When they fail, they fail fast and at scale. When they are hijacked, they bring the attacker into your perimeter using legitimate credentials. Your 2026 security roadmap needs to account for this new reality.
1. World Economic Forum Declares AI Vulnerabilities the Fastest-Growing Cyber Risk
The World Economic Forum released its Global Cybersecurity Outlook 2026 on January 12, drawing on survey responses from 804 C-suite executives across 57 countries (World Economic Forum). The headline finding: 87% of respondents identified AI-related vulnerabilities as the fastest-growing cyber risk over the past year, ahead of longstanding concerns about ransomware and supply chain disruption.
Why it matters
Data leaks linked to generative AI (34%) now outweigh concerns about adversarial AI capabilities (29%), marking a reversal from 2025 when adversarial AI topped the list at 47%
Organizations assessing the security of AI tools nearly doubled from 37% in 2025 to 64% in 2026
CEOs rate cyber-enabled fraud as their top concern, surpassing ransomware for the first time
What to do about it
Review your AI security assessment cadence and move from one-time reviews to periodic evaluations before deployment
Quantify data exposure risks from generative AI tools accessing sensitive repositories
Align board-level risk reporting with the WEF framework to speak the language your executives are hearing at Davos
Rock’s Musings
The WEF report is useful because it gives you ammunition. Your board members will attend Davos or read the coverage. They’ll come back asking what you’re doing about AI security. Now you have numbers to support the budget request you’ve been preparing.
What I find more interesting is the split between CEOs and CISOs on priority risks. CEOs worry about fraud and phishing. CISOs worry about ransomware and supply chains. That gap represents a communication problem. If your leadership thinks fraud is the big threat while you’re focused on supply chain resilience, you’re not aligned. And misalignment means someone gets surprised.
2. ServiceNow “BodySnatcher” Vulnerability Enables Unauthenticated User Impersonation
AppOmni disclosed CVE-2025-12420 on January 13, a critical vulnerability in ServiceNow’s Now Assist AI Agents and Virtual Agent API carrying a CVSS score of 9.3 (AppOmni). The flaw allows unauthenticated attackers to impersonate any ServiceNow user using only their email address, bypassing MFA and SSO controls entirely.
Why it matters
Attackers can execute privileged AI agent workflows without credentials by chaining a hardcoded platform-wide secret with account-linking logic that trusts email addresses
ServiceNow operates as the IT service management backbone for 85% of Fortune 500 companies
The vulnerability transforms a standard chatbot into a launchpad for malicious AI agent execution
What to do about it
Self-hosted customers should upgrade to Now Assist AI Agents version 5.1.18 or later and Virtual Agent API version 3.15.2 or later
Cloud-hosted customers received automatic patches on October 30, 2025, but should verify deployment
Audit Virtual Agent provider configurations and enforce MFA for all account-linking flows
Rock’s Musings
This is the most severe AI-driven vulnerability uncovered to date. Not my words. That’s how AppOmni characterized it. An attacker halfway across the globe with nothing but an email address could impersonate an administrator, execute AI agents, and create backdoor accounts with full privileges.
The vulnerability existed because developers made convenient assumptions about trust. Email addresses aren’t secrets. Hardcoded platform secrets aren’t secrets either, once they’re shared across every deployment. The lesson here extends beyond ServiceNow. Your AI agents need cryptographic identity verification, not email-based trust models. The convenience that makes AI integration easy also makes it exploitable.
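The principle travels well beyond this one product. Here is a minimal sketch of what verifying a linking request cryptographically can look like, assuming a hypothetical agent gateway; this is illustrative Python, not ServiceNow’s API or AppOmni’s recommended fix.

```python
# Minimal sketch: accept an account-linking request only with a signed,
# short-lived token tied to a per-tenant secret, never a caller-supplied
# email address. Hypothetical gateway code, not ServiceNow's actual API.
import hashlib
import hmac
import time

PER_TENANT_SECRET = b"rotate-me-per-tenant"  # never a platform-wide hardcoded value
TOKEN_TTL_SECONDS = 300

def sign_link_request(user_id: str, issued_at: int,
                      secret: bytes = PER_TENANT_SECRET) -> str:
    """Return an HMAC over the user ID and issue timestamp."""
    msg = f"{user_id}|{issued_at}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_link_request(user_id: str, issued_at: int, signature: str,
                        secret: bytes = PER_TENANT_SECRET) -> bool:
    """Reject stale or forged linking requests; knowing an email is never enough."""
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False
    expected = sign_link_request(user_id, issued_at, secret)
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    now = int(time.time())
    sig = sign_link_request("user-42", now)
    print(verify_link_request("user-42", now, sig))                 # True
    print(verify_link_request("someone-else@evil.test", now, sig))  # False
```

The design point is the per-tenant secret and the expiry: a shared platform-wide value hands every deployment the same skeleton key, which is exactly the failure mode AppOmni described.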
3. NIST Opens Public Comment on AI Agent Security Framework
The National Institute of Standards and Technology published a Request for Information in the Federal Register on January 8 seeking input on securing AI agent systems (Federal Register, docket NIST-2025-0035). The RFI specifically targets systems capable of taking autonomous actions that impact real-world systems, marking the first federal effort focused on agentic AI security rather than general LLM concerns.
Why it matters
NIST is distinguishing between chatbot security and agent security, recognizing that autonomous action introduces unique risk categories
The agency is seeking concrete examples, best practices, and actionable recommendations rather than theoretical frameworks
Comments are due March 9, 2026, and will inform technical guidelines for measuring and improving AI system security
What to do about it
Submit organizational experiences with AI agent deployment through regulations.gov
Document your internal agent security controls, including permission scoping, action boundaries, and human-in-the-loop requirements
Monitor CAISI outputs for emerging standards that may influence audit and compliance expectations
Rock’s Musings
NIST is asking the right question at the right time. The RFI explicitly excludes general chatbot security and retrieval-augmented generation systems that don’t act autonomously. They want to know about agents that make persistent changes to the external state.
If your organization runs AI agents in production, you should submit comments. Not because regulators will read your specific submission and act on it. Documenting what you do will force you to articulate your security model. Most organizations can’t answer basic questions about agent permissions and monitoring. The comment process is a forcing function for internal clarity.
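If you need a starting point for that articulation, here is a minimal sketch of an agent security manifest and a review check, assuming a hypothetical internal format; the field names are mine, not the RFI’s.

```python
# Minimal sketch of an agent security manifest plus a completeness review.
# Hypothetical internal format: if a field cannot be filled in, the agent
# is not ready for production.
from dataclasses import dataclass

@dataclass
class AgentManifest:
    name: str
    owner: str                          # accountable human team
    allowed_actions: list[str]          # explicit allowlist, no wildcards
    data_scopes: list[str]              # repositories and APIs the agent may read
    requires_human_approval: list[str]  # actions that must be co-signed
    max_blast_radius: str               # e.g. "single ticket", "single tenant"
    audit_log_destination: str

REQUIRED_NONEMPTY = ("owner", "allowed_actions", "requires_human_approval",
                     "audit_log_destination")

def review(manifest: AgentManifest) -> list[str]:
    """Return review findings; an empty list means the manifest is complete."""
    findings = []
    for attr in REQUIRED_NONEMPTY:
        if not getattr(manifest, attr):
            findings.append(f"{manifest.name}: '{attr}' is undocumented")
    if "*" in manifest.allowed_actions:
        findings.append(f"{manifest.name}: wildcard actions are not permitted")
    return findings

if __name__ == "__main__":
    m = AgentManifest(
        name="ticket-triage-agent",
        owner="it-service-desk",
        allowed_actions=["read_ticket", "update_priority"],
        data_scopes=["itsm.tickets"],
        requires_human_approval=["close_ticket"],
        max_blast_radius="single ticket",
        audit_log_destination="siem://agent-actions",
    )
    print(review(m) or "manifest complete")
```

Whatever format you settle on, writing one of these per agent surfaces the gaps faster than any policy document will.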
4. n8n Workflow Platform Hit by Critical Vulnerabilities and Supply Chain Attack
The popular n8n workflow automation platform faced a one-two punch this week: critical remote code execution vulnerabilities in its core platform and a supply chain attack targeting its community node ecosystem.
On January 7, Cyera Research Labs disclosed CVE-2026-21858, a CVSS 10.0 unauthenticated RCE affecting an estimated 100,000 servers globally (CyberScoop). On January 12, Endor Labs reported that attackers had uploaded malicious npm packages masquerading as Google Ads integrations to steal OAuth credentials (Endor Labs).
The platform vulnerabilities allow unauthenticated attackers to access arbitrary files by exploiting improper handling of webhook requests. A separate flaw, CVE-2025-68668, enables authenticated users to bypass the Python sandbox and execute system commands. The supply chain attack exploited the fact that community nodes run with the same access as n8n itself, with no sandboxing between node code and the runtime.
Why it matters
n8n serves as a centralized credential vault holding OAuth tokens and API keys for dozens of integrated services, including Google, Salesforce, and Stripe.
Community nodes can read environment variables, access the file system, make outbound network requests, and receive decrypted credentials during workflow execution.
The supply chain attack campaign remains active, with new malicious packages appearing as recently as January 13.
What to do about it
Upgrade to n8n version 2.0.0 or later immediately. Version 1.121.0 addresses the unauthenticated RCE.
Disable community nodes on self-hosted instances by setting N8N_COMMUNITY_PACKAGES_ENABLED to false.
Audit installed packages against known malicious names, including n8n-nodes-hfgjf-irtuinvcm-lasdqewriit and variants, and monitor outbound traffic for connections to unfamiliar domains.
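For the audit step, something like the following minimal sketch can help. It assumes community packages live under the default ~/.n8n/nodes directory (adjust for your deployment); the single known-bad name comes from the reporting above, and the Google Ads look-alike pattern is my own assumption based on how the campaign was described.

```python
# Minimal sketch: flag known-bad or suspicious community packages on a
# self-hosted n8n instance. The install path is an assumption; point
# N8N_NODES_DIR at wherever your instance keeps community packages.
import json
import os
import re
from pathlib import Path

N8N_NODES_DIR = Path(os.environ.get("N8N_NODES_DIR", Path.home() / ".n8n" / "nodes"))

KNOWN_BAD = {"n8n-nodes-hfgjf-irtuinvcm-lasdqewriit"}  # extend with your own intel
SUSPICIOUS = re.compile(r"n8n-nodes-.*(google[-_]?ads|gads)", re.IGNORECASE)

def audit(nodes_dir: Path) -> list[str]:
    """Walk package.json files and flag dependencies that match the blocklist."""
    findings = []
    for pkg_json in nodes_dir.rglob("package.json"):
        try:
            deps = json.loads(pkg_json.read_text()).get("dependencies", {})
        except (json.JSONDecodeError, OSError):
            continue
        for name in deps:
            if name in KNOWN_BAD:
                findings.append(f"KNOWN MALICIOUS: {name} ({pkg_json})")
            elif SUSPICIOUS.search(name):
                findings.append(f"REVIEW: {name} ({pkg_json})")
    return findings

if __name__ == "__main__":
    results = audit(N8N_NODES_DIR)
    print("\n".join(results) or "no flagged community packages found")
```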
Rock’s Musings
n8n is having a rough week. Between critical platform vulnerabilities and a supply chain attack targeting its extension ecosystem, organizations running the platform face attack vectors from multiple directions simultaneously.
The compound risk here is significant. Workflow automation platforms occupy a privileged position in enterprise infrastructure. They hold credentials, execute actions across services, and operate continuously without manual oversight. When that platform has critical vulnerabilities and its extension model treats npm packages as trusted code, the blast radius extends to every connected system. If you run n8n, treat this as a priority one incident.
5. US Senate Unanimously Passes DEFIANCE Act Enabling Deepfake Victim Lawsuits
The US Senate passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act by unanimous consent on January 13, creating a federal civil remedy for victims of nonconsensual intimate deepfakes (Roll Call). The bill allows victims to sue creators and distributors for minimum damages of $150,000, with higher damages for cases involving retaliation or leading to harassment.
Senator Dick Durbin, the bill’s sponsor, cited the Grok crisis as evidence of urgency, noting that X failed to prevent the chatbot from generating exploitative images even after the problem was reported. The bill now moves to the House, where it stalled in 2024.
Why it matters
The DEFIANCE Act builds on the Take It Down Act signed in 2025, which criminalized publishing nonconsensual intimate imagery. Together, they create a comprehensive federal framework for deepfake accountability.
The bill establishes civil liability for anyone who creates, possesses with the intent to distribute, or knowingly receives nonconsensual sexually explicit deepfakes.
Over 90% of deepfake content online is nonconsensual sexually explicit imagery, with women targeted in nine out of ten cases.
What to do about it
Review content moderation policies for any AI tools that could generate intimate imagery to assess DEFIANCE Act exposure.
Implement reporting mechanisms for nonconsensual intimate imagery to demonstrate compliance efforts.
Brief legal teams on emerging civil liability exposure from AI-generated content, particularly for platforms hosting user-generated content.
Rock’s Musings
Bipartisan unanimous consent in the current Senate tells you something about the political temperature on this issue. The Grok deepfake controversy created visible, sympathetic victims. Legislators responded.
For enterprise security, the DEFIANCE Act matters because it creates financial consequences for AI safety failures. If your AI tools can generate intimate imagery, and someone uses them to create nonconsensual content, and that content gets traced back to your platform, you’re potentially in the liability chain. Safety filters are not just ethical choices anymore. They are legal risk mitigation.
6. UK Regulator Launches Formal Investigation Into X Over Grok Deepfakes
The UK’s communications regulator, Ofcom, opened a formal investigation into X under the Online Safety Act on January 12, examining whether the platform’s Grok chatbot violates its legal duties to protect users from illegal content (Financial Times). The investigation focuses on Grok’s ability to generate nonconsensual intimate images, including depictions of minors.
Why it matters
Ofcom has enforcement powers, including fines up to £18 million or 10% of global revenue, and potential UK service restrictions
The investigation represents the first major Online Safety Act enforcement action targeting AI-generated content
Indonesia and Malaysia have already temporarily blocked Grok, while the European Commission extended document retention orders
What to do about it
Monitor regulatory developments in jurisdictions where your organization deploys AI image generation capabilities
Review content moderation and safety filter implementations against evolving legal standards
Document decision-making processes for AI safety features to demonstrate reasonable compliance efforts
Rock’s Musings
The Grok situation is what happens when you ship an AI product without adequate safety controls and then argue about it publicly. Elon Musk responded to the crisis by challenging his followers to “break Grok image moderation.” That’s not a security posture. That’s a liability multiplier.
For security leaders watching this unfold, the lesson isn’t about Grok specifically. The lesson is that AI safety failures create regulatory exposure. Your generative AI deployments need documented safety reviews before they become targets of investigation. The time to demonstrate due diligence is before enforcement begins.
7. California Attorney General Launches Investigation Into xAI Over Deepfake CSAM
California Attorney General Rob Bonta announced on January 14 that his office is investigating xAI over the “proliferation of nonconsensual sexually explicit material produced using Grok” (California AG). The investigation marks the first major US government enforcement action addressing AI-generated intimate imagery.
Why it matters
Analysis cited by the AG found more than half of 20,000 images generated by Grok between Christmas and New Year depicted people in minimal clothing, with some appearing to be children
California law prohibits using AI to create sexual images without consent, providing enforcement authority
Governor Newsom publicly called for the investigation, describing xAI’s actions as creating “a breeding ground for predators.”
What to do about it
Ensure AI content generation systems include age detection and identity verification controls
Implement logging and audit capabilities for content generation to support compliance demonstrations
Review state-level AI regulations that may apply to your organization’s generative AI deployments
Rock’s Musings
California often “leads” on technology regulation (I’ll save my thoughts on California for a separate post). However, when its attorney general opens an investigation, other jurisdictions watch. The xAI investigation will establish precedents for how AI safety failures translate into legal liability.
xAI’s response to the investigation was an automated email stating “Legacy Media Lies.” That response strategy will not survive legal discovery. Organizations deploying AI image generation should document their safety measures now, while they can still shape the narrative. Waiting until regulators ask questions means answering from a defensive position.
8. Microsoft January 2026 Patch Tuesday Fixes 114 Flaws, Including Actively Exploited Zero-Day
Microsoft released its first Patch Tuesday of 2026 on January 14, addressing 114 security vulnerabilities across Windows, Office, Azure, SharePoint, and SQL Server (BleepingComputer). One vulnerability, CVE-2026-20805, is actively exploited in the wild: the Desktop Window Manager information disclosure flaw allows attackers to leak memory addresses, defeating Address Space Layout Randomization (ASLR) protections. CISA immediately added CVE-2026-20805 to its Known Exploited Vulnerabilities catalog, requiring federal agencies to patch by February 3.

Eight vulnerabilities received Critical ratings, including CVE-2026-20854, a remote code execution flaw in the Windows Local Security Authority Subsystem Service (LSASS) that could enable credential theft and lateral movement across enterprise networks.
Why it matters
The actively exploited zero-day enables attackers to bypass memory protections and chain with other exploits for privilege escalation.
LSASS vulnerabilities directly threaten Active Directory environments and domain-wide compromise.
Secure Boot certificate expiration issues (CVE-2026-21265) create a compliance time bomb requiring action before June 2026.
What to do about it
Prioritize patching systems with internet exposure, particularly Windows Server Update Services and SMB servers.
Test Office patches in staging environments before deployment to avoid Preview Pane exploit vectors.
Review Microsoft’s Secure Boot certificate deployment playbook to prevent boot failures when certificates expire in June.
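One way to keep the prioritization honest is to check your watchlist against CISA’s Known Exploited Vulnerabilities catalog and pull the federal remediation due dates. A minimal sketch, assuming the public KEV JSON feed URL and field names as they stand at the time of writing:

```python
# Minimal sketch: look up this month's headline CVEs in CISA's Known
# Exploited Vulnerabilities catalog and report their federal due dates.
# Feed URL and field names are assumptions; verify against cisa.gov before use.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
WATCHLIST = {"CVE-2026-20805", "CVE-2026-20854"}  # from the advisory above

def kev_status(watchlist: set[str]) -> dict[str, str]:
    """Return {CVE ID: due date} for watchlist entries present in the catalog."""
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        catalog = json.load(resp)
    return {v["cveID"]: v.get("dueDate", "n/a")
            for v in catalog.get("vulnerabilities", [])
            if v["cveID"] in watchlist}

if __name__ == "__main__":
    hits = kev_status(WATCHLIST)
    for cve in sorted(WATCHLIST):
        print(f"{cve}: {hits.get(cve, 'not in KEV catalog')}")
```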
Rock’s Musings
January Patch Tuesdays are historically large because vendors delay releases during the holidays. What makes this one notable is the Desktop Window Manager zero-day already being exploited before the patch dropped. DWM has been a “frequent flyer” on Patch Tuesday since 2022 with 20 CVEs patched, but this is the first information disclosure bug exploited in the wild. Information disclosure sounds boring until you realize it’s the setup punch for privilege escalation.
The LSASS remote code execution flaw deserves your attention even without active exploitation. LSASS handles authentication requests. A successful exploit there doesn’t just compromise one system. It gives attackers the keys to move laterally through your entire domain. That’s the difference between an incident and a breach. Don’t let holiday patch fatigue turn into February regret.
9. OpenAI Launches ChatGPT Health With Isolated Data Architecture
OpenAI announced ChatGPT Health on January 7, a dedicated health experience with isolated data storage, encrypted health conversations, and optional medical record integration (OpenAI). The company reports over 230 million weekly health queries and is positioning the product with enterprise healthcare offerings, including HIPAA-compliant ChatGPT for Healthcare with Business Associate Agreements.
Why it matters
Health conversations operate in a separate space with purpose-built encryption and isolation from other ChatGPT data
Health data and memories are not used to train OpenAI’s foundation models
Major healthcare institutions, including Stanford Medicine and Cedars-Sinai, are already deploying ChatGPT for Healthcare
What to do about it
Evaluate data isolation claims against your organization’s healthcare compliance requirements
Review the OpenAI Business Associate Agreement terms before deploying in HIPAA-regulated environments
Monitor employee use of personal ChatGPT Health features with organizational data
Rock’s Musings
OpenAI is betting that healthcare represents a massive market opportunity if they can solve the trust problem. The architectural decisions here are interesting: separate memory, separate encryption, and no model training on health data. They’re building the compliance story alongside the product.
Whether this actually protects patient data depends on implementation details that aren’t fully public. The Center for Democracy and Technology notes that health data shared with AI tools often falls outside HIPAA protections. The burden remains on organizations to verify claims before deploying sensitive data. Trust but verify applies to AI vendors, too.
10. GreyNoise Documents 91,000 Attack Sessions Targeting LLM Deployments
GreyNoise published threat intelligence on January 10 documenting 91,403 malicious sessions targeting AI infrastructure between October 2025 and January 2026 (GreyNoise). Two distinct campaigns emerged: SSRF exploitation (10,934 sessions) and systematic reconnaissance of 73+ LLM model endpoints (80,469 sessions).
Why it matters
The reconnaissance campaign targeted misconfigured proxy servers exposing commercial API access across GPT-4o, Claude, Llama, DeepSeek, Gemini, Mistral, Qwen, and Grok
Attack infrastructure traces to IPs with histories of CVE exploitation across more than 200 vulnerabilities
Professional capability indicators suggest reconnaissance feeding larger exploitation pipelines
What to do about it
Audit LLM proxy configurations for unintended external access
Configure Ollama and similar tools to accept models only from trusted registries
Implement egress filtering to prevent SSRF callbacks and detect enumeration patterns
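For the detection piece, here is a minimal sketch that flags clients probing an unusually wide spread of model endpoints in your proxy access logs. The combined log format, the endpoint path patterns, and the threshold are all assumptions to adapt to your own gateway.

```python
# Minimal sketch: flag client IPs that touch many distinct model-serving
# endpoints, the enumeration pattern GreyNoise describes. Log format and
# path regexes are assumptions; tune both for your proxy.
import re
import sys
from collections import defaultdict

LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')
MODEL_PATH = re.compile(r"/v1/(chat/completions|completions|models)|/api/(generate|chat)")
THRESHOLD = 20  # distinct model endpoints from one IP before we flag it

def flag_enumerators(log_path: str) -> dict[str, int]:
    """Return {client IP: distinct model endpoints probed} above the threshold."""
    paths_by_ip: dict[str, set] = defaultdict(set)
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = LOG_LINE.match(line)
            if not match:
                continue
            ip, path = match.groups()
            if MODEL_PATH.search(path):
                paths_by_ip[ip].add(path)
    return {ip: len(paths) for ip, paths in paths_by_ip.items() if len(paths) >= THRESHOLD}

if __name__ == "__main__":
    for ip, count in sorted(flag_enumerators(sys.argv[1]).items(), key=lambda kv: -kv[1]):
        print(f"{ip}: {count} distinct model endpoints probed")
```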
Rock’s Musings
This is what AI exploitation looks like at scale. Two IPs generated over 80,000 sessions in eleven days, systematically testing model endpoints. The probes used innocuous queries like “How many states are there in the United States?” to identify responsive models without triggering alerts.
If you’re running exposed LLM endpoints, you’re already on someone’s target list. GreyNoise characterized the threat actors as professional operators building target lists. The enumeration campaign is reconnaissance for future exploitation. Lock down your proxy configurations now, before the follow-on attacks arrive.
The One Thing You Won’t Hear About But You Need To
AI Agents Are Becoming the New Insider Threat
Palo Alto Networks Chief Security Intelligence Officer Wendi Whitmore told The Register on January 4 that AI agents represent the new insider threat for 2026 (The Register). Gartner estimates 40% of enterprise applications will integrate with task-specific AI agents by year’s end, up from less than 5% in 2025. That rapid deployment creates a “superuser problem” when autonomous agents receive broad permissions that chain together access to sensitive applications.
Why it matters
AI agents are always on, never sleep, and, if improperly configured, can access privileged resources continuously
A single prompt injection attack can turn a trusted agent into a “silent insider” executing unauthorized actions at machine speed
The “doppelganger” scenario involves task-specific agents approving transactions or signing contracts that would otherwise require C-suite manual approval
What to do about it
Apply least-privilege principles to AI agents exactly as you would to human identities
Implement human-in-the-loop checkpoints for actions with financial, operational, or security impact (a sketch follows this list)
Develop agent-specific monitoring that distinguishes between expected automated behavior and anomalous actions
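As a starting point for the first two items, here is a minimal sketch of an action allowlist with a human-in-the-loop gate, assuming a hypothetical agent framework where every tool call routes through a single execute() function; the action names and the approval channel are placeholders.

```python
# Minimal sketch of least privilege plus a human-in-the-loop gate for a
# hypothetical agent framework. High-impact actions block on an explicit
# human decision instead of running at machine speed.
from dataclasses import dataclass
from typing import Callable

HIGH_IMPACT = {"approve_wire_transfer", "delete_backup", "export_customer_data"}

@dataclass
class ActionRequest:
    agent_id: str
    action: str
    arguments: dict

def require_approval(request: ActionRequest) -> bool:
    """Stand-in for your real approval channel (ticket, chat prompt, PAM workflow)."""
    answer = input(f"[HITL] {request.agent_id} wants {request.action}"
                   f"({request.arguments}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(request: ActionRequest, handlers: dict[str, Callable[[dict], str]]) -> str:
    """Enforce the allowlist first, then the approval gate, then run the action."""
    if request.action not in handlers:
        return f"denied: {request.action} is not on this agent's allowlist"
    if request.action in HIGH_IMPACT and not require_approval(request):
        return f"blocked: {request.action} requires human approval"
    return handlers[request.action](request.arguments)

if __name__ == "__main__":
    handlers = {
        "update_ticket": lambda args: f"ticket {args['id']} updated",
        "delete_backup": lambda args: f"backup {args['id']} deleted",
    }
    print(execute(ActionRequest("triage-agent", "update_ticket", {"id": "INC123"}), handlers))
    print(execute(ActionRequest("triage-agent", "delete_backup", {"id": "weekly"}), handlers))
    print(execute(ActionRequest("triage-agent", "approve_wire_transfer", {"amount": 50000}), handlers))
```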
Rock’s Musings
The insider threat model assumed humans were the insiders. You monitored user behavior, watched for anomalies, and investigated when someone accessed data outside their normal patterns. AI agents break that model. They don’t have normal patterns. They execute at machine speed. And they’re trusted by design.
The Palo Alto Networks research describes scenarios where compromised agents approve wire transfers, delete backups, or exfiltrate customer databases autonomously. These aren’t theoretical. They’re predictable consequences of granting agents the permissions they need to be useful. The same access that makes an agent productive makes it dangerous when hijacked.
Your 2026 security program needs an AI agent governance framework. Not eventually. Now. Before the agents you’re deploying this quarter become the breach vectors you’re investigating next year.
If you found this analysis useful, subscribe at rockcybermusings.com for weekly intelligence on AI security developments.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
References
AppOmni. (2026, January 13). BodySnatcher (CVE-2025-12420): A broken authentication and agentic hijacking vulnerability in ServiceNow. https://appomni.com/ao-labs/bodysnatcher-agentic-ai-security-vulnerability-in-servicenow/
California Attorney General. (2026, January 14). Attorney General Bonta launches investigation into xAI over nonconsensual intimate images. https://oag.ca.gov/news/press-releases
Cyera Research Labs. (2026, January 7). Ni8mare: Unauthenticated remote code execution in n8n (CVE-2026-21858). https://www.cyera.com/research-labs/ni8mare-unauthenticated-remote-code-execution-in-n8n-cve-2026-21858
Durbin, D. (2026, January 13). Durbin successfully passes bill to combat nonconsensual, sexually-explicit deepfake images. U.S. Senate. https://www.durbin.senate.gov/newsroom/press-releases/durbin-successfully-passes-bill-to-combat-nonconsensual-sexually-explicit-deepfake-images
Endor Labs. (2026, January 10). n8mare on auth street: Supply chain attack targets n8n ecosystem. https://www.endorlabs.com/learn/n8mare-on-auth-street-supply-chain-attack-targets-n8n-ecosystem
Federal Register. (2026, January 8). Request for information regarding security considerations for artificial intelligence agents (Docket No. NIST-2025-0035). https://www.federalregister.gov/documents/2026/01/08/2026-00206/request-for-information-regarding-security-considerations-for-artificial-intelligence-agents
Financial Times. (2026, January 12). Ofcom launches investigation into X over Grok deepfakes. https://www.ft.com/
Gatlan, S. (2026, January 14). Microsoft January 2026 Patch Tuesday fixes 3 zero-days, 114 flaws. BleepingComputer. https://www.bleepingcomputer.com/news/microsoft/microsoft-january-2026-patch-tuesday-fixes-3-zero-days-114-flaws/
GreyNoise. (2026, January 10). Threat actors actively targeting LLMs. https://www.greynoise.io/blog/threat-actors-actively-targeting-llms
Microsoft. (2026, January 14). Security Update Guide - January 2026. Microsoft Security Response Center. https://msrc.microsoft.com/update-guide/releaseNote/2026-Jan
National Institute of Standards and Technology. (2026, January 12). CAISI issues request for information about securing AI agent systems. https://www.nist.gov/news-events/news/2026/01/caisi-issues-request-information-about-securing-ai-agent-systems
OpenAI. (2026, January 7). Introducing ChatGPT Health. https://openai.com/index/introducing-chatgpt-health/
The Register. (2026, January 4). AI agents 2026’s biggest insider threat: PANW security boss. https://www.theregister.com/2026/01/04/ai_agents_insider_threats_panw/
World Economic Forum. (2026, January 12). Global Cybersecurity Outlook 2026. https://www.weforum.org/publications/global-cybersecurity-outlook-2026/



