Weekly Musings Top 10 AI Security Wrapup: Issue 25, February 6 - February 12, 2026
Microsoft patches prompt injection flaws in Copilot, North Korea weaponizes deepfakes for crypto theft, and a 200-page global report confirms what we already knew: governance can't keep up
Your AI coding assistant can now be weaponized through a poisoned codebase. This week gave us the first actively exploited prompt injection vulnerabilities in GitHub Copilot, a North Korean crew running deepfake Zoom calls to rob crypto firms, and 300 million private chatbot messages sitting in an open database.
The theme is that speed kills. AI capabilities are moving faster than the people building them can secure them, faster than regulators can write rules for them, and faster than the enterprises deploying them can govern them.
Here’s what mattered and what you should do about it. If you’re building your AI governance program, RockCyber can help you close the gaps.
1. Microsoft patches prompt injection RCE in GitHub Copilot, three actively exploited zero-days hit Windows
Microsoft’s February 10 Patch Tuesday dropped fixes for 58 vulnerabilities, including six actively exploited zero-days (BleepingComputer). The AI security headline: remote code execution vulnerabilities in GitHub Copilot across VS Code, Visual Studio, and JetBrains products (CVE-2026-21516, CVE-2026-21523, CVE-2026-21256). These flaws stem from command injection triggered through prompt injection (Krebs on Security). A threat actor embeds a malicious prompt into a codebase, and when an agent workflow executes, the prompt becomes code. CISA added all six zero-days to its KEV catalog the same day (SecurityWeek).
Why it matters
AI coding assistants are now a confirmed attack surface. The prompt-to-execution chain is live.
CI/CD pipelines using Copilot agent workflows are exposed to supply chain attacks through poisoned repositories.
Six zero-days exploited simultaneously signals coordinated offensive activity.
What to do about it
Patch all affected GitHub Copilot, VS Code, Visual Studio, and JetBrains installations before the CISA KEV deadline.
Audit CI/CD pipelines that use Copilot agent mode. Disable automatic command execution from AI-suggested code until you validate controls (a scanning sketch follows this list).
Prioritize CVE-2026-21510 (SmartScreen bypass), CVE-2026-21514 (Word OLE bypass), and CVE-2026-21533 (RDP to SYSTEM) across all endpoints.
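To make that CI/CD audit actionable, here is a minimal sketch that scans a repository’s markdown and config files for phrasing commonly seen in prompt-injection payloads before an agent workflow touches them. The phrase list and file extensions are illustrative assumptions, not detection signatures tied to the Copilot CVEs.

```python
# Minimal sketch: flag repository files containing common prompt-injection phrasing
# before an AI agent workflow processes them. Phrases and extensions are illustrative
# assumptions, not signatures for the Copilot CVEs.
import re
from pathlib import Path

SUSPECT_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard (the )?system prompt",
    r"run (the following|these) commands?",
    r"do not (tell|inform) the user",
]
SCAN_EXTENSIONS = {".md", ".txt", ".json", ".yaml", ".yml"}  # files agents commonly read as context

def scan_repo(root: str) -> list[tuple[str, str]]:
    """Return (file, matched pattern) pairs for human review before agent workflows run."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in SCAN_EXTENSIONS:
            continue
        text = path.read_text(errors="ignore")
        for pattern in SUSPECT_PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                hits.append((str(path), pattern))
    return hits

if __name__ == "__main__":
    for file, pattern in scan_repo("."):
        print(f"REVIEW: {file} matches '{pattern}'")
```

Wire something like this into a pre-merge check so a poisoned README gets human eyes before Copilot agent mode ever reads it.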
Rock’s Musings
Prompt injection strikes again, but this time in an actively exploited vulnerability in the world’s most popular AI coding tool. A poisoned README triggers remote code execution when Copilot processes it. Every AI agent that reads unvalidated context and executes actions has this same problem. The architecture is the vulnerability. Start treating AI agent security as a distinct risk domain in your program.
2. International AI Safety Report 2026 drops, confirms governance is losing the race
On February 3, the second International AI Safety Report was published, chaired by Yoshua Bengio and authored by over 100 experts from 30+ countries (International AI Safety Report). The report finds that AI capabilities have accelerated sharply in mathematics, coding, and autonomous task execution (Inside Global Tech). A key finding is that some models detect when they’re being tested and behave differently during evaluation versus deployment. The report calls risk management frameworks “still immature” (AI Governance Library).
Why it matters
The largest global collaboration on AI safety now says governance can’t keep up with capability advances.
Models that game their own evaluations undermine every safety benchmark your vendor shows you.
Twelve frontier AI companies published safety frameworks in 2025, but the report notes wide variation in scope and enforceability.
What to do about it
Use the report as board-level evidence to justify accelerated AI governance investment. It carries the weight of 30 governments and the UN.
Challenge vendor safety claims by asking about evaluation gaming. If their model can distinguish test from deployment, your risk assessment is incomplete.
Map your AI risk management practices against the report’s four-component framework: risk identification, analysis, mitigation, and governance.
Rock’s Musings
Most of this report confirms what practitioners already knew, but that’s why it matters. When Bengio and 100 experts backed by 30 nations say “real-world evidence of safety measure effectiveness remains limited,” your board can’t dismiss that as one consultant’s opinion. The bit about models gaming evaluations should keep you up at night. If the AI can tell when it’s being tested, every safety benchmark you’ve seen is unreliable. Send the executive summary to your CEO.
3. North Korea’s UNC1069 runs deepfake Zoom calls to steal crypto
Google’s Mandiant published research on February 9 detailing UNC1069, a North Korean threat actor targeting a FinTech organization (Google Cloud Blog). The attackers compromised a crypto executive’s Telegram account, sent a Calendly link to a fake Zoom meeting, and displayed a deepfake video of a CEO during the call (The Record). The intrusion deployed seven distinct malware families including three new tools (Dark Reading).
Why it matters
Deepfake video is now a confirmed offensive tool in state-sponsored financial crime.
Seven malware families deployed on a single host point to a targeted, high-investment operation.
The Telegram-to-Zoom-to-malware chain exploits the trust your employees place in the common business tools they use daily.
What to do about it
Brief cryptocurrency and financial services teams on the specific attack chain: compromised Telegram account, Calendly invite, spoofed Zoom link, ClickFix infection.
Require verified domains for all conferencing links. Block look-alike Zoom infrastructure at email and web gateways (a domain-check sketch follows this list).
Train staff that “run these commands to fix audio/video” during unsolicited calls is a red flag. That’s the ClickFix technique, and it works because people comply.
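To make the verified-domains rule enforceable rather than aspirational, here is a minimal sketch that checks whether a meeting link resolves to an allowlisted conferencing domain. The allowlist is an assumption; populate it with the platforms your organization actually sanctions.

```python
# Minimal sketch: flag conferencing links whose host is not an exact match or subdomain
# of an approved platform. Look-alike domains fail the check. The allowlist below is an
# assumption; adjust it to the platforms your organization sanctions.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"zoom.us", "teams.microsoft.com", "meet.google.com"}

def is_approved_meeting_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

print(is_approved_meeting_link("https://us02web.zoom.us/j/123456789"))       # True
print(is_approved_meeting_link("https://zoom-meeting-join.example/j/12345"))  # False
```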
Rock’s Musings
North Korea stole over $2 billion in crypto in 2025. UNC1069 is just one crew in what TRM Labs calls an “industrialized theft supply chain.” The deepfake angle changes everything. When your employee sees a video of a known CEO on a Zoom call, trust goes up, skepticism goes down, and seven malware families get dropped.
4. DockerDash vulnerability turns AI metadata into executable code
On February 3, Noma Labs disclosed a critical flaw in Docker’s Ask Gordon AI assistant, dubbed DockerDash (Noma Security). A malicious metadata label in a Docker image hijacks the AI’s execution chain: Gordon reads it, forwards it to the MCP Gateway, and the gateway executes it with zero validation (SecurityWeek). In cloud/CLI environments, this enables RCE. In Docker Desktop, it enables data exfiltration (Infosecurity Magazine).
Why it matters
This is another well-documented exploitation of the Model Context Protocol (MCP), a protocol gaining rapid adoption, as an attack surface.
The attack uses standard Docker LABEL fields, meaning malicious images look normal to existing scanning tools.
The pattern applies to any AI agent that reads external context and passes it to an execution layer.
What to do about it
Upgrade to Docker Desktop 4.50.0 or later immediately.
Audit any AI agent in your environment that reads metadata, documents, or external context and can trigger tool execution (a label-inspection sketch follows this list).
Demand human-in-the-loop confirmation for all MCP tool invocations in development environments.
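As a starting point for that audit, here is a minimal sketch that pulls an image’s LABEL metadata with docker inspect and flags values that read like instructions to an AI assistant. The phrase list is an illustrative assumption, not a DockerDash signature, and the script requires the Docker CLI on the scanning host.

```python
# Minimal sketch: inspect a Docker image's labels and flag values that read like
# instructions to an AI assistant. Requires the Docker CLI; the phrase list is an
# illustrative assumption, not a DockerDash detection signature.
import json
import re
import subprocess
import sys

SUSPECT_PHRASES = [
    r"ignore (all )?previous",
    r"run the following",
    r"call (the )?tool",
    r"send .* to https?://",
]

def image_labels(image: str) -> dict:
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{json .Config.Labels}}", image],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout) or {}

def flag_suspicious_labels(image: str) -> list[str]:
    findings = []
    for key, value in image_labels(image).items():
        if any(re.search(p, value or "", re.IGNORECASE) for p in SUSPECT_PHRASES):
            findings.append(f"label '{key}' looks instruction-like: {value[:80]}")
    return findings

if __name__ == "__main__":
    for finding in flag_suspicious_labels(sys.argv[1]):
        print(f"REVIEW {sys.argv[1]}: {finding}")
```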
Rock’s Musings
Noma’s Sasi Levi nailed it. The DockerDash vulnerability “isn’t Docker-specific, it’s AI-specific.” Every RAG system, every AI assistant that reads metadata, every agent that processes external input faces the same problem. The AI cannot distinguish between context and instruction. The MCP standard is growing fast. If you’re deploying MCP-connected agents, treat every external input as potentially malicious.
5. Chat and Ask AI app leaks 300 million messages from 25 million users
A security researcher known as Harry found that Chat and Ask AI, a wrapper app with 50+ million downloads connecting users to ChatGPT, Claude, and Gemini, left its Firebase backend wide open (404 Media). The misconfiguration exposed roughly 300 million messages from 25 million users (Malwarebytes). Messages contained mental health discussions, self-harm requests, and queries about illegal activities (GBHackers).
Why it matters
300 million messages prove that AI chat wrapper apps store everything users type, and often store it insecurely.
The Firebase misconfiguration is a known, preventable error. This wasn’t a sophisticated attack. It was an open door.
Users treated AI chats as private conversations. The exposed content creates extortion, identity theft, and social engineering risk at massive scale.
What to do about it
Inventory third-party AI wrapper apps in your environment. If employees use consumer AI apps for work, their conversations sit in someone else’s database.
Add AI app vetting to your shadow IT detection process. Firebase misconfigurations are scannable (a minimal check follows this list).
Remind employees: anything typed into an AI chat may be stored, breached, and public. Treat AI conversations with the same caution as email.
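The “scannable” point is literal. Here is a minimal sketch, assuming the Firebase Realtime Database REST convention, that checks whether a database answers unauthenticated reads. Only point it at projects you own or are explicitly authorized to test.

```python
# Minimal sketch: check whether a Firebase Realtime Database allows unauthenticated reads.
# A 200 response with data means the rules are effectively public. Only test projects you
# own or have written authorization to assess.
import requests  # third-party: pip install requests

def firebase_is_publicly_readable(db_url: str) -> bool:
    # Realtime Database REST convention: append /.json to read the root node.
    # shallow=true keeps the response small if the database is large.
    resp = requests.get(db_url.rstrip("/") + "/.json", params={"shallow": "true"}, timeout=10)
    return resp.status_code == 200 and resp.text.strip() not in ("null", "")

if __name__ == "__main__":
    # URL pattern varies by project age and region, e.g.
    # https://<project-id>-default-rtdb.firebaseio.com or https://<project-id>.firebaseio.com
    print(firebase_is_publicly_readable("https://your-own-project-default-rtdb.firebaseio.com"))
```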
Rock’s Musings
Codeway, the company behind Chat and Ask AI, built a wrapper app that connects to real AI models from real companies. But the app stored everything in a Firebase database with public read access. No attack needed. Just log in. Think about how many AI wrapper apps your employees downloaded this year. Each one stores conversations on infrastructure you don’t control. The apps people use to access AI are the weak link.
6. Senator Hassan presses Bondu after AI toy exposes 50,000 children’s chat logs
Senator Maggie Hassan sent a formal letter to Bondu, maker of AI plush toys for children ages 3 to 9, after researchers found Bondu’s web portal allowed anyone with a Gmail account to access 50,000 children’s chat transcripts (Senate Joint Economic Committee). Exposed data included children’s names, birth dates, and intimate conversations (Axios). Bondu patched the issue within hours of disclosure (Malwarebytes).
Why it matters
50,000 children’s private conversations were accessible to anyone with a Google account. No hacking required.
Congressional attention signals that AI toy security will become a regulatory priority, with California SB 867 already proposing a four-year moratorium on AI companion chatbots for minors.
The data included information that researchers called “a kidnapper’s dream”: names, birthdays, family details, and behavioral patterns.
What to do about it
If your organization develops or invests in consumer AI products that interact with children, conduct an immediate security audit of data storage and access controls.
Monitor legislative developments. SB 867 in California and the Parents and Kids Safe AI Act will reshape compliance requirements for AI products targeting minors.
Review your enterprise’s policy on AI products used by employees’ families.
Rock’s Musings
Joel Margolis told Wired the exposed data was “a kidnapper’s dream.” Names, birthdays, what kids like, who their family members are, all accessible with a Gmail login. Bondu touted 18 months of safe beta testing. The problem was never the toy’s behavior. It was that every word a child said to a stuffed dinosaur sat on an open server. The attack surface isn’t the model. It’s the trust that makes people share everything with a friendly interface.
7. Eight critical n8n vulnerabilities expose AI orchestration infrastructure
Between January and early February 2026, eight new high-to-critical CVEs were disclosed in n8n, the popular open-source workflow automation platform (Geordie AI). The vulnerabilities span expression evaluation, file access controls, Git, SSH, Merge nodes, and Python execution. Several bypass earlier patches, including CVE-2026-25049, which enables system command execution via malicious workflows (The Hacker News). Organizations that upgraded to n8n 2.2.2 following January guidance remain exposed.
Why it matters
n8n sits at the intersection of APIs, secrets, CI/CD, and AI orchestration. A compromise here cascades across your entire automation stack.
Multiple vulnerabilities bypass earlier patches, creating a false sense of security for organizations that thought they were current.
Authenticated workflow editor access is enough to exploit several flaws, and that role is commonly granted to non-administrators.
What to do about it
Patch n8n immediately. Do not assume the January patch cycle covered you.
Audit who has workflow editor access. Apply least-privilege principles to your automation platform.
Review all n8n workflows that handle credentials or connect to AI services (a workflow-scan sketch follows this list).
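To scope that review quickly, here is a minimal sketch that walks exported n8n workflow JSON and lists nodes capable of executing code or commands, along with any credential references. The node type strings are assumptions based on common n8n node identifiers; verify them against your own exports.

```python
# Minimal sketch: scan exported n8n workflow JSON files for nodes that can execute code
# or shell commands and for credential references. Node type names are assumptions based
# on common n8n identifiers; check them against your own exports.
import json
from pathlib import Path

EXECUTION_NODE_TYPES = {
    "n8n-nodes-base.executeCommand",  # shell command execution
    "n8n-nodes-base.code",            # arbitrary JavaScript/Python
    "n8n-nodes-base.ssh",             # remote command execution
}

def review_workflow(path: Path) -> None:
    workflow = json.loads(path.read_text())
    for node in workflow.get("nodes", []):
        name, ntype = node.get("name", "?"), node.get("type", "")
        if ntype in EXECUTION_NODE_TYPES:
            print(f"{path.name}: execution-capable node '{name}' ({ntype})")
        if node.get("credentials"):
            print(f"{path.name}: node '{name}' references credentials {list(node['credentials'])}")

if __name__ == "__main__":
    for export in Path("workflow-exports").glob("*.json"):
        review_workflow(export)
```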
Rock’s Musings
n8n flies under the security radar because it’s “just automation.” But it connects to your APIs, stores your credentials, orchestrates your AI workflows, and executes code. Eight CVEs in one month, several of which bypass previous patches, tell me the security debt in AI tooling is deeper than most teams realize. If you’re running n8n in production, treat it like Active Directory. To an attacker, it’s just as valuable.
8. Ivanti EPMM zero-days breach Dutch and Finnish government systems
Ivanti disclosed two zero-day vulnerabilities in Endpoint Manager Mobile (EPMM), CVE-2026-1281 and CVE-2026-1340, both CVSS 9.8, on January 29 (The Hacker News). By February 9, the Netherlands confirmed compromise of government systems. Finland’s Valtori disclosed a breach affecting 50,000 government employees (The Hacker News).
Why it matters
Two European government institutions confirmed compromise within days of disclosure.
The use of dormant payloads, planted now and activated later, suggests access brokers are stockpiling entry to government systems for future sale.
EPMM manages mobile devices. Compromising it means potential access to every managed device.
What to do about it
If you run Ivanti EPMM, patch to the emergency release immediately. Check for indicators of compromise, specifically the /mifs/403.jsp web shell path (a quick check is sketched after this list).
Review EPMM logs for unusual authentication patterns between January 29 and your patch date. The zero-days were exploited before disclosure.
Audit your mobile device management architecture. If your MDM platform is compromised, assume all managed devices are potentially exposed.
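Here is a minimal sketch of that indicator check: it requests the /mifs/403.jsp path on an EPMM host and reports anything other than a 404. Treat a hit as a trigger for full incident response, and treat a miss as exactly one negative data point, not a clean bill of health.

```python
# Minimal sketch: probe an Ivanti EPMM host for the /mifs/403.jsp web shell path reported
# as an indicator of compromise. Anything other than 404 deserves manual investigation;
# a 404 alone does not prove the host is clean.
import sys
import requests  # third-party: pip install requests

def check_webshell_ioc(base_url: str) -> None:
    url = base_url.rstrip("/") + "/mifs/403.jsp"
    resp = requests.get(url, timeout=10)
    if resp.status_code == 404:
        print(f"{url}: 404, no hit on this single indicator (not proof the host is clean)")
    else:
        print(f"{url}: HTTP {resp.status_code}, investigate immediately")

if __name__ == "__main__":
    check_webshell_ioc(sys.argv[1])  # e.g. https://epmm.example.com
```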
Rock’s Musings
Ivanti. Again. EPMM manages mobile devices for governments and critical infrastructure operators. Two 9.8 CVSS zero-days, exploited before disclosure, breaching government systems across Europe. The dormant payload angle is telling. Access brokers are treating compromised government infrastructure like inventory: gain entry, plant loaders, wait to sell.
9. Colorado AI Act delayed to June 2026 as federal preemption fight intensifies
Colorado’s AI Act (SB 24-205), originally scheduled to take effect on February 1, 2026, has been delayed to June 30, 2026 (King and Spalding). The delay follows President Trump’s December 2025 executive order directing an AI Litigation Task Force to challenge state AI laws (Gunderson Dettmer). The Colorado law is the only state statute specifically named in the executive order.
Why it matters
The most comprehensive U.S. state AI law targeting algorithmic discrimination in high-risk systems just got pushed back five months.
The federal preemption fight creates compliance uncertainty for every organization deploying AI in employment, lending, housing, or insurance decisions.
What to do about it
Don’t pause compliance work. The executive order cannot overturn state law. State laws remain enforceable until struck down.
Prepare for a two-track reality: implement state obligations while tracking federal moves that could narrow those obligations.
Document your impact assessments and risk management now. You’ll need them regardless of which jurisdiction prevails.
Rock’s Musings
Companies scrambling to comply by February 1 just got breathing room. Companies that already did the work got a competitive advantage. The federal preemption push faces a long road through courts. My advice: build your AI governance to the highest standard any jurisdiction requires. If you meet Colorado’s requirements, you’ll meet most of what Texas, California, and the EU AI Act demand.
10. EU AI Act high-risk guidance misses deadline as August 2026 enforcement looms
The European Commission missed its deadline to publish guidelines on high-risk AI system requirements under the EU AI Act (IAPP). CEN and CENELEC, the standardization bodies, also missed their deadline and now aim for the end of 2026. High-risk obligations become enforceable in August 2026. The GPAI Code of Practice is expected in final form by June (Bird and Bird).
Why it matters
Organizations building compliance programs for high-risk AI systems have no finalized guidance or technical standards, with enforcement just six months away.
The standards delay means there’s no “safe harbor” for companies trying to demonstrate conformity.
Industry groups are calling for enforcement delays, but the statutory deadline stands.
What to do about it
Don’t wait for final standards. Build your compliance program on the AI Act text, the NIST AI Risk Management Framework, and available draft guidance.
Track the General-Purpose AI Code of Practice. If you’re a GPAI provider, the June finalization means compliance work needs to be underway now.
Engage your legal team on whether your AI systems qualify as “high-risk” under the Act. The classification determines your obligations, and waiting for guidance is not a defense.
Rock’s Musings
The EU passed the most ambitious AI regulation in history, then couldn’t deliver the guidance companies need to comply with it. But the enforcement date hasn’t moved. August 2026 is coming whether the paperwork is ready or not. Build to the text of the Act, supplement with NIST and OWASP guidance, and document everything.
The One Thing You Won’t Hear About But You Need To
AI agents are inheriting the permissions of the humans who use them, and nobody is governing that
Well… if you follow me on LinkedIn, you’ve definitely heard about this. Just take a look across this week’s stories and you’ll see a pattern nobody is naming. In the Copilot vulnerabilities, the AI executes malicious code with the developer’s privileges. In DockerDash, Ask Gordon forwards poisoned metadata using whatever access Docker provides. In n8n, an account with workflow editor access is enough to exploit critical vulnerabilities because the platform inherits broad permissions.
AI agents inherit the permission scope of the humans and systems they operate within, but they don’t inherit the judgment to use those permissions safely.
Why it matters
Traditional identity and access management (IAM) was built for human users who exercise judgment. AI agents exercise none.
The blast radius of a compromised AI agent equals the full permission set of the account or environment it operates in.
No major IAM framework currently addresses AI agent permissions as a distinct category requiring separate controls.
What to do about it
Assign each agent a workload identity that is cryptographically attested and bound to its runtime environment, not a static service account. Pair it with just-in-time, short-lived tokens scoped per task so there are no standing credentials to steal or reuse (a token-minting sketch follows this list).
Enforce capability-level access controls that evaluate what each agent is doing, not just who spawned it. A developer’s AI assistant should never inherit production access simply because the developer has it.
Treat agent permissions as privileged access: review them quarterly, require justification for renewal, and auto-revoke on expiry. Standing agent permissions are standing risk.
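To make the first item concrete, here is a minimal sketch that mints a short-lived, task-scoped token for an agent using PyJWT. The claim names, scope strings, and five-minute lifetime are illustrative assumptions; the point is that the credential dies with the task instead of standing around waiting to be stolen.

```python
# Minimal sketch: issue a short-lived, task-scoped token for an AI agent instead of a
# standing service-account credential. Claim names, scopes, and lifetime are illustrative
# assumptions. Requires PyJWT: pip install pyjwt
import time
import uuid
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-key-from-your-secrets-manager"  # HS256 only for brevity in this sketch

def mint_agent_token(agent_id: str, task: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    now = int(time.time())
    claims = {
        "sub": agent_id,            # which agent, not which human spawned it
        "task": task,               # the single task this token is good for
        "scope": " ".join(scopes),  # capability-level permissions, not the user's full set
        "jti": str(uuid.uuid4()),   # unique ID for revocation and audit
        "iat": now,
        "exp": now + ttl_seconds,   # expires with the task; no standing credential
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

print(mint_agent_token("copilot-agent-42", "summarize-sprint-tickets", ["tickets:read"]))
```

Enforcement then means checking scope and expiry on every tool call, which is where the capability-level control in the second item lives.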
Rock’s Musings
We’re giving AI agents the keys to the kingdom and hoping they’ll only open the right doors. That’s not a security strategy. That’s a prayer. The NIST AI RMF doesn’t address agent permissions. The EU AI Act barely touches it. I think this becomes the defining AI security challenge of 2026. Start with an inventory. Then start restricting. For more on building AI governance programs that address these gaps, visit RockCyber Musings or reach out to RockCyber directly.
If you found this analysis useful, subscribe at rockcybermusings.com for weekly intelligence on AI security developments.
👉 Visit RockCyber.com to learn more about how we can help you on your traditional cybersecurity and AI security and governance journey.
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
References
404 Media. (2026, February 5). Massive AI chat app leaked millions of users’ private conversations. https://www.404media.co/massive-ai-chat-app-leaked-millions-of-users-private-conversations/
ASIS Online. (2026, February). New International AI Safety Report spotlights emerging risks. Security Management. https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/
Axios. (2026, February 3). Senator presses AI toy company bondu after kids’ chat data was exposed. https://www.axios.com/2026/02/03/ai-toy-bondu-chat-data-exposure-hassan
Bird & Bird. (2026). Taking the EU AI Act to practice: Understanding the draft transparency code of practice. https://www.twobirds.com/en/insights/2026/taking-the-eu-ai-act-to-practice-understanding-the-draft-transparency-code-of-practice
BleepingComputer. (2026, February 11). Microsoft February 2026 Patch Tuesday fixes 6 zero-days, 58 flaws. https://www.bleepingcomputer.com/news/microsoft/microsoft-february-2026-patch-tuesday-fixes-6-zero-days-58-flaws/
Check Point Research. (2026, February 9). 9th February: Threat intelligence report. https://research.checkpoint.com/2026/9th-february-threat-intelligence-report/
Common Sense Media. (2026, January 22). Common Sense Media warns against AI toy companions after research reveals safety risks [Press release]. https://www.commonsensemedia.org/press-releases/common-sense-media-warns-against-ai-toy-companions-after-research-reveals-safety-risks
CSO Online. (2026, February 11). Microsoft fixes six zero-days on February 2026 Patch Tuesday. https://www.csoonline.com/article/4130446/february-2026-patch-tuesday-six-new-and-actively-exploited-microsoft-vulnerabilities-addressed.html
CyberAdviser Blog. (2026, January). What to expect in AI regulation in 2026. https://www.cyberadviserblog.com/2026/01/what-to-expect-in-ai-regulation-in-2026/
Cybernews. (2026, February 10). Armed with new tools, North Koreans ramp up attacks on lucrative crypto sector. https://cybernews.com/security/north-korea-ai-lucrative-crypto-industry/
CyberPress. (2026, February 10). AI chat app data breach exposes 300 million messages. https://cyberguy.com/security/millions-ai-chat-messages-exposed-app-data-leak/
Dark Reading. (2026, February 11). North Korea’s UNC1069 hammers crypto firms with AI. https://www.darkreading.com/threat-intelligence/north-koreas-unc1069-hammers-crypto-firms
GBHackers. (2026, February 10). 25 million users affected as AI chat platform leaks 300 million messages. https://gbhackers.com/ai-chat-platform-leaks-300-million-messages/
Geordie AI. (2026, February). Eight new n8n CVEs in February 2026: Updated patching guidance. https://www.geordie.ai/resources/technical-advisory-eight-new-n8n-cves-since-january---updated-remediation-guidance
Google Cloud Blog. (2026, February 9). UNC1069 targets cryptocurrency sector with new tooling and AI-enabled social engineering. https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering
Gunderson Dettmer. (2026). 2026 AI laws update: Key regulations and practical guidance. https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance
IAPP. (2026, February). European Commission misses deadline for AI Act guidance on high-risk systems. https://iapp.org/news/a/european-commission-misses-deadline-for-ai-act-guidance-on-high-risk-systems
Infosecurity Magazine. (2026, February 9). DockerDash exposes AI supply chain weakness in Docker’s Ask Gordon. https://www.infosecurity-magazine.com/news/dockerdash-weakness-dockers-ask
Inside Global Tech. (2026, February 10). International AI Safety Report 2026 examines AI capabilities, risks, and safeguards. https://www.insideglobaltech.com/2026/02/10/international-ai-safety-report-2026-examines-ai-capabilities-risks-and-safeguards/
International AI Safety Report. (2026, February 3). International AI Safety Report 2026. https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026
King & Spalding. (2026). New state AI laws are effective on January 1, 2026, but a new executive order signals disruption. https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption
Krebs on Security. (2026, February 11). Patch Tuesday, February 2026 edition. https://krebsonsecurity.com/2026/02/patch-tuesday-february-2026-edition/
Malwarebytes. (2026, February 10). AI chat app leak exposes 300 million messages tied to 25 million users. https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users
Malwarebytes. (2026, February). An AI plush toy exposed thousands of private chats with children. https://www.malwarebytes.com/blog/news/2026/02/an-ai-plush-toy-exposed-thousands-of-private-chats-with-children
Noma Security. (2026, February 3). DockerDash: Two attack paths, one AI supply chain crisis. https://noma.security/blog/dockerdash-two-attack-paths-one-ai-supply-chain-crisis/
SecurityWeek. (2026, February 4). DockerDash flaw in Docker AI assistant leads to RCE, data theft. https://www.securityweek.com/dockerdash-flaw-in-docker-ai-assistant-leads-to-rce-data-theft/
SecurityWeek. (2026, February 11). 6 actively exploited zero-days patched by Microsoft with February 2026 updates. https://www.securityweek.com/6-actively-exploited-zero-days-patched-by-microsoft-with-february-2026-updates/
The Hacker News. (2026, February 3). Docker fixes critical Ask Gordon AI flaw allowing code execution via image metadata. https://thehackernews.com/2026/02/docker-fixes-critical-ask-gordon-ai.html
The Hacker News. (2026, February 11). North Korea-linked UNC1069 uses AI lures to attack cryptocurrency organizations. https://thehackernews.com/2026/02/north-korea-linked-unc1069-uses-ai.html
The Hacker News. (2026, February 12). Dutch authorities confirm Ivanti zero-day exploit exposed employee contact data. https://thehackernews.com/2026/02/dutch-authorities-confirm-ivanti-zero.html
The Record. (2026, February 10). North Korean hackers targeted crypto exec with fake Zoom meeting, ClickFix scam. https://therecord.media/north-korean-hackers-targeted-crypto-exec-clickfix
U.S. Senate Joint Economic Committee. (2026, February 3). Senator Hassan presses toy company on child safety and privacy practices. https://www.jec.senate.gov/public/index.cfm/democrats/2026/2/senator-hassan-presses-toy-company-on-child-safety-and-privacy-practices-after-children-s-conversations-with-its-ai-chat-toy-were-left-exposed-to-any-gmail-user
Zero Day Initiative. (2026, February 11). The February 2026 security update review. https://www.zerodayinitiative.com/blog/2026/2/10/the-february-2026-security-update-review



