Weekly Musings Top 10 AI Security Wrapup: Issue 14 October 3, 2025 - October 9, 2025
DOJ’s bulk‑data rule takes effect, EU’s Apply AI plan launches, and fresh LLM exploit paths hit OpenShift AI, GitHub Copilot, Gemini, Figma MCP, and Perplexity Comet
Enterprise AI is scaling fast. Your risks are scaling faster. This week served as a stark reminder that the attack surface is shifting from web applications and endpoints to agents, copilots, supply chains, and cross-border data flows. A federal data rule quietly hit its final compliance date. Europe rolled out an adoption push that will change vendor roadmaps. Multiple high-impact vulnerabilities were discovered across OpenShift AI, GitHub Copilot Chat, Figma’s MCP server, and Gemini. Attackers continue to bolt AI onto outdated playbooks. You do not need more dashboards. You need decisions. Let’s make them.
1) DOJ Data Security Program’s Final Compliance Obligations Took Effect On October 6
Summary
The Department of Justice’s Data Security Program (DSP), implementing Executive Order 14117, reached its final compliance milestone on October 6. Beyond prohibitions on covered data transactions, entities must now fulfill affirmative obligations for restricted transactions, including conducting due diligence, establishing a written data compliance program, performing audits, reporting, and maintaining records. The National Security Division’s DSP page and the DOJ’s April announcement laid out the staged rollout and the October 6 effective date for these affirmative requirements. Expect scrutiny on transfers of bulk sensitive personal data and government‑related data to countries of concern, with penalties for violations. (Department of Justice; National Security Division)
Why It Matters
Cross‑border data flows are now a regulatory control point with enforcement hooks.
“Affirmative obligations” force evidence, not promises.
Vendor and partner exposure becomes a first‑order risk for AI training and evaluation datasets.
What To Do About It
Map all covered data transactions by system, counterparty, and jurisdiction; tag anything touching model training, TEVV, or analytics.
Stand up a DSP‑aligned written program and attestations; fold audits into third‑party risk reviews.
Gate transfers with contractual and technical controls, and establish a halt mechanism for non‑compliant flows.
Rock’s Musings
This is a governance milestone that actually bites. I’m already seeing executive teams underestimate the operational lift. “We don’t sell data” is not a control. You need system‑level lineage for where bulk-sensitive data flows, plus a program that can survive discovery. If your AI roadmap assumes external enrichment or offshore labeling, you’ve got homework. Treat this like SOX for sensitive data. Board question to answer in one slide: which restricted transactions do we rely on for revenue, how are they controlled, and what’s our stop‑ship trigger when a partner is out of bounds? If you cannot answer that by next month, call your vCISO. Then call your GC.
(Citation: Department of Justice; National Security Division)
2) European Commission Launches “Apply AI” To Accelerate Adoption
Summary
On October 8, the European Commission introduced “Apply AI,” a package aimed at accelerating enterprise AI adoption through funding, skills, and infrastructure measures. It complements the implementation of the AI Act, with an emphasis on deployment support and practical capacity-building across the Single Market. Analysts expect the program to shape vendor offerings, compliance tooling, and public procurement in 2026. (Reuters; Euronews)
Why It Matters
Shifts European focus from rules to uptake, affecting enterprise roadmaps.
Incentivizes compliant AI services that align with AI Act obligations.
Public procurement demand will pull the ecosystem toward auditable AI.
What To Do About It
Align your EU portfolio to prioritized use cases and funding tracks to accelerate ROI.
Bake conformity‑assessment evidence into product and DevOps pipelines.
Position managed assurance services that simplify AI Act compliance for customers.
Rock’s Musings
I appreciate the transition from theory to practice. “Apply AI” tees up money and demand signals that reward boring, verifiable engineering. If you sell into the EU, put your solutions team on three customer‑ready evidence packs: data governance, TEVV, and post‑deployment monitoring. Then figure out which of your features are compliance theater and cut them. Your product needs to be deployable and defensible. In that order.
(Citation: Reuters; Euronews)
3) Oracle E‑Business Suite 0‑Day Exploited In The Wild, CVE‑2025‑61882
Summary
Oracle published a Security Alert for CVE‑2025‑61882, an unauthenticated RCE in E‑Business Suite, initially released October 4 and updated October 6. CrowdStrike reported mass exploitation by financially motivated actors and provided indicators, noting likely data exfiltration and web shell deployment through the abuse of BI Publisher templates. This is serious business software in the blast radius of threat actors who like automation and public POCs. Patch priority is critical for any internet‑exposed EBS. (Oracle; CrowdStrike)
Why It Matters
Unauth RCE on core ERP is a data‑theft and extortion magnet.
Public POCs accelerate copycat exploitation.
ERP often lives outside normal EDR visibility and hardening baselines.
What To Do About It
Patch now; where you can’t, isolate EBS, disable internet access, and put compensating WAF rules in place.
Hunt for malicious templates, suspicious Java processes, and outbound 443 from app servers.
Add ERP to your high‑value asset list and require explicit change control.
Rock’s Musings
If your team is still treating ERP as “that legacy thing finance owns,” this is your wake‑up. I’ve seen too many orgs run mission‑critical systems with weak segmentation and stale JDKs because “it just works.” That habit is how you end up briefing the board at 2 a.m. about why invoice data is on a leak site. Get EBS under central detection, give it a named service owner, and run tabletops on how you’ll operate payroll during containment. Boring work beats painful weekends.
(Citation: Oracle; CrowdStrike)
4) Red Hat OpenShift AI Privilege Escalation Enables Full Cluster Takeover, CVE‑2025‑10725
Summary
A high‑severity flaw in Red Hat OpenShift AI allows a low‑privileged authenticated user, such as a data scientist in a Jupyter notebook, to escalate to cluster admin. NVD lists CVSS 9.9. Public write-ups and coverage highlight the risk of a complete compromise of confidentiality, integrity, and availability for hybrid deployments that run model lifecycles on OpenShift AI. This is an identity and authorization problem at the platform layer, not a novel AI exploit, which makes it both familiar and dangerous. (NVD; CSO Online)
Why It Matters
Turns AI platform tenancy into an infrastructure risk.
Bridges ML tooling to cluster control planes.
Expands insider and credential‑theft blast radius.
What To Do About It
Patch affected versions and audit ClusterRoleBindings and service account scopes.
Move notebooks to least‑privileged namespaces with network policy enforcement.
Treat OpenShift AI as a Tier‑0 platform with break‑glass controls and continuous RBAC drift checks.
Rock’s Musings
The fastest way to lose a week is to let “helpful defaults” turn into blind trust. AI platforms inherit the sins of their Kubernetes underlay. If your MLOps pipeline can schedule jobs that can touch cluster secrets, you don’t have an AI problem. You have an IAM and RBAC problem. Fix the authorization model before you ship more models. And for the love of uptime, stop granting admin so the demo works.
(Citation: NVD; CSO Online)
5) California’s Frontier AI Transparency Law (SB 53) Is Now Signed
Summary
California enacted the Transparency in Frontier Artificial Intelligence Act on September 29, with material coverage and analysis continuing into this week. SB 53 sets whistleblower protections and transparency obligations for frontier model developers operating at large compute scales and establishes incident reporting thresholds tied to catastrophic risk definitions. It targets the governance layer for high‑end model training, testing, and reporting. Expect an influence far beyond California given platform presence and supplier footprints. (Office of the Governor of California; Reuters)
Why It Matters
Creates U.S. state‑level obligations for frontier model developers.
Shields employees who disclose safety risks, changing internal incentives.
Raises the bar for incident reporting and risk documentation.
What To Do About It
Inventory where California nexus triggers obligations for your AI development stack.
Align whistleblower processes with policy, including independent reporting channels.
Pre‑write critical incident reporting playbooks and thresholds that map to SB 53 definitions.
Rock’s Musings
This is the governance signal developers said they wanted: clear obligations with safety levers and room to operate. Now the question is whether your org can produce real evidence of risk controls rather than safety theater. If your safety testing is a slide deck, fix it. If your incident taxonomy is “we’ll know it when we see it,” fix that too. Risk realism beats ideology. Every time.
(Citation: Office of the Governor of California; Reuters)
6) GitHub Copilot Chat “CamoLeak” Allowed Secret Exfiltration And Response Control
Summary
Legit Security disclosed a Copilot Chat vulnerability combining a prompt‑injection vector with a clever bypass of GitHub’s Camo image proxy. The result: invisible comments seeded malicious instructions that could steer responses, suggest malicious packages, and exfiltrate secrets and private code through pre‑signed image proxy URLs. GitHub mitigated by disabling image rendering in Copilot Chat. It’s a crisp example of how context channels and UI features become the exfil path in assistant workflows. (Legit Security; SecurityWeek)
Why It Matters
Assistant context is a data egress channel hiding in plain sight.
“Invisible content” patterns create cross‑user influence and integrity risk.
Security controls built for web content don’t automatically cover LLM context flows.
What To Do About It
Disable risky renderers in LLM‑adjacent UIs; prefer allow‑lists over block‑lists.
Add guardrails that strip or sanitize hidden markup in PRs, issues, and chats.
Instrument assistant usage with DLP‑style detections for secrets and code patterns.
Rock’s Musings
This one is elegant and nasty. I love good engineering, and I hate clever data theft. If your engineering leadership says “we don’t allow prompt injection,” ask how they handle hidden content, pre‑signed URLs, and proxies. You’ll get silence. Build a cross‑functional review for any feature that pulls external context into AI assistants. You’re not just shipping helpful experiences. You’re shipping attack surface.
(Citation: Legit Security; SecurityWeek)
7) Google Gemini Faces “ASCII Smuggling” Prompt‑Injection Claims
Summary
Researchers at FireTail described an “ASCII smuggling” technique that hides invisible Unicode tag characters in seemingly benign strings to steer Gemini when integrated with Google Workspace. Coverage indicates Google does not plan to “fix” the class of issues as product vulnerabilities, instead positioning mitigations and layered defenses as the answer. The net is that indirect prompt injection through hidden characters remains a practical risk where AI agents process emails and calendar data. (FireTail; BleepingComputer)
Why It Matters
Invisible control characters can bypass UI review and policy checks.
Agents with email and calendar access magnify identity‑spoofing and data‑poisoning risks.
Vendor posture shapes your enterprise threat model and compensating controls.
What To Do About It
Restrict high‑privilege connectors for agents until you can monitor and constrain behaviors.
Deploy canonicalization and disallow lists for hidden Unicode ranges in high‑risk inputs.
Add detections for abnormal agent actions tied to calendar and email workflows.
Rock’s Musings
Vendors will frame this as “social engineering.” That’s convenient. If your product executes hidden instructions because Unicode says so, you own part of the problem. As a buyer, treat agents in productivity suites like interns. Limit their access. Supervise their work. Log everything. If you cannot get the controls you need, turn off the connector. You can live without AI scheduling your meetings. You can’t live with it forwarding your board deck.
(Citation: FireTail; BleepingComputer)
9) Figma Model Context Protocol Server RCE, CVE‑2025‑53967
Summary
A command‑injection flaw in the figma‑developer‑mcp server allowed remote code execution via unsanitized shell command construction used in a fallback path. The GitHub advisory and Imperva’s analysis describe how MCP tool calls could be abused, including through indirect prompt injection, and note a patch in version 0.6.3. This incident ties together AI agent tooling, local developer environments, and traditional input sanitization failures. (GitHub Advisory; Imperva)
Why It Matters
Agent toolchains extend trust to third‑party servers running on developer machines.
Fallback logic often hides dangerous execution paths.
Prompt‑driven flows can trigger exploit paths the developer never sees.
What To Do About It
Update to patched versions; ban child_process.exec with untrusted input and prefer execFile.
Threat‑model MCP and agent servers as remote‑code‑capable and enforce OS hardening.
Add CI checks for dangerous patterns in agent and MCP code before release.
Rock’s Musings
We keep rediscovering the basics in new clothes. If your AI agent executes curl built from unsanitized strings, it’s not an “AI vulnerability.” It’s a software engineering miss. Treat agent infrastructure the same way you treat your API gateways. Code review, least privilege, and no shell string interpolation. None. You wouldn’t accept this in payments code. Don’t accept it here.
(Citation: GitHub Advisory; Imperva)
9) OpenAI Publishes Q3 Threat Report On Malicious Use
Summary
OpenAI released an updated report documenting cases of malicious model use by financially motivated and state‑linked actors. The throughline: adversaries bolt AI onto existing tradecraft to move faster rather than inventing novel capabilities. The report highlights detection and takedowns, with media coverage underscoring trendlines in influence operations and phishing support. The practical takeaway is that AI is an accelerator, not a magic wand, for attackers and defenders. (OpenAI; The Hacker News)
Why It Matters
Clarifies attacker value: scale and speed.
Reinforces the need for usage telemetry and account takedowns.
Supports risk‑based control design rather than fear‑based bans.
What To Do About It
Instrument your own model and agent usage; ban accounts and automate revocation.
Tune detections for AI‑assisted phishing and malware scaffolding patterns.
Share indicators with vendors and ISACs to shrink attacker reuse value.
Rock’s Musings
Threat intel without operational knobs is just storytelling. This report has knobs. If you run internal models, treat abusive usage the way you treat compromised accounts. Burn them fast. On the external side, prepare your execs for AI‑polished social engineering. The message is not “panic.” It’s “accelerate your controls to match theirs.”
(Citation: OpenAI; The Hacker News)
10) “CometJacking” Shows How A Single URL Can Hijack Perplexity’s AI Browser
Summary
LayerX researchers detailed “CometJacking,” a technique where a crafted URL parameter drives Perplexity’s Comet browser to consult agent memory and exfiltrate data from connected services like email and calendar. Security media confirmed and amplified the findings. Even if vendor mitigations reduce impact, the design lesson stands: agentic browsers that act on query strings and memory need strict guardrails. (LayerX Security; BleepingComputer)
Why It Matters
URL‑driven agent actions expand phishing into prompt‑driven exfiltration.
Memory and connector scopes are the new cookies and tokens.
Agentic browsers blend autonomous behavior with user trust at click time.
What To Do About It
Block or sanitize agent‑control query parameters at network egress.
Limit agent memory scope and retention; separate personal and corporate contexts.
Run controlled red‑team tests against agentic features before wide rollout.
Rock’s Musings
If one link can turn your browser into an insider threat, you don’t have a browser. You have an automation framework with a URL parser. Treat it accordingly. I’m not anti‑agent. I’m anti‑unchecked autonomy. Pilot these tools, pen test them, and assume the worst. Then decide if the productivity lift beats the residual risk for your users. That’s governance.
(Citation: LayerX Security; BleepingComputer)
The One Thing You Won’t Hear About But You Need To: NIST’s “Zero Drafts” On AI Standards Are Open For Input Until October 17
Summary
NIST’s AI Standards “Zero Drafts” pilot is moving from concept to text. The agency released an extended outline for a proposed zero draft on documentation of AI datasets and models and will consider input received by October 17 for the initial public draft. The same pilot is advancing TEVV guidance. This is the scaffolding many of you need to standardize evidence for regulators, auditors, and customers. Get your redlines in now or live with someone else’s definitions. (National Institute of Standards and Technology; NIST Document Outline)
Why It Matters
Establishes common artifacts for AI documentation and testing.
Reduces audit thrash by converging on templates and definitions.
Gives enterprises a chance to shape standards they’ll later be measured against.
What To Do About It
Form a small internal working group to submit concrete edits.
Map the draft templates to your existing AI control library and fill gaps.
Pilot the documentation template on one model and one dataset before year‑end.
Rock’s Musings
Complaining about audits while ignoring the chance to influence the template is a choice. This is the boring, valuable work that pays off when the next law lands. Treat “Zero Drafts” like a design partner program for governance. Send practitioners, not marketers, to do the comments. Then build once and reuse for every regulator who comes knocking.
(Citation: National Institute of Standards and Technology; NIST Document Outline)
If you want deeper dives, RockCyber’s posts and the RockCyber Musings feed are your best launchpad to take action.
👉 What do you think? Ping me with the story that keeps you up at night, or the one you think I overrated.
👉 The Wrap-Up drops every Friday. Stay safe, stay skeptical.
👉 For deeper dives, visit RockCyber.
Citations
BleepingComputer. (2025, October 7). Google won’t fix new ASCII smuggling attack in Gemini. https://www.bleepingcomputer.com/news/security/google-wont-fix-new-ascii-smuggling-attack-in-gemini/
CrowdStrike. (2025, October 6). CrowdStrike identifies campaign targeting Oracle E‑Business Suite via zero‑day CVE‑2025‑61882. https://www.crowdstrike.com/en-us/blog/crowdstrike-identifies-campaign-targeting-oracle-e-business-suite-zero-day-CVE-2025-61882/
Department of Justice. (2025, April 11). Justice Department implements critical national security program to protect Americans’ sensitive data. https://www.justice.gov/opa/pr/justice-department-implements-critical-national-security-program-protect-americans-sensitive
FireTail. (2025, October). Ghosts in the machine: ASCII smuggling across various LLMs. https://www.firetail.ai/blog/ghosts-in-the-machine-ascii-smuggling-across-various-llms
GitHub Advisory Database. (2025, September 30). CVE‑2025‑53967 figma‑developer‑mcp command injection. https://github.com/advisories/GHSA-gxw4-4fc5-9gr5
Imperva. (2025, October 8). CVE‑2025‑53967 in Figma MCP server: design oversight enables RCE. https://www.imperva.com/blog/cve-2025-53967-figma-mcp/
LayerX Security. (2025, October 3). CometJacking: how one click can turn Perplexity’s Comet AI browser against you. https://layerxsecurity.com/blog/cometjacking-how-one-click-can-turn-perplexitys-comet-ai-browser-against-you/
Legit Security. (2025, October 8). CamoLeak: critical GitHub Copilot vulnerability leaks private source code. https://www.legitsecurity.com/blog/camoleak-critical-github-copilot-vulnerability-leaks-private-source-code
National Institute of Standards and Technology. (2025). NIST’s AI standards “Zero Drafts” pilot project. https://www.nist.gov/artificial-intelligence/ai-research/nists-ai-standards-zero-drafts-pilot-project-accelerate
National Institute of Standards and Technology. (2025). Extended outline: Proposed zero draft for a standard on documentation of AI datasets and AI models. https://www.nist.gov/document/extended-outline-proposed-zero-draft-standard-documentation-ai-datasets-and-ai-models
National Security Division. (2025). Data Security Program overview and effective dates. https://www.justice.gov/nsd/data-security
NIST NVD. (2025, September 30). CVE‑2025‑10725 detail. https://nvd.nist.gov/vuln/detail/CVE-2025-10725
Office of the Governor of California. (2025, September 29). Governor Newsom signs legislation to protect Californians and promote innovation in AI. https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-legislation-to-protect-californians-and-promote-innovation-in-ai/
OpenAI. (2025, October 7). Disrupting malicious uses of AI: October 2025. https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/
Oracle. (2025, October 4). Oracle Security Alert Advisory CVE‑2025‑61882. https://www.oracle.com/security-alerts/alert-cve-2025-61882.html
Reuters. (2025, October 8). EU launches Apply AI to accelerate artificial intelligence adoption. https://www.reuters.com/technology/artificial-intelligence
SecurityWeek. (2025, October 9). GitHub Copilot Chat flaw leaked data from private repositories. https://www.securityweek.com/github-copilot-chat-flaw-leaked-data-from-private-repositories/
The Hacker News. (2025, October 8). Severe Figma MCP vulnerability lets hackers execute code remotely. https://thehackernews.com/2025/10/severe-figma-mcp-vulnerability-lets.html
The Hacker News. (2025, October 4). CometJacking: one click can turn Perplexity’s Comet AI browser into a data thief. https://thehackernews.com/2025/10/cometjacking-one-click-can-turn.html
The Hacker News. (2025, October 7). OpenAI details disruption of malicious uses of AI. https://thehackernews.com/2025/10/openai-details-disruption-of-malicious.html
Euronews. (2025, October 8). EU unveils Apply AI to boost AI adoption. https://www.euronews.com/next
BleepingComputer. (2025, October 3). CometJacking attack tricks Comet browser into stealing emails. https://www.bleepingcomputer.com/news/security/commetjacking-attack-tricks-comet-browser-into-stealing-emails/