Weekly Musings Top 10 AI Security Wrapup: Issue 13 September 26 - October 2, 2025
Biosecurity “zero-day,” EU incident reporting, California SB-53, Gemini “Trifecta” bugs, CISA info-sharing lapse, OpenAI teen controls, and more
AI risk isn’t one thing. It’s a stack of messy, interlocking problems that span biology, software supply chains, privacy, and geopolitics. This week delivered proof. Researchers demonstrated how AI-designed protein variants evaded DNA-synthesis screening and collaborated with government and industry to develop a patch. Brussels moved from principles to operations with a draft template for reporting “serious AI incidents.” California set the first statewide transparency obligations for frontier developers. Tenable disclosed three now-patched vulnerabilities across Google Gemini’s ecosystem, the kind that bite when your agents read logs, search results, or the web. Meanwhile, a U.S. shutdown sidelined cyber defenders and allowed a decade-old information-sharing law to lapse, just as AI-enabled fraud continues to rise. OpenAI launched parental controls for teen users. Lawmakers floated a federal AI evaluation bill. Europe tapped an AI firm to protect the digital euro, and standards bodies continued to push the incident taxonomy forward. If you’re building or buying AI, your governance plan needs to keep up.
1) Science flags an AI-biosecurity “zero-day” that evaded DNA screening
Summary: A Scientific study shows that open-source AI protein-design tools can “paraphrase” known toxins into tens of thousands of variants that evade standard gene-synthesis screening. The team coordinated quiet fixes with DNA providers and U.S. authorities before publishing and releasing mitigations. It’s a first, formalized “zero-day” in AI biosecurity, and it won’t be the last. (Science; Washington Post)
Why it matters
Screening gaps can convert lab ordering systems into attack surfaces.
Model-assisted protein design is advancing faster than guardrails.
Expect regulatory pressure for “security by default” in bio-AI tools.
What to do about it
Require vendor attestations on sequence-screening coverage and update cadence.
Gate any AI-assisted design workflow behind identity, audit, and human-in-the-loop checks.
Map your controls to NASEM and OECD guidance, and track updates on DNA-synthesis screening.
Rock’s musings: I’ve heard “dual use” dismissed as theoretical. This paper pulls that rug. Screening that matches strings will fail when models mutate structure while preserving function. If your organization uses design tools, treat screening like endpoint protection: implement it as a layered, monitored, and regularly patched process. I’d like to see a binding consortium standard with test suites, not just voluntary practices. And yes, that invites red-teaming by design. If we can run chaos drills for cloud, we can do it for wet labs. The worst practice is pretending this risk belongs to “someone else.”
2) EU opens consultation on serious AI incident reporting (Article 73)
Summary: The European Commission’s AI Office has published draft guidance and a reporting template for “serious incidents” involving high-risk AI, launching a consultation from September 26 to November 7. The draft clarifies scope, timelines, and interplay with other regimes and foreshadows 2–15 day reporting windows. Providers should start rehearsal now. (European Commission; National Law Review)
Why it matters
This is the playbook your EU regulators will use to judge post-market monitoring.
Timelines are tight, cross-regime conflicts are real, and definitions matter.
Incident taxonomies will anchor KPIs for boards and audit committees.
What to do about it
Stand up an AI incident response runbook that mirrors the draft template.
Pre-map what’s “serious” for your systems, with trigger criteria and owners.
Dry-run a 72-hour exercise that includes legal, PR, regulators, and customers.
Rock’s musings: Compliance theater won’t cut it. I want to see proof that you can collect the evidence chain for data, prompts, tools, and model versions without a scavenger hunt. If your MLOps can’t recreate an event timeline, you don’t have governance; you have vibes. Treat this like a breach response, not a policy appendix. Use the draft window to argue for sanity where the guidance conflicts with other reporting clocks.
3) California enacts SB-53, the Transparency in Frontier AI Act
Summary: On September 29, Governor Newsom signed SB-53, requiring large frontier developers to publish standardized safety frameworks and incident reports, with whistleblower protections and 15-day reporting for certain safety events. It stops short of third-party pre-deployment audits but sets a U.S. baseline for public disclosures starting in 2026. (Office of Governor Newsom; Associated Press)
Why it matters
First state law to mandate structured safety disclosures for frontier models.
Creates reference obligations others will copy, even before federal action.
Increases reputational cost for hand-wavy policies and “AI-washing.”
What to do about it
Develop a public-facing AI framework that aligns with the NIST AI RMF and ISO/IEC standards.
Define an incident taxonomy compatible with EU Article 73 to avoid duplicate work.
Stand up whistleblower channels specifically for AI risk.
Rock’s musings: I’m not crying for startups. This applies to big players, and the requirements are table stakes for any serious AI shop. The real test is whether disclosures are specific, versioned, and auditable. If the frameworks read like marketing, regulators and buyers should treat them like marketing. My advice to boards: tie compensation to meeting your published safety KRIs.
4) Tenable’s “Gemini Trifecta”: three now-patched flaws in Google’s AI
Summary: Tenable disclosed three vulnerabilities in Google’s Gemini suite affecting Search personalization, Cloud Assist, and the Browsing tool. The issues enabled search-injection, log-to-prompt injection, and sensitive data exfiltration with little to no user interaction. Google patched the flaws. The big lesson is agent exposure to poisoned inputs across your stack. (Tenable; SecurityWeek)
Why it matters
Agentic AI widens your attack surface to logs, calendars, and third-party content.
“Promptware” is just input validation by another name, and most orgs lack it.
Cloud RBAC doesn’t save you if your agent intentionally reads hostile data.
What to do about it
Treat untrusted content as code: sanitize, constrain tools, and enforce allow-lists.
Add canary strings and output-diff checks to detect instruction hijacks.
Update threat models to cover cross-app injections from email, docs, and web.
Rock’s musings: If your SOC triage bot reads logs, assume an attacker will write to those logs. That’s not hypothetical. The fix isn’t to ban assistants, it’s to fence them. Constrain tools, record tool use, and implement content security policies for your agents. If a vendor can’t show you injection tests that fail safely, don’t wire them into production systems.
5) Apple and OpenAI move to dismiss xAI’s antitrust suit
Summary: On October 1, Apple and OpenAI asked a federal judge to toss xAI’s complaint alleging anti-competitive collusion tied to ChatGPT integrations on iOS. The companies argue there’s no exclusive deal and no duty to partner with every chatbot. The case sits alongside xAI’s separate trade-secret dispute with OpenAI. (Reuters; The Verge)
Why it matters
Outcome affects platform neutrality expectations for AI assistants.
Discovery could expose partnership terms, model-evaluation data, or both.
Antitrust scrutiny will shape default AI distribution on phones and PCs.
What to do about it
Prepare dual-vendor AI strategies to avoid platform lock-in.
Negotiate portability clauses for memories, embeddings, and agent graphs.
Monitor for conduct remedies that might open new integration paths.
Rock’s musings: Antitrust isn’t about hurt feelings. It’s about market power over distribution. Whether or not this case survives, CISOs should assume default assistants will reflect platform economics, not their security needs. Build controls that outlast a vendor breakup, a new default, or a surprise delist.
6) Shutdown gut-punch: CISA furloughs, and the 2015 cyber info-sharing law lapses
Summary: The U.S. shutdown sidelined most of CISA’s workforce and let the Cybersecurity Information Sharing Act of 2015 expire on September 30, removing liability protections that encouraged private-public threat-intel sharing. Without those shields, many companies will hesitate to share indicators or tactics. Bad timing during peak ransomware season. (Washington Post; Cybersecurity Dive)
Why it matters
Less sharing reduces early warning on AI-accelerated attack cycles.
Legal exposure and FOIA risk chill collaboration.
State and sector ISACs will bear the brunt of the load.
What to do about it
Shift sharing to private ISAC/TIP frameworks with contractual protections.
Revisit counsel-approved sharing playbooks and indemnities.
Increase automated, anonymized exchange via STIX/TAXII until protections return.
Rock’s musings: We can debate policy later. Right now, defenders need continuity. If your threat-sharing program relies on government pipes, build Plan B. That’s not a political statement; it’s operational hygiene. And if you’ve been freeloading on community intel, today’s a good day to give back.
7) OpenAI rolls out parental controls and teen safety changes
Summary: OpenAI has launched parental controls that link parent-teen accounts and allow restrictions on content, image, and voice features, as well as memory and training. Optional alerts can notify parents of high-risk behavior. Critics immediately tested and found bypasses, arguing that defaults and verification still fall short. (OpenAI; Reuters)
Why it matters
Youth safety is now a regulatory and reputational KPI for AI platforms.
Controls shift liability expectations for schools and consumer brands.
Expect copycat features from other model providers.
What to do about it
Enforce teen settings at the IdP level for edu and youth-facing products.
Require vendors to prove age-gating and default-safe configurations.
Add hotline escalation and human-review procedures to your acceptable-use policy.
Rock’s musings: I’m glad to see movement, but safety that depends on perfect onboarding will fail at consumer scale. Defaults matter. Enterprises should assume teens will touch your public-facing AI even if your policy says they won’t. If you serve minors, treat teen safety like PCI. Prove it, don’t just declare it.
8) Senators Hawley and Blumenthal float a federal AI evaluation bill
Summary: A bipartisan proposal would establish a DOE-run evaluation program for advanced AI, requiring developers to submit models for testing on risks such as loss of control and adversarial misuse prior to deployment. It’s early-stage, but it signals momentum toward pre-release guardrails in the U.S. (Axios; Biometric Update)
Why it matters
Puts federal muscle behind standardized evals, not just voluntary tests.
Could be harmonized with EU GPAI duties and state rules, such as SB-53.
Requires resourcing, test suites, and red-team talent at scale.
What to do about it
Inventory evals you already run, mapped to foreseeable high-consequence risks.
Engage early with DOE/NIST pilots to avoid incompatible duplications.
Budget for third-party testing and secure model-artifact submission workflows.
Rock’s musings: If you’ve shipped models without structured evals, this is your wake-up. The smart play is to standardize on open, reproducible test harnesses now. When rules arrive, you won’t be rewriting your pipeline under duress.
9) ECB picks an AI firm to help secure the digital euro, and it raises $75M
Summary: The European Central Bank selected Feedzai to provide central fraud detection for the prospective digital euro, and on the same day, the company announced a $75 million funding round at a $2 billion valuation. The ECB framework is multi-vendor for core components and aims for a 2029 launch, pending legislative approval. (Reuters; European Central Bank)
Why it matters
Central-bank digital currency security will set a new bar for raising fraud controls.
Banks and PSPs will need model risk governance that aligns with the ECB’s requirements.
Signals that AI-native fraud detection is becoming part of the national payments infrastructure.
What to do about it
If you operate in the eurozone, assess integration paths and data-sharing controls.
Stress-test your AML/fraud stack for CBDC edge cases and synthetic identity.
Align model validation with European banking supervisors’ expectations.
Rock’s musings: Payments teams, this is your early notice. When a central bank bakes AI into transaction risk scoring, you’ll inherit interfaces and obligations. Don’t bolt this on later. Build fraud-model observability and appeals processes now.
10) OECD sharpens AI incident definitions as jurisdictions converge
Summary: The OECD’s AI Incidents and Hazards Monitor and related analysis, as well as the advanced incident taxonomy work, were advanced this week, laying the groundwork for cross-border alignment. The timing aligns with the EU’s Article 73 push for serious-incident reporting and will influence how firms classify and share AI-related issues. (OECD AI risks and incidents; OECD analysis)
Why it matters
Common definitions reduce double-work across EU, U.S., and sector rules.
Better taxonomies improve analytics and trend detection.
Stronger foundations for disclosure, research, and insurer modeling.
What to do about it
Map your internal incident types to OECD categories.
Add incident labels to your AI change-management and post-incident reviews.
Contribute anonymized cases to industry groups to improve the evidence base.
Rock’s musings: Governance needs boring plumbing. Good taxonomies beat clever dashboards. If you want actionable metrics for execs, start by naming things the same way every time. Then automate the capture. Your audit committee will thank you.
The one thing you won’t hear about but you need to
DHS OIG warned CISA hadn’t finalized the future of its Automated Indicator Sharing program as the 2015 law sunset approached. That warning hit a week before the shutdown and the lapse of liability protections. Translation: the public pipes were already shaky. If you depend on AIS-based flows, don’t wait for Congress to act. Stand up private exchange rails with contractual protections and clear minimization rules, then mirror out when legal shields return. (DHS OIG; Cybersecurity Dive)
Why it matters
A fragile public backbone, combined with a legal lapse, equals blind spots.
Adversaries won’t pause because Washington did.
Boards expect continuity of situational awareness.
What to do about it
Migrate priority sharing to sector ISACs and commercial TIPs with indemnities.
Implement anonymization and delayed disclosure policies reviewed by counsel.
Track reauthorization bills and be prepared to pivot once the shields are back in place.
Rock’s musings: Hope isn’t a control. Build redundancy into your intel sharing, just as you do in production. When the legal winds shift, you shouldn’t have to rebuild your program from scratch.
If you want deeper dives, RockCyber’s posts and the RockCyber Musings feed are your best launchpad for board-ready action items.
👉 What do you think? Ping me with the story that keeps you up at night, or the one you think I overrated.
👉 The Wrap-Up drops every Friday. Stay safe, stay skeptical.
👉 For deeper dives, visit RockCyber.
References
European Central Bank. (2025, October 2). ECB selects digital euro service providers.
European Commission. (2025, September 26). AI Act: Commission issues draft guidance and reporting template on serious AI incidents.
Microsoft. (2025, October 2). Researchers find — and help fix — a hidden biosecurity threat.
National Law Review. (2025, September 26). European Commission opens consultation on EU AI Act serious incident guidance.
OECD. (2025, September 25). How are AI developers managing risks?
OECD. (2025, September 30). Name it to tame it: Defining AI incidents and hazards.
OpenAI. (2025, September 30). Introducing parental controls.
Reuters. (2025, September 29). OpenAI launches parental controls in ChatGPT after California teen’s suicide.
Reuters. (2025, October 1). Apple, OpenAI ask judge to dismiss xAI lawsuit.
Reuters. (2025, October 2). ECB picks AI startup to prevent digital euro frauds.
SecurityWeek. (2025, September 30). Google patches Gemini AI hacks involving poisoned logs and search results.
Tenable Research. (2025, September 30). The Gemini Trifecta.
The Associated Press. (2025, September 29). California Gov. Gavin Newsom signs landmark bill creating AI safety measures.
The Office of Governor Gavin Newsom. (2025, September 29). Governor Newsom signs SB-53.
The Washington Post. (2025, October 2). How AI is making it easier to design new toxins without being detected.
The Washington Post. (2025, October 2). Shutdown guts U.S. cybersecurity agency at perilous time.
Axios. (2025, September 29). Exclusive: Hawley and Blumenthal unveil AI evaluation bill.
Biometric Update. (2025, October 2). Bill before Congress would push federal oversight into frontier AI.
Cybersecurity Dive. (2025, October 1). Landmark US cyber-information-sharing program expires.
DHS Office of Inspector General. (2025, September 25). CISA has not finalized plans for AIS beyond Sept. 30, 2025.