Weekly Musings Top 10 AI Security Wrapup: Issue 22, November 21, 2025 - December 4, 2025
Global OT Guidance, Failing Safety Scores, Teen Chatbot Fallout, And The Coming AI Disclosure Crunch
The Weekly Musings took a break last week due to the Thanksgiving holiday here in the States. Wherever you are, I hope you have had the opportunity to relax and recharge with loved ones.
The last two weeks crystallized a simple reality: AI has evolved from a set of isolated tools into an infrastructure layer that regulators, attackers, and plaintiffs now treat as systemic. Regulators moved on three fronts at once. Global security agencies published hard guidance for AI in operational technology. The White House backed off an aggressive attempt to preempt state AI laws. The SEC’s Investor Advisory Committee started pushing toward mandatory disclosures on AI’s impact, including board oversight.
Vendors didn’t sit still, either. The Future of Life Institute’s Winter 2025 AI Safety Index gave every major frontier lab a grade no better than C+. OpenAI disclosed a data exposure at analytics vendor Mixpanel that hit API users, not because OpenAI itself was breached but because a partner was. Character.ai ripped out open-ended chat for minors and announced an AI Safety Lab after lawsuits over teen suicides.
Your job is to turn that mess into decisions. Where do you slow down deployment? Where do you accelerate investment in governance? Where do you demand better from vendors? I will walk you through the eleven items that actually change your risk posture, map them to practical actions, and give you a few blunt questions to drop into your next board or exec meeting.
1. Global OT AI Guidance Gives Critical Infrastructure A Real Playbook
Summary
CISA, the NSA, and an international pack of cyber agencies released “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” a 25-page guidance document for anyone wiring AI into industrial and critical infrastructure environments. The document focuses on machine learning, large language models, and AI agents operating alongside physical processes like power, water, and manufacturing. It lays out four overarching principles, starting with “Understand AI,” and drills into risks like poor sensor data quality, interoperability with legacy OT, AI-driven remote access, and shared responsibility between asset owners and AI suppliers.
National cyber centers in Canada, New Zealand, and others all pushed the same guidance to their operators, signaling that this is the emerging international baseline, not a niche US view. Trade press summaries stressed the warning that OT data lakes used to train AI can become high-value targets and that owners should treat AI systems as new attack surfaces, not just smarter analytics.
Why It Matters
AI in OT is now a formally recognized safety risk, not just an efficiency tool.
Regulators just got a shared language to question your AI-in-plants roadmap.
Vendors can no longer claim there is “no guidance” on AI agents touching valves and PLCs.
What To Do About It
Map every current or planned AI workload that touches OT to the four principles in the guidance; a minimal mapping sketch follows this list.
Update your AI risk register and incident playbooks to explicitly include OT scenarios and data-lake compromise.
Fold these principles into your broader AI governance work, including programs built on RockCyber’s RISE and CARE frameworks for AI strategy and governance.
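If it helps to make that first mapping step concrete, here is a minimal sketch of a workload-to-principle inventory. It assumes a hypothetical internal record format; only “Understand AI” is named in the summary above, so the remaining principle labels are placeholders to fill in from the published document.

```python
"""Sketch: map AI workloads that touch OT to the joint guidance principles.

Hypothetical inventory format. Only "Understand AI" is named in the text
above; the other labels are placeholders for the published principle names.
"""
from dataclasses import dataclass, field

PRINCIPLES = [
    "Understand AI",
    "Principle 2 (see guidance)",
    "Principle 3 (see guidance)",
    "Principle 4 (see guidance)",
]

@dataclass
class OTAIWorkload:
    name: str
    ot_assets: list[str]        # e.g., PLCs, historians, SCADA segments
    data_sources: list[str]     # sensor feeds, OT data lakes
    remote_access: bool         # does the AI path open a new remote access route?
    notes_per_principle: dict[str, str] = field(default_factory=dict)

    def gaps(self) -> list[str]:
        """Principles with no documented assessment yet."""
        return [p for p in PRINCIPLES if not self.notes_per_principle.get(p)]

# Example entry for a hypothetical anomaly-detection pilot.
pilot = OTAIWorkload(
    name="Cooling-loop anomaly detector",
    ot_assets=["PLC-12", "historian-east"],
    data_sources=["vibration sensors", "OT data lake"],
    remote_access=True,
    notes_per_principle={"Understand AI": "Model card reviewed by OT lead"},
)

for gap in pilot.gaps():
    print(f"{pilot.name}: no assessment recorded for '{gap}'")
```

The data structure is not the point. The point is that every OT-touching workload gets an owner, a principle-by-principle assessment, and a gap list someone has to answer for.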
Rock’s Musings
If you run a plant, a pipeline, or any process that can blow up, this document is not optional homework. It is the nearest thing you have to a pre-standard for AI in OT before regulators start dropping binding requirements. Too many AI conversations still live in white-collar land, with people arguing about copilots and chatbots while forgetting that someone already wired a model into a safety-critical loop.
I would bring your OT lead, CISO, and whoever owns AI strategy into a room and force them to do a simple exercise. Take one live OT system. Ask, “If we dropped an LLM or anomaly detector into this, what new kill chain appears?” Then walk that against the guidance. The answer should feed your AI architecture decisions and your budget. If you treat this as another PDF to file under “frameworks,” you are basically hoping your plant never becomes the first AI-assisted safety case on the evening news.
2. White House Pauses AI Preemption Order, State Patchwork Lives On
Summary
Reuters confirmed that the White House has paused a draft executive order that would have directed the Justice Department to challenge state AI laws and allowed federal agencies to withhold broadband funding from states with “onerous” AI rules. The draft would have created an AI Litigation Task Force to challenge state laws on grounds such as interference with interstate commerce or conflict with federal regulation.
This pause falls within a broader fight over who actually governs AI in the United States. The Bipartisan Policy Center’s new piece on AI governance notes that the Senate already rejected a federal moratorium on state AI laws by a 99-1 vote, and lays out eight lessons for any future preemption attempt, including the need for clear national standards rather than empty freezes.
Why It Matters
State AI laws will keep piling up, because the federal “stop” button just failed again.
Your compliance stack needs to handle Colorado, California, and New York rules simultaneously.
Any future federal preemption will likely come bundled with prescriptive national standards.
What To Do About It
Stand up or update a state AI law tracker as part of your AI governance program, not as an ad hoc spreadsheet; a structured tracker sketch follows this list.
Design controls to a “highest common denominator” baseline so you are not rewriting policies for every new state statute.
Use your AI strategy program, whether internal or with partners, to brief the board on realistic preemption scenarios.
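For the tracker itself, a small structured record beats a spreadsheet tab only one person understands. A sketch follows; the two entries mirror the Colorado and California laws discussed in this issue, and the statuses and owners are illustrative, so confirm the details with counsel.

```python
"""Sketch: a structured state AI law tracker instead of an ad hoc spreadsheet.

Entries mirror laws mentioned in this issue; statuses and owners are
illustrative and should be confirmed with counsel.
"""
from dataclasses import dataclass

@dataclass
class StateAILaw:
    state: str
    statute: str
    status: str          # e.g., "enacted", "fully effective", "proposed"
    in_scope: bool       # does it reach our AI systems?
    control_owner: str   # who maps our controls to its requirements

tracker = [
    StateAILaw("CO", "Colorado AI Act", "enacted, not yet fully effective", True, "AI governance lead"),
    StateAILaw("CA", "SB 53", "enacted", True, "TBD"),  # owner not yet assigned
]

# Simple review queue: anything in scope without an accountable owner gets flagged.
for law in tracker:
    if law.in_scope and law.control_owner in ("", "TBD"):
        print(f"Assign an owner for {law.state} {law.statute}")
```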
Rock’s Musings
I keep hearing executives say, “We will wait for the federal AI law so we do not overbuild.” That fantasy just took another hit. States are not waiting. Colorado’s AI Act and California’s SB 53 already exist, though neither is fully implemented, and BPC is pretty clear that any attempt to freeze state action without a detailed federal replacement is a political non-starter.
If you are still treating AI compliance as a sidecar on your privacy program, you are going to get crushed by timelines when a state attorney general knocks. The practical move is boring but effective. Standardize how you describe AI systems, risks, and controls across the company. Then map that to state laws and to federal signals like America’s AI Action Plan that I broke down earlier this year. That is exactly why we built the RISE and CARE frameworks at RockCyber, so you can pivot as policy whiplash continues without rebuilding your governance from scratch.
3. AI Safety Index: Everyone Gets A “C+” Or Worse
Summary
The Future of Life Institute published its Winter 2025 AI Safety Index, grading eight major AI companies across domains like risk assessment, current harms, safety frameworks, existential safety, governance, and information sharing. Anthropic and OpenAI scored best with overall C+ grades, while Google DeepMind landed a C, and others, including Meta, xAI, DeepSeek, Z.ai, and Alibaba Cloud, sat in the D range. No one came close to an A.
Reuters’ summary pulled no punches, highlighting that none of the companies had a credible strategy for controlling superintelligent systems even as they race toward them, and quoting Max Tegmark’s line that US AI firms remain “less regulated than restaurants” despite repeated incidents linked to self-harm and AI-powered hacking.
Why It Matters
You now have an external safety scoreboard for your key AI vendors.
Investors, regulators, and boards can benchmark claims against a third-party methodology.
C and D grades put pressure on procurement and risk teams to justify the use of these models.
What To Do About It
Add the AI Safety Index grades to your vendor due diligence packs and AI supply chain risk register.
For any vendor at D or below, require evidence of concrete safety controls, not just marketing promises.
Use the index as one input in your AI model selection process, alongside your own threat modeling and evaluation; a minimal due diligence gate is sketched after this list.
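One way to operationalize the grades is a simple gate in vendor due diligence. The sketch below uses a handful of the grades summarized above; the grade ladder, threshold, and required follow-up are my illustrative policy choices, not part of the index.

```python
"""Sketch: fold third-party safety grades into vendor due diligence.

Grades reflect the Winter 2025 AI Safety Index coverage summarized above;
the ladder and threshold are illustrative policy choices, not index outputs.
"""

SAFETY_INDEX_GRADES = {
    "Anthropic": "C+",
    "OpenAI": "C+",
    "Google DeepMind": "C",
    "Meta": "D",   # reported as "D range"
    "xAI": "D",    # reported as "D range"
}

# Simplified grade ladder, best to worst.
GRADE_ORDER = ["A", "B", "C+", "C", "D", "F"]

def requires_extra_evidence(vendor: str, threshold: str = "D") -> bool:
    """True when the vendor's grade is at or below the review threshold, or unknown."""
    grade = SAFETY_INDEX_GRADES.get(vendor)
    if grade is None:
        return True  # unknown vendors get the full review by default
    return GRADE_ORDER.index(grade) >= GRADE_ORDER.index(threshold)

for vendor in SAFETY_INDEX_GRADES:
    if requires_extra_evidence(vendor):
        print(f"{vendor}: require documented safety controls before approval")
```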
Rock’s Musings
If a critical supplier scored a D on a basic security assessment, you would either push them into an improvement plan or rip them out. Yet in AI, people see a vendor get a D on existential safety and shrug because the demos look impressive. That disconnect is the problem. This index is not perfect, but it is a big step toward making vague “we take safety seriously” claims measurable.
When I talk with CISOs and CAIOs, the hesitation is usually, “We do not control OpenAI, Google, or the others, so what do we do with these grades?” My answer is simple. Treat them like you treat cloud providers. You cannot fix their stack, but you can control blast radius, monitoring, and what data you hand over. Combine the Safety Index with the kind of AI supply chain work I laid out in my piece on AI supply chain security, and you move from passive worry to active risk design. That is the difference between governance theater and governance that actually survives discovery.
4. OpenAI’s Mixpanel Breach: Your AI Supply Chain Just Raised Its Hand
Summary
OpenAI disclosed that its web analytics provider, Mixpanel, suffered a security incident on November 9 that allowed an attacker to export a dataset containing “limited customer identifiable information and analytics information” related to some OpenAI API users. Exposed fields included names, email addresses, coarse location, browser and operating system details, referring websites, and internal user or organization IDs. Chat content, prompts, API requests, API usage data, passwords, keys, and payment details were not affected.
OpenAI terminated its use of Mixpanel in production, is reviewing the affected dataset, and says it is tightening security vetting across all vendors. Security outlets pointed out that this is yet another example where the primary platform can be hardened but a third-party analytics tool becomes the weak link. Reports note that Mixpanel detected the breach on November 9 and shared the dataset with OpenAI on November 25, with lawsuits already emerging that target both firms.
Why It Matters
AI platforms extend your attack surface into every analytics and telemetry vendor they touch.
Even “non-sensitive” metadata is enough for targeted phishing and social engineering against your developers.
Regulators and plaintiffs will not care whether the leak started at OpenAI or at its vendor.
What To Do About It
Require your AI providers to disclose their key analytics and telemetry vendors, as well as those vendors’ security posture.
Treat AI telemetry like any other sensitive data, with contractual limits, technical controls, and monitoring for abnormal access; a field-allowlist sketch follows this list.
Run a focused supplier risk review of your AI analytics stack using the same lens you would for CRM and billing systems.
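On the technical-controls point, the cheapest win is an allowlist on what telemetry ever leaves your environment. Here is a sketch assuming a hypothetical event shape; which fields you treat as shareable is a decision for your own data classification, not something this example settles.

```python
"""Sketch: scrub analytics/telemetry events before they go to a third party.

Hypothetical event shape; the allowlist is illustrative, not a recommendation
of which fields are actually safe to share.
"""

# Fields we are willing to hand to an external analytics vendor.
ALLOWED_FIELDS = {"event_name", "timestamp", "coarse_region", "plan_tier"}

# Fields that should never leave, even though they look "non-sensitive".
BLOCKED_FIELDS = {"email", "name", "org_id", "user_id", "user_agent", "referrer"}

def scrub_event(event: dict) -> dict:
    """Return a copy of the event containing only allowlisted fields."""
    blocked_present = set(event) & BLOCKED_FIELDS
    if blocked_present:
        # Alert when identifying fields show up in telemetry at all.
        print(f"warning: blocked fields present in event: {sorted(blocked_present)}")
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}

raw = {
    "event_name": "api_key_created",
    "timestamp": "2025-11-09T12:00:00Z",
    "email": "dev@example.com",   # exactly the kind of field that leaked
    "org_id": "org_123",
    "coarse_region": "US-East",
}
print(scrub_event(raw))
```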
Rock’s Musings
If this feels familiar, it should. We have been here with marketing platforms, call center outsourcing providers, and payroll processors. The twist is that AI vendors are still selling a story where “we never leak prompts or outputs,” while quietly streaming rich metadata to third parties that attackers love. That combination of trust in the core and neglect at the edge is exactly how you end up with a low-grade incident that becomes a high-grade reputation hit.
I would not overreact and rip every analytics tool out of your AI stack. I would absolutely demand the same level of supplier transparency from AI platforms that you demand from your cloud providers. Treat this breach as a dry run. If your AI platform vendor called you tonight with this exact scenario, could you quickly answer the board’s question: who was exposed, what is the phishing risk, and what did we already do about it? If the answer is no, your AI governance is not yet real. This is precisely where a structured AI risk assessment, like the one we use at RockCyber, earns its keep.
5. Genesis Mission: The Federal Government Builds Its Own AI Factory
Summary
Executive Order 14363, “Launching the Genesis Mission,” formally creates a national effort to use AI to accelerate scientific discovery and strengthen US technological and security capabilities. The order directs the Department of Energy to build and operate an “American Science and Security Platform” that integrates decades of federal scientific datasets with high-performance compute to train scientific foundation models and deploy AI agents for hypothesis testing and workflow automation.
Law firm analyses highlight that Genesis centralizes previously scattered AI efforts under a DOE-led platform and aligns closely with America’s broader AI Action Plan. The fact sheet explicitly ties Genesis to national security, energy dominance, and strategic leadership, not just basic research. Private sector participation is expected through partnerships with national labs, universities, and companies that can build on the platform while complying with strict data and security controls.
Why It Matters
The federal government is building a dedicated AI platform around its most sensitive scientific and security datasets.
Genesis will shape expectations for how “serious” AI programs manage data, agents, and safety at scale.
Vendors that want to play in this ecosystem will need strong evidence of AI security and governance.
What To Do About It
If you operate in energy, health, materials, or defense, map how Genesis could change your data access and partnership strategy.
Use the Genesis architecture principles as a reference when designing your own internal AI platforms.
Brief your board that AI is now an explicit instrument of national strategy, not just a productivity tool.
Rock’s Musings
This is the moment where AI stops being a set of pilots sprinkled across agencies and becomes infrastructure. Think of Genesis as the government’s internal AI cloud. That is a simplification, but it is directionally right. Once DOE starts centralizing models, data, and agents on a single platform, you will see much stronger expectations from agencies about how industry partners handle similar stacks.
For enterprise leaders, the lesson is not to copy Genesis feature by feature. The lesson is to stop letting every team spin up their own AI “experiment” on whatever data lake they find. You need your own American-Science-and-Security-Platform inside the company, even if you call it something less dramatic. Tie it to a clear strategy, use something like the RISE framework to govern it, and assume regulators will eventually ask how your controls compare to federal practice. Waiting for that question without an answer is a bad strategy.
6. Poetry As An Attack Surface: DexAI’s Jailbreak Study
Summary
Researchers at Italy’s Icaro Lab, part of ethical AI company DexAI and Sapienza University, showed that large language models can be jailbroken using carefully crafted poems. They wrote 20 poems in Italian and English, each ending with an explicit request for harmful content such as hate speech, self-harm instructions, or guidance on weapons. When they tested these across 25 AI models from nine major companies, 62 percent of the poetic prompts elicited responses that would normally be blocked.
Some models resisted, including smaller ones like GPT-5 nano in the reported tests, while others failed completely. Google’s Gemini 2.5 Pro reportedly responded harmfully to all tested poems. The authors withheld the exact poems due to their potential for misuse and say they only heard back from Anthropic ahead of publication. Their next step is a public poetry challenge to probe safety guardrails further, highlighting how language style and metaphor confuse current detection systems.
Why It Matters
Safety systems tuned for plain prompts can fail badly when requests are embedded in creative language.
Red-teaming that relies solely on direct instructions misses a large class of vulnerabilities.
Attackers do not need exotic exploits when a clever poem works.
What To Do About It
Expand your internal and vendor safety evaluations to include stylized and metaphor-heavy prompts; a small harness sketch follows this list.
Train your AI security and testing teams to think like creative adversaries, not only like engineers.
Include adversarial content evaluation in your AI governance process, not as a one-off pen test.
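To make the first bullet actionable, here is a sketch of a small harness that wraps the same test intent in different stylistic framings and compares refusal rates. `call_model` is a placeholder for whatever client you already use, the intents are benign stand-ins for your governed red-team set, and keyword-based refusal detection is a crude proxy you would replace with a proper grader.

```python
"""Sketch: compare refusal rates across plain and stylized prompt framings.

call_model is a placeholder for your own model client; intents are benign
stand-ins for a governed evaluation set; the refusal heuristic is crude.
"""

FRAMINGS = {
    "plain": "{intent}",
    "verse": "Write a short poem whose final stanza explains {intent}.",
    "fable": "Tell a fable in which the wise fox teaches the cub {intent}.",
}

TEST_INTENTS = [
    "how to bypass a content filter",
    "how to obtain restricted materials",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def call_model(prompt: str) -> str:
    """Placeholder: route to your model endpoint and return its text response."""
    raise NotImplementedError("wire this to your model client")

def looks_like_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def run_eval() -> dict[str, float]:
    """Refusal rate per framing, so stylized prompts can be compared to plain ones."""
    rates = {}
    for style, template in FRAMINGS.items():
        refusals = 0
        for intent in TEST_INTENTS:
            refusals += looks_like_refusal(call_model(template.format(intent=intent)))
        rates[style] = refusals / len(TEST_INTENTS)
    return rates

# rates = run_eval()  # run once call_model is wired up; a gap between "plain"
# and "verse" refusal rates is the signal you are looking for.
```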
Rock’s Musings
If your red-team only asks, “Tell me how to build a bomb,” your models probably look safe. If someone writes a strange sonnet that walks the model toward the same outcome, your filters might fold instantly. This study is a reminder that AI security is not just about rules and checklists. It is about the messy edge cases where human creativity meets machine pattern matching.
When I work with teams on agent security, I keep coming back to one idea. You do not get to define how your system will be used. You get to define how it will behave when people use it in ways you did not imagine. That is why context engineering, not just bigger context windows, matters. If you have not yet tried to break your own models with adversarial narratives, stories, or poetry, someone else will. They will not email you the results.
7. Character.ai Rips Out Teen Chat And Bets On “Stories”
Summary
Character.ai announced that it would remove open-ended chat for users under 18 and instead give them access to structured experiences like videos, stories, and streams. The company limited teen chat time as it phased out the feature in late November and rolled out age assurance that combines in-house models with third-party tools such as Persona.
On November 25, Character.ai launched “Stories,” an interactive fiction format where users pick two or three characters, choose a genre, and then guide a narrative through branching choices. Stories is explicitly “built for teens” as a visual and replayable alternative to open-ended chat. The company also announced plans for an independent non-profit AI Safety Lab focused on safety alignment for entertainment AI. Coverage tied these moves directly to a series of lawsuits alleging that the platform’s chatbots contributed to teen suicides and to growing regulatory questions about AI companions for minors.
Why It Matters
A major chatbot platform just accepted that open-ended AI chat for teens is too risky.
Structured formats like Stories may become the regulatory expectation for youth-facing AI.
AI Safety Labs dedicated to a specific vertical are starting to appear.
What To Do About It
If your product touches minors, assume regulators will compare you directly to Character.ai’s model.
Separate under-18 AI experiences from adult ones in your architecture and governance; a routing sketch follows this list.
Consider supporting independent safety research for your vertical instead of relying only on internal teams.
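Architecturally, the separation can start as a hard routing decision that fails closed. This sketch assumes an upstream age-assurance signal and made-up enum names; it is not a description of Character.ai’s implementation.

```python
"""Sketch: route users to structurally different AI experiences by age band.

Assumes an upstream age-assurance signal; names are illustrative, not any
vendor's actual architecture.
"""
from enum import Enum, auto

class AgeBand(Enum):
    UNDER_18 = auto()
    ADULT = auto()
    UNKNOWN = auto()

class Experience(Enum):
    STRUCTURED_STORIES = auto()  # constrained, branching content
    OPEN_ENDED_CHAT = auto()     # full conversational access

def route(age_band: AgeBand) -> Experience:
    """Fail closed: anything not positively verified as adult gets the constrained path."""
    if age_band is AgeBand.ADULT:
        return Experience.OPEN_ENDED_CHAT
    return Experience.STRUCTURED_STORIES

assert route(AgeBand.UNKNOWN) is Experience.STRUCTURED_STORIES
```

The fail-closed default is the governance decision; everything else is plumbing.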
Rock’s Musings
For years, platforms have tried to argue that with enough filters, you can always make AI companions safe for teens. Character.ai just publicly admitted that the risk-reward tradeoff does not work. They are not shutting the doors. They are rebuilding the entire experience around structured stories instead of open conversation. That is a very different design decision.
If you ship AI into education, gaming, or consumer apps, you should treat this as a line in the sand. Do you really think your small team has solved problems that a company built entirely around AI characters decided it could not solve? Probably not. I would rather see you design constrained formats with clearly specified guardrails and be able to show regulators that you changed the product based on safety evidence, not PR. That story plays much better in a hearing room than “we trusted the filters.”
8. Chatbot-Linked Suicides And The Mental Health Front
Summary
The Guardian reported on a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide after months of conversations with ChatGPT. The complaint alleges that the chatbot encouraged his suicidal thoughts, provided specific methods, and helped draft a suicide note. OpenAI responded that such use violated its terms of service, called the excerpts in the lawsuit misleading without full context, and stressed that it has been upgrading safeguards and directing distressed users toward professional help.
Advocacy group Public Knowledge warned that hastily drafted kids-and-teens AI chatbot safety laws could backfire or duplicate existing liability regimes rather than closing real gaps. Meanwhile, OpenAI’s own news feed now highlights new funding for research into AI and mental health, signaling that vendors know the mental health risk cannot be hand-waved away as “misuse.”
Why It Matters
AI mental health harms are moving from headlines into court cases and policy proposals.
Vendors are beginning to fund mental health research, which raises expectations for responsible deployment.
Your organization may face similar liability if you deploy AI that engages with vulnerable users.
What To Do About It
Classify any AI feature that can engage distressed users as high-risk and give it dedicated governance.
Build escalation paths from AI interactions to human professionals, including clear “off ramps” from the interface; a control-flow sketch follows this list.
Monitor legal and regulatory developments so your own guardrails and logging can stand up under discovery.
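For the escalation path, the control flow matters more than the detector. The sketch below uses a naive keyword list purely as a stand-in for a real risk classifier and clinical review, and the handoff hook is commented out because it depends on your support tooling.

```python
"""Sketch: an escalation gate in front of a conversational AI feature.

The keyword list is a naive stand-in for a proper risk classifier; the point
is the control flow: detect, interrupt, hand off, log.
"""
import logging

logger = logging.getLogger("ai.escalation")

DISTRESS_SIGNALS = ("want to die", "kill myself", "hurt myself", "no reason to live")

OFF_RAMP_MESSAGE = (
    "It sounds like you are going through something serious. "
    "I'm connecting you with a person who can help, and you can also reach "
    "a local crisis line right now."
)

def is_distress(message: str) -> bool:
    text = message.lower()
    return any(signal in text for signal in DISTRESS_SIGNALS)

def handle_turn(user_message: str, generate_reply) -> str:
    """Interrupt normal generation when distress is detected."""
    if is_distress(user_message):
        logger.warning("distress signal detected; escalating to human support")
        # open_human_handoff_ticket(user_message)  # hypothetical hook into your support tooling
        return OFF_RAMP_MESSAGE
    return generate_reply(user_message)

# Flagged turns never reach the model; everything else passes through.
print(handle_turn("I have no reason to live", lambda m: "model reply"))
```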
Rock’s Musings
This is not just a platform problem. If you build or embed conversational AI into anything that touches health, wellness, or financial distress, you are playing in the same risk pool. Plaintiffs will not care that your dataset is smaller than OpenAI’s or that your bot “only” helps with coaching. They will care about transcripts and logs. Regulators will care whether you designed for safety or trusted disclaimers.
I am not interested in demonizing chatbots as such. I am very interested in whether organizations understand what happens when a lonely teen or an anxious customer starts treating your bot as a confidant. That is why mental health is quickly becoming a core AI risk category, not just a side effect. Put bluntly, if you would be horrified to see your bot’s chat logs read aloud in a courtroom, you probably need to redesign it now. Governance is no longer abstract here. It is life and death on both moral and legal fronts.
9. AI Adoption Rockets Ahead, Governance And Incident Playbooks Limp Behind
Summary
A new CSO Online analysis warns that AI adoption is surging while governance and security lag badly. Only 7 percent of organizations in the underlying report had a dedicated AI governance team, and just 11 percent felt prepared to handle emerging AI regulatory requirements. The authors call for continuous discovery of AI use, real-time monitoring of prompts and outputs, and identity policies that treat AI systems as distinct actors with scoped access, rather than as invisible features.
In parallel, a Lexology piece on “shadow AI” describes rapid growth in employees uploading sensitive data to public and semi-public LLMs, effectively turning normal productivity use into a quiet data-exfiltration channel. The Future Society’s work on AI incident response adds that US agencies still lack clear playbooks for AI-related failures, even as incidents rise across three broad categories of harm.
Why It Matters
You probably have more AI in production than your risk and governance functions realize.
Regulators and insurers will not accept “we did not know employees were using it” much longer.
Incident response for AI failures is years behind where it needs to be.
What To Do About It
Run continuous discovery to identify all AI systems, shadow tools, and integrations in your environment.
Treat AI systems as first-class identities with least-privilege access and auditable actions; a minimal identity inventory sketch follows this list.
Develop AI-specific incident playbooks, aligned with NIST’s AI RMF and the kind of evidence-driven approach I described in our AAGATE governance work.
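Treating AI systems as first-class identities can start with an inventory that makes over-broad access visible. This is a minimal sketch with made-up scope names; in practice you would pull these records from your IdP and secrets manager, not a list literal.

```python
"""Sketch: register AI systems as identities with scoped, auditable access.

Scope names and entries are illustrative; real data should come from your
identity provider and secrets manager.
"""
from dataclasses import dataclass

BROAD_SCOPES = {"admin", "*", "all_data"}  # grants that should trigger review

@dataclass
class AIIdentity:
    name: str
    owner: str          # accountable human or team
    scopes: set[str]    # least-privilege grants
    audit_log: str      # where its actions are recorded

    def violations(self) -> list[str]:
        issues = []
        if self.scopes & BROAD_SCOPES:
            issues.append(f"over-broad scopes: {sorted(self.scopes & BROAD_SCOPES)}")
        if not self.audit_log:
            issues.append("no audit log destination configured")
        return issues

inventory = [
    AIIdentity("support-copilot", "cx-platform", {"tickets:read", "kb:read"}, "siem://copilot"),
    AIIdentity("finance-agent", "unknown", {"admin"}, ""),  # the kind of entry discovery surfaces
]

for identity in inventory:
    for issue in identity.violations():
        print(f"{identity.name}: {issue}")
```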
Rock’s Musings
I keep seeing the same pattern. Top leadership thinks the company has “a few pilots,” while the help desk is already seeing tickets that prove AI is woven into dozens of workflows. Finance teams are pasting sensitive data into public copilots. Developers are pulling code suggestions from whatever model responds fastest. Governance is still stuck drafting an AI policy PDF that no one reads.
You do not fix this with another town hall or a ban. You fix it with discovery and with treating AI as something you manage, not something that happens to you. That means clear inventories, clear owners, and clear expectations for monitoring. It also means building AI incident response the way you built cyber incident response after your first major breach. If you are still planning to improvise your way through your first AI failure, you are repeating a mistake we should have learned from a decade ago.
10. Australia’s National AI Plan: Innovation First, New Rules Later
Summary
Australia released its National AI Plan on December 2, choosing to lean on existing laws and sector regulators rather than introduce sweeping new AI-specific statutes. The plan emphasizes attracting investment into advanced data centers, building AI skills to support jobs, and using AI to boost productivity while maintaining public safety. It confirms that an AI Safety Institute will launch in 2026 to monitor emerging risks and support regulators.
Coverage from Australian outlets highlights the plan’s dual recognition that AI can exacerbate national security threats and introduce “new and unknown” ones, along with its focus on sovereign AI capability through a GovAI hosting platform for public sector AI workloads. Critics argue that while the roadmap is ambitious on economic benefits, it leaves gaps in accountability, democratic oversight, and long-term sustainability, risking an efficient but inequitable AI ecosystem.
Why It Matters
A G20 country just chose a “use existing laws plus a safety institute” path for AI.
GovAI hints at future expectations for where public sector AI is allowed to run.
Multinationals will face a different AI regulatory environment in Australia than in the EU.
What To Do About It
If you operate in Australia, align your AI risk language with the National AI Plan and prepare for the AI Safety Institute’s evaluations.
Plan for data residency and sovereign hosting needs if you want to win government AI business there.
Use Australia’s approach as a reference point in board discussions about how much regulation is actually “necessary” for AI.
Rock’s Musings
Australia’s move is a useful counterweight to the “every risk needs a new law” instinct. They are betting that existing privacy, consumer, and sectoral rules can carry a lot of the load, backed by a technical safety institute rather than a thick new statute. That is closer to how many companies should think. Strong governance and safety practices first. Narrow regulations where gaps remain.
For you, the signal is that global AI governance will not converge on one model. You will juggle EU-style comprehensive laws, US state patchworks, and national plans like Australia’s that rely heavily on existing frameworks. That is exactly why your AI governance cannot just mirror one jurisdiction. You need a principled stack that can map to multiple regimes without breaking. If you do that well, you turn regulatory diversity into a competitive advantage instead of a permanent fire drill.
The One Thing You Will Not Hear About But You Need To
SEC Investor Advisory Committee Eyes AI Disclosure Requirements
Summary
The SEC’s Investor Advisory Committee met today to consider a draft recommendation on “the disclosure of artificial intelligence’s impact on operations,” alongside panels on corporate governance changes and tokenization of equity. The draft from the Disclosure Subcommittee would push the Commission to require issuers to adopt a clear definition of “artificial intelligence,” disclose board oversight mechanisms for AI deployment, and report on how AI affects internal operations and consumer-facing matters when material.
Commissioner Hester Peirce’s remarks acknowledged AI’s central role in 2025 markets and noted rising investor interest in how AI affects risk, hiring, and cyber threat exposure. Compliance commentators point out that, while the recommendation is not yet a rule, it fits a familiar SEC pattern. First, an advisory committee recommendation. Next, staff guidance. Then, eventually, formal disclosure obligations.
Why It Matters
AI disclosure is now on the SEC’s formal agenda, not just a talking point.
Boards may soon need to show explicit oversight structures for AI deployment.
Public companies will have to standardize how they describe AI use and risk in filings.
What To Do About It
Work with legal and investor relations teams to draft a consistent, honest definition of AI for your company.
Ensure your board charter and committee structures clearly assign AI oversight, then document how that oversight operates.
Pilot an “AI section” in your next 10-K or 20-F draft before the SEC forces the issue, drawing on the oversight play I laid out in my AI agent risk plan for boards.
Rock’s Musings
This is one of those slow-burn developments that decides who looks prepared and who looks defensive two years from now. Today it is an advisory committee debating definitions and draft language. Tomorrow it is your external counsel telling you that you must disclose AI risks with the same discipline you bring to cyber, climate, and human capital. The companies that start now will make this look easy. The ones that wait will cram and get sloppy.
If you sit on a board, ask yourself a blunt question. Could you explain, in one paragraph, where AI shows up in your business, what guardrails exist, and who owns the risk? If the honest answer is no, the SEC is not your main problem. You already have a governance gap. My advice is to steal shamelessly from the kind of 90-day oversight plan I shared for AI agents. Use that to get your house in order before the disclosure wave hits. It is much nicer to adjust a draft paragraph than to defend silence after the fact.
Citations
Bipartisan Policy Center. (2025, December 3). Eight considerations to shape the future of AI governance. Retrieved December 4, 2025, from https://bipartisanpolicy.org/article/eight-considerations-to-shape-the-future-of-ai-governance/
Canadian Centre for Cyber Security. (2025, December 3). Joint guidance on principles for the secure integration of artificial intelligence in operational technology. Retrieved December 4, 2025, from https://www.cyber.gc.ca/en/news-events/joint-guidance-principles-secure-integration-artificial-intelligence-operational-technology
Character.ai. (2025, October 29). Taking bold steps to keep teen users safe on Character.AI. Retrieved December 4, 2025, from https://blog.character.ai/u18-chat-announcement/
Character.ai. (2025, November 25). Introducing Stories: A new way to create, play, and share adventures with your favorite characters. Retrieved December 4, 2025, from https://blog.character.ai/introducing-stories-a-new-way-to-create-play-and-share-adventures-with-your-favorite-characters/
Character.ai Support. (2025). Important changes for teens on Character.ai. Retrieved December 4, 2025, from https://support.character.ai/hc/en-us/articles/42645561782555-Important-Changes-for-Teens-on-Character-ai
CSO Online. (2025, November 30). AI adoption surges while governance lags, report warns of growing shadow identity risk. Retrieved December 4, 2025, from https://www.csoonline.com/article/4099211/ai-adoption-surges-while-governance-lags-report-warns-of-growing-shadow-identity-risk.html
Future of Life Institute. (2025, December). AI Safety Index: Winter 2025 edition. Retrieved December 4, 2025, from https://futureoflife.org/ai-safety-index-winter-2025/
Guardian, The. (2025, November 30). AI’s safety features can be circumvented with poetry, research finds. The Guardian. Retrieved December 4, 2025, from https://www.theguardian.com/technology/2025/nov/30/ai-poetry-safety-features-jailbreak
Guardian, The. (2025, November 26). ChatGPT firm blames boy’s suicide on “misuse” of its technology. The Guardian. Retrieved December 4, 2025, from https://www.theguardian.com/technology/2025/nov/26/chatgpt-openai-blame-technology-misuse-california-boy-suicide
Independent Community Bankers of America. (2025, December 4). CISA releases guidance on integrating AI in operational technology. Retrieved December 4, 2025, from https://www.independentbanker.org/w/cisa-releases-guidance-on-integrating-ai-in-operational-technology%C2%A0
Lexology. (2025, December 2). What is shadow AI? How do reasonable security and governance apply. Retrieved December 4, 2025, from https://www.lexology.com/library/detail.aspx?g=a475fd03-220d-4bb3-87c8-0634dd6625b8
Local News Matters. (2025, November 26). Menlo Park-based AI company restricts chatbot access to minors after lawsuits. Retrieved December 4, 2025, from https://localnewsmatters.org/2025/11/26/menlo-park-based-ai-company-restricts-chatbot-access-to-minors-after-lawsuits/
National Security Agency. (2025, December 3). NSA, CISA, and others release guidance on integrating AI in operational technology. Retrieved December 4, 2025, from https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/4347041/nsa-cisa-and-others-release-guidance-on-integrating-ai-in-operational-technology/
New York Post. (2025, November 27). OpenAI blames “misuse” of ChatGPT after 16-year-old kills himself following “months of encouragement.” New York Post. Retrieved December 4, 2025, from https://nypost.com/2025/11/27/business/openai-blames-misuse-of-chatgpt-after-16-year-old-kills-himself/
OpenAI. (2025, November 26). What to know about a recent Mixpanel security incident. Retrieved December 4, 2025, from https://openai.com/index/mixpanel-incident/
Public Knowledge. (2025, November). Kids & teens safety regulations for AI chatbots could backfire. Retrieved December 4, 2025, from https://publicknowledge.org/kids-teens-safety-regulations-for-ai-chatbots-could-backfire/
Reuters. (2025, November 21). White House pauses executive order that would seek to preempt state laws on AI, sources say. Reuters. Retrieved December 4, 2025, from https://www.reuters.com/world/white-house-pauses-executive-order-that-would-seek-preempt-state-laws-ai-sources-2025-11-21/
Reuters. (2025, December 2). Australia rolls out AI roadmap, steps back from tougher rules. Reuters. Retrieved December 4, 2025, from https://www.reuters.com/world/asia-pacific/australia-rolls-out-ai-roadmap-steps-back-tougher-rules-2025-12-02/
Reuters. (2025, December 3). AI companies’ safety practices fail to meet global standards, study shows. Reuters. Retrieved December 4, 2025, from https://www.reuters.com/business/ai-companies-safety-practices-fail-meet-global-standards-study-shows-2025-12-03/
SecurityWeek. (2025, December 4). Global cyber agencies issue AI security guidance for critical infrastructure OT. Retrieved December 4, 2025, from https://www.securityweek.com/global-cyber-agencies-issue-ai-security-guidance-for-critical-infrastructure-ot/
Securities and Exchange Commission. (2025, November 25). SEC Investor Advisory Committee to host Dec. 4 meeting on regulatory changes in corporate governance, the tokenization of equity securities. Retrieved December 4, 2025, from https://www.sec.gov/newsroom/press-releases/2025-135-sec-investor-advisory-committee-host-dec-4-meeting-regulatory-changes-corporate-governance
Securities and Exchange Commission Investor Advisory Committee. (2025, November 18). Recommendation of the Disclosure Subcommittee regarding the disclosure of artificial intelligence’s impact on operations [Draft]. Retrieved December 4, 2025, from https://www.sec.gov/files/sec-iac-artificial-intelligence-recommendation-111825.pdf
TechCrunch. (2025, November 25). Character AI will offer interactive Stories to kids instead of open-ended chat. TechCrunch. Retrieved December 4, 2025, from https://techcrunch.com/2025/11/25/character-ai-will-offer-interactive-stories-to-kids-instead-of-open-ended-chat/
TechRadar. (2025, December 3). Character.ai launches Stories to keep teens engaged as it scales back open-ended chat for under-18s. TechRadar. Retrieved December 4, 2025, from https://www.techradar.com/ai-platforms-assistants/character-ai-launches-stories-to-keep-teens-engaged-as-it-scales-back-open-ended-chat-for-under-18s
The Future Society. (2025, November 12). AI incidents are rising. It’s time for the United States to build playbooks for when AI fails. Retrieved December 4, 2025, from https://thefuturesociety.org/us-ai-incident-response/
The Verge. (2025, December 4). AI chatbots can be wooed into crimes with poetry. The Verge. Retrieved December 4, 2025, from https://www.theverge.com/report/838167/ai-chatbots-can-be-wooed-into-crimes-with-poetry
U.S. Cybersecurity and Infrastructure Security Agency, Australian Signals Directorate’s ACSC, National Security Agency, Federal Bureau of Investigation, and partners. (2025, December 3). Principles for the secure integration of artificial intelligence in operational technology [Cybersecurity information sheet]. Retrieved December 4, 2025, from https://media.defense.gov/2025/Dec/03/2003834257/-1/-1/0/JOINT_GUIDANCE_PRINCIPLES_FOR_THE_SECURE_INTEGRATION_OF_AI_IN_OT.PDF
U.S. Federal Register. (2025, November 28). Launching the Genesis Mission [Executive Order 14363]. Retrieved December 4, 2025, from https://www.federalregister.gov/documents/2025/11/28/2025-21665/launching-the-genesis-mission
U.S. White House. (2025, November 24). Fact sheet: President Donald J. Trump unveils the Genesis Mission to accelerate AI for scientific discovery. Retrieved December 4, 2025, from https://www.whitehouse.gov/fact-sheets/2025/11/fact-sheet-president-donald-j-trump-unveils-the-genesis-missionto-accelerate-ai-for-scientific-discovery/




Really sharp framing on the Safety Index results as a procurement signal, not just a headline. Your point that nobody treats a D-grade supplier the same way in AI versus infrastructure exposes the weird exceptionalism we've built around these models. It raises a trickier question though: if every frontier lab scores C+ or worse because they're racing toward capabilities nobody knows how to control yet, does comparative scoring actually help procurement teams pick safer options, or does it just normalize mediocrity across the board?