Weekly Musings Top 10 AI Security Wrapup: Issue 25 | December 19, 2025 – January 1, 2026
Holiday Edition: Critical Vulnerabilities, New Government Centers, and the Reality of Agentic Risks
The holiday season brought CISOs something other than year-end budget reviews: a critical vulnerability in one of the most deployed AI frameworks, OpenAI admitting prompt injection will never be solved, and China drafting rules for human-like AI that put Western regulators to shame.
This two-week period encapsulates where we stand. AI frameworks grow faster than their security postures can handle. Governments scramble to catch up with technology that moves quarterly while policy moves annually. The decisions you make in Q1 about AI governance, framework security, and supply chain risk will determine whether you are a case study or a success story. Here is what you need to know.
1. LangChain LangGrinch Vulnerability Exposes 847 Million Installs to Secret Exfiltration
Security researcher Yarden Porat discovered a critical serialization injection vulnerability in LangChain Core on December 4, 2025, with the advisory published on December 24. Tracked as CVE-2025-68664 with a CVSS score of 9.3, the flaw affects the dumps() and dumpd() functions that form the backbone of LangChain’s serialization pipeline. The vulnerability allows attackers to inject malicious payloads through LLM response fields like additional_kwargs and response_metadata, enabling environment variable exfiltration and potential remote code execution via Jinja2 templates (Security Affairs). LangChain’s total downloads exceed 847 million, with approximately 98 million in the past month alone. A parallel flaw hit LangChainJS as CVE-2025-68665.
Why it matters
LangChain sits at the foundation of countless AI applications, making this a supply chain risk that extends far beyond individual deployments.
The attack vector operates through normal LLM response processing, meaning malicious prompts can trigger exploitation without obvious indicators.
Patches in versions 0.3.81 and 1.2.5 disable secrets_from_env by default, but organizations must verify all dependencies have updated.
What to do about it
Upgrade langchain-core immediately to version 0.3.81 or 1.2.5 and verify all dependent packages.
Audit any application that serializes LLM responses, particularly those using streaming, logging, or caching pipelines.
Treat all LLM outputs as untrusted input for serialization operations (a version-check and sanitization sketch follows this list).
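For teams that want a starting point, here is a minimal Python sketch under two assumptions: that the installed langchain-core can be inspected through package metadata, and that injected payloads resemble LangChain's serialized-constructor format (the "lc" marker heuristic is mine, not the advisory's). Treat it as illustrative hardening, not LangChain's official remediation; upgrading remains the real fix.

```python
# Minimal sketch: confirm a patched langchain-core and neutralize
# serialization-like payloads in LLM response metadata before they reach
# dumps()/logging/caching pipelines. The "lc" marker heuristic is an
# assumption, not the official fix.
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

def core_is_patched() -> bool:
    try:
        v = Version(version("langchain-core"))
    except PackageNotFoundError:
        return True  # package not installed, nothing to patch
    # Patched releases per the advisory: 0.3.81 on the 0.3.x line, 1.2.5 above it.
    return v >= Version("0.3.81") if v < Version("1.0.0") else v >= Version("1.2.5")

def neutralize_serialization_markers(obj):
    """Replace anything shaped like a serialized LangChain constructor with None."""
    if isinstance(obj, dict):
        if "lc" in obj and "type" in obj:  # heuristic marker check
            return None
        return {k: neutralize_serialization_markers(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [neutralize_serialization_markers(v) for v in obj]
    return obj

# Usage: sanitize untrusted fields before serializing a message, e.g.
# message.additional_kwargs = neutralize_serialization_markers(message.additional_kwargs)
```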
Rock’s Musings
LangChain’s $4,000 bounty feels like pocket change for a vulnerability affecting hundreds of millions of installs. The economics here send a troubling message about how the industry values security research on AI infrastructure.
What concerns me more is how many organizations don't realize they are running LangChain. It is buried in dependencies, bundled in commercial products, and embedded in internal tools built by developers who moved on two jobs ago. You cannot patch what you cannot find.
2. NIST and MITRE Launch $20 Million AI Security Centers
On December 22, 2025, NIST announced a $20 million investment to establish two centers in partnership with MITRE: the AI Economic Security Center for U.S. Manufacturing Productivity and the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats (NIST). The centers will develop agentic AI tools for manufacturing and critical infrastructure protection. NIST plans to allocate an additional $70 million to the AI for Resilient Manufacturing Institute under the Manufacturing USA program.
Why it matters
This represents the first dedicated federal investment in agentic AI for critical infrastructure defense.
Partnership with MITRE brings ATT&CK framework expertise to AI threat modeling.
The focus on “technology evaluations” signals forthcoming standards for AI security in OT environments.
What to do about it
Track NIST publications from these centers for emerging standards.
Evaluate your readiness to adopt agentic AI tools in manufacturing contexts.
Engage with NIST’s public comment processes as the centers develop guidance.
Rock’s Musings
$20 million sounds impressive until you compare it to what private sector AI labs spend on a single model training run. This is seed money, not a comprehensive national investment. That said, MITRE’s threat intelligence expertise applied to AI security is valuable.
The real impact depends on execution. If these centers produce usable frameworks within 12 to 18 months, they will shape how we think about AI in critical infrastructure. If they become another multi-year study with recommendations that arrive after the threat environment has shifted, we will have missed the window. For more on navigating AI governance frameworks, visit our analysis at RockCyber.
3. OpenAI Admits Prompt Injection Will Never Be Fully Solved
On December 22, 2025, OpenAI published a detailed blog post acknowledging that prompt injection attacks against its ChatGPT Atlas browser agent are “unlikely to ever be fully ‘solved’” (OpenAI). The disclosure accompanied a security update including an adversarially trained model. OpenAI revealed that internal red teaming uncovered attacks where malicious emails instructed the agent to send resignation letters instead of out-of-office replies (Gizmodo).
Why it matters
OpenAI’s admission confirms what security researchers have warned: agentic browsers create fundamentally new attack surfaces.
The attack example shows how prompt injection can exploit trusted contexts.
OpenAI’s recommendation that users browse logged out shifts the security burden onto end users.
What to do about it
Establish policies restricting agentic AI browser tools from sensitive email, financial, or administrative systems.
Implement monitoring for unusual agent behaviors in workflows involving credential use.
Train users to verify agent actions manually when outcomes have material consequences (a minimal gating sketch follows this list).
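One way to operationalize that manual-verification step is a simple allow/deny gate in the orchestration layer: low-risk actions proceed, everything else waits for a human. The sketch below is a generic Python illustration; the action names, risk sets, and confirmation hook are assumptions and have nothing to do with Atlas internals.

```python
# Minimal sketch: human-in-the-loop gate for agent actions.
# Action names, risk sets, and the confirm callback are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

LOW_RISK = {"read_page", "summarize", "search"}                    # allowlisted
HIGH_RISK = {"send_email", "submit_form", "change_credentials"}    # always audited

@dataclass
class AgentAction:
    name: str
    detail: str

def gate(action: AgentAction, confirm: Callable[[AgentAction], bool]) -> bool:
    """Return True only if the action may proceed."""
    if action.name in LOW_RISK:
        return True
    approved = confirm(action)  # human decision for anything not allowlisted
    if action.name in HIGH_RISK:
        print(f"AUDIT: {action.name} approved={approved}: {action.detail}")
    return approved

# Usage:
# gate(AgentAction("send_email", "reply to the HR thread"), confirm=lambda a: False)
```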
Rock’s Musings
Credit to OpenAI for saying the quiet part out loud. Prompt injection is a consequence of the core “feature,” not a bug to fix. These systems work by following instructions, and attackers will always find ways to make their instructions more compelling than yours.
OpenAI’s solution is essentially “be more careful,” which is not a solution at all. Organizations need to treat agentic AI like any other privileged access tool: controlled, monitored, and restricted to low-risk workflows until we develop better containment strategies.
4. OpenAI Seeks Head of Preparedness at $555,000 Amid Growing AI Risks
On December 27, 2025, Sam Altman announced via X that OpenAI is hiring a Head of Preparedness with a base salary of $555,000 plus equity (The Register). Altman acknowledged that AI models “are starting to present some real challenges,” including potential impacts on mental health and models “so good at computer security they are beginning to find critical vulnerabilities” (TechCrunch). The role follows a string of departures from OpenAI’s safety leadership: the previous Head of Preparedness, Aleksander Madry, was reassigned in July 2024, and subsequent leads Lilian Weng and Joaquin Quinonero Candela each left the preparedness team within a year.
Why it matters
The public admission that models pose “real challenges” in cybersecurity and mental health signals internal concern that capability advancement is outpacing safety measures.
Leadership turnover in safety roles suggests organizational tension between commercial deployment pressure and risk mitigation.
The salary level indicates the difficulty of retaining talent in roles that inherently conflict with product launch timelines.
What to do about it
Factor leadership stability into vendor risk assessments for AI providers, as safety team turnover may indicate governance gaps.
Monitor OpenAI’s Preparedness Framework updates for changes that might affect how you evaluate model safety in your deployments.
Document your own preparedness posture for AI risks, including mental health impacts on employees using AI tools.
Rock’s Musings
Half a million dollars is a lot of money to pay someone whose job is to tell you not to ship the thing you want to ship. The revolving door in OpenAI’s safety leadership tells you everything about the structural incentives at play.
I do not envy whoever takes this role. They will inherit a company under pressure to hit $100 billion in revenue while managing technology that finds critical vulnerabilities and affects mental health. For those tracking AI governance implications, we discuss these dynamics at Rock’s Musings.
5. China Drafts Rules Requiring AI Systems to Disclose Identity Every Two Hours
On December 27, 2025, China’s Cyberspace Administration published a draft of the “Interim Measures for the Administration of Anthropomorphic Interactive Services” (Bloomberg). The rules target AI systems simulating human personality, requiring disclosure of AI identity at login and every two hours during use. Providers must detect signs of overdependence or addiction and intervene. Security assessments are mandatory for services with more than 1 million registered users or 100,000 monthly active users.
Why it matters
China is moving faster than Western regulators on human-AI relationship risks, creating compliance divergence for global companies.
Two-hour disclosure intervals address psychological manipulation concerns that EU and U.S. frameworks largely ignore.
Regulatory sandboxes suggest China wants to enable innovation while maintaining oversight.
What to do about it
Assess whether your AI products could trigger Chinese jurisdiction requirements.
Evaluate your disclosure practices against China’s proposed standards as a global policy benchmark.
Monitor consultation outcomes for final requirements affecting multinational deployments.
Rock’s Musings
Say what you will about Chinese regulatory philosophy, but, on the surface, they appear to be addressing real problems that Western regulators are still pretending don’t exist. The two-hour disclosure requirement tacitly acknowledges that AI companions create genuine psychological attachment.
The mandate for overdependence detection is particularly interesting. China requires AI providers to protect users from their own products. Compare that to the U.S. approach, where we are still debating whether AI companies have any duty of care. We may not adopt China’s methods, but we will eventually answer the same questions.
6. Trust Wallet Chrome Extension Breach Costs Users $7 Million
On December 24, 2025, attackers used a leaked Chrome Web Store API key to publish a malicious version of Trust Wallet’s browser extension, affecting version 2.68 (Hacker News). Users who logged in before December 26 had their wallet information harvested via a rogue implementation connecting to “metrics-trustwallet[.]com,” which was registered on December 8. Losses exceeded $7 million across Bitcoin, Ethereum, and Solana. Funds were laundered through ChangeNOW, FixedFloat, and KuCoin.
Why it matters
Browser extension supply chains remain a critical vulnerability.
The 16-day gap between domain registration and attack suggests sophisticated planning.
API key management failures create systemic risks beyond individual extension developers.
What to do about it
Audit API key management for browser extension publishing workflows.
Implement monitoring for extension updates outside normal release cycles (a release-manifest check sketch follows this list).
Evaluate enterprise browser policies restricting extension installations.
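A lightweight control for that monitoring recommendation is to compare whatever version the store reports against the releases your team actually shipped. The sketch below assumes you already have a way to obtain the published version (store scrape, vendor feed, or endpoint telemetry); the version strings are hypothetical.

```python
# Minimal sketch: alert when a published extension version is not in your
# own release manifest. Obtaining the published version is left to your
# tooling; the version strings below are hypothetical.
EXPECTED_RELEASES = {"2.66", "2.67"}   # versions your team actually shipped

def alert(msg: str) -> None:
    print(f"ALERT: {msg}")             # wire to paging/SIEM in practice

def check_published_version(published: str) -> None:
    if published not in EXPECTED_RELEASES:
        alert(f"Unrecognized extension version published: {published}")

# check_published_version("2.68")  # a rogue build like this would trip the check
```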
Rock’s Musings
This is what happens when you build critical financial infrastructure on browser extension platforms. The Chrome Web Store was not designed with the assumption that extensions would control millions of dollars in assets.
The 16-day preparation window is noteworthy. Someone registered the exfiltration domain, waited, then struck on Christmas Eve when response capabilities would be minimal. That is professional tradecraft, not opportunistic crime.
7. AI-Powered Holiday Scams Hit Record Scale with 33,500 Phishing Emails in Two Weeks
Check Point’s December 22, 2025, report documented 33,502 Christmas-themed phishing emails in a two-week period, alongside over 10,000 fake advertisements daily on social media (Check Point). AI-generated delivery scams doubled compared to 2024, while deepfake voice attacks targeting retailers reached 1,000 or more calls per large retailer daily. Three in 10 fraud attempts targeting retailers now use AI-generated content.
Why it matters
AI automation has fundamentally changed the economics of phishing, enabling personalized attacks at volumes previously impossible.
Voice deepfakes targeting retailers represent a new attack surface.
The 100% year-over-year increase in AI-generated delivery scams indicates attackers are successfully scaling AI-assisted fraud.
What to do about it
Deploy AI-based email filtering capable of detecting AI-generated patterns.
Establish voice verification protocols for credential changes or financial transactions.
Conduct targeted user awareness training on AI-generated scam indicators.
Rock’s Musings
We have crossed the threshold where AI-generated scams are genuinely difficult to distinguish from legitimate communications. The delivery notification scams work because they mirror exactly what real delivery services send.
The 3-in-10 statistic for AI-generated retail fraud is probably conservative. We need detection capabilities that assume malice rather than waiting for obvious indicators. Prevention has always been preferable to response, but now it is becoming mandatory.
8. MongoDB Vulnerability Exploited Globally with 87,000 Instances at Risk
On December 27, 2025, MongoDB disclosed CVE-2025-14847, a high-severity vulnerability with CVSS 8.7 allowing unauthenticated attackers to read uninitialized heap memory (Hacker News). By December 29, active exploitation was observed worldwide. The Shadowserver Foundation identified over 87,000 susceptible instances as of December 31, with 68,400 in the United States.
Why it matters
The two-day transition from disclosure to exploitation demonstrates compressed attack timelines.
MongoDB’s prevalence in AI and ML data pipelines creates compounding risks.
Uninitialized memory leaks can expose credentials, API keys, and PII.
What to do about it
Inventory all MongoDB instances, including those in commercial applications and AI platforms (a version inventory sketch follows this list).
Prioritize patching internet-exposed instances and implement network segmentation.
Assume credential exposure for unpatched instances and rotate secrets.
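For the inventory step, a minimal Python sketch using pymongo is below. The patched-version threshold is a placeholder because the fixed releases are not listed here; take the real values from MongoDB's advisory before relying on the result.

```python
# Minimal sketch: sweep known MongoDB URIs and report server versions.
# PATCHED_MIN is a placeholder assumption; use the fixed versions from
# MongoDB's advisory for CVE-2025-14847.
from pymongo import MongoClient
from packaging.version import Version

PATCHED_MIN = Version("8.0.0")          # placeholder, not from the advisory
URIS = ["mongodb://localhost:27017"]    # replace with your inventory source

for uri in URIS:
    try:
        info = MongoClient(uri, serverSelectionTimeoutMS=3000).server_info()
        v = Version(info["version"])
        status = "OK" if v >= PATCHED_MIN else "NEEDS PATCH, ROTATE SECRETS"
        print(f"{uri}: {info['version']} -> {status}")
    except Exception as exc:            # unreachable or auth-restricted instances
        print(f"{uri}: could not query ({exc})")
```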
Rock’s Musings
Two days from disclosure to active exploitation is the new normal. If your patching timeline assumes weeks of evaluation before action, you are operating on assumptions from a decade ago.
The concentration in the U.S., with 68,400 instances, tells you where the attack surface is largest. MongoDB is infrastructure for some of the largest AI applications in production. If attackers are reading heap memory from AI systems, they are potentially accessing model weights, training data, and inference inputs.
9. TRM Labs Traces $35 Million in LastPass Breach Cryptocurrency Thefts to Russian Exchanges
On December 25, 2025, TRM Labs published findings tracing over $35 million in cryptocurrency stolen via the 2022 LastPass breach, with wallet drains continuing through late 2025 (TRM Labs). Attackers cracked weak master passwords from the 30 million encrypted vault backups stolen three years earlier. The firm identified $28 million laundered through Wasabi Wallet in late 2024 and early 2025. Funds consistently flowed to Russian exchanges Cryptex and Audi6, with Cryptex previously sanctioned by OFAC.
Why it matters
Three-year-old breach data remains exploitable when users fail to rotate credentials.
Russian exchange infrastructure continues to serve as primary off-ramp despite sanctions.
The sustained campaign demonstrates attacker patience and resource allocation.
What to do about it
Enforce mandatory password rotation for credentials stored in compromised password managers.
Implement cryptocurrency monitoring for corporate holdings.
Review password manager selection with emphasis on zero-knowledge architecture.
Rock’s Musings
Three years. Attackers have been quietly cracking LastPass vaults for three years, draining cryptocurrency accounts from people who probably forgot they even used LastPass. This is the long tail of breach economics.
The TRM Labs demixing work is impressive. Even CoinJoin mixing could not hide the pattern of withdrawals and the consistent use of sanctioned infrastructure. Attribution is getting better, which means deterrence might eventually follow. Plan for that reality.
10. Fortinet Warns of Renewed Exploitation of 2020 FortiOS Two-Factor Authentication Bypass
On December 24, 2025, Fortinet issued a fresh advisory warning of active exploitation of CVE-2020-12812, a five-year-old two-factor authentication bypass in FortiOS SSL VPN (Fortinet). The vulnerability exploits differences in username case handling between FortiGate and the backing LDAP directory, allowing attackers to bypass 2FA by changing the case of a username. While patched in 2020, attackers continue targeting configurations where local users with 2FA are members of LDAP groups.
Why it matters
Five-year-old vulnerabilities remain exploitable when configuration drift reintroduces the conditions a patch was meant to close.
VPN compromises provide network perimeter access that cascades to internal systems.
The specific attack requires an understanding of target configurations, suggesting reconnaissance-driven campaigns.
What to do about it
Audit FortiOS configurations for LDAP group settings that could be exploited.
Enable username case sensitivity on all FortiGate devices.
Implement monitoring for authentication anomalies, including case-variant username attempts (a detection sketch follows this list).
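Detection for the case-variant pattern does not need to be elaborate. The sketch below groups successful VPN logins by lowercased username and flags accounts that authenticate under more than one casing; the record shape is an assumption, so adapt the parser to your FortiGate or SIEM export.

```python
# Minimal sketch: flag accounts that log in under multiple username casings.
# The (timestamp, username, result) record shape is an assumption; adapt it
# to your FortiGate/SIEM export format.
from collections import defaultdict

def find_case_variants(records):
    """records: iterable of (timestamp, username, result) tuples."""
    seen = defaultdict(set)
    for _, username, result in records:
        if result == "success":
            seen[username.lower()].add(username)
    return {acct: variants for acct, variants in seen.items() if len(variants) > 1}

# Usage:
# logs = [("t1", "jsmith", "success"), ("t2", "JSmith", "success")]
# find_case_variants(logs)  -> {"jsmith": {"jsmith", "JSmith"}}
```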
Rock’s Musings
A five-year-old bug that was patched but keeps getting exploited tells you everything about the gap between vulnerability disclosure and configuration hygiene. Most organizations patched CVE-2020-12812 in 2020 and forgot about it. Then someone added an LDAP group to the authentication policy, and now they are vulnerable again without knowing it.
Configuration drift reintroduces risk constantly. You need continuous validation, not annual assessments.
The One Thing You Won’t Hear About, But You Need To
The Check Point holiday scam data buried a statistic that deserves its own headline: deepfake voice attacks now target large retailers at rates exceeding 1,000 AI-generated calls per day (Check Point). These calls do not sound robotic. Attackers use approximately three seconds of a voice sample to generate convincing voice clones with 85% accuracy.
Why it matters
Voice authentication systems built on the assumption that voices are difficult to fake are fundamentally broken.
Customer service and technical support channels represent the highest-volume attack surface.
Detection requires investment in audio analysis capabilities that most organizations lack.
What to do about it
Eliminate voice-based authentication for any transaction with material consequences.
Implement callback verification protocols using out-of-band contact information (a minimal sketch follows this list).
Deploy AI-based voice analysis tools for customer-facing call centers.
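For the callback protocol, the essential property is that the number you call back comes from your own records, never from the inbound caller. A minimal sketch, assuming a CRM lookup and an out-of-band SMS or automated-callback channel:

```python
# Minimal sketch: out-of-band callback verification. The CRM lookup and the
# delivery channel are assumptions; the inbound caller never supplies the
# number that gets verified.
import secrets

def start_verification(account_id: str, crm_lookup) -> dict:
    """Issue a one-time code delivered to the contact number already on file."""
    on_file_number = crm_lookup(account_id)      # from your records, not caller ID
    code = f"{secrets.randbelow(1_000_000):06d}"
    # send_out_of_band(on_file_number, code)     # SMS or automated callback
    return {"account_id": account_id, "contact": on_file_number, "code": code}

def confirm(pending: dict, supplied_code: str) -> bool:
    """Constant-time comparison of the supplied code against the issued one."""
    return secrets.compare_digest(pending["code"], supplied_code)
```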
Rock’s Musings
We built entire authentication architectures around the assumption that you cannot fake a human voice. That assumption died in 2025.
The 1,000-calls-per-day figure is per large retailer. Multiply that across the Fortune 500, add in financial services, healthcare, and government, and you have millions of AI-generated voice fraud attempts happening right now. Your call center staff cannot distinguish these calls from legitimate customers. If voice is part of your verification process, remove it. For more on building resilient security operations, visit RockCyber.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
References
Bloomberg. (2025, December 27). China issues draft rules to govern use of human-like AI systems. https://www.bloomberg.com/news/articles/2025-12-27/china-issues-draft-rules-to-govern-use-of-human-like-ai-systems
Check Point Research. (2025, December 22). From fake deals to phishing: The most effective Christmas scams of 2025. https://blog.checkpoint.com/research/from-fake-deals-to-phishing-the-most-effective-christmas-scams-of-2025
Cyber Security News. (2025, December 26). Critical Langchain vulnerability lets attackers exfiltrate sensitive secrets from AI systems. https://cybersecuritynews.com/langchain-vulnerability/
Fortinet. (2025, December 24). FortiOS SSL VPN improper authentication (CVE-2020-12812). https://www.fortiguard.com/psirt
Fortune. (2025, December 29). OpenAI is hiring a ‘head of preparedness’ with a $550,000 salary to mitigate AI dangers. https://fortune.com/2025/12/29/openai-hiring-head-of-preparedness-550000-salary-ai-safety-risks-sam-altman/
Gizmodo. (2025, December 23). OpenAI’s outlook on AI browser security is bleak, but maybe a little more AI can fix it. https://gizmodo.com/openais-outlook-on-ai-browser-security-is-bleak-but-maybe-a-little-more-ai-can-fix-it-2000702902
Infosecurity Magazine. (2025, December 23). NIST, MITRE partner on $20M AI centers for manufacturing and cyber. https://www.infosecurity-magazine.com/news/nist-mitre-ai-centers
MITRE. (2025, December 22). NIST launches new artificial intelligence centers, expands collaboration with MITRE. https://www.mitre.org/news-insights/news-release/nist-artificial-intelligence-centers-collaboration-mitre
NIST. (2025, December 22). NIST launches centers for AI in manufacturing and critical infrastructure. https://www.nist.gov/news-events/news/2025/12/nist-launches-centers-ai-manufacturing-and-critical-infrastructure
OpenAI. (2025, December 22). Continuously hardening ChatGPT Atlas against prompt injection attacks. https://openai.com/index/hardening-atlas-against-prompt-injection/
Security Affairs. (2025, December 26). LangChain core vulnerability allows prompt injection and data exposure. https://securityaffairs.com/186185/hacking/langchain-core-vulnerability-allows-prompt-injection-and-data-exposure.html
TechCrunch. (2025, December 28). OpenAI is looking for a new Head of Preparedness. https://techcrunch.com/2025/12/28/openai-is-looking-for-a-new-head-of-preparedness/
The Hacker News. (2025, December 25). LastPass 2022 breach led to years-long cryptocurrency thefts, TRM Labs finds. https://thehackernews.com/2025/12/lastpass-2022-breach-led-to-years-long.html
The Hacker News. (2025, December 26). Trust Wallet Chrome extension breach caused $7 million crypto loss via malicious code. https://thehackernews.com/2025/12/trust-wallet-chrome-extension-breach.html
The Hacker News. (2025, December 27). New MongoDB flaw lets unauthenticated attackers read uninitialized memory. https://thehackernews.com/2025/12/new-mongodb-flaw-lets-unauthenticated.html
The Register. (2025, December 29). OpenAI seeks new safety chief as Altman flags growing risks. https://www.theregister.com/2025/12/29/openai_safety_chief
TRM Labs. (2025, December 25). TRM traces stolen crypto from 2022 LastPass breach. https://www.trmlabs.com/resources/blog/trm-traces-stolen-crypto-from-2022-lastpass-breach-on-chain-indicators-suggest-russian-cybercriminal-involvement



