Weekly Musings Top 10 AI Security Wrapup: Issue 24 | December 12 - December 18, 2025
How AI IDEs, Federal Power Plays, and Deepfake Fraud Are Redefining Your 2026 Security Roadmap
Your AI coding assistant might be a backdoor. That’s not hyperbole. This week, a security researcher disclosed a vulnerability class affecting 100% of tested AI IDEs. The same week, NIST dropped the first draft of its Cybersecurity Framework profile for AI, and President Trump signed an executive order launching a federal assault on state AI laws. If you’re still treating AI security as a 2026 problem, the threat landscape has some bad news for you.
Introduction
This week marks a turning point for AI security and governance. The collisions between innovation and risk are no longer theoretical exercises for future strategic planning. They’re operational realities demanding immediate executive attention. A single security researcher disclosed over 30 vulnerabilities affecting every major AI coding tool from Cursor to GitHub Copilot. NIST responded to the AI security gap with the first comprehensive cybersecurity framework profile for artificial intelligence. The Trump administration declared war on state AI regulation with an executive order establishing a federal litigation task force. And the European Commission published its first draft of rules for marking AI-generated content, just as deepfake fraud surges past $12 billion in annual losses.
What connects these stories? The security and governance infrastructure for AI is being built right now, often in reactive bursts following breaches and disclosed vulnerabilities. Organizations that wait for clarity before acting will find themselves perpetually behind. This week’s news offers a blueprint for 2026: treat AI IDEs as untrusted infrastructure, prepare for regulatory whiplash across jurisdictions, and accept that your third-party AI vendors are the new attack surface. The question isn’t whether your AI systems will be targeted. It’s whether you’ve built the governance and controls to respond when they are.
1. IDEsaster: The Novel Vulnerability Class Affecting Every AI Coding Tool
Summary: Security researcher Ari Marzouk disclosed “IDEsaster,” a new vulnerability class affecting AI-powered integrated development environments, including GitHub Copilot, Cursor, Windsurf, and Claude Code. The research identified over 30 separate vulnerabilities across more than 10 market-leading products, resulting in 24 CVE assignments. What makes IDEsaster distinct from previous AI IDE vulnerabilities is the attack chain. Rather than targeting the AI agent’s tools or configuration, these exploits leverage features of the underlying base IDE software, meaning vulnerabilities affect all AI coding assistants built on the same foundation. Case studies demonstrate paths to remote code execution via overwrites of IDE settings and data exfiltration via remote JSON schema requests. Anthropic updated Claude Code documentation to reflect the risk, and AWS published a security advisory.
Why It Matters:
Millions of developers use affected tools daily, and compromised development environments create supply chain risk for every downstream application.
The research introduces the “Secure for AI” principle, arguing that adding AI agents to legacy software creates attack vectors that existing security models don’t address.
100% of tested AI IDEs were vulnerable to at least one IDEsaster attack chain, suggesting this is a systemic design problem rather than isolated implementation flaws.
What To Do About It:
Audit AI coding tool configurations and restrict workspace settings files from AI agent modification.
Require human-in-the-loop confirmation for all file operations outside the current working directory.
Evaluate whether AI IDE features like remote JSON schema loading, and the agent’s ability to overwrite IDE settings, can be disabled without breaking critical workflows; a settings-audit sketch follows this list.
Run your coding assistants, such as Claude Code, in a containerized and sandboxed environment.
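For the audit items above, a small script is often enough to get started. The sketch below walks a repository for VS Code-style workspace settings and flags keys worth reviewing before an AI agent is allowed to operate there. It’s a starting point under stated assumptions, not a complete control: the key names are examples only and will differ by IDE and assistant.
```python
"""Audit workspace settings files that an AI coding agent could read or modify.

Assumptions: VS Code-style .vscode/settings.json files; the flagged keys are
examples only, so adjust them to the settings your IDE and assistant expose.
"""
import json
import sys
from pathlib import Path

# Illustrative review list: settings worth a second look before letting an
# AI agent operate in the workspace (key names vary by IDE and extension).
REVIEW_KEYS = {
    "json.schemaDownload.enable",     # remote JSON schema fetching
    "task.allowAutomaticTasks",       # tasks that run on folder open
    "terminal.integrated.env.linux",  # environment injection into terminals
}

def audit(repo_root: str) -> int:
    findings = 0
    for settings_file in Path(repo_root).rglob(".vscode/settings.json"):
        try:
            # Note: real settings files may contain comments (JSONC) and fail here.
            settings = json.loads(settings_file.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError) as exc:
            print(f"[warn] could not parse {settings_file}: {exc}")
            continue
        for key in sorted(REVIEW_KEYS & settings.keys()):
            print(f"[review] {settings_file}: {key} = {settings[key]!r}")
            findings += 1
    return findings

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(1 if audit(root) else 0)
```
Pair the audit with filesystem or policy controls so the agent can’t quietly rewrite those files in the first place.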
Rock’s Musings: Let me be direct… this is the research disclosure that should terrify every CISO whose organization uses AI coding tools. And that’s pretty much everyone now. Ari Marzouk didn’t find a bug in Cursor or GitHub Copilot. He found a design flaw in how we’re building AI on top of legacy infrastructure. The base IDE software was never designed to resist an autonomous agent operating on behalf of potentially malicious instructions.
The “Secure for AI” principle Marzouk introduces deserves more attention than it will probably get. We’ve spent years talking about AI governance at the model and policy level. But the tooling developers use every day has become an attack surface we’ve largely ignored. Your security program probably has controls for production systems. Does it have controls for the IDE that writes the code that runs on those systems? If the answer is no, you’ve got homework.
2. NIST Releases Draft Cybersecurity Framework Profile for AI
Summary: The National Institute of Standards and Technology published NISTIR 8596, a draft Cybersecurity Framework Profile for Artificial Intelligence, on December 16, 2025. The document maps the NIST Cybersecurity Framework 2.0 to AI-specific security challenges across three focus areas: securing AI systems, conducting AI-enabled cyber defense, and thwarting AI-enabled cyberattacks. The profile resulted from a yearlong collaborative effort involving over 6,500 community participants. NIST is accepting public comments until January 30, 2026, with a workshop planned for January 14 to discuss the preliminary draft.
Why It Matters:
This is the first comprehensive guidance connecting CSF 2.0 controls to AI security, filling a gap that has left organizations without authoritative frameworks for AI cybersecurity.
The three focus areas address both defensive AI deployment and protection against AI-powered attacks, acknowledging the dual-use nature of the technology.
NIST guidance typically serves as the baseline for federal contractor requirements and influences private-sector best practices.
What To Do About It:
Assign a team member to review the draft and prepare comments before the January 30 deadline.
Begin mapping your current AI deployments against the three focus areas to identify gaps; a simple tracking sketch follows this list.
Plan to attend the January 14 NIST workshop or review the published materials.
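If you want to start the mapping exercise before formal tooling catches up, a simple matrix of CSF 2.0 functions against the profile’s three focus areas will surface gaps quickly. The sketch below is a hypothetical tracking structure, not NIST’s mapping; the statuses and example entry are placeholders for your own inventory.
```python
"""Minimal gap-tracking sketch for the draft NIST Cyber AI Profile.

Illustrative structure only, not NIST's mapping; statuses are placeholders
for your own inventory of AI deployments and controls.
"""
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]
FOCUS_AREAS = [
    "Securing AI systems",
    "AI-enabled cyber defense",
    "Thwarting AI-enabled attacks",
]

# Status per (CSF function, focus area): "none", "partial", or "covered".
assessment = {(f, a): "none" for f in CSF_FUNCTIONS for a in FOCUS_AREAS}
assessment[("Protect", "Securing AI systems")] = "partial"  # example entry

def gap_report(matrix: dict) -> list:
    """Return the function / focus-area pairs with no coverage yet."""
    return [f"{func} / {area}" for (func, area), status in matrix.items() if status == "none"]

if __name__ == "__main__":
    for gap in gap_report(assessment):
        print("GAP:", gap)
```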
Rock’s Musings: NIST has been quietly putting the building blocks of AI governance infrastructure in place for years, from the AI Risk Management Framework to the taxonomy of adversarial machine learning attacks. The Cyber AI Profile is the logical next step: here’s how to connect AI-specific risks to the cybersecurity program you already have. I’ve said for a long time that organizations shouldn’t wait for perfect frameworks before starting their AI governance programs. But now that NIST is providing authoritative guidance, you have less excuse to wing it.
What I appreciate about the three focus areas is the acknowledgment that AI security isn’t just about protecting your models. It’s also about using AI to defend better and understanding how adversaries will use AI against you. That’s a more mature framing than we usually see in these documents. If your 2026 planning doesn’t include all three dimensions, you’re thinking about this too narrowly.
3. Trump Executive Order Launches Federal Challenge to State AI Laws
Summary: On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” directing federal agencies to challenge state AI regulations. The order establishes an AI Litigation Task Force within the Department of Justice to pursue legal action against state laws that “unconstitutionally regulate interstate commerce” or conflict with federal AI policy. The order specifically cites Colorado’s algorithmic discrimination law as requiring AI models to “embed ideological bias” and produce “false results.” The executive order does not preempt existing state laws but mobilizes federal resources to challenge them in court. Exemptions are carved out for child safety, AI data center infrastructure, and government procurement.
Why It Matters:
Organizations operating across multiple states face regulatory uncertainty as federal and state frameworks collide.
Colorado, California, and other states with comprehensive AI laws may see enforcement challenged or delayed by federal litigation.
The order signals that the federal government intends to establish a “minimally burdensome” national standard, which may mean fewer protections than state laws currently provide.
What To Do About It:
Continue compliance with existing state AI laws until courts rule otherwise; the executive order does not suspend or invalidate current requirements.
Brief your legal team on the potential for regulatory changes and plan for multiple compliance scenarios.
Track federal funding implications if your organization receives grants or participates in federally supported programs.
Rock’s Musings: This is going to get messy. The executive order is a declaration that the federal government and states like Colorado and California are on a collision course over AI regulation. For organizations trying to build compliance programs, the worst possible outcome is regulatory limbo. And that’s exactly what we’re heading toward.
My advice? Don’t treat this as an excuse to pause your AI governance work. If anything, the organizations that build robust AI risk assessment practices now will be better positioned to adapt when the dust settles, whether that means federal preemption, state law prevailing, or some patchwork compromise. The fundamentals of responsible AI don’t change based on which government entity is enforcing the rules. See the client alert I co-authored with Daniel Pietragallo at Buchalter HERE.
4. EU Commission Publishes First Draft Code of Practice for AI Content Marking
Summary: The European Commission published the first draft of its Code of Practice on marking and labelling AI-generated content on December 17, 2025. The code addresses Article 50 of the EU AI Act, which requires providers to mark AI-generated content in machine-readable format and deployers to clearly label deepfakes and AI-generated text on matters of public interest. The draft covers two sections: rules for marking and detecting AI content applicable to providers, and labeling requirements for deployers. The Commission is collecting feedback until January 23, 2026, with the final code expected in June 2026 and rules taking effect in August 2026.
Why It Matters:
Organizations deploying generative AI systems in the EU will need technical solutions for machine-readable content marking by August 2026.
The code addresses deepfake transparency at a time when AI-generated content is increasingly difficult to distinguish from authentic material.
Industry groups are advocating for flexibility in marking techniques, recognizing that no single solution works for all content types.
What To Do About It:
Review the draft code to understand the technical requirements for content marking and detection.
Evaluate current watermarking and content provenance capabilities against the proposed requirements; a concept sketch of machine-readable marking follows this list.
Begin assessing vendor solutions for content marking and coordinate with legal on EU compliance timelines.
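The final technical requirements won’t land until mid-2026, but the basic idea of machine-readable marking is easy to prototype. The sketch below attaches a simple provenance text chunk to a PNG using Pillow; the metadata keys are invented for the example, and a production system would implement a standard such as C2PA Content Credentials rather than ad hoc chunks.
```python
"""Illustrative machine-readable marking of an AI-generated image.

Concept sketch only: the metadata keys are invented for this example, and a
production system would use a standard such as C2PA Content Credentials.
Requires Pillow (pip install Pillow).
"""
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG and attach machine-readable provenance text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")     # hypothetical key
    meta.add_text("ai-generator", generator)  # hypothetical key
    img.save(dst_path, pnginfo=meta)

def read_marking(path: str) -> dict:
    """Return any text metadata found on the image (empty dict if none)."""
    return dict(Image.open(path).text)

if __name__ == "__main__":
    mark_ai_generated("render.png", "render_marked.png", generator="example-model-v1")
    print(read_marking("render_marked.png"))
```
The pattern to test for is the round trip: if you can’t reliably read the marking back, it isn’t machine-readable in any useful sense.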
Rock’s Musings: The EU AI Act’s transparency requirements are where theory meets implementation. It’s easy to say AI-generated content should be labeled. It’s much harder to build technical systems that mark video, audio, and text in ways that are robust, interoperable, and don’t degrade user experience. The industry’s push for flexibility isn’t just vendor lobbying. Current marking techniques have real limitations, and prescriptive requirements could be obsolete before they take effect.
That said, the August 2026 deadline is coming whether or not the technology is perfect. If you operate in Europe or have European customers, your 2026 roadmap needs to include content marking capabilities. This isn’t optional. The deepfake fraud statistics we’re seeing make clear why regulators are pushing for transparency.
5. OpenAI Discloses API Customer Data Breach via Third-Party Vendor
Summary: OpenAI disclosed a security incident involving third-party analytics provider Mixpanel that exposed limited customer data from API accounts. On November 9, 2025, Mixpanel detected unauthorized access to its systems resulting from an SMS phishing attack. The attacker exported a dataset containing user profile information including names, email addresses, browser details, and approximate locations. OpenAI clarified that no conversation content, API keys, passwords, or payment information was exposed, and ChatGPT users were not affected. OpenAI terminated its use of Mixpanel and is enforcing stricter security requirements for external partners. Security experts noted that sending unanonymized user data to third-party analytics providers was a choice that increased exposure.
Why It Matters:
The breach illustrates how third-party vendors in the AI supply chain can expose customer data even when core systems remain secure.
Attackers continue targeting adjacent systems rather than attacking AI providers directly.
The exposed information could support phishing and social engineering attacks against API customers.
What To Do About It:
Review what customer data your organization sends to third-party analytics and marketing tools.
Evaluate whether data minimization principles are being applied to AI vendor integrations; a minimization sketch follows this list.
Enable multi-factor authentication on all AI platform accounts and train staff to recognize phishing attempts.
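Data minimization for analytics doesn’t require exotic tooling. The sketch below is a generic wrapper that drops direct identifiers and pseudonymizes the user ID before an event ever leaves your infrastructure; the field names and the send function are placeholders, not any vendor’s real API.
```python
"""Minimal data-minimization wrapper for outbound analytics events.

Generic illustration only: the field names and send_to_analytics() stub are
placeholders, not any vendor's real API.
"""
import hashlib
import hmac
import os

# Direct identifiers we never forward to third-party analytics.
DENYLIST = {"name", "email", "phone", "ip_address", "precise_location"}

# Secret used to pseudonymize user IDs; keep it out of the vendor's hands.
PSEUDONYM_KEY = os.environ.get("ANALYTICS_PSEUDONYM_KEY", "rotate-me")

def pseudonymize(user_id: str) -> str:
    """Keyed hash so the vendor sees a stable token, not the real ID."""
    return hmac.new(PSEUDONYM_KEY.encode(), user_id.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(event: dict) -> dict:
    """Drop denylisted fields and replace the user ID with a pseudonym."""
    cleaned = {k: v for k, v in event.items() if k not in DENYLIST}
    if "user_id" in cleaned:
        cleaned["user_id"] = pseudonymize(str(cleaned["user_id"]))
    return cleaned

def send_to_analytics(event: dict) -> None:
    """Placeholder for the real vendor client call."""
    print("sending:", minimize(event))

if __name__ == "__main__":
    send_to_analytics({"user_id": "u-123", "email": "dev@example.com", "plan": "api-pro"})
```
If the vendor never receives the real identifier, a vendor breach stops being your customer-notification problem.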
Rock’s Musings: OpenAI deserves some credit for transparency here. They disclosed broadly, even though most users weren’t affected. The deeper lesson is about supply chain risk. OpenAI’s core systems were never breached, but Mixpanel was, and suddenly, API customer data is in the attacker's hands. This is the AI vendor management problem in miniature. Your security is only as strong as your weakest third-party integration.
The criticism that OpenAI didn’t need to send identifiable user data to an analytics platform is fair. Data minimization should be standard practice today, and it’s required by GDPR anyway. Every piece of data you share with a third party is data that can be exposed when that third party gets compromised. And in 2025, “when” is the right word, not “if.”
6. Bitsight Research Reveals 1,000+ Exposed MCP Servers Without Authentication
Summary: A research report from Bitsight Technologies published December 11, 2025, found approximately 1,000 Model Context Protocol servers exposed on the public internet without authorization controls. MCP is an open standard that enables AI applications to connect to external tools, APIs, and databases. While the MCP specification recommends OAuth 2.1 for authorization, authentication is technically optional. Researchers found exposed servers with access to Kubernetes clusters, CRM platforms, and tools capable of executing arbitrary shell commands. The report notes that MCP servers can become proxies for attackers to pivot into protected databases and file systems.
Why It Matters:
MCP adoption is accelerating with Microsoft integrating support across Copilot Studio and Azure AI Foundry.
Exposed MCP servers create direct paths to backend systems that would otherwise require multiple attack steps to reach.
Authentication being optional by specification means insecure deployments are a default outcome rather than an exception.
What To Do About It:
Inventory any MCP servers deployed in your environment and verify authentication is configured; a spot-check sketch follows this list.
Restrict MCP servers to internal networks and use local transport methods where public exposure isn’t required.
Include MCP server security in your standard vulnerability management process.
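A quick way to act on the inventory item above is to probe your own MCP endpoints without credentials and confirm they refuse to answer. The sketch below is a heuristic only: the endpoint paths and the JSON-RPC probe are assumptions about a typical HTTP deployment, and it should only be pointed at servers you’re authorized to test.
```python
"""Spot-check whether HTTP-exposed MCP endpoints demand authentication.

Heuristic sketch: the endpoint paths and the JSON-RPC probe are assumptions
about a typical HTTP deployment. Only run it against servers you own or are
authorized to test. Requires the `requests` package.
"""
import sys
import requests

# Bare initialize-style probe with no credentials attached.
PROBE = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}

def check(base_url: str) -> None:
    for path in ("/mcp", "/sse"):  # common transport paths; adjust to your deployment
        url = base_url.rstrip("/") + path
        try:
            resp = requests.post(url, json=PROBE, timeout=5)
        except requests.RequestException as exc:
            print(f"[skip] {url}: {exc}")
            continue
        if resp.status_code in (401, 403):
            print(f"[ok]   {url} rejected the unauthenticated probe ({resp.status_code})")
        elif resp.status_code < 400:
            print(f"[FLAG] {url} answered without credentials ({resp.status_code}); verify auth is enforced")
        else:
            print(f"[info] {url} returned {resp.status_code}; inspect manually")

if __name__ == "__main__":
    for target in sys.argv[1:]:
        check(target)
```
Anything that answers a credential-free probe should either go behind OAuth or come off the public internet entirely.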
Rock’s Musings: Model Context Protocol is one of those technologies that sounds great in a demo and creates a massive attack surface in production. The idea that AI assistants can invoke real-world actions through a standardized protocol is exactly the kind of capability enterprises want. The reality that a thousand servers are sitting on the public internet without authentication is exactly the kind of risk those enterprises aren’t managing.
This is the agentic AI security problem in microcosm. We’re giving AI systems capabilities to take actions, but we’re not applying the same rigor to securing those capabilities that we’d apply to any other critical infrastructure. If you’ve deployed MCP servers, treat them like you’d treat any service with database access and shell execution privileges. Because that’s what they are.
7. Deepfake-as-a-Service Platforms Drive 30% of Corporate Impersonation Attacks
Summary: Cyble’s Executive Threat Monitoring report found that AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in 2025. Deepfake-as-a-Service platforms have made the technology accessible to criminals of all skill levels. Voice cloning now requires just 20-30 seconds of audio, and convincing video deepfakes can be created in 45 minutes using freely available software. Documented financial losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone. The Deloitte Center for Financial Services projects U.S. fraud losses facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027.
Why It Matters:
Executive impersonation attacks have evolved from crude phishing to sophisticated real-time video and voice manipulation.
The Arup incident, where deepfakes of the CFO and colleagues convinced an employee to transfer $25.5 million, demonstrates the scale of potential losses.
Human detection rates for high-quality deepfake video are just 24.5%.
What To Do About It:
Establish verification protocols for high-value transactions that don’t rely solely on video or voice confirmation; a workflow sketch follows this list.
Train employees to recognize deepfake indicators and implement callback procedures using verified contact information.
Evaluate deepfake detection technologies for integration into communication and authentication workflows.
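The verification protocol item above boils down to one rule: the channel that delivered the request can never be the channel that approves it. The sketch below is a deliberately simplified workflow with placeholder thresholds and a stubbed notification step; the real control is the pre-registered second channel, not the code.
```python
"""Out-of-band confirmation sketch for high-value payment requests.

Illustrative workflow only: the threshold, channel, and notify() stub are
placeholders. Approval never rests on the video or voice channel where the
request arrived.
"""
import secrets

HIGH_VALUE_THRESHOLD = 50_000   # illustrative threshold in your base currency
_pending: dict = {}             # request_id -> expected one-time code

def notify_verified_contact(requester_id: str, code: str) -> None:
    """Placeholder: deliver the code via a pre-registered channel, not the call itself."""
    print(f"(send {code} to {requester_id} via verified callback number)")

def request_transfer(request_id: str, requester_id: str, amount: float) -> str:
    if amount < HIGH_VALUE_THRESHOLD:
        return "proceed-with-standard-controls"
    code = secrets.token_hex(3)  # short one-time code
    _pending[request_id] = code
    notify_verified_contact(requester_id, code)
    return "awaiting-out-of-band-confirmation"

def confirm_transfer(request_id: str, supplied_code: str) -> bool:
    """Release only if the code from the independent channel matches."""
    expected = _pending.pop(request_id, None)
    return expected is not None and secrets.compare_digest(expected, supplied_code)

if __name__ == "__main__":
    print(request_transfer("req-42", "cfo-office", 2_500_000))
    print("released:", confirm_transfer("req-42", "wrong-code"))
```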
Rock’s Musings: The $25.5 million Arup fraud should be required reading for every CFO and CISO. A finance worker authorized 15 wire transfers because he thought he was on a video call with his CFO and colleagues. He was talking to deepfakes. Every security control designed around “verify the identity of the person making the request” fails when identity can be convincingly fabricated in real time.
We need to stop thinking about deepfakes as a disinformation problem and start thinking about them as a business email compromise problem on steroids. Your callback procedures, your dual authorization requirements, your training programs, they all need to account for the possibility that the video or voice on the other end isn’t real. If you’re a virtual CISO, this should be in your client briefings.
8. Fortune: AI Coding Tools Face First Security Exploits in the Wild
Summary: Fortune reported that AI coding tools have attracted the attention of malicious actors, with several exploits and near-misses demonstrating potential attack paths. The most notable incident involved Amazon Q’s VS Code extension being compromised through a prompt injection that directed the AI to wipe users’ local files and disrupt AWS infrastructure. The malicious version passed Amazon’s verification and was publicly available for two days. Security researchers also discovered prompt injection vulnerabilities in tools from Cursor, GitHub, and Google’s Gemini. CrowdStrike reported threat actors exploiting an unauthenticated code injection vulnerability in Langflow AI to deploy malware.
Why It Matters:
AI coding tool compromise creates supply chain risk for every application built using those tools.
The agentic nature of these tools means small oversights can escalate to critical security issues.
Verification and approval processes for AI tool extensions are not catching malicious modifications.
What To Do About It:
Implement monitoring for AI coding tool behavior that deviates from expected patterns.
Require security review for AI tool configuration changes and extension updates; a checksum-gate sketch follows this list.
Establish procedures for responding to compromised AI tool disclosures including code audit requirements.
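Marketplace verification clearly isn’t sufficient on its own, so one low-tech control is to pin the extension packages you allow internally and verify their hashes before distribution. The sketch below is a generic checksum gate; the allowlist format is invented for the example and assumes you maintain your own reviewed list.
```python
"""Checksum gate for AI tool extension packages before internal distribution.

Generic sketch: the allowlist format is invented for this example. Pair it
with your own review process for deciding what gets added to the allowlist.
"""
import hashlib
import json
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(package: Path, allowlist_file: Path) -> bool:
    """Return True only if the package hash appears in the reviewed allowlist."""
    allowlist = json.loads(allowlist_file.read_text())  # {"name.vsix": "<sha256>", ...}
    expected = allowlist.get(package.name)
    return expected is not None and expected == sha256_of(package)

if __name__ == "__main__":
    pkg, allow = Path(sys.argv[1]), Path(sys.argv[2])
    if verify(pkg, allow):
        print(f"{pkg.name}: hash matches reviewed allowlist")
    else:
        print(f"{pkg.name}: NOT approved; do not distribute")
        sys.exit(1)
```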
Rock’s Musings: The Amazon Q incident is instructive because the attacker reportedly did it to expose “security theater” rather than execute an actual attack. Mission accomplished. A prompt injection that could wipe files and disable cloud infrastructure made it through Amazon’s verification process and sat in production for two days. What does that tell you about the verification processes for AI tools you’re deploying?
The combination of prompt injection susceptibility and supply chain attack vectors makes AI coding tools a particularly dangerous category. You’re trusting these tools with code that becomes your products, your infrastructure, your customer data. The security rigor applied to that trust relationship should match the risk. For most organizations, it doesn’t.
9. AI Fraud Deterrence Act Introduced to Criminalize Deepfakes of Federal Officials
Summary: Representatives Ted Lieu (D-Calif.) and Neal Dunn (R-Fla.) introduced the bipartisan AI Fraud Deterrence Act on November 25, 2025. The legislation would update criminal definitions and penalties for fraud to account for AI involvement and criminalize impersonation of federal officials using deepfakes. The bill cites AI’s use in attempts to mimic White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio earlier this year. The FBI has warned that generative AI reduces the time and effort criminals need to deceive targets and can correct for human errors that might otherwise serve as warning signs.
Why It Matters:
Federal legislative action on AI fraud has been limited, and bipartisan sponsorship increases the bill’s chances of advancing.
The bill addresses the scale problem where AI enables fraud at volumes impossible for human-only operations.
Criminal penalties for AI-assisted fraud would create deterrent effects beyond civil liability.
What To Do About It:
Track the bill’s progress through Congress and prepare for potential compliance implications if enacted.
Review current fraud prevention controls for adequacy against AI-enhanced attacks.
Consider how criminal liability for AI fraud might affect vendor and partner due diligence.
Rock’s Musings: Federal legislation on AI fraud is overdue. The current legal framework wasn’t designed for a world where anyone with a laptop can generate a convincing video of your CEO instructing a wire transfer. The AI Fraud Deterrence Act isn’t perfect, but it’s a recognition that our laws need to catch up to our threats.
The bipartisan sponsorship is the most encouraging (yet also shocking) aspect. AI governance can’t be a partisan football. The fact that lawmakers from both parties see AI fraud as a problem worth solving creates space for substantive policy rather than culture war posturing. We’ll see if that consensus survives the legislative process.
10. Sumsub Report: Sophisticated AI-Powered Identity Fraud Jumps 180%
Summary: Sumsub’s Identity Fraud Report, published November 25, 2025, found that while overall identity fraud attempts slightly decreased to 2.2% of verifications globally, sophisticated fraud combining AI deepfakes, synthetic identities, and social engineering jumped 180%. The report defines sophisticated fraud as attacks using multiple coordinated techniques, including device tampering and cross-channel manipulation. Dating apps and online media platforms experienced the highest fraud rates at 6.3% each. The Maldives saw a 2,100% year-over-year increase in deepfake attacks, the highest for any single country.
Why It Matters:
Attackers are shifting from high-volume simple fraud to lower-volume sophisticated attacks that bypass traditional controls.
Synthetic identity fraud, which combines real personal information with AI-generated content, now accounts for 21% of first-party fraud.
Geographic distribution of attacks shows fraud networks exploiting regions with weaker verification infrastructure.
What To Do About It:
Evaluate identity verification controls against sophisticated multi-technique attacks rather than single-vector threats.
Implement behavioral analytics and device fingerprinting to detect coordinated fraud campaigns; a toy clustering sketch follows this list.
Review vendor onboarding processes for exposure to synthetic identity risk.
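Detecting coordinated campaigns starts with noticing reuse. The sketch below is a toy velocity check that flags device fingerprints shared across many signups; real deployments layer behavioral, device, and network signals, and the threshold and event fields here are placeholders.
```python
"""Toy velocity check: flag device fingerprints reused across many signups.

Illustrative only: real deployments combine device, behavioral, and network
signals; the threshold and event fields here are placeholders.
"""
from collections import defaultdict

REUSE_THRESHOLD = 5  # distinct accounts per fingerprint before we flag

def flag_shared_devices(signup_events: list) -> dict:
    """Group signups by device fingerprint and return the suspicious clusters."""
    by_device = defaultdict(set)
    for event in signup_events:
        by_device[event["device_fingerprint"]].add(event["account_id"])
    return {fp: accts for fp, accts in by_device.items() if len(accts) >= REUSE_THRESHOLD}

if __name__ == "__main__":
    events = [{"device_fingerprint": "fp-1", "account_id": f"acct-{i}"} for i in range(7)]
    events.append({"device_fingerprint": "fp-2", "account_id": "acct-solo"})
    for fingerprint, accounts in flag_shared_devices(events).items():
        print(f"review {fingerprint}: {len(accounts)} accounts share this device")
```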
Rock’s Musings: The headline decline in overall fraud attempts masks a more concerning trend. The attacks are getting smarter. A 180% jump in sophisticated fraud while total attempts decline means attackers are investing in quality over quantity. That’s bad news for organizations relying on volume-based fraud detection.
The synthetic identity problem is particularly insidious. These aren’t stolen identities; they’re fabricated ones using real components. Your traditional identity verification checks might validate the individual data points while completely missing that the assembled identity is fake. If your fraud prevention strategy hasn’t evolved to address synthetic identities, you’re defending against last year’s threats.
The One Thing You Won’t Hear About But You Need To: PromptPwnd and AI Agent Risks in CI/CD Pipelines
Summary: Aikido Security researchers disclosed “PromptPwnd,” a vulnerability class affecting GitHub Actions and GitLab CI/CD pipelines that integrate AI agents. The attack exploits prompt injection vectors in AI tools that process untrusted user input such as issue bodies, pull request descriptions, and commit messages. When AI agents interpret malicious embedded text as instructions rather than content, they can leak secrets, edit repository data, and compromise supply chain integrity. At least five Fortune 500 companies have been affected, with many more likely exposed. Google remediated a Gemini CLI issue following Aikido’s responsible disclosure.
Why It Matters:
This is the first confirmed real-world demonstration that AI prompt injection can compromise CI/CD pipelines.
Any repository using AI for issue triage, PR labeling, or automated replies is potentially vulnerable.
The attack requires no sophisticated exploit chain, just crafted user input and a misconfigured AI agent.
What To Do About It:
Audit GitHub Actions and GitLab CI/CD workflows for AI agent integrations processing untrusted input; a heuristic scanner sketch follows this list.
Restrict AI agent toolsets to prevent write operations on issues, pull requests, and sensitive repository data.
Treat all AI-generated output as untrusted code that requires validation before execution.
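A first-pass audit of the GitHub Actions side can be automated. The sketch below greps workflow files for expressions that interpolate attacker-controllable event text and raises the severity when the workflow also looks AI-related; it’s a heuristic with false positives and misses, and the pattern lists are assumptions you should extend for your own agent integrations.
```python
"""Flag GitHub Actions workflows that feed untrusted event text into steps.

Heuristic sketch only: it greps for issue/PR-controlled expressions and for
AI-related keywords, so expect false positives and misses. Extend the
pattern lists for your own agent integrations.
"""
import re
import sys
from pathlib import Path

# Expressions that interpolate attacker-controllable event fields.
UNTRUSTED = re.compile(
    r"\$\{\{\s*github\.event\.(issue\.body|issue\.title|"
    r"pull_request\.body|pull_request\.title|comment\.body|"
    r"head_commit\.message)\s*\}\}"
)
# Rough signal that the workflow involves an AI agent or assistant.
AI_HINT = re.compile(r"(gemini|copilot|claude|openai|llm|ai[-_ ]agent)", re.IGNORECASE)

def scan(repo_root: str) -> int:
    findings = 0
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        text = wf.read_text(encoding="utf-8", errors="replace")
        if UNTRUSTED.search(text):
            level = "HIGH (AI step suspected)" if AI_HINT.search(text) else "review"
            print(f"[{level}] {wf}: workflow interpolates untrusted event text")
            findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```
Treat every hit as a prompt for the harder question: does that workflow really need write access to issues, pull requests, or secrets?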
Rock’s Musings: This is the story that should be getting more attention but isn’t. AI prompt injection compromising CI/CD pipelines is precisely the kind of attack that sounds theoretical until it hits your organization. Confirmed impact at five Fortune 500 companies is proof that this is a working attack, not a proof of concept.
The broader pattern here is what concerns me. We’re integrating AI agents into critical development infrastructure without applying the security rigor those systems deserve. The AI can read issue descriptions. The AI can execute privileged actions. The AI can be manipulated by anyone who can submit a PR or file an issue. That’s a security model designed to fail. If your DevOps team has deployed AI agents in your CI/CD pipeline, this is your wake-up call to review how those agents handle untrusted input.
👉 Visit RockCyber.com to learn more about how we can help you in your traditional Cybersecurity and AI Security and Governance Journey
👉 Want to save a quick $100K? Check out our AI Governance Tools at AIGovernanceToolkit.com
👉 Subscribe for more AI and cyber insights with the occasional rant.
Citations
Aikido Security. (2025, December 3). Prompt injection inside GitHub Actions: The new frontier of supply chain attacks. Aikido Security Blog. https://www.aikido.dev/blog/promptpwnd-github-actions-ai-agents
SiliconANGLE. (2025, December 11). Model Context Protocol security risks grow as unsecured servers appear across the internet. https://siliconangle.com/2025/12/11/model-context-protocol-security-risks-grow-unsecured-servers-appear-across-internet/
Bleeping Computer. (2025, December 15). OpenAI discloses API customer data breach via Mixpanel vendor hack. https://www.bleepingcomputer.com/news/security/openai-discloses-api-customer-data-breach-via-mixpanel-vendor-hack/
Cooley LLP. (2025, December 12). Showdown: New executive order puts federal government and states on a collision course over AI regulation. https://www.cooley.com/news/insight/2025/2025-12-12-showdown-new-executive-order-puts-federal-government-and-states-on-a-collision-course-over-ai-regulation
Cyble. (2025, December 11). Deepfake-as-a-service exploded in 2025: 2026 threats ahead. https://cyble.com/knowledge-hub/deepfake-as-a-service-exploded-in-2025/
Cybersecurity Dive. (2025, December 17). NIST adds to AI security guidance with Cybersecurity Framework profile. https://www.cybersecuritydive.com/news/nist-ai-cybersecurity-framework-profile/808134/
European Commission. (2025, December 17). Commission publishes first draft of Code of Practice on marking and labelling of AI-generated content. Digital Strategy. https://digital-strategy.ec.europa.eu/en/news/commission-publishes-first-draft-code-practice-marking-and-labelling-ai-generated-content
Fortune. (2025, December 15). AI coding tools exploded in 2025. The first security exploits show what could go wrong. https://fortune.com/2025/12/15/ai-coding-tools-security-exploit-software/
Infosecurity Magazine. (2025, November 25). AI and deepfake-powered fraud skyrockets amid identity fraud stagnation. https://www.infosecurity-magazine.com/news/ai-deepfake-fraud-skyrockets/
Marzouk, A. (2025, December 6). IDEsaster: A novel vulnerability class in AI IDEs. MaccariTA. https://maccarita.com/posts/idesaster/
National Institute of Standards and Technology. (2025, December 16). Draft NIST guidelines rethink cybersecurity for the AI era. https://www.nist.gov/news-events/news/2025/12/draft-nist-guidelines-rethink-cybersecurity-ai-era
NBC News. (2025, November 25). AI fraud bill seeks to criminalize deepfakes of federal officials. https://www.nbcnews.com/tech/tech-news/ai-fraud-bill-seeks-criminalize-deepfakes-federal-officials-rcna245763
OpenAI. (2025, November 27). What to know about a recent Mixpanel security incident. https://openai.com/index/mixpanel-incident/
The Hacker News. (2025, December 15). Researcher uncovers 30+ flaws in AI coding tools enabling data theft and RCE attacks. https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html
The White House. (2025, December 11). Ensuring a national policy framework for artificial intelligence. Presidential Actions. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/