White House Issues Executive Order – Ensuring a National Policy Framework for Artificial Intelligence
Federal Preemption, State Enforcement, and What AI Governance Means for Your Business in 2026 - By Daniel Pietragallo and Rock Lambros
This blog is a joint Buchalter client alert co-authored by Denver Special Counsel Daniel Pietragallo and Rock Lambros, CEO of RockCyber.
Breaking Down the Executive Order
On December 11, 2025, the White House issued an Executive Order entitled “Ensuring a National Policy Framework for Artificial Intelligence”, which establishes that “It is the policy of the United States to sustain and enhance the United States’ global dominance through a minimally burdensome national policy framework for AI.”[1]
Citing the burden of regulating AI through a patchwork of state laws and concern for “laws requiring entities to embed ideological bias within models”, the Executive Order directs the Attorney General to create an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws” under grounds of Federal preemption, constitutionality, and their effect on interstate commerce.
Section 4 of the Executive Order directs the Department of Commerce, in consultation with other Executive Branch AI policy advisors, to evaluate all existing state AI laws that can be challenged by the Task Force. The Executive Order also specifically directs the Department of Commerce to “identify laws that require AI models to alter their truthful outputs, or may compel AI developers or deployers to disclose or report information that would violate the First Amendment”.
Section 5 attempts to restrict state funding for broadband infrastructure by making states with “onerous AI laws” ineligible to receive certain federal funds. The Executive Order further directs all executive departments and Federal agencies to evaluate discretionary grant programs, to condition them “on States either not enacting an AI law…or entering into a binding agreement with the relevant agency not to enforce such laws…”
Section 6 directs the Federal Communications Commission (FCC) to initiate a proceeding within 90 days (after the Department of Commerce issues its evaluation under Section 4) “to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws”.
Section 7 further directs the Federal Trade Commission (FTC) to issue a policy statement within 90 days, analyzing the circumstances under which “State laws that require alterations to the truthful outputs of AI models” may conflict with the FTC Act’s prohibition on engaging in deceptive acts or practices.
Section 8 directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare legislative recommendations for “establishing a uniform Federal policy framework for AI that preempts State AI laws”.
This Executive Order is the latest attempt by the White House to preempt State AI laws, but it is important to remember that States have broad powers to investigate and regulate “unfair and deceptive acts or practices” that affect consumers. Therefore, even in the absence of AI-specific laws, State Attorneys General retain tremendous power to conduct sovereign enforcement actions for consumer harms caused by AI developers and deployers.
US State Attorneys General Respond in Opposition to Federal Preemption
In anticipation of the Executive Order, on November 25, 2025, a coalition of 36 state attorneys general sent a letter to leaders in the House and Senate emphatically stating their opposition to a moratorium on enforcement of state AI governance laws.[2]
On December 9, 2025, a coalition of 42 state attorneys general sent a letter directly to thirteen prominent AI companies, including OpenAI, Anthropic, Google, Meta, Perplexity, and xAI.[3] Therein, they expressed “serious concerns about the rise in sycophantic and delusional outputs to users” from Generative AI.
In that letter, the State Attorneys General warned the AI companies that failing to mitigate harms caused by their products, including implementing additional safeguards to protect children, may violate both existing civil and criminal laws. The letter also serves to reinforce States’ authority to investigate and pursue enforcement actions related to artificial intelligence under already existing consumer protection laws.
Buchalter’s Analysis of the Executive Order
It is important to note that only a few states (Colorado, California, and Texas) have passed comprehensive AI governance laws, yet 42 State Attorneys General have reinforced their existing authority to prevent consumer harms caused by AI under already existing laws. By identifying in detail the measures that AI companies need to take, State AGs are setting the baseline expectations for reasonable safety and security related to chatbots and LLMs.
While the White House Executive Order does not create any new law per se, it does direct Federal agencies to identify opportunities to challenge the constitutionality of state AI laws and propose Federal laws specifically designed to preempt state regulations. This further codifies the Administration’s emphasis on deregulation of Artificial Intelligence technologies and its pro-growth priorities.
The Executive Order is directed at states like Colorado, Texas, and California, which require developers and deployers to implement a recognized AI Governance Framework (like NIST AI RMF[4] or ISO 42001[5]), conduct regular safety testing, and take steps to mitigate any actual or foreseeable harms.
However, Colorado was recently approved to receive over $420 million in Federal funding under the Broadband Equity Access and Deployment (BEAD) grant program, and threats to withhold those funds would deprive large swaths of rural Colorado of upgraded broadband infrastructure.[6]
The Executive Order is unlikely to deter implementation of the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which goes into effect on January 1, 2026. In the coming year, US states are poised to implement comprehensive AI governance regulations, and the Executive Order appears timed to influence ongoing legislative debates at the state level.
RockCyber’s Perspective on AI Governance Implications
The Executive Order presents organizations with a paradox: regulatory uncertainty is increasing, not decreasing.
While the Administration frames this order as reducing compliance burden, the practical effect for most organizations is the opposite. We now face an indeterminate period where federal agencies will challenge state laws, state attorneys general will defend their authority, and courts will adjudicate competing claims. During this transition, organizations deploying AI systems cannot assume any particular regulatory regime will prevail.
The Compliance Vacuum is Not a Compliance Holiday
Organizations should resist the temptation to interpret federal preemption efforts as permission to scale back AI governance programs. Three realities constrain this interpretation.
First, the Executive Order explicitly carves out child safety protections, state procurement requirements, and infrastructure regulations from preemption. These categories encompass substantial AI use cases across healthcare, education, financial services, and government contracting.
Second, state attorneys general retain broad enforcement authority under existing consumer protection statutes. The recent coalition letters from state attorneys general demonstrate their intent to pursue AI-related harms under unfair and deceptive practices frameworks that predate any AI-specific legislation. An organization that abandons governance controls in reliance on federal preemption may find itself defending against state enforcement actions grounded in statutes untouched by the Executive Order.
Third, contractual and insurance obligations increasingly incorporate AI governance requirements independent of regulatory mandates. Enterprise customers, cyber insurers, and institutional investors are imposing governance standards through commercial agreements. Federal preemption of state law does not modify these private obligations.
Operational Implications Across Functions
For security and technology leaders, the Executive Order does not change the fundamental risk calculus. AI systems that process sensitive data, make consequential decisions, or interact with vulnerable populations require governance controls regardless of the regulatory environment. The security vulnerabilities inherent in large language models, the data privacy implications of training corpora, and the operational risks of AI-generated outputs persist independent of which jurisdiction regulates them.
Organizations should continue implementing risk management programs aligned with recognized frameworks. Whether state law mandates such programs or market forces demand them, the underlying need remains constant. The question is not whether to govern AI systems but how to do so efficiently.
For legal and compliance teams, the Executive Order introduces a monitoring requirement. The 90-day deadlines for Commerce Department evaluation, FCC rulemaking initiation, and FTC policy statements will generate substantial guidance over the next quarter. Organizations need mechanisms to track these developments and assess their applicability.
Compliance strategies should account for the possibility that federal preemption efforts may face legal challenge. State attorneys general have demonstrated willingness to litigate federal overreach, and courts may ultimately preserve state authority in whole or in part. Building compliance programs solely around anticipated federal preemption creates execution risk.
For executive leadership and boards, the Executive Order signals continued federal prioritization of AI industry growth over precautionary regulation. This creates both opportunity and exposure. Organizations with mature governance programs may find competitive advantage as counterparties and customers seek partners who can demonstrate responsible AI practices regardless of regulatory minimums. Organizations without such programs face reputational and operational risk if AI deployments generate harm in a lighter regulatory environment.
Practical Recommendations
Organizations should take four immediate steps.
Inventory existing AI governance controls and map them to both current state requirements and emerging federal guidance. Understand which controls satisfy multiple frameworks and which are jurisdiction-specific.
Document the business rationale for governance decisions independent of regulatory compliance. Controls implemented for risk management, customer assurance, or operational reliability justify continuation regardless of regulatory developments.
Establish monitoring processes for the federal actions directed by the Executive Order. The Commerce Department evaluation, FCC proceeding, and FTC policy statement will provide concrete guidance on federal expectations.
Engage legal counsel to assess state-specific exposure. Organizations operating in Colorado, Texas, California, or other states with AI-specific laws need jurisdiction-by-jurisdiction analysis of how federal preemption efforts may affect their obligations.
The Executive Order clarifies federal policy priorities. It does not eliminate the need for organizational AI governance. Companies that maintain disciplined governance programs will navigate this transition more effectively than those who interpret regulatory uncertainty as regulatory absence.
RockCyber works with organizations to build AI governance programs that satisfy multiple frameworks and adapt to regulatory evolution. Our approach emphasizes controls that serve business objectives independent of compliance mandates, ensuring resilience against regulatory volatility.
Buchalter’s Bottom Line for Business
Although the Executive Order signals a continued effort by the White House to disrupt and preempt state-level AI governance regulations, businesses should not interpret this action as a green light to discontinue AI governance efforts. State AG enforcement authority under existing consumer protection laws remains intact regardless of whether AI-specific statutes are preempted.
Businesses should closely monitor policy guidance from the Executive Branch and calendar the 90-day deadlines: the Commerce Department’s state law evaluation (March 2026), the FCC’s rulemaking initiation (June 2026 at the latest), and the FTC’s policy statement (March 2026). Each will generate actionable federal guidance.
Businesses should carefully evaluate their risk exposure under emerging state AI laws in Colorado, Texas, and California to maintain compliance with those regulatory mandates. For example, AI laws in Texas and California offer a safe harbor or a rebuttable presumption of compliance if the business has implemented a recognized framework like the NIST AI RMF or ISO 42001. Any business using AI to make consequential decisions that might affect consumer rights should strongly consider taking advantage of these provisions by implementing a recognized governance framework.
This communication is not intended to create or constitute, nor does it create or constitute, an attorney-client or any other legal relationship. No statement in this communication constitutes legal advice nor should any communication herein be construed, relied upon, or interpreted as legal advice. This communication is for general information purposes only regarding recent legal developments of interest, and is not a substitute for legal counsel on any subject matter. No reader should act or refrain from acting on the basis of any information included herein without seeking appropriate legal advice on the particular facts and circumstances affecting that reader. For more information, visit www.buchalter.com.
[1] https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
[2] Letter-to-Congress-AI-Moratorium_FINAL-corrected.pdf
[3] Letter-to-Meta-re-Scam-Investments-_FINAL.pdf
[4] AI Risk Management Framework | NIST
[5] ISO/IEC 42001:2023 – AI management systems
[6] Colorado Approved for $420 Million in Federal Broadband Funding, Connecting Rural Colorado
RockCyber is a cybersecurity and AI governance consulting firm serving both innovative startups and Fortune 500 companies. Founded by Rock Lambros, author of The CISO Evolution, RockCyber delivers advisory services across AI strategy, risk management, and regulatory compliance. The firm’s proprietary RISE and CARE frameworks align AI initiatives with business objectives while embedding governance controls that satisfy NIST AI RMF, ISO 42001, the EU AI Act and emerging state requirements. Rock serves as a Distinguished Fellow with the Enterprise Risk Quantification Institute and contributes to OWASP’s Agentic Security Initiative. RockCyber offers Virtual CISO and Virtual Chief AI Officer services, cybersecurity assessments, AI risk assessments, and compliance program development. For organizations navigating the intersection of AI innovation and regulatory uncertainty, RockCyber provides practical guidance grounded in measurable outcomes. Learn more at www.rockcyber.com.