Key Takeaways
- Trump's December 2025 executive order created legal uncertainty without delivering actual preemption — the DOJ's AI Litigation Task Force must win in court, a process that will take years while state mandates remain fully enforceable.
- The Senate stripped the 10-year state AI moratorium from the One Big Beautiful Bill before it was signed on July 4, 2025, leaving states entirely free to expand AI regulation — which they are aggressively doing.
- Up to 37 states have adopted the NAIC's Model AI Bulletin, and a 12-state NAIC AI examination tool pilot is active through September 2026, creating immediate market conduct exposure regardless of federal litigation timelines.
- Colorado's quantitative disparate impact testing mandate, New York's proxy discrimination ban, and California's autonomous decision restrictions collectively cover premium markets too large to defer compliance on while betting on federal intervention.
- Carriers building AI governance to the highest state standard now will absorb compliance costs but avoid examination findings and private litigation exposure; those waiting for Washington will face both.
The compliance calculus for carriers deploying AI in underwriting has become genuinely unsolvable. On December 11, 2025, President Trump signed an executive order directing federal agencies to treat most state AI regulations as obstacles to be litigated away. Six weeks later, those same state regulators rolled out structured AI examination tools and expanded algorithmic discrimination mandates. The result: insurers operating across multiple states must simultaneously comply with state laws the federal government is actively trying to kill, while preparing for a federal framework that doesn't yet exist. That's not regulatory friction. That's a structural impossibility — and the insurance industry's strategy of waiting for Washington to resolve it is a liability, not a plan.
The Executive Order That Created More Chaos Than It Solved
The December 2025 executive order — formally titled "Ensuring a National Policy Framework for Artificial Intelligence" — set out a policy of a minimally burdensome federal AI framework and deployed multiple preemption vectors against conflicting state laws. The Department of Justice was directed to stand up an AI Litigation Task Force, charged with identifying and challenging state AI laws on Commerce Clause, First Amendment, and federal preemption grounds. The FTC and FCC were directed to initiate proceedings establishing federal standards that would supersede state disclosure and reporting requirements. The order even incorporated a funding leverage mechanism: states with conflicting AI laws risk losing access to federal broadband deployment grants.
But the EO has a foundational problem. It directs agencies to act but doesn't itself preempt anything. The DOJ's litigation task force must win in court before any state law actually falls, and Sidley Austin's analysis makes clear that constitutional challenges will be slow and outcome-uncertain. The administration also tried a more blunt instrument: the One Big Beautiful Bill Act passed the House in May 2025 with a provision imposing a 10-year moratorium on state AI regulation enforcement. The Senate stripped that provision before the bill became law on July 4, 2025. States remain fully empowered to regulate AI. As Proskauer observed, the outcome was plain: the Big Beautiful Bill leaves AI regulation to states and localities — for now. The executive order succeeded in creating legal uncertainty without delivering the preemption it promised. For compliance teams, that's the worst possible outcome.
Dozens of States, Zero Consensus: The Underwriting Regulation Wild West
Even among states that have moved on AI governance, the requirements diverge sharply, making multi-state compliance an exercise in satisfying contradictory mandates. Approximately 24 jurisdictions had adopted the NAIC's Model AI Bulletin as of mid-2025, with more recent figures suggesting 37 states as of early 2026. But "adopted" masks enormous variation in what's actually required.
Colorado's algorithmic discrimination statute (C.R.S. §10-3-1104.9) is the most demanding in the country. It expanded to cover auto insurance and health plans on October 15, 2025, now requiring carriers to perform quantitative disparate impact testing across protected categories. New York's Circular Letter 2024-7 requires insurers to demonstrate that AI systems don't proxy for protected classes and mandates explanatory documentation, vendor audits, and bias testing. California restricts health insurers from relying solely on automated adverse determinations, requiring licensed clinician review and AI-specific disclosures at every denial.
These three states collectively represent a dominant share of U.S. premium volume. Carriers serving those markets have no choice but to build explainability infrastructure, maintain auditable model documentation, and conduct recurring bias validation. The Trump executive order specifically targets Colorado's law as potentially compelling "false results" to avoid differential treatment. But until a court rules otherwise, Colorado's law is valid, enforceable, and active. Complying with it while betting it gets struck down is not a risk management strategy.
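Colorado's statute and implementing rules define the required testing methodology, but a common starting point for quantitative disparate impact analysis is the adverse impact ratio (the "four-fifths rule" drawn from employment law). The sketch below is illustrative only — the function names and thresholds are assumptions, not Colorado's prescribed test:

```python
# Minimal sketch of an adverse impact ratio check (the "four-fifths rule").
# Illustrative only: Colorado's statute and Division of Insurance rules
# define the actual required methodology. All names here are hypothetical.

def approval_rate(decisions):
    """Fraction of applications approved; decisions is a list of bools."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def adverse_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Values below 0.8 are a conventional red flag."""
    ref_rate = approval_rate(reference)
    if ref_rate == 0:
        return float("inf")
    return approval_rate(protected) / ref_rate

# Hypothetical underwriting outcomes (True = approved)
protected_group = [True, False, True, False, False, True, False, False]  # 3/8
reference_group = [True, True, True, False, True, True, False, True]     # 6/8

ratio = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50 -> flag
```

In practice, carriers run tests like this across every protected category, document the results, and remediate models that trip the threshold — which is why the mandate requires standing infrastructure rather than a one-off analysis.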
How State Liability Expansion Bills Are Daring Insurers to Comply
The active 2026 legislative pipeline makes the problem materially worse. Wiley's state AI bill tracker identifies a wave of proposals that directly implicate insurance operations. New York is advancing legislation banning personalized algorithmic pricing, with class action exposure of at least $5,000 per violation. Minnesota targets AI-assisted real-time dynamic pricing. Multiple states are creating private rights of action for algorithmic bias claims tied to healthcare coverage decisions.
This liability expansion trajectory matters structurally because it shifts enforcement from regulators to plaintiffs. Even if the DOJ's AI Litigation Task Force eventually defeats a state statute on Commerce Clause grounds, individual plaintiffs can continue to litigate existing claims under that law until appellate courts act. Carriers that assumed a state law would eventually fall and didn't build compliant systems will face retroactive litigation exposure on decisions already made. The NAIC's position is explicit: existing insurance laws apply regardless of whether decisions are made by humans, algorithms, or third-party vendors. State insurance departments aren't waiting for federal resolution to begin market conduct examinations, and neither are plaintiffs' attorneys.
The Impossible Audit: When Federal Silence Meets State Mandates
The NAIC launched a 12-state pilot of its AI Systems Evaluation Tool in January 2026, running through September. Participating states include Colorado, California, Maryland, Virginia, Connecticut, Pennsylvania, Florida, Rhode Island, Iowa, Vermont, Louisiana, and Wisconsin. The tool gives regulators a structured examination framework for reviewing carrier AI governance programs, vendor contracts, model documentation, and disparate impact testing results.
Crowell & Moring's analysis of the tool's requirements is direct: carriers must maintain board-level accountability documentation, a comprehensive inventory of all deployed AI systems including third-party vendor models, vendor contracts containing audit rights and regulatory cooperation obligations, and recurring risk assessments demonstrating anti-discrimination compliance. This is enterprise-scale compliance infrastructure that must be operational before an examiner arrives, not built in response to a request.
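The inventory requirement is the piece most carriers underestimate. As a rough sketch of what "examination-ready" means in practice — field names are hypothetical and not the NAIC tool's actual schema — each deployed model needs a record tying together ownership, vendor terms, and bias-testing currency:

```python
# Illustrative sketch of an AI model inventory record of the kind a
# structured examination might request. Field names are hypothetical
# assumptions, not the NAIC evaluation tool's schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryRecord:
    model_id: str
    business_function: str            # e.g. "underwriting", "claims triage"
    owner: str                        # accountable business owner
    vendor: Optional[str]             # None for internally built models
    vendor_audit_rights: bool         # contract grants audit/cooperation rights
    last_bias_assessment: Optional[date]
    protected_class_proxies_reviewed: bool

    def exam_ready(self, max_age_days: int = 365) -> bool:
        """Ready if bias testing is current, proxy review is done, and
        (for vendor models) audit rights exist in the contract."""
        if self.last_bias_assessment is None:
            return False
        stale = (date.today() - self.last_bias_assessment).days > max_age_days
        vendor_gap = self.vendor is not None and not self.vendor_audit_rights
        return not stale and not vendor_gap and self.protected_class_proxies_reviewed
```

Multiply a record like this across hundreds of deployed systems — including third-party vendor models the carrier didn't build — and the scale of the governance lift becomes clear.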
The scale of the exposure is significant. A 2025 NAIC survey across 16 states found that 84% of health insurers are already using AI or machine learning in some capacity. Most of those systems are now subject to examination in one or more pilot states. Telling a state examiner that AI documentation isn't ready because the carrier was waiting for federal clarity isn't a defense. It's a market conduct finding.
What Carriers Are Actually Doing — and What It's Costing Them
The sophisticated carriers aren't waiting for Washington. They're building compliance infrastructure to the highest state standard — treating Colorado-plus-New York-plus-California as the de facto national floor and applying those requirements across all operations. That means investing in explainable AI tooling, bias auditing programs, vendor contract renegotiation, and model inventories that track hundreds of deployed systems across underwriting, pricing, claims, and fraud detection.
This is expensive. AI governance frameworks require new investments across explainability tools, bias auditing infrastructure, and cybersecurity posture. Regional and mid-market carriers without dedicated AI governance teams face a build-versus-outsource decision with no clean answer. Outsourcing compliance functions to third-party vendors creates new vendor management obligations under the same state rules the vendor is supposed to help satisfy. Building internal capability requires talent and budget that many mid-tier carriers don't have.
The strategic disparity here will accelerate consolidation. Carriers that build robust AI governance infrastructure now will survive examination cycles and the incoming wave of plaintiff litigation. Those that don't will face regulatory action, litigation exposure, or both — and the carriers positioned to acquire distressed books will be those that were already compliant.
The Only Way Out: Why the Industry Needs to Stop Waiting for Washington
The insurance industry's political strategy has been to advocate for federal uniformity as the solution to patchwork state regulation. That strategy has failed. The One Big Beautiful Bill's 10-year state moratorium was stripped from the final legislation. The Trump EO's preemption mechanisms depend on litigation that could take three to five years to produce enforceable outcomes. The McCarran-Ferguson Act — the 1945 statute granting states primacy over insurance regulation — creates a durable legal foundation for state authority that the DOJ's task force will struggle to overcome in insurance-specific contexts.
The NAIC has made its position unambiguous: it opposes federal preemption, it supports state-based oversight, and it is actively building the examination infrastructure to enforce that position. With 37 states adopting the Model Bulletin and an active multi-state AI examination pilot running through September 2026, state regulators are creating facts on the ground that no executive order can easily reverse.
The carriers that will win this period are those that treat state compliance as a permanent operating condition. That means investing in governance frameworks that satisfy the Colorado-New York-California standards across all operations, renegotiating vendor contracts before examination cycles begin, and accepting that the patchwork is the regulatory reality for at least the next three to five years. The federal preemption thesis was a reasonable bet in 2024. In April 2026, it's an exposure.
Frequently Asked Questions
Does the Trump executive order actually preempt state AI insurance regulations?
The December 2025 executive order directed federal agencies to challenge conflicting state laws but doesn't itself preempt them — the DOJ's AI Litigation Task Force must win in federal court before any state law falls, and those challenges could take years to resolve. Meanwhile, state insurance departments continue enforcing existing rules and rolling out new examination tools. The [Sidley Austin analysis](https://www.sidley.com/en/insights/newsupdates/2025/12/unpacking-the-december-11-2025-executive-order) of the order makes clear that constitutional challenges will be slow and outcome-uncertain.
What happened to the 10-year state AI moratorium in the One Big Beautiful Bill?
The House passed the One Big Beautiful Bill Act in May 2025 with a provision that would have imposed a 10-year moratorium on state and local AI regulation enforcement. The Senate removed the provision before the bill was signed into law on July 4, 2025, leaving states fully empowered to regulate AI. As [Proskauer's analysis](https://www.proskauer.com/blog/big-beautiful-bill-leaves-ai-regulation-to-states-and-localities-for-now) confirmed, AI regulation remains with states and localities for the foreseeable future.
What is the NAIC AI Evaluation Tool and which states are using it?
The NAIC launched a 12-state pilot of its AI Systems Evaluation Tool in January 2026, running through September, with participating states including Colorado, California, Maryland, Virginia, Connecticut, Pennsylvania, Florida, Rhode Island, Iowa, Vermont, Louisiana, and Wisconsin. The tool gives regulators a structured framework for examining carrier AI governance programs during market conduct reviews. [Crowell & Moring's analysis](https://www.crowell.com/en/insights/client-alerts/naic-intensifies-ai-regulatory-focus-what-health-insurance-payors-need-to-know) indicates that board-level documentation, vendor audit rights, and recurring disparate impact assessments are core compliance requirements the tool evaluates.
Which states have the most demanding AI requirements for insurance underwriters?
Colorado's algorithmic discrimination statute (C.R.S. §10-3-1104.9) requires quantitative disparate impact testing and expanded to cover auto and health plans in October 2025. New York's Circular Letter 2024-7 mandates proof that AI systems don't proxy for protected classes, along with vendor audits and bias testing documentation. California restricts health insurers from relying solely on automated adverse decisions, requiring licensed clinician review under Health & Safety Code §1367.01, per [Buchanan Ingersoll & Rooney's review](https://www.bipc.com/when-algorithms-underwrite-insurance-regulators-demanding-explainable-ai-systems).
How does the McCarran-Ferguson Act affect federal AI preemption in insurance?
The McCarran-Ferguson Act of 1945 grants states primary authority over insurance regulation and limits the application of federal law to the business of insurance. This creates a significant legal obstacle for federal preemption efforts in the insurance sector specifically, meaning the DOJ's AI Litigation Task Force arguments based on Commerce Clause or First Amendment grounds will face heightened scrutiny when applied to insurance underwriting and claims practices. The [NAIC argues](https://www.oneinc.com/resources/blog/ai-regulation-in-insurance-naic-pias-take-on-federal-oversight) that McCarran-Ferguson, combined with 150 years of state regulatory precedent, makes broad federal preemption of insurance AI rules legally untenable.