Key Takeaways
- 92% of health insurers currently use or plan to use AI systems, according to Fenwick's regulatory tracker, yet roughly one-third still lack regular bias-testing protocols.
- The NAIC's 2023 Model AI Bulletin has been adopted by nearly half of U.S. states, but it is principle-based guidance, not binding law — carriers can acknowledge it and do almost nothing concrete to comply.
- UnitedHealth's naviHealth algorithm allegedly carried a 90% error rate on denied claims that were appealed, but because so few policyholders appealed, the vast majority of wrongful denials were never reversed, revealing how explainability failures translate directly into financial harm.
- Trump's December 2025 Executive Order establishing a federal AI litigation task force puts Washington on a direct collision course with Colorado, California, and New York's AI anti-discrimination statutes, and insurance's McCarran-Ferguson protections may not be enough to shield state law.
- Forward-looking carriers are building explainability infrastructure now, betting that regulators will shift from guidance to enforcement within 18 months.
The algorithm denied the claim in milliseconds. The policyholder waited weeks for an explanation that never fully came. The regulator, when asked how the model works, had no legal authority to compel an answer.
This isn't a hypothetical. It's the operational reality for millions of American policyholders in 2026. According to research tracked by Fenwick, 92% of health insurers, 88% of auto insurers, and 70% of homeowners insurers currently use or plan to deploy AI systems in core functions — underwriting, pricing, claims adjudication, fraud detection. The insurance industry has moved faster on AI adoption than almost any other regulated sector. The regulatory infrastructure to govern that adoption has not kept pace. The gap between those two realities is the defining compliance fault line of 2026.
The Statute Problem: Most State Insurance Codes Were Written Before Google Existed
The foundational architecture of state insurance regulation — unfair trade practice statutes, claim settlement regulations, rate filing requirements — was built during an era when actuarial tables were reviewed by humans and denial letters were typed on paper. When these codes prohibit "arbitrary" or "capricious" claims handling, they assume a human made a decision that could be explained through a chain of documented reasoning. A gradient-boosted tree ensemble with 400 input variables doesn't work that way.
This creates a specific legal problem: the standard of explainability the law demands is structurally incompatible with how many modern AI models produce outputs. Buchanan Ingersoll & Rooney's analysis identifies 31 states that have now adopted NAIC-aligned AI governance bulletins — but those bulletins are regulatory guidance instruments, not amendments to the underlying insurance code. A carrier can acknowledge the NAIC framework, implement a nominal governance committee, and still deploy a claims-scoring model that nobody inside the organization can fully explain to a regulator.
The enforcement consequence of this gap is real: insurance departments requesting documentation on AI systems during market conduct examinations currently have no binding statutory authority in most states to compel disclosure of model architecture, training data provenance, or validation methodology from third-party vendors. The regulator can ask. The carrier — or more often its vendor — can decline to answer fully on proprietary grounds.
What Carriers Are Actually Deploying: Agentic Claims, Algorithmic Pricing, and Predictive Lapse Models
The public discourse on insurance AI tends to anchor on a single use case: automated claims denial. The actual deployment picture is considerably broader and, from a regulatory standpoint, more complex.
Carriers are using machine learning in predictive lapse models that identify policyholders likely to cancel, then price renewals accordingly — a practice with clear disparate-impact potential along income and zip code lines. Algorithmic pricing engines are ingesting credit scores, behavioral telematics, social media proxies, and satellite imagery in combinations that no single underwriter ever reviews holistically. And as ZwillGen's review of emerging AI claims rules documents, a new generation of agentic claims tools is moving toward fully automated adjudication on standard property and auto claims, where a human reviews the output but not the reasoning path that produced it.
The UnitedHealth naviHealth case crystallizes what happens when this deployment outpaces governance. A CBS News investigation and subsequent class-action proceeding revealed that the nH Predict algorithm used to deny post-acute rehabilitative care for Medicare Advantage patients carried an alleged 90% error rate on appeals — meaning nine of ten challenges were reversed. Yet because only approximately 0.2% of denied policyholders filed appeals, the overwhelming majority of wrongful denials stood. In February 2025, a federal court allowed breach-of-contract and good-faith claims to proceed, specifically because the policy language promised that coverage decisions would be made by clinical staff — not an algorithm. That is an explainability failure with direct financial liability attached.
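To make the scale of that failure concrete, the arithmetic on the alleged figures is worth working through. A minimal sketch follows; the denial count is a hypothetical round number, the 90% and 0.2% rates are allegations from the litigation, and extrapolating the appeal-stage error rate to all denials is itself an assumption.

```python
# Illustrative arithmetic only. The 90% reversal rate and ~0.2% appeal
# rate are allegations from the naviHealth litigation; the denial count
# is a hypothetical round number, not a case statistic.
denials = 100_000            # hypothetical number of algorithmic denials
appeal_rate = 0.002          # alleged: ~0.2% of denied policyholders appealed
reversal_rate = 0.90         # alleged: 9 of 10 appeals were reversed

appealed = denials * appeal_rate                # 200 appeals
reversed_on_appeal = appealed * reversal_rate   # 180 reversals

# Assumption: the appeal-stage error rate holds across all denials.
wrongful = denials * reversal_rate              # 90,000 wrongful denials
uncorrected = wrongful - reversed_on_appeal     # 89,820 never corrected

print(f"{uncorrected:,.0f} of {wrongful:,.0f} alleged wrongful denials "
      f"({uncorrected / wrongful:.1%}) stand uncorrected.")
```

Under those assumptions, more than 99% of wrongful denials are never reversed, which is why the appeal rate, not the error rate, is the number that drives the alleged harm.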
The Explainability Standard That Doesn't Exist Yet — But Regulators Are Demanding It Anyway
The NAIC Model AI Bulletin, adopted December 2023, is the closest thing the U.S. has to a national explainability standard for insurance AI. It requires insurers to maintain documentation demonstrating "how inputs lead to specific outputs or decisions" and to explain adverse coverage determinations with "a specific rationale for denial." This sounds robust. It isn't enforceable in the way a statute is, and it is deliberately principle-based — the NAIC explicitly declined to prescribe specific technical standards for model architecture or validation methodology.
The result is that regulators are demanding explainability in examinations while lacking the statutory tools to define what adequate explainability looks like. A carrier that says "our model uses 200 variables weighted by a proprietary algorithm, and the top three contributing factors to this denial were claims frequency, credit score, and geographic risk tier" has arguably satisfied the NAIC guidance without revealing anything a policyholder or regulator could meaningfully challenge.
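What would a more meaningful per-decision rationale look like? A minimal sketch, assuming a gradient-boosted classifier and the open-source shap library: local attributions rank how much each input pushed one specific decision. The data here is synthetic and the feature names are hypothetical stand-ins, not any carrier's actual scoring model.

```python
# Minimal sketch of per-decision local explainability with SHAP on a
# gradient-boosted model. Data and feature names are synthetic
# stand-ins, not any carrier's actual scoring model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["claims_frequency", "credit_score", "geo_risk_tier", "policy_age"]
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
denial = X[:1]                                    # one adverse decision
contributions = explainer.shap_values(denial)[0]  # contribution per feature

# Rank features by contribution to this specific output: the kind of
# "specific rationale for denial" the NAIC bulletin contemplates.
for name, value in sorted(zip(features, contributions),
                          key=lambda kv: -abs(kv[1])):
    print(f"{name:>16}: {value:+.3f}")
```

Note the limit of this approach: an attribution list explains which inputs mattered, but it still leaves open whether a policyholder can meaningfully contest the underlying variables, which is exactly the gap described above.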
The 2026 regulatory calendar is meant to close this gap. Fenwick's regulatory tracker notes that regulators are expected to begin deploying the NAIC AI Systems Evaluation Tool during market conduct exams this year, and a draft model law specifically targeting third-party data and model vendors is anticipated — potentially including licensing requirements for AI vendors that currently operate with essentially no insurance-specific regulatory accountability.
Anti-Bias Guardrails: Where NAIC Guidance Ends and Legal Liability Begins
Colorado remains the clearest example of what binding AI anti-discrimination law in insurance looks like. SB 21-169, signed in 2021 and expanded through Amended Regulation 10-1-1 effective October 15, 2025, requires insurers to test their external consumer data sources, algorithms, and predictive models for unfair discrimination against protected classes — and to document that testing. This isn't guidance; non-compliance is an unfair trade practice under Colorado law.
New York's DFS Circular Letter 2024-7 operates similarly, requiring insurers to demonstrate AI systems don't proxy for protected classes and mandating vendor audits. California's SB 1120 (2024) prohibits health insurers from denying coverage based solely on algorithmic outputs, requiring licensed clinician review of adverse determinations.
The gap between these progressive-state mandates and the remaining jurisdictions is significant. Buchanan's analysis estimates roughly one-third of health insurers still lack regular testing protocols for bias and discrimination in their AI models, even in states whose regulators have adopted NAIC-aligned guidance calling for exactly that testing. That's not a compliance strategy. It's a litigation inventory.
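What would such a testing protocol even contain? One common quantitative screen is the adverse impact ratio: the favorable-outcome rate for a protected group divided by the rate for a reference group. A minimal sketch follows; the 0.8 threshold echoes the EEOC four-fifths rule of thumb, and Colorado's Regulation 10-1-1 does not prescribe this exact metric, so treat it as an illustrative starting point rather than a compliance recipe.

```python
# Illustrative bias screen: adverse impact ratio per group, flagged
# against the four-fifths (0.8) rule of thumb. Group labels and the
# audit sample below are hypothetical.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])      # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    ref_rate = counts[reference_group][0] / counts[reference_group][1]
    return {g: (a / t) / ref_rate for g, (a, t) in counts.items()}

# Hypothetical audit sample of model-driven approval decisions.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
for group, ratio in adverse_impact_ratios(sample, reference_group="A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A real protocol would go further, testing proxy variables rather than only labeled groups and documenting both methodology and remediation, but even this level of screening is more than a nominal governance committee produces.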
How Forward-Looking Carriers Are Building for the Regulation That's Coming
The carriers that will emerge from the next 18 months of regulatory escalation without material examination findings or class-action exposure share a common posture: they are treating explainability infrastructure as a capital investment, not a cost center.
Concretely, this means four things. First, building model inventories — comprehensive registries of every AI system used in regulated processes, tiered by decision impact. Second, deploying SHAP values or equivalent local explainability tools that can translate a model's output into a human-readable rationale for each individual adverse decision. Third, establishing contractual explainability standards with third-party vendors, including the right to audit and the obligation to provide documentation during regulatory examinations. And fourth, creating escalation paths for high-stakes decisions — coverage denials, significant premium adjustments — where human review of the AI output is documented and auditable.
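A minimal sketch of the first item, the tiered model inventory, might look like the following. The schema, field names, and tier definitions are illustrative assumptions, not a regulator-prescribed format.

```python
# Illustrative model inventory tiered by decision impact. The schema
# and tiers are assumptions for the sketch, not a prescribed format.
from dataclasses import dataclass
from enum import Enum

class ImpactTier(Enum):
    HIGH = "high"        # coverage denials, pricing, eligibility
    MEDIUM = "medium"    # triage, routing, fraud flags with human review
    LOW = "low"          # back-office, no direct policyholder effect

@dataclass
class ModelRecord:
    name: str
    owner: str                   # accountable business unit
    vendor: str | None           # third party, if externally sourced
    tier: ImpactTier
    regulated_process: str       # e.g. "claims adjudication"
    last_bias_test: str | None   # ISO date of most recent documented test
    explainability_method: str   # e.g. "SHAP local attributions"
    audit_rights: bool           # contractual right to audit the vendor

inventory = [
    ModelRecord("claims-score-v3", "Claims Ops", "AcmeML", ImpactTier.HIGH,
                "claims adjudication", "2026-01-15",
                "SHAP local attributions", audit_rights=True),
]

# Exam-readiness check: every high-impact model needs a current bias
# test and audit rights before a market conduct examination.
for m in inventory:
    if m.tier is ImpactTier.HIGH and (m.last_bias_test is None
                                      or not m.audit_rights):
        print(f"GAP: {m.name} is not examination-ready")
```

The registry itself is trivial; the discipline is keeping it complete. Tiering by decision impact matters because it tells examiners, and the carrier, where scrutiny concentrates first.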
This infrastructure is expensive. It is also now priced into the cost of operating in states like Colorado, New York, and California — and will be priced into operations in every state where the NAIC model law eventually acquires enforcement teeth.
The Federal Wildcard: Could a National AI Framework Override State Insurance Sovereignty?
The most disruptive variable in the 2026 regulatory landscape is the Trump administration's December 2025 Executive Order on AI policy. As analyzed by Latham & Watkins, the order establishes an AI Litigation Task Force within the DOJ, tasked with identifying and challenging state AI laws that "obstruct" a minimally burdensome federal framework — including through direct constitutional litigation on Commerce Clause grounds.
For insurance, this creates a sovereignty collision that McCarran-Ferguson may not fully resolve. That act reserves regulation of the business of insurance to the states, but its reverse-preemption protection applies to "acts of Congress," not executive orders. Paul Hastings' analysis notes that if Congress eventually passes enabling legislation consistent with the EO's framework, a sufficiently specific federal AI statute could preempt state insurance AI regulations — including Colorado's anti-discrimination requirements and New York's DFS guidance.
The practical implication for compliance officers is uncomfortable: state AI rules that are currently enforceable may face federal preemption challenges within two years, but they remain fully enforceable today. Building to the current state-law standard is not optional, even if the federal picture eventually harmonizes downward. Carriers that delay compliance betting on federal preemption will face examination findings and litigation in the window before any federal framework actually takes effect.
The algorithm will keep running. The question is whether the institutions accountable for its outputs — carriers, vendors, and regulators alike — will build the governance infrastructure to justify the decisions it makes. The evidence so far suggests they're running behind.
Frequently Asked Questions
How many states have adopted the NAIC Model AI Bulletin, and does it create binding legal obligations?
Nearly half of U.S. states have adopted the NAIC's December 2023 Model AI Bulletin as of early 2026, according to [Fenwick's regulatory tracker](https://www.fenwick.com/insights/publications/tracking-the-evolution-of-ai-insurance-regulation). However, the bulletin is principle-based guidance rather than statute — it establishes governance expectations but does not prescribe specific technical standards, and non-compliance does not constitute a per se violation of insurance code in most adopting states.
What does the UnitedHealth naviHealth case tell us about the practical consequences of AI explainability failures?
The class action, which a federal court allowed to proceed in February 2025, alleges that UnitedHealth's nH Predict algorithm carried a 90% error rate on claims it denied — meaning 9 of 10 challenged denials were reversed on appeal, per [STAT News reporting](https://www.statnews.com/2023/11/14/unitedhealth-class-action-lawsuit-algorithm-medicare-advantage/). Because only ~0.2% of denied policyholders actually appealed, the overwhelming majority of wrongful denials were never corrected — a direct financial harm the court found sufficient to support breach of contract claims.
What does Colorado's SB 21-169 require that the NAIC Model Bulletin does not?
Colorado's SB 21-169 and its implementing Regulation 10-1-1 (effective October 15, 2025) impose binding, enforceable obligations on insurers to test external consumer data sources, algorithms, and predictive models for unfair discrimination against protected classes — and to document those tests, per the [Colorado Division of Insurance](https://doi.colorado.gov/for-consumers/sb21-169-protecting-consumers-from-unfair-discrimination-in-insurance-practices). Non-compliance is classified as an unfair trade practice, creating direct regulatory liability rather than the voluntary compliance expectation of NAIC guidance.
How does the Trump administration's December 2025 AI Executive Order affect state insurance AI regulations?
The December 11, 2025 Executive Order established a DOJ AI Litigation Task Force authorized to challenge state AI laws in federal court on Commerce Clause and preemption grounds, according to [Latham & Watkins' analysis](https://www.lw.com/en/insights/ai-executive-order-targets-state-laws-and-seeks-uniform-federal-standards). However, the EO itself cannot override existing state law: McCarran-Ferguson's reverse preemption yields only to an act of Congress, not an executive order, so state insurance AI regulations remain fully enforceable while the federal challenge plays out.
What practical steps should compliance officers take given the current regulatory uncertainty?
Carriers should build comprehensive AI model inventories tiered by decision impact, deploy explainability tools (such as SHAP values) that can produce human-readable rationales for individual adverse decisions, and establish contractual audit rights with third-party AI vendors — as outlined in [Buchanan Ingersoll & Rooney's compliance framework](https://www.bipc.com/when-algorithms-underwrite-insurance-regulators-demanding-explainable-ai-systems). State laws in Colorado, California, and New York are enforceable now, and regulators are expected to begin using the NAIC AI Systems Evaluation Tool in market conduct examinations in 2026.