InsurTech & AI

One Million Robotaxi Rides a Month, Zero Actuarial Tables: Why Insurers Are Underwriting Physical AI on Pure Guesswork

Key Takeaways

  • Waymo surpassed 500,000 weekly paid rides in March 2026, yet the actuarial tables insurers rely on to price this risk simply do not exist — carriers are extrapolating from human-driver loss history to underwrite a categorically different risk.
  • EY's 2026 analysis identifies physical AI as a more structurally disruptive force for insurers than generative AI, because it shifts liability away from human operators and distributes it across hardware vendors, software platforms, and deployers simultaneously.
  • Existing commercial auto, product liability, and cyber policies were each designed for distinct, bounded risks; physical AI collapses those boundaries, leaving underwriters trying to stack policies that collectively still don't cover the actual exposure.
  • The three-party liability tangle — manufacturer, software platform, deployer — means every claim involving a physical AI system will require fault attribution that current policy language has no mechanism to resolve cleanly.
  • The insurance industry's premium paradox is acute: charge enough to cover unknown catastrophic losses and you price physical AI out of commercial adoption; charge what the market will bear and you guarantee underreserved catastrophic claims.

The insurance industry is underwriting one of the most consequential technology transitions in history using data from a world that no longer applies. Waymo crossed 500,000 weekly paid robotaxi rides in March 2026, up from roughly 175,000 a year earlier, with the company delivering 14 million paid trips across 2025 alone. That is commercial-scale physical AI operating in public space, on public roads, interacting with pedestrians, cyclists, and human-driven vehicles every second of every day. The actuarial tables that should be pricing this risk do not exist. Insurers are instead recycling frameworks derived from human-driver loss patterns to underwrite a system that operates on entirely different failure modes, liability structures, and causal chains. EY's early-2026 analysis puts it plainly: physical AI represents a deeper structural disruption to the insurance industry than generative AI ever will.

EY's 2026 Warning Decoded: Physical AI Is a Categorically Different Risk

The distinction EY draws between generative AI and physical AI is not a matter of degree; it is a matter of category. Generative AI touches insurers primarily as an operational efficiency tool and a fraud vector. Physical AI, by contrast, directly restructures the risk pool itself. When a large language model hallucinates, the downstream harm is usually informational. When a robotaxi's perception stack misclassifies a pedestrian at 35 mph, the harm is immediate, physical, and potentially fatal.

EY consultant Chris Raimondo, writing in early 2026, identifies the core liability reallocation: "Liability may extend across multiple parties, including the hardware vendor, the AI software platform, and the company or individual that owns the vehicle." This is not a refinement of existing auto liability doctrine — it is a demolition of it. Traditional motor insurance anchors liability to a human operator. Remove the human operator and the legal and actuarial architecture that has worked tolerably well since the 1920s suddenly has no load-bearing wall.

The EY Global Insurance Outlook for 2026 documents the scale of the incoming pressure: global AI robot deployments could reach 1.3 billion units by 2035, with AI spending already projected to exceed $500 billion by 2027. The humanoid robotics segment alone is forecast to reach $150 billion by 2035. These are not research-lab figures; they represent tens of millions of physical AI agents operating in warehouses, hospitals, public streets, and private homes, each one an unpriced liability event waiting to occur.

The Actuarial Void: Pricing Risk That Has Never Been Observed

Conventional actuarial methodology is built on credible loss experience. Insurers price commercial auto by analyzing frequency and severity distributions drawn from years of claims data segmented by vehicle class, geography, driver profile, and use type. The NHTSA's foundational finding that 94% of motor vehicle accidents involve driver-related factors tells you exactly how much of that historical data is structurally irrelevant to autonomous systems. Physical AI doesn't get tired, distracted, or impaired — but it does encounter edge cases outside its training distribution, suffer sensor degradation, receive over-the-air software updates that alter behavior mid-deployment, and fail in ways that have no analog in any existing loss database.
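To make concrete why credible loss history is the load-bearing input, here is a minimal sketch of the standard frequency/severity pricing decomposition the paragraph describes: Poisson claim counts and lognormal claim sizes, simulated to estimate the pure premium per insured unit. All parameters below are hypothetical. With decades of human-driver claims, the frequency rate and severity distribution are well estimated; for physical AI both are guesses, and the output inherits every bit of that uncertainty.

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's method for a Poisson claim count (the stdlib has no Poisson sampler)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def pure_premium_mc(freq_lambda, sev_mu, sev_sigma, n_sims=100_000, seed=7):
    """Monte Carlo estimate of expected annual loss per insured unit under the
    classic frequency/severity decomposition: Poisson claim counts, lognormal
    claim severities. The estimate is only as stable as the credibility of the
    loss data behind freq_lambda, sev_mu, and sev_sigma."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        n_claims = sample_poisson(rng, freq_lambda)
        total += sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n_claims))
    return total / n_sims
```

With, say, 0.05 claims per vehicle-year and a lognormal(μ=9, σ=1) severity, the simulated pure premium converges on the analytic value λ·exp(μ + σ²/2) ≈ $668 per vehicle-year. The machinery is trivial; the hard part is that for physical AI, none of the three inputs has been observed.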

Insurers are aware of the problem. As S&P Global noted in its 2025 analysis, the core actuarial challenge is that "behavioral and mobility data become especially valuable to fill the gap" — a generous framing of what is effectively a confession that there is no gap-filling happening yet at any meaningful scale. OEMs retain control over vehicle logs and sensor data. Insurers are not standardly requiring operational telemetry as a condition of underwriting. The result is that premium pricing for physical AI is, right now, a function of educated extrapolation layered over incomplete analogues — not observed loss experience.

QED Investors' analysis of robot liability frames the data scarcity problem bluntly: "Insufficient historical data makes actuarial modeling nearly impossible; last year's technology-based models become obsolete quickly." Autonomous systems that learn and adapt compound this further. A robotaxi fleet in January 2025 is not the same risk object as that same fleet in January 2026, because both the hardware configuration and the software decision-making architecture will have changed through iterative updates. Static actuarial tables cannot model a risk that reinvents itself through continuous deployment.

Policy Language Built for Human Operators Breaks Down the Moment the Operator Is an Algorithm

The policy language problem is distinct from the actuarial problem, and arguably more urgent. Even if an insurer could accurately price the risk, the contracts they would use to transfer it contain definitions, exclusions, and liability triggers that were written with human agency as the foundational assumption.

Consider a standard commercial auto policy's "operator" definition. It assumes a legal person who made deliberate driving decisions, can be questioned, can provide a statement, and whose license and training history can be audited. A Waymo vehicle has none of these properties. California requires autonomous rideshare operators to carry $5 million in liability coverage, but the policy structures filling that requirement are assemblages of commercial auto, product liability, and cyber coverage that were never designed to interoperate for a single accident event involving algorithmic fault.

The cyber coverage gap is particularly acute. QED's analysis of the St. Jude cardiac device recall illustrates the structural flaw: cyber insurance covers data breaches but not physical damage caused by compromised systems. A robotaxi hacked mid-ride, or a warehouse humanoid robot that causes injury after receiving a malicious over-the-air update, falls into coverage territory that no single existing policy line was designed to occupy. Modern autonomous vehicles contain approximately 100 million lines of code; a software fault causing physical harm sits somewhere between a product defect, a cyber event, and an operational failure — and current policies treat those as mutually exclusive categories.

The Three-Party Liability Tangle: Where Manufacturer, Deployer, and Operator Gaps Collide

Physical AI claims present a multi-defendant structure that existing coverage silos cannot cleanly resolve. When the self-driving Uber fatally struck a pedestrian in Tempe, Arizona in 2018, the subsequent litigation demonstrated exactly how badly traditional liability assignment frameworks break down: was the fault with Uber as operator, Volvo as manufacturer, the Uber safety driver, or the autonomy software? The case ultimately settled before producing the judicial clarity the industry needed, leaving the underlying attribution question unanswered.

Waymo operates as a vertically integrated deployer, owning both the vehicle and the full autonomy stack, which at least concentrates liability into a single entity. The broader physical AI ecosystem is far messier. Specialist robotics insurance firms like AXIS and Founder Shield are beginning to write policies for embodied AI systems, but the standard market approach remains a patchwork of general liability, product liability, and professional liability coverage that requires courts — not policies — to determine which layer responds to a given claim. That is not an insurance market; it is a litigation pipeline with a coverage illusion attached.

As Deloitte's AV insurance analysis projects, claims will increasingly migrate from commercial auto lines to product and professional liability as autonomy levels rise — creating a $7 billion annual premium shift in commercial auto alone if just 20% of that exposure reclassifies. Insurers sitting heavily on commercial auto books without product liability expertise are structurally exposed to that migration.
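The arithmetic behind that projected shift is simple line reclassification. A sketch, where the commercial auto book size is not stated in the article and is assumed here purely to make the figures reconcile:

```python
def premium_migration(book_by_line, reclassified_share):
    """Move a share of the commercial auto book into product/professional
    liability, modeling claims that reclassify as autonomy levels rise."""
    shifted = book_by_line["commercial_auto"] * reclassified_share
    migrated = dict(book_by_line)
    migrated["commercial_auto"] -= shifted
    migrated["product_liability"] += shifted
    return migrated, shifted

# Hypothetical book: a $35B commercial auto line is what the article's figures
# imply, since $35B x 20% = $7B. The product liability figure is invented.
book = {"commercial_auto": 35e9, "product_liability": 12e9}
new_book, shifted = premium_migration(book, 0.20)
```

The point of spelling it out is the asymmetry it exposes: the premium leaves one line's book and lands in another line's, and a carrier concentrated in the losing line absorbs the full shift with none of the offsetting growth.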

The Premium Paradox: Why Correct Pricing Could Kill the Market

The deepest structural tension in physical AI insurance is the pricing dilemma. Actuarially sound underwriting of an unobserved, catastrophic-tail risk would require premium loads that no commercial robotaxi operator, warehouse robotics deployer, or humanoid robot lessee could absorb without fundamentally altering their unit economics. EY's own projections suggest auto premiums could ultimately fall 30-50% as autonomous systems reduce accident frequency — but that is a long-run equilibrium prediction, not a current-market reality. In the interim, insurers pricing for unknown tail risk while sitting on thin loss data have every actuarial incentive to over-reserve, and every competitive incentive to under-price.

Lemonade's recent launch of usage-based policies that adjust premiums based on whether a vehicle is in autonomous versus human-driven mode represents a genuine structural innovation, but it addresses frequency, not severity. The catastrophic loss scenario for physical AI — a fleet-wide software fault causing simultaneous multi-vehicle incidents, a humanoid robot line in a manufacturing facility that fails systemically — has no analog in any existing loss database from which to derive severity reserves. Insurers charging market-rate premiums for this exposure are, by definition, underreserved against the tail.
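A dual-mode usage-based policy of the kind described reduces, at its core, to pricing each mile at the rate of the mode it was driven in. A minimal sketch (the rates and structure are hypothetical, not Lemonade's actual product), which also makes the limitation visible: the mechanism tracks frequency-weighted exposure mile by mile, but no per-mile rate captures a correlated fleet-wide severity event.

```python
def dual_mode_premium(miles_by_mode, rate_per_mile_by_mode):
    """Usage-based premium: each mile is charged at the per-mile rate of the
    operating mode (autonomous vs. human) it was actually driven in."""
    return sum(miles_by_mode[mode] * rate_per_mile_by_mode[mode]
               for mode in miles_by_mode)

# Hypothetical rates: autonomous miles priced below human miles on the
# frequency-reduction thesis. A fleet-wide software fault is a severity
# event that this per-mile structure cannot see.
rates = {"autonomous": 0.04, "human": 0.10}
monthly = dual_mode_premium({"autonomous": 900, "human": 100}, rates)
```

Here 900 autonomous miles and 100 human miles in a month price out at $46 rather than the $100 a flat human-mode rate would charge, which is exactly the frequency story and only the frequency story.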

What a Functional Physical AI Insurance Market Would Actually Require

A market that can actually price and transfer physical AI risk at scale needs three things that do not yet exist in sufficient form: mandatory telemetry data-sharing as a condition of insurability, policy language that treats physical AI systems as integrated risk objects rather than forcing claims into human-analog categories, and regulatory frameworks that assign liability proportionally across the manufacturer-software-deployer chain before losses occur rather than after.
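The second and third requirements imply premiums that respond to software change itself. One way to sketch that (the structure and parameters are assumptions for illustration, not an existing product) is a novelty surcharge that resets on each over-the-air update and decays as telemetry-verified exposure accrues on the new software version:

```python
def version_adjusted_rate(base_rate, miles_on_current_version,
                          credibility_miles=1_000_000, novelty_load=0.5):
    """After an over-the-air update the fleet is partly a new risk object:
    apply a surcharge that decays linearly toward zero as telemetry-verified
    miles accrue on the current software version."""
    credibility = min(1.0, miles_on_current_version / credibility_miles)
    return base_rate * (1.0 + novelty_load * (1.0 - credibility))
```

Under these assumed parameters, a fleet pays a 50% load the day a new version ships and returns to the base rate once a million verified miles have accumulated on it. The design point is that it makes the telemetry-sharing requirement self-enforcing: without the data feed, the deployer can never earn the credibility discount.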

Swiss Re and Verisk have recognized that robots fundamentally reshape commercial liability, workers' compensation, and property insurance simultaneously. That recognition has not yet translated into coordinated product development at the pace the deployment curve demands. Wiley's 2026 analysis of state AI liability bills documents a patchwork of emerging state-level frameworks that will create jurisdiction-by-jurisdiction liability variation — exactly the wrong structure for a national robotaxi network or a global humanoid robot supply chain.

The honest assessment is that the insurance industry is currently subsidizing physical AI adoption by undercharging for risks it cannot fully model. That subsidy will persist until a cluster of large, unambiguous physical AI losses forces loss experience into the actuarial record. At Waymo's current growth trajectory — targeting 1 million weekly rides by end of 2026 — that record is accumulating. The question is whether the industry builds the analytical infrastructure to use it before the losses outpace the premiums.

Frequently Asked Questions

Why can't insurers simply apply existing commercial auto policy frameworks to autonomous vehicles?

Commercial auto policies are architected around human operator liability, using driver history, licensing status, and behavioral patterns as the core underwriting variables. The NHTSA has documented that 94% of motor vehicle accidents involve driver-related factors, meaning virtually all historical loss data is derived from human-caused incidents that have no direct actuarial relevance to autonomous system failure modes. Physical AI introduces new causal categories — software faults, sensor degradation, adversarial cyber events, and over-the-air update errors — that existing commercial auto definitions simply don't contemplate.

Who currently holds the liability when a robotaxi causes an accident?

The answer varies by operator structure and jurisdiction. Waymo, as a vertically integrated operator owning both vehicle and autonomy stack, effectively assumes primary liability for its system's driving decisions; California mandates $5 million in liability coverage for autonomous rideshare operators. In less vertically integrated deployments — where a third-party software platform runs on a vehicle manufactured by someone else and deployed by a separate fleet operator — liability attribution requires determining whether the fault was a hardware defect, a software error, or an operational decision, a question current policy language has no clean mechanism to resolve without litigation.

What is the 'premium paradox' facing physical AI insurers?

Insurers underwriting novel catastrophic-tail risks with no observed loss history face an insoluble tension: actuarially sound pricing requires loading premiums for unknown severity scenarios that would make physical AI commercially unviable, while competitive market pricing produces premiums that are demonstrably underreserved against tail events. [EY projects](https://www.ey.com/en_us/insights/insurance/the-age-of-autonomous-technologies-in-insurance) that AV adoption could eventually cut auto premiums 30-50% through reduced accident frequency, but that long-run projection doesn't help carriers pricing risk in a market where loss experience is still being established in real time.

How does cyber insurance interact with physical AI liability coverage?

It doesn't, at least not cleanly. Standard cyber policies cover data breaches and business interruption from network events but explicitly exclude physical bodily injury and property damage. Physical AI systems where a cybersecurity compromise causes physical harm — a hacked autonomous vehicle, a warehouse robot receiving a malicious software update — fall into a coverage gap between cyber, general liability, and product liability policies that were designed as separate risk silos. [QED Investors' analysis](https://www.qedinvestors.com/blog/when-robots-go-haywire-who-picks-up-the-tab) identifies this gap as one of the most structurally dangerous in the entire physical AI insurance landscape.

What would a properly structured physical AI insurance product actually look like?

Effective physical AI coverage would require mandatory operational telemetry sharing as an underwriting condition, integrated policy language that treats the manufacturer-software-deployer chain as a single insured unit with clearly defined contribution triggers, and dynamic premium adjustment mechanisms tied to software versioning and system capability changes. [Specialist insurers like AXIS and Founder Shield](https://sixdegreesofrobotics.substack.com/p/the-coming-insurance-layer-in-robotics) are developing robotics-specific products, but market-wide standardization of physical AI policy terms — analogous to the ISO Commercial Lines standardization that made modern commercial auto markets function — remains years away.

More from InsurTech & AI

  • Hiscox Cut Its Quote Cycle by 99.4%. That Number Is Hiding the Governance Crisis That Hits When Insurance AI Goes Enterprise-Wide
  • Carriers Can't Tell the Difference Between a Real AI Partner and a Rebranded SaaS Vendor — InsurTech's $10 Billion Identity Problem
  • Beyond the Robot Car: Why Industrial AI and Embodied Agents Are the Liability Crisis Insurers Aren't Pricing For