What I’ve Learned Designing AI Tools for Auto Insurance

When people think of AI in insurance, they often imagine automation taking over entire processes, replacing humans with algorithms. But that’s not how I approach it. As a product designer working on ClaimsAgent, an AI-powered assistant for auto insurance adjusters, I focused on a different question: how can AI support adjusters rather than replace them? Auto claims are messy. No two accidents are alike, and adjusters often face a flood of documents, deadlines, and constant toggling between systems. AI can ease that load, but only if we design it right.

Start with the Adjuster, Not the Algorithm

I’ve seen this happen more than once: a team gets excited about what the AI can do, then starts looking for a use case to justify it. That’s a shortcut to nowhere. When we started designing ClaimsAgent, we didn’t begin with the model — we started with people. We spent time with adjusters, listening closely to what slowed them down, what felt frustrating, and what consumed too much of their day.

Interestingly, no one said, “I need AI.” What we heard instead were things like: “It takes forever to write up notes.” “I’m constantly switching between tools just to find one detail.” “I’m doing the same repetitive tasks over and over.” That shaped everything. We weren’t trying to showcase the latest tech. We were focused on how AI could quietly relieve pressure and free up time for the higher-value parts of their work. The AI wasn’t the hero — the adjuster was.

We also created user journey maps to pinpoint the moments where delays and frustration peaked. Those pain points gave us our design priorities. Rather than inserting AI into the process at random, we built features where support was naturally needed: handling repetitive note-taking, summarizing long documents, and suggesting next steps during a claim. This way, the technology felt like a natural extension of the workflow, not a foreign add-on.

Build for Trust, Not Just Accuracy

Even if AI is technically accurate, users won’t rely on it if they don’t trust it — and that’s especially true in insurance. Claims adjusters make decisions that directly impact people’s lives and financial outcomes. If they can’t trust what they’re seeing, they’ll fall back on their own judgment.

That’s why we made transparency a non-negotiable in our design. When the AI generated a note or suggestion, we didn’t just present a final result — we showed the supporting details. Adjusters could see what was referenced, what patterns were detected, and how confident the AI was in its suggestion. Most importantly, they had complete control to edit, accept, or ignore those suggestions. That kind of flexibility reinforced a crucial message: “This tool is here to support your decision-making — not override it.” We saw adoption increase because people understood how the system worked — and felt comfortable using it.
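
To make that concrete, here is a minimal sketch in TypeScript of what a transparent suggestion payload could look like. The names (`Suggestion`, `SourceRef`, and so on) are hypothetical, not taken from ClaimsAgent itself; the point is that the draft never travels alone: it carries its evidence, its detected patterns, its confidence, and the adjuster’s explicit decision.

```typescript
// A hypothetical shape for a transparent AI suggestion.
// The draft never arrives alone: it carries the records it was
// built from, the patterns the model flagged, and a confidence score.
interface SourceRef {
  documentId: string; // e.g. a police report or repair estimate
  excerpt: string;    // the passage the suggestion relied on
}

interface Suggestion {
  draftText: string;          // the AI-generated note or next step
  sources: SourceRef[];       // what was referenced
  detectedPatterns: string[]; // e.g. "rear-end collision", "prior claim on vehicle"
  confidence: number;         // 0..1, shown to the adjuster, never hidden
}

// The adjuster stays in charge: every suggestion resolves to an
// explicit human decision, never a silent auto-apply.
type Review =
  | { action: "accept"; suggestion: Suggestion }
  | { action: "edit"; suggestion: Suggestion; editedText: string }
  | { action: "ignore"; suggestion: Suggestion };
```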

We also included a feature where users could click on highlighted terms in the AI summary to see the source data behind it. This made it feel less like a black box and more like a smart assistant — one that could be challenged, verified, and trusted over time.
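
Continuing that hypothetical sketch, the click-through can be as simple as a lookup from a highlighted term to the excerpt it came from. The data and names below are invented for illustration.

```typescript
// Hypothetical continuation: map highlighted terms in the AI
// summary back to the source data they came from.
interface Highlight {
  term: string;     // the phrase highlighted in the summary
  sourceId: string; // key into the claim's source records
}

// Per-claim index from source id to the underlying excerpt.
const sourceIndex = new Map<string, string>([
  ["doc-14:p2", "Vehicle was struck from behind while stationary."],
]);

// Clicking a highlight resolves to evidence, or to an explicit
// "no source found", never to an invented citation.
function resolveHighlight(h: Highlight): string {
  return sourceIndex.get(h.sourceId) ?? "No source found for this term.";
}
```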

Co-Pilot, Not Auto-Pilot

One of our design principles was: co-pilot, not auto-pilot. Insurance is a judgment-heavy space. Claims aren’t uniform, and there’s often nuance that only a human can detect. So instead of handing off decision-making to the AI, we built a system that worked alongside the adjuster.

We designed microinteractions that allowed for one-click edits, smart defaults, and quick undo functions. We also gave adjusters options to customize how they interacted with the assistant. These weren’t just UX niceties. They were thoughtful signals of respect — reminders that the human is still in charge. And that made all the difference.
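
As a small illustration of the undo mechanic, here is a hedged sketch with hypothetical names: every change the assistant applies is pushed onto a history stack, so reversing it is one click rather than a rollback request.

```typescript
// Hypothetical undo history: every change the assistant applies
// is recorded so the adjuster can reverse it instantly.
interface NoteChange {
  before: string;
  after: string;
}

const history: NoteChange[] = [];

// Applying an AI edit records the prior state first.
function applyAIEdit(current: string, suggested: string): string {
  history.push({ before: current, after: suggested });
  return suggested;
}

// Undo restores the previous state; with nothing to undo, it is a no-op.
function undo(current: string): string {
  const last = history.pop();
  return last ? last.before : current;
}
```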

We even included preference settings that allowed users to define how assertive they wanted the AI to be — some preferred gentle nudges, while others wanted the system to take the first stab at drafting content. These choices helped adjusters feel that the tool worked for them, not the other way around.
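
One way to picture that setting, as a sketch with invented names rather than ClaimsAgent’s actual configuration: a single assertiveness preference gates how the same model output is presented.

```typescript
// Hypothetical assertiveness setting: the same model output is
// presented differently depending on how much initiative the
// adjuster has asked the assistant to take.
type Assertiveness = "nudge" | "suggest" | "draft";

function present(output: string, pref: Assertiveness): string {
  switch (pref) {
    case "nudge":   // a gentle pointer, no prewritten text
      return "You may want to add a note about this claim.";
    case "suggest": // the text is offered, but clearly optional
      return `Would you like to use this draft?\n\n${output}`;
    case "draft":   // the assistant takes the first stab
      return output; // inserted into the editor, still fully editable
  }
}
```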

Don’t Assume — Test

One of the biggest lessons we learned? You can’t predict how users will react to AI. Our first version of the assistant had all the right functionality — summarizing notes, tagging content, recommending next steps. But in usability testing, we got surprising feedback. One tester said, “It feels too aggressive. Like it’s trying to run the show.”

That sparked a major shift. We adjusted the tone of prompts, made them more invitational (“Would you like to use this draft?”), and visually downplayed the AI’s presence. These small changes created a big shift in how the tool was perceived: from bossy to helpful. Perception is everything, and you only get it right by testing early and often.

We also ran scenario-based testing where adjusters worked through complex claim types, giving us deeper insight into how the AI behaved in edge cases. The result was a more adaptive and flexible assistant that held up across a wide range of real-world situations.

Context Is the Dealbreaker

Great AI is context-aware; poorly designed AI feels like it’s just guessing. In auto insurance, context is everything. A simple fender-bender shouldn’t be handled like a total loss, and details such as policy limits, prior interactions, and vehicle type all change what the right next step is.

That’s why we worked closely with engineering to ensure the AI surfaced the right data at the right time. We didn’t overload the UI with data. Instead, we designed for just-in-time context — so the adjuster always had what they needed in the moment.
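
As a rough sketch of just-in-time context (the stage names and fields below are invented for illustration), the claim’s current stage selects which fields the panel surfaces, instead of dumping the whole record into the UI.

```typescript
// Hypothetical just-in-time context: each claim stage surfaces
// only the fields relevant at that moment, not the full record.
type ClaimStage = "intake" | "estimate" | "settlement";

const contextFields: Record<ClaimStage, string[]> = {
  intake:     ["policyLimits", "vehicleType", "priorInteractions"],
  estimate:   ["repairEstimates", "vehicleType", "photos"],
  settlement: ["policyLimits", "totalLossThreshold", "paymentHistory"],
};

function contextFor(stage: ClaimStage): string[] {
  return contextFields[stage];
}
```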

“Context is what makes AI feel smart — not just more data.” When AI understands the nuances of the environment, it becomes a true assistant. Not a robot — a partner.

We also built in contextual guardrails — for instance, if a claim lacked a necessary document, the AI wouldn’t offer incomplete advice. These boundaries helped the tool feel more reliable and prevented risky assumptions.
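
A minimal sketch of such a guardrail, again with hypothetical names: before offering advice, the assistant checks the claim against a required-documents list and declines explicitly when something is missing.

```typescript
// Hypothetical guardrail: no advice without the documents to back it.
interface Claim {
  id: string;
  documents: string[]; // document types present on the claim
}

const REQUIRED_DOCS = ["policeReport", "repairEstimate"];

type Advice =
  | { kind: "suggestion"; text: string }
  | { kind: "blocked"; missing: string[] }; // an explicit, visible refusal

function advise(claim: Claim, draft: string): Advice {
  const missing = REQUIRED_DOCS.filter(d => !claim.documents.includes(d));
  if (missing.length > 0) {
    // Better no advice than incomplete advice.
    return { kind: "blocked", missing };
  }
  return { kind: "suggestion", text: draft };
}
```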

So, What’s the Takeaway?

If you want AI to work in high-stakes, high-complexity spaces like insurance, you have to lead with empathy. Listen deeply. Design intentionally. Focus on clarity, not complexity.

ClaimsAgent succeeded not because of what the AI could do — but because of how we designed around it:

  • We started with the adjusters.
  • We prioritized trust.
  • We preserved control.
  • We tested with real users.
  • We designed for context.

That’s what made the difference. That’s how you build AI that helps people — and earns their trust along the way.