
When people think of AI in insurance, they often imagine automation taking over entire processes, replacing humans with algorithms. But that’s not how I approach it. As a product designer working on ClaimsAgent, an AI-powered assistant for auto insurance adjusters, I focused on a different question: How can AI support adjusters, rather than replace them?

Auto claims are messy. No two accidents are alike, and adjusters often face a flood of documents, tight deadlines, and constant toggling between systems. AI has the potential to ease that load, but only if we design it right. Here’s what I’ve learned about designing AI in this space:
Start with the Adjuster, Not the Algorithm
I’ve seen this happen more than once: a team gets excited about what the AI can do, then starts looking for a use case to justify it. That’s a shortcut to nowhere. When we started designing ClaimsAgent, we didn’t begin with the model—we started with people. We spent time with adjusters, listening closely to what slowed them down, what felt frustrating, and what consumed too much of their day.
Interestingly, no one said, “I need AI.” What we heard instead were things like: “It takes forever to write up notes,” “I’m constantly switching between tools just to find one detail,” and “I’m doing the same repetitive tasks over and over.” That’s what shaped our design direction. AI wasn’t the headline feature; it was the helper working quietly behind the scenes to relieve the pressure. Our goal was to solve real problems, so adjusters could focus on the higher-value parts of their work.
Build for Trust, Not Just Accuracy
Even if AI is technically accurate, users won’t rely on it if they don’t trust it—and that’s especially true in insurance. Claims adjusters are making decisions that directly impact people’s lives and financial outcomes. If they can’t trust what they’re seeing, they’ll default to their own judgment.
So we made transparency a design priority in ClaimsAgent. Whenever the AI generated a note or suggestion, we didn’t just show the end result—we surfaced the source. Adjusters could see which parts of the file were referenced and what the model’s confidence level was. Most importantly, we gave them full control to refine, edit, or reject anything the AI produced.
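To make that concrete, here’s a minimal sketch of what a transparent suggestion payload could look like. The field names (draftText, confidence, sourceRefs) and the shape of the record are my illustration, not ClaimsAgent’s actual schema:

```typescript
// Sketch of a transparent AI suggestion. All names here are
// illustrative, not the product's real data model.
interface SourceRef {
  documentId: string; // which file in the claim was referenced
  excerpt: string;    // the passage the model drew on
}

type SuggestionStatus = "pending" | "accepted" | "edited" | "rejected";

interface AiSuggestion {
  draftText: string;        // what the model produced
  confidence: number;       // 0 to 1, always shown to the adjuster
  sourceRefs: SourceRef[];  // every claim-file passage the draft cites
  status: SuggestionStatus; // the adjuster, not the model, sets this
}

// Only an explicit adjuster decision changes the suggestion's state.
function resolveSuggestion(
  suggestion: AiSuggestion,
  decision: Exclude<SuggestionStatus, "pending">,
  editedText?: string
): AiSuggestion {
  return {
    ...suggestion,
    status: decision,
    draftText:
      decision === "edited" && editedText ? editedText : suggestion.draftText,
  };
}
```

The point of a shape like this is that transparency isn’t bolted on: the sources and confidence travel with every suggestion, and nothing is final until the adjuster resolves it.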
“If people don’t trust the AI, they won’t use it — no matter how accurate it is.”
That sense of control mattered. It reinforced the message: “This tool is here to support your decision-making, not override it.” That’s what helped us build trust and drive adoption.
Co-Pilot, Not Auto-Pilot
One of the core ideas we rallied around as a design team was simple: co-pilot, not auto-pilot. That mindset shaped how we approached every interaction in the tool. Insurance is a nuanced space. No two claims are the same, and you can’t hand off control to a system and expect consistently good outcomes.
So we designed for assistive, not autonomous AI. We built in microinteractions—quick edits, undo options, opt-in controls—so adjusters could steer the experience based on what felt right for the moment. We even let users set preferences for how detailed or formal the AI’s responses should be.
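Those controls can be surprisingly lightweight. Here’s one way the preference model might look, as a sketch; the keys and defaults below are assumptions for illustration, not the shipped settings:

```typescript
// Hypothetical per-adjuster assistant preferences; the keys and
// defaults are illustrative, not the real settings model.
interface AssistantPreferences {
  detailLevel: "brief" | "standard" | "detailed";
  tone: "formal" | "conversational";
  autoSuggest: boolean; // opt-in: the assistant stays quiet unless enabled
}

const DEFAULT_PREFERENCES: AssistantPreferences = {
  detailLevel: "standard",
  tone: "conversational",
  autoSuggest: false, // assistive by default: nothing appears uninvited
};

// Merge an adjuster's explicit choices over the conservative defaults.
function withPreferences(
  overrides: Partial<AssistantPreferences>
): AssistantPreferences {
  return { ...DEFAULT_PREFERENCES, ...overrides };
}
```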
These weren’t just UX niceties—they were signals of respect. Signals that said, “You’re still the expert here.” That’s what made adjusters feel comfortable, and ultimately, it’s what made them stick with the tool.
Don’t Assume — Test
We had a version of the AI assistant that looked great on paper. It did everything we wanted: auto-summarized notes, classified documents, and prompted next steps. But when we tested it with real users, the response was lukewarm at best. One quote stuck with me: “It’s too aggressive. It feels like it’s trying to take over.”
That was our signal to course-correct. We softened the tone, rewrote prompts to sound more collaborative (“Would you like to use this draft?” instead of “Here’s your summary”), and visually toned down the AI’s presence in the UI. These changes might sound small, but they had a huge impact on perception—and adoption.
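One way to keep that tone consistent is to treat the AI’s microcopy as data the whole team can review in one place. In the sketch below, the first pair is the actual before/after wording from our course-correction; the second pair is an invented example of the same directive-to-collaborative rewrite:

```typescript
// AI microcopy as reviewable data. "directive" is the phrasing that
// tested poorly; "collaborative" is what replaced it.
const aiCopy = {
  summaryReady: {
    directive: "Here's your summary.",
    collaborative: "Would you like to use this draft?",
  },
  // Hypothetical entry, illustrating the same pattern applied
  // elsewhere in the flow.
  nextStep: {
    directive: "Request the police report now.",
    collaborative: "It might be time to request the police report. Want me to add it to your tasks?",
  },
} as const;
```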
The big lesson here? Perception matters. You can have the smartest AI in the world, but if it doesn’t feel helpful or approachable, it won’t get used.
Context Is the Dealbreaker
One of the most critical things we did was ensure the AI understood the environment it was working in. Auto insurance claims aren’t one-size-fits-all. A fender bender is nothing like a total loss, and the supporting data for each can vary wildly.
We made sure ClaimsAgent pulled in the right context at the right time—things like claim history, policy coverage, customer interactions, and even photos or adjuster notes. And we didn’t flood users with all the data at once. We focused on surfacing just what was relevant for the task at hand.
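Here’s a minimal sketch of that idea: task-scoped context selection, where each task surfaces only a few slices of the claim. The task names, the slices, and the mapping are all assumptions for illustration:

```typescript
// Sketch of task-scoped context selection. The task names, slices,
// and mapping are assumptions for illustration only.
type TaskType = "draftNote" | "estimateReview" | "coverageCheck";

interface ClaimContext {
  claimHistory: string[];
  policyCoverage: string[];
  customerInteractions: string[];
  photos: string[];        // e.g., damage photo captions or references
  adjusterNotes: string[];
}

// Each task surfaces only the slices an adjuster needs right then;
// everything else stays out of the way.
const relevantSlices: Record<TaskType, (keyof ClaimContext)[]> = {
  draftNote: ["claimHistory", "customerInteractions", "adjusterNotes"],
  estimateReview: ["photos", "adjusterNotes", "claimHistory"],
  coverageCheck: ["policyCoverage", "claimHistory"],
};

function selectContext(task: TaskType, ctx: ClaimContext): Partial<ClaimContext> {
  const selected: Partial<ClaimContext> = {};
  for (const key of relevantSlices[task]) {
    selected[key] = ctx[key]; // every slice is string[], so this is safe
  }
  return selected;
}
```

The design choice worth noting is the explicit mapping: deciding what’s relevant per task is a product decision, kept visible and editable, rather than something buried in a model prompt.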
“Context is what makes AI feel smart — not just more data.”
That kind of thoughtful context design is what made the AI feel useful. It wasn’t just generating output—it was showing up at the right moment with the right information. That’s what created real value.
So, What’s the Takeaway?
Designing AI for the insurance industry isn’t just about cleaner interfaces or faster workflows. It’s about building tools that respect the complexity of the work and the people doing it.
What made ClaimsAgent successful wasn’t the underlying model. It was the way we designed around it.
We stayed grounded in human needs. We prioritized trust. We gave people control. We listened to feedback. We designed for context.
That’s what turns AI from a buzzword into a truly useful assistant.