How Do You Design for AI? Hint: You Don’t Start with the AI.

There’s a lot of hype around AI right now — and with good reason. The tools are getting smarter, the use cases are multiplying, and every industry is asking the same question: “How do we use this responsibly?” But when it comes to applying AI in real-world settings like auto insurance, I’ve learned the question needs to shift. It’s not just “what can this AI do?” It’s “how do we design this so it actually helps people do their work better?”

I had the opportunity to answer that firsthand while leading design for ClaimsAgent, an AI-powered assistant for insurance adjusters. What follows are some of the most important lessons I learned about designing AI that’s not just powerful — but genuinely useful, trustworthy, and human-centered.

Start with the Adjuster, Not the Algorithm

One of the biggest missteps I see when teams approach AI is starting with the tech. It usually begins with curiosity or pressure: “We have access to GPT — what can we do with it?” But flipping that question changes everything. Instead of asking what the AI can do, we asked, “What’s actually hard about being an insurance adjuster today?”

That led us to ClaimsAgent, an AI assistant designed to reduce administrative burden and improve decision-making. But we didn’t start with AI. We started by watching and listening to the people doing the job. Claims adjusters are often managing multiple tools, documents, policies, and deadlines at once. They’re under pressure to be fast — but also precise. And most of them weren’t saying, “We need smarter tech.” They were saying things like, “It takes forever to write up notes,” or “Switching between systems is exhausting.”

Those insights reframed our role as designers. We weren’t there to introduce AI. We were there to reduce friction — and AI just happened to be one of the ways we could do that. By focusing on real-world tasks, not theoretical capabilities, we kept our design grounded in what mattered most: making people’s work easier and more intuitive.

Build for Trust, Not Just Accuracy

An AI can be technically correct and still not be trusted. In a regulated industry like insurance, where errors can lead to financial loss, legal trouble, or customer complaints, trust is everything.

In ClaimsAgent, we realized early on that users were less interested in “how smart” the AI was, and more interested in whether they could understand and control it. That led us to build in an explainability layer — a way to show the why behind each suggestion. If the AI generated a claim summary, users could see which fields were used, what patterns it detected, and how confident it was. No guesswork. No mystery.
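To make that concrete, here’s a rough sketch of the kind of data an explanation like that might carry. The field names and structure are my own illustration, not ClaimsAgent’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape of the explanation shown next to each AI suggestion.
# Field names are illustrative, not the actual ClaimsAgent schema.
@dataclass
class SuggestionExplanation:
    source_fields: list[str]      # which claim fields the draft drew on
    detected_patterns: list[str]  # patterns the model flagged
    confidence: float             # 0.0-1.0, rendered as a percentage in the UI

@dataclass
class ClaimSummarySuggestion:
    summary_text: str
    explanation: SuggestionExplanation

suggestion = ClaimSummarySuggestion(
    summary_text="Rear-end collision, minor bumper damage, no injuries reported.",
    explanation=SuggestionExplanation(
        source_fields=["incident_description", "damage_photos", "police_report"],
        detected_patterns=["rear-end collision", "low-severity damage"],
        confidence=0.87,
    ),
)

# The interface shows the basis for the draft alongside the draft itself.
print("Based on: " + ", ".join(suggestion.explanation.source_fields))
print(f"Confidence: {suggestion.explanation.confidence:.0%}")
```

The exact fields matter less than the principle: every suggestion arrives with its own evidence attached.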

We also made sure that adjusters could easily edit, accept, or reject the output. That subtle but critical design choice gave people a sense of ownership over the process. It sent a clear message: “This tool works for you, not the other way around.”

“If people don’t trust the AI, they won’t use it — no matter how accurate it is.”

That sense of control mattered. It reinforced that the tool was there to support the adjuster’s decision-making, not override it, and that’s what helped us build trust and drive adoption.

Co-Pilot, Not Auto-Pilot

From the beginning, we knew we didn’t want ClaimsAgent to make decisions for users — we wanted it to make decisions with them.

That meant we had to get the balance right between automation and autonomy. Too much automation, and users feel like they’re losing control. Too little, and the AI stops adding value. The mantra we kept returning to was: Co-pilot, not auto-pilot. We wanted ClaimsAgent to support the adjuster, not drive the whole process.

We brought that idea to life in the details. Users could edit AI-suggested notes with a single tap. They could undo AI-generated input without friction. We introduced settings that let them control tone and length, so the AI would speak in a way that felt like an extension of their own voice.
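To sketch how that might work under the hood, here’s a simplified example of folding an adjuster’s tone and length preferences into the drafting prompt. The setting names and prompt wording are hypothetical, not the actual implementation:

```python
from dataclasses import dataclass

# Hypothetical per-adjuster settings for AI-drafted notes. The option names
# and the prompt wording are illustrative, not ClaimsAgent's real design.
@dataclass
class DraftSettings:
    tone: str = "neutral"      # e.g., "neutral", "friendly", "formal"
    length: str = "standard"   # e.g., "brief", "standard", "detailed"

def build_note_prompt(claim_facts: str, settings: DraftSettings) -> str:
    """Fold the adjuster's tone and length preferences into the drafting prompt."""
    return (
        f"Draft a {settings.length} claim note in a {settings.tone} tone. "
        "The adjuster will review this draft and may edit, accept, or reject it.\n\n"
        f"Claim facts:\n{claim_facts}"
    )

prompt = build_note_prompt(
    claim_facts="Rear-end collision on a parked vehicle, bumper damage, no injuries.",
    settings=DraftSettings(tone="friendly", length="brief"),
)
print(prompt)
```

Small levers like these are what let the AI’s voice feel like an extension of the adjuster’s own.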

These aren’t flashy features. But they’re the kind of design decisions that build long-term trust and usability. They remind the user: you’re still the expert, and this tool is here to help you — not take over.

Don’t Assume — Test

You can’t fully predict how people will react to AI until they experience it themselves. That’s why we didn’t treat usability testing as a one-time check. We treated it as a design muscle — something we came back to over and over as we refined ClaimsAgent.

In one early round of testing, we thought we had nailed it. The AI was producing high-quality summaries and suggestions. But then we sat down with real users — and the feedback was direct:

“It feels like it’s giving me orders.”
“I’m not sure I trust what it’s saying.”
“It’s too confident.”

That was our moment of clarity. We had built something functionally impressive, but emotionally off. So we went back and tuned the tone, softened the prompts, and gave the interface a less assertive presence. Instead of “Here’s your summary,” we said, “Would you like to use this draft?” That one change — from command to invitation — shifted the tone entirely.

Testing doesn’t just improve usability. It reveals emotional reactions, power dynamics, and trust barriers. And in AI design, those are just as important as accuracy or performance.

Perception matters. Even a great tool will fail if it feels wrong.

Context Is the Dealbreaker

AI is only useful if it understands where it is and what it’s doing. In ClaimsAgent, that meant more than pulling data — it meant surfacing the right data at the right time.

Auto insurance claims are complex. A small fender bender requires a different workflow, tone, and documentation than a totaled vehicle or a fraud investigation. We worked closely with engineering so that ClaimsAgent could read the room, drawing on case type, customer history, prior decisions, and policy constraints, and tailor its suggestions to those factors.

We didn’t dump all this information on the screen. Instead, we surfaced just enough context to help the adjuster stay grounded and informed without feeling overwhelmed. It was about intelligent restraint — giving the AI the awareness to be useful, without being noisy.
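As a simplified sketch of that restraint, imagine selecting only the fields that matter for a given claim type before anything reaches the model or the screen. The claim types, field names, and selection rules below are invented for illustration:

```python
# Hypothetical sketch of "intelligent restraint": surface only the context
# that's relevant to this claim type. All names and rules are illustrative.
CONTEXT_BY_CLAIM_TYPE = {
    "fender_bender": ["case_type", "damage_estimate", "policy_deductible"],
    "total_loss": ["case_type", "vehicle_valuation", "lienholder_info", "policy_limits"],
    "fraud_review": ["case_type", "prior_claims", "inconsistency_flags", "policy_constraints"],
}

def select_context(claim: dict) -> dict:
    """Return just the fields an adjuster needs for this claim type."""
    relevant_keys = CONTEXT_BY_CLAIM_TYPE.get(claim.get("case_type", ""), ["case_type"])
    return {key: claim[key] for key in relevant_keys if key in claim}

claim = {
    "case_type": "fender_bender",
    "damage_estimate": 1800,
    "policy_deductible": 500,
    "prior_claims": 2,           # in the record, but not surfaced for this workflow
    "vehicle_valuation": 14000,  # relevant only for total-loss workflows
}

print(select_context(claim))
# {'case_type': 'fender_bender', 'damage_estimate': 1800, 'policy_deductible': 500}
```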

“Context is what makes AI feel smart — not just more data.”

When an AI understands context, it starts to feel less like a tool and more like a real assistant. That’s when the experience becomes seamless — and trust becomes second nature.

So, What’s the Takeaway?

Designing AI isn’t about novelty. It’s about need. And in high-stakes industries like insurance, need starts with the people doing the work — not the technology behind it.

What made ClaimsAgent work wasn’t just the model. It was the way we designed around it. We started with real problems, embedded trust, kept people in control, tested continuously, and made sure the AI understood context. It wasn’t about replacing humans — it was about respecting them and giving them better tools to do their jobs well.

We didn’t try to make ClaimsAgent sound like a robot, or act like a genius. We just made it feel helpful.

That’s the real goal: AI that supports people. Not silently. Not magically. But thoughtfully.