How Do You Design for AI? Hint: You Don’t Start with the AI.

With all the buzz around generative AI, machine learning, and LLMs, it’s tempting to approach design with the mindset: “What can this AI do?” But when I’m designing AI-powered tools, I start with a different question: What does the user need — and where are they struggling?

AI is powerful, but it’s not magic. It’s another tool in our toolkit, one that needs careful, intentional design to deliver real value. Especially in enterprise environments, trust, clarity, and human oversight aren’t “nice-to-haves.” They’re essential. Here’s how I approach AI design through a human-centered lens, with some real-world examples to ground the thinking.


Start with the Human, Not the AI

It sounds simple, but this is where a lot of teams go off track: they start with the capabilities of the AI rather than the needs of the people.

In my work on ClaimsAgent, an AI assistant for insurance adjusters, we didn’t begin by asking, “What can generative AI do for claims?” We started by listening to the adjusters themselves.

They weren’t saying, “Give me AI.”
They were saying, “I’m tired of spending hours on documentation,” or “It takes too long to switch between systems.”

That insight shaped everything. We focused on where adjusters felt overwhelmed and how AI might relieve that burden without adding new complexity. Our goal wasn’t to replace them; it was to give them a co-pilot.

This shift in mindset — from tech-first to human-first — kept us grounded in real value.


Design for Transparency and Trust

Here’s the thing about AI: If people don’t trust it, they won’t use it. And trust isn’t built by saying, “Just trust us.” It’s built through transparency.

That’s why we integrated an explainability layer into ClaimsAgent. When the AI auto-generated claim summaries or suggested notes, we didn’t just show the result — we showed the reasoning behind it. What fields were referenced? What patterns were detected? How confident was the AI?

We also made it easy for users to edit, accept, or reject suggestions. That simple control — paired with clarity on why the AI did what it did — helped adjusters feel in charge, not sidelined.
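
To make this concrete, here’s a minimal sketch of what a suggestion with a built-in explainability layer might look like. The shape is hypothetical (it isn’t ClaimsAgent’s actual schema), but it captures the principle: every AI output carries its own “why” alongside the “what,” plus a review state that only the user can change.

```typescript
// Hypothetical shape for an AI suggestion in a claims tool.
// Every suggestion carries its reasoning, not just its output.
interface AISuggestion {
  id: string;
  text: string;                 // the generated summary or note
  sourceFields: string[];       // which claim fields were referenced
  detectedPatterns: string[];   // what patterns were detected
  confidence: number;           // 0 to 1, surfaced in the UI rather than hidden
  status: "pending" | "accepted" | "edited" | "rejected";
}

// The user, not the AI, decides what happens to every suggestion.
function review(
  s: AISuggestion,
  action: "accept" | "reject" | "edit",
  editedText?: string
): AISuggestion {
  switch (action) {
    case "accept":
      return { ...s, status: "accepted" };
    case "reject":
      return { ...s, status: "rejected" };
    case "edit":
      return { ...s, text: editedText ?? s.text, status: "edited" };
  }
}
```

Notice there’s no auto-accept path: a suggestion stays pending until a person acts on it.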

The result? We saw increased adoption and reduced hesitation from users who initially didn’t trust “black box” solutions.


Maintain Human Oversight

I believe AI should assist, not decide.

When designing AI features, one of our mantras is: Co-pilot, not auto-pilot.

This came to life through small but intentional details — like inline editing, smart defaults that could be quickly overridden, and microinteractions that reinforced the user’s control.

One feature let users edit AI-suggested text with a single tap. Another allowed undoing AI-generated input without friction. These details matter. They signal respect for the user’s judgment.
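
Here’s a minimal sketch of that undo idea, assuming a simple snapshot-based history (illustrative only, not ClaimsAgent’s real implementation). The design choice is that AI edits flow through the same history as human edits, so reversing one is ordinary, familiar, and instant:

```typescript
// Illustrative undo model: AI-generated edits go through the same
// history as human edits, so undoing them is free and familiar.
type Author = "user" | "ai";

interface Edit {
  author: Author;
  before: string; // field text before this edit
  after: string;  // field text after this edit
}

class EditableField {
  private history: Edit[] = [];
  constructor(public text: string = "") {}

  apply(after: string, author: Author): void {
    this.history.push({ author, before: this.text, after });
    this.text = after;
  }

  // One tap: revert the most recent edit, whoever made it.
  undo(): void {
    const last = this.history.pop();
    if (last) this.text = last.before;
  }
}

// Usage: the AI drafts a note, the adjuster rejects it instantly.
const note = new EditableField("Adjuster notes: ");
note.apply("Adjuster notes: Water damage confirmed on site.", "ai");
note.undo(); // back to the adjuster's own text, no friction
```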

AI should empower people to be faster and more confident — not make them feel like they’re just babysitting automation.


Iterate with Real Users

One of the biggest traps in AI design is assuming users will react a certain way. The truth?
You won’t know how people will respond until you test.

In early usability testing for ClaimsAgent, we got feedback that surprised us. Some users felt the assistant was “too assertive” — it sounded like it was giving orders, not offering help.

So we reworked the tone. We rewrote prompts to feel more collaborative: “Would you like to use this summary?” instead of “Here’s your summary.” We changed the visuals too, toning down the presence of the AI avatar.
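
One lightweight way to implement that tone shift is to keep the framing copy out of the model entirely and own it in the product layer. This is a hypothetical sketch, not how ClaimsAgent is actually wired, but it shows how the voice becomes a deliberate design decision rather than a prompt-engineering accident:

```typescript
// Hypothetical UI copy for presenting AI output. The model generates
// the content; the product owns the voice that frames it.
type Tone = "prescriptive" | "suggestive";

const framing: Record<Tone, (kind: string) => string> = {
  // What early testers pushed back on: sounds like an order.
  prescriptive: (kind) => `Here's your ${kind}.`,
  // What we moved to: an offer the user can decline.
  suggestive: (kind) => `Would you like to use this ${kind}?`,
};

function present(kind: string, generated: string, tone: Tone): string {
  return `${framing[tone](kind)}\n\n${generated}`;
}

console.log(present("summary", "Draft summary text goes here...", "suggestive"));
// -> "Would you like to use this summary?" followed by the draft
```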

The shift from prescriptive to suggestive created a better experience — and helped users feel like the AI was there to help, not to take over.


Context Is Everything

AI that ignores context isn’t helpful — it’s annoying.

Whether it’s a claims adjuster reviewing a complex case or a customer service agent replying to a complaint, AI needs to understand the user’s environment, history, and goals.

In ClaimsAgent, that meant integrating case-specific information: the type of claim, who the customer was, what previous interactions occurred, and more. We didn’t surface everything all at once — we worked closely with engineering to deliver just-in-time context.
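
As a sketch of what just-in-time context can look like in practice (all names and fields here are hypothetical): a small selector picks only the slices of the case that matter for the user’s current task, instead of dumping everything into every request.

```typescript
// Hypothetical just-in-time context selection: include only what the
// current task needs, not everything the system knows about the case.
interface CaseContext {
  claimType: string;
  customerName: string;
  priorInteractions: string[];
  policyNotes: string[];
}

type Task = "draft-summary" | "reply-to-customer";

function contextFor(task: Task, c: CaseContext): Partial<CaseContext> {
  switch (task) {
    case "draft-summary":
      // Summaries need claim facts, not the full correspondence trail.
      return { claimType: c.claimType, policyNotes: c.policyNotes };
    case "reply-to-customer":
      // Replies need the relationship history, trimmed to recent items.
      return {
        customerName: c.customerName,
        priorInteractions: c.priorInteractions.slice(-3),
      };
  }
}
```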

Done right, contextual AI feels almost invisible — it simply knows what to surface, when to surface it, and how to be helpful in the moment. That’s what creates trust and flow.


Final Thoughts: Designing for AI Is Designing for Trust

If you take away one thing from this, let it be this: AI doesn’t design itself.

It needs human guidance, human judgment, and human empathy.

Designing for AI is really about designing for people — helping them feel supported, not disrupted. It means respecting their expertise, communicating clearly, and creating interfaces that are transparent, flexible, and grounded in real-world context.

When we do that well, AI becomes more than just a tool.
It becomes a trusted partner.