The Harbormaster by Shondon

If you're involved in creating software products, you’ve probably done the reasonable, common-sense thing many times: you asked people what they wanted.

Maybe you sent a survey, asked in an interview, or dug through a pile of feature requests. It feels natural and intuitive. Who better to tell you what to build than the people who will use it?

But decades of UX research, behavioural science, and product practice point to a surprising truth: people's opinions about themselves and the things they use rarely match real behaviours.

We confidently and frequently declare tastes we don't have and predict behaviours we'll never actually carry out.

Relying on user opinions to make product decisions leads to strategic drift, fragmented user experiences, wasted resources, user churn, and competitive risk.

Why asking people for opinions feels right—but doesn't work well.

The instinct to ask people for opinions isn’t misguided exactly. It’s just incomplete, and an over-reliance on requests and opinions leads many a well-intentioned team quietly off course.

Asking people what they want is one of the most natural instincts in product work. Surveys, interviews, and feature wish lists feel accessible, social, and collaborative. They open channels to understand and empathise with the user base. They help teams feel closer to the people they serve. For teams under pressure, a stack of opinions can feel like solid data.

But this confidence breaks down when we compare what users say to what they actually do (the say-do gap).

We all want to present ourselves a certain way. We want to seem more competent than confused (social desirability bias). Our memories can be fuzzy, especially about routine tasks (recall bias). Standards for what feels “easy” or “intuitive” can vary wildly between people (reference bias).

Most human decision making happens somewhat automatically, outside of our conscious awareness, shaped by our moods and habits. This makes it hard to accurately articulate our needs, the problems we face, and the motivations guiding our choices.

The say-do gap isn't a bug, but a universal feature of human cognition. We don't have introspective access to the underlying forces that shape our actions, which makes opinions unreliable.

There's another trap baked into opinion-based research: when we ask users what they want, they naturally jump to imagined solutions that usually reflect personal taste, current workarounds, or the narrow scope of their own workflow problems.

“Add a button for X” or “let me export Y as a CSV” can be genuine signals of friction—but they’re still just guesses. Taken as-is, they can steer teams toward incremental tweaks and scattered features instead of shared problems that matter to all the user base.

This is how roadmaps fill with "faster horses", "low hanging fruit" and "quick wins": the low-impact clutter that feels like insight but rarely leads to scalable, sustainable products that produce great outcomes for both companies and customers.

All this points toward a simple principle: treat opinions as clues, not conclusions. They're useful for understanding context, language, expectations, and emotional texture, but they cannot (and must not) stand alone as a source of truth.

To understand what people actually need—and to build products that actually solve their problems—we have to look beyond what they say and pay attention to what they do.

The quiet consequences of taking opinions at face value

Opinions are anchored in past experience. People struggle to imagine new possibilities. Designing good things is actually quite hard!

Breakthrough products—from cars to Walkmans to iPhones—didn’t emerge from asking users “What should we build?” but from deeply understanding people, problems, technological opportunities, and patterns of context that customers don't know how to articulate.

When product makers anchor decisions on self-reported opinions, some predictable anti-patterns emerge:

Strategic drift

The roadmap becomes reactive, not proactive. Everything feels "validated by users", so saying no gets harder. You lose sight of outcomes while chasing outputs. Meanwhile markets shift and you're still optimising last year's workflow.

Fragmented user experience

Feature requests represent well-intentioned guesses: someone's best attempt at describing a fix from their limited perspective. Ship these literally and you accumulate scattered tweaks instead of elegant, system-level solutions. The interface bloats. The value proposition blurs.

Treat symptoms, not causes, and users inherit the complexity.

Resource waste

Research budgets fund work that misleads. Design and engineering time goes to features nobody uses after launch. Teams build what was asked for, ship it, then watch the metrics stay flat. The cost isn't just wasted effort—it's the opportunity cost of not building what would have mattered.

Measurement problems

Success becomes "we shipped what was requested", measuring outputs and mistaking them for outcomes. There's no way to learn what actually works or delivers value. Over time, costs compound, progress slows, impact becomes less clear, and the product gets shaped by noise instead of evidence.

Competitive risk

Competitors using observation-based methods ship better products. They solve deeper problems while you're still adding buttons. They innovate while you iterate. The gap widens slowly, then very fast, and by the time it's visible, you're behind.

“How did you go bankrupt?” Bill asked.
“Two ways,” Mike said. “Gradually and then suddenly.”
— Ernest Hemingway, “The Sun Also Rises”

What to do instead: observe, measure, and uncover the real problems.

The alternative isn't to stop listening to users. It's to watch what people do, measure what matters, and use what they say to add context.

Start with observation

Watch people use your product or their current workaround. Look for hesitation, what they miss or ignore, what they repeat, what they invent to get the job done.

Five short sessions will reveal more than a hundred survey responses, uncovering needs that users can't articulate simply because they don't consciously notice them.

Measure real behaviour

Analytics show what people do, not what they remember doing.

Track where users actually drop off in funnels, which features they actually use, how long tasks actually take. A/B test changes against behaviour—completion rates, error rates, return visits—not opinions. Cohort analysis shows who sticks around.

Bypass bias entirely.
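As a rough illustration of what "measure real behaviour" can mean in practice, here is a minimal Python sketch that computes step-by-step drop-off through a funnel from an event log. The event names, schema, and numbers are made-up assumptions, not a prescribed analytics setup; most teams would pull this from their analytics tool rather than hand-roll it.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, event_name) pairs in chronological order.
# The funnel step names below are illustrative assumptions, not a required schema.
FUNNEL = ["signed_up", "created_project", "invited_teammate", "exported_report"]

events = [
    ("u1", "signed_up"), ("u1", "created_project"), ("u1", "exported_report"),
    ("u2", "signed_up"), ("u2", "created_project"), ("u2", "invited_teammate"),
    ("u3", "signed_up"),
]

def funnel_dropoff(events, steps):
    """Count how many users reach each funnel step and report the drop-off."""
    seen = defaultdict(set)  # step name -> set of user_ids who performed it
    for user, event in events:
        if event in steps:
            seen[event].add(user)

    entrants = len(seen[steps[0]])
    reached_previous = None
    for step in steps:
        # A user only "reaches" a step if they also reached every earlier step.
        reached = seen[step] if reached_previous is None else seen[step] & reached_previous
        rate = len(reached) / entrants if entrants else 0.0
        print(f"{step:20s} {len(reached):4d} users  ({rate:.0%} of entrants)")
        reached_previous = reached

funnel_dropoff(events, FUNNEL)
```

The point of even a toy version like this is that it answers "where do people actually stop?" from recorded behaviour, which no survey answer can.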

Use self-reporting to understand perceptions and build context, not to collect feature requests

Interviews work better when anchored in specific recent behaviour, not hypotheticals.

Ask "Tell me about the last time you tried to do X" over "Would you use a feature that does Y?". Ask "Why does this matter to you?" over "Do you prefer option A or B?". Ask “Why?” to dig past symptoms to root causes.

Listen for the job-to-be-done, the friction they face, and the outcomes they want. And always, always, always ask why.

Treat feature requests as problem signals

Every request contains useful information, but you have to learn to look past the instruction.

Someone asking for "an export button" is telling you they need to get data elsewhere. Someone asking for "dark mode" is telling you they're using your product late at night or for long stretches and it's physically uncomfortable. Someone asking to "hide completed items" is telling you visual clutter is making it harder to focus on what still needs their attention.

Find out why they need it. Find out what they'll do with it. Find out if there's a better way to serve that need. If there's only one solution, you haven't understood the real problem yet.

Learn to say no!

Maintain a structured backlog. Prioritise based on observed patterns across many users, validated problems, and measured impact—not vote counts or volume. Synthesise what you learn into systemic solutions, not scattered features. You're designing for the crowd, not the individual.
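One way to make "observed patterns over vote counts" concrete is a RICE-style score: evidence-weighted impact per unit of effort. The sketch below is only an illustration under assumptions I've made up (the field names, the 0-3 impact scale, the example numbers); the article doesn't prescribe any particular scoring model.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    """One validated problem synthesised from research, not a single feature request."""
    name: str
    users_observed: int   # distinct users seen hitting this problem (observation, not votes)
    impact: float         # measured effect on a success metric when solved, 0-3 scale (assumption)
    confidence: float     # strength of the behavioural evidence, 0-1
    effort_weeks: float   # rough delivery estimate

def score(p: Problem) -> float:
    # RICE-style: reach x impact x confidence, divided by effort.
    return (p.users_observed * p.impact * p.confidence) / p.effort_weeks

backlog = [
    Problem("Data gets re-keyed into spreadsheets", users_observed=42, impact=2.0, confidence=0.8, effort_weeks=3),
    Problem("Completed items crowd out active work", users_observed=18, impact=1.0, confidence=0.9, effort_weeks=1),
    Problem("Export button (single loud request)", users_observed=2, impact=1.5, confidence=0.3, effort_weeks=2),
]

for p in sorted(backlog, key=score, reverse=True):
    print(f"{score(p):6.1f}  {p.name}")
```

Notice how the loudest request can land at the bottom once the score asks for observed reach and evidence rather than volume.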

Anchor your decisions in outcomes

Define what success looks like before you build. Will this change behaviour? Will it improve the metrics that matter? Build lightweight tests. Ship, measure, learn. If behaviour doesn't shift, the problem wasn't real or the solution didn't work. Either way, you learn.
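To show what "define success before you build, then check whether behaviour shifted" might look like, here is a small sketch using a two-proportion z-test on task-completion rates. The metric choice, the 1.96 threshold, and every number are illustrative assumptions, not real data or a mandated method.

```python
from math import sqrt

def completion_rate_shift(control_done, control_n, variant_done, variant_n):
    """Compare task-completion rates between control and variant with a two-proportion z-test."""
    p1, p2 = control_done / control_n, variant_done / variant_n
    pooled = (control_done + variant_done) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    return p1, p2, (p2 - p1) / se

# Hypothetical numbers. Success was defined up front as "completion rate improves,
# with |z| >= 1.96 (roughly 95% confidence)".
p1, p2, z = completion_rate_shift(control_done=130, control_n=400, variant_done=168, variant_n=410)
print(f"control {p1:.1%} -> variant {p2:.1%}, z = {z:.2f}")
print("behaviour shifted" if abs(z) >= 1.96 else "no detectable shift: rethink the problem or the solution")
```

Whether you use a test like this or simply watch a dashboard, the discipline is the same: the success criterion is written down before shipping, and the verdict comes from behaviour, not from asking people whether they liked it.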

Observe first, ask second

The goal here is not to reject user opinions.

It's to understand users better than they understand themselves—which means watching what they do, not just hearing what they say. When you make this shift, clarity follows close behind.

Your roadmap stops being a wish list and becomes a strategic tool. You build fewer features but solve bigger problems. You stop reacting to requests and start driving outcomes.

And you understand your users not because they told you what they wanted, but because you saw what they needed.