How would you improve our product?
The complete answer guide: what this question really tests, two strong example answers from different angles, the common weak answer rewritten, and the trap most candidates fall into. This is a "why this company / role" archetype question — see the broader pattern guide for the structural shape.
What this question is really testing
The interviewer isn't asking you to be a free consultant or to solve their roadmap for them. They're testing whether you can think like a PM who already works there—someone who balances user needs, business constraints, and strategic priorities without being told what those are. The specific signal they want: can you demonstrate product judgment that goes beyond surface-level feature requests? They're watching to see if you ask clarifying questions, acknowledge tradeoffs, and ground your suggestions in a hypothesis about users or the business. The binary read is harsh: either you show you can operate with incomplete information and still add value, or you reveal that you need your hand held to think strategically.
What worries interviewers most is hiring someone who will show up on day one and immediately suggest building features that sound good in a vacuum but ignore context—the technical debt, the target segment, the monetization model, the competitive position. They've seen too many PMs who treat every product like a blank canvas for their pet ideas. When you answer this question well, you're proving you won't be that person. You're demonstrating that you can absorb context quickly, prioritize ruthlessly, and think in terms of outcomes rather than outputs. The interviewer is deciding: would I trust this person to own a feature area with minimal supervision?
Two strong answers, two angles
Angle A: User research insight
"I spent about 30 minutes going through your App Store reviews and noticed a pattern—users love the core functionality but at least a dozen mentioned they couldn't figure out how to export their data in the format they needed. That suggests a potential retention risk if people hit that wall after investing time in your product. I'd want to validate this with usage data to see where drop-off happens, but if it holds up, I'd prioritize building 2-3 export templates for the most common use cases. The ROI seems high: relatively contained engineering effort that could meaningfully reduce churn for power users who are probably your highest LTV segment."
Angle B: Strategic positioning
"Looking at your competitive set, you're positioned as the premium option but your free tier seems designed to convert quickly rather than build habit. I'd experiment with extending the free tier to allow one full project completion—let users experience the entire workflow and get value before hitting a paywall. This is counter to conventional wisdom about paywalls, but if your product has strong network effects or data lock-in after that first project, you might see higher LTV from users who convert later but stick longer. I'd run this as a cohort test with 10% of new signups and measure 90-day retention and revenue per user, not just conversion rate."
The common weak answer
"I really like your product, but I think you should add dark mode and maybe some AI features since that's what everyone wants right now. Also the onboarding could be smoother—I was a little confused when I first signed up. Maybe add more integrations too, since users always want those."
This fails because it's a shopping list of generic suggestions that could apply to literally any product, with no indication that you've thought about why these features matter or what problem they solve for the business. The interviewer hears: "This person downloaded our app for 10 minutes, noticed some obvious gaps, and is now telling us things we've heard a hundred times." Worse, you're signaling that you think PM work is about collecting feature requests rather than diagnosing problems. The reframe: pick ONE of these (say, onboarding) and make it specific: "I noticed the initial setup asks for company size before explaining why that matters—I'd test moving that question after users see their first dashboard, once they understand how the product adapts to different team sizes."
The one trap most candidates fall into
The trap is criticizing the product's core experience or business model in a way that suggests the company has been fundamentally wrong about their strategy. You might think you're showing bold thinking, but what the interviewer hears is: "This person doesn't respect the years of iteration, user research, and market validation that led to our current approach." Even if you're right that their pricing is too high or their main feature is clunky, leading with that creates an adversarial dynamic. The interviewer starts defending rather than evaluating your thinking.
The sophisticated move is to assume their core decisions were right for a previous context and suggest evolution rather than revolution. Instead of "Your pricing is too complicated," try "Your pricing structure makes sense for the enterprise customers you've focused on, but if you're moving down-market like your recent blog posts suggest, you might need a simpler self-serve tier." You're showing the same analytical insight but framing it as building on their success rather than fixing their mistakes. This is especially important for PM roles because so much of the job is influencing without authority—you need to demonstrate that you can make people want to consider your ideas rather than forcing them to defend against your criticism.
Common questions
How long should my answer to "How would you improve our product?" be?
Aim for 60-120 seconds spoken (roughly 150-300 words at a natural pace). Long enough to land the observation, the reasoning, and the proposed test; short enough that the interviewer has room to follow up. Anything past two minutes risks losing them.
Should I memorize my answer word-for-word?
No — that reads as canned and falls apart the moment the interviewer asks a follow-up. Memorize the structure (the bones of the story) and the specific numbers/names that anchor it. Let the words come naturally each time.
What if I have a really good story but it was years ago?
Recent is better, but a strong story from 3 years ago beats a vague story from last quarter. If the example is older than 5 years, frame it as the moment that crystallized the lesson, then briefly bridge to how you've applied it since.
Can I use the same story for multiple questions?
Often yes — strong stories tend to demonstrate multiple competencies. The trick is reframing the angle each time. Same situation, different opening sentence: lead with the conflict for conflict questions, lead with the leadership move for leadership questions.
How do I know if my answer is actually good?
Practice it out loud and have it scored. The fastest way is a mock interview where the AI flags exactly what's vague, where you used 'we' when the question asked about 'I,' and rewrites the weakest sentence. Reading example answers helps; getting yours scored is what moves performance.