Master the frameworks, metrics, and strategies
JTBD is a framework that focuses on understanding why users choose a product. Instead of asking "what features do users want?", ask "what job is the user trying to accomplish?"
Surface need: "I need interview prep software"
JTBD: "Help me feel confident and prepared before my job interview so I can make a great impression and land the offer"
Emotional job: Reduce anxiety, build confidence
Functional job: Practice questions, receive feedback
Social job: Signal to myself that I'm taking action
How to discover JTBD:
The North Star Metric is a single, quantifiable measure that represents the core value your product delivers. It's the metric that best indicates product-market fit and future revenue.
Metric: Minutes of music played
Captures engagement and value delivery in one number (see the query sketch after these examples).
Metric: Nights booked
Direct proxy for business value and platform health.
Metric: Messages sent
Pure engagement—the more messages, the stickier the app.
Metric: Engaged users
Users who view profiles, make connections, post content.
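Pulling a North Star like the Spotify example above is usually a single query. A minimal sketch, assuming a hypothetical `playbacks` table with a `duration_seconds` column:

```sql
-- North Star for a music app: total minutes played per day
SELECT
  DATE(played_at) AS play_date,
  ROUND(SUM(duration_seconds) / 60.0) AS minutes_played
FROM playbacks
GROUP BY DATE(played_at)
ORDER BY play_date DESC;
```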
OKRs are a goal-setting framework that combines qualitative ambitions (Objectives) with measurable outcomes (Key Results).
Objective: "Become the fastest checkout experience in the market"
Key Result 1: Reduce average checkout time from 3 min to 90 seconds (70% reduction)
Key Result 2: Improve checkout conversion rate from 65% to 75%
Key Result 3: Achieve 95% customer satisfaction rating for checkout flow
Product-market fit (PMF) occurs when your product satisfies a strong market demand. Users love the product so much they'd be "devastated" if it disappeared.
Strengths: What does your product do better than alternatives? (unique tech, team expertise, brand loyalty)
Weaknesses: What are you missing? (limited resources, feature gaps, slow execution)
Opportunities: What markets or features can you expand into? (new geographies, user segments, adjacent products)
Threats: What could hurt you? (competitors, market disruption, regulatory changes)
Competitive Rivalry: How intense is competition? High = low margins, more innovation required
Threat of New Entrants: How easy is it for new competitors to enter? High barriers = defensible market
Bargaining Power of Buyers: How much control do customers have? Strong buyers = must deliver value
Bargaining Power of Suppliers: How dependent are you on suppliers? Strong suppliers = higher costs
Threat of Substitutes: Are there alternative solutions? Strong substitutes = lower pricing power
RICE helps you quantitatively score features and rank them by impact potential. The formula balances reach, impact, and confidence against effort (a scoring sketch follows the definitions below).
RICE Score = (Reach × Impact × Confidence) / Effort
Reach: How many users will this affect in a given time period (usually per quarter)?
Impact: How much will this move the needle for each user? Scale: 3x (massive), 2x (high), 1x (medium), 0.5x (low), 0.25x (minimal)
Confidence: How confident are you in your reach and impact estimates? Express as percentage (100%, 80%, 50%, etc.)
Effort: How much engineering time required in person-weeks? Estimate high to account for testing, deployment, monitoring
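If your backlog lives in a database (or a spreadsheet exported to one), the scoring is one query. A minimal sketch, assuming a hypothetical `backlog` table with `reach`, `impact`, `confidence` (stored as a decimal, e.g. 0.8 for 80%), and `effort` columns:

```sql
-- RICE = (reach * impact * confidence) / effort, highest score first
SELECT
  feature,
  ROUND((reach * impact * confidence / effort)::numeric, 1) AS rice_score
FROM backlog
ORDER BY rice_score DESC;
```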
MoSCoW is a simple qualitative framework for categorizing features by priority—useful for sprint planning and stakeholder communication.
Non-negotiable. Product doesn't work without these. Ship in the current release.
Example: Login functionality, payment processing
Important but not critical. Include if time permits. If not, include in next release.
Example: Password reset, user profile editing
Nice-to-have. Low priority. Include only if time/resources remain.
Example: Dark mode, advanced filters
Explicitly deprioritized. Out of scope for current cycle.
Example: International expansion, API access
The Kano Model maps features to user satisfaction. It reveals which features delight, which are expected, and which are indifferent.
Basic (Threshold) Attributes: Expected by users. Absence causes dissatisfaction, but presence doesn't create satisfaction. Asymmetric, not linear.
Performance (Linear) Attributes: The more, the better. Directly proportional to satisfaction. Clear value.
Delighters (Excitement) Attributes: Unexpected benefits. Presence creates delight, but absence doesn't create dissatisfaction. Non-linear.
Indifferent Attributes: Users don't care. Neutral impact on satisfaction.
Reverse Attributes: For some users, more is worse. Creates dissatisfaction for a segment.
A simpler alternative to RICE. Use when you need quick decisions or have limited data.
ICE Score = Impact × Confidence × Ease (e.g., Impact 7 × Confidence 5 × Ease 8 = 280)
Impact: 1-10 scale (how much does this improve the product?)
Confidence: 1-10 scale (how sure are you?)
Ease: 1-10 scale (how easy is it to implement?)
Limitation: ICE doesn't account for resource constraints as explicitly as RICE. Better for feature brainstorms than release planning.
Visual prioritization framework that plots features on two dimensions.
High Value, Low Effort
Do these first. High ROI.
Example: Fix critical UI bug affecting 20% of users (1 week)
High Value, High Effort
Strategic priorities. Invest heavily.
Example: Rebuild search engine (3 months, transforms experience)
Low Value, Low Effort
Backlog items. Do between bigger projects.
Example: Add new color theme (2 days)
Low Value, High Effort
Avoid. Say "no" or find alternative approach.
Example: Complete redesign of rarely-used settings
What: 1-on-1 conversations with users about their needs, pain points, and behaviors.
When: Discovery phase, exploring new problems, validating assumptions
Sample size: 5-7 users reveal ~85% of problems; more yields diminishing returns.
Key techniques:
What: Users interact with your product while thinking aloud, and you observe.
When: Validation phase, before launch, understanding user behavior
Key metrics:
Sample size: 5 users uncover ~85% of usability issues; use 8-10 when you want extra confidence.
What: Observe users in their natural environment using your product or performing the job.
When: Understanding real-world behavior, finding opportunities you wouldn't discover in lab
Benefits: Reveals implicit needs, workarounds, and environmental constraints
Example: Observing how a surgeon uses hospital software during surgery (not in a lab)
When: Testing hypotheses, gathering feedback from large populations, measuring satisfaction
Best practices:
Sample size calculation: n = (Z² × p × (1-p)) / E²
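Here Z is the z-score for your confidence level (1.96 for 95%), p is the expected proportion (0.5 is the conservative default), and E is the margin of error. Plugging in the common defaults, as a scratch query:

```sql
-- n = Z^2 * p * (1-p) / E^2
-- Z = 1.96 (95% confidence), p = 0.5 (worst case), E = 0.05 (5% margin)
SELECT CEIL(POWER(1.96, 2) * 0.5 * (1 - 0.5) / POWER(0.05, 2)) AS sample_size;
-- ~385 responses
```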
When: Deciding between two designs/features, optimizing conversion
Key concepts:
Semi-fictional representations of your target users based on research. Used for design and product decisions.
Name: Sarah Chen
Job: Engineering Manager at a Series B startup
Goals: Keep her 6-person team organized, reduce status update meetings, ship on time
Frustrations: Too many tools (Slack, Jira, Spreadsheets), hard to get one source of truth, micromanaging feels needed when visibility is low
Behavior: Checks project status first thing every morning, uses mobile during commute
Motivation: Wants to be a great leader and unblock her team quickly
Visualizes the user's experience across touchpoints, revealing pain points and opportunities.
Formula: CAC = (Sales + Marketing Spend) / New Customers Acquired (e.g., $100K combined spend / 500 new customers = $200 CAC)
"Activation" is when a user experiences core value. Define the "aha moment" for your product:
Activation Rate: % of new users who reach aha moment within first 7 days
Benchmark: >40% is good, >60% is excellent
Optimization: Reduce time-to-aha; each extra step before it can cut conversion by roughly 20% (query sketch below).
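Once the aha moment is instrumented as an event, activation rate is one query. A sketch reusing the `events` table from the SQL section below; the 'aha_moment' event type is an assumption:

```sql
-- % of new users who reach the aha moment within 7 days of signup
SELECT
  ROUND(100.0 * COUNT(DISTINCT b.user_id) / COUNT(DISTINCT a.user_id), 1)
    AS activation_rate_pct
FROM events a
LEFT JOIN events b
  ON a.user_id = b.user_id
  AND b.event_type = 'aha_moment'
  AND b.created_at BETWEEN a.created_at
                       AND a.created_at + INTERVAL '7 day'
WHERE a.event_type = 'signup';
```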
Retention is the most important metric because it indicates if users find lasting value.
D1 (Day 1) Retention: % of new users who return 1 day after signup
D7 (Day 7) Retention: % of new users who return within 7 days
D30 (Day 30) Retention: % of new users who return within 30 days
Cohort Retention Curve: Group users by signup date, track retention over 30/60/90 days. If the curve flattens out rather than decaying toward zero, retention is healthy.
MRR (Monthly Recurring Revenue): Total subscription revenue expected in a month
ARR (Annual Recurring Revenue): MRR × 12. Used for company valuation.
LTV (Lifetime Value): Total revenue from a customer over their lifetime
LTV:CAC Ratio: How much value you get per dollar spent acquiring
Stickiness Ratio (DAU/MAU): % of monthly users who are active daily
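Stickiness comes from the same event stream. A sketch against the `events` table used in the SQL section below (Postgres syntax):

```sql
-- Stickiness = DAU / MAU over a trailing window:
-- users active in the last day vs. users active in the last 30 days
SELECT
  ROUND(
    100.0 * COUNT(DISTINCT user_id) FILTER (
      WHERE created_at >= CURRENT_DATE - INTERVAL '1 day')
          / COUNT(DISTINCT user_id),
    1) AS stickiness_pct
FROM events
WHERE created_at >= CURRENT_DATE - INTERVAL '30 day';
```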
Other Engagement Metrics:
Look good but don't reflect real value:
Indicate real value and business health:
Great product organizations continuously alternate between two modes:
Understanding the problem and validating solutions
Building the validated solution
Run discovery and delivery in parallel. While engineering ships Feature A, product is validating Feature B with users.
Purpose: Define what work will be done this sprint
Inputs: Prioritized backlog, team capacity, dependencies
Outputs: Committed stories, acceptance criteria clear, sprint goal defined
Anti-pattern: Over-committing. Better to undercommit and finish early than miss commitments.
Purpose: Sync on progress, identify blockers, maintain momentum
Format: Each person: "Yesterday I [work], today I [work], blocker: [if any]"
Anti-pattern: Status reporting to PM. Should be peer-to-peer.
Purpose: Demo completed work, gather feedback
Audience: Stakeholders, other teams, customers if possible
What to show: Only completed, shippable work. Demo the feature working, not code.
Purpose: Reflect on process, identify improvements
Format: "What went well? What didn't? What will we do differently?"
Key: Blameless—focus on systems, not people. Pick 1-2 experiments for next sprint.
User stories capture work from the user's perspective with acceptance criteria.
As a [user type],
I want to [action]
so that [benefit]
Acceptance Criteria:
□ Given [context], when [action], then [result]
□ [User can do X]
□ [Error handling: if Y, then Z]
Title: Customer can sort order history by date
As a returning customer,
I want to sort my order history by date (newest first or oldest first)
so that I can quickly find a recent purchase
Acceptance Criteria:
□ Order history page shows "Sort by" dropdown
□ Default is "Newest first"
□ "Oldest first" option available
□ Sorting persists if user navigates away and returns
□ Mobile responsive
Clear criteria that determine when a story is truly complete.
The smallest version of your product that validates your core hypothesis.
Now: Current quarter. High confidence, committed.
Next: Next 1-2 quarters. Direction clear, but flexible.
Later: 3+ quarters out. Strategic themes, not specific features.
Benefit: Communicates direction without over-committing. Shows vision without false precision.
Features mapped to specific dates (Q1, Q2, Q3, etc.)
Use for: Communicating to board, enterprise customers
Risk: Creates false precision. Team and market change. Only commit current quarter to dates.
Organized by strategic themes (e.g., "Improve onboarding," "Increase engagement")
Benefit: Outcome-focused, not feature-focused. Allows flexibility in how you achieve theme.
Example theme: "Make mobile experience fastest in category"
Categorize stakeholders by Power (influence on decisions) and Interest (in the product) to determine engagement strategy.
High Power, High Interest → Manage Closely
High Power, Low Interest → Keep Satisfied
Low Power, High Interest → Keep Informed
Low Power, Low Interest → Monitor
Executives are busy. Communicate clearly and concisely.
Subject: [Recommendation] Brief context - [metric] impact
Example: RECOMMEND feature X launch - drives $2M ARR by Q4
RECOMMENDATION:
Launch Feature X in Q3. Projected 15% revenue lift ($2M ARR by Q4).
RATIONALE:
• 87% of enterprise customers requested this feature
• Addresses #1 churn reason (currently 12%; projected to drop to 5%)
• Competition launched similar feature last month
• Implementation is 6 weeks with existing team capacity
METRICS:
• Confidence: 85% (validated with 20 customers)
• ROI: 3:1 (6-week build cost vs $2M annual revenue)
• Risk: Low (feature is additive, doesn't affect existing UX)
DECISION NEEDED: Budget approval for any contractors if internal team unavailable
Engineering is not just a delivery mechanism. Great engineers spot problems and ideas you missed.
Stakeholders will disagree with your decisions. Here's how to handle it.
When stakeholder says: "We should build Feature X instead"
Don't respond with: "No, that's not strategic" (dismissive)
Do respond with: "I hear you. Tell me more about why X matters." (Listen first)
A/B tests are the PM's best tool for data-driven decisions. But most are done wrong.
1. Hypothesis: "Changing button color from blue to red will increase CTR by 5%"
2. Randomization: 50% of users see blue (control), 50% see red (variant). Must be random and persistent (see the bucketing sketch after this list).
3. Sample Size: Determine how many users you need before running test
4. Duration: Run for 1-2 weeks to account for day-of-week effects
5. Analysis: Did the variant outperform control statistically significantly? (p < 0.05)
6. Decision: Launch if significant. Iterate if not.
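For step 2, one common way to make assignment both random and persistent is to hash the user ID instead of flipping a coin per session. A minimal Postgres-flavored sketch; the `users` table is an assumption:

```sql
-- Deterministic 50/50 split: md5 digits are uniform, and the same
-- user_id always hashes to the same value, so assignment is sticky.
-- First hex digit 0-7 -> control, 8-f -> variant.
SELECT
  user_id,
  CASE WHEN SUBSTR(MD5(user_id::text), 1, 1) < '8'
       THEN 'control'   -- blue button
       ELSE 'variant'   -- red button
  END AS ab_group
FROM users;
```

In practice you would also salt the hash with an experiment ID so the same users don't land in the same bucket across every experiment.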
Formula: n = 16σ² / δ² (simplified for 80% power, 95% confidence), where σ² = p(1-p) for a conversion rate and δ is the absolute lift you want to detect
Example: baseline conversion is 5% and you want to detect a lift to 6% (δ = 0.01)
σ² = 0.05 × 0.95 = 0.0475
n = 16 × 0.0475 / 0.01²
n = 7,600 users per variant
Total: ~15,200 users needed
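The same arithmetic as a scratch query, with the example's constants:

```sql
-- n = 16 * p(1-p) / delta^2, p = 0.05, delta = 0.01 (5% -> 6%)
SELECT CEIL(16 * 0.05 * (1 - 0.05) / POWER(0.01, 2)) AS users_per_variant;
-- 7,600 per variant
```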
Practical tips:
Group users by signup date and track their behavior over time. Reveals whether your product is improving (query sketch after the example table).
Week 1-4 retention by signup cohort:
| Cohort | Week 1 | Week 2 | Week 3 | Week 4 | Trend |
|---|---|---|---|---|---|
| Jan 1-7 cohort | 45% | 28% | 18% | 12% | Baseline |
| Feb 1-7 cohort | 48% | 32% | 22% | 15% | Improving |
| Mar 1-7 cohort | 52% | 38% | 26% | 18% | Improving |
Insight: Retention curves are improving! Product changes or onboarding improvements are working. Keep going in that direction.
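A table like the one above can be produced straight from the event stream. A sketch of the first two weekly columns, assuming the same `events` table as the SQL section below (extend the pattern for weeks 3-4):

```sql
-- Weekly retention by signup cohort: % of each cohort active N weeks later
SELECT
  DATE_TRUNC('week', a.created_at)::date AS cohort_week,
  COUNT(DISTINCT a.user_id) AS cohort_size,
  ROUND(100.0 * COUNT(DISTINCT a.user_id) FILTER (
    WHERE b.created_at >= a.created_at + INTERVAL '7 day'
      AND b.created_at <  a.created_at + INTERVAL '14 day')
    / COUNT(DISTINCT a.user_id), 0) AS week_1_pct,
  ROUND(100.0 * COUNT(DISTINCT a.user_id) FILTER (
    WHERE b.created_at >= a.created_at + INTERVAL '14 day'
      AND b.created_at <  a.created_at + INTERVAL '21 day')
    / COUNT(DISTINCT a.user_id), 0) AS week_2_pct
FROM events a
LEFT JOIN events b ON b.user_id = a.user_id
WHERE a.event_type = 'signup'
GROUP BY 1
ORDER BY 1;
```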
Identify where users drop off and prioritize biggest opportunity.
Each row shows % of previous step (conversion rate between stages)
1. Landing page: 10,000 users
→ 2. Product page: 5,000 users (50% conversion)
→ 3. Add to cart: 2,000 users (40% conversion) ← BIGGEST DROP
→ 4. Checkout: 1,600 users (80% conversion)
→ 5. Purchase: 1,200 users (75% conversion)
Key insight: The biggest drop is between product page and add-to-cart. Investigate why 60% of browsers don't add items, then A/B test candidate fixes (the stage counts can be pulled with the query sketch below).
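The stage counts come from conditional counts over the event stream. A sketch against the same hypothetical `events` table; the event type names are assumptions:

```sql
-- Distinct users reaching each funnel stage over the last 30 days
SELECT
  COUNT(DISTINCT user_id) FILTER (WHERE event_type = 'view_landing') AS landing,
  COUNT(DISTINCT user_id) FILTER (WHERE event_type = 'view_product') AS product_page,
  COUNT(DISTINCT user_id) FILTER (WHERE event_type = 'add_to_cart')  AS added_to_cart,
  COUNT(DISTINCT user_id) FILTER (WHERE event_type = 'checkout')     AS checkout,
  COUNT(DISTINCT user_id) FILTER (WHERE event_type = 'purchase')     AS purchase
FROM events
WHERE created_at >= CURRENT_DATE - INTERVAL '30 day';
```

Divide each column by the previous one to get the step-to-step conversion rates shown above.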
You don't need to be a SQL expert, but knowing SQL lets you pull your own data and iterate fast.
```sql
-- Find daily active users over time
SELECT
  DATE(created_at) AS activity_date,
  COUNT(DISTINCT user_id) AS dau
FROM events
WHERE created_at >= '2024-01-01'
GROUP BY DATE(created_at)
ORDER BY activity_date DESC;
```
```sql
-- Day-7 retention by signup cohort: % of each day's signups
-- who come back exactly 7 days after signing up
SELECT
  DATE(a.created_at) AS signup_date,
  COUNT(DISTINCT a.user_id) AS cohort_size,
  COUNT(DISTINCT b.user_id) AS day_7_retained,
  ROUND(100.0 * COUNT(DISTINCT b.user_id) / COUNT(DISTINCT a.user_id), 1) AS retention_pct
FROM events a
LEFT JOIN events b        -- LEFT JOIN keeps cohort members who never returned
  ON a.user_id = b.user_id
  AND DATE(b.created_at) = DATE(a.created_at) + INTERVAL '7 day'
WHERE a.event_type = 'signup'
GROUP BY DATE(a.created_at)
ORDER BY signup_date DESC;
```
This tests: Product thinking, analytical ability, research, articulation
How to answer: Pick a product you genuinely use (not the company's product unless you really use it). Spend 60 seconds on why you admire it. Then propose 2-3 specific improvements:
What not to do: Don't criticize the product harshly. Don't suggest changes without reasoning. Don't pick a product you don't actually use.
This tests: Prioritization frameworks, strategic thinking, collaborative skills
How to answer: Walk through your process, not just the answer. Use a framework.
Key point: Show that you involve others and use data, not gut feel.
This tests: Metrics thinking, understanding of causation, analytical skills
How to answer: Define success metrics that ladder up to business outcomes.
Example: "For a 'save for later' feature on Amazon: Week 1 = adoption %, Month 1 = saves per user per week, Quarter 1 = Does it increase LTV by letting users find products later?"
This tests: Product sense, user empathy, problem identification
How to answer: Focus on a real problem you've experienced, not obvious features everyone knows about.
This tests: Crisis management, analytical thinking, prioritization under pressure
How to answer: Stay calm. Think systematically.
Key: Show you don't panic. You investigate before concluding.
This tests: Collaboration, conflict resolution, maturity
How to answer: Show respect for engineering and willingness to change your mind.
Story example: "I wanted to add a recommendation engine. Engineering said it'd take 4 months. I asked what's possible in 2 weeks. They said 'simple content-based recommendations.' We shipped that, measured impact, and only built ML when we had the data to justify it."
This tests: Learning mindset, self-awareness, humility
How to answer: Pick something real. Show you learned from mistakes.
Key: Don't make excuses. Own it. Show learning.
This tests: Strategic thinking, prioritization, saying no
How to answer: "Saying no is as important as saying yes."
This tests: Org/planning skills, strategy, stakeholder management
How to answer: Show an annual cycle with flexibility.
This tests: Lean methodology, speed, research skills
How to answer: Show you can learn fast and cheap.
Timeframe: "All of this takes 2-3 weeks, not 3 months of development."
Timeline: Idea → Validation (2 weeks) → Planning (1 week) → Build (2-6 weeks) → Launch (1 week) → Iterate
This tests: Cross-functional collaboration, user-centric thinking
Pattern: Understand → Align on Goal → Propose Alternative
Key: Make it about evidence and the problem, not about being right.
This tests: End-to-end product thinking, market understanding, strategy
How to answer: Think out loud. Ask clarifying questions.
This tests: Analytical thinking, user psychology, feature prioritization
How to answer:
Key factors:
17. How would you handle a post-launch failure? Investigate root cause (product, GTM, timing). Decide: iterate product, change messaging, pivot GTM, or sunset. Own it.
18. How do you respond to competitive threats? Analyze competitor strengths/weaknesses. Double down on your strengths. Don't copy—differentiate. Validate with customers first.
19. How would you decide on pricing? Value-based pricing: what's the economic value delivered? Competitive pricing: what do alternatives cost? Cost-plus: cost + margin. Run experiments. Test with small segment first.
20. How would you approach international expansion? Start with markets similar to home (language, culture). Validate demand. Adapt product for local needs (payment, language, regulations). Hire local PM.
21. How do you prioritize technical debt? It's a feature: adds reliability, speed, team velocity. Allocate 20% of capacity to it. Measure: deployment frequency, incident rate, time-to-value.
22. How would you improve onboarding? Find time-to-aha. Reduce friction at each step. A/B test onboarding flows. Measure: % reaching aha in 5 minutes, D1 retention.
23. How do you decide between acquisitions and building? Speed, risk, culture, cost. Acquisition is faster but riskier. Building is slower but aligns with strategy. Usually: acquire talent/customers, build product.
24. How would you monetize a freemium product? Freemium must solve 80% of user's problem (or they won't convert). Paywall is feature users hit (not arbitrary). Measure: free-to-paid conversion, LTV.
25. How do you build a product culture? Hire product-minded engineers. Involve team in discovery, not just delivery. Share metrics. Celebrate launches. Debrief failures without blame.
26. How would you measure product quality? Reliability (uptime), performance (load time), usability (SUS score), user satisfaction (NPS), adoption (% using feature).
27. How do you decide between building in-house vs. using third-party tools? Build if: core to differentiation, frequent changes needed. Partner if: commodity, maintenance burden, cost-prohibitive.
28. How would you handle feature requests from power users? Listen, but don't build immediately. Survey broader base: do others want it? High-power users aren't representative. If niche, consider "pro tier".
29. How do you maintain focus with limited resources? Say no to 95% of ideas. One product focus at a time. Measure: are we moving North Star? If not, we're distracted.
30. What's your biggest product failure and what did you learn? Pick something real. Humble. Show learning. Example: "Built feature nobody wanted. Learned to validate with users first, not assumptions. Now we test before building."
Spending weeks building a feature nobody wants. Always validate before building: interviews, surveys, MVPs.
Optimizing for "features shipped" instead of "problems solved." Focus on impact, not output.
Telling them how to build instead of why. Engineers are smarter than you at technical execution. Trust them.
Chasing new users while ignoring why current users leave. Retention is 10x more valuable than acquisition.
Making decisions based on gut feel. Use frameworks (RICE, Kano, Value/Effort). Be consistent.
The moment a user experiences core value from your product
Customer Acquisition Cost = (sales + marketing spend) / new customers acquired
% of users/customers who stop using product in a period
Group of users grouped by signup date or characteristic
Daily / Monthly Active Users
Lifetime Value = total revenue from a customer
Net Promoter Score = % promoters - % detractors (ranges from -100 to 100)
Objectives (qualitative goals) + Key Results (measurable outcomes)