Post-Launch Workflow
Optimize and iterate after launch with feedback collection, metrics analysis, quick wins, and roadmap prioritization
The Post-Launch workflow helps you capitalize on launch momentum by systematically collecting feedback, analyzing metrics, implementing quick wins, and building a data-driven roadmap for continued growth.
Overview#
| Property | Value |
|---|---|
| Phases | 4 (Feedback, Analysis, Quick Wins, Roadmap) |
| Tier | Free |
| Typical Duration | 2-4 weeks |
| Best For | Post-launch optimization, continuous improvement |
Outcomes#
A successful post-launch workflow results in:
- Comprehensive understanding of user feedback
- Clear metrics baseline for future comparison
- Immediate improvements deployed within first week
- Data-driven product roadmap
- Foundation for product-market fit measurement
Timeline#
```text
WEEK 1: Feedback Collection
├── Day 1-2: Set up feedback channels
├── Day 3-5: Active outreach to early users
└── Day 6-7: Consolidate and categorize feedback

WEEK 2: Analysis & Quick Wins
├── Day 1-2: Deep dive into metrics
├── Day 3-4: Identify and prioritize quick wins
└── Day 5-7: Implement quick wins

WEEK 3-4: Roadmap & Iteration
├── Week 3: Build prioritized roadmap
└── Week 4: Begin first iteration cycle
```
Phase 1: Feedback Collection (Week 1)#
Set Up Feedback Channels#
In-App Feedback:
```tsx
// components/FeedbackWidget.tsx
'use client';

import { useState } from 'react';
import { MessageSquare, X } from 'lucide-react';

export function FeedbackWidget() {
  const [isOpen, setIsOpen] = useState(false);
  const [feedback, setFeedback] = useState('');
  const [type, setType] = useState<'bug' | 'feature' | 'other'>('feature');
  const [submitted, setSubmitted] = useState(false);

  async function handleSubmit(e: React.FormEvent) {
    e.preventDefault();

    await fetch('/api/feedback', {
      method: 'POST',
      body: JSON.stringify({ feedback, type }),
    });

    setSubmitted(true);
    setTimeout(() => {
      setIsOpen(false);
      setSubmitted(false);
      setFeedback('');
    }, 2000);
  }

  return (
    <>
      <button
        onClick={() => setIsOpen(true)}
        className="fixed bottom-4 right-4 p-3 bg-primary text-primary-foreground rounded-full shadow-lg"
      >
        <MessageSquare className="w-6 h-6" />
      </button>

      {isOpen && (
        <div className="fixed bottom-20 right-4 w-80 bg-card border rounded-lg shadow-xl p-4">
          <div className="flex justify-between items-center mb-4">
            <h3 className="font-semibold">Send Feedback</h3>
            <button onClick={() => setIsOpen(false)}>
              <X className="w-4 h-4" />
            </button>
          </div>

          {submitted ? (
            <p className="text-center py-8 text-green-600">
              Thanks for your feedback!
            </p>
          ) : (
            <form onSubmit={handleSubmit} className="space-y-4">
              <div className="flex gap-2">
                {(['bug', 'feature', 'other'] as const).map((t) => (
                  <button
                    key={t}
                    type="button"
                    onClick={() => setType(t)}
                    className={`px-3 py-1 rounded text-sm ${
                      type === t
                        ? 'bg-primary text-primary-foreground'
                        : 'bg-muted'
                    }`}
                  >
                    {t.charAt(0).toUpperCase() + t.slice(1)}
                  </button>
                ))}
              </div>
              <textarea
                value={feedback}
                onChange={(e) => setFeedback(e.target.value)}
                placeholder="Tell us what you think..."
                className="w-full p-2 border rounded min-h-[100px]"
                required
              />
              <button
                type="submit"
                className="w-full py-2 bg-primary text-primary-foreground rounded"
              >
                Submit
              </button>
            </form>
          )}
        </div>
      )}
    </>
  );
}
```
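The widget above posts to `/api/feedback`. A minimal sketch of that route handler, assuming a Next.js App Router project; where you persist the feedback (database, Slack webhook, issue tracker) is up to you and shown here only as a placeholder:

```ts
// app/api/feedback/route.ts
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  const { feedback, type } = await request.json();

  if (typeof feedback !== 'string' || feedback.trim() === '') {
    return NextResponse.json({ error: 'Feedback text is required' }, { status: 400 });
  }

  // Placeholder: persist to your database or forward to Slack/Linear here
  console.log('New feedback:', { type, feedback });

  return NextResponse.json({ ok: true });
}
```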
Post-Action Surveys:
```ts
// Trigger a short survey after key user actions
export function triggerSurvey(action: string, userId: string) {
  const surveys: Record<string, { question: string; options: string[] }> = {
    first_project_created: {
      question: "How easy was it to create your first project?",
      options: ['Very easy', 'Somewhat easy', 'Neutral', 'Difficult', 'Very difficult'],
    },
    first_week_complete: {
      question: "How likely are you to recommend us to a colleague?",
      options: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10'],
    },
    upgrade_considered: {
      question: "What's holding you back from upgrading?",
      options: ['Price', 'Missing feature', 'Not sure of value', 'Other'],
    },
  };

  const survey = surveys[action];
  if (survey) {
    // Show survey modal or embed
  }
}
```
User Interview Framework#
Interview Request Email:
```text
Subject: Quick chat about your experience with [Product]?

Hi [Name],

Thanks for being one of our early users! I noticed you've been
[using feature X / active for Y days / etc].

I'd love to hear about your experience so far. Would you have
15 minutes for a quick video call this week?

As a thank you, I'll [add credits / extend trial / give swag].

Here's my calendar: [Calendly link]

Best,
[Founder name]
```
Interview Questions:
```md
## User Interview Script (15 min)

### Opening (2 min)
- Thank them for their time
- Explain this is about learning, not selling
- Ask permission to take notes

### Background (3 min)
- What do you do? What's your role?
- What were you using before [Product]?
- How did you hear about us?

### Experience (5 min)
- Walk me through how you use [Product]
- What's the most valuable part for you?
- What's been frustrating or confusing?
- What's missing that would make this a must-have?

### Deep Dive (3 min)
- [Follow up on interesting points]
- Can you show me how you do [specific task]?

### Closing (2 min)
- On a scale of 1-10, how likely to recommend?
- Anything else you want to share?
- Can I follow up with more questions later?
```
Feedback Categorization#
Organize feedback into these buckets:
| Category | Examples | Priority |
|---|---|---|
| Bugs | Crashes, errors, broken features | Highest |
| UX Issues | Confusing flows, unclear UI | High |
| Missing Features | Requested capabilities | Medium |
| Nice-to-Have | Polish, optimizations | Low |
| Praise | What's working well | Track |
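To keep consolidated feedback workable, it helps to tag every item with one of these buckets and sort by priority and frequency. A minimal sketch; the types and field names are illustrative, not part of any specific tool:

```ts
type FeedbackCategory = 'bug' | 'ux' | 'missing-feature' | 'nice-to-have' | 'praise';

interface FeedbackItem {
  id: string;
  source: 'widget' | 'survey' | 'interview' | 'email';
  category: FeedbackCategory;
  summary: string;
  count: number; // how many users raised the same point
}

// Priority order mirrors the table above: bugs first, praise tracked last
const priority: Record<FeedbackCategory, number> = {
  bug: 0,
  ux: 1,
  'missing-feature': 2,
  'nice-to-have': 3,
  praise: 4,
};

export function sortFeedback(items: FeedbackItem[]): FeedbackItem[] {
  return [...items].sort(
    (a, b) => priority[a.category] - priority[b.category] || b.count - a.count
  );
}
```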
Phase 2: Metrics Analysis (Week 2)#
Key Metrics Dashboard#
```ts
// lib/analytics/post-launch.ts
export async function getPostLaunchMetrics(launchDate: Date) {
  const weeksSinceLaunch = Math.floor(
    (Date.now() - launchDate.getTime()) / (7 * 24 * 60 * 60 * 1000)
  );

  const dau = await getDAU();
  const wau = await getWAU();

  return {
    // Acquisition
    signups: await getSignupsByWeek(launchDate, weeksSinceLaunch),
    signupSources: await getSignupSources(),

    // Activation
    activationRate: await getActivationRate(), // % completing key action
    timeToActivation: await getTimeToActivation(), // median time

    // Engagement
    dau,
    wau,
    dauWauRatio: dau / wau, // stickiness

    // Retention
    d1Retention: await getRetention(1),
    d7Retention: await getRetention(7),
    d30Retention: await getRetention(30),

    // Revenue (if applicable)
    trialToPayingRate: await getConversionRate(),
    mrr: await getMRR(),
    arpu: await getARPU(),
  };
}
```
Metrics Benchmarks#
Compare your metrics to these benchmarks:
| Metric | Good | Great | Excellent |
|---|---|---|---|
| Activation Rate | 20% | 40% | 60%+ |
| D1 Retention | 30% | 40% | 50%+ |
| D7 Retention | 15% | 25% | 35%+ |
| D30 Retention | 10% | 15% | 25%+ |
| DAU/WAU | 25% | 40% | 60%+ |
| Trial Conversion | 5% | 10% | 20%+ |
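If you want these thresholds in code, a small helper can grade each metric against the table above. This is an illustrative sketch: the metric keys are made up for the example, and values are expressed as fractions (e.g. 0.4 for 40%):

```ts
// Benchmark thresholds from the table above: [good, great, excellent]
const benchmarks: Record<string, [number, number, number]> = {
  activationRate: [0.2, 0.4, 0.6],
  d1Retention: [0.3, 0.4, 0.5],
  d7Retention: [0.15, 0.25, 0.35],
  d30Retention: [0.1, 0.15, 0.25],
  dauWauRatio: [0.25, 0.4, 0.6],
  trialConversion: [0.05, 0.1, 0.2],
};

export function gradeMetric(
  metric: string,
  value: number
): 'below' | 'good' | 'great' | 'excellent' {
  const thresholds = benchmarks[metric];
  if (!thresholds) throw new Error(`Unknown metric: ${metric}`);
  const [good, great, excellent] = thresholds;
  if (value >= excellent) return 'excellent';
  if (value >= great) return 'great';
  if (value >= good) return 'good';
  return 'below';
}

// gradeMetric('d7Retention', 0.28) === 'great'
```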
Cohort Analysis#
```sql
-- Weekly cohort retention analysis
WITH cohorts AS (
  SELECT
    user_id,
    DATE_TRUNC('week', created_at) as cohort_week
  FROM users
),
activity AS (
  SELECT
    user_id,
    DATE_TRUNC('week', timestamp) as activity_week
  FROM events
)
SELECT
  c.cohort_week,
  COUNT(DISTINCT c.user_id) as cohort_size,
  COUNT(DISTINCT CASE
    WHEN a.activity_week = c.cohort_week + INTERVAL '1 week'
    THEN a.user_id
  END) as week_1,
  COUNT(DISTINCT CASE
    WHEN a.activity_week = c.cohort_week + INTERVAL '2 weeks'
    THEN a.user_id
  END) as week_2,
  COUNT(DISTINCT CASE
    WHEN a.activity_week = c.cohort_week + INTERVAL '4 weeks'
    THEN a.user_id
  END) as week_4
FROM cohorts c
LEFT JOIN activity a ON c.user_id = a.user_id
GROUP BY c.cohort_week
ORDER BY c.cohort_week;
```
Funnel Analysis#
```ts
// Track key funnel steps
interface FunnelStep {
  step: string;
  count: number;
  conversionRate?: string;
}

const funnel: FunnelStep[] = [
  { step: 'Landing Page Visit', count: 10000 },
  { step: 'Signup Started', count: 2000 },
  { step: 'Signup Completed', count: 1500 },
  { step: 'First Action', count: 600 },
  { step: 'Core Feature Used', count: 300 },
  { step: 'Retained Day 7', count: 150 },
];

// Calculate step-to-step conversion rates
funnel.forEach((step, i) => {
  if (i > 0) {
    step.conversionRate = ((step.count / funnel[i - 1].count) * 100).toFixed(1) + '%';
  }
});
```
Phase 3: Quick Wins (Week 2)#
Identifying Quick Wins#
Quick wins meet these criteria:
- High impact - Addresses common feedback
- Low effort - Can ship in 1-2 days
- Low risk - Won't break existing features
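One way to make the triage concrete is to score candidates on impact per day of effort and drop anything risky. A rough sketch with illustrative field names:

```ts
// Illustrative quick-win triage: high impact, low effort, low risk
interface Candidate {
  title: string;
  impact: 1 | 2 | 3;   // 3 = addresses the most common feedback
  effortDays: number;  // rough engineering estimate
  risky: boolean;      // touches core flows or data?
}

export function pickQuickWins(candidates: Candidate[], maxEffortDays = 2): Candidate[] {
  return candidates
    .filter((c) => !c.risky && c.effortDays <= maxEffortDays)
    .sort((a, b) => b.impact / b.effortDays - a.impact / a.effortDays);
}
```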
Quick Win Categories#
UX Improvements:
- Clearer error messages
- Better loading states
- Improved empty states
- Tooltip additions
- Mobile responsiveness fixes
Performance:
- Image optimization
- Query optimization
- Caching implementation
- Lazy loading
Onboarding:
- Welcome tour
- Sample data
- Contextual help
- Email sequence improvements
Quick Win Tracking#
```ts
// Track quick win impact
interface QuickWin {
  id: string;
  description: string;
  metric: string;
  before: number;
  after: number;
  impact: number;
  deployedAt: Date;
}

const quickWins: QuickWin[] = [
  {
    id: 'empty-state-cta',
    description: 'Added CTA to empty project list',
    metric: 'projects_created_first_session',
    before: 23, // %
    after: 41, // %
    impact: 78, // % improvement
    deployedAt: new Date('2024-01-15'),
  },
  // ... more wins
];
```
Phase 4: Roadmap Prioritization (Week 3-4)#
RICE Scoring Framework#
Score features using RICE:
| Factor | Definition | Scale |
|---|---|---|
| Reach | Users affected per quarter | Number |
| Impact | Effect on user | 0.25, 0.5, 1, 2, 3 |
| Confidence | How confident you are in the estimates | 0-100% |
| Effort | Person-weeks | Number |
RICE Score = (Reach × Impact × Confidence) / Effort
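As a sanity check on the arithmetic, here is the same formula as a small TypeScript helper, with confidence expressed as a fraction (e.g. 0.8 for 80%); the type and function names are illustrative:

```ts
interface RiceInput {
  reach: number;                  // users affected per quarter
  impact: 0.25 | 0.5 | 1 | 2 | 3; // scale from the table above
  confidence: number;             // 0-1, e.g. 0.8 for 80%
  effort: number;                 // person-weeks
}

export function riceScore({ reach, impact, confidence, effort }: RiceInput): number {
  return (reach * impact * confidence) / effort;
}

// Example: 500 users/quarter, high impact (2), 80% confidence, 2 person-weeks
// riceScore({ reach: 500, impact: 2, confidence: 0.8, effort: 2 }) // => 400
```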
Feature Prioritization Template#
```md
## Feature: [Name]

### User Story
As a [user type], I want to [action] so that [benefit].

### RICE Score
- Reach: X users/quarter
- Impact: X (3=massive, 2=high, 1=medium, 0.5=low, 0.25=minimal)
- Confidence: X%
- Effort: X person-weeks
- **Score: X**

### Supporting Evidence
- X users requested this
- Competitor Y has this
- Would improve [metric] by [estimate]

### Dependencies
- Requires [other feature/infrastructure]
- Blocked by [blocker]

### Notes
[Additional context]
```
Roadmap Visualization#
```text
NOW (This Month)      NEXT (Next Month)     LATER (Future)
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│ Quick Win A     │   │ Feature X       │   │ Big Feature Y   │
│ Score: 142      │   │ Score: 89       │   │ Score: 45       │
├─────────────────┤   ├─────────────────┤   ├─────────────────┤
│ Quick Win B     │   │ Feature Z       │   │ Feature W       │
│ Score: 128      │   │ Score: 76       │   │ Score: 38       │
├─────────────────┤   ├─────────────────┤   └─────────────────┘
│ Bug Fix C       │   │ Integration A   │
│ Critical        │   │ Score: 65       │
└─────────────────┘   └─────────────────┘
```
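If your feature list already carries RICE scores, the Now / Next / Later columns can be generated mechanically. A rough sketch; the cutoffs and field names are illustrative, not a prescribed scheme:

```ts
interface ScoredItem {
  title: string;
  riceScore: number;
  critical?: boolean; // e.g. severe bugs jump the queue regardless of score
}

export function buildRoadmap(items: ScoredItem[]) {
  // Critical items first, then highest RICE score
  const ranked = [...items].sort(
    (a, b) =>
      Number(b.critical ?? false) - Number(a.critical ?? false) ||
      b.riceScore - a.riceScore
  );
  return {
    now: ranked.slice(0, 3),  // this month
    next: ranked.slice(3, 6), // next month
    later: ranked.slice(6),   // future
  };
}
```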
Recommended Agents#
| Phase | Agent | Purpose |
|---|---|---|
| Feedback | copywriting-expert | Interview scripts, survey design |
| Analysis | analytics-expert | Metrics setup, SQL queries |
| Quick Wins | frontend-expert | UX improvements |
| Quick Wins | performance-expert | Speed optimizations |
| Roadmap | product-expert | Prioritization framework |
Deliverables#
| Deliverable | Description |
|---|---|
| Feedback summary | Categorized user feedback with themes |
| Metrics baseline | Current state of key metrics |
| Quick wins list | Prioritized list of immediate improvements |
| Impact report | Before/after metrics for quick wins |
| Product roadmap | RICE-scored feature prioritization |
Best Practices#
- Act on feedback quickly - Show users you're listening
- Close the loop - Tell users when you ship their requests
- Measure everything - You can't improve what you don't measure
- Ship small - Many small improvements beat one big release
- Stay focused - Don't try to fix everything at once
- Celebrate progress - Team morale matters post-launch
Common Pitfalls#
- Analysis paralysis - Don't wait for perfect data
- Ignoring qualitative feedback - Numbers don't tell the whole story
- Feature factory - Building features without validation
- Premature optimization - Fix real problems first
- Burnout - Pace yourself after an intense launch
Iteration Cadence#
Establish a sustainable iteration rhythm:
Weekly:
- Review metrics dashboard (30 min)
- Triage feedback inbox (1 hr)
- Ship 1-2 quick wins
Bi-weekly:
- User interviews (30 min each, 2-3 users)
- Team retrospective (1 hr)
- Roadmap review (1 hr)
Monthly:
- Cohort analysis deep dive
- RICE re-scoring
- Stakeholder update
Related Workflows#
- Product-Market Fit - Measure PMF
- Metrics Dashboard - Track KPIs
- Retention - Keep users engaged
- Acquisition - Continue growing