# Post-Launch Workflow
Optimize and iterate after launch with feedback collection, metrics analysis, quick wins, and roadmap prioritization
The Post-Launch workflow helps you capitalize on launch momentum by systematically collecting feedback, analyzing metrics, implementing quick wins, and building a data-driven roadmap for continued growth.
## Overview
| Property | Value |
|---|---|
| Phases | 4 (Feedback, Analysis, Quick Wins, Roadmap) |
| Tier | Free |
| Typical Duration | 2-4 weeks |
| Best For | Post-launch optimization, continuous improvement |
## Outcomes
A successful post-launch workflow results in:
- Comprehensive understanding of user feedback
- Clear metrics baseline for future comparison
- Immediate improvements deployed within first week
- Data-driven product roadmap
- Foundation for product-market fit measurement
## Timeline

```
WEEK 1: Feedback Collection
├── Day 1-2: Set up feedback channels
├── Day 3-5: Active outreach to early users
└── Day 6-7: Consolidate and categorize feedback

WEEK 2: Analysis & Quick Wins
├── Day 1-2: Deep dive into metrics
├── Day 3-4: Identify and prioritize quick wins
└── Day 5-7: Implement quick wins

WEEK 3-4: Roadmap & Iteration
├── Week 3: Build prioritized roadmap
└── Week 4: Begin first iteration cycle
```
## Phase 1: Feedback Collection (Week 1)
### Set Up Feedback Channels
In-App Feedback: embed a lightweight widget (for example, a persistent "Send feedback" button that opens a short form) so users can report issues in context without leaving the product.
Post-Action Surveys: trigger a single-question survey (for example, a thumbs up/down or 1-5 rating) immediately after key actions such as completing onboarding or a first export.
### User Interview Framework
Interview Request Email: keep it short, personal, and specific. For example:

> Hi [Name], thanks for trying [Product]! I'm [Your Name], one of the founders. Would you have 20 minutes this week for a quick call about your experience? Your feedback directly shapes what we build next.

Interview Questions: favor open-ended prompts, for example:

- What were you hoping [Product] would help you with?
- Walk me through the last time you used it. What worked, and what didn't?
- Was anything confusing, or did anything almost make you give up?
- If you could change one thing tomorrow, what would it be?
- How would you feel if you could no longer use [Product]?
### Feedback Categorization
Organize feedback into these buckets:
| Category | Examples | Priority |
|---|---|---|
| Bugs | Crashes, errors, broken features | Highest |
| UX Issues | Confusing flows, unclear UI | High |
| Missing Features | Requested capabilities | Medium |
| Nice-to-Have | Polish, optimizations | Low |
| Praise | What's working well | Track |
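Once feedback is tagged with these categories, triage can be scripted. A minimal sketch using the priority order from the table (the feedback items themselves are made up):

```python
# Priority order mirrors the categorization table; "Praise" is tracked but ranked last.
PRIORITY = {"Bugs": 0, "UX Issues": 1, "Missing Features": 2, "Nice-to-Have": 3, "Praise": 4}

feedback = [
    {"text": "App crashes when I export", "category": "Bugs"},
    {"text": "Love the new dashboard!", "category": "Praise"},
    {"text": "Can't find the settings page", "category": "UX Issues"},
    {"text": "Please add CSV import", "category": "Missing Features"},
]

# Stable sort: highest-priority categories float to the top of the triage queue.
triaged = sorted(feedback, key=lambda item: PRIORITY[item["category"]])
for item in triaged:
    print(f"[{item['category']}] {item['text']}")
```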
## Phase 2: Metrics Analysis (Week 2)
### Key Metrics Dashboard
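A sketch of how the dashboard's core numbers could be computed from per-user records. The record shape and sample data are assumptions, not a prescribed schema:

```python
from datetime import date

# Hypothetical per-user records: signup date, dates the user was active,
# and whether the user completed the activation milestone.
users = [
    {"signup": date(2024, 5, 1), "active": [date(2024, 5, 2), date(2024, 5, 8)], "activated": True},
    {"signup": date(2024, 5, 1), "active": [date(2024, 5, 1)], "activated": False},
    {"signup": date(2024, 5, 2), "active": [date(2024, 5, 3)], "activated": True},
]

def rate(numerator, denominator):
    return round(100 * numerator / denominator, 1) if denominator else 0.0

def retention(users, day):
    """Share of users active exactly `day` days after signup (classic Dn retention)."""
    retained = sum(
        1 for u in users
        if any((d - u["signup"]).days == day for d in u["active"])
    )
    return rate(retained, len(users))

activation_rate = rate(sum(u["activated"] for u in users), len(users))
d1 = retention(users, 1)
d7 = retention(users, 7)

print(f"Activation: {activation_rate}%  D1: {d1}%  D7: {d7}%")
```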
### Metrics Benchmarks
Compare your metrics to these benchmarks:
| Metric | Good | Great | Excellent |
|---|---|---|---|
| Activation Rate | 20% | 40% | 60%+ |
| D1 Retention | 30% | 40% | 50%+ |
| D7 Retention | 15% | 25% | 35%+ |
| D30 Retention | 10% | 15% | 25%+ |
| DAU/WAU | 25% | 40% | 60%+ |
| Trial Conversion | 5% | 10% | 20%+ |
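Checking your numbers against these bands can be scripted. A minimal sketch using the thresholds from the table above:

```python
# (good, great, excellent) thresholds in percent, from the benchmarks table.
BENCHMARKS = {
    "activation_rate": (20, 40, 60),
    "d1_retention": (30, 40, 50),
    "d7_retention": (15, 25, 35),
    "d30_retention": (10, 15, 25),
    "dau_wau": (25, 40, 60),
    "trial_conversion": (5, 10, 20),
}

def grade(metric, value):
    """Return the benchmark band a metric value falls into."""
    good, great, excellent = BENCHMARKS[metric]
    if value >= excellent:
        return "Excellent"
    if value >= great:
        return "Great"
    if value >= good:
        return "Good"
    return "Below benchmark"

print(grade("d7_retention", 28))
```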
### Cohort Analysis
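A minimal cohort-analysis sketch: group users by ISO signup week and compute week-1 retention per cohort. The row shape and sample data are assumptions:

```python
from collections import defaultdict
from datetime import date

# Hypothetical rows: (user_id, signup_date, last_active_date)
rows = [
    ("u1", date(2024, 5, 6), date(2024, 5, 14)),
    ("u2", date(2024, 5, 7), date(2024, 5, 8)),
    ("u3", date(2024, 5, 13), date(2024, 5, 21)),
]

cohorts = defaultdict(lambda: {"size": 0, "retained_w1": 0})
for user, signup, last_active in rows:
    key = signup.isocalendar()[:2]          # (ISO year, ISO week) cohort key
    cohorts[key]["size"] += 1
    if (last_active - signup).days >= 7:    # still active a week after signup
        cohorts[key]["retained_w1"] += 1

for key in sorted(cohorts):
    c = cohorts[key]
    pct = 100 * c["retained_w1"] / c["size"]
    print(f"{key[0]}-W{key[1]:02d}: {c['size']} users, {pct:.0f}% retained at week 1")
```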
### Funnel Analysis
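A minimal funnel sketch computing step-over-step conversion. The funnel steps and counts are illustrative:

```python
# Hypothetical funnel counts from analytics: users reaching each step, in order.
funnel = [
    ("Visited landing page", 10_000),
    ("Signed up",             1_200),
    ("Completed onboarding",    600),
    ("Reached key action",      300),
]

# Conversion of each step relative to the step before it.
conversions = []
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    conversions.append((name, round(100 * n / prev_n, 1)))

for name, pct in conversions:
    print(f"{name:<22} {pct:5.1f}% of previous step")
```

The biggest percentage drop between adjacent steps is usually the first place to look for quick wins.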
## Phase 3: Quick Wins (Week 2)
### Identifying Quick Wins
Quick wins meet these criteria:
- High impact - Addresses common feedback
- Low effort - Can ship in 1-2 days
- Low risk - Won't break existing features
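The three criteria translate directly into a triage filter. A sketch, assuming each backlog item has been given a rough impact score, an effort estimate in days, and a risk flag:

```python
# Hypothetical backlog items scored during triage: impact (1-5),
# effort in days, and whether the change touches risky shared code.
backlog = [
    {"name": "Clearer export error message", "impact": 4, "effort_days": 0.5, "risky": False},
    {"name": "Rewrite billing engine",       "impact": 5, "effort_days": 20,  "risky": True},
    {"name": "Add loading spinner",          "impact": 3, "effort_days": 1,   "risky": False},
]

def is_quick_win(item):
    """High impact, shippable in 1-2 days, low risk: the three criteria above."""
    return item["impact"] >= 3 and item["effort_days"] <= 2 and not item["risky"]

quick_wins = [i["name"] for i in backlog if is_quick_win(i)]
print(quick_wins)
```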
### Quick Win Categories
UX Improvements:
- Clearer error messages
- Better loading states
- Improved empty states
- Tooltip additions
- Mobile responsiveness fixes
Performance:
- Image optimization
- Query optimization
- Caching implementation
- Lazy loading
Onboarding:
- Welcome tour
- Sample data
- Contextual help
- Email sequence improvements
### Quick Win Tracking

Track each quick win from source to measured impact, for example:

| Quick Win | Source | Effort | Status | Measured Impact |
|---|---|---|---|---|
| Clearer error messages | Support tickets | 0.5 day | Shipped | Fewer error-related tickets |
| Loading-state polish | User interviews | 1 day | In progress | TBD |
## Phase 4: Roadmap Prioritization (Week 3-4)
### RICE Scoring Framework

Score features using RICE:

| Factor | Definition | Scale |
|---|---|---|
| Reach | Users affected per quarter | Number |
| Impact | Effect on each user | 0.25, 0.5, 1, 2, 3 |
| Confidence | How sure you are of the estimates | 0-100% |
| Effort | Person-weeks of work | Number |

`RICE Score = (Reach × Impact × Confidence) / Effort`
### Feature Prioritization Template
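Scoring is mechanical once each feature has its four inputs. A minimal sketch (feature names and inputs are invented for illustration):

```python
# Hypothetical candidate features with RICE inputs:
# reach (users/quarter), impact (0.25-3), confidence (0-1), effort (person-weeks).
features = [
    {"name": "Export to CSV",   "reach": 800, "impact": 2,   "confidence": 0.8, "effort": 2},
    {"name": "Dark mode",       "reach": 500, "impact": 0.5, "confidence": 0.9, "effort": 1},
    {"name": "Team workspaces", "reach": 300, "impact": 3,   "confidence": 0.5, "effort": 6},
]

def rice(f):
    # RICE Score = (Reach x Impact x Confidence) / Effort
    return round(f["reach"] * f["impact"] * f["confidence"] / f["effort"], 1)

# Highest score first: this ordering feeds the Now/Next/Later roadmap buckets.
for f in sorted(features, key=rice, reverse=True):
    print(f"{f['name']:<16} RICE {rice(f):>6}")
```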
### Roadmap Visualization

```
NOW (This Month)    NEXT (Next Month)   LATER (Future)
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Quick Win A     │ │ Feature X       │ │ Big Feature Y   │
│ Score: 142      │ │ Score: 89       │ │ Score: 45       │
├─────────────────┤ ├─────────────────┤ ├─────────────────┤
│ Quick Win B     │ │ Feature Z       │ │ Feature W       │
│ Score: 128      │ │ Score: 76       │ │ Score: 38       │
├─────────────────┤ ├─────────────────┤ └─────────────────┘
│ Bug Fix C       │ │ Integration A   │
│ Critical        │ │ Score: 65       │
└─────────────────┘ └─────────────────┘
```
## Recommended Agents
| Phase | Agent | Purpose |
|---|---|---|
| Feedback | copywriting-expert | Interview scripts, survey design |
| Analysis | analytics-expert | Metrics setup, SQL queries |
| Quick Wins | frontend-expert | UX improvements |
| Quick Wins | performance-expert | Speed optimizations |
| Roadmap | product-expert | Prioritization framework |
## Deliverables
| Deliverable | Description |
|---|---|
| Feedback summary | Categorized user feedback with themes |
| Metrics baseline | Current state of key metrics |
| Quick wins list | Prioritized list of immediate improvements |
| Impact report | Before/after metrics for quick wins |
| Product roadmap | RICE-scored feature prioritization |
## Best Practices
- Act on feedback quickly - Show users you're listening
- Close the loop - Tell users when you ship their requests
- Measure everything - You can't improve what you don't measure
- Ship small - Many small improvements beat one big release
- Stay focused - Don't try to fix everything at once
- Celebrate progress - Team morale matters post-launch
## Common Pitfalls
- Analysis paralysis - Don't wait for perfect data
- Ignoring qualitative - Numbers don't tell the whole story
- Feature factory - Building features without validation
- Premature optimization - Fix real problems first
- Burnout - Pace yourself after an intense launch
## Iteration Cadence
Establish a sustainable iteration rhythm:
Weekly:
- Review metrics dashboard (30 min)
- Triage feedback inbox (1 hr)
- Ship 1-2 quick wins
Bi-weekly:
- User interviews (30 min each, 2-3 users)
- Team retrospective (1 hr)
- Roadmap review (1 hr)
Monthly:
- Cohort analysis deep dive
- RICE re-scoring
- Stakeholder update
## Related Workflows
- Product-Market Fit - Measure PMF
- Metrics Dashboard - Track KPIs
- Retention - Keep users engaged
- Acquisition - Continue growing