# Research Expert
The Research Expert agent specializes in technology research, technical evaluation, prototyping, and making informed technology decisions.
## Expertise Areas
- **Technology Evaluation** - Systematic tool and library assessment
- **Comparison Analysis** - Side-by-side technology comparisons
- **Proof of Concept** - Structured POC planning and execution
- **Research Documentation** - Clear findings and recommendations
- **Technology Radar** - Tracking adoption readiness
- **Benchmarking** - Performance comparison testing
- **Decision Frameworks** - Structured decision-making
## Usage Examples
### Technology Evaluation
Use the research-expert agent to evaluate database options for a high-traffic SaaS application.
Response includes:
- Evaluation criteria
- Comparison matrix
- Benchmark results
- Recommendation
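The benchmark results in such an evaluation typically report throughput and tail latencies (P50/P99). As an illustrative sketch only (not the agent's actual output), here is how percentile latencies could be computed from raw samples; the `percentile` helper and the sample data are hypothetical:

```javascript
// Hypothetical sketch: derive P50/P99 latency from raw samples,
// the kind of metrics a benchmark evaluation reports.
function percentile(samples, p) {
  // Sort ascending, then pick the value at the p-th percentile rank.
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latenciesMs = [32, 45, 41, 38, 120, 44, 47, 39, 85, 36];
console.log(`P50: ${percentile(latenciesMs, 50)}ms`); // P50: 41ms
console.log(`P99: ${percentile(latenciesMs, 99)}ms`); // P99: 120ms
```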
### POC Planning
Use the research-expert agent to plan a proof of concept for implementing real-time features with WebSockets.
Response includes:
- POC scope and objectives
- Success criteria
- Timeline and resources
- Decision framework
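A POC decision framework usually reduces to a GO / NO-GO / PIVOT call based on how many success criteria passed. A minimal sketch, assuming that simple rule (the `decide` function and criteria are hypothetical):

```javascript
// Hypothetical sketch of a GO / NO-GO / PIVOT decision:
// criteria is an array of { name, passed } entries from the POC.
function decide(criteria) {
  const passed = criteria.filter((c) => c.passed).length;
  if (passed === criteria.length) return 'GO';    // all criteria met
  if (passed === 0) return 'NO-GO';               // nothing validated
  return 'PIVOT';                                 // partial success
}

const result = decide([
  { name: 'Handles 1k concurrent connections', passed: true },
  { name: 'Reconnects within 2s', passed: false },
]);
console.log(result); // partial success → PIVOT
```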
### Research Documentation
Use the research-expert agent to document findings from evaluating CI/CD platforms.
Response includes:
- Executive summary
- Detailed analysis
- Comparison tables
- Clear recommendation
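Comparison tables like those in the findings can be generated from structured data rather than written by hand. A small illustrative sketch (the `comparisonTable` helper and option data are hypothetical) that renders a markdown table from scored options:

```javascript
// Hypothetical sketch: render a markdown comparison table from
// scored options, as research documentation would include.
function comparisonTable(criteria, options) {
  const header = `| Criteria | ${options.map((o) => o.name).join(' | ')} |`;
  const divider = `|${'---|'.repeat(options.length + 1)}`;
  const rows = criteria.map(
    (c) => `| ${c} | ${options.map((o) => o.scores[c]).join(' | ')} |`
  );
  return [header, divider, ...rows].join('\n');
}

console.log(
  comparisonTable(['Performance', 'Docs'], [
    { name: 'Option A', scores: { Performance: '5/5', Docs: '5/5' } },
    { name: 'Option B', scores: { Performance: '4/5', Docs: '3/5' } },
  ])
);
```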
## Best Practices Applied
### 1. Evaluation Process
- Define clear criteria
- Weight by importance
- Test thoroughly
- Document findings
### 2. Objectivity
- Multiple perspectives
- Quantitative metrics
- Reproducible tests
- Transparent methodology
### 3. Decision Making
- Risk assessment
- Long-term viability
- Team capability
- Migration paths
### 4. Documentation
- Executive summaries
- Detailed appendices
- Clear recommendations
- Action items
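"Weight by importance" can be made concrete by combining per-category scores into one weighted total, using the category weights from the evaluation framework below (40% technical, 25% DX, 20% ecosystem, 15% business). A hedged sketch; the `weightedScore` function and the sample scores are hypothetical:

```javascript
// Hypothetical sketch of weighted scoring: per-category scores
// (0-5) combined using the evaluation framework's weights.
const weights = { technical: 0.4, dx: 0.25, ecosystem: 0.2, business: 0.15 };

function weightedScore(scores) {
  return Object.entries(weights).reduce(
    (total, [category, weight]) => total + scores[category] * weight,
    0
  );
}

const optionA = { technical: 5, dx: 4, ecosystem: 4, business: 3 };
// 0.4*5 + 0.25*4 + 0.2*4 + 0.15*3 = 4.25
console.log(weightedScore(optionA).toFixed(2)); // 4.25
```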
## Common Patterns
### Technology Evaluation Framework
```markdown
## Evaluation Criteria

### Technical Fit (40%)
- [ ] Solves the core problem
- [ ] Compatible with current stack
- [ ] Scalability potential
- [ ] Performance characteristics
- [ ] Security posture

### Developer Experience (25%)
- [ ] Documentation quality
- [ ] Learning curve
- [ ] Debugging experience
- [ ] IDE/tooling support
- [ ] Error message clarity

### Ecosystem (20%)
- [ ] Community size and activity
- [ ] Third-party integrations
- [ ] Plugin/extension ecosystem
- [ ] Stack Overflow presence
- [ ] GitHub activity

### Business Viability (15%)
- [ ] Company/maintainer stability
- [ ] Licensing terms
- [ ] Pricing (if applicable)
- [ ] Support options
- [ ] Longevity outlook
```

### Comparison Matrix
```markdown
## Technology Comparison: [Category]

| Criteria | Option A | Option B | Option C |
|----------|----------|----------|----------|
| **Technical** | | | |
| Performance | 5/5 | 4/5 | 3/5 |
| Scalability | 4/5 | 5/5 | 3/5 |
| Security | 4/5 | 3/5 | 4/5 |
| **DX** | | | |
| Documentation | 5/5 | 3/5 | 4/5 |
| Learning curve | Easy | Moderate | Hard |
| **Ecosystem** | | | |
| Community | Large | Medium | Small |
| GitHub stars | 50K | 25K | 10K |
| **Business** | | | |
| Pricing | Free | Freemium | Paid |
| License | MIT | Apache | Commercial |

### Recommendation
[Your recommendation with rationale]
```

### POC Plan Template
```markdown
## POC Plan: [Technology/Approach]

### Objective
What are we trying to learn/validate?

### Scope
- In scope: [List]
- Out of scope: [List]

### Success Criteria
- [ ] Criterion 1: [Measurable outcome]
- [ ] Criterion 2: [Measurable outcome]
- [ ] Criterion 3: [Measurable outcome]

### Timeline
| Phase | Duration | Activities |
|-------|----------|------------|
| Setup | 1 day | Environment, dependencies |
| Build | 2-3 days | Core functionality |
| Test | 1 day | Performance, edge cases |
| Document | 0.5 day | Findings, decision |

### Decision Framework
- GO if: [Criteria met]
- NO-GO if: [Criteria failed]
- PIVOT if: [Partial success]
```

### Benchmarking Template
```markdown
## Performance Benchmark: [Subject]

### Test Environment
- Hardware: [Specs]
- OS: [Version]
- Runtime: [Node/Bun version]

### Methodology
- Tool used: [k6, wrk, etc.]
- Duration: [X minutes]
- Concurrent users: [N]

### Results
| Metric | Option A | Option B | Delta |
|--------|----------|----------|-------|
| Requests/sec | 1,500 | 2,100 | +40% |
| P50 latency | 45ms | 32ms | -29% |
| P99 latency | 120ms | 85ms | -29% |
| Memory | 512MB | 380MB | -26% |

### Conclusion
[Recommendation based on findings]
```

## Sample Prompts
| Task | Prompt |
|---|---|
| Evaluation | "Evaluate state management libraries for a React application" |
| Comparison | "Compare Vercel vs Railway for deployment" |
| POC | "Plan a POC for implementing GraphQL" |
| Benchmark | "Benchmark Prisma vs Drizzle ORM performance" |
| Decision | "Document decision for choosing authentication provider" |
## Configuration
```javascript
// bootspring.config.js
module.exports = {
  agents: {
    customInstructions: {
      'research-expert': `
        - Use structured evaluation frameworks
        - Include quantitative benchmarks
        - Document all findings clearly
        - Provide actionable recommendations
        - Consider long-term implications
      `,
    },
  },
  research: {
    documentationFormat: 'markdown',
    includeComparisons: true,
  },
};
```

## Related Agents
- **Architecture Expert** - System design decisions
- **Performance Expert** - Performance evaluation
- **DevOps Expert** - Infrastructure decisions