Caching · Redis · Performance · Architecture

Caching Strategies with Redis

Speed up your application with intelligent caching. From cache patterns to invalidation strategies to Redis best practices.

Bootspring Team
Engineering
April 28, 2024
5 min read

Caching reduces database load and speeds up responses. Redis provides fast, flexible caching for modern applications. Here's how to use it effectively.

Why Cache?#

Benefits:

- Reduce database load
- Lower latency (memory vs. disk)
- Handle traffic spikes
- Save money on database scaling

When to cache:

- Expensive computations
- Frequently accessed data
- Rarely changing data
- External API responses

Redis Setup#

```typescript
import Redis from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
  password: process.env.REDIS_PASSWORD,
  maxRetriesPerRequest: 3,
  // Backoff for reconnect attempts
  retryStrategy: (times) => Math.min(times * 100, 2000),
});

redis.on('error', (error) => {
  console.error('Redis error:', error);
});

redis.on('connect', () => {
  console.log('Redis connected');
});
```

Cache-Aside Pattern#

```typescript
// Most common pattern: check cache, fall back to the database

async function getUser(id: string): Promise<User | null> {
  const cacheKey = `user:${id}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss - fetch from database
  const user = await db.user.findUnique({ where: { id } });

  if (user) {
    // Store in cache with a 1-hour TTL
    await redis.setex(cacheKey, 3600, JSON.stringify(user));
  }

  return user;
}

// Update cache on write
async function updateUser(id: string, data: Partial<User>): Promise<User> {
  const user = await db.user.update({
    where: { id },
    data,
  });

  // Invalidate the cache; the next read repopulates it
  await redis.del(`user:${id}`);

  return user;
}
```

Write-Through Pattern#

```typescript
// Write to cache and database together

async function createUser(data: CreateUserInput): Promise<User> {
  const user = await db.user.create({ data });

  // Write to cache immediately so the first read is already a hit
  await redis.setex(`user:${user.id}`, 3600, JSON.stringify(user));

  return user;
}
```

Write-Behind Pattern#

```typescript
// Write to cache first, flush to the database asynchronously

class WriteQueue {
  private queue: Map<string, any> = new Map();

  async enqueue(key: string, data: any) {
    this.queue.set(key, data);

    // Flush on a size threshold (a periodic timer works too)
    if (this.queue.size >= 100) {
      await this.flush();
    }
  }

  async flush() {
    const entries = Array.from(this.queue.entries());
    this.queue.clear();

    // Batch write to the database
    await db.pageView.createMany({
      data: entries.map(([_, data]) => data),
      skipDuplicates: true,
    });
  }
}

const writeQueue = new WriteQueue();

async function trackPageView(userId: string, page: string) {
  const key = `pageview:${userId}:${page}:${Date.now()}`;

  // Write to cache immediately
  await redis.setex(key, 86400, JSON.stringify({ userId, page, timestamp: Date.now() }));

  // Queue for the batched database write
  await writeQueue.enqueue(key, { userId, page, timestamp: new Date() });
}
```

Cache Invalidation#

```typescript
// Pattern-based invalidation
// Note: KEYS is O(N) and blocks Redis; prefer SCAN (e.g. scanStream) in production
async function invalidateUserCaches(userId: string) {
  const keys = await redis.keys(`user:${userId}:*`);
  if (keys.length > 0) {
    await redis.del(...keys);
  }
}

// Tag-based invalidation
async function cacheWithTags(
  key: string,
  value: any,
  tags: string[],
  ttl: number
) {
  const pipeline = redis.pipeline();

  // Store the value
  pipeline.setex(key, ttl, JSON.stringify(value));

  // Add the key to each tag's set
  for (const tag of tags) {
    pipeline.sadd(`tag:${tag}`, key);
    pipeline.expire(`tag:${tag}`, ttl);
  }

  await pipeline.exec();
}

async function invalidateByTag(tag: string) {
  const keys = await redis.smembers(`tag:${tag}`);
  if (keys.length > 0) {
    await redis.del(...keys, `tag:${tag}`);
  }
}

// Usage
await cacheWithTags(
  'user:123:profile',
  userProfile,
  ['user:123', 'profiles'],
  3600
);

// Invalidate all caches for user 123
await invalidateByTag('user:123');
```

Distributed Locking#

```typescript
// Prevent a cache stampede with a short-lived lock

async function getWithLock<T>(
  key: string,
  fetchFn: () => Promise<T>,
  ttl: number
): Promise<T> {
  // Try cache first
  const cached = await redis.get(key);
  if (cached) {
    return JSON.parse(cached);
  }

  const lockKey = `lock:${key}`;

  // Try to acquire the lock: NX = only if absent, EX 10 = auto-expire in 10s
  const acquired = await redis.set(lockKey, '1', 'EX', 10, 'NX');

  if (acquired) {
    try {
      // We hold the lock: fetch and cache
      const data = await fetchFn();
      await redis.setex(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  } else {
    // Another process is fetching; wait briefly and retry
    await new Promise((resolve) => setTimeout(resolve, 100));
    return getWithLock(key, fetchFn, ttl);
  }
}
```
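One caveat with the lock above: if `fetchFn` takes longer than the 10-second expiry, the lock lapses, another process acquires it, and the first process's cleanup then deletes a lock it no longer owns. A common remedy (not in the original post) is to store a unique token as the lock value and delete only if it still matches. This sketch shows the ownership check against a minimal hypothetical `LockStore` interface; in real Redis the GET-and-DEL pair must run atomically, typically as a small Lua script via `EVAL`:

```typescript
// Minimal stand-in for the two Redis commands used here (hypothetical interface)
interface LockStore {
  get(key: string): Promise<string | null>;
  del(key: string): Promise<number>;
}

// Release the lock only if we still own it. In real Redis this GET + DEL pair
// must execute atomically (e.g. a Lua script via EVAL); it is split here for clarity.
async function releaseLock(
  store: LockStore,
  lockKey: string,
  token: string
): Promise<boolean> {
  const current = await store.get(lockKey);
  if (current === token) {
    await store.del(lockKey);
    return true;
  }
  // Lock expired and was re-acquired by someone else: leave it alone
  return false;
}
```

When acquiring, set a unique token (e.g. `crypto.randomUUID()`) instead of `'1'`, and pass the same token to `releaseLock` in the `finally` block.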

Cache Warming#

```typescript
// Pre-populate the cache for expected traffic

async function warmCache() {
  console.log('Warming cache...');

  // Fetch the most frequently viewed items
  const popularProducts = await db.product.findMany({
    orderBy: { viewCount: 'desc' },
    take: 100,
  });

  const pipeline = redis.pipeline();

  for (const product of popularProducts) {
    pipeline.setex(
      `product:${product.id}`,
      3600,
      JSON.stringify(product)
    );
  }

  await pipeline.exec();
  console.log(`Warmed ${popularProducts.length} products`);
}

// Run on startup or on a schedule
warmCache();
```
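Warming 100 keys with the same 3600-second TTL means they all expire at the same instant, and the misses come back in one burst. A small hedge, not from the original post, is to add random jitter to each TTL so expirations spread out. A minimal sketch (`ttlWithJitter` is a hypothetical helper):

```typescript
// Spread expirations by adding up to `spreadSeconds` of random jitter to a base TTL.
function ttlWithJitter(baseSeconds: number, spreadSeconds: number): number {
  return baseSeconds + Math.floor(Math.random() * spreadSeconds);
}

// e.g. inside the warming loop:
// pipeline.setex(`product:${product.id}`, ttlWithJitter(3600, 300), JSON.stringify(product));
```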

Cache Decorator#

```typescript
// TypeScript decorator for automatic caching

function Cached(ttl: number = 3600) {
  return function (
    target: any,
    propertyKey: string,
    descriptor: PropertyDescriptor
  ) {
    const originalMethod = descriptor.value;

    descriptor.value = async function (...args: any[]) {
      // Build a cache key from class, method, and serialized arguments
      const key = `${target.constructor.name}:${propertyKey}:${JSON.stringify(args)}`;

      const cached = await redis.get(key);
      if (cached) {
        return JSON.parse(cached);
      }

      const result = await originalMethod.apply(this, args);

      if (result !== null && result !== undefined) {
        await redis.setex(key, ttl, JSON.stringify(result));
      }

      return result;
    };

    return descriptor;
  };
}

// Usage
class UserService {
  @Cached(3600)
  async findById(id: string): Promise<User | null> {
    return db.user.findUnique({ where: { id } });
  }

  @Cached(300)
  async getStats(userId: string): Promise<UserStats> {
    // Expensive computation
    return calculateUserStats(userId);
  }
}
```

Monitoring#

```typescript
// Cache hit-rate monitoring
const stats = {
  hits: 0,
  misses: 0,
};

async function getCached<T>(key: string): Promise<T | null> {
  const value = await redis.get(key);

  if (value) {
    stats.hits++;
    return JSON.parse(value);
  }

  stats.misses++;
  return null;
}

function getHitRate(): number {
  const total = stats.hits + stats.misses;
  return total > 0 ? (stats.hits / total) * 100 : 0;
}

// Expose metrics
app.get('/cache/stats', (req, res) => {
  res.json({
    hits: stats.hits,
    misses: stats.misses,
    hitRate: `${getHitRate().toFixed(2)}%`,
  });
});
```

Best Practices#

DO:

- ✓ Set appropriate TTLs
- ✓ Use consistent key naming (user:123:profile)
- ✓ Handle cache failures gracefully
- ✓ Monitor hit rates
- ✓ Size your Redis instance properly
- ✓ Use pipelining for bulk operations

DON'T:

- ✗ Cache everything
- ✗ Use indefinite TTLs
- ✗ Store sensitive data unencrypted
- ✗ Ignore memory limits
- ✗ Forget to invalidate on updates

Conclusion#

Effective caching dramatically improves performance and scalability. Start with cache-aside for simplicity, implement proper invalidation, and monitor your hit rates.

Remember: the best cache is one you don't notice—until it's gone.
