Logging · Monitoring · Debugging · DevOps

Logging Best Practices for Production Applications

Learn how to log effectively for debugging and monitoring, from structured logging and log levels to centralized aggregation.

Bootspring Team
Engineering
November 28, 2023
5 min read

Good logging is your window into production systems. It helps debug issues, monitor health, and understand user behavior. Here's how to log effectively.

Structured Logging

```typescript
// ❌ Unstructured logging
console.log('User john@example.com logged in from 192.168.1.1');

// ✅ Structured logging
logger.info('User logged in', {
  userId: 'user-123',
  email: 'john@example.com',
  ip: '192.168.1.1',
  userAgent: 'Mozilla/5.0...',
  timestamp: new Date().toISOString(),
});

// Output (JSON)
{
  "level": "info",
  "message": "User logged in",
  "userId": "user-123",
  "email": "john@example.com",
  "ip": "192.168.1.1",
  "userAgent": "Mozilla/5.0...",
  "timestamp": "2024-01-15T10:30:00.000Z"
}
```

Logger Setup (Pino)

```typescript
import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  formatters: {
    level: (label) => ({ level: label }),
  },
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: {
    paths: ['password', 'token', 'authorization', '*.password'],
    censor: '[REDACTED]',
  },
  base: {
    service: process.env.SERVICE_NAME,
    version: process.env.APP_VERSION,
    environment: process.env.NODE_ENV,
  },
});

// Child loggers for context
function createRequestLogger(req: Request) {
  return logger.child({
    requestId: req.headers['x-request-id'],
    method: req.method,
    path: req.path,
    userId: req.user?.id,
  });
}

// Usage
app.use((req, res, next) => {
  req.log = createRequestLogger(req);
  req.log.info('Request started');
  next();
});
```

Log Levels

```typescript
// Log levels and when to use them
const logLevels = {
  fatal: 'System is unusable, immediate attention required',
  error: 'Error occurred, but system continues running',
  warn: 'Unexpected situation, but handled gracefully',
  info: 'Normal operational events',
  debug: 'Detailed information for debugging',
  trace: 'Very detailed tracing information',
};

// Examples
logger.fatal('Database connection pool exhausted', {
  connections: 0,
  maxConnections: 100,
});

logger.error('Payment processing failed', {
  orderId: 'order-123',
  error: error.message,
  stack: error.stack,
});

logger.warn('Rate limit approaching', {
  userId: 'user-123',
  currentRate: 95,
  limit: 100,
});

logger.info('Order placed', {
  orderId: 'order-123',
  userId: 'user-123',
  total: 99.99,
});

logger.debug('Cache lookup', {
  key: 'user:123',
  hit: true,
  ttl: 3600,
});

logger.trace('SQL query executed', {
  query: 'SELECT * FROM users WHERE id = $1',
  params: ['123'],
  duration: 15,
});
```

Request/Response Logging

```typescript
import pinoHttp from 'pino-http';

app.use(pinoHttp({
  logger,
  autoLogging: true,
  customLogLevel: (req, res, error) => {
    if (res.statusCode >= 500 || error) return 'error';
    if (res.statusCode >= 400) return 'warn';
    return 'info';
  },
  customSuccessMessage: (req, res) => {
    return `${req.method} ${req.url} completed`;
  },
  customErrorMessage: (req, res, error) => {
    return `${req.method} ${req.url} failed: ${error.message}`;
  },
  serializers: {
    req: (req) => ({
      method: req.method,
      url: req.url,
      headers: {
        'user-agent': req.headers['user-agent'],
        'content-type': req.headers['content-type'],
      },
    }),
    res: (res) => ({
      statusCode: res.statusCode,
    }),
  },
}));
```

Error Logging

```typescript
// Comprehensive error logging
function logError(error: Error, context: Record<string, any> = {}) {
  const errorInfo: Record<string, any> = {
    name: error.name,
    message: error.message,
    stack: error.stack,
    ...context,
  };

  // Add custom error properties
  if (error instanceof AppError) {
    errorInfo.code = error.code;
    errorInfo.statusCode = error.statusCode;
    errorInfo.isOperational = error.isOperational;
  }

  // Add cause chain
  if (error.cause) {
    errorInfo.cause = {
      name: (error.cause as Error).name,
      message: (error.cause as Error).message,
    };
  }

  logger.error(errorInfo, error.message);
}

// Usage
try {
  await processOrder(orderId);
} catch (error) {
  logError(error as Error, {
    orderId,
    userId: req.user.id,
    action: 'processOrder',
  });
  throw error;
}
```

Correlation IDs

```typescript
import { AsyncLocalStorage } from 'async_hooks';
import { v4 as uuid } from 'uuid';

const asyncLocalStorage = new AsyncLocalStorage<{ requestId: string }>();

// Middleware to set request ID
app.use((req, res, next) => {
  const requestId = req.headers['x-request-id'] as string || uuid();
  res.setHeader('x-request-id', requestId);

  asyncLocalStorage.run({ requestId }, () => {
    next();
  });
});

// Logger that includes request ID
const correlatedLogger = {
  info: (message: string, data?: object) => {
    const store = asyncLocalStorage.getStore();
    logger.info({ requestId: store?.requestId, ...data }, message);
  },
  error: (message: string, data?: object) => {
    const store = asyncLocalStorage.getStore();
    logger.error({ requestId: store?.requestId, ...data }, message);
  },
};

// Now all logs include requestId automatically
correlatedLogger.info('Processing order', { orderId: '123' });
// { "requestId": "abc-123", "orderId": "123", "msg": "Processing order" }
```

Log Aggregation

```
# Fluentd configuration
<source>
  @type tail
  path /var/log/app/*.log
  pos_file /var/log/fluentd/app.log.pos
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

<filter app.logs>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    cluster "#{ENV['CLUSTER_NAME']}"
  </record>
</filter>

<match app.logs>
  @type elasticsearch
  host elasticsearch
  port 9200
  index_name app-logs
  type_name _doc
</match>
```

Sensitive Data

```typescript
// Redact sensitive fields
const sensitiveFields = [
  'password',
  'token',
  'secret',
  'authorization',
  'creditCard',
  'ssn',
];

function redactSensitive(obj: any): any {
  if (typeof obj !== 'object' || obj === null) {
    return obj;
  }

  const redacted = Array.isArray(obj) ? [...obj] : { ...obj };

  for (const key of Object.keys(redacted)) {
    if (sensitiveFields.some((f) => key.toLowerCase().includes(f))) {
      redacted[key] = '[REDACTED]';
    } else if (typeof redacted[key] === 'object') {
      redacted[key] = redactSensitive(redacted[key]);
    }
  }

  return redacted;
}

// Pino redaction
const logger = pino({
  redact: {
    paths: [
      'password',
      '*.password',
      'req.headers.authorization',
      'req.headers.cookie',
    ],
    remove: true, // Remove instead of replacing with [REDACTED]
  },
});
```

Performance Logging

```typescript
// Log slow operations
async function withTiming<T>(
  operation: string,
  fn: () => Promise<T>,
  threshold = 1000
): Promise<T> {
  const start = performance.now();

  try {
    const result = await fn();
    const duration = performance.now() - start;

    if (duration > threshold) {
      logger.warn('Slow operation', {
        operation,
        duration: Math.round(duration),
        threshold,
      });
    } else {
      logger.debug('Operation completed', {
        operation,
        duration: Math.round(duration),
      });
    }

    return result;
  } catch (error) {
    const duration = performance.now() - start;
    logger.error('Operation failed', {
      operation,
      duration: Math.round(duration),
      error: (error as Error).message,
    });
    throw error;
  }
}

// Usage
const users = await withTiming('fetchUsers', () => db.user.findMany());
```

Best Practices

DO:

- ✓ Use structured JSON logging
- ✓ Include correlation IDs
- ✓ Log at appropriate levels
- ✓ Redact sensitive data
- ✓ Include relevant context
- ✓ Centralize log aggregation

DON'T:

- ✗ Log sensitive data (passwords, tokens)
- ✗ Use console.log in production
- ✗ Log too much (performance impact)
- ✗ Log too little (missing context)
- ✗ Ignore log rotation
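The last point is easy to overlook: unrotated log files eventually fill the disk. Production setups usually delegate rotation to logrotate or a rolling-file transport (such as pino-roll), but the mechanism is simple enough to sketch. Here is a minimal size-based rotation helper; the file names and thresholds are illustrative, not a drop-in replacement for a proper rotation tool:

```typescript
import * as fs from 'fs';

// Minimal size-based log rotation: when the active file exceeds maxBytes,
// shift archives up by one (app.log.1 -> app.log.2, ...) and rename the
// active file to app.log.1. Archives beyond `keep` are deleted.
function rotateIfNeeded(logPath: string, maxBytes: number, keep = 3): void {
  if (!fs.existsSync(logPath) || fs.statSync(logPath).size < maxBytes) {
    return;
  }

  // Drop the oldest archive first so the renames below never collide
  const oldest = `${logPath}.${keep}`;
  if (fs.existsSync(oldest)) fs.unlinkSync(oldest);

  // Shift remaining archives: app.log.2 -> app.log.3, app.log.1 -> app.log.2
  for (let i = keep - 1; i >= 1; i--) {
    const src = `${logPath}.${i}`;
    if (fs.existsSync(src)) fs.renameSync(src, `${logPath}.${i + 1}`);
  }

  // The next write creates a fresh active file
  fs.renameSync(logPath, `${logPath}.1`);
}
```

Run this at startup or on a timer, before (re)opening the log stream. A dedicated tool still wins in practice because it handles the race with a process that is actively writing.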

Conclusion

Effective logging is essential for production operations. Use structured logging, include correlation IDs, aggregate logs centrally, and always redact sensitive data.

Good logs tell the story of what happened—make sure yours are readable.
