Creating a database connection is expensive—TCP handshake, authentication, protocol negotiation. Connection pooling reuses connections, dramatically improving performance.
The Problem#
Without pooling:
Request 1 → Create connection → Query → Close connection
Request 2 → Create connection → Query → Close connection
Request 3 → Create connection → Query → Close connection
Each connection: ~50-100ms overhead
With pooling:
Request 1 → Get connection from pool → Query → Return to pool
Request 2 → Get connection from pool → Query → Return to pool
Request 3 → Get connection from pool → Query → Return to pool
Each acquisition from the pool: ~1ms overhead
Node.js with PostgreSQL#
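A minimal sketch of a pool configuration for node-postgres (`pg`). The host, database, and credentials are placeholders; `max` should come from the sizing discussion below.

```javascript
// Pool configuration for node-postgres (pg). Connection details are
// placeholders; tune `max` to your core count and workload.
const poolConfig = {
  host: 'localhost',
  port: 5432,
  database: 'app_db',            // placeholder database name
  user: 'app_user',              // placeholder credentials
  max: 10,                       // maximum connections kept in the pool
  idleTimeoutMillis: 30000,      // close connections idle longer than 30s
  connectionTimeoutMillis: 2000, // fail fast when no connection is free
};

// With pg installed, the pool is created once and shared app-wide:
// const { Pool } = require('pg');
// const pool = new Pool(poolConfig);
// const { rows } = await pool.query('SELECT 1 AS ok');
console.log(poolConfig.max); // 10
```

The pool object should be a module-level singleton; creating a new `Pool` per request defeats the purpose of pooling.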
Pool Size Calculation#
Formula: connections = (core_count * 2) + effective_spindle_count
For SSDs (no spindles): connections ≈ core_count * 2
Example:
- 4 CPU cores
- SSD storage
- connections = 4 * 2 = 8
But also consider:
- Number of application instances
- Database server limits
- Query patterns
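The formula above reduces to a one-line helper. Remember that the result is a budget for the whole deployment, so divide it across application instances.

```javascript
// Pool-size heuristic from the formula above:
// connections = (core_count * 2) + effective_spindle_count
function poolSize(coreCount, spindleCount = 0) {
  return coreCount * 2 + spindleCount;
}

console.log(poolSize(4));    // 8  (4 cores, SSD storage: no spindles)
console.log(poolSize(4, 2)); // 10 (4 cores, 2 spinning disks)
```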
Connection Lifecycle#
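The lifecycle is acquire → use → release. A minimal in-memory pool (an illustration, not the `pg` implementation) makes the reuse visible: a released connection is handed back out instead of a new one being created.

```javascript
// Minimal illustrative pool: connections are reused instead of being
// torn down after each request.
class SimplePool {
  constructor(max) {
    this.max = max;
    this.idle = [];  // connections waiting to be reused
    this.total = 0;  // connections created so far
  }
  acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse an idle one
    if (this.total >= this.max) throw new Error('pool exhausted');
    this.total += 1;
    return { id: this.total };                        // "create" a connection
  }
  release(conn) {
    this.idle.push(conn);                             // return to the pool
  }
}

const pool = new SimplePool(2);
const c1 = pool.acquire();  // created fresh
pool.release(c1);
const c2 = pool.acquire();  // reused: same object, no new connection
console.log(c1 === c2);     // true
console.log(pool.total);    // 1
```

Real pools add waiting queues, idle timeouts, and connection health checks on top of this core loop.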
Transaction Handling#
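Transactions must run on a single dedicated client, not on `pool.query()`, which may use a different connection per call. A sketch of a helper that wraps the BEGIN/COMMIT/ROLLBACK dance and always returns the client to the pool (the helper name is our own):

```javascript
// Transaction helper: BEGIN, run the work, COMMIT on success,
// ROLLBACK on error -- and always release the client back to the pool.
async function withTransaction(pool, work) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await work(client); // all queries share this client
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // never leak the connection
  }
}
```

Usage: `await withTransaction(pool, (client) => client.query('UPDATE ...'))` — the caller never touches BEGIN/COMMIT directly, so a forgotten ROLLBACK becomes impossible.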
Connection Pool Monitoring#
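With node-postgres, `pool.totalCount`, `pool.idleCount`, and `pool.waitingCount` expose the pool's state. A small snapshot helper (our own sketch, with a plain object standing in for the pool) shows what to watch for — sustained `waiting > 0` with zero idle connections means the pool is exhausted:

```javascript
// Pool health snapshot built from node-postgres pool counters.
function poolStats(pool) {
  const active = pool.totalCount - pool.idleCount;
  return {
    total: pool.totalCount,     // connections currently open
    idle: pool.idleCount,       // open but unused
    active,                     // checked out by in-flight queries
    waiting: pool.waitingCount, // requests queued for a connection
    exhausted: pool.waitingCount > 0 && pool.idleCount === 0,
  };
}

const stats = poolStats({ totalCount: 10, idleCount: 0, waitingCount: 3 });
console.log(stats.active);    // 10
console.log(stats.exhausted); // true
```

Logging this snapshot periodically, or exporting it as metrics, makes pool exhaustion visible before it turns into request timeouts.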
External Pool (PgBouncer)#
Pool modes:
- session: one server connection is held for the entire client session (little benefit over no pooling)
- transaction: a server connection is assigned only for the duration of each transaction (recommended for most workloads)
- statement: a connection is assigned per statement; breaks multi-statement transactions, so it has limited use
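An illustrative `pgbouncer.ini` fragment showing where the pool mode is set; the database name and limits are placeholders to adjust for your deployment.

```ini
; Illustrative pgbouncer.ini fragment -- names and limits are placeholders.
[databases]
app_db = host=127.0.0.1 port=5432 dbname=app_db

[pgbouncer]
listen_port = 6432
pool_mode = transaction     ; recommended mode from the list above
max_client_conn = 1000      ; client connections PgBouncer will accept
default_pool_size = 20      ; server connections per database/user pair
```

The application then connects to port 6432 instead of 5432; PgBouncer multiplexes those client connections onto the much smaller server-side pool.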
Common Pitfalls#
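The classic pitfall is a leaked connection: an error thrown between `connect()` and `release()` keeps the client checked out forever, and under load the pool silently drains. A sketch of the bug and the `finally`-based fix:

```javascript
// Pitfall: a thrown query error skips release(), leaking the
// connection until the pool is exhausted.
async function leaky(pool) {
  const client = await pool.connect();
  const res = await client.query('SELECT 1'); // throws -> never released
  client.release();
  return res;
}

// Fix: release in `finally` so every code path returns the connection.
async function safe(pool) {
  const client = await pool.connect();
  try {
    return await client.query('SELECT 1');
  } finally {
    client.release();
  }
}
```

Other frequent mistakes follow the same shape: holding a client across slow non-database work, or checking out a client when a one-shot `pool.query()` would do.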
Graceful Shutdown#
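On shutdown the order matters: stop accepting new requests first, then drain the pool. With node-postgres, `pool.end()` waits for checked-out clients to be released before closing connections. A sketch (the registration helper and `exit` parameter are our own, added so the logic is testable):

```javascript
// Graceful shutdown: stop taking work, then drain the pool.
function registerShutdown(server, pool, exit = process.exit) {
  const shutdown = async () => {
    server.close();   // stop accepting new requests
    await pool.end(); // wait for in-flight queries, close all connections
    exit(0);
  };
  process.once('SIGTERM', shutdown);
  process.once('SIGINT', shutdown);
  return shutdown;    // returned for manual invocation / testing
}
```

Pair this with a deployment timeout (e.g. Kubernetes `terminationGracePeriodSeconds`) so a stuck query cannot block shutdown indefinitely.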
Conclusion#
Connection pooling is essential for database performance at scale. Configure pool sizes based on your workload, monitor for exhaustion, and always release connections properly.
For high-traffic applications, consider an external pooler like PgBouncer between your app and database.