Database migrations are one of the riskiest parts of deployment. A bad migration can take down your application or corrupt data. Here's how to migrate safely with zero downtime.
## Migration Fundamentals

### The Problem
Traditional deployment:
1. Stop application
2. Run migrations
3. Start application
→ Downtime during migrations
Zero-downtime requirement:
1. Run migrations (app still running)
2. Deploy new code
3. No interruption
→ Migrations must be backward compatible
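The ordering constraint above can be made concrete in a deploy pipeline: migrations run first, while the old version still serves traffic, and only then does the new code roll out. A minimal sketch, with hypothetical step names (the real commands depend on your stack):

```typescript
// Sketch of a deploy pipeline that enforces migrate-before-rollout.
// Step names and bodies are illustrative, not a real CLI.
type Step = { name: string; run: () => Promise<void> };

async function deploy(steps: Step[]): Promise<string[]> {
  const completed: string[] = [];
  for (const step of steps) {
    await step.run(); // fail fast: a failed migration halts the rollout
    completed.push(step.name);
  }
  return completed;
}

// Order matters: apply backward-compatible schema changes first,
// then ship the code that depends on them.
const pipeline: Step[] = [
  { name: 'migrate', run: async () => { /* e.g. npx prisma migrate deploy */ } },
  { name: 'rollout', run: async () => { /* e.g. roll the app deployment */ } },
];
```

If the migration step throws, the new code never ships, which is exactly the failure mode you want.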
### Migration Tools
```shell
# Prisma
npx prisma migrate dev      # Development
npx prisma migrate deploy   # Production

# Drizzle
npx drizzle-kit generate
npx drizzle-kit migrate

# Raw SQL with versioning
migrations/
  001_create_users.sql
  002_add_email_index.sql
  003_add_profiles.sql
```

## Safe Migration Patterns
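Before walking through the individual patterns, it can pay to lint migrations for known-unsafe statements in CI so risky DDL never reaches review by accident. A minimal sketch; the rule list below is illustrative, not exhaustive:

```typescript
// Sketch of a migration "lint": flag SQL statements known to take
// heavy locks or break code that is still deployed. Rules are
// illustrative assumptions, not a complete safety checker.
const unsafePatterns: { pattern: RegExp; reason: string }[] = [
  { pattern: /ADD COLUMN .* NOT NULL(?!.*DEFAULT)/i,
    reason: 'NOT NULL without DEFAULT fails on existing rows' },
  { pattern: /\bRENAME COLUMN\b/i,
    reason: 'rename breaks code still reading the old name' },
  { pattern: /CREATE INDEX (?!CONCURRENTLY)/i,
    reason: 'non-concurrent index build blocks writes' },
  { pattern: /\bDROP COLUMN\b/i,
    reason: 'drop breaks old code still deployed' },
];

function lintMigration(sql: string): string[] {
  return unsafePatterns
    .filter(({ pattern }) => pattern.test(sql))
    .map(({ reason }) => reason);
}
```

Run it over each new migration file and fail the build when it returns any reasons; the patterns map directly onto the sections that follow.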
### Expand-Contract Pattern
```sql
-- Phase 1: Expand (add new column)
ALTER TABLE users ADD COLUMN email_new VARCHAR(255);

-- Phase 2: Migrate data (background job)
UPDATE users SET email_new = LOWER(email) WHERE email_new IS NULL;

-- Phase 3: Switch application to use email_new

-- Phase 4: Contract (remove old column)
ALTER TABLE users DROP COLUMN email;
ALTER TABLE users RENAME COLUMN email_new TO email;
```

### Adding Columns Safely
```sql
-- ✅ Safe: Add nullable column
ALTER TABLE users ADD COLUMN phone VARCHAR(20);

-- ✅ Safe: Add column with default (non-blocking on Postgres 11+)
ALTER TABLE users ADD COLUMN created_at TIMESTAMP DEFAULT NOW();

-- ❌ Unsafe: Add NOT NULL without default (fails on existing rows)
ALTER TABLE users ADD COLUMN required_field VARCHAR(255) NOT NULL;

-- ✅ Safe alternative:
ALTER TABLE users ADD COLUMN required_field VARCHAR(255);
-- Backfill data
UPDATE users SET required_field = 'default_value' WHERE required_field IS NULL;
-- Then add the constraint
ALTER TABLE users ALTER COLUMN required_field SET NOT NULL;
```

### Renaming Columns
```sql
-- Don't do this directly:
ALTER TABLE users RENAME COLUMN name TO full_name; -- Breaks old code!

-- Instead, use expand-contract:

-- Step 1: Add new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);

-- Step 2: Sync data (trigger keeps both columns in step while
-- old and new code coexist)
CREATE OR REPLACE FUNCTION sync_name_columns()
RETURNS TRIGGER AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    -- Copy whichever column the writer populated
    NEW.full_name := COALESCE(NEW.full_name, NEW.name);
    NEW.name := COALESCE(NEW.name, NEW.full_name);
  ELSIF NEW.name IS DISTINCT FROM OLD.name THEN
    NEW.full_name := NEW.name;
  ELSIF NEW.full_name IS DISTINCT FROM OLD.full_name THEN
    NEW.name := NEW.full_name;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sync_name_trigger
  BEFORE INSERT OR UPDATE ON users
  FOR EACH ROW EXECUTE FUNCTION sync_name_columns();

-- Step 3: Backfill existing data
UPDATE users SET full_name = name WHERE full_name IS NULL;

-- Step 4: Deploy code using full_name

-- Step 5: Remove old column and trigger
DROP TRIGGER sync_name_trigger ON users;
DROP FUNCTION sync_name_columns();
ALTER TABLE users DROP COLUMN name;
```

### Adding Indexes
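One operational wrinkle with concurrent index builds: if `CREATE INDEX CONCURRENTLY` fails partway (a deadlock, or duplicate rows for a unique index), Postgres leaves behind an `INVALID` index that must be dropped before retrying. A hedged sketch of a retry wrapper; `execute` stands in for your driver's query call (e.g. `pg`'s `client.query`):

```typescript
// Sketch: retry CONCURRENTLY index builds, cleaning up the INVALID
// index Postgres leaves behind when a concurrent build fails.
// `execute` is an assumed stand-in for a real driver call.
async function createIndexWithRetry(
  execute: (sql: string) => Promise<void>,
  indexName: string,
  tableAndCols: string,
  maxAttempts = 3,
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await execute(`CREATE INDEX CONCURRENTLY ${indexName} ON ${tableAndCols}`);
      return attempt; // success: report which attempt worked
    } catch (err) {
      // A failed concurrent build leaves an INVALID index behind;
      // drop it so the retry starts clean
      await execute(`DROP INDEX IF EXISTS ${indexName}`);
      if (attempt === maxAttempts) throw err;
    }
  }
  throw new Error('unreachable');
}
```

Because `CONCURRENTLY` cannot run inside a transaction, a wrapper like this belongs in a standalone script, not in your ordinary migration files.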
```sql
-- ❌ Blocks writes for the duration of the build
CREATE INDEX idx_users_email ON users(email);

-- ✅ Non-blocking (Postgres)
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);

-- Note: CONCURRENTLY cannot run inside a transaction,
-- so run it separately from other migrations
```

### Dropping Tables/Columns
```sql
-- Never drop immediately!

-- Step 1: Stop writing to the column (code change)
-- Step 2: Deploy and verify nothing still reads it
-- Step 3: Drop the column in a later migration

-- For safety, rename first:
ALTER TABLE users RENAME COLUMN deprecated_field TO _deprecated_field;

-- Wait a deploy cycle, then drop:
ALTER TABLE users DROP COLUMN _deprecated_field;
```

## Data Migrations
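Schema changes are only half the job; moving the data is usually where the time and the risk go. Long-running data migrations should be idempotent and resumable, so a crash halfway through can simply be re-run. A minimal sketch of a checkpointed batch runner; the `Checkpoints` interface is an assumption (in production it would be a small table or key-value entry, not memory):

```typescript
// Sketch: a resumable batch runner. It persists the last processed
// key after each batch, so a crashed migration restarts where it
// stopped rather than from the beginning.
interface Checkpoints {
  get(name: string): string | undefined;
  set(name: string, value: string): void;
}

async function runResumable(
  name: string,
  checkpoints: Checkpoints,
  fetchBatch: (afterId: string) => Promise<{ id: string }[]>,
  processBatch: (batch: { id: string }[]) => Promise<void>,
): Promise<number> {
  let lastId = checkpoints.get(name) ?? '';
  let processed = 0;
  while (true) {
    const batch = await fetchBatch(lastId);
    if (batch.length === 0) break;
    await processBatch(batch);
    lastId = batch[batch.length - 1].id;
    checkpoints.set(name, lastId); // checkpoint after every batch
    processed += batch.length;
  }
  return processed;
}
```

The background-job example below fits this shape: `fetchBatch` is the keyset query, `processBatch` is the per-row update.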
### Background Jobs
```typescript
// Large data migrations should run as background jobs
class MigrateUserEmails {
  private batchSize = 1000;

  async run(): Promise<void> {
    let lastId = '';

    while (true) {
      const users = await db.users.findMany({
        where: {
          id: { gt: lastId },
          emailNormalized: null,
        },
        take: this.batchSize,
        orderBy: { id: 'asc' },
      });

      if (users.length === 0) break;

      // updateMany applies a single value to every matched row, so
      // per-row values need individual updates, batched in a transaction
      await db.$transaction(
        users.map(u =>
          db.users.update({
            where: { id: u.id },
            data: { emailNormalized: u.email.toLowerCase() },
          })
        )
      );

      lastId = users[users.length - 1].id;

      // Rate limit to avoid saturating the database
      await sleep(100);
    }
  }
}
```

### Dual Writes
```typescript
// During migration, write to both old and new
class UserService {
  async updateEmail(userId: string, email: string): Promise<void> {
    await db.users.update({
      where: { id: userId },
      data: {
        email: email,                         // Old column
        emailNormalized: email.toLowerCase(), // New column
      },
    });
  }
}
```

## Rollback Strategies
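Even with careful forward patterns, you need a way back. Conceptually, a rollback is just the applied migrations' `down` steps replayed newest-first, down to a known-good version. A minimal sketch; the `Migration` shape is an assumption, loosely modeled on tools that keep explicit up/down pairs:

```typescript
// Sketch: roll back applied migrations, newest first, down to (but
// not including) a target version. The Migration shape is illustrative.
interface Migration {
  version: number;
  up: () => Promise<void>;
  down: () => Promise<void>;
}

async function rollbackTo(applied: Migration[], targetVersion: number): Promise<number[]> {
  const reverted: number[] = [];
  const newestFirst = [...applied].sort((a, b) => b.version - a.version);
  for (const m of newestFirst) {
    if (m.version <= targetVersion) break; // reached the known-good version
    await m.down();
    reverted.push(m.version);
  }
  return reverted;
}
```

Note the ordering: reverting oldest-first would try to undo a change that later migrations still depend on.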
### Reversible Migrations
```typescript
// Migration with an explicit down step (Knex/Umzug-style API;
// note that Prisma Migrate itself has no down command)
export async function up(db: Database): Promise<void> {
  await db.execute(`
    ALTER TABLE users ADD COLUMN phone VARCHAR(20)
  `);
}

export async function down(db: Database): Promise<void> {
  await db.execute(`
    ALTER TABLE users DROP COLUMN phone
  `);
}
```

### Point-in-Time Recovery
```shell
# Postgres: take a snapshot before the migration
pg_dump -Fc mydb > pre_migration_backup.dump

# If the migration fails, restore it
# (--clean drops the changed objects before recreating them)
pg_restore --clean -d mydb pre_migration_backup.dump
```

### Feature Flags
```typescript
// Gate new code paths
class UserRepository {
  async getUser(id: string): Promise<User> {
    const user = await db.users.findById(id);

    if (featureFlags.isEnabled('new-user-schema')) {
      return this.mapNewSchema(user);
    }

    return this.mapOldSchema(user);
  }
}
```

## Testing Migrations
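One cheap invariant worth asserting for every reversible migration: up, then down, then up again should succeed. It catches `down` steps that fail outright or don't restore enough state for `up` to run a second time. A sketch; the up/down migration shape here is an assumption:

```typescript
// Sketch: verify a migration round-trips. Catches down() steps that
// throw or leave state that makes a re-run of up() fail.
// The migration shape is illustrative.
interface Migration {
  up: () => Promise<void>;
  down: () => Promise<void>;
}

async function assertRoundTrips(m: Migration): Promise<boolean> {
  await m.up();
  await m.down();
  await m.up(); // must succeed after a rollback
  return true;
}
```

Run this against a throwaway test database, never production.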
### Local Testing
```shell
# Test against production-like data
pg_dump production_db | psql test_db

# Run migration
npx prisma migrate deploy

# Verify
psql test_db -c "SELECT * FROM users LIMIT 10"
```

### Migration Tests
```typescript
describe('Migration: add_phone_column', () => {
  beforeAll(async () => {
    // Reset to previous state
    await db.migrate.reset();
    await db.migrate.up({ to: 'previous_migration' });

    // Seed test data
    await db.users.create({ email: 'test@example.com' });
  });

  it('adds phone column', async () => {
    await db.migrate.up({ to: 'add_phone_column' });

    const columns = await db.query(`
      SELECT column_name FROM information_schema.columns
      WHERE table_name = 'users'
    `);

    expect(columns.map(c => c.column_name)).toContain('phone');
  });

  it('preserves existing data', async () => {
    const user = await db.users.findFirst({
      where: { email: 'test@example.com' },
    });

    expect(user).toBeDefined();
    expect(user.email).toBe('test@example.com');
  });
});
```

## Production Checklist
### Pre-Migration

- [ ] Backup database
- [ ] Test migration on staging with production data
- [ ] Review migration for locking queries
- [ ] Estimate migration duration
- [ ] Schedule during low-traffic period
- [ ] Alert team

### During Migration

- [ ] Monitor database metrics
- [ ] Watch application error rates
- [ ] Be ready to roll back

### Post-Migration

- [ ] Verify data integrity
- [ ] Check application functionality
- [ ] Monitor for delayed issues
- [ ] Update documentation

## Common Pitfalls
### Lock Timeouts
```sql
-- Set a statement timeout to prevent long-held locks
SET statement_timeout = '30s';

-- Or bound how long DDL waits to acquire a lock
SET lock_timeout = '10s';
```

### Large Table Alterations
```sql
-- For large tables, consider a table swap:

-- 1. Create the new table
CREATE TABLE users_new (LIKE users INCLUDING ALL);
ALTER TABLE users_new ADD COLUMN new_field VARCHAR(255);

-- 2. Copy data in batches
INSERT INTO users_new SELECT *, NULL FROM users WHERE id BETWEEN 1 AND 10000;

-- 3. Swap tables atomically
BEGIN;
ALTER TABLE users RENAME TO users_old;
ALTER TABLE users_new RENAME TO users;
COMMIT;

-- 4. Drop the old table later, once verified
DROP TABLE users_old;
```

## Conclusion
Safe database migrations require planning and discipline. Use the expand-contract pattern, test thoroughly, and always have a rollback plan. The extra effort prevents outages and data loss.
Remember: it's better to run three small migrations than one big, risky one.