Tags: database, optimization, performance, postgresql, ai tools

Database Optimization Using AI: From Query Analysis to Schema Design

How AI tools analyze database performance, suggest optimizations, and help design schemas that scale from day one.

Bootspring Team
Engineering
February 17, 2026
8 min read

Database performance issues are some of the most frustrating problems to debug. Queries that work fine in development crawl in production. Tables that seemed well-designed become bottlenecks. AI tools can help identify and fix these issues before they become emergencies.

## AI-Powered Query Analysis

### Identifying Slow Queries

AI analyzes query logs and identifies patterns:

```sql
-- AI Analysis Report

Top Slow Queries (Last 24 Hours):

1. GET /api/orders (avg: 2,340ms, calls: 15,234)
   SELECT o.*, u.name, u.email,
          array_agg(oi.*) as items
   FROM orders o
   JOIN users u ON o.user_id = u.id
   JOIN order_items oi ON o.id = oi.order_id
   WHERE o.status = 'pending'
   GROUP BY o.id, u.id
   ORDER BY o.created_at DESC
   LIMIT 50;

   Issues:
   - Missing index on orders.status
   - array_agg without limit causes memory pressure
   - Sorting on non-indexed column

2. POST /api/search (avg: 1,890ms, calls: 8,432)
   SELECT * FROM products
   WHERE name ILIKE '%search_term%'
      OR description ILIKE '%search_term%';

   Issues:
   - ILIKE with leading wildcard prevents index use
   - Full table scan on 2M rows
   - Recommend: Full-text search or search service
```

### Automatic EXPLAIN Analysis

AI interprets EXPLAIN output in plain language:

```sql
EXPLAIN ANALYZE
SELECT *
FROM users
JOIN orders ON users.id = orders.user_id
WHERE orders.created_at > '2026-01-01'
  AND users.country = 'US';
```

```
AI Interpretation:

Query Performance: Poor (1,234ms)

Execution Flow:
1. Sequential scan on users (12M rows) - PROBLEM
   Filtering by country='US' returns 3M rows
   No index on country column
2. Nested loop join with orders - INEFFICIENT
   For each of 3M users, scanning the orders table
   Total: 3M separate index lookups = slow
3. Filter on created_at works well (using index)

Root Cause:
Missing index on users.country causes a full table scan,
and the nested loop join amplifies the problem.

Recommended Fix:
CREATE INDEX idx_users_country ON users(country);
Expected Improvement: ~50x faster (25ms estimated)

Alternative Approach:
If country queries are common with date ranges:
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);
CREATE INDEX idx_users_country ON users(country);
This enables a merge join strategy.
```

## Schema Design Assistance

### Analyzing Requirements

AI translates requirements to schema design:

```
Prompt:
Design a database schema for an e-commerce platform with:
- Products with variants (size, color)
- Inventory tracking per variant per warehouse
- Order history with line items
- Customer reviews with ratings
- Price history for analytics

We use PostgreSQL and expect:
- 100K products, 1M variants
- 10M orders/year
- High read volume on products
- Analytics queries on price history
```

AI Output:

```sql
-- Core Product Tables

CREATE TABLE products (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    slug VARCHAR(255) UNIQUE NOT NULL,
    description TEXT,
    category_id UUID REFERENCES categories(id),
    brand_id UUID REFERENCES brands(id),
    base_price DECIMAL(10,2) NOT NULL,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),

    -- Denormalized for read performance
    avg_rating DECIMAL(3,2),
    review_count INTEGER DEFAULT 0,

    -- Full-text search
    search_vector TSVECTOR GENERATED ALWAYS AS (
        setweight(to_tsvector('english', coalesce(name, '')), 'A') ||
        setweight(to_tsvector('english', coalesce(description, '')), 'B')
    ) STORED
);

CREATE INDEX idx_products_category ON products(category_id) WHERE is_active;
CREATE INDEX idx_products_search ON products USING GIN(search_vector);
CREATE INDEX idx_products_slug ON products(slug);

CREATE TABLE product_variants (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    product_id UUID NOT NULL REFERENCES products(id) ON DELETE CASCADE,
    sku VARCHAR(100) UNIQUE NOT NULL,
    attributes JSONB NOT NULL, -- {"size": "M", "color": "blue"}
    price_modifier DECIMAL(10,2) DEFAULT 0,
    weight_grams INTEGER,
    is_active BOOLEAN DEFAULT true,

    CONSTRAINT valid_attributes CHECK (jsonb_typeof(attributes) = 'object')
);

CREATE INDEX idx_variants_product ON product_variants(product_id) WHERE is_active;
CREATE INDEX idx_variants_sku ON product_variants(sku);
CREATE INDEX idx_variants_attributes ON product_variants USING GIN(attributes);

-- Inventory with warehouse support

CREATE TABLE warehouses (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(100) NOT NULL,
    location POINT,
    is_active BOOLEAN DEFAULT true
);

CREATE TABLE inventory (
    variant_id UUID NOT NULL REFERENCES product_variants(id),
    warehouse_id UUID NOT NULL REFERENCES warehouses(id),
    quantity INTEGER NOT NULL DEFAULT 0,
    reserved INTEGER NOT NULL DEFAULT 0,
    reorder_point INTEGER DEFAULT 10,
    updated_at TIMESTAMPTZ DEFAULT NOW(),

    PRIMARY KEY (variant_id, warehouse_id),
    CONSTRAINT positive_quantity CHECK (quantity >= 0),
    CONSTRAINT valid_reserved CHECK (reserved >= 0 AND reserved <= quantity)
);

-- Efficient total inventory lookup
CREATE VIEW inventory_totals AS
SELECT
    variant_id,
    SUM(quantity) as total_quantity,
    SUM(quantity - reserved) as available
FROM inventory
GROUP BY variant_id;

-- Price History (append-only for analytics)

CREATE TABLE price_history (
    id BIGSERIAL,
    variant_id UUID NOT NULL REFERENCES product_variants(id),
    price DECIMAL(10,2) NOT NULL,
    effective_from TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id),
    -- On partitioned tables the primary key must include the partition key
    PRIMARY KEY (id, effective_from)
) PARTITION BY RANGE (effective_from);

-- Create partitions by month
CREATE TABLE price_history_2026_01 PARTITION OF price_history
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE price_history_2026_02 PARTITION OF price_history
    FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
-- Add more as needed

CREATE INDEX idx_price_history_variant ON price_history(variant_id, effective_from DESC);
```
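The availability math in the `inventory_totals` view (sellable stock is `quantity - reserved`, summed across warehouses) often needs mirroring in application code, e.g. for cart checks. A minimal sketch, assuming simplified row objects rather than any real ORM types:

```typescript
// Mirrors the inventory_totals view in memory: per-variant totals across
// warehouses. InventoryRow is a hypothetical simplification of the table.
interface InventoryRow {
  variantId: string;
  warehouseId: string;
  quantity: number;
  reserved: number;
}

function inventoryTotals(
  rows: InventoryRow[]
): Map<string, { total: number; available: number }> {
  const totals = new Map<string, { total: number; available: number }>();
  for (const r of rows) {
    const t = totals.get(r.variantId) ?? { total: 0, available: 0 };
    t.total += r.quantity;
    t.available += r.quantity - r.reserved; // reserved stock is not sellable
    totals.set(r.variantId, t);
  }
  return totals;
}
```

The same invariant the `valid_reserved` CHECK enforces in the database (`reserved <= quantity`) guarantees `available` never goes negative here.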

### Index Recommendations

AI suggests indexes based on query patterns:

```javascript
const indexAnalysis = {
  existingIndexes: 12,
  unusedIndexes: [
    {
      index: 'idx_orders_legacy_status',
      size: '234 MB',
      lastUsed: 'never',
      recommendation: 'DROP - never used, wastes space'
    }
  ],
  missingIndexes: [
    {
      query: 'SELECT * FROM orders WHERE user_id = ? AND status = ?',
      frequency: '45,000/day',
      currentCost: '450ms avg',
      suggestion: 'CREATE INDEX idx_orders_user_status ON orders(user_id, status)',
      estimatedImprovement: '95%'
    },
    {
      query: 'SELECT * FROM products WHERE category_id = ? ORDER BY created_at DESC',
      frequency: '120,000/day',
      currentCost: '120ms avg',
      suggestion: 'CREATE INDEX idx_products_category_created ON products(category_id, created_at DESC)',
      estimatedImprovement: '85%'
    }
  ],
  duplicateIndexes: [
    {
      indexes: ['idx_users_email', 'idx_users_email_lower'],
      recommendation: 'Keep functional index, drop plain index'
    }
  ]
};
```
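One heuristic behind duplicate-index reports like this: a B-tree index is usually redundant when its column list is a leading prefix of another index on the same table. A sketch of that check, using hypothetical shapes rather than any Bootspring API:

```typescript
// Flags indexes whose columns are a strict leading prefix of another
// index on the same table; those rarely earn their write overhead.
interface IndexDef {
  name: string;
  table: string;
  columns: string[];
}

function redundantIndexes(indexes: IndexDef[]): string[] {
  const redundant: string[] = [];
  for (const a of indexes) {
    for (const b of indexes) {
      if (a === b || a.table !== b.table) continue;
      const isPrefix =
        a.columns.length < b.columns.length &&
        a.columns.every((c, i) => c === b.columns[i]);
      if (isPrefix) redundant.push(a.name);
    }
  }
  return redundant;
}
```

For example, `orders(user_id)` is covered by `orders(user_id, status)`; the prefix rule does not apply to functional or partial indexes, which need case-by-case review.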

## Query Optimization

### AI-Suggested Rewrites

```sql
-- Original slow query
SELECT
    p.id,
    p.name,
    (SELECT COUNT(*) FROM orders WHERE product_id = p.id) as order_count,
    (SELECT AVG(rating) FROM reviews WHERE product_id = p.id) as avg_rating
FROM products p
WHERE p.category_id = 'cat_123'
ORDER BY order_count DESC
LIMIT 20;

-- AI-optimized version
WITH product_stats AS (
    SELECT
        product_id,
        COUNT(*) as order_count
    FROM orders
    WHERE product_id IN (
        SELECT id FROM products WHERE category_id = 'cat_123'
    )
    GROUP BY product_id
),
review_stats AS (
    SELECT
        product_id,
        AVG(rating) as avg_rating
    FROM reviews
    WHERE product_id IN (
        SELECT id FROM products WHERE category_id = 'cat_123'
    )
    GROUP BY product_id
)
SELECT
    p.id,
    p.name,
    COALESCE(ps.order_count, 0) as order_count,
    rs.avg_rating
FROM products p
LEFT JOIN product_stats ps ON p.id = ps.product_id
LEFT JOIN review_stats rs ON p.id = rs.product_id
WHERE p.category_id = 'cat_123'
ORDER BY order_count DESC
LIMIT 20;

-- Performance comparison:
-- Original: 2,340ms (correlated subqueries execute per row)
-- Optimized: 45ms (single scan of each table)
```

### N+1 Query Detection

AI identifies N+1 patterns in application code:

```typescript
// Detected N+1 pattern in: src/services/orders.ts

// Current code (N+1):
async function getOrdersWithCustomers() {
  const orders = await db.order.findMany(); // 1 query

  for (const order of orders) {
    order.customer = await db.customer.findUnique({
      where: { id: order.customerId }
    }); // N queries
  }

  return orders;
}
// Total queries: 1 + N (could be 1,001 for 1,000 orders)

// AI-suggested fix:
async function getOrdersWithCustomers() {
  const orders = await db.order.findMany({
    include: {
      customer: true
    }
  }); // 1-2 queries (JOIN, or a second query with an IN clause)

  return orders;
}
// Total queries: 1-2
```
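When the ORM's `include` isn't an option (raw SQL, a custom data layer), the same fix can be applied by hand: collect the foreign keys, fetch them once with an `IN` list, and join in memory. A sketch with a hypothetical `fetchCustomersByIds` loader:

```typescript
interface Customer { id: string; name: string }
interface Order { id: string; customerId: string; customer?: Customer | null }

// Batched replacement for the N+1 loop: one lookup for all distinct
// customer ids, then an in-memory join via a Map.
async function attachCustomers(
  orders: Order[],
  fetchCustomersByIds: (ids: string[]) => Promise<Customer[]> // e.g. WHERE id IN (...)
): Promise<Order[]> {
  const ids = [...new Set(orders.map(o => o.customerId))];
  const customers = await fetchCustomersByIds(ids); // single round trip
  const byId = new Map(customers.map(c => [c.id, c] as const));
  return orders.map(o => ({ ...o, customer: byId.get(o.customerId) ?? null }));
}
```

Deduplicating the ids keeps the `IN` list small when many orders share a customer; for very large lists, chunking into batches of a few hundred ids avoids oversized queries.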

## Performance Monitoring

### AI-Driven Alerts

```yaml
# Database performance monitoring config
monitoring:
  ai_analysis: true

  alerts:
    - name: slow-query-spike
      condition: avg_query_time > baseline * 2
      ai_action: analyze_query_plan_changes

    - name: connection-pool-pressure
      condition: active_connections > pool_size * 0.8
      ai_action: identify_long_running_queries

    - name: index-bloat
      condition: bloat_ratio > 0.3
      ai_action: recommend_maintenance_window

    - name: disk-space
      condition: disk_usage > 80%
      ai_action: identify_large_tables_for_archival

  daily_report:
    - top_slow_queries
    - index_usage_stats
    - table_size_changes
    - optimization_opportunities
```
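The alert conditions above are plain threshold expressions; only the `ai_action` follow-up needs a model. A minimal evaluator sketch (metric and alert names mirror the config, but the shapes are hypothetical):

```typescript
// Evaluates the two numeric conditions from the monitoring config
// against a snapshot of current metrics; returns the fired alert names.
interface Metrics {
  avgQueryTime: number;     // ms
  baseline: number;         // ms, rolling baseline
  activeConnections: number;
  poolSize: number;
}

function firedAlerts(m: Metrics): string[] {
  const alerts: string[] = [];
  if (m.avgQueryTime > m.baseline * 2) alerts.push('slow-query-spike');
  if (m.activeConnections > m.poolSize * 0.8) alerts.push('connection-pool-pressure');
  return alerts;
}
```

Keeping the triggers deterministic makes them cheap to run every scrape interval; the expensive AI analysis only runs once a trigger fires.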

### Automatic Performance Reports

```markdown
## Daily Database Performance Report

### Overall Health: Good (8.2/10)

### Query Performance
- Avg response time: 23ms (baseline: 25ms) ✓
- 95th percentile: 89ms (baseline: 100ms) ✓
- Slow queries (>1s): 12 (down from 45 yesterday) ✓

### Issues Identified

1. **Table bloat: orders**
   - Bloat ratio: 35%
   - Dead tuples: 2.3M
   - Action: VACUUM ANALYZE scheduled for tonight

2. **Growing query time: product search**
   - Yesterday: 45ms avg
   - Today: 67ms avg (+49%)
   - Cause: Table grew 15%, statistics outdated
   - Action: ANALYZE products; scheduled

### Optimization Opportunities

1. Add composite index for common filter:

       CREATE INDEX idx_orders_status_date
         ON orders(status, created_at DESC)
         WHERE status IN ('pending', 'processing');

   Estimated impact: 12,000 queries/day, -40ms each

### Capacity Planning
- Current size: 234 GB
- Growth rate: 1.2 GB/day
- Estimated 80% capacity: 45 days
- Recommendation: Plan storage expansion
```
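The "45 days to 80% capacity" figure is simple linear extrapolation. A sketch of the arithmetic, assuming a total capacity of about 360 GB (back-derived from the report's numbers, not stated in it):

```typescript
// Days until the database reaches 80% of total capacity, assuming
// linear growth. All figures in GB; capacityGb is an assumed input.
function daysUntil80Percent(
  currentGb: number,
  growthGbPerDay: number,
  capacityGb: number
): number {
  const thresholdGb = capacityGb * 0.8;
  return Math.max(0, Math.round((thresholdGb - currentGb) / growthGbPerDay));
}
```

With 234 GB at 1.2 GB/day against a 360 GB volume: (288 − 234) / 1.2 = 45 days, matching the report. Linear extrapolation understates the risk if growth is accelerating, so treat it as a floor, not a forecast.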
## Best Practices

### 1. Regular Analysis

```bash
# Weekly AI analysis
bootspring db analyze \
  --connection $DATABASE_URL \
  --output ./reports/db-health.json \
  --suggest-fixes
```

### 2. Pre-Deploy Checks

```yaml
# CI/CD pipeline integration
database-check:
  script:
    - bootspring db migration-analyze ./migrations/pending/
    # Checks for:
    # - Missing indexes for new foreign keys
    # - Potentially slow migrations
    # - Breaking changes
    # - Lock contention risks
```

### 3. Query Review

```javascript
// Add to PR template
const queryReviewChecklist = [
  'Queries have been EXPLAIN ANALYZEd',
  'No N+1 queries introduced',
  'Appropriate indexes exist or are added',
  'Large table scans are avoided',
  'Pagination is implemented for list queries'
];
```

## Conclusion

AI transforms database optimization from a specialized skill to an accessible practice. Use it to:

  1. Identify slow queries and their root causes
  2. Design schemas that scale from the start
  3. Optimize queries with intelligent rewrites
  4. Monitor performance continuously
  5. Plan capacity before problems arise

Your database doesn't have to be a bottleneck.


Bootspring's database agents analyze your queries in real-time and suggest optimizations before performance issues impact users.
