Serverless computing lets you run code without provisioning servers. You pay only for compute time used, and scaling is automatic. Here's how to leverage serverless effectively.
What is Serverless?#
Traditional:
You manage → Servers, OS, Runtime, Application, Scaling
Serverless:
You manage → Application code only
Provider manages → Everything else
Benefits:
- No server management
- Auto-scaling (including to zero)
- Pay per execution
- Built-in high availability
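To make "pay per execution" concrete, here is a rough cost-estimate sketch. The prices are illustrative assumptions based on published AWS Lambda pricing ($0.20 per million requests, $0.0000166667 per GB-second); check the current pricing page before relying on them:

```typescript
// Rough monthly Lambda cost estimate. Prices are illustrative
// assumptions -- check the current AWS pricing page.
const PRICE_PER_REQUEST = 0.20 / 1_000_000; // $ per request
const PRICE_PER_GB_SECOND = 0.0000166667;   // $ per GB-second of compute

function monthlyCost(
  requestsPerMonth: number,
  avgDurationMs: number,
  memoryMb: number
): number {
  // Compute is billed in GB-seconds: duration x allocated memory
  const gbSeconds =
    requestsPerMonth * (avgDurationMs / 1000) * (memoryMb / 1024);
  return requestsPerMonth * PRICE_PER_REQUEST + gbSeconds * PRICE_PER_GB_SECOND;
}

// 1M requests/month at 100ms average on 256MB: well under a dollar
console.log(monthlyCost(1_000_000, 100, 256).toFixed(2));
```

At low or bursty volume this rounds to pennies, which is what makes scale-to-zero attractive.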
AWS Lambda Basics#
Hello World#
```ts
// handler.ts
import { APIGatewayProxyHandler } from 'aws-lambda';

export const hello: APIGatewayProxyHandler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello, World!',
      input: event,
    }),
  };
};
```
Serverless Framework Configuration#
```yaml
# serverless.yml
service: my-api

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  memorySize: 256
  timeout: 10

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /hello
          method: get

  processOrder:
    handler: orders.process
    events:
      - sqs:
          arn: !GetAtt OrderQueue.Arn
          batchSize: 10
```
Environment and Secrets#
```yaml
provider:
  environment:
    NODE_ENV: production
    TABLE_NAME: !Ref UsersTable

functions:
  api:
    handler: handler.api
    environment:
      API_KEY: ${ssm:/my-app/api-key}
      DB_CONNECTION: ${ssm:/my-app/db-connection~true} # Encrypted
```
Event-Driven Architecture#
API Gateway + Lambda#
```ts
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export async function handler(
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> {
  // `resource` holds the route template (e.g. /users/{id});
  // `path` would hold the concrete path (e.g. /users/123)
  const { httpMethod, resource, body, pathParameters, queryStringParameters } = event;

  try {
    switch (`${httpMethod} ${resource}`) {
      case 'GET /users':
        return await listUsers(queryStringParameters);
      case 'GET /users/{id}':
        return await getUser(pathParameters?.id);
      case 'POST /users':
        return await createUser(JSON.parse(body || '{}'));
      default:
        return { statusCode: 404, body: 'Not Found' };
    }
  } catch (error) {
    console.error(error);
    return { statusCode: 500, body: 'Internal Server Error' };
  }
}
```
SQS Processing#
```ts
import { SQSHandler, SQSRecord } from 'aws-lambda';

export const processMessages: SQSHandler = async (event) => {
  const results = await Promise.allSettled(
    event.Records.map(processRecord)
  );

  // Return failed messages for retry. This requires
  // ReportBatchItemFailures to be enabled on the event source mapping
  // (functionResponseType in serverless.yml); otherwise any thrown
  // error retries the entire batch.
  const failures = results
    .map((result, index) => ({ result, record: event.Records[index] }))
    .filter(({ result }) => result.status === 'rejected')
    .map(({ record }) => ({ itemIdentifier: record.messageId }));

  return { batchItemFailures: failures };
};

async function processRecord(record: SQSRecord): Promise<void> {
  const message = JSON.parse(record.body);

  switch (message.type) {
    case 'ORDER_PLACED':
      await handleOrderPlaced(message.data);
      break;
    case 'PAYMENT_RECEIVED':
      await handlePaymentReceived(message.data);
      break;
    default:
      console.warn('Unknown message type:', message.type);
  }
}
```
S3 Event Processing#
```ts
import { S3Handler } from 'aws-lambda';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

export const processUpload: S3Handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Event keys are URL-encoded, with spaces encoded as '+'
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    console.log(`Processing ${bucket}/${key}`);

    const response = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const content = await response.Body?.transformToString();

    // Process file content
    await processFile(key, content);
  }
};
```
Step Functions#
```yaml
# state-machine.yml
StartAt: ValidateOrder
States:
  ValidateOrder:
    Type: Task
    Resource: !GetAtt ValidateOrderFunction.Arn
    Next: ProcessPayment
    Catch:
      - ErrorEquals: [ValidationError]
        Next: OrderFailed

  ProcessPayment:
    Type: Task
    Resource: !GetAtt ProcessPaymentFunction.Arn
    Next: FulfillOrder
    Retry:
      - ErrorEquals: [PaymentProviderError]
        MaxAttempts: 3
        BackoffRate: 2
    Catch:
      - ErrorEquals: [PaymentError]
        Next: OrderFailed

  FulfillOrder:
    Type: Task
    Resource: !GetAtt FulfillOrderFunction.Arn
    Next: OrderComplete

  OrderComplete:
    Type: Succeed

  OrderFailed:
    Type: Fail
```
Database Patterns#
DynamoDB#
```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand, QueryCommand } from '@aws-sdk/lib-dynamodb';

const client = new DynamoDBClient({});
const docClient = DynamoDBDocumentClient.from(client);

const TABLE_NAME = process.env.TABLE_NAME!;

export async function getUser(userId: string) {
  const result = await docClient.send(new GetCommand({
    TableName: TABLE_NAME,
    Key: { PK: `USER#${userId}`, SK: 'PROFILE' },
  }));
  return result.Item;
}

export async function getUserOrders(userId: string) {
  const result = await docClient.send(new QueryCommand({
    TableName: TABLE_NAME,
    KeyConditionExpression: 'PK = :pk AND begins_with(SK, :sk)',
    ExpressionAttributeValues: {
      ':pk': `USER#${userId}`,
      ':sk': 'ORDER#',
    },
  }));
  return result.Items;
}
```
Connection Management#
```ts
// Reuse connections across invocations
import { Pool } from 'pg';

let pool: Pool | null = null;

function getPool(): Pool {
  if (!pool) {
    pool = new Pool({
      connectionString: process.env.DATABASE_URL,
      max: 1, // Single connection per Lambda container
    });
  }
  return pool;
}

export async function handler(event: any) {
  const db = getPool();
  const result = await db.query('SELECT * FROM users WHERE id = $1', [event.userId]);
  return result.rows[0];
}
```
Cold Starts#
Understanding Cold Starts#
Cold start: First invocation after idle period
- Initialize runtime
- Load code
- Initialize dependencies
- Run handler
Warm invocation: Subsequent requests
- Reuse everything
- Run handler only
Cold start factors:
- Runtime (Node.js < Python < Java)
- Package size
- VPC configuration
- Memory allocation
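You can observe this split in your own functions with a module-scope flag: module initialization runs once per container, so the first invocation in each container is the cold one. A minimal sketch (the response shape here is just for illustration):

```typescript
// Module scope runs once per container, i.e. during the cold start
let coldStart = true;
const initializedAt = Date.now();

export async function handler(event: unknown) {
  const wasCold = coldStart;
  coldStart = false; // every later invocation in this container is warm

  return {
    statusCode: 200,
    body: JSON.stringify({
      coldStart: wasCold,
      containerAgeMs: Date.now() - initializedAt,
    }),
  };
}
```

Logging this field lets you measure how often users actually hit cold starts before investing in mitigation.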
Mitigation Strategies#
```ts
// 1. Minimize package size: use esbuild or webpack to bundle

// 2. Initialize outside the handler
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';

// Initialized once per container
const client = new DynamoDBClient({});

export async function handler(event: any) {
  // Handler runs on each invocation
  return client.send(/* ... */);
}
```

```yaml
# 3. Provisioned concurrency
# serverless.yml
functions:
  api:
    handler: handler.api
    provisionedConcurrency: 5 # Keep 5 instances warm

# 4. Keep warm with scheduled events
functions:
  api:
    handler: handler.api
    events:
      - schedule:
          rate: rate(5 minutes)
          input:
            warmer: true
```
Cost Optimization#
Right-Sizing Memory#
```ts
// Memory affects CPU allocation:
// more memory = more CPU = faster execution,
// so sometimes paying for more memory is cheaper overall.

// Test different configurations (per-invocation compute cost):
// 128MB  → 1000ms → $0.0000021
// 512MB  → 300ms  → $0.0000025
// 1024MB → 150ms  → $0.0000025

// Use AWS Lambda Power Tuning to find the sweet spot:
// https://github.com/alexcasalboni/aws-lambda-power-tuning
```
Avoiding Over-Execution#
```ts
import { S3Handler, SQSHandler } from 'aws-lambda';

// Batch processing: handle multiple messages per invocation
// instead of one invocation per message
export const handler: SQSHandler = async (event) => {
  await Promise.all(
    event.Records.map(record => processRecord(record))
  );
};

// Avoid recursive triggers
export const s3Handler: S3Handler = async (event) => {
  for (const record of event.Records) {
    const key = record.s3.object.key;

    // Don't process files you write
    if (key.startsWith('processed/')) {
      continue;
    }

    // Write to a different prefix
    await processAndSave(key, `processed/${key}`);
  }
};
```
When to Use Serverless#
Good fit:
✓ Variable/unpredictable traffic
✓ Event-driven workloads
✓ Microservices
✓ APIs with moderate traffic
✓ Scheduled tasks
✓ Quick prototypes
Poor fit:
✗ Constant high traffic (cost)
✗ Long-running processes (>15 min)
✗ Stateful applications
✗ WebSocket connections
✗ Heavy compute workloads
Conclusion#
Serverless simplifies operations and scales automatically, but requires thinking differently about architecture. Design for events, handle cold starts, and monitor costs.
Start with simple functions, then compose them into larger applications using Step Functions and event-driven patterns.