# Rate Limiting

The Bootspring API implements rate limiting to ensure fair usage and system stability.

## Rate Limits by Plan

| Plan | Requests/Minute | Requests/Hour | Requests/Day |
| --- | --- | --- | --- |
| Free | 20 | 100 | 1,000 |
| Pro | 60 | 1,000 | 10,000 |
| Team | 120 | 5,000 | 50,000 |
| Enterprise | Custom | Custom | Custom |

## Rate Limit Headers

Every API response includes rate limit information:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1705312800
X-RateLimit-Policy: 1000;w=3600
```

### Header Descriptions

| Header | Description |
| --- | --- |
| `X-RateLimit-Limit` | Maximum requests allowed in the window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
| `X-RateLimit-Policy` | Rate limit policy (`requests;w=window_seconds`) |
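
The `X-RateLimit-Policy` value is straightforward to parse client-side. A minimal sketch (the `parseRateLimitPolicy` helper is our own illustration, not part of any SDK):

```typescript
interface RateLimitPolicy {
  limit: number;         // requests allowed per window
  windowSeconds: number; // window length in seconds
}

// Parse a policy value like "1000;w=3600" into its parts.
function parseRateLimitPolicy(value: string): RateLimitPolicy | null {
  const match = /^(\d+);w=(\d+)$/.exec(value);
  if (!match) return null;
  return {
    limit: parseInt(match[1], 10),
    windowSeconds: parseInt(match[2], 10),
  };
}
```

For example, `parseRateLimitPolicy('1000;w=3600')` yields `{ limit: 1000, windowSeconds: 3600 }`, and any value that does not match the documented shape returns `null`.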

## Rate Limit Exceeded

When you exceed the rate limit, you'll receive a `429 Too Many Requests` response:

```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 60
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705312800

{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Please retry after 60 seconds.",
    "retry_after": 60
  }
}
```

## Endpoint-Specific Limits

Some endpoints have additional rate limits:

| Endpoint | Limit | Window |
| --- | --- | --- |
| `POST /auth/device/code` | 5 | 1 minute |
| `POST /auth/device/token` | 20 | 1 minute |
| `POST /v1/track` | 1,000 | 1 minute |
| `GET /v1/agents` | 100 | 1 minute |
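
Clients can mirror these limits locally to avoid hitting 429s on hot endpoints. A sketch using a fixed one-minute window per endpoint key (the `EndpointLimiter` class is illustrative only; the server's accounting remains authoritative):

```typescript
// Fixed-window counter keyed by endpoint, e.g. "POST /v1/track".
class EndpointLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limits: Record<string, number>,
    private windowMs: number = 60_000
  ) {}

  // Returns true if the request may proceed under the local limit.
  tryAcquire(endpoint: string): boolean {
    const limit = this.limits[endpoint];
    if (limit === undefined) return true; // no endpoint-specific limit

    const now = Date.now();
    const entry = this.counts.get(endpoint);

    // Start a fresh window if none exists or the current one has expired.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(endpoint, { windowStart: now, count: 1 });
      return true;
    }

    if (entry.count >= limit) return false;
    entry.count++;
    return true;
  }
}

// Usage: limits mirror the table above.
const endpointLimiter = new EndpointLimiter({
  'POST /auth/device/code': 5,
  'POST /v1/track': 1000,
});
```

A fixed window is a simplification; the server may use a sliding window, so treat a local `true` as advisory rather than a guarantee.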

## Handling Rate Limits

### JavaScript/TypeScript

```typescript
async function fetchWithRetry(
  url: string,
  options: RequestInit,
  maxRetries: number = 3
): Promise<Response> {
  let retries = 0;

  while (retries < maxRetries) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retryAfter = parseInt(
        response.headers.get('Retry-After') || '60',
        10
      );

      console.log(`Rate limited. Retrying in ${retryAfter} seconds...`);
      await sleep(retryAfter * 1000);
      retries++;
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

### Exponential Backoff

```typescript
async function fetchWithExponentialBackoff(
  url: string,
  options: RequestInit,
  maxRetries: number = 5
): Promise<Response> {
  let retries = 0;
  let delay = 1000; // Start with 1 second

  while (retries < maxRetries) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');

      // Use Retry-After if provided, otherwise use exponential backoff
      const waitTime = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : delay;

      console.log(`Rate limited. Waiting ${waitTime / 1000}s...`);
      await sleep(waitTime);

      delay *= 2; // Double the delay for next retry
      retries++;
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}
```

### Python

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_session_with_retries():
    session = requests.Session()

    retry_strategy = Retry(
        total=3,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["HEAD", "GET", "OPTIONS", "POST"],
        backoff_factor=1,
        respect_retry_after_header=True,
    )

    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("https://", adapter)

    return session

# Usage
session = create_session_with_retries()
response = session.get(
    "https://api.bootspring.dev/v1/projects",
    headers={"Authorization": "Bearer bs_xxx"},
)
```

## Monitoring Rate Limits

### Track Remaining Requests

```typescript
class RateLimitTracker {
  private remaining: number = Infinity;
  private resetTime: number = 0;

  updateFromResponse(response: Response): void {
    const remaining = response.headers.get('X-RateLimit-Remaining');
    const reset = response.headers.get('X-RateLimit-Reset');

    if (remaining !== null) {
      this.remaining = parseInt(remaining, 10);
    }
    if (reset !== null) {
      this.resetTime = parseInt(reset, 10) * 1000;
    }
  }

  shouldWait(): boolean {
    return this.remaining <= 0 && Date.now() < this.resetTime;
  }

  getWaitTime(): number {
    if (!this.shouldWait()) return 0;
    return this.resetTime - Date.now();
  }

  getRemainingRequests(): number {
    return this.remaining;
  }
}
```

### Proactive Rate Limiting

```typescript
// Token-bucket limiter. Uses the sleep() helper defined above.
class RateLimiter {
  private tokens: number;
  private lastRefill: number;
  private readonly maxTokens: number;
  private readonly refillRate: number; // tokens per ms

  constructor(requestsPerMinute: number) {
    this.maxTokens = requestsPerMinute;
    this.tokens = requestsPerMinute;
    this.lastRefill = Date.now();
    this.refillRate = requestsPerMinute / 60000;
  }

  async acquire(): Promise<void> {
    this.refill();

    if (this.tokens < 1) {
      const waitTime = (1 - this.tokens) / this.refillRate;
      await sleep(waitTime);
      this.refill();
    }

    this.tokens -= 1;
  }

  private refill(): void {
    const now = Date.now();
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(
      this.maxTokens,
      this.tokens + elapsed * this.refillRate
    );
    this.lastRefill = now;
  }
}

// Usage
const limiter = new RateLimiter(60); // 60 requests per minute

async function makeRequest() {
  await limiter.acquire();
  return fetch('https://api.bootspring.dev/v1/projects', {
    headers: { Authorization: 'Bearer bs_xxx' },
  });
}
```

## Best Practices

### 1. Cache Responses

```typescript
import { LRUCache } from 'lru-cache';

const cache = new LRUCache<string, any>({
  max: 500,
  ttl: 1000 * 60 * 5, // 5 minutes
});

async function getProjects(): Promise<Project[]> {
  const cached = cache.get('projects');
  if (cached) return cached;

  const response = await fetch('/api/projects');
  const data = await response.json();

  cache.set('projects', data);
  return data;
}
```

### 2. Batch Requests

```typescript
// Instead of multiple requests
for (const id of projectIds) {
  await fetch(`/api/projects/${id}`);
}

// Use the batch endpoint
await fetch('/api/projects/batch', {
  method: 'POST',
  body: JSON.stringify({ ids: projectIds }),
});
```

### 3. Use Webhooks

Instead of polling, configure webhooks for real-time updates:

```typescript
// Instead of polling every minute
setInterval(() => checkForUpdates(), 60000);

// Configure a webhook to receive updates
// See /docs/api/endpoints/webhooks
```

### 4. Implement Request Queuing

```typescript
// Serializes requests under a per-minute budget.
// Uses the sleep() helper defined above.
class RequestQueue {
  private queue: Array<() => Promise<any>> = [];
  private processing = false;
  private requestsThisMinute = 0;
  private readonly maxPerMinute: number;

  constructor(maxPerMinute: number) {
    this.maxPerMinute = maxPerMinute;
    setInterval(() => {
      this.requestsThisMinute = 0;
    }, 60000);
  }

  async add<T>(request: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          resolve(await request());
        } catch (error) {
          reject(error);
        }
      });
      this.process();
    });
  }

  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;

    while (this.queue.length > 0) {
      // Budget exhausted: wait for the minute counter to reset.
      if (this.requestsThisMinute >= this.maxPerMinute) {
        await sleep(1000);
        continue;
      }

      const request = this.queue.shift();
      if (request) {
        this.requestsThisMinute++;
        await request();
      }
    }

    this.processing = false;
  }
}
```

## Increasing Rate Limits

### Upgrade Your Plan

Higher-tier plans include increased rate limits. Visit the Pricing page to upgrade.

### Enterprise Custom Limits

Enterprise customers can request custom rate limits tailored to their needs. Contact sales@bootspring.dev.

### Burst Allowance

All plans include a burst allowance of 2x the per-minute limit for short spikes.
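
A client-side token bucket can model this burst behavior by giving the bucket a capacity of twice the steady refill rate. A sketch (the `BurstLimiter` class is our own illustration, assuming the 2x allowance applies to the per-minute window; the server's own accounting is authoritative):

```typescript
// Token bucket whose capacity is 2x the steady per-minute rate,
// mirroring the documented burst allowance.
class BurstLimiter {
  private tokens: number;
  private lastRefill = Date.now();
  private readonly capacity: number;
  private readonly refillRate: number; // tokens per ms

  constructor(requestsPerMinute: number) {
    this.capacity = requestsPerMinute * 2; // burst allowance
    this.tokens = this.capacity;           // start with a full burst budget
    this.refillRate = requestsPerMinute / 60_000;
  }

  // Returns true if a request may be sent now; refills lazily.
  tryAcquire(now: number = Date.now()): boolean {
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillRate);
    this.lastRefill = now;

    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}
```

With `new BurstLimiter(60)`, up to 120 requests can be sent back-to-back, after which the bucket refills at one request per second.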