API requests are rate limited to prevent abuse and ensure service quality for all users.
## Contents

- Rate Limit Tiers
- Rate Limit Headers
- Rate Limit Exceeded Response
- Best Practices
- Rate Limit Strategies
- Monitoring Rate Limits
- Increasing Rate Limits
- Rate Limit FAQs
- Support
## Rate Limit Tiers

### By Endpoint
| Endpoint | Limit | Window |
|---|---|---|
| Events API | 1,000 requests | per minute |
| Batch Events | 100 requests | per minute |
| Analytics API | 100 requests | per minute |
| Users API | 500 requests | per minute |
| Audiences API | 500 requests | per minute |
| Push Notifications | 500 requests | per minute |
| MCP Server | 100 requests | per minute |
### By Plan
| Plan | Hourly Limit | Daily Limit |
|---|---|---|
| Free | 1,000 | 10,000 |
| Pro | 10,000 | 100,000 |
| Enterprise | Custom | Custom |
## Rate Limit Headers

Every API response includes rate limit information in the headers:

```text
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 950
X-RateLimit-Reset: 1640000000
```
### Header Descriptions

| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
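The reset timestamp can be turned into a concrete pause. A minimal sketch, assuming only the three headers documented above; `secondsUntilReset` is an illustrative helper, not part of any SDK:

```javascript
// Derive how long to pause from the X-RateLimit-Reset header.
// `headers` is anything with a get() method, such as the Headers
// object returned by fetch().
function secondsUntilReset(headers, nowMs = Date.now()) {
  const reset = parseInt(headers.get('X-RateLimit-Reset'), 10);
  if (Number.isNaN(reset)) return 0; // header missing or malformed
  return Math.max(0, reset - Math.floor(nowMs / 1000));
}
```

Calling this after any response tells you how many seconds remain until the window rolls over.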
## Rate Limit Exceeded Response

When you exceed the rate limit, you'll receive a 429 status code:

```json
{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded",
  "code": "RATE_LIMIT_EXCEEDED",
  "details": {
    "limit": 1000,
    "window": "1 minute",
    "retry_after": 45
  }
}
```

The `retry_after` field indicates how many seconds to wait before retrying.
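A small sketch of reading that field defensively; `retryDelaySeconds` is a hypothetical helper, and the 60-second fallback is an assumption rather than part of the API contract:

```javascript
// Pull the retry delay out of a parsed 429 response body, falling back
// to a default (assumed here to be 60 s) when the field is absent.
function retryDelaySeconds(body, fallback = 60) {
  if (body && body.details && Number.isFinite(body.details.retry_after)) {
    return body.details.retry_after;
  }
  return fallback;
}
```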
## Best Practices

### 1. Monitor Rate Limit Headers

Check remaining requests before making calls:

```javascript
const response = await fetch(url, options);
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

if (remaining < limit * 0.1) {
  console.warn(`Low rate limit: ${remaining}/${limit} remaining`);
}
```
### 2. Implement Exponential Backoff

Retry with increasing delays:

```javascript
async function makeRequestWithBackoff(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
      // Wait at least as long as the server asks, backing off further
      // on repeated 429s.
      const waitTime = Math.max(retryAfter * 1000, Math.pow(2, i) * 1000);
      console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }

    return response;
  }
  throw new Error('Max retries exceeded');
}
```
### 3. Use Batch Endpoints

Batch multiple operations into single requests:

```javascript
// Instead of multiple single requests
for (const event of events) {
  await trackEvent(event); // 100 requests
}

// Use the batch endpoint
await trackEventsBatch(events); // 1 request
```
### 4. Cache Responses

Cache frequently accessed data:

```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedMetrics(appId, metric) {
  const cacheKey = `${appId}:${metric}`;
  const cached = cache.get(cacheKey);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await fetchMetrics(appId, metric);
  cache.set(cacheKey, { data, timestamp: Date.now() });
  return data;
}
```
### 5. Distribute Requests

Spread requests over time:

```javascript
class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.requests = [];
  }

  async throttle() {
    // Drop timestamps that have fallen outside the window; if the window
    // is still full, wait for the oldest request to expire and re-check.
    while (true) {
      const now = Date.now();
      this.requests = this.requests.filter(time => now - time < this.windowMs);
      if (this.requests.length < this.maxRequests) break;

      const oldestRequest = Math.min(...this.requests);
      const waitTime = this.windowMs - (now - oldestRequest);
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }
    this.requests.push(Date.now());
  }
}

const limiter = new RateLimiter(1000, 60000); // 1,000 per minute

async function makeThrottledRequest(url, options) {
  await limiter.throttle();
  return fetch(url, options);
}
```
### 6. Use Webhooks

For real-time updates, use webhooks instead of polling:

```javascript
// Instead of polling every second
setInterval(async () => {
  const status = await getCampaignStatus(campaignId); // 60 requests/minute
}, 1000);

// Configure a webhook
await configureWebhook({
  url: 'https://your-app.com/webhooks/campaign-status',
  events: ['campaign.completed', 'campaign.failed']
});
```
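On the receiving side, a dispatcher for the subscribed events might look like the following sketch. The `{ event, data }` payload shape and the `handleWebhook` helper are assumptions for illustration; check the actual delivery format before relying on them:

```javascript
// Route an incoming webhook payload to the matching handler.
// Returns true when a handler ran, false for unsubscribed events.
function handleWebhook(payload, handlers) {
  const handler = handlers[payload.event];
  if (!handler) return false;
  handler(payload.data);
  return true;
}
```

Wire this into whatever HTTP framework serves `/webhooks/campaign-status`, returning a 2xx response quickly so deliveries are not retried.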
## Rate Limit Strategies

### Token Bucket Algorithm

Implement client-side rate limiting:

```javascript
class TokenBucket {
  constructor(capacity, refillRate) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillRate = refillRate; // tokens per second
    this.lastRefill = Date.now();
  }

  async consume(tokens = 1) {
    this.refill();

    if (this.tokens < tokens) {
      const waitTime = ((tokens - this.tokens) / this.refillRate) * 1000;
      await new Promise(resolve => setTimeout(resolve, waitTime));
      this.refill();
    }

    this.tokens -= tokens;
  }

  refill() {
    const now = Date.now();
    const timePassed = (now - this.lastRefill) / 1000;
    const tokensToAdd = timePassed * this.refillRate;
    this.tokens = Math.min(this.capacity, this.tokens + tokensToAdd);
    this.lastRefill = now;
  }
}

const bucket = new TokenBucket(1000, 16.67); // 1,000 tokens, refilled at ~1,000/minute

async function makeRequest(url, options) {
  await bucket.consume(1);
  return fetch(url, options);
}
```
## Monitoring Rate Limits

### Track Usage

```javascript
class RateLimitMonitor {
  constructor() {
    this.stats = {
      requests: 0,
      rateLimited: 0,
      lastReset: Date.now()
    };
  }

  recordRequest(response) {
    this.stats.requests++;
    if (response.status === 429) {
      this.stats.rateLimited++;
    }

    const remaining = response.headers.get('X-RateLimit-Remaining');
    const limit = response.headers.get('X-RateLimit-Limit');
    console.log(`Rate limit: ${remaining}/${limit} remaining`);

    if (this.stats.rateLimited > 0) {
      console.warn(`Rate limited ${this.stats.rateLimited} times`);
    }
  }
}
```
## Increasing Rate Limits

If you need higher rate limits:

- **Upgrade your plan**: Pro and Enterprise plans have higher limits
- **Contact support**: Request custom limits for your use case
- **Optimize requests**: Use batch endpoints and caching
Email support@appizer.com with:
- Your current usage patterns
- Expected request volume
- Use case description
- Business justification
## Rate Limit FAQs

**Q: Are rate limits per API key or per account?**
A: Rate limits are per API key.

**Q: Do failed requests count toward rate limits?**
A: Yes, all requests, successful or failed, count toward rate limits.

**Q: Can I check my current rate limit usage?**
A: Yes, check the `X-RateLimit-Remaining` header on any API response.

**Q: What happens if I consistently exceed rate limits?**
A: Consistent abuse may result in temporary API key suspension. Contact support to discuss your needs.

**Q: Are there different limits for different HTTP methods?**
A: No, rate limits apply equally to all HTTP methods.
## Support
For rate limit questions or custom limits:
- Email: support@appizer.com
- Include your API key and use case details