Detailed walkthrough of how API requests are processed through Appizer's infrastructure.
Complete Request Flow
API request lifecycle: from client request to database storage
Request Phases
1. Authentication
Every request must include a valid API key:
headers: {
'Authorization': 'Bearer sk_live_abc123...',
'Content-Type': 'application/json'
}
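A minimal sketch of building these headers before sending a request; the `sk_` prefix check is an assumption based on the example key format above:

```javascript
// Build the request headers shown above. Rejecting keys without the
// sk_ prefix is an illustrative assumption, not a documented rule.
function buildHeaders(apiKey) {
  if (typeof apiKey !== 'string' || !apiKey.startsWith('sk_')) {
    throw new Error('A valid API key is required');
  }
  return {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  };
}
```

Constructing headers in one place makes it harder to forget the `Bearer` prefix or the JSON content type on any individual call.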
2. Validation
Request payload is validated against schema:
Checks performed:
- Required fields present
- Data types correct
- Values within allowed ranges
- Payload size within limits
// Validation rules
{
event: { type: 'string', required: true, maxLength: 100 },
userId: { type: 'string', required: false, maxLength: 255 },
properties: { type: 'object', required: false, maxSize: 10000 }
}
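The checks above can be sketched as a small validator driven by the same rules object; `maxSize` is interpreted here as the serialized byte length of the object, which is an assumption:

```javascript
// Minimal sketch of the schema checks: required fields, type,
// maxLength for strings, and maxSize (serialized length) for objects.
const schema = {
  event:      { type: 'string', required: true,  maxLength: 100 },
  userId:     { type: 'string', required: false, maxLength: 255 },
  properties: { type: 'object', required: false, maxSize: 10000 }
};

function validate(payload, schema) {
  const errors = [];
  for (const [field, rule] of Object.entries(schema)) {
    const value = payload[field];
    if (value === undefined) {
      if (rule.required) errors.push(`${field} is required`);
      continue;
    }
    if (typeof value !== rule.type) {
      errors.push(`${field} must be a ${rule.type}`);
      continue;
    }
    if (rule.maxLength && value.length > rule.maxLength) {
      errors.push(`${field} exceeds ${rule.maxLength} characters`);
    }
    if (rule.maxSize && JSON.stringify(value).length > rule.maxSize) {
      errors.push(`${field} exceeds ${rule.maxSize} bytes`);
    }
  }
  return errors;
}
```

Returning a list of errors rather than throwing on the first one lets the API report every problem in a single 400 response.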
3. Rate Limiting
Requests are checked against rate limits:
Default limits:
- 1000 requests per minute
- 10,000 events per hour
- Burst capacity: 100 requests
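One common way to enforce limits like these is a token bucket; this sketch uses the defaults above (burst capacity of 100 tokens, refilled at 1,000 requests per minute), though the actual algorithm Appizer uses is not specified here:

```javascript
// Token-bucket sketch: the bucket holds up to `capacity` tokens
// (the burst allowance) and refills continuously at `refillPerSecond`.
class TokenBucket {
  constructor(capacity = 100, refillPerSecond = 1000 / 60) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  allow() {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;   // request passes
    }
    return false;    // request is rate limited
  }
}
```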
4. Queueing
Valid requests are queued for async processing:
// Queue message format
{
id: 'msg_abc123',
timestamp: '2024-01-15T10:30:00Z',
payload: {
event: 'purchase_completed',
userId: 'user_123',
properties: { amount: 99.99 }
},
metadata: {
apiKey: 'sk_***',
ip: '192.168.1.1',
userAgent: 'Mozilla/5.0...'
}
}
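A sketch of wrapping a validated request into a message like the one above; the id generation and the exact masking rule are illustrative stand-ins:

```javascript
// Wrap a validated payload into a queue message. The API key is
// masked before queueing, as in the example metadata above.
function toQueueMessage(payload, { apiKey, ip, userAgent }) {
  return {
    id: 'msg_' + Math.random().toString(36).slice(2, 10),
    timestamp: new Date().toISOString(),
    payload,
    metadata: {
      apiKey: apiKey.slice(0, 3) + '***',  // never queue the full key
      ip,
      userAgent
    }
  };
}
```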
5. Processing
Events are processed asynchronously:
- Enrichment: add geo-location, device info, and timestamps
- Transformation: normalize data formats and apply business rules
- Validation: final validation before storage
- Storage: write to the database with indexes
- Cache update: update the real-time metrics cache
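The stages above can be sketched as a pipeline of functions applied in order; only three of the five are stubbed here, and the stage bodies are illustrative stand-ins:

```javascript
// Each stage takes the event and returns an enriched copy.
const stages = [
  function enrich(event) {
    // Enrichment: attach a server-side timestamp (geo/device lookups omitted).
    return { ...event, receivedAt: new Date().toISOString() };
  },
  function transform(event) {
    // Transformation: normalize the event name.
    return { ...event, event: event.event.toLowerCase() };
  },
  function finalValidate(event) {
    // Final validation before storage.
    if (!event.event) throw new Error('event name missing');
    return event;
  }
];

function processEvent(event) {
  return stages.reduce((acc, stage) => stage(acc), event);
}
```

Keeping each stage pure (input event in, new event out) makes the pipeline easy to test and to reorder.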
6. Response
Client receives immediate acknowledgment:
{
"success": true,
"eventId": "evt_abc123",
"timestamp": "2024-01-15T10:30:00Z"
}
Error Handling Flow
Requests that fail in any phase are retried automatically according to the configuration described under Retry Logic below.
Performance Characteristics
Latency Breakdown
| Phase | Typical Latency | Notes |
|---|---|---|
| Authentication | 5-10ms | Cached lookups |
| Validation | 2-5ms | Schema validation |
| Rate limiting | 1-2ms | In-memory check |
| Queueing | 5-10ms | Message queue write |
| Total (sync) | 15-30ms | Client receives response |
| Processing | 50-100ms | Async, not blocking |
| Storage | 20-50ms | Database write |
| Total (async) | 70-150ms | Event fully processed |
Throughput
- Single event: 1,000 requests/sec
- Batch events: 25,000 events/sec
- Peak capacity: 100,000 events/sec
Retry Logic
Failed requests are automatically retried:
Retry configuration:
- Max retries: 4
- Initial delay: 1 second
- Backoff multiplier: 2x
- Max delay: 60 seconds
- Jitter: ±20%
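The schedule implied by this configuration (1s initial delay, doubling each attempt, capped at 60s, with ±20% jitter) can be computed as:

```javascript
// Delay before retry attempt `attempt` (0-based), per the
// configuration above: exponential backoff with a cap and jitter.
function retryDelayMs(attempt, { initialMs = 1000, multiplier = 2, maxMs = 60000, jitter = 0.2 } = {}) {
  const base = Math.min(maxMs, initialMs * Math.pow(multiplier, attempt));
  const spread = base * jitter;
  // Uniform jitter in [base - spread, base + spread].
  return base - spread + Math.random() * 2 * spread;
}
```

Jitter spreads retries out in time so that many clients failing at once do not all retry in the same instant.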
Monitoring
Track request lifecycle metrics:
// Metrics collected
{
requestId: 'req_abc123',
phases: {
authentication: { duration: 8, status: 'success' },
validation: { duration: 3, status: 'success' },
rateLimiting: { duration: 1, status: 'success' },
queueing: { duration: 7, status: 'success' },
processing: { duration: 85, status: 'success' }
},
totalDuration: 104,
status: 'completed'
}
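One way to collect per-phase durations like those above is to wrap each phase in a small timer; the metrics shape follows the example, but the helper itself is illustrative:

```javascript
// Run one phase, recording its duration and status into the
// metrics object in the shape shown above.
function timePhase(metrics, name, fn) {
  const start = Date.now();
  try {
    const result = fn();
    metrics.phases[name] = { duration: Date.now() - start, status: 'success' };
    return result;
  } catch (err) {
    metrics.phases[name] = { duration: Date.now() - start, status: 'error' };
    throw err;
  }
}
```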
Best Practices
Client-Side
- Implement timeouts: Don't wait indefinitely
- Handle 202 responses: Request accepted, processing async
- Retry on failures: Use exponential backoff
- Batch when possible: Reduce request overhead
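The batching advice above can be sketched as a small client-side buffer that flushes once a batch fills; the flush callback and batch size are illustrative:

```javascript
// Buffer events and flush them in batches to cut per-request overhead.
class EventBatcher {
  constructor(flush, maxBatch = 25) {
    this.flush = flush;       // called with an array of buffered events
    this.maxBatch = maxBatch;
    this.buffer = [];
  }

  add(event) {
    this.buffer.push(event);
    if (this.buffer.length >= this.maxBatch) this.flushNow();
  }

  flushNow() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flush(batch);
  }
}
```

A production batcher would also flush on a timer so that a quiet client does not hold events indefinitely.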
Server-Side
- Validate before sending: Catch errors early
- Use idempotency keys: Prevent duplicates
- Monitor latency: Track performance
- Log request IDs: Aid debugging
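Idempotency keys, mentioned above, can be sketched as a lookup of keys already processed; the in-memory `Map` stands in for whatever durable store a real server would use:

```javascript
// Deduplicate by idempotency key: process each key at most once
// and replay the stored result on repeats.
const seen = new Map();

function handleOnce(idempotencyKey, process) {
  if (seen.has(idempotencyKey)) return seen.get(idempotencyKey);
  const result = process();
  seen.set(idempotencyKey, result);
  return result;
}
```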