Sigma Auth implements comprehensive rate limiting to prevent abuse and ensure fair usage across all clients. Rate limits are applied per IP address and per authenticated user using a sliding window algorithm.
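The sliding window behavior can be sketched in a few lines. This is an illustrative model only, not Sigma Auth's actual implementation; the class name and in-memory storage are hypothetical:

```javascript
// Illustrative sliding-window limiter: keeps timestamps of recent requests
// per key (IP or user) and rejects once the window is full.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = new Map(); // key -> array of request times (ms)
  }

  allow(key, now = Date.now()) {
    const cutoff = now - this.windowMs;
    // Drop requests that have aged out of the window
    const recent = (this.timestamps.get(key) || []).filter(t => t > cutoff);
    if (recent.length >= this.limit) {
      this.timestamps.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.timestamps.set(key, recent);
    return true;
  }
}
```

A real deployment would back this with shared storage (e.g. Redis) so limits hold across server instances, but the accounting logic is the same.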
## Rate Limit Tiers

### OAuth Endpoints (Per IP)

Critical authentication endpoints have strict limits to prevent brute-force attacks:

| Endpoint | Limit | Window | Purpose |
|---|---|---|---|
| `POST /token` | 10 requests | 1 minute | Token exchanges |
| `GET /authorize` | 20 requests | 1 minute | Authorization flows |
| `POST /sigma/authorize` | 5 requests | 1 minute | Direct Bitcoin auth |
| `POST /api/auth/signin/sigma` | 5 requests | 1 minute | Sign-in attempts |
### API Endpoints (Per User)

Once authenticated, users have higher limits for regular API usage:

| Endpoint | Limit | Window | Purpose |
|---|---|---|---|
| `GET /userinfo` | 100 requests | 1 minute | User information |
| `GET /api/profile` | 60 requests | 1 minute | Profile data |
| `POST /api/profile/*` | 30 requests | 1 minute | Profile updates |
| `GET /backup` | 20 requests | 1 minute | Backup retrieval |
| `POST /backup` | 10 requests | 1 minute | Backup storage |
### Failed Authentication

Progressive delays are applied after failed authentication attempts:

| Failed Attempts | Delay | Scope |
|---|---|---|
| 3-5 failures | 30 seconds | Per IP |
| 6-10 failures | 2 minutes | Per IP |
| 11+ failures | 10 minutes | Per IP |
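A client can mirror this table to anticipate the server's lockout. This is a hedged sketch; the function name is hypothetical and the thresholds are copied from the table above:

```javascript
// Maps a failed-attempt count to the documented per-IP delay, in seconds.
function lockoutDelaySeconds(failedAttempts) {
  if (failedAttempts >= 11) return 10 * 60; // 11+ failures: 10 minutes
  if (failedAttempts >= 6) return 2 * 60;   // 6-10 failures: 2 minutes
  if (failedAttempts >= 3) return 30;       // 3-5 failures: 30 seconds
  return 0;                                 // fewer than 3: no delay
}
```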
## Rate Limit Headers

Sigma Auth includes rate limit information in response headers, using the widely adopted `X-RateLimit-*` convention alongside the standard `Retry-After` header (RFC 9110):

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1705123456
X-RateLimit-Retry-After: 60
Retry-After: 60
```
### Header Descriptions

- `X-RateLimit-Limit`: Maximum requests allowed in the time window
- `X-RateLimit-Remaining`: Requests remaining in the current window
- `X-RateLimit-Reset`: Unix timestamp when the window resets
- `X-RateLimit-Retry-After`: Seconds to wait before retrying (only present when rate limited)
- `Retry-After`: Standard HTTP header for retry timing
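A small helper can collect these headers from a `fetch` Response into one object. This is a sketch: the header names come from the descriptions above, the function name is hypothetical, and absent headers come back as `null`:

```javascript
// Parses the documented rate limit headers from a Headers object.
function parseRateLimitHeaders(headers) {
  const num = name => {
    const value = headers.get(name);
    return value === null ? null : parseInt(value, 10);
  };
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    resetAt: num('X-RateLimit-Reset'), // Unix timestamp (seconds)
    // Prefer the X- variant, falling back to the standard header
    retryAfter: num('X-RateLimit-Retry-After') ?? num('Retry-After')
  };
}
```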
## Handling 429 Responses

When rate limits are exceeded, Sigma Auth returns a `429 Too Many Requests` response:

```json
{
  "error": "rate_limit_exceeded",
  "error_description": "Too many requests. Please try again in 60 seconds.",
  "retry_after": 60,
  "request_id": "req_abc123"
}
```
### Implementing Retry Logic

```javascript
async function makeAuthenticatedRequest(url, options = {}) {
  const maxRetries = 3;
  let attempt = 0;

  while (attempt < maxRetries) {
    try {
      const response = await fetch(url, {
        ...options,
        headers: {
          'Authorization': `Bearer ${getAccessToken()}`,
          ...options.headers
        }
      });

      // Check rate limit headers
      const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
      const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0');

      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get('X-RateLimit-Retry-After') || '60');
        if (attempt < maxRetries - 1) {
          console.log(`Rate limited. Retrying after ${retryAfter} seconds...`);
          await sleep(retryAfter * 1000);
          attempt++;
          continue;
        }
        throw new Error(`Rate limit exceeded. Try again in ${retryAfter} seconds.`);
      }

      // Log rate limit status for monitoring
      if (remaining < 10) {
        console.warn(`Rate limit low: ${remaining} requests remaining until ${new Date(reset * 1000)}`);
      }

      return response;
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      attempt++;
      await sleep(1000 * 2 ** attempt); // Exponential backoff: 2s, 4s, ...
    }
  }
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```
### React Hook for Rate Limit Handling

```jsx
import { useState, useCallback } from 'react';

function useRateLimitedRequest() {
  const [isRateLimited, setIsRateLimited] = useState(false);
  const [retryAfter, setRetryAfter] = useState(0);

  const makeRequest = useCallback(async (url, options) => {
    setIsRateLimited(false);
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retrySeconds = parseInt(response.headers.get('X-RateLimit-Retry-After') || '60');
      setRetryAfter(retrySeconds);
      setIsRateLimited(true);

      // Auto-clear rate limit status after the retry period
      setTimeout(() => {
        setIsRateLimited(false);
        setRetryAfter(0);
      }, retrySeconds * 1000);

      // Tag the error so callers can detect rate limiting immediately;
      // reading isRateLimited in the same tick would see stale state
      const error = new Error(`Rate limited for ${retrySeconds} seconds`);
      error.rateLimited = true;
      error.retryAfter = retrySeconds;
      throw error;
    }

    return response;
  }, []);

  return { makeRequest, isRateLimited, retryAfter };
}

// Usage in a component (assumes `accessToken` is available in scope)
function ProfileComponent() {
  const { makeRequest, isRateLimited, retryAfter } = useRateLimitedRequest();
  const [profile, setProfile] = useState(null);
  const [error, setError] = useState(null);

  const loadProfile = async () => {
    try {
      const response = await makeRequest('/api/profile', {
        headers: { Authorization: `Bearer ${accessToken}` }
      });
      setProfile(await response.json());
      setError(null);
    } catch (err) {
      if (err.rateLimited) {
        setError(`Rate limited. Please wait ${err.retryAfter} seconds.`);
      } else {
        setError('Failed to load profile');
      }
    }
  };

  return (
    <div>
      {isRateLimited && (
        <div className="bg-yellow-100 border border-yellow-400 text-yellow-700 px-4 py-3 rounded mb-4">
          Rate limited. Please wait {retryAfter} seconds before trying again.
        </div>
      )}
      {error && <p className="text-red-600">{error}</p>}
      <button onClick={loadProfile} disabled={isRateLimited}>
        Load Profile
      </button>
    </div>
  );
}
```
## Environment-Specific Limits

Rate limits vary by environment to support development and testing:

### Development (localhost)

- More lenient limits for easier testing
- OAuth endpoints: 50 requests/minute (vs. 10-20 in production)
- Failed-auth delay: 5 seconds (vs. 30+ in production)
- API endpoints: same as production

### Production (auth.sigmaidentity.com)

- Strict security limits as documented above
- Geographic considerations: limits are global, not per-region
- Burst allowance: 20% over the limit for brief spikes
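The burst allowance can be pictured as a slightly higher effective ceiling. The exact burst semantics are an assumption here, and the function name is hypothetical; verify against observed behavior:

```javascript
// With a 20% burst allowance, a brief spike may exceed the documented
// limit by up to 20% before requests are rejected.
function isWithinBurst(requestsInWindow, documentedLimit, burstFraction = 0.2) {
  return requestsInWindow <= Math.floor(documentedLimit * (1 + burstFraction));
}
```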
### Testing Environments

Rate limits can be disabled entirely for automated testing:

```javascript
// In a test environment
process.env.DISABLE_RATE_LIMITING = 'true';

// Or use test-specific higher limits
process.env.RATE_LIMIT_MULTIPLIER = '10'; // 10x higher limits
```
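One way a server might resolve its effective limit from these variables is sketched below. The semantics are an assumption (only the variable names come from the snippet above), and the function name is hypothetical:

```javascript
// Hypothetical resolution of the effective per-window limit from
// the environment variables shown above.
function configuredLimit(baseLimit, env = process.env) {
  if (env.DISABLE_RATE_LIMITING === 'true') return Infinity;
  const multiplier = parseFloat(env.RATE_LIMIT_MULTIPLIER || '1');
  return Math.floor(baseLimit * multiplier);
}
```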
## Monitoring Rate Limits

### Client-Side Monitoring

Track your application's rate limit usage:

```javascript
class RateLimitMonitor {
  constructor() {
    this.limits = new Map();
  }

  recordResponse(endpoint, headers) {
    const limit = parseInt(headers.get('X-RateLimit-Limit') || '0');
    const remaining = parseInt(headers.get('X-RateLimit-Remaining') || '0');
    const reset = parseInt(headers.get('X-RateLimit-Reset') || '0');

    this.limits.set(endpoint, {
      limit,
      remaining,
      resetTime: new Date(reset * 1000),
      // Guard against division by zero when headers are missing
      usage: limit > 0 ? ((limit - remaining) / limit * 100).toFixed(1) : '0.0'
    });
  }

  getStatus(endpoint) {
    return this.limits.get(endpoint) || null;
  }

  shouldThrottle(endpoint, threshold = 80) {
    const status = this.getStatus(endpoint);
    return Boolean(status) && parseFloat(status.usage) > threshold;
  }

  getResetTime(endpoint) {
    const status = this.getStatus(endpoint);
    return status ? status.resetTime : null;
  }
}

const monitor = new RateLimitMonitor();

// Use with requests
async function monitoredRequest(url, options) {
  const response = await fetch(url, options);
  monitor.recordResponse(url, response.headers);

  if (monitor.shouldThrottle(url)) {
    console.warn(`Rate limit high for ${url}: ${monitor.getStatus(url).usage}%`);
  }

  return response;
}
```
### Dashboard Metrics

Display rate limit status in your application:

```jsx
import { useState, useEffect } from 'react';

function RateLimitDashboard({ monitor }) {
  const [status, setStatus] = useState({});

  useEffect(() => {
    const interval = setInterval(() => {
      setStatus({
        profile: monitor.getStatus('/api/profile'),
        userinfo: monitor.getStatus('/userinfo'),
        backup: monitor.getStatus('/backup')
      });
    }, 1000);
    return () => clearInterval(interval);
  }, [monitor]);

  return (
    <div className="grid grid-cols-1 md:grid-cols-3 gap-4">
      {Object.entries(status).map(([endpoint, info]) => info && (
        <div key={endpoint} className="bg-white p-4 rounded-lg shadow">
          <h3 className="font-semibold">{endpoint}</h3>
          <div className="mt-2">
            <div className="flex justify-between text-sm">
              <span>Usage:</span>
              <span>{info.usage}%</span>
            </div>
            <div className="w-full bg-gray-200 rounded-full h-2 mt-1">
              <div
                className={`h-2 rounded-full ${
                  parseFloat(info.usage) > 80 ? 'bg-red-500' :
                  parseFloat(info.usage) > 60 ? 'bg-yellow-500' : 'bg-green-500'
                }`}
                style={{ width: `${info.usage}%` }}
              />
            </div>
            <div className="text-xs text-gray-500 mt-1">
              Resets: {info.resetTime.toLocaleTimeString()}
            </div>
          </div>
        </div>
      ))}
    </div>
  );
}
```
## Best Practices

### Request Batching

Group multiple operations to reduce request count:

```javascript
// Bad: multiple requests
const profile = await fetch('/api/profile');
const settings = await fetch('/api/profile/settings');
const preferences = await fetch('/api/profile/preferences');

// Good: a single request with expanded data
const fullProfile = await fetch('/api/profile?expand=settings,preferences');
```
### Caching Strategy

Cache responses to reduce API calls:

```javascript
class CachedAPIClient {
  constructor(cacheTTL = 60000) { // 1 minute default
    this.cache = new Map();
    this.cacheTTL = cacheTTL;
  }

  // Returns parsed JSON, serving from the cache while the entry is fresh
  async get(url, options = {}) {
    const cacheKey = `${url}_${JSON.stringify(options)}`;
    const cached = this.cache.get(cacheKey);

    if (cached && Date.now() < cached.expires) {
      return cached.data;
    }

    const response = await fetch(url, options);
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }

    const data = await response.json();
    this.cache.set(cacheKey, {
      data,
      expires: Date.now() + this.cacheTTL
    });
    return data;
  }

  clearCache() {
    this.cache.clear();
  }
}
```
### Graceful Degradation

Handle rate limits gracefully in your UI:

```jsx
import { useState } from 'react';

function GracefulComponent() {
  const [data, setData] = useState(null);
  const [isRateLimited, setIsRateLimited] = useState(false);
  const [cachedData, setCachedData] = useState(null);

  const fetchData = async () => {
    try {
      const response = await fetch('/api/data');

      if (response.status === 429) {
        setIsRateLimited(true);
        // Show cached data if available
        if (cachedData) {
          setData({ ...cachedData, stale: true });
        }
        return;
      }

      const newData = await response.json();
      setData(newData);
      setCachedData(newData);
      setIsRateLimited(false);
    } catch (error) {
      console.error('Fetch failed:', error);
    }
  };

  return (
    <div>
      {isRateLimited && (
        <div className="bg-yellow-50 border border-yellow-200 rounded p-3 mb-4">
          <p className="text-yellow-800">
            Showing cached data. New data will load shortly.
          </p>
        </div>
      )}
      {data && (
        <div className={data.stale ? 'opacity-75' : ''}>
          {/* Your data display */}
        </div>
      )}
    </div>
  );
}
```
## Troubleshooting Rate Limits

### Common Issues

**"Unexpected 429 on first request"**

- Check whether your IP was previously rate limited
- Verify you're using the correct base URL
- Ensure no other processes are making requests from the same IP

**"Rate limits too restrictive for my use case"**

- Implement proper caching and request batching
- Consider using webhooks instead of polling
- Contact support for high-volume use cases

**"Rate limit headers missing"**

- Check that you're calling production endpoints
- Verify your request format is correct
- Some proxy/CDN layers may strip custom headers
### Debug Rate Limit Issues

```javascript
function debugRateLimit(response) {
  console.group('Rate Limit Debug Info');
  console.log('Status:', response.status);
  console.log('Limit:', response.headers.get('X-RateLimit-Limit'));
  console.log('Remaining:', response.headers.get('X-RateLimit-Remaining'));
  console.log('Reset:', new Date(parseInt(response.headers.get('X-RateLimit-Reset')) * 1000));

  if (response.status === 429) {
    console.log('Retry After:', response.headers.get('X-RateLimit-Retry-After'), 'seconds');
    console.log('Current Time:', new Date());
  }

  console.groupEnd();
}
```
### Testing Rate Limits

```javascript
// Test rate limit behavior by sending requests until a 429 appears
async function testRateLimit() {
  const results = [];

  for (let i = 0; i < 15; i++) {
    try {
      const response = await fetch('/api/test-endpoint');
      results.push({
        request: i + 1,
        status: response.status,
        remaining: response.headers.get('X-RateLimit-Remaining')
      });

      if (response.status === 429) {
        console.log(`Rate limited after ${i + 1} requests`);
        break;
      }
    } catch (error) {
      results.push({ request: i + 1, error: error.message });
    }
  }

  console.table(results);
}
```
Rate limiting ensures fair usage and prevents abuse while maintaining good performance for all users. By implementing proper retry logic, monitoring, and caching, your application can handle rate limits gracefully and provide a smooth user experience.