Rate Limiting
The Price2b API implements rate limiting to ensure fair usage and maintain service stability for all users. Rate limits vary by endpoint type and are designed to accommodate typical integration patterns.
Rate limit tiers
Different endpoint types have different rate limits based on their resource intensity:
| Endpoint Type | Limit | Window | Examples |
|---|---|---|---|
| Standard API calls | 60 requests | 1 minute | List products, get orders |
| Quote requests | 30 requests | 1 minute | Shipping quotes, DDP calculations |
| Bulk operations | 5 requests | 1 minute | Batch imports, bulk updates |
| Webhook submissions | 300 requests | 1 minute | Incoming webhooks |
| Analytics/Reports | 10 requests | 1 minute | P&L reports, comparisons |
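A client can track its own budget against these windows before sending a request. The sketch below is illustrative, not part of any official SDK; it keeps a sliding window of request timestamps and refuses to send once a limit is reached:

```javascript
// Minimal client-side sliding-window counter (a sketch; the RequestBudget
// name and usage below are assumptions, not a Price2b library API).
class RequestBudget {
  constructor(limit, windowMs) {
    this.limit = limit
    this.windowMs = windowMs
    this.timestamps = []
  }

  // Returns true (and records the request) if another request
  // fits in the current window; false if the budget is exhausted.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs)
    if (this.timestamps.length >= this.limit) return false
    this.timestamps.push(now)
    return true
  }
}

// Example: 30 quote requests per minute, matching the table above.
const quoteBudget = new RequestBudget(30, 60_000)
```

Tracking the budget locally lets you queue or delay requests before the server ever returns a 429, rather than reacting after the fact.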
Rate limit headers
Every API response includes headers to help you track your rate limit status:
Response headers
```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1706684400
```
| Header | Type | Description |
|---|---|---|
| X-RateLimit-Limit | integer | The maximum number of requests allowed in the current window. |
| X-RateLimit-Remaining | integer | The number of requests remaining in the current window. |
| X-RateLimit-Reset | timestamp | Unix timestamp when the rate limit window resets. |
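These headers can be read directly from any response. A minimal sketch (the parseRateLimit helper and the 10% warning threshold are illustrative choices, not part of the API):

```javascript
// Parse the rate limit headers documented above from a fetch Response's
// Headers object into a convenient structure.
function parseRateLimit(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    // X-RateLimit-Reset is a Unix timestamp in seconds.
    resetAt: new Date(parseInt(headers.get('X-RateLimit-Reset'), 10) * 1000),
  }
}

// Example: warn when fewer than 10% of requests remain in the window.
// const info = parseRateLimit(response.headers)
// if (info.remaining < info.limit * 0.1) {
//   // slow down until info.resetAt
// }
```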
Handling rate limits
When you exceed the rate limit, the API returns a 429 Too Many Requests response:
Rate limit exceeded response
The response includes:
- An error message explaining the limit
- A retry_after value in seconds
- A RATE_001 error code

Best practice: Implement exponential backoff with the retry_after value.
Response (429)
```json
{
  "success": false,
  "message": "Too many requests. Please retry after 60 seconds.",
  "error_code": "RATE_001",
  "retry_after": 60
}
```
Implementation strategies
Exponential backoff
Implement exponential backoff to handle rate limits gracefully:
Exponential backoff
```javascript
async function requestWithRetry(endpoint, options = {}, attempt = 1) {
  const maxAttempts = 5

  try {
    const response = await fetch(`https://app.price2b.com/api/v1${endpoint}`, {
      ...options,
      headers: {
        'Authorization': `Bearer ${process.env.PRICE2B_TOKEN}`,
        ...options.headers,
      },
    })

    if (response.status === 429) {
      if (attempt >= maxAttempts) {
        throw new Error('Max retry attempts exceeded')
      }
      // Prefer the server-provided retry_after; fall back to exponential backoff.
      const data = await response.json()
      const retryAfter = data.retry_after || Math.pow(2, attempt)
      console.log(`Rate limited. Retrying in ${retryAfter}s...`)
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000))
      return requestWithRetry(endpoint, options, attempt + 1)
    }

    return response.json()
  } catch (error) {
    // Network errors also back off exponentially before retrying.
    if (attempt < maxAttempts) {
      const delay = Math.pow(2, attempt) * 1000
      await new Promise(resolve => setTimeout(resolve, delay))
      return requestWithRetry(endpoint, options, attempt + 1)
    }
    throw error
  }
}
```
Best practices
Do
- Monitor X-RateLimit-Remaining headers
- Implement exponential backoff
- Cache responses when possible
- Use bulk endpoints for multiple items
- Queue requests during high-volume operations
- Spread requests evenly over time
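Spreading requests evenly can be as simple as pacing a queue of tasks with a fixed gap between starts. A sketch (the runPaced helper is illustrative, and the 1100 ms spacing, about 55 requests per minute, is an arbitrary margin under the 60/minute standard limit):

```javascript
// Run async tasks sequentially with a fixed delay between them,
// so requests are spread evenly instead of sent in a burst.
async function runPaced(tasks, gapMs = 1100) {
  const results = []
  for (const task of tasks) {
    results.push(await task())
    // Wait between requests so a batch never hits the per-minute ceiling.
    await new Promise(resolve => setTimeout(resolve, gapMs))
  }
  return results
}
```

For higher throughput, the same idea extends to a token-bucket or sliding-window scheduler, but a fixed gap is often enough for batch jobs.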
Don't
- Ignore rate limit responses
- Retry immediately after a 429
- Make unnecessary duplicate requests
- Poll endpoints continuously
- Exceed limits during testing
- Assume limits are per-endpoint (they're per-type)
Increasing rate limits
If you need higher rate limits for your integration:
- Optimize your requests — Use bulk endpoints and caching first
- Contact support — Enterprise plans include higher limits
- Use webhooks — Subscribe to events instead of polling
Rate limits are applied per API token. If you need to isolate limits for different parts of your application, create separate tokens.