API Rate Limiting

To ensure the stability and performance of our services, rate limiting is applied across all our APIs. This mechanism caps the number of requests a client can make within a given time window, preventing abuse and ensuring fair usage for all clients. The rate-limiting method and the actual limits for each API endpoint are defined on our API developer portal; we encourage developers to review these limits and design their integrations accordingly to avoid service disruptions.

Best Practices for Handling Rate Limits

To ensure reliable integration with OneStock APIs and avoid service disruptions, we strongly recommend implementing the following practices:

1. Implement Proper Error Handling

Your application should be prepared to handle HTTP 429 (Too Many Requests) responses gracefully:

// Example pseudo-code
try {
  const response = await callOneStockAPI();
  if (response.status === 429) {
    // Handle rate limit error
    handleRateLimitError(response);
  }
} catch (error) {
  // Handle other errors
}

2. Use Exponential Back-off Strategy

When a rate limit is hit, implement an exponential back-off strategy before retrying:

  • First retry: Wait 1 second

  • Second retry: Wait 2 seconds

  • Third retry: Wait 4 seconds

  • Continue doubling the wait time up to a maximum (e.g., 60 seconds)

This approach prevents overwhelming the API with immediate retries and allows rate limits to reset naturally.
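The doubling schedule above can be sketched as a small helper. This is a minimal illustration, not part of any OneStock SDK; the 60-second cap mirrors the example maximum given above:

```python
def backoff_delay(attempt, base=1, cap=60):
    """Return the wait time in seconds for the given retry attempt (0-based)."""
    return min(base * (2 ** attempt), cap)

# First retries wait 1, 2, 4, 8... seconds; later retries are capped at 60.
delays = [backoff_delay(a) for a in range(8)]
print(delays)  # [1, 2, 4, 8, 16, 32, 60, 60]
```

In production, adding random jitter to each delay also helps avoid synchronized retries across many clients hitting the limit at the same moment.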

3. Monitor and Log Rate Limit Errors

Implement comprehensive observability for rate limiting:

  • Log all 429 responses with timestamps and affected endpoints

  • Set up alerts when 429 responses exceed a defined threshold (for example, a sustained error rate over several minutes)

  • Track patterns to identify which operations are most frequently hitting limits

  • Monitor trends over time to anticipate when you might need to optimize your integration
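One lightweight way to cover the logging and tracking points above is a per-endpoint counter behind a standard logger. This is a sketch, not a OneStock-defined schema; the endpoint path shown is purely illustrative:

```python
import logging
from collections import Counter

logger = logging.getLogger("onestock.ratelimit")
rate_limit_hits = Counter()  # per-endpoint 429 tally for trend tracking

def record_rate_limit(endpoint, status_code):
    """Log and count 429 responses so alerts and trend reports can be built on top."""
    if status_code == 429:
        rate_limit_hits[endpoint] += 1
        logger.warning("429 Too Many Requests on %s (total so far: %d)",
                       endpoint, rate_limit_hits[endpoint])

# Illustrative endpoint path; substitute your actual API routes.
record_rate_limit("/v2/orders", 429)
record_rate_limit("/v2/orders", 200)  # non-429 responses are not counted
print(rate_limit_hits["/v2/orders"])  # 1
```

Feeding this counter into your existing metrics pipeline gives you the alerting and trend data with very little extra code.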

4. Optimize Your API Usage

Proactive measures to reduce the likelihood of hitting rate limits:

  • Cache responses where appropriate to reduce redundant API calls

  • Batch operations when the API supports it

  • Implement request queuing to control the rate of outgoing requests

  • Review the rate limits for each endpoint in the developer portal and design your integration accordingly

  • Spread requests over time rather than making bursts of calls
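Request queuing and spreading can be as simple as enforcing a minimum interval between outgoing calls. A sketch follows; the 10-requests-per-second figure is an assumption for illustration, not a documented OneStock limit:

```python
import time

class Throttle:
    """Enforce a minimum interval between outgoing requests."""
    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self):
        """Block until at least min_interval has passed since the previous call."""
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

throttle = Throttle(max_per_second=10)
start = time.monotonic()
for _ in range(5):
    throttle.wait()  # call your API here; the 5 calls are spread over ~0.4 s
elapsed = time.monotonic() - start
```

Placing such a throttle in front of every outgoing call turns bursts into an even stream, which is usually all it takes to stay under a per-second limit.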

5. Design for Resilience

Your integration should be resilient to temporary rate limiting:

  • Don't fail entire processes due to a single rate limit error

  • Queue failed requests for retry rather than discarding them

  • Provide clear feedback to end-users when delays occur due to rate limiting

  • Have fallback mechanisms where possible
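The "queue failed requests instead of failing the batch" idea can be sketched as below. Both `RateLimitError` and the fake sender are hypothetical stand-ins for your own HTTP layer:

```python
from collections import deque

class RateLimitError(Exception):
    """Raised by the (hypothetical) send function on an HTTP 429 response."""

def process_batch(items, send):
    """Send each item; park rate-limited ones for later instead of failing the batch."""
    retry_queue = deque()
    sent = []
    for item in items:
        try:
            send(item)
            sent.append(item)
        except RateLimitError:
            retry_queue.append(item)  # retried later; the rest of the batch proceeds
    return sent, retry_queue

# Simulated sender for illustration: every third call is rate limited.
calls = {"n": 0}
def fake_send(item):
    calls["n"] += 1
    if calls["n"] % 3 == 0:
        raise RateLimitError()

sent, queued = process_batch(list(range(6)), fake_send)
print(len(sent), len(queued))  # 4 2
```

The items left in `retry_queue` can then be re-sent with the back-off strategy from section 2, and end-users can be told those specific operations are delayed rather than failed.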

6. Contact Support for High-Volume Scenarios

If your legitimate use case regularly hits rate limits:

  • Contact OneStock support to discuss your requirements

  • Provide usage patterns and volume estimates

  • We may be able to adjust limits or suggest alternative approaches for your specific needs

Example Implementation

Here's a basic example of handling rate limits with retry logic:

import time

import requests


def call_api_with_retry(url, max_retries=5):
    retries = 0
    wait_time = 1
    while retries < max_retries:
        response = requests.get(url)
        if response.status_code == 200:
            return response.json()
        elif response.status_code == 429:
            print(f"Rate limit hit. Waiting {wait_time} seconds before retry...")
            time.sleep(wait_time)
            wait_time *= 2  # Exponential back-off
            retries += 1
        else:
            raise Exception(f"API error: {response.status_code}")
    raise Exception("Max retries exceeded due to rate limiting")

Questions?

If you have questions about rate limiting or need assistance optimizing your integration, please contact our support team or consult the API developer portal for detailed rate limit specifications for each endpoint.