Proactive rate limiting vs. reacting to 429s?

Folks:

Is it possible to implement proactive rate limiting via “leaky bucket” headers in the responses? I would really like to do that by injecting a delay before sending the request if needed rather than relying solely on getting a 429 response back when we’re hitting the API too hard…

If so, how would this be done? I’ve been playing around with the current .NET SDK and can’t seem to easily access the raw response headers.

I asked an AI about it and was told that such headers exist, but the example code it generated didn’t even compile. <deadpan>How shocking.</deadpan>

:waving_hand: I’m Square Dev AI, here to help answer your question!
Based on the information you’ve provided, the following documentation will help answer your question:

Additional Documentation

Using the REST API
Optimistic Concurrency

If this documentation helped you find a solution, please check the box indicating that this reply solves the problem.
Otherwise one of our Developer Advocates will provide additional assistance shortly.

Square’s APIs do not currently support proactive rate limiting via “leaky bucket” headers in the responses. The rate limiting mechanism in place is reactive, meaning it responds with a 429 RATE_LIMITED error when the rate limit is exceeded.
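Since the API doesn’t advertise its limits in response headers, the only way to get the proactive behavior you describe is to enforce a rate limit client-side before each request. Here’s a minimal leaky-bucket sketch (Python for brevity; the 10 requests/second rate is a placeholder assumption, not a documented Square limit, so you’d tune it against observed 429s):

```python
import time

class LeakyBucket:
    """Client-side leaky bucket: delays the caller so requests
    drain out at a fixed rate. The rate is an assumed placeholder --
    Square does not publish exact per-endpoint limits."""

    def __init__(self, rate_per_sec):
        self.interval = 1.0 / rate_per_sec
        self.next_slot = time.monotonic()

    def acquire(self):
        """Block until the next request slot is available."""
        now = time.monotonic()
        if self.next_slot > now:
            # Proactive delay: sleep instead of sending and risking a 429.
            time.sleep(self.next_slot - now)
        self.next_slot = max(self.next_slot, now) + self.interval

bucket = LeakyBucket(rate_per_sec=10)  # assumed limit
# bucket.acquire()  # call this before each API request
```

The same idea translates directly to .NET (e.g. a `SemaphoreSlim` plus a timer, or `System.Threading.RateLimiting` on newer runtimes), even though the SDK itself doesn’t expose a hook for it.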

As for accessing raw response headers with the .NET SDK: that isn’t directly supported. The SDKs are designed to abstract away the lower-level details of the HTTP protocol, including direct access to response headers.

For rate limiting, the recommended approach is to implement a retry mechanism with an exponential backoff schedule in your application: when you receive a 429 error, wait before retrying the request, and increase that wait time exponentially with each consecutive 429. This gives the API progressively longer pauses to recover instead of hammering it with immediate retries.
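A minimal sketch of that retry loop (Python for brevity; `send_request` is a hypothetical stand-in for your SDK call, and the base delay, cap, and jitter values are illustrative, not Square-prescribed):

```python
import random
import time

def call_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry on HTTP 429 with exponential backoff plus jitter.

    send_request is a hypothetical zero-argument callable returning an
    object with a .status_code attribute; all delay values here are
    illustrative defaults.
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        # Wait doubles with each consecutive 429; jitter avoids
        # synchronized retries across concurrent clients.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError("still rate limited after retries")
```

Adding jitter (the random component) is a common refinement so that many clients throttled at the same moment don’t all retry in lockstep.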

This answer was reviewed by @Bryan-Square.