I sometimes get 429 errors, but I’m not sure what the limit is.
How many calls can I make per minute?
A 429 error indicates too many requests in a short amount of time, which means you’re hitting our rate limit.
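If it helps, handling 429s on your side usually looks something like the sketch below (a rough Python example; the URL and auth header are placeholders, not our actual endpoints):

```python
import time
import requests

API_URL = "https://api.feathery.io/<your-endpoint>"   # placeholder; use the endpoint from the API reference
HEADERS = {"Authorization": "Token YOUR_API_KEY"}      # placeholder auth header

def get_with_backoff(url, max_retries=5):
    """Retry GET requests that hit a 429, backing off exponentially between attempts."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, headers=HEADERS, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Honor Retry-After if the server sends it; otherwise use our own backoff delay.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError("Still rate limited after retries")
```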
Yes, I understand that it is because we make too many requests. I just want to know what the limit actually is and if it’s possible to increase it.
Your link goes to OpenAI documentation; I don’t understand how that relates to the Feathery REST API.
What rate are you looking for and what’s the use case?
Which OpenAI rate limit does Feathery’s REST API use? The different OpenAI models each have their own unique rate limit.
I’m using this API call: List All Data for a User – Feathery API Reference
Each time a user completes a form, our system uses that API to retrieve the answers.
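For context, our integration is roughly this (a simplified Python sketch; the endpoint URL, query parameter, and auth header are placeholders standing in for the call linked above):

```python
import requests

USER_DATA_URL = "https://api.feathery.io/<list-all-data-for-a-user>"  # placeholder; see the API reference
HEADERS = {"Authorization": "Token YOUR_API_KEY"}                      # placeholder auth header

def fetch_user_answers(user_id: str) -> dict:
    """Called once per form completion to pull that user's submitted answers."""
    resp = requests.get(USER_DATA_URL, params={"id": user_id}, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()
```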
I’m still having trouble understanding why we are talking about OpenAI here. I’m not sure we are talking about the same thing.
They have multiple tiers, and I have no idea how they translate to what we are paying for Feathery.
They also have multiple models and I have no idea which one we are using when calling the API mentioned above.
Can I have an update on this?
Jumping in here - sorry. To clarify, we have rate limits that are determined internally.
Why are you using the API to fetch a user’s data after form completion? We have webhooks and logic-based triggers that can push data to your systems programmatically, rather than you having to query our API.
We didn’t use webhooks because they don’t have any retry mechanism. In case of failure, whether on our end or a simple network issue, our system would never get the data.
The forms we have on Feathery are critical and are also very long. We have to do everything we can so that our users don’t have to fill the forms multiple times.
Using webhooks means we would need a backup solution in case of failure, and that solution is the API.
To keep things simple and faster to implement, we chose to use only the API.
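In other words, keeping the webhook path would mean maintaining something like the fallback design below (a rough Python sketch; the endpoint URL, auth header, and payload field names are placeholders, not Feathery’s actual API):

```python
import requests
from flask import Flask, request

app = Flask(__name__)
USER_DATA_URL = "https://api.feathery.io/<list-all-data-for-a-user>"  # placeholder
HEADERS = {"Authorization": "Token YOUR_API_KEY"}                      # placeholder

pending_user_ids = set()  # submissions the webhook handler failed to persist

@app.route("/feathery-webhook", methods=["POST"])
def on_submission():
    payload = request.get_json(force=True)
    try:
        save_answers(payload)                          # our persistence layer; may fail
    except Exception:
        pending_user_ids.add(payload.get("user_id"))   # retry later through the REST API
    return "", 200

def reconcile_pending():
    """Backup path: re-fetch answers via the REST API for anything the webhook path dropped."""
    for user_id in list(pending_user_ids):
        resp = requests.get(USER_DATA_URL, params={"id": user_id}, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        save_answers(resp.json())
        pending_user_ids.discard(user_id)

def save_answers(data: dict) -> None:
    print("storing", data)  # stand-in for our real storage

if __name__ == "__main__":
    app.run()
```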
If you need to retrieve the data asynchronously, then I’d recommend an async job that queries our batch endpoint here: Feathery API Reference
You can have it run once an hour or something.
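Roughly something like this (a minimal Python sketch; the batch endpoint URL, auth header, and response shape are placeholders, so check the linked API reference for the real ones):

```python
import time
import requests

BATCH_URL = "https://api.feathery.io/<batch-endpoint>"  # placeholder
HEADERS = {"Authorization": "Token YOUR_API_KEY"}        # placeholder

def poll_once():
    """Fetch recent submissions in one batch call and hand them to your own processing."""
    resp = requests.get(BATCH_URL, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    for record in resp.json():          # assumed shape: a list of submission records
        process(record)

def process(record: dict) -> None:
    print("processing", record)         # stand-in for pushing into your systems

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(3600)                # run roughly once an hour
```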
Regardless of our internal rate limit, I don’t recommend tying your API queries to your submission cadence. It makes your integration brittle against surges (which is exactly what our rate limits protect against, in addition to overuse of the endpoints).