Websocket rate limit exceeded

@corrineb, over the last couple of weeks I have noticed that websocket connections are failing far more frequently because the rate limit is being exceeded:

 {'status_code': 5, 'status_message': 'AUTHORIZATION_REQUIRED - Rate limit exceed or your API Key or Oauth2 Access Token missing'}

I appreciate that I probably connect/disconnect from the websocket far more frequently than most users while I develop the PiConsole, but this appears to have only started happening quite recently. Has a reduction to the rate limit been implemented, or is something else going on? Cheers!

Most of these errors are caused by disconnecting without closing the connection. I had the same issue and fixed it with a routine that closed the channel whenever the program exited. That took care of 95% of the cases.
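
Roughly, the routine I mean looks like this - just a minimal Python sketch using the third-party websockets library (the endpoint URL and token below are placeholders, not anyone's real values). The key point is that the connection gets a proper close on the way out instead of just being dropped:

```python
# Minimal sketch: make sure the web socket is closed cleanly on every exit.
# Uses the third-party "websockets" library; the URL and token are placeholders.
import asyncio
import json
import signal

import websockets

WS_URL = "wss://ws.weatherflow.com/swd/data?token=YOUR_TOKEN"  # placeholder

async def listen(ws):
    async for message in ws:
        print(json.loads(message))

async def run():
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    # Turn Ctrl-C / SIGTERM into a clean shutdown instead of an abrupt drop.
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, stop.set)

    # "async with" sends a proper close frame when the block exits,
    # so the server stops counting this connection right away.
    async with websockets.connect(WS_URL) as ws:
        listener = asyncio.create_task(listen(ws))
        await stop.wait()
        listener.cancel()

if __name__ == "__main__":
    asyncio.run(run())
```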

1 Like

Pretty sure I have this covered, but let me check the code again. I could have done something silly!

@peter PM me the api key that you are using and we can take a look and see if there are some clues that could help you track things down.

3 Likes

I’ve seen the same thing with the public dev api key. Last week and the week before the rate limiting was really bad, and it sometimes took upwards of 5-10 minutes of 30-second retries to reconnect.

I wanted to ask: as I’m developing a UDP and WS collector, should I just ask for my own api key? This is ultimately going to be used for a home automation project of my own, but I’m going to open source it when I get things a little further along. I ask because it’s really disruptive to be developing, restart my collector to test something, and then be stuck waiting for the rate limiting to chill. Plus, if I did something bad in my code, you’d know who to contact.

@jkf Yes! PM me and we can get you set up.

In light of this recent post, if I were to use a personal token instead of an api key to access the REST and WS services, would I have a separate rate limit from the public api key, or would I still hit the same rate limits?

Well, that’s a good question. First, the public API key will be going away soon (don’t worry, we’ll give you plenty of notice!). And you’ll be able to generate your own API keys. But the rate limit will be tied to the access token, rather than the API key (unless you have an enterprise level API key). And that rate limit will be much more than anyone needs for personal use - the rate limit would only be there to prevent runaway scripts or DoS-type attacks.

To be clear, any personal access token you generate can only be used to access data from stations that you own (or have authorization to access). If you are building an app just for your personal use, you can put the access token right in the code. But if you’re building an app that others will use, you’ll want to enable a way for them to enter their own personal access token into it. Or use the Oauth interface, which would let them create their own access token by authenticating against their account within your app.

Apologies for the limited, confusing information - more documentation is coming soon!

3 Likes

Thanks for all the updates David. Can I clarify whether the plan is to move away from using API keys that allow you to get data from any public station, towards personal access tokens that only allow you to access your own data?

The reason I ask is that I have been asked whether it is possible to make the PiConsole compatible with multiple stations that are not necessarily owned by the same person (i.e. the user who asked wants to use the console to display their station data, and then be able to switch the display to data from another nearby station owned by someone else). It sounds like this won’t be possible using personal access tokens, and if so, I won’t start the development.

Thanks for the update David! For my use case, it sounds like the personal access token is what I want. I am already planning to accept both access tokens and api keys, so when I open source it, other people can use whichever suits them.

I look forward to the doc updates that are coming!

Good question, Peter. The answer is “it depends” :slight_smile: API key usage will be consistent with the data access policy that we posted about a year ago, but haven’t begun fully enforcing yet. To summarize:

Normal API keys (which will be available to any user - no special agreement required) will require a personal access token (or Oauth2 authentication) and will only allow access to the authenticated station owner’s data.

Enterprise API keys (which will be issued upon request and may require a special agreement with WeatherFlow) may still allow access to data from multiple stations without a personal access token. These are handled on a case-by-case basis.

Please hold off on development of that for now. We may want to convert your API key to “enterprise” with some special access that satisfies your use case. We will also be adding a “share” feature soon that allows station owners to share their stations with other users, which may also satisfy your use case, but the details are still being worked out.

6 Likes

@dsj @corrineb - Sorry for bringing an older thread back to life - but I’m starting to get these errors in some of my own web services testing for my open source project. Another older thread talks about this mainly from a defensive standpoint. Is there a way to understand what the actual rate limit is? I’ve certainly seen this for too many socket connections, but I’m also now seeing it across an entire access-token session where I was pulling in historical data. I can certainly look at throttling some of these queries, but they’re not super crazy (in my opinion). Should I open a ticket to talk about these restrictions, or can it be shared in the forum?

Most errors are caused by not closing a socket and continuing to open new sockets. During your testing are you closing the socket before you exit the application?

The socket errors came from actually opening and using more than a few connections, as I was testing more than one data loader at a time across several different instances. It wasn’t a case of accidentally leaving one open, but of deliberately using more than one. Today’s case of that same message:

{"status_code":5,"status_message":"AUTHORIZATION_REQUIRED - Rate limit exceed or your API Key or Oauth2 Access Token missing"}

Is for both access types, REST and Socket. Because I open a socket and listen indefinitely and store all of the resulting JSON logs - there are times during testing that I’ll have multiple loggers running at the same time. But I also poll the REST API separately for the derived metrics and forecasts.

The access key still works, as I can query stations. I just get rate limited when asking the REST API or Socket for metrics. It has now been rate limited for about 12 hours.

Thanks!

Hey @Lux4rd0,

We currently limit concurrent web socket connections to 10 / user. While taking a look at the metrics for your PATs, we discovered a bug with how we are counting open connections. This bug would cause us to potentially return the rate limiting message after 5 concurrent connections. Like all good bugs, this one is a little tricky. We don’t always return the rate limiting message when you hit 5 connections, but it is possible that we might.

To help you continue development while we sort this one out on our end, we have upped your web socket connection limit to 20. This way if you hit our bug, you will get your full number of connections.

Our REST rate limits are not connected to our web socket rate limits. For REST you can make 100 requests per minute. There is some burst capacity built into the system, but the general rule of thumb is to keep the number of REST requests per user to under 100 per minute.
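
If it helps, a simple client-side throttle is enough to stay under that. Here’s a rough Python sketch (not official sample code; the URL, station ID, and token are placeholders):

```python
# Rough sketch of a client-side throttle that keeps REST requests
# under ~100 per minute. The URL and token below are placeholders.
import time

import requests

MAX_REQUESTS_PER_MINUTE = 100
MIN_INTERVAL = 60.0 / MAX_REQUESTS_PER_MINUTE  # ~0.6 s between requests

_last_request = 0.0

def throttled_get(url, **kwargs):
    """GET with just enough sleep to keep the request rate under the limit."""
    global _last_request
    wait = MIN_INTERVAL - (time.monotonic() - _last_request)
    if wait > 0:
        time.sleep(wait)
    _last_request = time.monotonic()
    return requests.get(url, **kwargs)

# Example usage (placeholder station ID and token):
# resp = throttled_get(
#     "https://swd.weatherflow.com/swd/rest/observations/station/STATION_ID",
#     params={"token": "YOUR_TOKEN"},
# )
```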

If you have any questions let me know!

4 Likes

@corrineb Thank you so much for taking a look at this and sharing the details. For what I’m doing during my active testing - 10 is plenty. What I’ve seen is that for PATs that have a lot of devices on them, I open a socket for each device. That was fine when I only had one device :slight_smile: - But I’m going to refactor that design and use a single socket for all devices.

The REST side is also insightful. 100 requests per minute is certainly much higher than what I was using - I was making a single request for a bucket of time, processing it, then asking for another bucket of time. So - really - just a few requests per minute. I’m wondering if there was another issue happening. I’ll keep an eye on the behavior and see if there’s a discernible pattern.

Can you share how long it takes for the rate limit to fall off? In my situation - it was blocked for over 12 hours.

Lastly - when you say per user - is that per access key? Or per IP address?

Thanks for all of your help!!

No problem @Lux4rd0.

Your refactor plan sounds good. Once a web socket connection is open you can send different listen_start messages to get data for multiple devices. A general rule is that an app should only need one open web socket connection.
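
For example, something like this rough Python sketch (the URL, token, and device IDs are placeholders) listens to several devices over a single connection:

```python
# Rough sketch: one web socket connection, one listen_start message per device.
# Uses the third-party "websockets" library; URL, token, and device IDs are placeholders.
import asyncio
import json

import websockets

WS_URL = "wss://ws.weatherflow.com/swd/data?token=YOUR_TOKEN"  # placeholder
DEVICE_IDS = [1110, 2220, 3330]                                # placeholder IDs

async def run():
    async with websockets.connect(WS_URL) as ws:
        # Ask for observations from every device over the same connection.
        for i, device_id in enumerate(DEVICE_IDS):
            await ws.send(json.dumps({
                "type": "listen_start",
                "device_id": device_id,
                "id": f"listen-{i}",
            }))
        async for message in ws:
            print(json.loads(message))

asyncio.run(run())
```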

For web socket, the rate limit should fall off as soon as you close one of your open connections. REST should recover in 1-2 minutes after the requests drop below 100/minute. I’m not sure why you were blocked for over 12 hours. If you get another period of longer rate limiting, let me know and we can take a look and see what is going on.

Each PAT that is generated is tied to a user. A user can have multiple PATs. We roll all rate limiting counts up to the user. For example, let’s say a user has 2 PATs. They are allowed to have 10 web socket connections open between the 2 PATs.

I hope this helps!

3 Likes

@corrineb Thank you for the explanation! I’ll see about the PAT that is currently blocked and ask the owner to contact support. Last I looked it’s still rate-limited after a week. :slight_smile:

Thanks @Lux4rd0. If he gets in touch with us we can take a look at the usage for his account and see what is going on.