Reading large amount of entries from a channel starting with some date

1 view (last 30 days)
Anton K on 4 Dec 2022
Answered: Christopher Stapels on 5 Dec 2022
I've tried reading entries from my channel (a kind of bulk read) to save them locally for analysis, and noticed two issues:
  1. if the end date isn't specified, it defaults to the current date
  2. if a date range is used, the number of returned entries is capped (8000 for free accounts), counted back from the end date
I don't know the rationale behind this implementation, but the "feature" causes some inconvenience when reading entries back.
Let's consider a use case.
If I want to get all entries for a month-long period (like [2022-10-04 ... 2022-11-04)), I'd submit a request with that date range, then look at the timestamp of the last returned entry to submit further requests for the remaining chunks.
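Roughly, the request I'm building looks like this (a minimal sketch; the channel ID is a placeholder, and I'm assuming the `start`/`end` query parameters of the `feeds.json` read endpoint):

```python
from urllib.parse import urlencode

def feed_url(channel_id, start, end=None):
    """Build a ThingSpeak feeds.json read URL with an optional end date.

    Dates use the "YYYY-MM-DD HH:MM:SS" form accepted by the start/end
    query parameters; the channel ID below is a placeholder.
    """
    params = {"start": start}
    if end is not None:
        params["end"] = end
    return (f"https://api.thingspeak.com/channels/{channel_id}"
            f"/feeds.json?{urlencode(params)}")

# Month-long request for [2022-10-04 ... 2022-11-04)
url = feed_url(1234567, "2022-10-04 00:00:00", "2022-11-04 00:00:00")
```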
However, this request returns entries in the range [2022-10-05T17:59:48Z .. 2022-11-03T23:59:24Z], which is quite unexpected, because the first entry in this period has timestamp 2022-10-04T00:03:00Z (a one-day request confirms this).
From the response I conclude that the start date is effectively ignored (as redundant?) in this case, and the system returns the 8000 entries preceding the end date. This is quite unnatural, given that the returned entries are sorted by date in ascending order.
But things get even stranger when I drop the end date from the request.
Guess which timestamp will be the first this time!
It is ... 2022-11-02T22:39:22Z! Surprise! :)
And if you submit the request later, you'll get a first entry with an even later timestamp, because the system ignores your start date and returns the 8000 entries preceding the current date.
Such a "feature" forces me to write awkward code when updating a local cache of entries - I have to ...
a) ... either read entries day by day (to stay under the cap of 8000 entries) in the forward direction until I catch up with the current date => more requests,
b) ... or read entries backward starting from the current date => more memory usage (I can't save entries into the cache until I reach the last cached date)
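To make option (b) concrete, here's a sketch of the backward-reading loop I end up with. Timestamps are plain integers for brevity, and `fetch_page` is a hypothetical stand-in for a ThingSpeak read with only an end date set, returning up to the per-request cap of entries at or before that date, sorted ascending:

```python
def read_backwards(fetch_page, last_cached_ts, now_ts):
    """Read newest-to-oldest until the last cached timestamp is reached.

    fetch_page(end_ts) is a hypothetical stand-in for a read with only
    an end date: it returns up to the per-request cap of entries with
    timestamp <= end_ts, sorted ascending. Everything must stay
    buffered in memory until the cached boundary is reached -- that is
    the memory cost of option (b).
    """
    buffered = []
    end_ts = now_ts
    while True:
        page = fetch_page(end_ts)
        if not page:
            break
        fresh = [ts for ts in page if ts > last_cached_ts]
        buffered = fresh + buffered        # pages arrive newest-first
        if len(fresh) < len(page):         # crossed the cached boundary
            break
        end_ts = page[0] - 1               # step past the oldest entry seen
    return buffered
```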
Any plans to make this behavior a bit more natural? :)

Answers (1)

Christopher Stapels on 5 Dec 2022
I recommend choosing your date range so each read stays below the limit (say, 5000 entries); then you get a well-determined output each time.
I understand your frustration with the datetime query parameters. I have also spent a long time debugging what happens with multiple parameters. I appreciate you taking the time to carefully describe what you see. Unfortunately, what seems natural for one use case is often unnatural in another, especially near a limit that is unrelated to the request parameters (number of entries versus datetime).
You can also use the web interface to download the entire data set, or use MATLAB to read the data (thingSpeakRead) and use the built-in date manipulation functions to align the data nicely. We are also developing an interface to a service for reading larger blocks of data into MATLAB, which should be available in a future release.
If you are looping through the data, please be sure to add a delay (100-300 ms) between read calls so you do not slam the server.
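As a sketch of that approach (assumptions: day-sized windows stay under the cap for your channel, and `fetch_range` is a hypothetical stand-in for whatever read call you use):

```python
import time
from datetime import datetime, timedelta

def day_windows(start, end):
    """Yield (window_start, window_end) pairs covering [start, end)
    one day at a time, so each read stays well below the entry cap
    (assuming the channel logs fewer entries than the cap per day)."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=1), end)
        yield cur, nxt
        cur = nxt

# Usage sketch; fetch_range is a hypothetical read call:
# for w_start, w_end in day_windows(datetime(2022, 10, 4), datetime(2022, 11, 4)):
#     entries = fetch_range(w_start, w_end)
#     time.sleep(0.2)   # 100-300 ms between reads, per the advice above
```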

