How to deal with very very large dataset? #1004
We have ids 1-80,000, and each id is associated with 30,000 rows of data. I guess this will take ages to finish with tsfresh.
Answered by nils-braun on Feb 28, 2023
Answer selected by b-y-f
I have given some advice in the issue you created #1005.
In short: use another extractor setting, chunk the data up into windows (because some extractors do not scale linearly with the number of rows), or use a computing cluster/batch system.
Let's continue the discussion on the issue.
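For concreteness, here is a rough sketch of the first two suggestions using tsfresh's `extract_features` API. The column names (`id`, `time`, `value`), the input file, and the window length are assumptions about the poster's data layout, not something stated in the thread:

```python
import pandas as pd
from tsfresh import extract_features
from tsfresh.feature_extraction import MinimalFCParameters

# Assumption: long-format data with columns "id", "time", "value";
# "timeseries.parquet" is a hypothetical input file.
df = pd.read_parquet("timeseries.parquet")

# Suggestion 1: a cheaper extractor setting. MinimalFCParameters computes
# only a small set of simple features instead of the full default set.
features = extract_features(
    df,
    column_id="id",
    column_sort="time",
    default_fc_parameters=MinimalFCParameters(),
    n_jobs=4,  # parallelize extraction over CPU cores
)

# Suggestion 2: chunk each long series into fixed-length windows and treat
# every (id, window) pair as its own short series, so extractors that scale
# worse than linearly with series length only ever see small inputs.
window_length = 1_000  # hypothetical window size; tune to your use case
df = df.sort_values(["id", "time"])
df["window"] = df.groupby("id").cumcount() // window_length
df["id_window"] = df["id"].astype(str) + "_" + df["window"].astype(str)

windowed_features = extract_features(
    df.drop(columns=["id", "window"]),
    column_id="id_window",
    column_sort="time",
    default_fc_parameters=MinimalFCParameters(),
)
```

For the third suggestion, the tsfresh documentation describes Dask and PySpark bindings for distributing feature extraction over a cluster, which is worth consulting before scaling out.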