Reading data from large .CSV files
I am trying to figure out the best and most efficient way to achieve what is shown in the picture. I have looked into datastore and other things, but I can't seem to find the right strategy that will let me accomplish this.
I have large CSV file sets (each file is ~10 GB) that contain timestamps. I need to extract certain sections based on time ranges (see the sample on the right), combine them, and write the result to .mat or .txt (or .csv) files.
What would be the best strategy to achieve this? If the files were small, I could easily do this by loading and sorting, but with files this large I can't seem to do it efficiently.
Athul Prakash on 22 Jan 2021
The use of a datastore is what comes to mind first: you could use read to obtain one chunk of data at a time and select the required rows efficiently using logical indexing.
% something like..
ds = tabularTextDatastore('yourfile.csv');  % chunked reader for large CSVs
r = read(ds);                               % one chunk of rows, as a table
r_sel = r(r.Time > t1 & r.Time < t2, :);    % keep rows with t1 < Time < t2
% Save 'r_sel' into another CSV, or append it to a running variable.
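To make that concrete, a minimal end-to-end sketch of the chunked approach might look like the following. The file pattern, the column name Time (assumed to be parsed as datetime), and the cutoff values are assumptions you would adjust to your data:

```matlab
% Sketch: stream all CSVs in chunks, keep only rows inside [t1, t2],
% then combine and save. Names and thresholds are placeholders.
ds = tabularTextDatastore('data/*.csv', 'ReadSize', 100000);
t1 = datetime(2021, 1, 1, 0, 0, 0);
t2 = datetime(2021, 1, 1, 6, 0, 0);

chunks = {};                          % accumulate matching chunks
while hasdata(ds)
    r = read(ds);                     % next ~100k rows as a table
    mask = r.Time > t1 & r.Time < t2;
    if any(mask)
        chunks{end+1} = r(mask, :);   %#ok<AGROW>
    end
end
result = vertcat(chunks{:});          % combine all matching rows
save('extract.mat', 'result');        % or: writetable(result, 'extract.csv')
```

Because only one chunk is in memory at a time, peak memory stays bounded by ReadSize rather than the 10 GB file size.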
It's not clear which factor is making the datastore approach too slow in your case.
Perhaps it would be faster to process one whole CSV file at a time, allocating its rows into the different time slots before moving on to the next file. You could consider implementing a bucket sort over the timestamps.
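A sketch of that bucketing idea, assuming a Time column and hourly buckets (both assumptions; the 'WriteMode','append' option of writetable requires R2020a or later):

```matlab
% Sketch: route each row of one large CSV into a per-hour bucket file,
% appending as we stream through the file chunk by chunk.
ds = tabularTextDatastore('file1.csv', 'ReadSize', 100000);
while hasdata(ds)
    r = read(ds);
    % Bucket key: one per hour, e.g. '2021-01-01_13' (format is an assumption)
    keys = cellstr(string(r.Time, 'yyyy-MM-dd_HH'));
    [uk, ~, idx] = unique(keys);
    for k = 1:numel(uk)
        writetable(r(idx == k, :), ['bucket_' uk{k} '.csv'], ...
                   'WriteMode', 'append');
    end
end
```

After all source files have been streamed once, each bucket file holds every row for its time slot and can be loaded or combined cheaply.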
Finally, you may consider using MapReduce. It is more flexible and powerful than a plain datastore, though it comes with a learning curve; the same approach you have tried before may run faster when expressed as a MapReduce job.
Documentation: Getting started with MapReduce
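As a rough illustration of the shape of a mapreduce job (not your exact task): the sketch below counts rows per hour across all files. The Time column name and hourly key format are assumptions:

```matlab
% Sketch: per-hour row counts over a set of large CSVs via mapreduce.
ds = tabularTextDatastore('data/*.csv');
outds = mapreduce(ds, @timeMapper, @timeReducer);
readall(outds)                        % table of hour keys and counts

function timeMapper(data, ~, intermKVStore)
    % Emit (hourKey, partialCount) pairs for one chunk of rows.
    keys = cellstr(string(data.Time, 'yyyy-MM-dd_HH'));
    [uk, ~, idx] = unique(keys);
    for k = 1:numel(uk)
        add(intermKVStore, uk{k}, sum(idx == k));
    end
end

function timeReducer(intermKey, intermValIter, outKVStore)
    % Sum the partial counts emitted for one hour key.
    total = 0;
    while hasnext(intermValIter)
        total = total + getnext(intermValIter);
    end
    add(outKVStore, intermKey, total);
end
```

To extract rows instead of counting them, the mapper would emit the selected sub-tables as values rather than counts.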
Hope it helps!