Why does fread of a 2 GB file need more than 8 GB of RAM?

4 views (last 30 days)
Gabriel on 4 Jun 2013
textscan is too slow.
Thus, I want to load a 2 GB file in RAM with fread (fast), then scan it.
fread works well with small files, but if I try fread(filename, '*char') on a 2 GB file, RAM spikes over my 8 GB limit for some reason and I get an out-of-memory error.
Ideas?
2 comments
Jan on 4 Jun 2013
Please post the full code, because there might be unexpected problems.
Gabriel on 4 Jun 2013
Well, the code is simple:
fid = fopen(filename);
test = fread(fid, '*char');
fclose(fid);


Answers (3)

Jan on 4 Jun 2013
Reading a 2 GB file into a CHAR requires 4 GB of RAM, because MATLAB uses 2-byte chars. Then, depending on how you store the data, the contents of a temporary array may be copied, so 8 GB is the expected memory consumption. But I'd actually expect that this copy could be avoided, so it might help if you show us the code fragment.
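Jan's point about the 2-byte chars can be illustrated with a small sketch (the file name is a placeholder, not from the thread): reading the bytes as uint8 keeps the in-memory copy at one byte per file byte, and only the slices you actually inspect get converted to 2-byte chars.

```matlab
% Sketch, assuming 'bigfile.txt' is the file in question (placeholder name).
fid = fopen('bigfile.txt', 'r');
raw = fread(fid, '*uint8');     % 1 byte per file byte in memory
fclose(fid);
% MATLAB chars are 16-bit, so convert only the slice you need to text:
head = char(raw(1:min(100, numel(raw)))).';   % first 100 bytes as a char row
```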
2 comments
Gabriel on 4 Jun 2013
Precisely: I expect it to require 4 GB, yet watching the system monitor, the whole thing goes over 8 GB and into swap.
I also understand the copies made when passing into functions, etc. But shouldn't FREAD be able to load a 2 GB file into a 4 GB char array without needing more than 8 GB of RAM?
Jan on 4 Jun 2013
Edited: Jan on 4 Jun 2013
I've seen equivalent behavior in another FREAD implementation (not in MATLAB): the required final size was not determined with FSEEK; instead the file was read in chunks until the buffer was full, and the buffer was then re-allocated at double its size. After the obvious drawbacks were pointed out in a discussion, the author decided to replace the doubling method with a smarter Fibonacci sequence. :-)



Iain on 4 Jun 2013
As Jan implied, passing variables around often leads to memory duplication: 2 GB arrays get COPIED when passed into functions.
The out-of-memory error normally comes up when MATLAB cannot find a single contiguous block of RAM big enough for the variable.
Use much smaller chunks of memory: read the file in and parse it in chunks of, say, 64 MB.
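Iain's chunked approach might be sketched like this (the file name is a placeholder, and the parsing step is left to the reader; records split across chunk boundaries need extra care):

```matlab
fid = fopen('bigfile.txt', 'r');   % placeholder file name
chunkBytes = 64 * 1024^2;          % 64 MB per read, as suggested above
while ~feof(fid)
    buf = fread(fid, chunkBytes, '*uint8');
    % ... parse buf here; watch for records split across chunk edges ...
end
fclose(fid);
```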
2 comments
Walter Roberson on 4 Jun 2013
The arrays will only get copied if they are modified; otherwise the data pointer will point to the original storage.
Gabriel on 4 Jun 2013
I think I did not express myself well; I apologize. Parsing is not the issue. I fully expect scanning functions to be (relatively) memory hungry.
fread, on the other hand: I don't quite get why it needs so much overhead to load a 2 GB+ file into the workspace.



Gabriel on 4 Jun 2013
Edited: Gabriel on 4 Jun 2013
In any case, I have found a workaround for textscanning large ASCII files (4 GB and beyond) that contain numbers.
The trick is padding the numbers with Perl or sed before trying to read them into MATLAB. If you pad your numbers with leading 0s, every line has the same number of characters, so FREAD is easy to execute in chunks.
For example:
fid = fopen(filename);
bytesPerLine = 16;   % fixed width, thanks to the zero padding
while ~feof(fid)
    tmp = fread(fid, 100000 * bytesPerLine, '*char').';  % X lines at a time
    data = textscan(tmp, '%f');
    process(data);
end
fclose(fid);
With this trick, I went from 3 MB/sec to 130 MB/sec for processing a file.
