How do I reduce the memory footprint of a GPU array?
Hello,
I am finding that creating a GPU array creates a huge spike in MATLAB's memory usage:
When I open MATLAB, top reports:
20309 mcoughli 20 0 4620m 172m 66m S 0.0 0.0 0:02.85 MATLAB
So approximately 4.6 GB of virtual memory. When I create a gpuArray from the command line:
>> gpuArray(1);
It spikes dramatically:
20309 mcoughli 20 0 537g 605m 255m S 0.0 0.1 2:07.06 MATLAB
That is approximately 537 GB of virtual memory.
Does anyone understand why this is happening and whether it can be prevented? It creates problems when I attempt to run on smaller computing nodes. Running ulimit -v beforehand works to some extent, but it is more difficult to set when running parallel processes.
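One way to scope the limit per process: `ulimit -v` applies only to the shell that sets it and its children, so setting it inside a subshell or wrapper script caps each worker independently without affecting the parent. A minimal bash sketch (the 4 GiB figure is purely illustrative):

```shell
#!/usr/bin/env bash
# ulimit -v limits virtual memory (in KB) for the current shell and
# everything it spawns; a limit set in a subshell does not leak out.
parent_limit=$(ulimit -v)                        # typically "unlimited"
child_limit=$( (ulimit -v 4194304; ulimit -v) )  # cap the subshell at 4 GiB

echo "parent=$parent_limit"
echo "child=$child_limit"

# In a real per-worker wrapper, the ulimit would be followed by, e.g.:
#   exec matlab -nodisplay -r "myWorkerScript"
# (command line is a hypothetical example, not from the thread)
```

Because the limit is inherited by children of the wrapper but not by siblings, each parallel worker can get its own cap.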
Thank you,
Michael
0 Comments
Accepted Answer
Edric Ellis
3 Dec 2013
This is unfortunately likely to be due to loading all the GPU support libraries. These are quite large, and all get loaded when you first create a gpuArray. I'm afraid there's no workaround for this.
2 Comments
Joss Knight
9 Dec 2013
Have a look at installdir/bin/arch and list the contents by size; you'll see some obvious GPU libraries near the top, e.g. npp, cublas, and cufft. To get good performance, GPU runtime code has to be very non-general, which means there are multiple implementations for every use case; on top of that there is the overhead of supporting multiple compute architectures.
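That inspection can be scripted with a small sort-by-size helper. A hedged bash sketch: on a 64-bit Linux install the directory would be <matlabroot>/bin/glnxa64, but since that path varies, the demo below builds a throwaway directory with made-up file names just to show the helper working:

```shell
#!/usr/bin/env bash
# list_by_size: print "size name" for files in a directory, largest first.
# Point it at <matlabroot>/bin/<arch> to see the big CUDA libraries
# (libnpp, libcublas, libcufft) near the top of the listing.
list_by_size() { ls -lS "$1" | awk 'NR>1 {print $5, $NF}'; }

# Self-contained demo on a throwaway directory (file names are illustrative):
demo=$(mktemp -d)
printf 'x%.0s' {1..1000} > "$demo/libcublas.so"   # 1000-byte dummy file
printf 'x%.0s' {1..10}   > "$demo/small.txt"      # 10-byte dummy file

out=$(list_by_size "$demo")
echo "$out"
```

The real libraries are hundreds of megabytes each, which is where most of the virtual-memory footprint comes from.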
More Answers (0)