GPU Out of memory on device.

68 views (last 30 days)
caesar on 16 Mar 2018
Commented: Thyagharajan K K on 28 Nov 2021
I am using the Neural Network Toolbox for deep learning, and I have this chronic problem when doing classification. My DNN model has already been trained, and I keep receiving the same error during classification, despite the fact that I used an HPC (cluster) with an NVIDIA GeForce 1080 as well as my own machine with a GeForce 1080 Ti. The error is:
Error using nnet.internal.cnngpu.convolveForward2D
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
Error in nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 14)
Error in nnet.internal.cnn.layer.Convolution2D/doForward (line 332)
Error in nnet.internal.cnn.layer.Convolution2D/forwardNormal (line 278)
Error in nnet.internal.cnn.layer.Convolution2D/predict (line 124)
Error in nnet.internal.cnn.DAGNetwork/forwardPropagationWithPredict (line 236)
Error in nnet.internal.cnn.DAGNetwork/predict (line 317)
Error in DAGNetwork/predict (line 426)
Error in DAGNetwork/classify (line 490)
Error in Guisti_test_script (line 56)
parallel:gpu:array:OOM
Has anyone faced the same problem before?
P.S.: my test data contains 15,000 images.
1 comment
Thyagharajan K K on 28 Nov 2021
I had a similar problem. The main reason is the large number of learnable parameters. You can reduce the number of nodes in the fully connected layer, or you can shrink the feature map feeding the fully connected layer by increasing the stride value, or you can do both.
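As a minimal sketch of what this means (the layer sizes here are illustrative, not from the comment), increasing the stride of a convolution layer shrinks the activation map that feeds the fully connected layer, which cuts its learnable parameters:
% Hypothetical layer stack; adjust sizes to your own network.
% 'Stride', 2 halves each spatial dimension compared with stride 1,
% so the fully connected layer sees roughly 4x fewer inputs.
layers = [
    imageInputLayer([64 64 3])
    convolution2dLayer(3, 32, 'Stride', 2, 'Padding', 'same')
    reluLayer
    fullyConnectedLayer(10)   % fewer nodes here also reduces learnables
    softmaxLayer
    classificationLayer];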


Accepted Answer

Joss Knight on 17 Mar 2018
Reduce the 'MiniBatchSize' option to classify.
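For example (a minimal sketch; the variable names and batch size are illustrative, not from the thread), classify accepts a 'MiniBatchSize' name-value pair at prediction time, so no retraining is needed:
% Smaller batches trade throughput for a smaller peak GPU footprint.
% 'net' is the already-trained network; 'imdsTest' is an imageDatastore
% over the test images (both names are placeholders).
YPred = classify(net, imdsTest, 'MiniBatchSize', 16);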
2 comments
caesar on 17 Mar 2018
Well, the model I am trying to use has already been trained, so how can I reduce the MiniBatchSize? Should I retrain the model with a reduced MiniBatchSize in order to be able to do classification?


More Answers (3)

Khalid Labib on 19 Feb 2020
Edited: Khalid Labib on 13 May 2020
In "Single Image Super-Resolution Using Deep Learning" MatLab demonstration:
I tried clear my gpu memory ( gpuDevice(1) ) after each iteration and changed MiniBatchSize to 1 in "superResolutionMetrics" helper function, as shown in the following line, but they did not work (error: gpu out of memory):
residualImage =activations(net, Iy, 41, 'MiniBatchSize', 1);
1) To work around this problem you can use the CPU instead:
residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
I think this problem is caused by the high resolution of the test images, e.g. the second image "car2.jpg", which is 3504 x 2336.
2) A better solution is to use the GPU for low-resolution images and the CPU for high-resolution images, by replacing residualImage = activations(net, Iy, 41) with:
sx = size(I);
if sx(1) > 1000 || sx(2) > 1000  % try lower thresholds if it still runs out of memory, e.g. 500
    residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
else
    residualImage = activations(net, Iy, 41);
end
3) The most efficient solution is to divide the image into smaller images (non-overlapping blocks or tiles), such that each tile is at most 1024 pixels in either dimension (or less, depending on your GPU). You can then run the CNN on each tile on the GPU without errors, and afterwards combine the tiles back into an image of the original size, as sketched below.
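A minimal sketch of this tiling approach (the tile size, the single-channel assumption, and the layer index 41 from the super-resolution example are assumptions, not code from the answer):
% Process a large grayscale image Iy in 1024x1024 tiles so each
% activations call fits in GPU memory, then reassemble the result.
% Note: non-overlapping tiles can leave visible seams at tile borders.
tileSize = 1024;                          % tune to your GPU
[h, w] = size(Iy);
residualImage = zeros(h, w, 'like', Iy);
for r = 1:tileSize:h
    for c = 1:tileSize:w
        rows = r:min(r + tileSize - 1, h);
        cols = c:min(c + tileSize - 1, w);
        tile = Iy(rows, cols);
        residualImage(rows, cols) = activations(net, tile, 41);
    end
end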
1 comment
Rui Ma on 22 Apr 2020
Edited: Rui Ma on 22 Apr 2020
Thanks! It works, although it is a little slow.



marie chevalier on 4 Jun 2019
Edited: marie chevalier on 4 Jun 2019
Hi,
I have a similar issue here, and the link given by Joss doesn't really help me to understand how to fix it.
I am working on the "Single Image Super-Resolution Using Deep Learning" MATLAB example.
I would like to use the pretrained network on my own images.
I get a similar error message when arriving at the line:
Iresidual = activations(net,Iy_bicubic,41);
I tried using the command line gpuDevice(1) and it didn't do anything.
I also tried changing the MiniBatchSize to 32 instead of the default 128 and got the same error.
Does anyone understand how to fix this problem?
3 comments
marie chevalier on 26 Jun 2019
It still doesn't work. I'm afraid this is due to something else.
I'm out of ideas at the moment, I did a little cleanup around my computer just to be safe but it didn't change much.
I'll try re-downloading the example again, maybe I changed something in it without noticing.
Akash Tadwai on 17 Dec 2019
@Joss Knight, it still doesn't work in my case. I was training AlexNet with a mini-batch size of 1, but MATLAB still gives the same error.



Alvaro Lopez Anaya on 7 Nov 2019
In my case I had a similar problem, despite having a GTX 1080 Ti.
As Joss said, reducing the MiniBatchSize solved my problem. It's all about the training options.
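As a minimal sketch of what this looks like (the solver and values are illustrative, not from the answer), the batch size used during training is set through trainingOptions:
% A smaller 'MiniBatchSize' lowers peak GPU memory use during training.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 16, ...   % default is 128; reduce until it fits
    'MaxEpochs', 10);
% net = trainNetwork(imdsTrain, layers, options);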
