MATLAB Answers

Out of memory issue on evaluating CNNs

83 views (last 30 days)
ioannisbaptista on 21 Jun 2019
Commented: ioannisbaptista on 21 Jun 2019
The message "Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'." appears when I try to evaluate my trained CNN. I'm using a GeForce GTX 1060 with 6 GB of RAM.
Here's a piece of my code:
testData = load('testROI.mat');
[test_imds, test_pxds] = pixelLabelTrainingData(testData.gTruth);
testDataSet = pixelLabelImageDatastore(test_imds, test_pxds);
unetPxdsTruth = testDataSet.PixelLabelData;
unetpxdsResults = semanticseg(test_imds, unet); % error is caused by this line
unetMetrics = evaluateSemanticSegmentation(unetpxdsResults, unetPxdsTruth);
The command gpuDevice() shows the results below:
CUDADevice with properties:
Name: 'GeForce GTX 1060'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 9.2000
ToolkitVersion: 9.1000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 6.4425e+09
AvailableMemory: 5.0524e+09
MultiprocessorCount: 10
ClockRateKHz: 1670500
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
As you can see, there are more than 5 GB of free memory but, for some reason I don't understand, the out-of-memory problem happens. The curious thing is that it doesn't happen with 500 images in the training stage, but does happen with 100 images in the test evaluation stage. It's important to emphasize that this evaluation attempt uses a pretrained CNN that I created earlier, so the training data is not in GPU memory at this point.
Does anyone know what might be going on?

  0 Comments



Andrea Picciau on 21 Jun 2019
Edited: Andrea Picciau on 21 Jun 2019
The problem is that your GPU's 6 GB of memory is not enough to execute the semantic segmentation with the default settings. Reducing the mini-batch size from the default of 128 should be enough in your case. Try changing the problematic line to the following:
unetpxdsResults = semanticseg(test_imds, unet, 'MiniBatchSize', 4);
You can try increasing that 4 to a larger value, but it wouldn't surprise me if 8 were the maximum your GPU could handle.
You should also have a look at semanticseg's doc page and the name-value pairs in particular.
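If you'd rather not guess, a small try/catch sweep can find the largest mini-batch size that fits. This is only a sketch: test_imds and unet come from the question's code, and the list of candidate sizes is an assumption, not part of the original answer.

```matlab
% Sketch: search downward for a mini-batch size that fits in GPU memory.
% Assumes test_imds and unet exist as in the question's code; the candidate
% sizes below are an arbitrary choice.
for mbs = [32 16 8 4 2 1]
    try
        unetpxdsResults = semanticseg(test_imds, unet, 'MiniBatchSize', mbs);
        fprintf('Succeeded with MiniBatchSize = %d\n', mbs);
        break
    catch err
        if ~contains(err.message, 'Out of memory')
            rethrow(err)    % only swallow out-of-memory errors
        end
        gpuDevice(1);       % reset the device to reclaim memory before retrying
    end
end
```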
A last note on gpuDevice: when you get the out-of-memory error, MATLAB doesn't allocate the data. This is approximately what happens:
  • At rest, something between a few hundred MB and a GB is allocated in your GPU memory. This is the space occupied by the CUDA libraries.
  • When you run semanticseg with the default settings, some of the data MATLAB needs to allocate is far larger than your GPU's remaining 5 GB of memory.
  • MATLAB asks CUDA to allocate that data.
  • CUDA returns an error, saying that your GPU's memory is not large enough.
  • MATLAB reports the error to you and doesn't allocate anything.
  • When you check with gpuDevice, you see 5 GB are free.
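You can observe this sequence directly from the command line. A minimal sketch (the deliberately oversized allocation is my own example, not from the thread):

```matlab
% Inspect free GPU memory around a failed allocation attempt.
d = gpuDevice();                                          % query the current device
fprintf('Free before: %.2f GB\n', d.AvailableMemory/1e9);
try
    A = zeros(1e5, 1e5, 'gpuArray');                      % ~80 GB of doubles: cannot fit on a 6 GB card
catch err
    disp(err.message)                                     % CUDA reports the failure to MATLAB...
end
d = gpuDevice();                                          % re-query the device
fprintf('Free after:  %.2f GB\n', d.AvailableMemory/1e9); % ...and the free memory is unchanged
```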

  4 Comments

Show 1 older comment
ioannisbaptista on 21 Jun 2019
Hello Andrea. First of all, thank you so much for answering me.
I'm glad to tell you that this solution does work. I found it out just before you answered.
The application was indeed running out of memory: after starting, GPU usage quickly peaked at the limit before the exception occurred. I set MiniBatchSize to 32, like this:
unetpxdsResults = semanticseg(test_imds, unet, 'MiniBatchSize', 32);
and it worked. Setting it to 64 (assuming it has to be a 2^k value), the GPU couldn't handle it.
About the notes on gpuDevice: the application does start with a few hundred MB used on the GPU, but after the error the used memory stays at 1.5~2.0 GB. So it looks like something is allocated after all, I just don't know what.
While I'm at it: I posted just part of my code, but in fact I evaluate three CNNs (U-Net, SegNet and a Dilated Convolutions Net). There are three command blocks like that in the same script (except for the data loading, which is done just once). Do you know whether, after evaluating one CNN, the used memory is released for the next one?
Andrea Picciau on 21 Jun 2019
I'm happy to hear you solved the problem!
You're right, things are not actually as simple as I put it in my reply (that's why I said "approximately"!). In reality, MATLAB is doing many different GPU memory optimisations at the same time: it loads some CUDA libraries only when it needs to, it keeps a bit of GPU memory ready to speed up allocations... that's why the GPU memory is a bit different before and after. In practice, you shouldn't be concerned. When you need your GPU memory, MATLAB is going to allocate as much of it as you want, because it doesn't hold on to any GPU memory for itself.
To ensure you make full use of your GPU memory, the main trick is not to use your GPU to drive a display at the same time as doing computations.
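If you want to be certain each network starts from a clean slate, you can also reset the device between evaluations. A sketch, assuming the three networks are held in variables named unet, segnet and dilatednet (the last two names are hypothetical; only unet appears in the original code):

```matlab
% Sketch: evaluate several networks in one script, resetting the GPU in
% between. semanticseg writes its pixel label results to disk, so resetting
% the device afterwards does not discard them.
nets = {unet, segnet, dilatednet};   % segnet and dilatednet are assumed names
results = cell(size(nets));
for k = 1:numel(nets)
    results{k} = semanticseg(test_imds, nets{k}, 'MiniBatchSize', 32);
    reset(gpuDevice);                % release all GPU memory before the next network
end
```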
ioannisbaptista on 21 Jun 2019
Got it, Andrea.
Thank you so much! Regards.


More Answers (0)



