Train shallow network - Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.

load dataset.mat
% The dataset has two variables, X and Y; they must be transposed before use
target = Y.';
inputs = X.';
net1 = feedforwardnet(10);
net1.trainFcn = 'trainscg';
Xgpu = gpuArray(inputs);
Tgpu = gpuArray(target);
net2 = configure(net1,inputs,target);
net2 = train(net2,Xgpu,Tgpu,'useGPU','only','showResources','yes');
My MATLAB code is shown above. The problem is the last line: the GPU cannot handle the whole operation at once. I've seen in other questions that the batch size can be changed so the GPU copes better; however, I cannot find a way to do that with a shallow neural network (I'm required to use train instead of trainNetwork).
The complete error trace that appears in the MATLAB command window is the following:
Error in nntraining.setup>setupPerWorker (line 126)
[net,X,Xi,Ai,T,EW,Q,TS,err] = nntraining.config(net,X,Xi,Ai,T,EW,configNetEnable);
Error in nntraining.setup (line 77)
[net,data,tr,err] = setupPerWorker(net,trainFcn,X,Xi,Ai,T,EW,enableConfigure);
Error in network/train (line 335)
[net,data,tr,err] = nntraining.setup(net,net.trainFcn,X,Xi,Ai,T,EW,enableConfigure,isComposite);
Error in ejemplo2 (line 16)
gpuDevice() shows the following:
>> gpuDevice(1)
ans =
CUDADevice with properties:
Name: 'Quadro P5000'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 10.2000
ToolkitVersion: 10
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.7180e+10
AvailableMemory: 1.4279e+10
MultiprocessorCount: 20
ClockRateKHz: 1733500
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
I would like to know how I can fix this. I'm trying to see the differences between working only on the CPU and using the GPU.
2 Comments
Divya Gaddipati on 3 Dec 2019
Could you mention the MATLAB version that you are using?
Jorge Aarón Morán Holguín on 4 Dec 2019
I have updated the information. The release is R2019a.


Accepted Answer

Divya Gaddipati on 6 Dec 2019
This could happen if your dataset is very large, in which case it is preferable to train the network in mini-batches.
Classical neural networks, such as feedforward nets, do not support mini-batches. This can be worked around in the following ways:
1) Manually implement mini-batch training. For this, split your dataset into mini-batches. For example, you can split your “Xgpu” and “Tgpu” into mini-batches like "mini_Xgpu{i}" and "mini_Tgpu{i}". Then set the number of training epochs in the algorithm to 1 and use two loops: one over the desired number of epochs and another over the iterations (mini-batches). Here's a rough sketch of the training loop for your reference; a sketch of the batch-splitting step follows it.
net = feedforwardnet(10);
net.trainFcn = 'trainscg';
net.trainParam.epochs = 1;           % train for one epoch per call to train
% nEpochs     - total number of epochs
% nIterations - number of mini-batches, depends on the number of training samples
for e = 1:nEpochs
    for i = 1:nIterations
        net = train(net, mini_Xgpu{i}, mini_Tgpu{i}, 'useGPU', 'only');
    end
end
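For example, the mini-batches themselves could be built roughly like this (a rough sketch; "miniBatchSize" is a placeholder you would tune to the available GPU memory, and "inputs"/"target" are the transposed feature-by-sample arrays from the question):
miniBatchSize = 1024;                        % tune so each batch fits on the GPU
numSamples    = size(inputs, 2);             % inputs/target are feature-by-sample
nIterations   = ceil(numSamples / miniBatchSize);
mini_Xgpu = cell(1, nIterations);
mini_Tgpu = cell(1, nIterations);
for k = 1:nIterations
    idx = (k-1)*miniBatchSize + 1 : min(k*miniBatchSize, numSamples);
    mini_Xgpu{k} = gpuArray(inputs(:, idx)); % move one batch at a time to the GPU
    mini_Tgpu{k} = gpuArray(target(:, idx));
end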
2) Use the existing deep learning functionality. For that, transform your feedforward net into a simple deep learning network that has one input layer, one fully connected layer, one custom layer, and one output classification layer. Define the custom layer as the tansig activation function used by feedforward nets. This reproduces a standard feedforward net.
Please refer to the following link for more information about creating custom layers: https://www.mathworks.com/help/deeplearning/ug/define-custom-deep-learning-layers.html
This approach automatically uses stochastic gradient descent as the training algorithm, which works with mini-batches of data.
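For example, such a network could look roughly like the sketch below. This is only a sketch under assumptions: tanhLayer stands in for a hand-written tansig custom layer (tansig is the hyperbolic tangent), "Ylabels" is assumed to be a categorical label vector for the targets, and the layer sizes and training options are placeholders, not values from this thread.
numFeatures = size(inputs, 1);             % inputs is numFeatures-by-numSamples
numSamples  = size(inputs, 2);
numClasses  = numel(categories(Ylabels)); % Ylabels: assumed categorical labels
layers = [
    imageInputLayer([numFeatures 1 1], 'Normalization', 'none')  % feature input, works in R2019a
    fullyConnectedLayer(10)
    tanhLayer                              % plays the role of tansig in feedforwardnet
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 128, ...              % the knob that limits GPU memory per step
    'MaxEpochs', 50, ...
    'ExecutionEnvironment', 'gpu');
% trainNetwork expects 4-D image-style input when using imageInputLayer
X4d = reshape(inputs, [numFeatures 1 1 numSamples]);
net = trainNetwork(X4d, Ylabels, layers, options);
Here the 'MiniBatchSize' option is what keeps memory use on the GPU bounded, which is the knob that is missing from train for shallow networks.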
Hope this helps!

More Answers (0)
