When using the GPU with a neural net, I run out of shared memory per block; is there a way to handle this?
I want to train a neural net with several hundred images (75x75 pixels, so 5625 elements each). This works in MATLAB on the CPU. When I try to train with 'useGPU' I get the error "The shared memory size for a kernel must be a positive integer, and must not exceed the device's limit on the amount of shared memory per block (49152 bytes)." from nnGPU.codeHints. The code:
net1 = feedforwardnet(10);
xg = nndata2gpu(inputMatrix); % move inputs to the GPU
tg = nndata2gpu(targetMatrix); % move targets to the GPU
net2 = configure(net1,inputMatrix,targetMatrix);
net2 = train(net2,xg,tg); % fails with the shared memory error
Is there a way to tell the neural net training system to process the training in smaller chunks? Or is there some smarter way to do this?
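For context, one way to feed the training data in smaller chunks is to call train repeatedly on column subsets, since train continues from the network's current weights. A minimal sketch, assuming the inputMatrix/targetMatrix variables above and an illustrative chunkSize:
chunkSize = 100; % illustrative; tune to fit GPU memory
nSamples = size(inputMatrix,2); % samples are stored as columns
net2 = configure(feedforwardnet(10),inputMatrix,targetMatrix);
for k = 1:chunkSize:nSamples
    idx = k:min(k+chunkSize-1,nSamples);
    net2 = train(net2,inputMatrix(:,idx),targetMatrix(:,idx)); % train on one chunk
end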
Answer (1)
Mark Hudson Beale on 19 Jun 2013
Edited: Mark Hudson Beale on 5 Jul 2013
I was able to reproduce your error. In MATLAB R2013a the nndata2gpu array transformation is no longer required, and if gpuArray is used instead of nndata2gpu the amount of shared memory required is reduced. You can check your device's shared memory limit as follows:
d = gpuDevice % query the currently selected GPU device
d.MaxShmemPerBlock % shared memory limit per block, in bytes
Using R2013a and gpuArray, I was able to train the following random problem on a mobile GPU (NVIDIA GeForce GT 650M, 1024 MB):
x = rand(5625,500); % 500 random samples, 5625 elements each (75x75)
t = rand(1,500); % one target value per sample
X = gpuArray(x); % plain gpuArray instead of nndata2gpu
T = gpuArray(t);
net = feedforwardnet(10);
net = configure(net,x,t);
net.trainFcn = 'trainscg'; % scaled conjugate gradient runs well on the GPU
net = train(net,X,T);
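If it helps, here is a quick sketch of evaluating the trained network on the GPU data and copying the predictions back to host memory (the variable names Y and y are just illustrative):
Y = net(X); % forward pass on the GPU; Y is a gpuArray
y = gather(Y); % copy the predictions back to CPU memory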
I hope that helps!