Trying to include learning rate and momentum in sgdmupdate function under multiple GPUs
I am working on modifying the example given in
"https://www.mathworks.com/help/deeplearning/ug/train-network-in-parallel-with-custom-training-loop.html?searchHighlight=sgdmupdate%20multiple%20gpu&s_tid=srchtitle_sgdmupdate%2520multiple%2520gpu_3".
The modification is that I am trying to incorporate learning rate and momentum into the sgdmupdate function.
That is,
[dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables,workerGradients,workerVelocity,learnRate,momentum);
instead of
[dlnet.Learnables,workerVelocity] = sgdmupdate(dlnet.Learnables,workerGradients,workerVelocity); (line 108 in the example).
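For context, a minimal sketch of how the two hyperparameters would be passed in, assuming scalar `learnRate` and `momentum` variables defined before the parallel loop (the variable names and values here are illustrative, not from the original example):

```matlab
% Hyperparameters defined up front (illustrative values).
learnRate = 0.01;
momentum  = 0.9;

spmd
    % ... compute workerGradients for this worker's portion of the
    % mini-batch, then aggregate gradients across workers as in the
    % linked example ...

    % SGDM update with an explicit learning rate and momentum. Both
    % arguments must be numeric scalars, and their names must be valid
    % MATLAB identifiers (e.g. learnRate, not "learning rate").
    [dlnet.Learnables,workerVelocity] = sgdmupdate( ...
        dlnet.Learnables,workerGradients,workerVelocity, ...
        learnRate,momentum);
end
```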
However, that modification results in the following error when using 4 GPUs (there is no error with a single GPU):

Could anyone help me with this?
Thanks very much!
3 Comments
Joss Knight
10 March 2022
Also make sure all your gradients are finite with something like assert(all(cellfun(@(x)all(isfinite(x(:))),workerGradients))). Sometimes when you mess with the learn rate you can get NaNs and infinities in the gradients.
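Expanded slightly, that check might look like the following inside the `spmd` block. This is a sketch; it assumes `workerGradients` stores its gradient `dlarray`s in a `Value` variable of cell type, as the `Learnables` table in the linked example does:

```matlab
% Assert that every gradient on this worker is finite before calling
% sgdmupdate. NaN/Inf gradients often appear when the learning rate is
% too large. extractdata pulls plain numeric arrays out of the dlarrays.
gradOK = cellfun(@(g) all(isfinite(extractdata(g)),'all'), ...
    workerGradients.Value);
assert(all(gradOK), 'Non-finite gradients found on worker %d.', labindex);
```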
Answers (0)