
Why do NaNs appear in the mini-batch loss and mini-batch RMSE when training a convolutional neural network for regression?

14 views (last 30 days)
I used the same code steps as in the following link, but modified them for my own work:
https://www.mathworks.com/help/nnet/examples/train-a-convolutional-neural-network-for-regression.html
traindata=rtrain_csiq;
Y = rscore;
testdata=utest_csiq;
layers = [ ...
imageInputLayer([256 256 1])
convolution2dLayer(12,25)
reluLayer
fullyConnectedLayer(1)
regressionLayer];
options = trainingOptions('sgdm','InitialLearnRate',0.001,'MaxEpochs',15);
net = trainNetwork(traindata,Y,layers,options)
predictedTest = predict(net,testdata);
but the mini-batch loss and mini-batch RMSE in the training output are NaN.
How can I solve this? Thanks.

Answer (1)

Amy on 31 Aug 2017
Hi Ismail,
Sometimes this can happen if your data includes many regressors and/or large regression response values. This leads to larger losses that can become NaNs.
Two possible solutions:
  1. Try a lower initial learning rate.
  2. Normalize the responses (the variable Y in your example) so that the maximum value is 1. You can use the normc function to do this.
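A minimal sketch of both suggestions, assuming the variable names from the question (`traindata`, `Y`, `layers`, `testdata`) are already in the workspace. Note that `normc` scales each column to unit Euclidean norm; the simpler max-scaling shown here instead makes the largest response magnitude 1, and keeps the scale factor so predictions can be mapped back to the original units:

```matlab
% 1. Rescale the responses so the maximum absolute value is 1
%    (an alternative to normc; keep the factor for un-scaling later).
Yscale = max(abs(Y));
Ynorm  = Y ./ Yscale;

% 2. Use a lower initial learning rate than the original 0.001.
options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-4, ...
    'MaxEpochs',15);

net = trainNetwork(traindata, Ynorm, layers, options);

% Undo the scaling when predicting on the test set.
predictedTest = predict(net, testdata) .* Yscale;
```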
2 Comments
Ismail T. Ahmed on 2 Sep 2017 (edited 2 Sep 2017)
Thanks Amy, I applied your suggestions but I still get NaN. I used 0.0001 for the initial learning rate and applied the normc function to Y.
AlexanderTUE on 4 Sep 2017
Hi Amy, hi Ismail,
I had a similar problem in the past. It seems that a single convolutional layer is not enough for such a large image size. I used three conv layers with initial weights. Please see the following Q&A: https://de.mathworks.com/matlabcentral/answers/337587-how-to-avoid-nan-in-the-mini-batch-loss-from-traning-convolutional-neural-network
Alex
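A rough sketch of a deeper architecture in the spirit of this comment: three convolutional layers with pooling instead of the single 12x25 conv layer. The filter sizes and counts here are illustrative assumptions, not values taken from the linked thread, and the custom weight initialization Alexander mentions is omitted:

```matlab
% Three conv layers with pooling for 256x256 grayscale inputs
% (layer sizes are illustrative; numeric 'Padding' keeps the syntax
% compatible with older releases).
layers = [ ...
    imageInputLayer([256 256 1])
    convolution2dLayer(5,16,'Padding',2)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding',1)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,64,'Padding',1)
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];
```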



