Why do NaN values appear in the mini-batch loss and mini-batch RMSE when training a convolutional neural network for regression?
I used the same code steps as in the following link, modified for my own data:
https://www.mathworks.com/help/nnet/examples/train-a-convolutional-neural-network-for-regression.html
% Training images, regression responses, and test images
traindata = rtrain_csiq;
Y = rscore;
testdata = utest_csiq;

layers = [ ...
    imageInputLayer([256 256 1])
    convolution2dLayer(12, 25)
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];

options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.001, ...
    'MaxEpochs', 15);

net = trainNetwork(traindata, Y, layers, options)
predictedTest = predict(net, testdata);
but the output is as follows:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/166105/image.png)
Please, how can I solve this? Thanks.
Answers (1)
Amy
31 Aug 2017
Hi Ismail,
Sometimes this can happen if your data includes many regressors and/or large regression response values. This leads to large losses that can overflow to NaN.
Two possible solutions:
- Try a lower initial learning rate.
- Normalize the responses (the variable Y in your example) so that the maximum value is 1. You can use the normc function to do this.
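Both suggestions are small changes to the original training script. A sketch, assuming rscore is a nonnegative numeric vector of response values (the exact scaling and learning-rate value are illustrative, not tuned):

```matlab
% Scale responses so the maximum value is 1; if rscore can be negative,
% rescale with its full min/max range instead.
maxScore = max(rscore(:));
Y = rscore / maxScore;

% Lower the initial learning rate by an order of magnitude.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.0001, ...
    'MaxEpochs', 15);

net = trainNetwork(traindata, Y, layers, options);

% Undo the scaling when interpreting the predictions.
predictedTest = predict(net, testdata) * maxScore;
```

Scaling the responses keeps the initial regression loss (half-mean-squared-error) small, which is what prevents it from blowing up to NaN in the first iterations.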
2 comments
AlexanderTUE
4 Sep 2017
Hi Amy, hi Ismail,
I had a similar problem in the past. It seems that a single convolutional layer is not enough for such large image sizes. I used three conv layers with explicit initial weights. Please see the following Q&A: https://de.mathworks.com/matlabcentral/answers/337587-how-to-avoid-nan-in-the-mini-batch-loss-from-traning-convolutional-neural-network
Alex
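A deeper stack along those lines might look like the following sketch. The filter sizes, filter counts, and the 0.01 weight scale are illustrative assumptions, not the exact architecture from the linked thread:

```matlab
% Three smaller convolution layers instead of one large 12x25 layer
% (layer sizes here are assumptions chosen for illustration).
layers = [ ...
    imageInputLayer([256 256 1])
    convolution2dLayer(5, 16, 'Padding', 2)
    reluLayer
    convolution2dLayer(5, 32, 'Padding', 2)
    reluLayer
    convolution2dLayer(5, 64, 'Padding', 2)
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];

% Explicit small-variance initial weights for the first conv layer,
% size [filterSize filterSize numChannels numFilters].
layers(2).Weights = 0.01 * randn(5, 5, 1, 16);
```

Smaller initial weights keep the early activations, and therefore the initial loss, in a numerically safe range.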