Why does the training set accuracy decrease dramatically after stopping trainNetwork?
After manually stopping trainNetwork, the validation accuracy dropped dramatically:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/199854/image.png)
When I tested the training-set accuracy, it was also only about 60%:
predY = classify(net,xTrain);
trainAcc = mean(predY == yTrain)   % assuming yTrain holds the categorical training labels
Any ideas what I'm doing wrong?
4 Comments
Don Mathis
23 Jan 2019
What is your network architecture? Does it contain dropout layers followed later by batch normalization layers?
Answers (1)
Don Mathis
8 Feb 2019
Maybe your minibatch size is too small. The accuracy drop may be due to the batch normalization layers being finalized: at that point, the mean and variance of the incoming activations of each batchnorm layer are recomputed over the whole training set. If those full-training-set statistics don't match the minibatch statistics well, the finalized batchnorm layers will not perform a very good normalization.
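For reference, a minimal sketch of where the minibatch size is set; the solver choice, option values, and variable names (xTrain, yTrain, xValidation, yValidation, layers) are placeholders, not code from this thread:
opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 2000, ...                       % larger batches give batchnorm statistics closer to the full-set statistics
    'MaxEpochs', 30, ...
    'ValidationData', {xValidation, yValidation}, ...
    'Plots', 'training-progress');
net = trainNetwork(xTrain, yTrain, layers, opts);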
3 Comments
Don Mathis
11 Feb 2019
You could try increasing the minibatch size to see whether that fixes the problem. I would increase it exponentially: 1000, 2000, 4000, 8000, and so on. Or you could simply start with the largest size that fits in your GPU memory.
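A rough sketch of that exponential sweep, again with placeholder variable names (xTrain, yTrain, layers), not code from this thread:
for mbs = [1000 2000 4000 8000]
    opts = trainingOptions('sgdm', 'MiniBatchSize', mbs, 'MaxEpochs', 10);
    net = trainNetwork(xTrain, yTrain, layers, opts);      % retrain with the larger minibatch
    trainAcc = mean(classify(net, xTrain) == yTrain);      % training-set accuracy after batchnorm finalization
    fprintf('MiniBatchSize %d: training accuracy %.3f\n', mbs, trainAcc);
end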
Don Mathis
11 Feb 2019
Also: Why does your plot show "Iterations per epoch: 1"? Were you using MiniBatchSize=30000 in that run?
What are you passing to trainingOptions()?