Why does the neural network training end before reaching the specified maximum number of epochs?
MathWorks Support Team
9 Nov 2017
Edited: MathWorks Support Team
7 Sep 2021
This is how I am setting the training options:
options = trainingOptions('sgdm', 'MiniBatchSize',miniBatchSize,'MaxEpochs',4000)
But it looks like training ended without reaching the maximum number of epochs. Is this normal? What actually determines the total number of epochs during training?
Accepted Answer
MathWorks Support Team
18 Aug 2021
Edited: MathWorks Support Team
7 Sep 2021
Several parameters can cause a neural network to stop training early.
As you may know, an epoch is one full pass of the training algorithm over the entire training set. In general, training can stop before reaching the specified maximum number of epochs in order to avoid overfitting to the data, thus improving the network's generalization. That is, training stops if the validation results are no longer improving (within some tolerance).
Please refer to the Deep Learning Toolbox documentation on early stopping for more information on how this behavior improves network generalization.
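As a sketch, the validation-based stopping behavior is controlled through trainingOptions. The variable names XValidation, YValidation, and miniBatchSize below are placeholders, and the chosen frequency/patience values are illustrative, not recommendations:

```matlab
% Sketch: validation-based early stopping with trainingOptions.
% Training stops if the validation loss fails to improve for
% 'ValidationPatience' consecutive validation evaluations, even if
% 'MaxEpochs' has not yet been reached.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', miniBatchSize, ...            % placeholder variable
    'MaxEpochs', 4000, ...
    'ValidationData', {XValidation, YValidation}, ... % placeholder validation set
    'ValidationFrequency', 30, ...  % iterations between validation evaluations
    'ValidationPatience', 5);       % stop after 5 evaluations with no improvement
```

If you want training to run for the full 4000 epochs regardless of validation performance, setting 'ValidationPatience' to Inf (or omitting validation data) disables this early-stopping check, although training can still end via the Stop button or an output function.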