Why do I see a drop (or jump) in my final validation accuracy when training a deep learning network?
MathWorks Support Team
19 Feb 2019
Edited: MathWorks Support Team
19 Feb 2019
Accepted Answer
MathWorks Support Team
19 Feb 2019
If the network contains batch normalization layers, the final validation metrics often differ from the validation metrics evaluated during training. This is because the network undergoes a 'finalization' step after the last iteration, in which the batch normalization statistics are recomputed on the entire training data set, whereas during training they are estimated from each mini-batch.
If the network contains dropout layers in addition to batch normalization layers, the interaction between the two can aggravate this issue, as described in https://arxiv.org/abs/1801.05134.
If you remove the batch normalization (and dropout) layers from the network, the final accuracy should match the last-iteration accuracy, as in the sketch below.
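For example, here is a minimal sketch (assuming the Deep Learning Toolbox and its built-in digit data set; the layer sizes and option values are illustrative, not a prescribed recipe). Training the same small network with and without a batchNormalizationLayer lets you compare the last-iteration and final validation accuracies in the training-progress plot:

% Minimal sketch; assumes the Deep Learning Toolbox.
% Load the built-in digit image data set for training and validation.
[XTrain,YTrain] = digitTrain4DArrayData;
[XVal,YVal]     = digitTest4DArrayData;

% Network WITH batch normalization: the final validation accuracy may
% differ from the last-iteration value, because the batch statistics
% are recomputed on the whole training set during finalization.
layersBN = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% The same network WITHOUT batch normalization, for comparison: here
% the final and last-iteration validation accuracies should match.
layersNoBN = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

opts = trainingOptions('sgdm', ...
    'MaxEpochs',4, ...
    'ValidationData',{XVal,YVal}, ...
    'ValidationFrequency',30, ...
    'Plots','training-progress');

netBN   = trainNetwork(XTrain,YTrain,layersBN,opts);
netNoBN = trainNetwork(XTrain,YTrain,layersNoBN,opts);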
Increasing the mini-batch size can also alleviate this issue, since statistics computed from a larger mini-batch are better estimates of the statistics of the entire training data; see the example below.
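If the batch normalization layers need to stay in the network, you can instead raise 'MiniBatchSize' in trainingOptions. A sketch, reusing the variables from the example above (the value 256 is illustrative):

% Larger mini-batches yield batch statistics that are closer to the
% full-training-set statistics used in the finalization step.
% 256 is illustrative; choose the largest size that fits in memory.
opts = trainingOptions('sgdm', ...
    'MiniBatchSize',256, ...
    'MaxEpochs',4, ...
    'ValidationData',{XVal,YVal});
net = trainNetwork(XTrain,YTrain,layersBN,opts);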