
Ensuring reproducibility in training YOLOv2 in the Deep Learning Toolbox

Michael Younger, 30 Apr 2020
Answered: Ryan Comeau, 10 May 2020
I'm using the YOLOv2 network in the Deep Learning Toolbox. We are seeing significant variation in test results when running the same training code more than once.
Is it possible to ensure reproducibility in training? If so, what options/flags would need to be set to ensure reproducible training?
One option I have already found is setting the "Shuffle" training option to "never" (its default is "once").
But are there other flags/random seeds that I need to set to ensure repeatability?
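For reference, a rough sketch of the option I'm setting (just the relevant fragment, not my full training script):

options = trainingOptions('sgdm', ...
    'Shuffle', 'never');   % keep the mini-batch order fixed (default is 'once')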
Thanks!
2 Comments
Mohammad Sami, 30 Apr 2020 (edited: 30 Apr 2020)
You can try calling rng with a fixed seed as a first step.
I could not find documentation specific to training deep learning models, but I assume this applies to them as well:
https://www.mathworks.com/help/matlab/math/generate-random-numbers-that-are-repeatable.html
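A minimal sketch of the idea (the trainYOLOv2ObjectDetector call and the variable names are assumptions about your setup, not tested code):

rng(42)   % fix MATLAB's global random seed before any training code runs

% trainingData, lgraph and options stand in for your own data and network
detector = trainYOLOv2ObjectDetector(trainingData, lgraph, options);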
Michael Younger, 30 Apr 2020
Interesting; thank you!


Answers (1)

Ryan Comeau, 10 May 2020
Hello,
What you are experiencing is very normal for deep learning. Network initialization assigns random initial weights to each learnable layer; these can be fixed by fixing the random seed for initialization, as mentioned in the comments above. However, that alone may not resolve your problem. The algorithm that minimizes your loss function is stochastic gradient descent, which samples mini-batches randomly, so it is by definition not deterministic, and there will always be some variance in your results. This can be seen as a good thing, however: we don't want to get stuck in a local minimum, which would be more likely if the algorithm were deterministic.
If you want deep learning to behave as deterministically as possible, set the mini-batch size to 1. This removes much of the randomness that helps training escape local minima, and you will see a drop in performance.
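As a sketch, that would look something like this (only the relevant options are shown):

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 1, ...   % one sample per gradient step
    'Shuffle', 'never');      % keep the sample order fixed as well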
The "Shuffle" option you describe controls whether the data order is reshuffled, so that your mini-batches do not always contain the same samples.
Lastly, if you do want "consistent" training results, simply redefine what consistent means in this case: run your training 10 times, and the result that occurs most frequently is your replicable result.
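A rough sketch of that loop (evaluateDetector is a hypothetical helper returning one summary metric; trainingData, testData, lgraph and options stand in for your own setup):

numRuns = 10;
score = zeros(numRuns, 1);
for k = 1:numRuns
    rng(k);  % a fixed, recorded seed per run, so each run is individually repeatable
    detector = trainYOLOv2ObjectDetector(trainingData, lgraph, options);
    score(k) = evaluateDetector(detector, testData);  % hypothetical evaluation helper
end
median(score)   % report a typical result across runs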
Hope this helps,
RC
