train neural net with prior solution

3 views (last 30 days)
Dilip
Dilip on 28 Nov 2024
Edited: Matt J on 28 Nov 2024
Net training finished with 10000 epochs. I need to start where it finished.

Answers (1)

Matt J
Matt J on 28 Nov 2024
Edited: Matt J on 28 Nov 2024
Your post is under-detailed and does not tell us how the network and training are implemented. If I assume you are using trainnet, you can simply run the training again, passing your pre-existing, partially trained network as the second input argument.
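For concreteness, a minimal sketch of what that might look like, assuming trainnet with a dlnetwork net, a training datastore dsTrain, and an "mse" loss (these names and options are assumptions, not taken from your post):

options = trainingOptions("adam", MaxEpochs=10000);

% First run (the training that has already finished):
netTrained = trainnet(dsTrain, net, "mse", options);

% To pick up where it left off, call trainnet again, passing the
% partially trained network as the second input argument:
netTrained = trainnet(dsTrain, netTrained, "mse", options);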
4 Comments
Dilip
Dilip on 28 Nov 2024
When I do this, the initial performance is much larger than the final performance of the previous net. Is this a concern, or should one not worry about it? Many thanks.
Matt J
Matt J on 28 Nov 2024
Edited: Matt J on 28 Nov 2024
This method of resuming training is not optimal. The optimal method is to use checkpoint saves, as explained at the link I gave you. But since you did not set checkpoints, the training algorithm does not have everything it needs to resume gracefully.
Even though you have the network weights and biases, there is no record of the prior algorithm state variables, such as the learning rate schedule and momentum. The algorithm will therefore need time to reconverge. You may still save iterative effort compared to starting from scratch, but next time you should use checkpoints. Or consider moving to the Deep Learning Toolbox, which gives you finer control over the algorithm variables.
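As an illustration, here is a sketch of how checkpointing might be set up with trainnet; the folder name, checkpoint frequency, and the variable name net inside the saved MAT-files are assumptions based on the CheckpointPath option of trainingOptions, not details from your setup:

% The checkpoint folder must already exist before training starts.
options = trainingOptions("adam", ...
    MaxEpochs=10000, ...
    CheckpointPath="checkpoints", ...  % save intermediate networks here
    CheckpointFrequency=50);           % save every 50 epochs

netTrained = trainnet(dsTrain, net, "mse", options);

% Later, load the most recent checkpoint and resume from it:
files = dir(fullfile("checkpoints", "*.mat"));
[~, newest] = max([files.datenum]);
checkpoint = load(fullfile(files(newest).folder, files(newest).name));
netTrained = trainnet(dsTrain, checkpoint.net, "mse", options);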
