Stopping conditions for DQN training

7 views (last 30 days)
Zonghao zou on 18 Oct 2020
Answered: Madhav Thakker on 25 Nov 2020
Hello all,
I am currently playing around with DQN training. I am trying to find a systematic way to stop the training process rather than stopping it manually. However, for my training process I have no idea what the final rewards will be, and I don't have a target value to reach, so I do not know when to stop.
Is there a way for me to stop the DQN agent without that information and still guarantee some type of convergence?
Thanks for helping!

Answers (1)

Madhav Thakker on 25 Nov 2020
Hi Zonghao zou,
One possible quantity to watch when deciding to stop training is the Q-values. If the Q-values have saturated, the network is no longer learning. You could monitor your Q-values, choose a threshold on how much they change, and stop training early once the change falls below it. This kind of early stopping does not require knowing the final reward or a target value.
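As a rough sketch of how this could look in MATLAB (not part of the original answer): train in fixed-size rounds and compare between rounds the critic's Episode Q0 estimate, which train returns in the EpisodeQ0 field of its statistics. The agent and env variables, and the window, tol, and maxRounds values, are placeholders you would define and tune for your own problem.

% Sketch: stop DQN training once Episode Q0 (the critic's estimate of the
% discounted return at the start of each episode) stops changing.
% Assumes `agent` (a DQN agent) and `env` already exist; the numeric
% settings below are illustrative only.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',          200, ...              % episodes per training round
    'MaxStepsPerEpisode',   500, ...
    'StopTrainingCriteria', 'EpisodeCount', ...   % let each round run to completion
    'StopTrainingValue',    200, ...
    'Verbose',              false, ...
    'Plots',                'none');

window    = 50;    % episodes averaged when checking for saturation
tol       = 1e-2;  % treat Q0 as saturated if its mean moves less than this
maxRounds = 20;    % hard cap so the loop always terminates

prevQ0 = -inf;
for trainingRound = 1:maxRounds
    stats  = train(agent, env, trainOpts);              % resumes training the same agent
    meanQ0 = mean(stats.EpisodeQ0(end-window+1:end));   % recent critic estimates
    if abs(meanQ0 - prevQ0) < tol
        fprintf('Q-values saturated after round %d, stopping.\n', trainingRound);
        break
    end
    prevQ0 = meanQ0;
end

Averaging EpisodeQ0 over the last window episodes smooths the episode-to-episode noise in the critic's estimates, so the loop reacts to the trend rather than to a single lucky or unlucky episode.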
Hope this helps.
