
Custom Training Loop with configured DDPG Agent

10 views (last 30 days)
Allmo on 30 May 2022
Answered: Poorna on 30 Aug 2023
Hello,
Is it possible to run a custom training loop with an already configured DDPG agent?
Background: after each episode I want to check whether the average reward has reached a new maximum. When a new maximum is reached, the agent should be saved to a MAT-file; otherwise it should not be saved, in order to keep the amount of data down. In the training options I can only set a threshold for when the agent should be saved, but then every agent is saved as soon as that threshold is exceeded.
Thanks!
Best regards,
allmo

Accepted Answer

Poorna on 30 Aug 2023
Hi,
I understand that you would like to save the agent after every episode based on whether the new episode's reward is greater than the existing average reward. You can achieve this by using “rlDataLogger”.
Create a new “FileLogger” object as shown below:
fileLgr = rlDataLogger();
Then, set the episode-finished callback:
fileLgr.EpisodeFinishedFcn = @myEpisodeFinishedFcn;
where myEpisodeFinishedFcn is your custom function which implements the logic to conditionally save the agent to disk.
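For example, such a callback could look like the minimal sketch below. It assumes the data structure passed to the callback exposes data.Agent and data.EpisodeInfo.CumulativeReward; the file name bestAgent.mat is only an illustrative choice:
function dataToLog = myEpisodeFinishedFcn(data)
% Compare this episode's reward with the best reward seen so far and
% save the agent only when a new maximum is reached.
persistent bestReward                               % best episode reward so far
if isempty(bestReward)
    bestReward = -Inf;
end
episodeReward = data.EpisodeInfo.CumulativeReward;  % assumed field
if episodeReward > bestReward
    bestReward  = episodeReward;
    saved_agent = data.Agent;                       % assumed field
    save("bestAgent.mat","saved_agent")             % overwrite previous best
end
dataToLog = [];   % nothing extra to write through the logger itself
end
The logger is then passed to the training call, e.g. result = train(agent,env,trainOpts,Logger=fileLgr);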
Hope this helps!

More Answers (0)
