How to show the loss change of the critic or actor network when training with the DDPG algorithm

11 views (last 30 days)
蔷蔷 汪 on 22 February 2022
Answered: Poorna on 29 September 2023
How can I show the change in loss of the critic or actor network when training with the DDPG algorithm?

Answers (1)

Poorna on 29 September 2023
Hi,
I understand that you would like to view the change in the loss values of the actor and critic networks of a DDPG agent during training.
You can achieve this with a MonitorLogger object. Follow these steps:
  1. Create a "monitor" object using the "trainingProgressMonitor" function:
% create a monitor object
monitor = trainingProgressMonitor();
2. Create a "logger" object using the "rlDataLogger" function with the "monitor" as input:
% create a logger from the monitor
logger = rlDataLogger(monitor);
3. Use the "AgentLearnFinishedFcn" callback property of the logger object to log the losses. Write a custom callback function that receives a structure containing the actor and critic losses, as well as other useful information, and returns a structure with the values you want logged. Then pass the logger to the "train" function.
4. At the end of training, you can access the logged data for further analysis or visualization; with a MonitorLogger the values are also plotted live in the Training Progress Monitor window.
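The steps above can be combined into a short sketch. This assumes a DDPG agent `agent`, an environment `env`, and training options `trainOpts` already exist, and that the loss fields in the callback input structure are named `ActorLoss` and `CriticLoss` (as documented for actor-critic agents; confirm the field names for your release):

```matlab
% Steps 1-2: create the monitor and attach a logger to it
monitor = trainingProgressMonitor();
logger  = rlDataLogger(monitor);

% Step 3: log the actor and critic losses after every learning step
logger.AgentLearnFinishedFcn = @logLosses;

% Pass the logger to train so the callback fires during training
% (agent, env, and trainOpts are assumed to be defined already)
trainResult = train(agent, env, trainOpts, Logger=logger);

function dataToLog = logLosses(data)
    % The callback input structure carries the most recent losses;
    % the returned field names become the logged/plotted data names.
    dataToLog.ActorLoss  = data.ActorLoss;
    dataToLog.CriticLoss = data.CriticLoss;
end
```

As training runs, the two logged quantities appear as plots in the Training Progress Monitor window, giving a live view of how each loss evolves.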
For more information on these functions, please refer to the documentation for "trainingProgressMonitor" and "rlDataLogger".
Hope this helps.
