Reinforcement learning DDPG action fluctuations

4 views (last 30 days)
Tech Logg Ding on 17 November 2020
Commented: sungho park on 23 February 2022
Upon attempting to train the path following control example in MATLAB, the training process generated the behaviour shown in the picture.
  1. The steering angle is constantly fluctuating.
  2. The acceleration is also constantly fluctuating.
  3. The reward convergence is very noisy and seems to jump between a high reward and low reward.
The example from here shows that it should have converged already and the actions should be smooth.
What could be causing this issue? This has also happened in other projects of mine. One method I used was to penalise the fluctuation in the reward function with this term, inspired by a paper published by Wang et al.:
10 * ( d/dt(current_action) * d/dt(previous_action) < 0 )
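As an illustration, here is a minimal MATLAB sketch of how such a penalty could be computed inside a custom reward function; the names action, prevAction, prevPrevAction, dt and baseReward are assumptions for the example, not names from the MATLAB example:

% Oscillation penalty: fires when the action derivative changes sign (chattering).
% action, prevAction, prevPrevAction, dt and baseReward are assumed to be
% available in the environment's step function; the names are illustrative.
dA_now  = (action - prevAction) / dt;         % d/dt of current action
dA_prev = (prevAction - prevPrevAction) / dt; % d/dt of previous action
oscillationPenalty = 10 * ((dA_now * dA_prev) < 0);
reward = baseReward - oscillationPenalty;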
Please let me know how to avoid this problem. Thank you very much!
  2 Comments
Emmanouil Tzorakoleftherakis on 17 November 2020
Hello,
One clarification: are the scope signals you are showing on the right captured during training or after training?
Tech Logg Ding on 17 November 2020
Hi,
Thank you for the reply.
It was during training. However, after training completed, the actions still fluctuate, though with smaller magnitude and frequency. I did not save the image, so I can't post it here. The example in the link also shows fluctuations in the steering angle.


Accepted Answer

Emmanouil Tzorakoleftherakis on 22 November 2020
Hello,
During training, DDPG explores the action space by adding noise to the output of the actor (see step 1 here). That explains the variance during training.
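For reference, a minimal sketch of shrinking that exploration noise through the agent options; the values are illustrative, and actor, critic and the sample time Ts are assumed to exist already:

% Reduce and decay the Ornstein-Uhlenbeck exploration noise of a DDPG agent.
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);
agentOpts.NoiseOptions.Variance          = 0.1;   % smaller initial noise variance
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;  % decay towards near-greedy actions
agent = rlDDPGAgent(actor, critic, agentOpts);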
Even after training, you may see small variations in the actor output for observations that are different but close to each other. After all, you are effectively using a function approximator to approximate a nonlinear relationship between inputs (observations) and outputs (actions). If you want the policy to be more accurate near the setpoint, you could consider training further near the values of interest.
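If it helps, one way to see this is to query the trained (greedy) policy directly with getAction; the observation vectors below are purely illustrative and must match your environment's observation specification:

% Probe the trained actor at two nearby observations (illustrative values).
obs1 = {[0.010; 0; 0.02; 0; 0; 0]};
obs2 = {[0.011; 0; 0.02; 0; 0; 0]};
a1 = getAction(agent, obs1);   % deterministic actor output, no exploration noise
a2 = getAction(agent, obs2);
% a1 and a2 will typically be close but not identical: the actor is a smooth
% function approximator, not an exact controller.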
Also, the result you get on your machine may differ from the one posted in the documentation. Please see this post for an explanation.
Hope that helps
  1 Comment
sungho park on 23 February 2022
For me, the actor output is always constant after training. Can you explain why?


More Answers (0)

Release

R2020b
