MATLAB Answers

DDPG Agent: Training not stabilizing, creating an unstable model

13 views (past 30 days)
Rajesh Siraskar on 16 Dec 2019
Dear MATLAB,
I am training a DDPG agent on randomly set straight lines (levels) and later testing it on a benchmark waveform. Shouldn't training stabilize over time and produce a stable model? The agent saved at 960 episodes seems to perform better than the one saved at 2180 episodes. Both agents were saved when the average reward over 50 episodes exceeded 25,000. The difference between the models saved at 940 and 960 episodes also seems drastic.
The picture below shows the Episode Manager, with the average reward (over 50 episodes) going up and down several times. One would expect it to look like the dark green line, stabilizing over time. What change can I make to create a stable model?
Action space: 1.0 to 10.0, continuous
Test waveform: 2000 seconds long
Training sample time and simulation length: Ts = 1 and Tf = 250
Hyper-parameters: Learning rates: Critic = 1e-03, Actor = 1e-04 | Gamma (discount) = 0.95, Batch size = 64
Neurons: Observation path: FC1 = 64, FC2 = 24; Actor path: FC1 = 24
DDPG noise variance = 0.1, VarianceDecayRate = 1e-5 (I have also tried a noise variance of 0.45 and decay rates of 1e-3, 1e-4, etc.)
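For reference, the settings above can be collected into agent options. This is a minimal sketch assuming the Reinforcement Learning Toolbox API of that era (`rlDDPGAgentOptions`); only the values quoted in the question are used:

```matlab
% Sketch of the DDPG agent options described above.
agentOpts = rlDDPGAgentOptions(...
    'SampleTime', 1, ...         % Ts = 1
    'DiscountFactor', 0.95, ...  % gamma
    'MiniBatchSize', 64);

% Ornstein-Uhlenbeck exploration noise: the variance decays
% multiplicatively each sample step,
% Variance <- Variance * (1 - VarianceDecayRate).
agentOpts.NoiseOptions.Variance = 0.1;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;

% Note: with VarianceDecayRate = 1e-5, the variance halves only after
% roughly log(0.5)/log(1 - 1e-5) ~ 69,000 steps, so exploration stays
% high for a long time -- one possible source of late-training swings.
```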
(For a higher-resolution image, please see the attachment.)
V.9.94.4_MATLAB_16-Dec-2019.jpg

  0 comments


Answers (1)

Rajesh Siraskar on 20 Dec 2019
Based on several rounds of training, my personal observation is that RL initially converges to an optimal expected value.
Training beyond that point simply does not seem to help. I think it is important to stop once we realize the agent has reached that optimum.
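One way to act on this observation is to let the toolbox stop (and save) automatically. A sketch assuming the standard `rlTrainingOptions` API, with the thresholds taken from the question (average reward over 50 episodes > 25,000):

```matlab
% Stop training and save the agent once the 50-episode average reward
% clears the threshold, instead of continuing past the optimum.
trainOpts = rlTrainingOptions(...
    'MaxEpisodes', 3000, ...
    'ScoreAveragingWindowLength', 50, ...
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', 25000, ...
    'SaveAgentCriteria', 'AverageReward', ...
    'SaveAgentValue', 25000);
```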

  1 comment

Emmanouil Tzorakoleftherakis on 27 Jan 2020
+1 on that. It could, for example, be the case that you reach a point in training where you have a decent policy, but the agent's exploration leads the search somewhere else (pros and cons of sample-based gradients).


Products


Release

R2019a
