Action Clipping and Scaling in TD3 in Reinforcement Learning
Hello,
I am trying to tune my TD3 agent to solve my custom environment. The environment has two actions in the following ranges: the first in [0 10] and the second in [0 2*pi) (defined with rlNumericSpec).
I am following the architecture from this example:
https://in.mathworks.com/help/reinforcement-learning/ug/train-td3-agent-for-pmsm-control.html
Now I have the following questions.
1. Since tanh outputs values in [-1 1], should I use a scaling layer at the end of the actor network? Maybe with the following values:
scalingLayer('Name','ActorScaling1','Scale',[5;pi],'Bias',[5;pi])
2. How do I set up the exploration noise and the target policy noise? What should their variance values be? Not precisely tuned, but a reasonable range, given that I have more than one action and the action ranges are not in [-1 1].
3. How do I clip the actions so they stay inside the action bounds? I don't see any such option in rlTD3AgentOptions.
All the TD3 examples I see (and most RL examples in general) use an action range between [-1 1]. I am confused about how to modify the parameters when the action space is not within [-1 1], as in my case.
Thanks.
0 Comments
Accepted Answer
Emmanouil Tzorakoleftherakis
11 Dec 2020
Hello,
In general, for DDPG and TD3, it is good practice to include a scalingLayer as the last actor layer to scale/shift the actor actions into the desired range.
To your questions:
1) Yes, you should use the scalingLayer. To specify different scale/bias values for your two outputs, have a look at this example; a sketch is also shown after this list.
2) This section provides some tips on how to set up the exploration variance, e.g. "It is common to have Variance*sqrt(Ts) be between 1% and 10% of your action range." The second sketch after this list shows one way to apply that rule.
3) The upper and lower limit options in rlNumericSpec, as well as the scalingLayer, ensure your actions are within the desired range before exploration noise is added. After adding noise, however, your actions may go out of range, which is why it is often necessary to account for that on the environment side. If you are using Simulink, add, for example, a Saturation block. In MATLAB, add an if statement and clip the actions if they are out of range.
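For question 1, here is a minimal sketch of the tail of such an actor network, using the scale/bias values from the question; the fully connected layer and the layer names are illustrative assumptions:

actorTail = [
    fullyConnectedLayer(2,'Name','ActorFC')    % one unit per action
    tanhLayer('Name','ActorTanh')              % squashes each output to [-1 1]
    scalingLayer('Name','ActorScaling1','Scale',[5;pi],'Bias',[5;pi])];

scalingLayer computes Scale.*x + Bias element-wise, so the tanh range [-1 1] is mapped to [0 10] for the first action and [0 2*pi] for the second.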
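For question 2, a sketch of noise settings that follow the quoted rule of thumb; the sample time Ts, the 5% level, and the target-noise scaling are assumptions for illustration, not tuned values:

Ts = 0.05;                    % assumed agent sample time
actRange = [10; 2*pi];        % width of each action's range
agentOpts = rlTD3AgentOptions('SampleTime',Ts);
% Exploration noise: pick Variance so Variance*sqrt(Ts) is ~5% of each range
agentOpts.ExplorationModel.Variance = 0.05*actRange/sqrt(Ts);
agentOpts.ExplorationModel.VarianceDecayRate = 1e-5;
% Target policy smoothing noise, scaled up from the [-1 1] defaults
% (Variance 0.2, limits +/-0.5) by each action's range (an assumption)
agentOpts.TargetPolicySmoothModel.Variance   = 0.2*actRange;
agentOpts.TargetPolicySmoothModel.LowerLimit = -0.5*actRange;
agentOpts.TargetPolicySmoothModel.UpperLimit =  0.5*actRange;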
Hope that helps
3 Comments
Emmanouil Tzorakoleftherakis
12 Dec 2020
In the step function, yes. You can just add an if statement, or use "max" or "min".
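A minimal sketch of that clipping inside a custom environment's step function; clipAction is a hypothetical helper, and the bounds are the ones from the question:

function a = clipAction(a)
% Saturate both actions to the rlNumericSpec bounds before
% using them in the environment dynamics.
lb = [0; 0];       % LowerLimit values from rlNumericSpec
ub = [10; 2*pi];   % UpperLimit values from rlNumericSpec
a = min(max(a, lb), ub);    % element-wise clip via max and min
end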
More Answers (0)