Why does my RL agent's action still pass the upper and lower limits?

ardi ferdyhana on 7 Jun 2021
Edited: Azmi Yagli on 5 Sep 2023
I am using a Policy Gradient (PG) agent. I want my action to stay only in the range 0 - 100, and I have already set UpperLimit to 100 and LowerLimit to 0 in the action specification. But as you can see in scope display 3, the action can still pass the limits. How can I fix that?
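For reference, the action specification described above looks roughly like this (a sketch; the variable name is illustrative):
% Scalar continuous action that should stay between 0 and 100
actInfo = rlNumericSpec([1 1],'LowerLimit',0,'UpperLimit',100);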
2 Comments
Emmanouil Tzorakoleftherakis on 9 Jun 2021
Which signal is the action here? What does your actor network look like?
denny on 7 Dec 2021
I have solved a similar problem.
actInfo = rlNumericSpec([1],'UpperLimit',0.0771,'LowerLimit',-0.0405)
In my case this means the minimum value is -0.0405 and the maximum value is -0.0405 + 0.0771*2.
But your output ranges from -1000 to 1000; I don't know why that happens either.
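As a quick check (a sketch, assuming workspace variables agent and obsInfo already exist and the action is scalar), you can sample actions from the agent and inspect the range it actually produces:
% Sample actions for random observations and report the observed range
numSamples = 1000;
actions = zeros(numSamples,1);
for k = 1:numSamples
    obs = {rand(obsInfo.Dimension)};   % illustrative random observation
    a = getAction(agent,obs);          % may return a cell array depending on the release
    if iscell(a)
        a = a{1};
    end
    actions(k) = a;
end
fprintf('min action = %g, max action = %g\n',min(actions),max(actions));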


Answers (2)

denny on 18 Nov 2021
I have the same problem. How can it be solved?

Azmi Yagli on 5 Sep 2023
Edited: 5 Sep 2023
If you look at the rlNumericSpec documentation, you can see this in the LowerLimit/UpperLimit property descriptions:
DDPG, TD3 and SAC agents use this property to enforce lower limits on the action. When using other agents, if you need to enforce constraints on the action, you must do so within the environment.
So if you use other algorithms you can saturate the action inside the environment, but that did not work for me.
You can try discretizing your agent's actions so that they have boundaries.
Or you can give a negative reward whenever the agent exceeds the action limits (a sketch of both approaches follows).
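As a rough illustration of these workarounds (a sketch only; the step-function name, myDynamics, and the penalty weight are assumptions rather than anything from this thread), you could either define a discrete action set, or clip and penalize the action inside a custom environment step function:
% Option 1: discrete action set so the agent can only pick bounded values
actInfo = rlFiniteSetSpec(0:5:100);   % actions 0, 5, 10, ..., 100

% Option 2: saturate and penalize inside a custom step function (e.g. for rlFunctionEnv);
% myDynamics is a hypothetical plant-update function
function [nextObs,reward,isDone,loggedSignals] = stepFcn(action,loggedSignals)
    lowerLimit = 0;
    upperLimit = 100;
    % Clip the requested action before applying it to the plant
    clippedAction = min(max(action,lowerLimit),upperLimit);
    % Penalize out-of-range requests so the agent learns to stay inside the limits
    penalty = 10*abs(action - clippedAction);   % hypothetical penalty weight
    [nextObs,baseReward,isDone] = myDynamics(clippedAction,loggedSignals);
    reward = baseReward - penalty;
end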
