Modifying the control actions to safe ones before storing them in the experience buffer during SAC agent training.

Hello everyone,
I am implementing a safe off-policy DRL SAC algorithm. An iterative convex optimization algorithm projects the agent's actions into a safe region before they are applied. However, this correction happens inside the environment, so the existing rlSACAgent still stores the unsafe actions in the experience buffer, and the agent cannot learn the modified actions. As a result, the iterative algorithm keeps being supplied with unlearned actions and takes longer to converge. My question is:
How can I store the modified actions in the experience buffer instead of the unsafe ones?
Illustrative figure: (image omitted)
Many thanks for your help.
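
For context, here is a minimal sketch of the kind of convex safety projection described above, assuming linear safety constraints A*a <= b. The constraint matrices, the example action, and the use of quadprog (Optimization Toolbox) are illustrative assumptions, not the poster's actual algorithm:

% Project an unsafe action onto a linear safe set {a : A*a <= b}
% by solving min ||a - aUnsafe||^2, a standard QP.
% A, b, and aUnsafe below are illustrative placeholders.
A = [1 0; -1 0; 0 1; 0 -1];   % example box constraints |a_i| <= 1
b = ones(4,1);
aUnsafe = [1.7; -0.3];        % raw action from the SAC policy

n = numel(aUnsafe);
H = 2*eye(n);                 % quadratic term of 0.5*a'*H*a + f'*a
f = -2*aUnsafe;               % so the objective equals ||a - aUnsafe||^2 (up to a constant)
opts = optimoptions('quadprog','Display','off');
aSafe = quadprog(H, f, A, b, [], [], [], [], [], opts);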

Accepted Answer

Ahmed R. Sayed, 21 September 2022
I found the solution: use a Simulink environment and the RL Agent block with the last action input port.
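
To expand on this, a sketch of one possible setup: in the Simulink model, place the safety-projection logic between the RL Agent block's action output and the plant, and feed the corrected action back into the block's last action input port (enabled through the block's parameters; the exact parameter name may differ across releases). The buffer then stores the projected safe action rather than the raw policy output. The model and block names below are hypothetical; the specs and training options are illustrative defaults:

% Hypothetical model name; adapt to your own Simulink model.
mdl = 'safeSACModel';
obsInfo = rlNumericSpec([4 1]);          % example observation spec
actInfo = rlNumericSpec([2 1]);          % example action spec
env = rlSimulinkEnv(mdl, [mdl '/RL Agent'], obsInfo, actInfo);

agent = rlSACAgent(obsInfo, actInfo);    % SAC agent with default networks
trainOpts = rlTrainingOptions('MaxEpisodes', 500, ...
    'StopTrainingCriteria', 'AverageReward', 'StopTrainingValue', 200);
trainingStats = train(agent, env, trainOpts);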

More Answers (0)

