Use ResetFcn to delay the agent's behaviour in the environment

2 views (last 30 days)
I would like to train my RL agent in an environment represented by an FMU block in Simulink.
Unfortunately, whenever a simulation starts I observe some brief natural oscillations in the states before the system reaches the steady state that is ideal for training.
I would like the agent to wait for the steady state to be reached each time before collecting any experience for training.
I know that ResetFcn is called at the beginning of each simulation, but it is usually used to change block parameters before the simulation starts. Is it possible to use it for my purpose instead, i.e. to leave a time buffer between the start of the simulation and the start of my agent's actions?
If this is not possible, are there other suitable ways to overcome this problem?
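
For reference, the conventional use of ResetFcn mentioned above is a function handle on the Simulink environment that modifies the SimulationInput before each episode. A minimal sketch follows; the model name, block path, variable name, and observation/action specs are placeholders, not taken from the actual setup.

% Conventional ResetFcn usage (sketch): all names and specs are placeholders.
obsInfo = rlNumericSpec([3 1]);                       % example observation spec
actInfo = rlNumericSpec([1 1]);                       % example action spec
env = rlSimulinkEnv("myModel", "myModel/RL Agent", obsInfo, actInfo);
% Change a model variable at the start of every episode/simulation
env.ResetFcn = @(in) setVariable(in, "x0", 2*rand-1);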

Accepted Answer

Emmanouil Tzorakoleftherakis on 24 January 2024
You can place the RL Agent block inside a triggered subsystem and set the agent's sample time to -1 (see e.g. here). Then configure the trigger so that the subsystem executes only when it makes sense for your problem, for example once the plant has settled.
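
As a minimal sketch of that setup (the DDPG agent type, model name, and block path are assumptions, not from this thread), the agent options carry SampleTime = -1 so the RL Agent block inside the triggered subsystem only runs when the trigger fires, e.g. driven by a "settled" signal computed from the FMU states:

% Sketch only: "myModel", the block path, and the DDPG agent are assumptions.
obsInfo = rlNumericSpec([3 1]);           % example observation spec
actInfo = rlNumericSpec([1 1]);           % example action spec

agentOpts = rlDDPGAgentOptions;
agentOpts.SampleTime = -1;                % required when the RL Agent block is
                                          % placed inside a triggered subsystem
agent = rlDDPGAgent(obsInfo, actInfo, agentOpts);

% Point the environment at the RL Agent block inside the triggered subsystem;
% in the model, the trigger signal should fire only after the FMU states settle.
env = rlSimulinkEnv("myModel", ...
    "myModel/Triggered Subsystem/RL Agent", obsInfo, actInfo);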

More Answers (0)
