How to run a Simulink model when implementing custom RL training?

Hello, I am developing custom training for an RL DQN agent based on the linked example; how should I adapt it to a Simulink environment?
In particular, in the code below, which applies an action to the environment, the step function is not applicable to a Simulink model. How should I solve this issue? Thanks in advance.
% Apply the action to the environment
% and obtain the resulting observation and reward.
[nextObs,reward,isdone] = step(env,action{1});

Accepted Answer

Emmanouil Tzorakoleftherakis on 25 May 2023
The way to do it would be to use runEpisode.
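A minimal sketch of the episode loop under that approach, assuming a Simulink environment created with rlSimulinkEnv and an existing agent; the model name, block path, and the obsInfo/actInfo/numEpisodes/maxStepsPerEpisode variables are placeholders:

% Create the environment from the Simulink model (placeholder names).
env = rlSimulinkEnv("myModel","myModel/RL Agent",obsInfo,actInfo);

% Configure the environment once before running multiple episodes.
setup(env);

for episode = 1:numEpisodes
    % runEpisode simulates one complete episode of the Simulink model,
    % so there is no per-step step(env,action) call as in a MATLAB
    % environment. Learning can happen after the episode from the
    % logged experiences, or at every step via ProcessExperienceFcn.
    out = runEpisode(env,agent, ...
        MaxSteps=maxStepsPerEpisode, ...
        CleanupPostSim=false);
end

% Release the environment when training is finished.
cleanup(env);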
2 Comments
Yihao Wan on 25 May 2023
Thanks for the reply. runEpisode runs all the steps rather than a single step, right? Then how should I modify the rest of the single-step training iterations? I am referring to this example.
Thanks a lot.
Emmanouil Tzorakoleftherakis on 25 May 2023
The example you are showing is model-based RL; it's different from what you mentioned at the beginning.
With runEpisode you have the flexibility of running the entire episode and learning afterwards, or learning at every step. For the latter you can use the ProcessExperienceFcn option shown in the doc. This example shows how you can implement it.
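For the learning-at-every-step case, a rough sketch of such a function, following the [policy,data] = fcn(exp,episodeInfo,policy,data) signature from the runEpisode documentation; the buffer layout, the MiniBatchSize field, and the dqnMiniBatchUpdate helper are hypothetical stand-ins for the minibatch update in the custom DQN training loop:

function [agent,data] = processExperience(exp,episodeInfo,agent,data)
% Called once per simulation step by runEpisode when passed via the
% ProcessExperienceFcn option. exp is a structure with fields
% Observation, Action, Reward, NextObservation, and IsDone.

% Store the experience in a replay buffer kept in the user data struct.
data.Buffer(end+1) = exp;

% Learn at every step once enough samples have accumulated.
if numel(data.Buffer) >= data.MiniBatchSize
    idx = randperm(numel(data.Buffer),data.MiniBatchSize);
    agent = dqnMiniBatchUpdate(agent,data.Buffer(idx)); % hypothetical helper
end
end

You would then pass it to the episode runner, for example:

out = runEpisode(env,agent, ...
    ProcessExperienceFcn=@processExperience, ...
    ProcessExperienceData=data);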
