Different Sample time for RL environment

6 views (last 30 days)
Syed Adil Ahmed on 11 Jun 2024
Commented: Syed Adil Ahmed on 13 Jun 2024
Hello Everyone,
I am currently trying to build an RL agent using DQN. My environment model consists of nonlinear equations with fast dynamics on the order of 1 ms. I was wondering whether it is possible to have my RL agent work at 10 ms while the environment runs at 1 ms. This obviously means that for 10 time steps the environment will be fed a constant control action from the RL agent.
I know I could code this up if I created the agent and environment myself, but I am currently taking advantage of MATLAB's Reinforcement Learning Toolbox in the MATLAB editor, and it would save me a lot of time if this were possible in the toolbox itself.
Thanks.

Accepted Answer

Kartik Saxena on 13 Jun 2024
Hi,
When creating a custom environment in MATLAB for use with the Reinforcement Learning Toolbox, you can define a 'step' function that advances the environment's state based on the agent's action. To simulate the environment at a faster rate than the agent operates, modify the 'step' function to perform multiple updates (ten 1 ms updates) per call to the 'step' function.
Refer to the following code snippet:
for i = 1:10
    % Advance the environment by one 1 ms sub-step while holding the
    % agent's action constant. updateEnvironment is assumed to be a
    % function that updates the environment state for a 1 ms time step
    % given the current state and action.
    loggedSignals = updateEnvironment(loggedSignals, action);
end
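For context, a minimal sketch of how this loop might sit inside a complete custom step function is shown below. The helper names updateEnvironment, computeReward, and checkTermination, and the field loggedSignals.State, are assumptions for illustration, not Toolbox APIs:

```matlab
% Sketch of a custom step function that sub-steps the plant dynamics at
% 1 ms while the agent acts every 10 ms. updateEnvironment, computeReward,
% and checkTermination are assumed helper functions (not Toolbox APIs).
function [nextObs, reward, isDone, loggedSignals] = myStep(action, loggedSignals)
    nSubSteps = 10;  % agent sample time / plant sample time = 10 ms / 1 ms

    % Zero-order hold: the agent's action is constant over all sub-steps
    for i = 1:nSubSteps
        loggedSignals = updateEnvironment(loggedSignals, action);
    end

    nextObs = loggedSignals.State;                   % observation after 10 ms
    reward  = computeReward(loggedSignals, action);  % assumed reward helper
    isDone  = checkTermination(loggedSignals);       % assumed termination helper
end
```

Such a step function, together with a matching reset function, can then be passed to rlFunctionEnv, and the agent's SampleTime option can be set to 0.01 so that training proceeds at the 10 ms agent rate.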
Hope it helps!
1 Comment
Syed Adil Ahmed on 13 Jun 2024
Thanks a lot. That is not the solution I was expecting, but sometimes it's great to see small things (like a for loop) making the difference.


More Answers (0)

Release

R2023b
