Moving variables between episodes

Danial Kazemikia on 6 August 2024
Answered: Amish on 6 August 2024
To use MATLAB for RL, I have defined the action space, observation space, and agent in a .m file, which also calls a reset function and a step function, both defined in .m files (not in Simulink). How can I modify these variables while MATLAB is still running train(agent,env)? I want to normalize all discounted rewards across all episodes.

Accepted Answer

Amish on 6 August 2024
Hi Danial,
I understand that you want to modify the variables when the function train(agent,env) is running in MATLAB.
Editing variables during the execution of a function like train(agent, env) in MATLAB is challenging because the function runs in a blocking manner: it does not return control to the MATLAB command prompt until it completes. Therefore, it is usually not possible to modify variables while the function is executing.
However, there are some other strategies that you may try exploring:
  1. Global Variables: These can be accessed and modified from any function, so they can be used to store rewards across episodes. Your step and reset functions can then read and update the same reward log as training proceeds.
  2. Callback Functions: Define callback functions that are triggered during training to modify the variables.
  3. Custom Training Loop: Writing your own training loop gives you full control over the training process and lets you modify variables at any point between episodes.
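The first option might look like the sketch below. This is a minimal, illustrative outline, not tested code: it assumes the environment is built with rlFunctionEnv, and myStep, myReset, rewardLog, and the 0.99 discount factor are all hypothetical names and values you would replace with your own. The global rewardLog collects one discounted return per episode while train is blocking, so the normalization can run after train returns.

```matlab
% Sketch: share a reward log across episodes via a global variable.
% myStep and myReset are placeholders for your own .m functions.
global rewardLog
rewardLog = [];                          % one entry per finished episode

env = rlFunctionEnv(obsInfo, actInfo, @myStep, @myReset);
trainingStats = train(agent, env);       % blocking call; rewardLog fills during training

% After training, normalize discounted returns across all episodes
normalizedReturns = (rewardLog - mean(rewardLog)) / std(rewardLog);

function [nextObs, reward, isDone, info] = myStep(action, info)
    global rewardLog
    % ... your dynamics: compute nextObs, reward, isDone from action ...
    info.EpisodeReturn = info.EpisodeReturn + info.Discount * reward;
    info.Discount = info.Discount * 0.99;        % assumed discount factor
    if isDone
        rewardLog(end+1) = info.EpisodeReturn;   % log the completed episode
    end
end
```

Note that globals only let you record values during training; to change the agent's behavior mid-run (e.g., rescale rewards on the fly), a custom training loop (option 3) is the more reliable route.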
Hope this helps!

More Answers (0)


Release

R2024a
