
Different observation matrix in reinforcement learning episode

5 views (last 30 days)
mohsan niaz on 7 Feb 2022
Answered: Poorna on 29 Sep 2023
Hi Everyone,
I want to train an agent in a deep Q-learning reinforcement learning setting, but in every episode I want the agent to observe or read a different dimension of a large matrix stored in a .mat file. In other words, I want the agent to read a different row or column of the matrix in every new episode of training.
Can anyone guide me on how this can be done with the Reinforcement Learning Toolbox in MATLAB? I am also attaching a screenshot of the Simulink environment for reinforcement learning.
Regards
Mohsan.

Answers (1)

Poorna on 29 Sep 2023
Hi Mohsan,
I understand that you would like the observation at the start of each episode to be a randomly chosen or predefined row of the observation matrix you have.
To achieve this, you can use the "ResetFcn" callback property of the environment in your model. The reset function sets the environment to an initial state and computes the initial value of the observation.
You can create a custom callback function that contains the logic to select the required row from the observation matrix and returns it, then assign this function to the "ResetFcn" property of the environment. The "train" function calls it at the beginning of each training episode.
For more information on how to use the "ResetFcn" callback property, please refer to the following MATLAB documentation: https://www.mathworks.com/help/reinforcement-learning/ref/rl.env.rlfunctionenv.html
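As a rough illustration, a minimal sketch of such a reset function for a Simulink environment might look like the following. The file name "observations.mat", the variable "obsMatrix", the model and block names, and the workspace variable "episodeObs" are all placeholder assumptions, and obsInfo/actInfo are assumed to already be defined for your environment:

% Load the large matrix once (placeholder file and variable names)
data = load('observations.mat');
obsMatrix = data.obsMatrix;

% Simulink environment; obsInfo and actInfo are assumed to exist already
env = rlSimulinkEnv('myModel', 'myModel/RL Agent', obsInfo, actInfo);

% The reset function receives a Simulink.SimulationInput object and is
% called by "train" at the start of every episode. Here it picks one row
% of the matrix and exposes it as a workspace variable the model reads.
env.ResetFcn = @(in) pickEpisodeRow(in, obsMatrix);

function in = pickEpisodeRow(in, obsMatrix)
    % Random row each episode; use a persistent counter instead if you
    % need the rows in a fixed, predefined order.
    rowIdx = randi(size(obsMatrix, 1));
    episodeObs = obsMatrix(rowIdx, :);
    in = setVariable(in, 'episodeObs', episodeObs);
end

With this approach, the Simulink model would read "episodeObs" (for example through a Constant or From Workspace block) so that each training episode sees a different row of the stored matrix.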
Hope this helps!

Release

R2021a
