Reinforcement learning/Experience buffer/Simulink

3 views (last 30 days)
hieu nguyen on 5 May 2023
Commented: hieu nguyen on 6 May 2023
I am trying to create an experience buffer for my DDPG algorithm in Simulink. However, I can't find any way or blocks to help me create an experience buffer that stores (state, action, reward, next state) in Simulink.
I have tried to create an experience buffer with rl.util.ExperienceBuffer inside a MATLAB Function block, but here is my code and the error.
I hope you can help me deal with this problem. Thank you very much!

Answer (1)

Emmanouil Tzorakoleftherakis on 5 May 2023
Why do you want to create your own buffer? If you are using the built-in DDPG agent, the buffer is created automatically for you. In any case, in R2022a we added a feature that lets you create your own experience buffer (see here). You can potentially use it to manually modify the experience buffer of a built-in agent.
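For reference, a minimal sketch of that workflow at the MATLAB level is shown below. It assumes the rlReplayMemory object introduced in R2022a; the observation/action specifications, dimensions, and experience field names here are placeholders, so please check the rlReplayMemory documentation for the exact signatures.

% Sketch only: placeholder specs and dimensions, R2022a rlReplayMemory assumed
obsInfo = rlNumericSpec([4 1]);   % example: 4-dimensional observation
actInfo = rlNumericSpec([1 1]);   % example: scalar continuous action

% Create a replay memory with capacity for 1e5 experiences
buffer = rlReplayMemory(obsInfo, actInfo, 1e5);

% Append one (state, action, reward, next state, done) experience
exp.Observation     = {rand(4,1)};
exp.Action          = {rand(1,1)};
exp.Reward          = 1;
exp.NextObservation = {rand(4,1)};
exp.IsDone          = 0;
append(buffer, exp);

% Sample a mini-batch of 64 experiences for a training update
miniBatch = sample(buffer, 64);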
1 Comment
hieu nguyen on 6 May 2023
Actually, I want to create my own DDPG agent, with my own neural network structure and my own optimizer algorithm, in Simulink. I have tried to do that with the built-in DDPG agent and with a custom agent, but it is quite difficult, so I decided to build everything myself in Simulink with MATLAB Function blocks. Now, the only problem I have is creating an experience buffer to store and batch data.
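In case it helps, below is a rough sketch of one way to do that inside a MATLAB Function block: a fixed-size circular buffer kept in persistent variables, which are retained between Simulink time steps. The capacity, state/action dimensions, and mini-batch size are placeholders; sampling is done with replacement for simplicity, and the batch size is fixed so the block's output sizes stay constant.

function [batchS, batchA, batchR, batchNS] = expBuffer(s, a, r, ns)
% Circular experience buffer sketch for a Simulink MATLAB Function block.
% Capacity, dimensions, and batch size below are placeholders.
CAP   = 10000;   % buffer capacity
NDIM  = 4;       % state dimension (placeholder)
ADIM  = 1;       % action dimension (placeholder)
BATCH = 64;      % mini-batch size (fixed so output sizes stay constant)

persistent S A R NSbuf idx count
if isempty(S)
    S     = zeros(NDIM, CAP);
    A     = zeros(ADIM, CAP);
    R     = zeros(1, CAP);
    NSbuf = zeros(NDIM, CAP);
    idx   = 0;
    count = 0;
end

% Store the incoming (state, action, reward, next state) tuple,
% overwriting the oldest entry once the buffer is full
idx = mod(idx, CAP) + 1;
S(:, idx)     = s;
A(:, idx)     = a;
R(idx)        = r;
NSbuf(:, idx) = ns;
count = min(count + 1, CAP);

% Sample a random mini-batch (with replacement, for simplicity)
k = randi(count, 1, BATCH);
batchS  = S(:, k);
batchA  = A(:, k);
batchR  = R(k);
batchNS = NSbuf(:, k);
end

In practice you would probably only use the sampled batch for gradient updates once count has grown past some warm-up threshold, and you may prefer sampling without replacement once the buffer is large enough.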
