Reinforcement Learning Toolbox - Experience Buffer Samples

Hans-Joachim Steinort on 17 Sep 2019
The simulation I'm running uses a fixed-step solver with a fixed step size of 5e-4. The sample time of my DQN agent (and of the corresponding S-function for the reward signal) is 0.25, so 20 seconds of simulation should produce 20 / 0.25 = 80 experiences per episode.
How is it then possible that after a simulation time of 20 seconds I have a BufferLength of ~1600 samples? I hope you can enlighten me...
Bonus question:
Is it possible to look inside the ExperienceBuffer? As impressed as I am by the RL Toolbox, I would really prefer it not to be such a black box in most cases.

Accepted Answer

Raunak Gupta on 20 Sep 2019
Hi,
Since you mention that a DQN agent is used, I am assuming that rlDQNAgentOptions is used to set up the agent properties. ExperienceBufferLength specifies how many experiences from training are stored. There is also a parameter SaveExperienceBufferWithAgent, which can be set to true to save the experience buffer while training. Experiences up to the limit of ExperienceBufferLength will be stored in the rlDQNAgent object.
You may look for other training options here.
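For reference, a minimal sketch of configuring these options (property names follow the R2019a rlDQNAgentOptions documentation; the buffer length chosen here is only illustrative):
% Configure DQN agent options so the replay buffer is kept when the
% agent is saved. SampleTime matches the value from the question.
agentOpts = rlDQNAgentOptions( ...
    'SampleTime', 0.25, ...                  % agent sample time
    'ExperienceBufferLength', 1e6, ...       % max number of stored experiences
    'SaveExperienceBufferWithAgent', true);  % persist the buffer with the agent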
  1 Comment
Hans-Joachim Steinort on 20 Sep 2019
Thank you for your answer!
I found a bug in my simulation setup that caused the buffer to fill way too fast. I fixed it with a delay block inheriting a certain sample time.
Yet it still does not seem possible to look inside the buffer to view what kind of (s,a,r,s') tuples are stored in there.
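One undocumented workaround (a sketch, not an official API) is to convert the agent object to a struct and inspect its hidden fields; which field holds the replay memory is an assumption about R2019a internals and may differ between releases:
% Peek at the agent's internal state. struct() on a MATLAB object
% returns its private properties and normally issues a warning, so
% list the available fields first and drill down from there.
warning('off', 'MATLAB:structOnObject');   % silence the access warning
agentInternals = struct(agent);            % 'agent' is your trained rlDQNAgent
disp(fieldnames(agentInternals))           % inspect the internal fields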
