PPO minibatch size for parallel training with variable number of steps

Federico Toso on 23 February 2024
I'm training a PPO Agent in sync parallelization mode.
Because of the nature of my environment, the number of steps is not the same for each episode, but can vary (sometimes wildly). Quoting from the reference for PPO Agent Options:
"When the agent is trained in parallel, ExperienceHorizon is ignored, and the whole episode is used to compute the gradients"
I don't fully understand how the experiences collected during the episodes are divided into minibatches of the selected size before the learning phase begins. Specifically, suppose that:
  • I have 2 parallel synchronous workers: the first one collects 30 experiences and the second one collects 70 experiences for a specific pair of episodes
  • The minibatch size has been set to 32
How are the experiences divided into minibatches?
As I understand it:
  • Each worker sends its own experiences to the client --> so the client gathers 30 + 70 = 100 experiences
  • These 100 experiences are divided into three groups of 32 (= minibatch size) each; 4 experiences are discarded, since 3 × 32 = 96
Is my reasoning correct? If so, I guess that the best way to limit the number of discarded experiences would be to decrease the minibatch size as much as possible (since I cannot know in advance the total number of experiences available in each iteration).
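
For reference, a minimal sketch of the setup being described, with the agent and environment construction omitted (MiniBatchSize, UseParallel, and the sync parallelization mode are the settings in question):

    % Minimal sketch; agent and environment construction are omitted.
    agentOpts = rlPPOAgentOptions(MiniBatchSize=32);  % minibatch size under discussion
    trainOpts = rlTrainingOptions(UseParallel=true);  % enable parallel training
    trainOpts.ParallelizationOptions.Mode = "sync";   % synchronous workers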

Answers (1)

Emmanouil Tzorakoleftherakis on 26 February 2024
Actually, no data will be discarded. As of R2023b, the 4 experiences left over in your example form their own (smaller) minibatch and are used that way. Note that this behavior may change in the future.
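
To make this concrete, here is a small standalone sketch (plain MATLAB arithmetic, not toolbox code) of how 100 gathered experiences would be split with a minibatch size of 32 under this behavior:

    numExperiences = 30 + 70;  % experiences gathered from the two workers
    miniBatchSize  = 32;
    for k = 1:miniBatchSize:numExperiences
        idx = k:min(k + miniBatchSize - 1, numExperiences);
        fprintf("Minibatch %d: %d experiences\n", ceil(k/miniBatchSize), numel(idx));
    end
    % Prints three minibatches of 32 and a final minibatch of 4;
    % nothing is discarded.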
