Reinforcement Learning with Parallel Computing

8 views (last 30 days)
PB75 on 9 Aug 2021
Commented: PB75 on 9 Aug 2021
Hi All,
I have been training a TD3 RNN agent on my local PC for months now. Because of the long training times on my PC, I have been saving the experience buffer so that I can reload the pretrained agent and resume training.
I now have access to my University HPC server, so can now use parallel computing to speed up the training process.
However, when I attempt to resume training with the pretrained agent, now with parallel computing on the HPC server (the same setup previously ran on my local PC with no issues and NO parallel computing), it flags the following issue.
Do I need to start with a fresh agent now I am using parallel computing?
Also, is the following code to start parallel computing correct?
trainingOpts.UseParallel = true;
trainingOpts.ParallelizationOptions.Mode = 'async';
trainingOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';
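For context, a fuller sketch of how those options are usually assembled and passed to training, assuming `rlTrainingOptions` and `train` from Reinforcement Learning Toolbox; `agent` and `env` are placeholders for your own agent and environment objects, and the episode/step limits are illustrative values only:

```matlab
% Sketch: asynchronous, experience-based parallel training setup
% (assumes Reinforcement Learning Toolbox + Parallel Computing Toolbox).
trainingOpts = rlTrainingOptions( ...
    'MaxEpisodes', 5000, ...            % placeholder value
    'MaxStepsPerEpisode', 500, ...      % placeholder value
    'UseParallel', true);
trainingOpts.ParallelizationOptions.Mode = 'async';
trainingOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';

% agent and env must be defined elsewhere in your script
trainResults = train(agent, env, trainingOpts);
```

Sending 'Experiences' (rather than gradients) from the workers is the typical choice for off-policy agents such as TD3, since the host then performs the learning updates from the pooled experience.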

Answers (1)

Drew Davis on 9 Aug 2021
As of R2021a, the RL Toolbox does not support parallel training with RNN networks.
You can still reuse your current experience buffer when training new networks by replacing the actor and critics of the TD3 agent and setting:
agent.AgentOptions.ResetExperienceBufferBeforeTraining = false;
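Swapping in new (non-RNN) actor and critic representations while keeping the buffer might look something like the following sketch; `newActor`, `newCritic1`, and `newCritic2` are hypothetical feedforward representations you would build yourself, and you should check the `setActor`/`setCritic` signatures for your toolbox release:

```matlab
% Sketch: replace RNN actor/critics with feedforward ones so parallel
% training is possible, while keeping the existing experience buffer.
agent = setActor(agent, newActor);
agent = setCritic(agent, [newCritic1, newCritic2]);  % TD3 uses two critics
agent.AgentOptions.ResetExperienceBufferBeforeTraining = false;
```

The new representations must use the same observation and action specifications as the original agent for the saved experiences to remain valid.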
Your snippet to set up TD3 parallel training looks good.
Hope this helps!
1 Comment
PB75 on 9 Aug 2021
Hi Drew,
Thanks for your reply. So I cannot use LSTM layers with parallel training?
