MATLAB crashes when using Reinforcement Learning Toolbox to train an agent using Parallel Computing.

I am running the Reinforcement Learning toolbox to train an agent using parallel computing.
When I use 20 cores (plus four 16 GB GPUs) it runs well, but when 32, 36, or 40 cores are used, MATLAB R2020a crashes.
Why is the crash happening?

Accepted Answer

MathWorks Support Team
30 July 2020
MATLAB might crash while attempting to train a reinforcement learning agent in parallel with ten or more workers. The crash is due to a communication race condition between the client and worker processes.
You can avoid this error by updating MATLAB to R2020a Update 3.
As a workaround, to bypass the communication race condition for PG, DQN, DDPG, TD3, and PPO agents, use synchronous parallel training and configure the workers to wait until the end of the episode before sending data to the host. To do so, configure your rlTrainingOptions object as shown in the following code:
trainOptions = rlTrainingOptions;
trainOptions.UseParallel = true;
% Use synchronous parallel training.
trainOptions.ParallelizationOptions.Mode = "sync";
% Workers send data only at the end of each episode.
trainOptions.ParallelizationOptions.StepsUntilDataIsSent = -1;
Using StepsUntilDataIsSent = -1 is not supported for AC agents. To avoid a communication race condition for these agents, consider using a PPO agent with experience-based parallel training or a PG agent with gradient-based parallel training.
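For agents that cannot use StepsUntilDataIsSent = -1, the alternative configurations above can be sketched as follows. This is a minimal sketch assuming the R2020a Reinforcement Learning Toolbox option names; verify the property values against your installed release's documentation.

```
% Option A (sketch): PPO agent with experience-based parallel training.
trainOptions = rlTrainingOptions;
trainOptions.UseParallel = true;
trainOptions.ParallelizationOptions.Mode = "sync";
% Workers send experiences (not gradients) back to the host.
trainOptions.ParallelizationOptions.DataToSendFromWorkers = "Experiences";

% Option B (sketch): PG agent with gradient-based parallel training.
% Workers compute gradients locally and send only gradients to the host.
trainOptions.ParallelizationOptions.DataToSendFromWorkers = "Gradients";
```

In both cases, pass the resulting trainOptions to the train function along with the appropriate agent (rlPPOAgent or rlPGAgent) and your environment.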

More Answers (0)

Products


Release

R2020a
