GPU Training RL Toolbox on R2022a

23 views (last 30 days)
Berk Agin on 28 Mar 2022
Answered: Valerio on 26 Jul 2024
Hello everyone,
I am trying to train my agent using the Reinforcement Learning Toolbox in MATLAB R2022a. Unfortunately, I couldn't run the training because some packages were updated: rlRepresentationOptions used to contain a 'UseDevice','gpu' option, but it has been replaced by rlOptimizerOptions, which has no 'UseDevice' option. How can I run my training on the GPU? Thank you in advance.
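For reference, the pre-R2022a pattern being described looked roughly like this (a sketch only; criticNetwork, obsInfo, actInfo, and the layer names are placeholders for your own setup):

```matlab
% Pre-R2022a: rlRepresentationOptions accepted a 'UseDevice' option
criticOpts = rlRepresentationOptions('LearnRate',1e-3, ...
    'GradientThreshold',1,'UseDevice','gpu');
% criticNetwork, obsInfo, actInfo are assumed to be defined elsewhere
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'state'},'Action',{'action'},criticOpts);
```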
  3 Comments
Berk Agin on 31 Mar 2022
Hi Kaustubh,
Thank you for your answer. I'll be looking forward to the solution. Have a nice day.
Stefan on 13 Dec 2023
This is still broken...


Answers (2)

Aashita Dutta on 31 Mar 2022
Hello Berk,
As I understand it, you are trying to train an agent using the Reinforcement Learning Toolbox in MATLAB R2022a. Due to package upgrades, rlRepresentationOptions is no longer recommended; rlOptimizerOptions is used instead to configure agent training.
Could you please try the following workaround to train using GPU:
criticOptions = rlOptimizerOptions('LearnRate',1e-03,'GradientThreshold',1);
criticOptions.UseDevice = 'gpu';
Please follow the instructions in the documentation example on training agents using a GPU for more information.
  4 Comments
Abolfazl Nejatian on 27 Jul 2022
But as I checked, there is no 'UseDevice' property on rlOptimizerOptions!
Could you please help me with this matter?
Bradley Fourie on 17 Aug 2022 (edited)
Hi everyone,
I would like to add my 2 cents, since the MATLAB R2022a Reinforcement Learning Toolbox documentation is a complete mess.
I think I have figured it out:
  • Step 1: figure out if you have a supported GPU with
availableGPUs = gpuDeviceCount("available")
gpuDevice(1)
  • Step 2: When creating your actor and critic, use the following to select the GPU (your constructors might differ, but 'UseDevice' is what matters here)
actor = rlDiscreteCategoricalActor(actorNetwork,oinfo,ainfo,'UseDevice','gpu');
critic = rlValueFunction(criticNetwork,oinfo,'UseDevice','gpu');
  • Step 3: Create your optimizer as usual
actorOpts = rlOptimizerOptions('LearnRate',3e-4,'GradientThreshold',1);
criticOpts = rlOptimizerOptions('LearnRate',3e-4,'GradientThreshold',1);
From what I can gather (with my last two brain cells), the option to use a GPU has moved to the newer actor and critic constructors. However, some staff guidance would be HIGHLY appreciated.
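Putting the three steps above together, a minimal end-to-end sketch might look like the following. The observation/action specs, network layers, and the choice of a PPO agent are illustrative, not from this thread; adapt them to your own setup:

```matlab
% Step 1: check for a supported GPU (requires Parallel Computing Toolbox)
if gpuDeviceCount("available") > 0
    gpuDevice(1);
end

% Illustrative specs and networks -- replace with your own
oinfo = rlNumericSpec([4 1]);
ainfo = rlFiniteSetSpec([1 2]);
actorNetwork  = [featureInputLayer(4) fullyConnectedLayer(2) softmaxLayer];
criticNetwork = [featureInputLayer(4) fullyConnectedLayer(1)];

% Step 2: select the GPU in the actor/critic constructors
actor  = rlDiscreteCategoricalActor(actorNetwork,oinfo,ainfo,'UseDevice','gpu');
critic = rlValueFunction(criticNetwork,oinfo,'UseDevice','gpu');

% Step 3: optimizer options no longer carry a device setting
actorOpts  = rlOptimizerOptions('LearnRate',3e-4,'GradientThreshold',1);
criticOpts = rlOptimizerOptions('LearnRate',3e-4,'GradientThreshold',1);
agentOpts  = rlPPOAgentOptions('ActorOptimizerOptions',actorOpts, ...
    'CriticOptimizerOptions',criticOpts);
agent = rlPPOAgent(actor,critic,agentOpts);
```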



Valerio on 26 Jul 2024
Hi guys, I solved it this way:
% define the agent
myagent = rlPPOAgent(obsInfo, actInfo, initopts, agent_opt);
% try to switch it to gpu
myactor = getActor(myagent);
mycritic = getCritic(myagent);
myactor.UseDevice = 'gpu';
mycritic.UseDevice = 'gpu';
% getActor/getCritic return copies, so write the modified objects back
myagent = setActor(myagent, myactor);
myagent = setCritic(myagent, mycritic);
This way I could still use the rlOptimizerOptions function.
Hoping this might be useful. Have a good day.
