Why does the SAC training stop at the first episode? What can trigger it?
I am training an SAC agent for a path-following mobile robot in MATLAB, with two different PI controllers: one for the linear velocity control and the other for the angular velocity control. I connected the Kp and Ki parameters of both controllers to the SAC agent. I defined the reward as (Reward = -0.1*(abs(Error_Linear)+abs(Error_Angular))) and the stopping condition as (Is_done = (abs(Error_Linear)+abs(Error_Angular)) < 1). I do not understand what triggers the training process to stop at the first episode.
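For clarity, this is roughly how those two signals are computed (a minimal MATLAB sketch; in my model the errors actually come from the Simulink path-following loop, so only the formulas above are exact):

% Sketch of the reward and termination signals described above.
% Error_Linear and Error_Angular are the tracking errors of the
% linear-velocity and angular-velocity loops.
totalError = abs(Error_Linear) + abs(Error_Angular);
Reward  = -0.1 * totalError;   % penalize the combined tracking error
Is_done = totalError < 1;      % flag raised once the combined error is small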
Answers (1)
Ayush Aniket
14 Nov 2024
Edited: Ayush Aniket, 14 Nov 2024
Hi Renaldo,
The training may be stopping after the first episode because of the training-termination condition specified by the StopTrainingCriteria argument of the rlTrainingOptions function. If the corresponding StopTrainingValue is already satisfied by the result of the first episode, training ends immediately. Refer to the rlTrainingOptions documentation to read about this argument.
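As an illustration (the criterion and the numeric values below are assumptions, not taken from your setup): with a reward that is always less than or equal to zero, a stop threshold like the one below can already be met after the first episode, which ends training right away.

% Illustrative training options for the SAC agent (assumed values).
% If StopTrainingValue is already reached after the first episode,
% training terminates at that point.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=1000, ...
    MaxStepsPerEpisode=500, ...
    ScoreAveragingWindowLength=10, ...
    StopTrainingCriteria="AverageReward", ... % checked after every episode
    StopTrainingValue=-5);                    % Reward <= 0, so this can be hit on episode 1

trainingStats = train(agent, env, trainOpts);

Raising StopTrainingValue, or choosing a criterion such as "EpisodeCount", lets training continue past the first episode.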
One similar example can be found here: https://www.mathworks.com/matlabcentral/answers/1779640-reinforcement-learning-agent-stops-training-unexpectedly
If this is not the issue, please share the script you are using.