Question regarding DDPG PMSM FOC control example
Mohamed Hannan Sohail
3 Mar 2021
Commented: Emmanouil Tzorakoleftherakis
8 Mar 2021
I am trying to do PMSM control similar to the DDPG example, but I have modelled the motor in the dq frame (vd and vq as inputs; id, iq, and speed as outputs).
Do I need to discretize the entire environment with different sample times and use IIR filters if I am not going to use PWM? Or does the DDPG agent require the environment to be discrete?
0 Comments
Accepted Answer
Emmanouil Tzorakoleftherakis
5 Mar 2021
All RL agents in Reinforcement Learning Toolbox operate at fixed discrete-time intervals by default. However, you do not need to do anything in particular to discretize your Simulink model. In fact, your model can run with a variable-step solver, and the agent's "sample time" parameter determines how frequently the RL Agent block executes.
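For reference, a minimal sketch of specifying the agent's sample time via `rlDDPGAgentOptions`; the variable names (`obsInfo`, `actInfo`) and the sample-time value are placeholders for whatever your environment definition provides:

```matlab
% Placeholder sample time; choose a value appropriate for your control loop
Ts = 2e-4;

% SampleTime determines how often the RL Agent block fires during simulation;
% the rest of the Simulink model can still use a variable-step solver.
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);

% obsInfo/actInfo are assumed to come from your environment specification
agent = rlDDPGAgent(obsInfo, actInfo, agentOpts);
```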
2 Comments
Emmanouil Tzorakoleftherakis
8 Mar 2021
The FOC example is computationally expensive because the agent sample time is very small. Training speed depends on many design choices, including how long the model takes to simulate.
Even with parallelization, there is no guarantee of linear scaling in training time. I would still create a technical support case so that we can take a look at the crashing issue.
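As a hedged sketch of the parallelization mentioned above, parallel training can be enabled through `rlTrainingOptions` (this requires Parallel Computing Toolbox, and the episode count shown is a placeholder):

```matlab
% UseParallel distributes episode simulation across workers; as noted,
% the resulting speedup is typically sublinear in the number of workers.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 1000, ...   % placeholder value
    'UseParallel', true);      % requires Parallel Computing Toolbox

% trainingStats = train(agent, env, trainOpts);
```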
More Answers (0)