2 out of 7 Observations Defined in MATLAB DDPG Reinforcement Learning Environment. Are the rest given random values?
Huzaifah Shamim
25 June 2020
Answered: Emmanouil Tzorakoleftherakis
2 July 2020
After reading up on Deep Deterministic Policy Gradient (DDPG), I found this example on MATLAB:
My question is the following: in DDPG, we plug the observation into the actor to get our actions. The MATLAB environment has 7 observations: x, y, dx, dy, sin, cos, and dtheta. However, only x and y are assigned at the beginning. Does that mean the rest are given random values before being passed to the critic network? If my understanding is wrong, could someone please explain what is occurring in this model? Thank you.
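For context, in Reinforcement Learning Toolbox the full 7-element observation vector is declared with a single spec, and the actor consumes all of it at once. A rough sketch (variable names here are illustrative, not taken from the example):

    obsInfo = rlNumericSpec([7 1]);   % [x; y; dx; dy; sin; cos; dtheta]
    obsInfo.Name = 'observations';
    % The actor maps the whole observation vector to an action:
    % action = getAction(agent,{obs});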
0 comments
Accepted Answer
Emmanouil Tzorakoleftherakis
2 July 2020
Hello,
I am assuming you are referring to the initialization of x and y inside the "flyingRobotResetFcn" function. Basically, if you are using a Simulink model as your environment (like in this case), there is no need to initialize any of the observations yourself: the initial conditions come directly from the values in your Simulink blocks. However, it is good practice to change the initial conditions at the start of every episode so that the agent gets exposed to different scenarios. Reinforcement Learning Toolbox lets you do that through the reset function mechanism. So what is happening here is that the reset function changes x0 and y0, while the remaining observations keep the values determined in the Simulink model.
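As a concrete illustration, here is a minimal sketch of such a reset function, assuming the Simulink model reads its initial position from workspace variables x0 and y0 (the 15 m spawn radius and the randomization scheme are assumptions for illustration, not necessarily what the shipped example uses):

    function in = flyingRobotResetFcn(in)
    % Randomize the robot's initial (x, y) position at the start of each
    % episode; every other state keeps the initial value set in the model.
    t = 2*pi*rand;                        % random angle around the origin
    in = setVariable(in,'x0',15*cos(t));  % assumed 15 m spawn radius
    in = setVariable(in,'y0',15*sin(t));
    end

You then attach it to the Simulink environment so it runs before every episode:

    env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo);
    env.ResetFcn = @flyingRobotResetFcn;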
Hope that helps.
0 comments
More Answers (0)