How to extract the trained actor network from the trained agent in Matlab environment? (Reinforcement Learning Toolbox)
After an agent has been successfully trained with DDPG in the MATLAB environment, the MathWorks tutorial says the following code should be executed to verify the agent:
simOptions = rlSimulationOptions('MaxSteps',50);
experience = sim(env,agent,simOptions);
Unfortunately, this is not flexible enough for my program. I would like to extract the trained actor network from the trained agent so that, for more complex tasks, I can obtain an action by feeding the observation vector directly into the actor network at each sampling step of my robot program. However, I cannot find the trained actor network among the variables in the workspace.
Is there a way to extract the trained actor network? If so, how do I call the extracted network (e.g., what are its I/O formats)?
Accepted Answer
Anh Tran
5 Jun 2020
You can extract the actor (i.e., the policy) from the trained agent with getActor. Then you can use the actor to predict the best action for a given observation with getAction.
% get actor representation
actor = getActor(agent);
% actor predicts an action given an observation
action = getAction(actor, observation)
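A fuller sketch of the per-step workflow might look like the following. This assumes a trained agent named agent with a single observation channel; the dummy observation and the cell-array indexing are illustrative, and depending on your Reinforcement Learning Toolbox release, getAction may accept/return raw arrays instead of cell arrays, so check the documentation for your version.

```matlab
% Sketch: extract the actor once, then query it at every control step.
actor = getActor(agent);               % actor representation from the trained agent

% Inspect the expected observation format (dimensions, limits).
obsInfo = getObservationInfo(agent);

% Dummy observation of the correct size (replace with your robot's
% measured observation vector at each sampling step).
obs = rand(obsInfo(1).Dimension);

% With multiple observation channels, observations are passed as a cell
% array; with a single channel, wrap the array and index the result.
action = getAction(actor, {obs});
a = action{1};                         % numeric action vector to apply
```

Extracting the actor once outside the control loop and calling getAction inside it avoids simulating through sim, which is what gives you the flexibility to interleave the policy with your own robot code.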