How can I display the trained network weights of a reinforcement learning agent?
Hello,
I trained a DDPG agent using the Reinforcement Learning Toolbox.
I wanted to see the trained weights in the agent, so after training finished I checked the agent variables in the workspace.
However, I couldn't find any weight values in the variables, not even in the 'agent' and 'env' variables.
I know it is possible to inspect network weights in the Neural Network Toolbox, but is it possible to access the weights in the Reinforcement Learning Toolbox?
What should I do?
Answers (1)
Anh Tran on 21 February 2020
Edited: Anh Tran, 21 February 2020
Hi Ru SeokHun,
In MATLAB R2019b and earlier, this is a two-step process:
- Use the getActor and getCritic functions to extract the actor and critic representations from the trained agent.
- Use the getLearnableParameterValues function to get the weights and biases of the neural network representation.
The code below retrieves the parameters of the trained actor; you can compare these values with those of an untrained agent. Assume you have a DDPG agent named 'agent'.
% get the agent's actor, which predicts next action given the current observation
actor = getActor(agent);
% get the actor's parameters (neural network weights)
actorParams = getLearnableParameterValues(actor);
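To actually display the weights, you can iterate over the returned parameters. As a minimal sketch (assuming R2019b-era behavior, where getLearnableParameterValues returns a cell array of weight and bias arrays, one entry per learnable layer):

```matlab
% Get the critic the same way as the actor
critic = getCritic(agent);
criticParams = getLearnableParameterValues(critic);

% Print the size and values of each learnable parameter of the actor
for i = 1:numel(actorParams)
    fprintf('Actor parameter %d, size [%s]:\n', i, num2str(size(actorParams{i})));
    disp(actorParams{i});
end
```

Comparing these arrays before and after training (e.g., by saving a copy of the untrained agent's parameters first) confirms that the weights have been updated.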