Where is the actual storage location of the RL agent's weights?

2 views (last 30 days)
Dmitriy Ogureckiy on 29 June 2023
When I trained my RL agent, I got several agent files:
After loading one of them into the workspace, I see this data:
But the saved_agent that is used in the subsequent simulation does not seem to contain any weights or information that could be used in Simulink modeling.
--------
My question is: where exactly are the weights of the networks stored, and how does the simulation happen?
--------
P.S. For example, I want to deploy this trained RL network on a real robot; how can I do that without the weights?

Answers (1)

Emmanouil Tzorakoleftherakis on 5 July 2023
Hello,
You can implement the trained policy with automatic code generation, e.g., with MATLAB Coder, Simulink Coder, and so on. You don't have to know the weights for that; the code is generated automatically. The following two links provide additional info:
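As a rough sketch of that route (assuming the trained agent is in a variable named saved_agent, as in the question; the .mat file name and obsDim are hypothetical):

% Minimal sketch: generatePolicyFunction creates evaluatePolicy.m plus a
% data file holding the policy parameters, ready for MATLAB Coder.
load('saved_agent.mat','saved_agent')   % hypothetical file name
generatePolicyFunction(saved_agent)     % writes evaluatePolicy.m
% The generated evaluatePolicy.m can then be compiled for the target, e.g.:
% codegen -config:lib evaluatePolicy -args {ones(obsDim,1)}  % obsDim = observation size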
That said, if you still want to take a look at the trained weights, you need to extract the neural network from the agent. You can do this as shown here:
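For example, a minimal sketch (saved_agent is the variable from the question):

% Extract the actor (policy network) and its weights from the agent.
actor     = getActor(saved_agent);
actorNet  = getModel(actor);                 % underlying network object
actorPar  = getLearnableParameters(actor);   % cell array of weights/biases

% The critic can be inspected the same way:
critic    = getCritic(saved_agent);
criticPar = getLearnableParameters(critic);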
Hope this helps
2 comments
Dmitriy Ogureckiy on 17 July 2023
Thank you, but my question was: how do the getCritic and getActor functions get the weights from the agent if it doesn't contain them?
Emmanouil Tzorakoleftherakis on 17 July 2023 (edited: 17 July 2023)
The agent "includes" the neural networks, which in turn "include" the weights. Just because you can see the weights on the neural network object does not mean you can also view them directly on the agent object; that depends on how the code of the classes is structured. The examples I shared show how you can get the weights from the agent.
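As a purely conceptual illustration (this is not the actual toolbox source code), the structure is along these lines: the agent stores the actor/critic objects in private properties, and getActor/getCritic simply return them:

% Conceptual sketch only -- not the real Reinforcement Learning Toolbox code.
classdef SketchAgent
    properties (Access = private)
        Actor   % actor object, whose network holds the actual weights
        Critic  % critic object, likewise
    end
    methods
        function actor = getActor(obj)
            actor = obj.Actor;     % the getter exposes the stored actor
        end
        function critic = getCritic(obj)
            critic = obj.Critic;
        end
    end
end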

