How to save trained Q-Network by RL-DQN?

一馬 平田 on 31 October 2021
Answered: Abhiram on 12 June 2025
I would like to load a trained Q-Network into an rlQValueRepresentation.
How can I save the pre-trained Q-network?
I know that a DQN agent can be saved with rlTrainingOptions, but I could not confirm whether the pre-trained Q-network itself is saved.
If it is possible to save the pre-trained Q-Network with rlTrainingOptions, could you please tell me how to load it afterwards?

Answers (1)

Abhiram on 12 June 2025
To save and load a trained Q-Network used in an rlQValueRepresentation, extract the Q-network from the agent and save it as a MAT-file. Code snippets for saving and loading a Q-Network are given below:
% Extract Q-network from trained agent
qRep = getCritic(agent);
% Save the Q-network to a file
save('savedQNetwork.mat','qRep');
% Load the Q-network from file
load('savedQNetwork.mat','qRep');
% Rebuild agent from loaded Q-network (assuming agent options are available)
agentFromLoadedQ = rlDQNAgent(qRep, agentOpts);
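Since the original question also asks about rlTrainingOptions: the agent itself can be saved automatically during training, and the Q-network can then be extracted from any saved agent with getCritic. The following is a minimal sketch, assuming the standard save-agent training options; the episode limit, reward threshold, folder name, and saved file name are illustrative assumptions.
% Save the agent during training once the episode reward reaches a threshold
% (the threshold, episode limit, and folder name below are illustrative)
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',500, ...
    'SaveAgentCriteria','EpisodeReward', ...
    'SaveAgentValue',100, ...
    'SaveAgentDirectory','savedAgents');
trainingStats = train(agent,env,trainOpts);
% Each saved MAT file stores the agent in a variable named saved_agent
data = load(fullfile('savedAgents','Agent500.mat'));
qRepFromSavedAgent = getCritic(data.saved_agent);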
For more information, refer to the MATLAB documentation for the "save", "load", "rlDQNAgent", and "getCritic" functions.
Hope this helps!
