How to see actions when using the train() function in the Reinforcement Learning Toolbox
rtn
24 May 2022
Answered: Emmanouil Tzorakoleftherakis
2 June 2022
Hi!
I use the train() function with my MATLAB environment. I would also like to see the action passed into the step function at each training step. Is that possible? So far I only get the trained agent.
clear all
clc
close all
%load agent1.mat
ObservationInfo = rlNumericSpec([4 1501]);
ObservationInfo.Name = 'CartPole States';
ObservationInfo.Description = 'pendulum_force, cart position, cart velocity';
%ActionInfo = rlNumericSpec([1 1]);
ActionInfo = rlNumericSpec([2 1],'LowerLimit',[-1*pi/2;-1*pi],'UpperLimit',[pi/2;pi]);
ActionInfo.Name = 'CartPole Action';
env = rlFunctionEnv(ObservationInfo,ActionInfo,'myHapticStepFunction','myHapticResetFunction');
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
rng('shuffle')
%%
clc
load TD3_Haptic_MIMO
agent = TD3_Haptic_MIMO;
%Extract the deep neural networks from both the agent actor and critic.
actorNet = getModel(getActor(agent));
criticNet = getModel(getCritic(agent));
%Display the layers of the critic network, and verify that each hidden fully connected layer has 128 neurons
criticNet.Layers
% figure()
% plot(layerGraph(actorNet))
% figure()
% plot(layerGraph(criticNet))
temp = getAction(agent,{rand(obsInfo.Dimension)})
%%
maxepisodes = 140;
maxsteps = 1;
trainOpts = rlTrainingOptions(...
'MaxEpisodes',maxepisodes,...
'MaxStepsPerEpisode',maxsteps,...
'ScoreAveragingWindowLength',5,...
'Verbose',false,...
'Plots','training-progress',...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',1.5, 'SaveAgentCriteria',"EpisodeReward",'SaveAgentValue',1.4,...
'SaveAgentDirectory', pwd + "\Haptic\run1");
% LOOK INTO SAVING AGENT VALUE
% https://www.mathworks.com/help/reinforcement-learning/ug/train-reinforcement-learning-policy-using-custom-training.html
% https://www.mathworks.com/help/reinforcement-learning/ug/train-reinforcement-learning-agents.html
%https://www.mathworks.com/help/reinforcement-learning/ref/rltrainingoptions.html
trainingStats = train(agent,env,trainOpts);
Accepted Answer
Emmanouil Tzorakoleftherakis
2 June 2022
Hello,
To log action data throughout an episode, you would need to do so from inside the step function of your environment. From there you can save the actions to a MAT or CSV file as needed.
Hope this helps
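To illustrate the suggestion above, here is a minimal sketch of logging actions from inside a custom step function. The function name matches the one registered with rlFunctionEnv in the question (myHapticStepFunction), but the environment dynamics are placeholders, and the log file name (actionLog.mat) is an assumption; adapt both to your setup.

```matlab
function [NextObs,Reward,IsDone,LoggedSignals] = myHapticStepFunction(Action,LoggedSignals)
% Sketch: append each action to a persistent buffer and save it to disk.
% The dynamics below are placeholders; replace them with your own.

persistent actionLog            % survives across calls within the session
if isempty(actionLog)
    actionLog = [];
end
actionLog(:,end+1) = Action;    % store the current action as a new column
save('actionLog.mat','actionLog');   % assumed log file, overwritten each step

% --- placeholder environment dynamics ---
NextObs = LoggedSignals.State;  % return the stored state unchanged
Reward = 0;
IsDone = false;
end
```

After training, `load('actionLog.mat')` gives you one column per step taken across all episodes. Note that saving to disk on every step is slow; for longer runs, consider saving only at the end of each episode (e.g. from the reset function, which is called between episodes) or every N steps.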