Reinforcement Learning Toolbox Error in R2022a

5 views (last 30 days)
PB75
PB75 on 8 Jun 2022
Answered: Yogesh Kumar on 31 Dec 2023
Hi,
I have been using the RL Toolbox in R2021a with a TD3 agent and a fully connected (non-LSTM) network to control a PMSM, with no issues.
I recently updated my install to R2022a, and my RL code (which runs fine in R2021a) now flags the following error, which does not make sense as it is a non-LSTM network. The output shows:
observationInfo =
  rlNumericSpec with properties:
     LowerLimit: [20×1 double]
     UpperLimit: [20×1 double]
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [20 1]
       DataType: "double"

actionInfo =
  rlNumericSpec with properties:
     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [2 1]
       DataType: "double"

env =
  SimulinkEnvWithAgent with properties:
             Model : MyPMLSM_RL_Single_Vel
          ResetFcn : []
    UseFastRestart : on
'SequenceLength' option value must be greater than 1 for agent using recurrent neural networks.

    rl.agent.util.checkOptionFcnCompatibility(this.AgentOptions,actor);
    this = setActor(this,actor);

Error in rlTD3Agent (line 107)
    Agent = rl.agent.rlTD3Agent(Actor, Critic, AgentOptions);
The actor and agent options code is below. Has anyone encountered this error when updating to R2022a with code originally written and run in R2021a?
Thanks in advance
Patrick
actorNet = [
    sequenceInputLayer(numObservations,'Normalization','none','Name','observation')
    fullyConnectedLayer(200,'Name','ActorFC1')
    reluLayer('Name','ActorRelu1')
    fullyConnectedLayer(100,'Name','ActorFC2')
    reluLayer('Name','ActorRelu2')
    fullyConnectedLayer(numActions,'Name','ActorFC3')
    tanhLayer('Name','ActorTanh1')
    ];

actorOptions = rlRepresentationOptions('Optimizer','adam','LearnRate',2e-4, ...
    'GradientThreshold',1);

actor = rlDeterministicActorRepresentation(actorNet,observationInfo,actionInfo, ...
    'Observation',{'observation'},'Action',{'ActorTanh1'},actorOptions);
Ts_agent = Ts;

agentOptions = rlTD3AgentOptions("SampleTime",Ts_agent, ...
    "ExperienceBufferLength",2e6, ...
    "TargetSmoothFactor",0.005, ...
    "MiniBatchSize",312, ...
    "SaveExperienceBufferWithAgent",true);

3 Answers

PB75
PB75 on 12 Jul 2022
Hi,
Can I get some help with the issue in this post? I cannot run my RL code, which was created and runs in R2021a, in R2022a. The error says it is an LSTM network, which it is not.
Any help would be great.
Thanks

PB75
PB75 on 16 Jun 2023
Hi,
My RL code, originally written in R2021a, flags an error when run in R2022a. Any help would be great.

Yogesh Kumar
Yogesh Kumar on 31 Dec 2023
I had a similar issue. It appears to be caused by the sequenceInputLayer at the start of the network: the agent treats any network that begins with a sequence input layer as recurrent, even when it contains no LSTM layers. Use a different input layer and this issue should get resolved.
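For reference, a minimal sketch of that change, assuming the same actor network as in the question (featureInputLayer has been available since R2020b; sequenceInputLayer is only needed for recurrent networks):

actorNet = [
    % featureInputLayer replaces sequenceInputLayer, so the agent no longer
    % classifies the network as recurrent and the 'SequenceLength' check
    % should not fire
    featureInputLayer(numObservations,'Normalization','none','Name','observation')
    fullyConnectedLayer(200,'Name','ActorFC1')
    reluLayer('Name','ActorRelu1')
    fullyConnectedLayer(100,'Name','ActorFC2')
    reluLayer('Name','ActorRelu2')
    fullyConnectedLayer(numActions,'Name','ActorFC3')
    tanhLayer('Name','ActorTanh1')
    ];

The rest of the actor and agent construction can stay as in the question, since only the input layer determines whether the network is treated as recurrent.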

Release

R2022a
