Answered
How can I tell from the graph of the "reinforcement learning episode manager" in DDPG or rlTD3Agent that the network is learning well?
You should see the average reward curve trending upward (not necessarily monotonically), which is not the case here.
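
A minimal sketch of checking this, assuming a standard train call (agent, env, and trainOpts are placeholders from your own setup):

    % Sketch: plot the per-episode statistics returned by train
    trainStats = train(agent, env, trainOpts);
    plot(trainStats.EpisodeIndex, trainStats.AverageReward)
    xlabel('Episode'); ylabel('Average reward')   % should trend upward overall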

about 1 year ago | 0

| Accepted

Answered
How do I define and extract the MPC constraints?
The getconstraint function is supposed to return mixed input/output constraints. One use case would be that someone creates the ...
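
For reference, a minimal sketch of setting mixed input/output constraints E*u + F*y <= G and reading them back (mpcobj, E, F, G are placeholders):

    % Sketch: custom mixed constraints on a linear MPC object
    setconstraint(mpcobj, E, F, G);       % enforce E*u + F*y <= G at each step
    [E, F, G] = getconstraint(mpcobj);    % read the constraints back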

about 1 year ago | 0

Answered
RL Agent Action Limits aren't working.
Please take a look at the DDPG algorithm and specifically step 1 here. DDPG promotes exploration by adding noise on top of the a...
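
One common way to keep the deterministic part of the action bounded, sketched here as an assumption rather than the rest of the truncated answer, is to end the actor with a saturating layer:

    % Sketch: saturate the actor output (numObs and maxAction are hypothetical)
    numObs = 4; maxAction = 2;
    actorNet = [
        featureInputLayer(numObs)
        fullyConnectedLayer(64)
        reluLayer
        fullyConnectedLayer(1)
        tanhLayer                          % bounds the output to (-1, 1)
        scalingLayer(Scale=maxAction)];    % rescale to the action range

Note that the exploration noise is added after this output, so limits in the action specification or in the environment may still be needed.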

about 1 year ago | 0

Answered
I received this error in DDPG "Model input sizes must match the dimensions specified in the corresponding observation and action info specifications."
The easiest way to discover your error yourself is to use the default agent feature and use the network architecture that's aut...
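
A sketch of that approach, with obsInfo and actInfo as returned by your environment:

    % Sketch: create a default DDPG agent and inspect the auto-generated networks
    agent = rlDDPGAgent(obsInfo, actInfo);   % default architecture
    actorNet = getModel(getActor(agent));    % extract the actor network
    analyzeNetwork(actorNet)                 % check the input/output sizes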

about 1 year ago | 0

Answered
Error with using loggedSignals in the reset function while creating RL environment using custom functions
The very first line in our reset function is LoggedSignals.currentTrial = LoggedSignals.currentTrial + 1; How would the functi...
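
For reference, a custom reset function constructs LoggedSignals from scratch rather than receiving it, so a counter has to persist elsewhere; a minimal sketch (field names hypothetical):

    function [initialObs, loggedSignals] = myResetFunction()
        % Sketch: reset has no input carrying the previous episode's values
        persistent trialCount
        if isempty(trialCount), trialCount = 0; end
        trialCount = trialCount + 1;              % survives across episodes
        loggedSignals.currentTrial = trialCount;
        loggedSignals.state = zeros(4,1);         % hypothetical initial state
        initialObs = loggedSignals.state;
    end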

about 1 year ago | 0

| Accepted

Answered
Understanding Entropy Loss for PPO Agents Exploration
Hi, In PPO, the goal of training is to strike a balance between the entropy term and fine-tuning the probabilities for all avai...
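
The knob for that balance is the entropy loss weight in the agent options; a minimal sketch (value hypothetical):

    % Sketch: weight the entropy term more heavily to encourage exploration
    agentOpts = rlPPOAgentOptions;
    agentOpts.EntropyLossWeight = 0.02;   % larger values favor exploration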

about 1 year ago | 0

Answered
How to reduce noise from SAC RL-agent?
Hi, There is an agent option that achieves exactly what you want and outputs the mean output values after training: sacagent.U...
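
The preview is cut off; assuming the property meant is UseExplorationPolicy (present on agents in recent releases), a sketch would be:

    % Sketch (assumption: the truncated option is UseExplorationPolicy)
    sacagent.UseExplorationPolicy = false;   % evaluate with the mean action
    action = getAction(sacagent, {obs});     % deterministic, noise-free output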

about 1 year ago | 0

Answered
Reinforcement Learning - SAC with hybrid action spaces
Hi, Unfortunately, hybrid action spaces are not currently supported out of the box. One potential workaround is to use multi-ag...

about 1 year ago | 0

| Accepted

Answered
How to set a variable learning rate in Reinforcement Learning Toolbox?
Hi, This is not supported as of R2023b but the development team is aware and will look into supporting this capability in the fu...

about 1 year ago | 0

| Accepted

Answered
How can I set the constraints on states rather than input and output in MPC?
For linear MPC, you can add constraints on inputs and outputs only. You can either set the desired states to also be outputs (if...
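
A sketch of the first option, assuming a state-space plant where the states are exposed as outputs so they can be constrained as OVs:

    % Sketch: make every state a measured output, then bound it (values hypothetical)
    plant = ss(A, B, eye(size(A,1)), 0);   % C = I exposes all states as outputs
    mpcobj = mpc(plant, Ts);
    mpcobj.OV(2).Min = -0.5;               % hypothetical bound on the 2nd state
    mpcobj.OV(2).Max =  0.5;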

about 1 year ago | 0

| Accepted

Answered
How do we know that the PI controller can be modeled using a single neuron?
The network used to model the PI controller is exactly this one actorNet = [ featureInputLayer(numObs) fullyConnected...
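
The full network is cut off above; the idea, sketched under the assumption that the observations are the error and its integral, is that a single linear neuron computes exactly the weighted sum of a PI law:

    % Sketch: one linear neuron reproduces the PI control law
    numObs = 2;                       % observations: [error; integral of error]
    actorNet = [
        featureInputLayer(numObs)
        fullyConnectedLayer(1)];      % u = w1*e + w2*∫e (+ bias), i.e. a PI law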

about 1 year ago | 0

Answered
Using a Simulink dynamic motorcycle in a driving scenario trajectory.
Here is a video that shows how to do that using Model Predictive Control Toolbox: https://www.mathworks.com/videos/understandin...

about 1 year ago | 0

Answered
Question about external action of DDPG
The loss function does not change. What happens is that the experience buffer is populated with the action from the external sig...

about 1 year ago | 0

| Accepted

Answered
Use a linear state-space model in NLMPC
Is there a reason you want to convert to nonlinear MPC? If you get a good solution with linear MPC, going to nonlinear will only...

about 1 year ago | 0

Answered
Integral MPC in Simulink
Why don't you use the MVRate constraint instead of adding the term in the cost function? https://www.mathworks.com/help/mpc/ug...
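
A minimal sketch of constraining the rate of the first manipulated variable (bounds hypothetical):

    % Sketch: bound the change of MV(1) per control interval instead of
    % penalizing it in the cost function
    mpcobj.MV(1).RateMin = -0.1;
    mpcobj.MV(1).RateMax =  0.1;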

about 1 year ago | 0

Answered
Why reinforcement learning has different results of action between sim() and getAction()?
Hi, Which release are you using? We tried in R2023a and R2023b with UseExplorationPolicy = 0 and getAction and sim provide the s...

about 1 year ago | 0

Answered
Epsilon greedy policy for DQN
You can use the formula here to calculate the epsilon value.
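
For reference, the documented decay shrinks epsilon multiplicatively each agent step until it hits the floor; a sketch of the resulting curve (values hypothetical, taken from EpsilonGreedyExploration options):

    % Sketch: epsilon-greedy decay curve for a DQN agent
    epsilon = 1; epsilonDecay = 0.005; epsilonMin = 0.01;
    nSteps = 2000;
    eps = zeros(1, nSteps);
    for k = 1:nSteps
        eps(k) = epsilon;
        epsilon = max(epsilonMin, epsilon*(1 - epsilonDecay));
    end
    plot(eps), xlabel('Agent step'), ylabel('\epsilon')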

about 1 year ago | 1

| Accepted

Answered
Reinforcement learning agent for mixed action space.
Reinforcement Learning Toolbox does not support agents with both continuous and discrete actions. Can you share some more detail...

about 1 year ago | 0

Answered
Can I decide the RL agent's actions?
It seems like the paper you saw uses some logic to implement the behavior you mention. You could do the same with an if statemen...

about 1 year ago | 0

Answered
The agent can learn the policy through the external action port in the RL Agent so that the agent mimics the output of the reference signal
It seems the agent started learning how to imitate the existing controller but needs more time. What does the Episode Manager lo...

about 1 year ago | 0

Answered
How to import a model built by COMSOL into the Reinforcement Learning Designer
I haven't used Comsol before but it seems that you may be able to co-simulate your model with Simulink. In that case, the proces...

about 1 year ago | 0

Answered
Multi-Agent Reinforcement learning
As of R2023b, you can do multi-agent reinforcement learning using MATLAB environments. Please take a look at this example and R2...

more than 1 year ago | 0

Answered
How can I scale the action of DDPG agent in Reinforcement Learning?
DDPG training works by adding noise on top of the actor output to promote exploration. In that case you may see constraint viola...
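
A sketch of declaring the limits in the action specification (range hypothetical); since the noise is added on top of the actor output, the actor itself may also need a saturating final layer:

    % Sketch: declare action bounds in the specification
    actInfo = rlNumericSpec([1 1], LowerLimit=-1, UpperLimit=1);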

more than 1 year ago | 0

Answered
How can I define eight discrete actions in the RL section
The implementation shown here is one option. Hope that helps.
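
A minimal sketch, assuming scalar actions taking eight hypothetical values:

    % Sketch: eight discrete actions via a finite-set specification
    actInfo = rlFiniteSetSpec([-4 -3 -2 -1 1 2 3 4]);   % hypothetical values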

more than 1 year ago | 0

Answered
In Simulink, during DDPG training to regulate the CO2 concentration, the environment is not simulating. I can see that only the variables specified in the function are updating.
Hello, I would start by taking a look at the output of the agent. If the agent output does not make sense, the environment will...

more than 1 year ago | 0

Answered
5G Handover with Reinforcement Learning, mismatch of input channels and observations in reinforcement learning representation
I suspect you did not set up your critic network properly. If you share that code snippet we can take a closer look. An alternat...

more than 1 year ago | 0

Answered
Implementing mpctools package (from Rawlings group) in Simulink
I cannot comment on mpctools, but if your objective is to use IPOPT in Simulink, Model Predictive Control Toolbox allows you to...

more than 1 year ago | 0

Answered
I want to print out multiple actions in reinforcement learning
Hi, If you want to create an agent that outputs multiple actions, you need to make sure the actor network is set up accordingly...
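
A sketch of matching the actor output to a multi-dimensional action specification (sizes hypothetical):

    % Sketch: an actor whose final layer emits one value per action channel
    numObs = 4; numActions = 3;                  % hypothetical sizes
    actInfo = rlNumericSpec([numActions 1]);
    actorNet = [
        featureInputLayer(numObs)
        fullyConnectedLayer(128)
        reluLayer
        fullyConnectedLayer(numActions)];        % one output per action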

more than 1 year ago | 0

Answered
Issue with Q0 Convergence during Training using PPO Agent
It seems you set the training to stop when the episode reward reaches the value of 0.985*(Tf/Ts)*3. I cannot comment on the valu...
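
For reference, a sketch of how such a stopping rule is typically configured, reusing the question's threshold (Tf and Ts are from that setup):

    % Sketch: stop on average reward rather than a single episode's reward
    Tf = 10; Ts = 0.1;   % hypothetical values from the question's model
    trainOpts = rlTrainingOptions( ...
        StopTrainingCriteria="AverageReward", ...
        StopTrainingValue=0.985*(Tf/Ts)*3);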

more than 1 year ago | 2

| Accepted

Answered
Where is the actual storage location of the RL agent's weights?
Hello, You can implement the trained policy with automatic code generation, e.g. with MATLAB Coder, Simulink Coder and so on. Y...
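
A minimal sketch of that deployment path:

    % Sketch: export the trained policy, then generate code from it
    generatePolicyFunction(agent);   % writes evaluatePolicy.m and agentData.mat
    % codegen evaluatePolicy -args {zeros(numObs,1)}   % MATLAB Coder step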

more than 1 year ago | 0
