The RL-Agent's cumulative reward keeps overflowing
While adapting the 'rlwatertank' example, I find that my cumulative reward keeps overflowing.
The original example has a 'StopTrainingValue' of 800, reached before episode 200, but in my adapted example, I cannot get past a value of 128.
I'm pretty sure this is due to an overflow in the 'accumulate_reward' subsystem in the 'RL-Agent' Simulink block, which does not occur in the original example.
How do I fix this issue?
Answers (1)
Ashu
22 Feb 2023
It is my understanding that you are trying to adapt the 'Water Tank Simulink Model' to train your agent and that your cumulative reward is overflowing.
I assume that you are using the default 'rlTrainingOptions', which is as follows:
trainOpts = rlTrainingOptions(...
    MaxEpisodes=5000, ...
    MaxStepsPerEpisode=ceil(Tf/Ts), ...
    ScoreAveragingWindowLength=20, ...
    Verbose=false, ...
    Plots="training-progress", ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=800);
'StopTrainingCriteria' is set to "AverageReward" so that training stops when the average reward over the last 'ScoreAveragingWindowLength' episodes (20 here) exceeds the 'StopTrainingValue' (800 here).
Now, in your case the average reward over 20 consecutive episodes overshoots the value of 800 within 128 episodes, hence stopping the training.
To overcome this, you can try the following points (a code sketch illustrating some of them follows the list):
- Try increasing the maximum value of the accumulate_reward subsystem in the RL-Agent Simulink block to allow for larger reward values.
- Experiment with the value of 'MaxStepsPerEpisode', which will result in less frequent updates to the rewards.
- Additionally, you can try adjusting the hyperparameters of your reinforcement learning algorithm to better fit your problem. For example, reducing the learning rate or increasing the discount factor may help stabilize the learning process and prevent reward overflow.
- It may also be helpful to monitor the reward signal during training to identify any other issues that may be causing the overflow. You can do this by setting 'Verbose=true' in 'rlTrainingOptions', which will display the reward and other metrics during training.
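As a rough illustration of the last three points, here is a minimal sketch, assuming a DDPG agent as in the 'rlwatertank' example (with its Tf and Ts variables in the workspace) and a recent release that supports 'rlOptimizerOptions'; all numeric values are placeholders that you would tune for your own model.
% Agent hyperparameters: lower learning rates and an explicit discount factor
% (all values below are placeholders, not recommendations)
criticOpts = rlOptimizerOptions(LearnRate=1e-4, GradientThreshold=1);
actorOpts  = rlOptimizerOptions(LearnRate=1e-5, GradientThreshold=1);
agentOpts  = rlDDPGAgentOptions(...
    SampleTime=Ts, ...
    CriticOptimizerOptions=criticOpts, ...
    ActorOptimizerOptions=actorOpts, ...
    DiscountFactor=0.99);

% Training options: shorter episodes and verbose logging of the reward
trainOpts = rlTrainingOptions(...
    MaxEpisodes=5000, ...
    MaxStepsPerEpisode=ceil(Tf/(2*Ts)), ... % placeholder: half the original episode length
    ScoreAveragingWindowLength=20, ...
    Verbose=true, ...                       % print the episode reward and other metrics
    Plots="training-progress", ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=800);
You would then pass 'agentOpts' when constructing the agent and 'trainOpts' to the 'train' function, as in the original example.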
Finally, it's worth noting that the choice of 'StopTrainingValue' is problem-dependent and may need to be adjusted to match the reward scale of your application.
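For instance, if the reward in your adapted model is on a smaller scale than in the original, you could lower the stopping threshold; the value below is only a placeholder:
trainOpts.StopTrainingValue = 400;   % placeholder threshold matched to your reward scale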
You can refer to the following documentation to learn more about the Water Tank Reinforcement Learning Model.
To learn more about creating a Simulink environment and training an agent, refer to this document.