Training a DDPG agent for a MIMO system

TANUJA JOSHI on 24 August 2022
Answered: Yash Sharma on 26 October 2023
I am trying to train a DDPG agent for a MIMO system. The issue I am facing is that the actor outputs, i.e. the action values, are very high (outside the action range). When these actions are given to the environment, the states become NaN. To solve this, I tried confining the action within the desired bounds by applying a tanh activation at the output layer of the actor and then scaling the action to the actual bounds. With this change the action values are now within range, but they always sit at the upper bound, so I am getting a constant action throughout. I have not been able to solve this issue for a long time. Please help me with this.
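For reference, this is roughly how the actor output is set up after that change (a minimal sketch only; the layer sizes and action bounds below are placeholders, not my actual system):

% Actor output: tanh keeps the raw output in [-1, 1]; scalingLayer then
% maps it to [actLow, actHigh] via  a = scale .* tanh(x) + bias.
numObs  = 4;                 % placeholder observation count
numAct  = 2;                 % placeholder action count
actLow  = [-1; -5];          % placeholder lower action bounds
actHigh = [ 1;  5];          % placeholder upper action bounds
scale   = (actHigh - actLow)/2;
bias    = (actHigh + actLow)/2;

actorNet = [
    featureInputLayer(numObs)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numAct)
    tanhLayer
    scalingLayer('Scale',scale,'Bias',bias)];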
  1 Comment
Yi Zhao on 22 November 2022
Hello, I have encountered the same problem. How did you solve it?


Answers (1)

Yash Sharma on 26 October 2023
Hi Tanuja,
I understand that you want to train a DDPG agent for a MIMO system, but the action values you are getting are stuck at the upper bound, giving a constant action throughout. Here are a few things you can try to get sensible action values:
  • Exploration: Adjust the exploration noise, or use a different exploration strategy such as epsilon-greedy or noise-based exploration. This allows the agent to explore a wider range of actions and potentially discover better control strategies (one way to set the noise and learning-rate options is sketched after this list).
  • Learning rate: If the learning rate is too high, the weights of your neural networks may update too drastically, which can lead to instability and constant action values. Try reducing the learning rate.
  • Reward function: The design of the reward function significantly affects learning. Make sure your reward function is designed so that it encourages the agent to learn the desired behaviour.
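For example, the exploration noise and learning rates can be tuned through the DDPG agent options. Below is a minimal sketch; the property names follow recent Reinforcement Learning Toolbox releases (older releases use Variance/VarianceDecayRate and rlRepresentationOptions instead), and the values are only illustrative starting points, not tuned for your system:

% Sketch of DDPG option tuning (illustrative values only).
agentOpts = rlDDPGAgentOptions( ...
    'SampleTime',0.1, ...        % placeholder sample time
    'DiscountFactor',0.99, ...
    'MiniBatchSize',128);

% Ornstein-Uhlenbeck exploration noise: a standard deviation that is a
% noticeable fraction of the action range, decayed slowly, lets the agent
% explore instead of saturating at one bound.
agentOpts.NoiseOptions.StandardDeviation          = 0.3;
agentOpts.NoiseOptions.StandardDeviationDecayRate = 1e-5;

% Smaller learning rates (actor slower than critic) and gradient clipping
% reduce the drastic updates that can drive the actor output onto a bound.
agentOpts.ActorOptimizerOptions.LearnRate          = 1e-4;
agentOpts.CriticOptimizerOptions.LearnRate         = 1e-3;
agentOpts.ActorOptimizerOptions.GradientThreshold  = 1;
agentOpts.CriticOptimizerOptions.GradientThreshold = 1;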
Please refer to the documentation links below, which I believe will help you for further reference:
Hope this helps!
