Changing how a DQN agent explores

2 views (last 30 days)
Michael on 12 Jul 2021
Commented: Michael on 13 Jul 2021
Hi,
I'm using a DQN agent with epsilon-greedy exploration. The problem is that my agent sees state 1 99% of the time, so it never learns to act in other states. By the time it learns to get to state 2 from state 1, epsilon has already decayed significantly and the agent gets stuck taking a sub-optimal action in state 2. Is there a way to implement some other form of exploration, like using a Boltzmann distribution? Thanks for your time.
2 Comments
Tanay Gupta on 13 Jul 2021
Can you give a brief description of the states and the respective transitions?
Michael on 13 Jul 2021
Sure, thanks for the reply! My agent observes the noise present in two waveforms and whether two boxes are on or off. Turning a box off removes the noise in its associated waveform (the actions are: turn off box 1, turn off box 2, turn off both). The first state is the one where both boxes are on and the waveforms have the lowest level of noise; in that state, I want the action to be "do nothing." Unfortunately, my agent has to take a lot of steps each episode to keep the noise-detection time low. This means it almost always turns off the boxes before it sees the high level of noise that appears a few seconds into the simulation (it has to take approximately 100 "do nothing" actions before seeing the high noise level). So a different way of exploring, or possibly a different RL agent, is needed.
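To make "a different way of exploring" concrete: what I have in mind is Boltzmann (softmax) action selection over the Q-values, where a temperature parameter controls how random the choice is. A minimal plain-MATLAB sketch of the idea (the function name and both inputs are hypothetical, not part of the toolbox; in practice the Q-values would come from evaluating the agent's critic at the current observation):

function actionIdx = boltzmannAction(qValues, temperature)
% Sample an action index with probability proportional to exp(Q/temperature).
% High temperature -> nearly uniform exploration; low temperature -> nearly greedy.
logits = (qValues(:) - max(qValues(:))) / temperature; % shift by max for numerical stability
probs = exp(logits) ./ sum(exp(logits)); % softmax probabilities over actions
actionIdx = find(rand <= cumsum(probs), 1, 'first'); % inverse-CDF sampling
end

For example, boltzmannAction([0.2 1.3 0.9], 0.5) can return any of the three action indices but favors action 2, so the preference stays graded instead of collapsing to a single greedy action the way a decayed epsilon-greedy policy does.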


Answers (0)


Release

R2021a

