Input normalization using a reinforcement learning DQN agent

3 views (last 30 days)
Margarita Cabrera on 26 Mar 2021
Commented: H. M. on 3 Dec 2022
Hi all
I have built a DQN agent to solve a custom reinforcement learning problem.
Following the MathWorks examples, I have not applied any kind of normalization to the critic input.
In fact, as far as I could check, all RL examples that use a DNN to create an actor or a critic specify 'Normalization', 'none' at the input layers of the actor and critic.
My question is: is it possible to use normalization, for instance "zscore", at the input layers of a critic or an actor when these are based on a DNN?
I have tried to apply zscore normalization, but then the agent does not work.
Thanks
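
For reference, the input-layer pattern referred to above looks roughly like this (a minimal sketch; the observation/action specs and layer sizes are hypothetical, not taken from the actual model):

% Sketch of a critic network as in the MathWorks DQN examples:
% the observation input layer is created with 'Normalization','none'.
obsInfo = rlNumericSpec([4 1]);          % hypothetical 4-element observation
actInfo = rlFiniteSetSpec([-1 0 1]);     % hypothetical discrete action set

criticLayers = [
    featureInputLayer(obsInfo.Dimension(1), 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(24, 'Name', 'fc1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(numel(actInfo.Elements), 'Name', 'qValues')];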

Accepted Answer

Emmanouil Tzorakoleftherakis on 26 Mar 2021
Hello,
Normalization through the input layers is not supported for RL training. As a workaround, you can scale the observations and rewards on the environment side.
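For illustration, here is a minimal sketch of that workaround, assuming a custom environment built with rlFunctionEnv. The toy dynamics, reward, and the obsMean/obsStd statistics are placeholders you would replace with values appropriate for your own system:

obsMean = [0; 0];                        % assumed per-channel observation means
obsStd  = [1; 5];                        % assumed per-channel standard deviations
initState = [0; 0];

obsInfo = rlNumericSpec([2 1]);
actInfo = rlFiniteSetSpec([-1 1]);

stepFcn  = @(action, logged) normalizedStep(action, logged, obsMean, obsStd);
resetFcn = @() deal((initState - obsMean) ./ obsStd, struct('State', initState));
env = rlFunctionEnv(obsInfo, actInfo, stepFcn, resetFcn);

function [nextObs, reward, isDone, logged] = normalizedStep(action, logged, obsMean, obsStd)
    % Toy dynamics: the raw state drifts with the chosen action.
    rawObs = logged.State + [action; 0.1];
    logged.State = rawObs;

    reward = -norm(rawObs);              % placeholder reward
    isDone = norm(rawObs) > 10;          % placeholder termination condition

    % z-score the observation before it is returned to the agent, so the
    % actor/critic input layers can keep 'Normalization','none'.
    nextObs = (rawObs - obsMean) ./ obsStd;
end

The raw state stays in LoggedSignals while the agent only ever sees the z-scored observation; rewards can be scaled in the same place if needed.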
  2 Comments
Margarita Cabrera on 27 Mar 2021
Ok, thanks
H. M. on 3 Dec 2022
@Emmanouil Tzorakoleftherakis
Could you explain more about the normalization approach you mentioned? I want to do it, but I can't figure it out.
Regards
