REINFORCE algorithm: unable to compute gradients on latest toolbox version

8 views (last 30 days)
Bhooshan V, 4 April 2022
Commented: Bhooshan V, 6 April 2022
I have been trying to implement the REINFORCE algorithm with a custom training loop.
The LSTM actor network takes 50 time steps of three states as input, so a state has dimension 3x50.
For computing gradients, the input data is in the following format:
num_states x batchsize x N_TIMESTEPS = (3x1)x50x50.
In Reinforcement Learning toolbox version 1.3, the following line works perfectly.
% actor - the custom actor network, actorLossFunction - custom loss function, lossData - custom variable
actorGradient = gradient(actor,@actorLossFunction,{reshape(observationBatch,[3 1 50 50])},lossData);
However, when I run the same code in the latest RL toolbox version 2.2, I get the following error:
------------------------------------------------------------------------------------------------------------------------------------------------------
Error using rl.representation.rlAbstractRepresentation/gradient
Unable to compute gradient from representation.
Error in simpleRLTraj (line 184)
actorGradient= gradient(actor,@actorLossFunction,{reshape(observationBatch,[3 1 50 50])},lossData);
Caused by:
Error using extractBinaryBroadcastData
dlarray is supported only for full arrays of data type double, single, or logical, or for full gpuArrays of
these data types.
------------------------------------------------------------------------------------------------------------------------------------------------------
I tried tracing the error back, but it gets more complicated. Why do I get an error for code that works perfectly on the earlier version of the RL toolbox?

Accepted Answer

Joss Knight, 5 April 2022 (edited: 5 April 2022)
What is
underlyingType(observationBatch)
underlyingType(lossData)
?
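For context, underlyingType reports the class of the data stored inside a container such as a dlarray or gpuArray, which is the type the gradient machinery actually operates on. A minimal sketch (the values below are stand-ins, not the original observationBatch or lossData):

```matlab
% underlyingType reports the class of the data inside a container
x = dlarray(single(rand(3,1,50,50)));
underlyingType(x)   % 'single' -- a type accepted for gradient computation

y = {rand(1,50)};   % a result accidentally left wrapped in a cell array
underlyingType(y)   % 'cell'   -- not accepted by dlarray
```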
5 comments
Anh Tran, 5 April 2022
Can you attach your script so we can help better?
Bhooshan V, 6 April 2022
I found the issue. Apparently, the output of the neural network is a cell array, not a double.
As a result of some typecasting along the way, the loss ended up as a cell array.
It turns out a cell array cannot be converted to a dlarray via the dlarray() function, which must be called somewhere internally by gradient().
Example:
dlarray({3})
Error using dlarray
dlarray is supported only for full arrays of data type double, single, or logical, or for full gpuArrays of these data types.
I have resolved the error. Thank you for helping me realize this.
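For anyone hitting the same error: unwrapping the cell before any dlarray conversion avoids it. A minimal sketch (the variable names are illustrative, not from the original script):

```matlab
lossCell = {single(0.25)};   % loss accidentally wrapped in a cell array
% dlarray(lossCell)          % errors, as shown above
loss = lossCell{1};          % unwrap with brace indexing (or cell2mat)
dlLoss = dlarray(loss);      % now a valid dlarray with underlying type single
```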


More Answers (0)


Release: R2022a
