Error: "Incorrect type of 'Z' for 'predict' in Layer 'samplelayer'. Expected 'gpuArray', but instead was 'single'." This error appears when using a custom deep learning layer.

7 views (last 30 days)
I have created a custom deep learning layer for my image classification network. The network trains correctly on the CPU, but when I use the GPU in my workstation, training fails with this error:
"Incorrect type of 'Z' for 'predict' in Layer 'samplelayer'. Expected 'gpuArray', but instead was 'single'."
'Z' is the output variable returned by the layer's predict function. The portion of the code that triggers the error is shown below:
PE_1 = {PE_even, PE_odd};
Z = single(cat(2, PE_1{:}));
Z = dlarray(Z, "CB");
If I replace 'single' with 'gpuArray', the same error appears with the two types reversed. Can anyone help me with this? I declare the layer class like this:
classdef sinusoidalPositionEncoding < nnet.layer.Layer ...
    & nnet.layer.Formattable ...
    & nnet.layer.Acceleratable
Thanks in advance for the help.

Accepted Answer

Joss Knight on 22 August 2024
Edited: 22 August 2024

Hello. You need to return an array of the same data type and storage type as the input. Somehow your data is no longer on the GPU and no longer a dlarray, which means your network will not train unless you have implemented a backward function. Show us the rest of your layer code, including whether you are using the Formattable mixin, whether you have implemented a backward function, and whether you are training with trainnet or trainNetwork. Thanks.

Straightforwardly, making the output a single gpuArray dlarray (dlarray(gpuArray(single(x)), "CB")) will suppress this error, but other errors will probably follow, and it will prevent your layer from working for CPU training, so I expect it isn't the right solution.

3 Comments
Joss Knight on 24 August 2024

Yeah. It's an odd one, because your output doesn't depend on your input. I think the error message is misleading and the real problem is that your layer cannot be traced. Try multiplying the output by ones(1,'like',X) to fix this.

A neater solution is to use colon(1,2,size(X,2),'like',X) to generate the odd and even indices; then you shouldn't need any of the extra conversions except to add the labels.
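As a sketch, the ones() trick above might look like this inside predict (the PE_even/PE_odd names come from the question; everything else here is an assumption):

```matlab
% Tie the otherwise input-independent output to X so the layer can be
% traced; ones(1,'like',X) inherits X's class and its dlarray/gpuArray
% storage, so the product Z ends up with the same type as the input.
Z = cat(2, PE_even, PE_odd);
Z = Z .* ones(1, 'like', X);
Z = dlarray(Z, "CB");
```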

Joss Knight on 27 September 2024
After further analysis, there are two different errors here.
If you make the output a CPU 'single' array, it errors during training because the output is expected to be a gpuArray; if you make the output a GPU 'single', it errors during initialization because the output is expected to be a CPU array. During initialization we always pass CPU arrays through the custom layers.
So the correct solution is to use cast(1:2:size(X,2),'like',X), which gives the correct behaviour in both cases.
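Putting the accepted fix together, a hypothetical sketch of the predict method might look like this (only the 'like'-based index generation comes from the answer; the sin/cos lines are placeholders for the asker's actual encoding):

```matlab
function Z = predict(~, X)
    % Build position indices with the same class (single/double) and
    % storage (CPU or gpuArray) as the input X, so the layer works both
    % during initialization (CPU arrays) and during GPU training.
    n = size(X, 2);
    oddIdx  = cast(1:2:n, 'like', X);   % 1, 3, 5, ...
    evenIdx = cast(2:2:n, 'like', X);   % 2, 4, 6, ...
    PE_odd  = sin(oddIdx);              % placeholder for the real encoding
    PE_even = cos(evenIdx);             % placeholder for the real encoding
    Z = dlarray(cat(2, PE_even, PE_odd), "CB");
end
```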
