How to convert 2D layer to 1D and from 1D to 2D?

Grzegorz Klosowski on 4 September 2022
Answered: Mandar on 13 December 2022
Hello,
I want to create an autoencoder architecture with a 2D input and output (a matrix), but inside I need a 1D fullyConnectedLayer as the latent layer. How can I do this? I cannot see suitable "bricks" for it in the Layer Library of Deep Network Designer.
Regards,
Greg

Answers (2)

David Willingham on 6 September 2022
Can you describe your application a little more? I.e., what is your input: a matrix of signals or an image?
2 Comments
Grzegorz Klosowski on 6 September 2022
The input and output are 48x48 matrices. It is a single-channel image consisting of real numbers.
Grzegorz Klosowski on 6 September 2022 (edited: 6 September 2022)
Exactly this matrix, the same at the input and the output, and in the middle it should be a 1D layer. I need to compress the image this way, using an autoencoder. In the latent layer I want to have a vector of, say, 256 neurons. How do I change the dimension from 2D to 1D and from 1D back to 2D inside a neural network (an autoencoder)?
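One possible way to do this (a minimal sketch, not an official recipe; the layer names and the 512-unit intermediate sizes are my own assumptions) is to let fullyConnectedLayer perform the 2D-to-1D flattening itself and to reshape the decoder output back to 48x48 with a functionLayer, which is available from R2021b:

% Sketch: fully connected autoencoder for 48x48x1 images with a 256-neuron latent layer.
% Assumes R2021b or newer (functionLayer); sizes other than 48x48 and 256 are illustrative.
layers = [
    imageInputLayer([48 48 1], Normalization="none")

    % Encoder: fullyConnectedLayer flattens the 48x48x1 input to 1D automatically
    fullyConnectedLayer(512)
    reluLayer
    fullyConnectedLayer(256, Name="latent")   % 1D latent vector

    % Decoder: expand back to 48*48 = 2304 values ...
    fullyConnectedLayer(512)
    reluLayer
    fullyConnectedLayer(48*48)

    % ... and reshape the 1D vector back into a 48x48x1 image
    functionLayer(@(X) dlarray(reshape(stripdims(X),48,48,1,[]),"SSCB"), ...
        Formattable=true, Name="reshape2d")

    regressionLayer];

You could then train this as image-to-image regression, e.g. trainNetwork(XTrain, XTrain, layers, options) with XTrain of size 48x48x1xN, or drop the regressionLayer and use a dlnetwork with a custom training loop. For a convolutional decoder, the final fullyConnectedLayer/functionLayer pair would be replaced by a similar project-and-reshape step followed by transposedConv2dLayer blocks.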



Mandar on 13 December 2022
I understand that you want to make an autoencoder network with a specific hidden layer (latent layer) size.
You may refer to the following documentation, which shows how to create an autoencoder network with a specific hidden layer size, learn latent features, and then stack the trained autoencoders into a stacked autoencoder network with multiple hidden layers that learns effective latent features.
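For example, a minimal sketch along those lines using trainAutoencoder (variable names and the option values here are illustrative, not taken from the referenced documentation):

% Sketch: shallow autoencoder with a 256-unit hidden (latent) layer.
% XTrain is assumed to be a cell array of 48x48 single-channel images.
hiddenSize = 256;
autoenc = trainAutoencoder(XTrain, hiddenSize, ...
    'MaxEpochs', 400, ...
    'L2WeightRegularization', 0.004, ...
    'SparsityRegularization', 4, ...
    'SparsityProportion', 0.15);

% 1D latent features (hiddenSize-by-numObservations) and 2D reconstructions
Z    = encode(autoenc, XTrain);
XRec = predict(autoenc, XTrain);

trainAutoencoder handles the 2D-to-1D and 1D-to-2D conversion internally, and several trained autoencoders can later be combined with stack to form the stacked network described in the documentation.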

