Training a Variational Autoencoder (VAE) on sine waves
Hi,
I am trying to train a variational autoencoder following the example at https://se.mathworks.com/help/deeplearning/examples/train-a-variational-autoencoder-vae-to-generate-images.html
Basically, I am testing the autoencoder on sine waves. I have a training set and a test set, each containing 100 sine waves of 1100 samples (all of them similar). However, when I run the code, I get the following error:
Error using nnet.internal.cnn.dlnetwork/forward (line 194)
Layer 'fc_encoder': Invalid input data. The number of weights (17600) for each output feature must match the number of elements (204800) in each observation
of the first argument.
Error in dlnetwork/forward (line 165)
[varargout{1:nargout}] = forward(net.PrivateNetwork, x, layerIndices, layerOutputIndices);
Error in sampling (line 2)
compressed = forward(encoderNet, x);
Error in modelGradients (line 2)
[z, zMean, zLogvar] = sampling(encoderNet, x);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in ConvAE (line 57)
[infGrad, genGrad] = dlfeval(...
When I run the same code with XBatch = XTrain instead, I get the same error, but with 440000 elements instead of 204800.
When I instead use XBatch = XTrain(idx,:), I get the error: Index in position 1 exceeds array bounds (must not exceed 100).
Can anyone help? I have used the exact same Helper Functions as in the linked example.
Thanks!
latentDim = 50;

% Encoder network
encoderLG = layerGraph([
    imageInputLayer([1 1100],'Name','input_encoder','Normalization','none')
    convolution2dLayer([1 100], 32, 'Padding','same', 'Stride', 2, 'Name', 'conv1')
    reluLayer('Name','relu1')
    convolution2dLayer([1 100], 64, 'Padding','same', 'Stride', 2, 'Name', 'conv2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(2 * latentDim, 'Name', 'fc_encoder')
    ]);

% Decoder network
decoderLG = layerGraph([
    imageInputLayer([1 1 latentDim],'Name','i','Normalization','none')
    transposedConv2dLayer([1 100], 32, 'Cropping', 'same', 'Stride', 2, 'Name', 'transpose1')
    reluLayer('Name','relu1')
    transposedConv2dLayer([1 100], 64, 'Cropping', 'same', 'Stride', 2, 'Name', 'transpose2')
    reluLayer('Name','relu2')
    transposedConv2dLayer([1 100], 32, 'Cropping', 'same', 'Stride', 2, 'Name', 'transpose3')
    reluLayer('Name','relu3')
    transposedConv2dLayer([1 100], 1, 'Cropping', 'same', 'Name', 'transpose4')
    ]);

encoderNet = dlnetwork(encoderLG);
decoderNet = dlnetwork(decoderLG);

% Training options
executionEnvironment = "auto";
XTrain = sineTrain;
XTest = sineTest;
numTrainImages = 1100;
numEpochs = 50;
miniBatchSize = 512;
lr = 1e-3;
numIterations = floor(numTrainImages/miniBatchSize);
iteration = 0;
avgGradientsEncoder = [];
avgGradientsSquaredEncoder = [];
avgGradientsDecoder = [];
avgGradientsSquaredDecoder = [];

% Custom training loop
for epoch = 1:numEpochs
    tic;
    for i = 1:numIterations
        iteration = iteration + 1;
        idx = (i-1)*miniBatchSize+1:i*miniBatchSize;
        XBatch = XTrain(:,idx);
        XBatch = dlarray(single(XBatch), 'SSCB');
        if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
            XBatch = gpuArray(XBatch);
        end
        [infGrad, genGrad] = dlfeval(...
            @modelGradients, encoderNet, decoderNet, XBatch);
        [decoderNet.Learnables, avgGradientsDecoder, avgGradientsSquaredDecoder] = ...
            adamupdate(decoderNet.Learnables, ...
            genGrad, avgGradientsDecoder, avgGradientsSquaredDecoder, iteration, lr);
        [encoderNet.Learnables, avgGradientsEncoder, avgGradientsSquaredEncoder] = ...
            adamupdate(encoderNet.Learnables, ...
            infGrad, avgGradientsEncoder, avgGradientsSquaredEncoder, iteration, lr);
    end
    elapsedTime = toc;

    % Evaluate the test ELBO after each epoch
    [z, zMean, zLogvar] = sampling(encoderNet, XTest);
    xPred = sigmoid(forward(decoderNet, z));
    elbo = ELBOloss(XTest, xPred, zMean, zLogvar);
    disp("Epoch : "+epoch+" Test ELBO loss = "+gather(extractdata(elbo))+...
        ". Time taken for epoch = "+ elapsedTime + "s")
end
Accepted Answer
Joss Knight on 15 Nov 2019 (edited 15 Nov 2019)
It looks like your input data size is wrong. Your 'SSCB' format says that the 4th dimension is the batch dimension, but it appears that the batch is actually on the second dimension. You could try labelling the array 'SBCS' instead, but I can't be sure because I don't know what's in sineTrain. You may instead need to permute your data, or your filters, so that the convolution dimensions line up with the correct input dimensions.
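A minimal sketch of the permute route, assuming sineTrain is a 100-by-1100 matrix with one sine wave per row (the thread never shows its actual layout, but the array-bounds error when indexing rows with XTrain(idx,:) suggests the first dimension holds the 100 waves). Reshaping so each wave sits on the spatial dimensions with the batch on dimension 4 makes the data agree with both the 'SSCB' label and the [1 1100] imageInputLayer; the miniBatchSize of 20 is purely illustrative:

% Sketch only: assumes sineTrain is 100 x 1100 (one wave per row).
XTrain = reshape(sineTrain.', [1 1100 1 100]);        % 1 x 1100 x 1 x 100 (S x S x C x B)
numTrainImages = size(XTrain,4);                      % 100 observations, not 1100
miniBatchSize = 20;                                   % must not exceed 100 with this data
idx = 1:miniBatchSize;                                % first mini-batch, for illustration
XBatch = dlarray(single(XTrain(:,:,:,idx)), 'SSCB');  % batch now on the 4th dimension

The sizes in the error messages are consistent with this diagnosis: a 100-by-512 slice read as the two spatial dimensions gives 25*128*64 = 204800 elements after the two stride-2 convolutions, the full 100-by-1100 array gives 25*275*64 = 440000, and the expected 1-by-1100 input gives 1*275*64 = 17600, which is exactly the fc_encoder weight count quoted in the error.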
8 comments
Joss Knight on 21 Nov 2019
I suggest you ask a new question, to see if anyone wants to help you with this new problem.