Error occurred when using conditional GAN function

Yang Liu on 25 Mar 2024
Commented: Yang Liu on 26 Mar 2024
MATLAB version: R2023b
I have prepared the data and labels in "test.mat" and created the conditional GAN structure by referring to the example. The script is listed below:
clear;
%% Load the data
% LSTM_Reform_Data_SeriesData1_20210315_data001_for_GAN;
load('test.mat')
%% Generator Network
numFilters = 4;
numLatentInputs = 100;
projectionSize = [2 1 63];
numClasses = 2;
embeddingDimension = 100;
layersGenerator = [
    imageInputLayer([1 1 numLatentInputs],'Normalization','none','Name','Input_Noise')
    projectAndReshapeLayer(projectionSize,numLatentInputs,'ProjReshape')
    concatenationLayer(3,2,'Name','Concate1')
    transposedConv2dLayer([3 2],8*numFilters,'Stride',1,'Name','TransConv1') % 4*2*32
    batchNormalizationLayer('Name','BN1','Epsilon',5e-5)
    reluLayer('Name','Relu1')
    transposedConv2dLayer([5 3],4*numFilters,'Stride',1,'Name','TransConv2') % 8*4*16
    batchNormalizationLayer('Name','BN2','Epsilon',5e-5)
    reluLayer('Name','Relu2')
    transposedConv2dLayer([5 3],2*numFilters,'Stride',1,'Name','TransConv3') % 12*6*8
    batchNormalizationLayer('Name','BN3','Epsilon',5e-5)
    reluLayer('Name','Relu3')
    transposedConv2dLayer([3 3],numFilters,'Stride',1,'Name','TransConv4') % 14*8*4
    batchNormalizationLayer('Name','BN4','Epsilon',5e-5)
    reluLayer('Name','Relu4')
    transposedConv2dLayer([1 1],1,'Stride',1,'Name','TransConv5')
    ];
lgraphGenerator = layerGraph(layersGenerator);
layers = [
    imageInputLayer([1 1],'Name','Input_Label','Normalization','none')
    embedAndReshapeLayer(projectionSize(1:2),embeddingDimension,numClasses,'EmbedReshape1')];
lgraphGenerator = addLayers(lgraphGenerator,layers);
lgraphGenerator = connectLayers(lgraphGenerator,'EmbedReshape1','Concate1/in2');
subplot(1,2,1);
plot(lgraphGenerator);
dlnetGenerator = dlnetwork(lgraphGenerator);
%% Discriminator Network
scale = 0.2;
Input_Num_Feature = [14 8 1]; % The input data is [14 8 1]
layersDiscriminator = [
    imageInputLayer(Input_Num_Feature,'Normalization','none','Name','Input_Data')
    concatenationLayer(3,2,'Name','Concate2')
    convolution2dLayer([3 3],8*numFilters,'Stride',1,'Name','Conv1')
    leakyReluLayer(scale,'Name','LeakyRelu1')
    convolution2dLayer([3 3],4*numFilters,'Stride',1,'Name','Conv2')
    leakyReluLayer(scale,'Name','LeakyRelu2')
    convolution2dLayer([3 3],2*numFilters,'Stride',1,'Name','Conv3')
    leakyReluLayer(scale,'Name','LeakyRelu3')
    convolution2dLayer([3 1],numFilters/2,'Stride',1,'Name','Conv4')
    leakyReluLayer(scale,'Name','LeakyRelu4')
    convolution2dLayer([3 1],numFilters/2,'Stride',1,'Name','Conv5')
    leakyReluLayer(scale,'Name','LeakyRelu5')
    convolution2dLayer([3 2],1,'Name','Conv6')
    leakyReluLayer(scale,'Name','LeakyRelu6')
    convolution2dLayer([2 1],1,'Name','Conv7')
    ];
lgraphDiscriminator = layerGraph(layersDiscriminator);
layers = [
    imageInputLayer([1 1],'Name','Input_Label','Normalization','none')
    embedAndReshapeLayer(Input_Num_Feature,embeddingDimension,numClasses,'EmbedReshape2')];
lgraphDiscriminator = addLayers(lgraphDiscriminator,layers);
lgraphDiscriminator = connectLayers(lgraphDiscriminator,'EmbedReshape2','Concate2/in2');
subplot(1,2,2);
plot(lgraphDiscriminator);
dlnetDiscriminator = dlnetwork(lgraphDiscriminator);
%% Train model
params.numLatentInputs = numLatentInputs;
params.numClasses = numClasses;
params.sizeData = [Input_Num_Feature length(Series_Fused_Label)];
params.numEpochs = 1000;
params.miniBatchSize = 256;
% Specify the options for Adam optimizer
params.learnRate = 0.0002;
params.gradientDecayFactor = 0.5;
params.squaredGradientDecayFactor = 0.999;
executionEnvironment = "cpu";
params.executionEnvironment = executionEnvironment;
trainNow = true;
if trainNow
    % Train the CGAN
    [dlnetGenerator,dlnetDiscriminator] = trainGAN(dlnetGenerator, ...
        dlnetDiscriminator,Series_Fused_Expand_Norm_Input,Series_Fused_Label,params);
else
    % Use pretrained CGAN (default)
    load(fullfile(tempdir,'PumpSignalGAN','GANModel.mat')) % load pretrained networks
end
However, an error occurred when I tried to run the script. A screenshot of the error message in the command window is attached as "pic1".
I stepped through the script in the debugger and found that the error happens while the series of support functions is being processed, as shown in "pic2".
Can someone help clarify? It seems that some of the GAN-related functions are not included in the "Deep Network Designer" as standard modules.
2 Comments
Yang Liu on 25 Mar 2024
PS:
The input data has the form 14*8*30779: each data slot is a 14*8 matrix, and there are 30779 slots in total. Correspondingly, there are 30779 labels as well.
Each label takes one of only two values: 0 or 1.
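You can confirm this in the workspace (variable names as in the script above):
load('test.mat')
size(Series_Fused_Expand_Norm_Input) % returns 14 8 30779
unique(Series_Fused_Label)           % returns 0 and 1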
Yang Liu on 26 Mar 2024
I have reshaped the input data to 112*30779 (one step closer to the official example), but I still get the same error as in pic1. This time, I even kept the same name strings for the embedAndReshapeLayer.
The updated script is below, basically copied from the official example. I have uploaded the reshaped data as test2.mat.
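For reference, the reshaping step was along these lines (a sketch; the actual preprocessing code is not posted):
% Hypothetical preprocessing: flatten each 14x8 slot into a 112-element column
Series_Fused_Expand_Norm_Input2 = reshape(Series_Fused_Expand_Norm_Input,112,[]);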
clear;
%% Download the data
load('test2.mat')
%% Generator Network
numFilters = 64;
numLatentInputs = 100;
projectionSize = [4 1 1024];
numClasses = 2;
embeddingDimension = 100;
layersGenerator = [
    imageInputLayer([1 1 numLatentInputs],'Normalization','none','Name','in')
    projectAndReshapeLayer(projectionSize,numLatentInputs,'proj')
    concatenationLayer(3,2,'Name','cat') % 4*1*1025
    transposedConv2dLayer([5 1],8*numFilters,'Stride',1,'Name','TransConv1') % 8*1*512
    batchNormalizationLayer('Name','BN1','Epsilon',5e-5)
    reluLayer('Name','Relu1')
    transposedConv2dLayer([7 1],4*numFilters,'Stride',1,'Name','TransConv2') % 14*1*256
    batchNormalizationLayer('Name','BN2','Epsilon',5e-5)
    reluLayer('Name','Relu2')
    transposedConv2dLayer([2 1],2*numFilters,'Stride',2,'Name','TransConv3') % 28*1*128
    batchNormalizationLayer('Name','BN3','Epsilon',5e-5)
    reluLayer('Name','Relu3')
    transposedConv2dLayer([2 1],numFilters,'Stride',2,'Name','TransConv4') % 56*1*64
    batchNormalizationLayer('Name','BN4','Epsilon',5e-5)
    reluLayer('Name','Relu4')
    transposedConv2dLayer([2 1],1,'Stride',2,'Name','TransConv5') % 112*1*1
    ];
lgraphGenerator = layerGraph(layersGenerator);
layers = [
    imageInputLayer([1 1],'Name','labels','Normalization','none')
    embedAndReshapeLayer(projectionSize(1:2),embeddingDimension,numClasses,'emb')];
lgraphGenerator = addLayers(lgraphGenerator,layers);
lgraphGenerator = connectLayers(lgraphGenerator,'emb','cat/in2');
plot(lgraphGenerator);
dlnetGenerator = dlnetwork(lgraphGenerator);
%% Discriminator Network
scale = 0.2;
inputSize = [112 1 1];
layersDiscriminator = [
    imageInputLayer(inputSize,'Normalization','none','Name','in')
    concatenationLayer(3,2,'Name','cat')
    convolution2dLayer([2 1],8*numFilters,'Stride',2,'Name','conv1')
    leakyReluLayer(scale,'Name','lrelu1')
    convolution2dLayer([2 1],4*numFilters,'Stride',2,'Name','conv2')
    leakyReluLayer(scale,'Name','lrelu2')
    convolution2dLayer([2 1],2*numFilters,'Stride',2,'Name','conv3')
    leakyReluLayer(scale,'Name','lrelu3')
    convolution2dLayer([2 1],numFilters,'Stride',2,'Name','conv4')
    leakyReluLayer(scale,'Name','lrelu4')
    convolution2dLayer([7 1],1,'Name','conv5')];
lgraphDiscriminator = layerGraph(layersDiscriminator);
layers = [
    imageInputLayer([1 1],'Name','labels','Normalization','none')
    embedAndReshapeLayer(inputSize,embeddingDimension,numClasses,'emb')];
lgraphDiscriminator = addLayers(lgraphDiscriminator,layers);
lgraphDiscriminator = connectLayers(lgraphDiscriminator,'emb','cat/in2');
plot(lgraphDiscriminator);
dlnetDiscriminator = dlnetwork(lgraphDiscriminator);
%% Train model
params.numLatentInputs = numLatentInputs;
params.numClasses = numClasses;
params.sizeData = [inputSize length(Series_Fused_Label)];
params.numEpochs = 1000;
params.miniBatchSize = 256;
% Specify the options for Adam optimizer
params.learnRate = 0.0002;
params.gradientDecayFactor = 0.5;
params.squaredGradientDecayFactor = 0.999;
executionEnvironment = "cpu";
params.executionEnvironment = executionEnvironment;
trainNow = true;
if trainNow
    % Train the CGAN
    [dlnetGenerator,dlnetDiscriminator] = trainGAN(dlnetGenerator, ...
        dlnetDiscriminator,Series_Fused_Expand_Norm_Input2,Series_Fused_Label,params);
else
    % Use pretrained CGAN (default)
    load(fullfile(tempdir,'PumpSignalGAN','GANModel.mat')) % load pretrained networks
end


Accepted Answer

Esther on 26 Mar 2024
The embedAndReshapeLayer takes the raw label data and uses it to index into a weight matrix. The weight matrix is of size embeddingDimension-by-numClasses.
So to index into the second dimension of this matrix, the labels need to have values between 1 and numClasses.
However, the labels here are zeros or ones. The error occurs because embedAndReshapeLayer tries to index into the weight matrix using zeros.
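Roughly, the failing lookup looks like this (a sketch for illustration; the actual embedAndReshapeLayer support file may differ in detail):
embeddingDimension = 100;
numClasses = 2;
weights = randn(embeddingDimension,numClasses); % learnable embedding matrix
labels = [0 1 1 0];                             % raw labels, as in test.mat
embedded = weights(:,labels);                   % error: 0 is not a valid index in MATLAB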
To fix this issue, you could just shift your label values by one:
Series_Fused_Label = Series_Fused_Label+1;
I hope that helps.
1 Comment
Yang Liu on 26 Mar 2024
Thank you so much, Esther!
It works! I have shifted the label values from 0 and 1 to 1 and 2, and the conditional GAN is now training! Thanks a lot!
(Please ignore the picture on the left side; I haven't adjusted that plot in the original function. But the training process does run!)

