I'm having trouble with convolution1dLayer

nagihan yagmur on 26 Mar 2023
Commented: Matt J on 6 Apr 2023
layers = [
featureInputLayer(24)
convolution1dLayer(5, 32, 'Padding', 'same')
batchNormalizationLayer
reluLayer
maxPooling1dLayer(2, 'Stride', 2)
convolution1dLayer(5, 64, 'Padding', 'same')
batchNormalizationLayer
reluLayer
maxPooling1dLayer(2, 'Stride', 2)
convolution1dLayer(5, 128, 'Padding', 'same')
batchNormalizationLayer
reluLayer
maxPooling1dLayer(2, 'Stride', 2)
dropoutLayer(0.5)
fullyConnectedLayer(5)
softmaxLayer
classificationLayer];
options = trainingOptions('adam', ...
'MaxEpochs', 20, ...
'MiniBatchSize', 128, ...
'ValidationData', {XVal, YVal}, ...
'ValidationFrequency', 50, ...
'Shuffle', 'every-epoch', ...
'Verbose', false, ...
'Plots', 'training-progress');
% XTrain = 5000x24, YTrain = 5000x1, XTest = 5000x24, YTest = 5000x1
net = trainNetwork(XTrain, YTrain, layers, options);
% Compute the accuracy
YPred = classify(net, XTest);
accuracy = sum(YPred == YTest) / numel(YTest);
fprintf('Accuracy: %0.2f%%\n', 100*accuracy);
I have a dataset of 15,300 records with 24 features (size 15300x24). The output consists of 5 classes (15300x1). I am trying to classify it with a CNN, but when I define the layers I get the following error:
Caused by:
Layer 2: Input data must have one spatial dimension only, one temporal dimension only, or one of each.
Instead, it has 0 spatial dimensions and 0 temporal dimensions.
I haven't been able to solve it.
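For reference, the error can be reproduced with just the first two layers: featureInputLayer emits feature data with only channel and batch dimensions, so the following convolution1dLayer finds no spatial or temporal dimension to convolve over. A minimal sketch to confirm this with the network analyzer (it should report the same Layer 2 error):
% Minimal reproduction of the Layer 2 error: featureInputLayer outputs
% "CB" (channel x batch) data with 0 spatial and 0 temporal dimensions,
% so convolution1dLayer has nothing to slide its filters over.
layers = [
    featureInputLayer(24)
    convolution1dLayer(5, 32, 'Padding', 'same')];
analyzeNetwork(layers)   % should flag the dimension mismatch at the convolution layer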
  2 Comments
Walter Roberson on 26 Mar 2023
Your XTrain is empty, somehow.
nagihan yagmur on 26 Mar 2023 (edited by Walter Roberson on 26 Mar 2023)
clc;
clear all;
load veri_seti.mat
X = MyData.Inp;
Y = categorical(MyData.Out);
numClasses = 5;
layers = [ featureInputLayer(24,'Name','inputs')
convolution1dLayer(128,3,'Stride',2)
reluLayer() maxPooling1dLayer(2,'Stride',2)
batchNormalizationLayer()
convolution1dLayer(64,3,'Stride',1)
reluLayer()
maxPooling1dLayer(2,'Stride',2)
batchNormalizationLayer()
dropoutLayer(0.2)
convolution1dLayer(32,3,'Stride',1)
reluLayer()
batchNormalizationLayer()
convolution1dLayer(16,3,'Stride',1)
reluLayer()
batchNormalizationLayer()
dropoutLayer(0.2)
convolution1dLayer(8,3,'Stride',1)
reluLayer()
maxPooling1dLayer(2,'Stride',2)
globalMaxPooling1dLayer()
dropoutLayer(0.2)
batchNormalizationLayer()
fullyConnectedLayer(1024)
fullyConnectedLayer(1024)
softmaxLayer()
classificationLayer()];
% Learning rate and other hyperparameters
miniBatchSize = 128;
maxEpochs = 30;
initialLearningRate = 0.001;
learnRateDropFactor = 0.1;
learnRateDropPeriod = 10;
% Training options object
options = trainingOptions('adam', ...
'MiniBatchSize', miniBatchSize, ...
'MaxEpochs', maxEpochs, ...
'InitialLearnRate', initialLearningRate, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', learnRateDropFactor, ...
'LearnRateDropPeriod', learnRateDropPeriod, ...
'Shuffle', 'every-epoch', ...
'Verbose', false, ...
'Plots', 'training-progress');
Error using vertcat
Dimensions of arrays being concatenated are not consistent.
Error in example (line 9)
layers = [ featureInputLayer(24,'Name','inputs')
I keep bouncing back and forth between these two errors.


Accepted Answer

Matt J on 4 Apr 2023 (edited 6 Apr 2023)
Tech Support has suggested two workarounds to me. The simplest, in my opinion, is to recast the training as a 2-D image classification problem in which one of the image dimensions is a singleton. This requires an imageInputLayer, and the convolutional and pooling layers must be converted to their 2-D forms with the corresponding filter and pooling dimensions set to 1.
load veri_seti
XTrain = reshape(MyData.Inp',24,1,1,[]); %Dimensions: 24x1x1xBatch
YTrain = reshape( categorical(MyData.Out),[],1); %Dimensions: Batchx1
layers = [ imageInputLayer([24,1],'Name','inputs') %<---Use imageInputLayer
convolution2dLayer([5,1], 32, 'Padding', 'same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer([2,1], 'Stride', [2,1])
convolution2dLayer([5,1], 64, 'Padding', 'same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer([2,1], 'Stride', [2,1])
convolution2dLayer([5,1], 128, 'Padding', 'same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer([2,1], 'Stride', [2,1])
dropoutLayer(0.5)
flattenLayer
fullyConnectedLayer(5)
softmaxLayer
classificationLayer];
%analyzeNetwork(layers);
options = trainingOptions('adam', ...
'MaxEpochs', 3, ...
'MiniBatchSize', 128, ...
'Verbose', false, ...
'Plots', 'training-progress','ExecutionEnvironment','cpu');
net = trainNetwork(XTrain, YTrain(:), layers, options);
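To evaluate on held-out data, the same reshape can be applied before calling classify. A minimal sketch, assuming test arrays named XTest and YTest shaped like the data in the question (with YTest categorical):
% Reshape the test features to 24x1x1xBatch to match the training data,
% then classify and compute accuracy. The XTest/YTest names are assumptions.
XTest2 = reshape(XTest', 24, 1, 1, []);
YPred  = classify(net, XTest2);
accuracy = sum(YPred == YTest) / numel(YTest);
fprintf('Accuracy: %0.2f%%\n', 100*accuracy);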
  3 Comments
nagihan yagmur on 6 Apr 2023
I am very very grateful
Matt J on 6 Apr 2023
I'm glad, but please click Accept on the answer to indicate that it worked.


More Answers (1)

Walter Roberson on 26 Mar 2023 (moved by Walter Roberson on 26 Mar 2023)
layers = [ featureInputLayer(24,'Name','inputs')
convolution1dLayer(128,3,'Stride',2)
reluLayer() maxPooling1dLayer(2,'Stride',2)
Notice you have two layers on the same line.
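A sketch of the corrected concatenation for those first few layers, each on its own row:
% Each layer needs its own row inside the brackets; putting reluLayer and
% maxPooling1dLayer on one row makes that row wider than the others, which
% is what triggers the vertcat error.
layers = [ featureInputLayer(24,'Name','inputs')
    convolution1dLayer(128,3,'Stride',2)
    reluLayer()
    maxPooling1dLayer(2,'Stride',2) ];
Note that fixing the concatenation only removes the vertcat error; a convolution1dLayer placed directly after featureInputLayer still produces the dimension error above, which is what the accepted answer addresses.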
  3 Comments
Walter Roberson on 27 Mar 2023
Please show
whos -file veri_seti.mat
whos X Y
nagihan yagmur on 29 Mar 2023
The dataset is attached.
