What is the error in this classification code?

Aml omar on 28 Oct 2021
Answered: Walter Roberson on 28 Oct 2021
clear all;
clc;
close all;
digitDataseypath= uigetdir('C:\','select dataset directory');
Error using matlab.internal.lang.capability.Capability.require (line 94)
Support for Java user interfaces is required, which is not available on this platform.

Error in uigetdir (line 52)
Capability.require(Capability.Swing);
imds = imageDatastore(digitDataseypath,'IncludeSubfolders',true,'LabelSource','foldernames');
%% Count number of images per label
% labelCount = countEachLabel(imds);
% minSetCount = min(labelCount{:,2}); % determine the smallest amount of images in a category
% % Use splitEachLabel method to trim the set.
% imds = splitEachLabel(imds, minSetCount);
% countEachLabel(imds);
% %Specify Training and Validation Sets
% labels = countEachLabel(imds); %Calculate the number of images in each category
% img2=zeros(224,224,3);
% for i=1:length(imds.Labels)
% img=readimage(imds,i);
% img1=imresize(img,[224 224]);
% c=length(size(img));
% if c==2
% img2=cat(3,img1,img1,img1);
% imwrite(img2,cell2mat(imds.Files(i)))
% elseif c==3
% imwrite(img1,cell2mat(imds.Files(i)))
%
% end
%
% end
%% Create training and validation sets
[imdsTraining_1, imdsValidation_1] = splitEachLabel(imds, 0.80,'randomize');
%% Use image data augmentation to handle the resizing
inputSize_1 = [64,64,3];
augimdsTraining_1 = augmentedImageDatastore(inputSize_1(1:2),imdsTraining_1,'ColorPreprocessing','rgb2gray');
augimdsValidation_1 = augmentedImageDatastore(inputSize_1(1:2),imdsValidation_1,'ColorPreprocessing','rgb2gray');
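% Note: these augmented datastores already resize every image to 64x64 and
% convert RGB images to a single grayscale channel, matching the [64 64 1]
% imageInputLayer defined below.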
%[data,info] = read(augimdsTraining_1)
% inputSize = [64 64];
% augimdsTraining_1.ReadFcn = @(loc)imresize(imread(loc),inputSize);
% augimdsValidation_1.ReadFcn = @(loc)imresize(imread(loc),inputSize);
Layers = [
    imageInputLayer([64 64 1],"Name","imageinput")
    convolution2dLayer([3 3],8,"Name","conv_1","Padding","same")
    batchNormalizationLayer("Name","bn1-1")
    reluLayer("Name","relu_1")
    maxPooling2dLayer([3 3],"Name","maxpool_1","Padding","same","Stride",[2 2])
    convolution2dLayer([3 3],16,"Name","conv_2","Padding","same")
    batchNormalizationLayer("Name","bn1-2")
    reluLayer("Name","relu_2")
    maxPooling2dLayer([3 3],"Name","maxpool_2","Padding","same","Stride",[2 2])
    convolution2dLayer([3 3],32,"Name","conv_3","Padding","same")
    batchNormalizationLayer("Name","bn1-3")
    reluLayer("Name","relu_3")
    maxPooling2dLayer([3 3],"Name","maxpool_3","Padding","same","Stride",[2 2])
    convolution2dLayer([3 3],64,"Name","conv_4","Padding","same")
    batchNormalizationLayer("Name","bn1-4")
    reluLayer("Name","relu_4")
    maxPooling2dLayer([3 3],"Name","maxpool_4","Padding","same","Stride",[2 2])
    dropoutLayer(0.5,"Name","dropfinal")
    fullyConnectedLayer(2,"Name","fc_1")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];
options = trainingOptions('rmsprop', ...
    'MaxEpochs',50, ...
    'ValidationData',augimdsValidation_1, ...
    'ValidationFrequency',2, ...
    'InitialLearnRate',1e-4, ...
    'Verbose',false, ...
    'MiniBatchSize',128, ...
    'Plots','training-progress');
%% Train network
%baselineCNN = trainNetwork(augimdsTraining_1,Layers,options);
baselineCNN = trainNetwork(augimdsTraining_1,Layers,options);
predictedLabels_1 = classify(baselineCNN,augimdsValidation_1);
valLabels_1 = imdsValidation_1.Labels;
% Accuracy on the validation set
baselineCNNAccuracy_1 = sum(predictedLabels_1 == valLabels_1)/numel(valLabels_1);
[m,order] = confusionmat(valLabels_1,predictedLabels_1);
figure
plotconfusion(valLabels_1,predictedLabels_1)
figure
cm = confusionchart(valLabels_1,predictedLabels_1);
Array = m;
size_mat = size(Array);
size_mat = mean(size_mat);
TN = m(1,1)
FP = m(1,2)
FN = m(2,1)
TP = m(2,2)
sum_mat = sum(sum(m));
Accuracy = (TP+TN)/sum_mat
Specificity = TN/(TN+FP)
Precision = TP/(FP+TP)
Recall = TP/(FN+TP)
F1_score = 2 * (( Precision * Recall) / (Precision + Recall))
%% Evaluate on a second random split of the data
[XTest,YTest]=splitEachLabel(imds, 0.2,'randomize');
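% Note: with a 0.2 proportion, XTest receives 20% of the images and YTest the
% remaining 80%; the classification below runs on YTest.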
inputSize_1 = [64,64,1];
augimdsXTest = augmentedImageDatastore(inputSize_1(1:2),XTest,'ColorPreprocessing','rgb2gray');
augimdsYTest = augmentedImageDatastore(inputSize_1(1:2),YTest,'ColorPreprocessing','rgb2gray');
YPredicted = classify(baselineCNN,augimdsYTest);
Test= YTest.Labels
[m,order] = confusionmat( Test,YPredicted);
figure
plotconfusion( Test,YPredicted)
figure
Array = m;
size_mat = size(Array);
size_mat = mean(size_mat);
TN = m(1,1)
FP = m(1,2)
FN = m(2,1)
TP = m(2,2)
sum_mat = sum(sum(m));
Accuracy = (TP+TN)/sum_mat
Specificity = TN/(TN+FP)
Precision = TP/(FP+TP)
Recall = TP/(FN+TP)
F1_score = 2 * (( Precision * Recall) / (Precision + Recall))
%% Try to classify something else
img = readimage(imds,100);
actualLabel = imds.Labels(100);
%img1= rgb2gray(imresize(img, [64 64]));
img1= imresize(img, [64 64]);
predictedLabel = classify(baselineCNN,img1);
imshow(img);
title(['Predicted: ' char(predictedLabel) ', Actual: ' char(actualLabel)])
Error using DAGNetwork/calculatePredict>predictBatch (line 151)
Incorrect input size. The input images must have a size of [64 64 1].
Error in DAGNetwork/calculatePredict (line 17)
Y = predictBatch( ...
Error in DAGNetwork/classify (line 134)
scores = this.calculatePredict( ...
Error in SeriesNetwork/classify (line 502)
[labels, scores] = this.UnderlyingDAGNetwork.classify(X, varargin{:});
Error in update22 (line 141)
predictedLabel = classify(baselineCNN,img1);

Answers (1)

Walter Roberson on 28 Oct 2021
Incorrect input size. The input images must have a size of [64 64 1].
That is telling you that your input images are some size other than 64 x 64 grayscale.
You are already using an augmented datastore, so you should add options to the augmentedImageDatastore() calls to resize the images to 64 x 64.
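A minimal sketch of that fix for the single-image test at the end of the posted code, assuming the network keeps the [64 64 1] input layer defined above (the rgb2gray step is the line the posted code had commented out):

img = readimage(imds,100);
actualLabel = imds.Labels(100);
if size(img,3) == 3
    img = rgb2gray(img);          % collapse RGB to one channel so the input is [64 64 1]
end
img1 = imresize(img,[64 64]);     % match the imageInputLayer size
predictedLabel = classify(baselineCNN,img1);
imshow(img)
title(['Predicted: ' char(predictedLabel) ', Actual: ' char(actualLabel)])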
