
Error when training Fast R-CNN network with roi input

Karl Mueller on 10 May 2023
Commented: Karl Mueller on 11 May 2023
When I begin to train a Fast R-CNN network with the trainingData input datastore, I receive the error shown below.
My network detects only one class; all of my ~6000 training images contain that class, and I have marked each one with a bounding box.
My code is:
clear
% Build the layer graph (lgraph) for the custom Fast R-CNN network.
create_fast_RCNN_network_with_parameters_simpler_network;
% Load the ground truth table (gTruth) of image file names and RibEye boxes.
load("ribeye_groundtruth_table.mat");
rng(1);
% Split the ground truth into training (65%), validation (25%), and test sets.
shuffledIndices = randperm(height(gTruth));
idx = floor(0.65 * height(gTruth));
trainingIdx = 1:idx;
trainingDataTbl = gTruth(shuffledIndices(trainingIdx),:);
validationIdx = idx+1 : idx + 1 + floor(0.25 * length(shuffledIndices));
validationDataTbl = gTruth(shuffledIndices(validationIdx),:);
testIdx = validationIdx(end)+1 : length(shuffledIndices);
testDataTbl = gTruth(shuffledIndices(testIdx),:);
% Create image and box label datastores for each split, then combine them.
imdsTrain = imageDatastore(trainingDataTbl{:,'imageFilename'});
bldsTrain = boxLabelDatastore(trainingDataTbl(:,'RibEye'));
imdsValidation = imageDatastore(validationDataTbl{:,'imageFilename'});
bldsValidation = boxLabelDatastore(validationDataTbl(:,'RibEye'));
imdsTest = imageDatastore(testDataTbl{:,'imageFilename'});
bldsTest = boxLabelDatastore(testDataTbl(:,'RibEye'));
miniBatchSize = 14;
trainingData = combine(imdsTrain,bldsTrain);
validationData = combine(imdsValidation,bldsValidation);
testData = combine(imdsTest,bldsTest);
% This displays the first training sample correctly!
data = read(trainingData);
I = data{1};
bbox = data{2};
annotatedImage = insertShape(I,'rectangle',bbox);
annotatedImage = imresize(annotatedImage,2);
figure
imshow(annotatedImage)
options = trainingOptions('sgdm', ...
    'MaxEpochs', 10, ...
    'Momentum', 0.9, ...
    'MiniBatchSize', miniBatchSize, ...
    'InitialLearnRate', 1e-3, ...
    'LearnRateDropFactor', 0.1, ...
    'LearnRateDropPeriod', 2, ...
    'L2Regularization', 1e-5, ...
    'CheckpointPath', tempdir, ...
    'ValidationData', validationData, ...
    'Shuffle', 'every-epoch', ...
    'ValidationFrequency', 220, ...
    'Plots', 'training-progress');
[trainedDetector, info] = trainFastRCNNObjectDetector(trainingData, lgraph, options);
% After extracting region proposals from the training datastore, I receive the error output shown below.
Testing trainingData with read yields:
data = read(trainingData)
data =
1×3 cell array
{576×720×3 uint8} {[73 43 486 277]} {[RibEye]}
This seems correct.
I have also tested it with readall and I can see no problems.
But straight after training starts, I receive this error:
*******************************************************************
Training a Fast R-CNN Object Detector for the following object classes:
* RibEye
--> Extracting region proposals from training datastore...done.
Input datastore returned more than one observation per row for network input 2.
iAssertExpectedMiniBatchSize(i, numObservations, expectedBatchSize);
iAssertDataContainsOneObservationPerRow(i, numObservations, inputSize);
[this.DataSize, this.ResponseSize] = iGetDataResponseSizesForMISO(exampleData, ...
nnet.internal.cnn.dispatcher.GeneralDatastoreDispatcher( ...
dispatcher = nnet.internal.cnn.dispatcher.DispatcherFactory.createDispatcherMIMO( ...
trainingDispatcher = iCreateTrainingDataDispatcher(ds, mapping, trainedNet,...
[network, info] = vision.internal.cnn.trainNetwork(...
[detector, ~, info] = fastRCNNObjectDetector.train(trainingData, lgraph, options, executionSettings, params, checkpointSaver);
What could be causing this error?
I was able to train this network with a mini-batch size of 8. However, after I modified some of my convolutional layers, it no longer works with a mini-batch size of 8.

Accepted Answer

LeoAiE on 11 May 2023
The error message suggests that the input datastore is returning more than one observation per row for network input 2. This might be due to incorrect data preparation, or the network architecture not being compatible with the prepared data.
To troubleshoot the issue, I suggest you double-check your data preparation steps and verify that the input data and labels are correctly combined. Also, make sure the modified convolutional layers in the network architecture are compatible with the input data format.
Here are a few steps to check your data preparation:
  1. Verify the dimensions of the images in the imageDatastore are consistent.
  2. Check the bounding box annotations in the boxLabelDatastore and make sure they are in the correct format.
  3. Ensure that the combined datastores (trainingData, validationData, testData) have the correct structure.
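A rough sketch of these checks, reusing the variable names from your posted code (the 576x720x3 size is taken from your camera description, so adjust it if your images differ):
% 1) Image sizes: confirm every training image has the expected dimensions.
imgInfo = cellfun(@imfinfo, imdsTrain.Files, 'UniformOutput', false);
dims    = cell2mat(cellfun(@(s) [s(1).Height s(1).Width], imgInfo, 'UniformOutput', false));
assert(all(dims(:,1) == 576 & dims(:,2) == 720), 'Unexpected image size found.');
% 2) Bounding boxes: each entry should be M-by-4 [x y width height] with positive extents.
boxLabels = readall(bldsTrain);   % column 1 = boxes, column 2 = labels
badBox = cellfun(@(b) isempty(b) || size(b,2) ~= 4 || any(b(:,3:4) <= 0, 'all'), boxLabels(:,1));
fprintf('%d training rows with missing or malformed boxes\n', nnz(badBox));
% 3) Combined datastore: each read should return a 1-by-3 cell {image, boxes, labels}.
reset(trainingData);
row = read(trainingData);
assert(isequal(size(row), [1 3]), 'Unexpected row structure from the combined datastore.');
reset(trainingData);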
If the data preparation steps are correct, it's possible that the modifications in the convolutional layers have changed the way the network expects input data. In this case, you may need to update the input data format or modify the network architecture to match the input data format.
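One quick way to check the architecture side (assuming your layer graph variable is lgraph, as in your trainFastRCNNObjectDetector call) is to open it in the network analyzer and confirm the number and order of the input layers:
% Opens an interactive report listing every layer, the input layers, and any
% disconnected or size-incompatible layers introduced by the convolutional edits.
analyzeNetwork(lgraph)
% A quick command-line view of the layer order; the order of the input layers
% typically determines which column of the combined datastore feeds which network input.
disp({lgraph.Layers.Name}')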
If you still encounter issues, please provide more information about the modifications you made to the convolutional layers and any other changes you made to the network architecture. This will help to identify the root cause of the problem and provide a more accurate solution.
2 Comments
Karl Mueller on 11 May 2023
Thank you for your reply.
What do you mean by
"or the network architecture not being compatible with the prepared data."
I believe my network is not too complicated.
It accepts an image that is 720x576x3 which corresponds to the images produced by my machine vision camera.
This is the first time I have created and trained a Fast RCNN network, and I followed the example on https://au.mathworks.com/help/vision/ug/faster-r-cnn-examples.html#mw_51a5e174-51f2-4da8-892b-16d3ce2278e2
I added the ROI input and roiMaxPooling layers as specified in the example; however, I am not completely aware of the nuances of this.
I believed the input simply needed to be in the format of n x 3, where the three columns are image, bounding box, and class.
I am also not clear why the mini-batch size would have any bearing on this at all.
Karl Mueller on 11 May 2023
A quick update.
I analysed my network and I believe I discovered the problem.
The ROI input layer was in the wrong position: it was situated before the image input layer. It seems the order of the input layers determines how data from the datastore is applied to them.
To fix the problem, I removed the ROI input layer and re-inserted it at the correct position in the network. The problem was indeed "the network architecture not being compatible with the prepared data", as you suggested.
The network is training now.
Thank you for your help; it was much appreciated.
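In case it helps anyone else, here is a rough sketch of the ordering that works for me; the conv/relu backbone and the [14 14] ROI pooling size shown here are placeholders, not my actual network:
% Image input first, then the backbone; with this ordering, column 1 of the
% combined datastore (images) feeds the image input and column 2 (boxes)
% feeds the ROI input.
layers = [
    imageInputLayer([576 720 3], 'Name', 'imageInput', 'Normalization', 'none')
    convolution2dLayer(3, 32, 'Padding', 'same', 'Name', 'conv1')
    reluLayer('Name', 'relu1')];
lgraph = layerGraph(layers);
% ROI input is added after the image branch and routed into ROI max pooling.
lgraph = addLayers(lgraph, roiInputLayer('Name', 'roiInput'));
lgraph = addLayers(lgraph, roiMaxPooling2dLayer([14 14], 'Name', 'roiPool'));
lgraph = connectLayers(lgraph, 'relu1', 'roiPool/in');
lgraph = connectLayers(lgraph, 'roiInput', 'roiPool/roi');
% (Fully connected, softmax/classification, and box regression layers follow,
% as in the MathWorks Fast R-CNN example.)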


