My goal is to train a pretrained Mask R-CNN on the TACO trash detection image dataset. How can I match the annotation information with the corresponding images?

The data can be downloaded from Kaggle: https://www.kaggle.com/kneroma/tacotrashdataset
The code I was following can be found at this link on GitHub: https://github.com/matlab-deep-learning/mask-rcnn
I want to train a pretrained Mask R-CNN network on a trash dataset for detecting trash in the wild. So far I have this:
% Image data: one imageDatastore over the "Training data" folder
SetDir = fullfile('Training data');
Imds = imageDatastore(SetDir,'IncludeSubfolders',true,'LabelSource','foldernames');
% Read the COCO-style annotation file and save it as a MAT file
annotationFile = jsondecode(fileread("annotations.json"));
save('Annotations.mat',"annotationFile")
%%
% Classes to train on and the network input size
trainClassNames = {'Bottle','Can','Bottle cap'};
numClasses = length(trainClassNames);
imageSizeTrain = [800 800 3];
% Add the MATLAB COCO API to the path
cocoAPIDir = fullfile("cocoapi-master","MatlabAPI");
addpath(cocoAPIDir);
% Folder that will hold the per-image annotation MAT files
unpackAnnotationDir = fullfile(SetDir,"annotations_unpacked","matFiles");
if ~exist(unpackAnnotationDir,'dir')
    mkdir(unpackAnnotationDir)
end
3 Comments
Clive Fox on 23 Feb 2024
And if I run the Live Example from GitHub I just get the error "Unable to resolve the name helper.unpackAnnotations".
Clive Fox on 23 Feb 2024
I finally found it here, but honestly this needs a direct link added to the example documentation, as it is pretty obscurely hidden away.
Now to see if it actually works!
https://github.com/matlab-deep-learning/mask-rcnn/blob/main/src/%2Bhelper/unpackAnnotations.m
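For anyone else hitting the same error: unpackAnnotations.m sits inside a MATLAB package folder (src/+helper), so helper.unpackAnnotations only resolves once the src directory itself is on the path. A minimal sketch, assuming the repo has been cloned into the current folder (the folder name "mask-rcnn" is just the default clone name):
% Clone the repo (or download the zip) so the +helper package folder exists:
%   git clone https://github.com/matlab-deep-learning/mask-rcnn.git
% Add the src folder (the parent of +helper) to the path; do not add the
% +helper folder itself.
addpath(fullfile(pwd,"mask-rcnn","src"));
which helper.unpackAnnotations   % should now point at src/+helper/unpackAnnotations.m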


Answers (1)

T.Nikhil kumar on 27 Sep 2023
Hello Eoghan,
I understand that you are trying to train a pretrained Mask R-CNN network on the TACO trash dataset, following an example, and want to know how to match the annotations of each bounding box with the corresponding image.
As per your code, I can see that you have created an “imageDatastore” to store your image data from the “Training data” folder and have stored the annotation data as a struct in the “Annotations.mat” file.
I would like to point out that, according to the “MaskRCNNTrainingExample.mlx” file in the GitHub code, the images are not stored in an “imageDatastore” object, but the images folder is indirectly used to store the image data in a “fileDatastore” object. Also, the annotations data is read using the functions of a “cocoAPI” class available in the MATLAB File Exchange platform.
I would suggest specifying your image folder path in the “trainImgFolder” variable and the “.json” annotations file in the ”annotationfile” variable, then proceeding with the rest of the code as it is.
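For instance, a minimal sketch of that step, assuming the TACO images sit in the "Training data" folder and that helper.unpackAnnotations takes the class names, annotation file, image folder and output folder in that order (the exact signature is an assumption here; check the function's help text in the repo):
trainImgFolder = fullfile("Training data");   % folder containing the TACO images
annotationfile = "annotations.json";          % COCO-style annotation file from Kaggle
% Convert the COCO annotations into one MAT file per training image.
helper.unpackAnnotations(trainClassNames,annotationfile,trainImgFolder,unpackAnnotationDir);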
The “helper” folder in the GitHub code contains functions that together match the annotation data to the corresponding image.
You can refer to the “unpackAnnotations.m” function to understand how the COCO annotations are converted to “.mat” files, the “cocoAnnotationsFromID_preprocess.m” function to see how annotation data is extracted for a given image ID, and the “cocoAnnotationMATReader.m” function to see how the annotation data and image data are combined.
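As a rough sketch of how those pieces could fit together, a “fileDatastore” over the unpacked MAT files can use “cocoAnnotationMATReader.m” as its read function so that each read returns an image paired with its annotation data (the exact arguments and return format of the helper are assumptions here; check the function in the repo):
% Datastore over the per-image annotation MAT files created by unpackAnnotations.
% The custom read function loads each MAT file plus the matching image from
% trainImgFolder and returns them together (e.g. image, boxes, labels, masks).
annDS = fileDatastore(unpackAnnotationDir, ...
    'ReadFcn',@(f)helper.cocoAnnotationMATReader(f,trainImgFolder));
sample = read(annDS);   % inspect one training sample to verify the pairing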
I hope this helps!
