Poor results for neural-network based image segmentation

Greg on 18 Oct 2023
Edited: Greg on 19 Oct 2023
I am trying to segment images of animal embryos, like this one:
I would like to extract the entire oval-shaped embryo without getting any of the background or the appendages that stick off of the embryo, like in this hand-drawn mask:
I have around 350 training images of embryos that have been hand-segmented like this one, and I have trained a small convolutional neural network to try to segment these images automatically. The network has this structure:
opts = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',5, ...
    'MiniBatchSize',4);

numFilters = 64;
filterSize = 3;
numClasses = 2;

layers = [
    imageInputLayer([500 1000 1])
    convolution2dLayer(filterSize,numFilters,'Padding','same')
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(filterSize,numFilters,'Padding','same')
    reluLayer()
    transposedConv2dLayer(4,numFilters,'Stride',2,'Cropping','same')
    convolution2dLayer(1,numClasses)
    softmaxLayer()
    pixelClassificationLayer()
    ];
Training the network with the settings above leads to an accuracy of around 94%, but when I actually look at its performance on the training images, it is not doing a good job of removing the appendages:
This problem persists for most of the images in the training set, and I haven't even tested it on validation data because it's performing so poorly on the training set. I can't manually chop off the appendages via image erosion, because the angle and length of the appendages vary, so I would need to set the erosion parameters by hand for each image, and I have hundreds of thousands of images.
What can I do to improve the performance of the pixel classification network?
Thank you!

Accepted Answer

Matt J on 18 Oct 2023
"I can't manually chop off the appendages via image erosion"
bwlalphashape from this FEX download may help:
A=imread('https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514819/image.png');
A=im2gray(A);
B=imfill(A>135,'holes');    % threshold the grayscale image and fill interior holes
mask=~bwlalphaclose(~B,45); % alpha-shape closing of the background trims off the thin appendages
imshow(imfuse(A,mask,'falsecolor'))
5 Comments
Matt J on 19 Oct 2023
Edited: Matt J on 19 Oct 2023
Here's a smoother version:
Images=["https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514934/image.png",
"https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514939/image.png",
"https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514944/image.png",
"https://www.mathworks.com/matlabcentral/answers/uploaded_files/1514949/image.png"];
for i=1:numel(Images)
    figure
    getMask(Images{i});
end

function mask=getMask(Image)
    A=imread(Image);
    A=im2gray(A);
    [m,n]=size(A);
    B=bwareafilt(imfill(A<220,'holes'),1);  % threshold, fill holes, keep the largest blob
    C=bwareafilt(~bwlalphaclose(~B,40),1);  % alpha-shape close the background to trim appendages
    b=bwboundaries(C); b=fliplr(b{1});      % boundary of the blob as [x y] coordinates
    b=sgolayfilt(b,3,231,[],1);             % Savitzky-Golay smoothing of the boundary curve
    mask=poly2mask(b(:,1), b(:,2),m,n);
    mask=imerode(mask,strel('disk',21));    % erode then dilate to round off remaining bumps
    mask=imdilate(mask,strel('disk',22));
    imshow(labeloverlay(A,mask,'Transparency',0.85,'Color','spring'));
end
Greg on 19 Oct 2023
Edited: Greg on 19 Oct 2023
That looks amazing! Thank you, that solves my problem.


More Answers (1)

Matt J on 18 Oct 2023
Maybe increase the fitting capacity of the network. Add another encoder/decoder layer and/or increase the filter size?
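For instance, a sketch only (hypothetical layer sizes, not tested on your data): adding a second pooling/upsampling stage and widening the middle layers roughly quadruples the receptive field, which helps the network see enough context to distinguish the embryo body from the appendages.

```matlab
% Sketch of a deeper encoder/decoder (illustrative, untested on the
% poster's data): two downsampling and two upsampling stages instead
% of one, with more filters in the middle of the network.
numFilters = 64;
filterSize = 3;
numClasses = 2;

layers = [
    imageInputLayer([500 1000 1])
    convolution2dLayer(filterSize,numFilters,'Padding','same')
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)                              % 1st downsampling
    convolution2dLayer(filterSize,2*numFilters,'Padding','same')
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)                              % 2nd downsampling
    convolution2dLayer(filterSize,2*numFilters,'Padding','same')
    reluLayer()
    transposedConv2dLayer(4,2*numFilters,'Stride',2,'Cropping','same') % 1st upsampling
    reluLayer()
    transposedConv2dLayer(4,numFilters,'Stride',2,'Cropping','same')   % 2nd upsampling
    reluLayer()
    convolution2dLayer(1,numClasses)
    softmaxLayer()
    pixelClassificationLayer()
    ];
```

Beyond capacity, a purpose-built architecture such as unetLayers from the Computer Vision Toolbox may also be worth trying.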
