How to remove an unwanted portion from the background?
Otsu's thresholding method is good and easy. For some images it clearly identifies the object of interest, but for other images it leaves some unwanted portion behind. When dealing with more than 1000 images in a batch, applying Otsu's method alone does not give good outputs. How can I improve this algorithm, or is there some other idea where, after applying Otsu's method, we can remove the unwanted portion in a way that works for all images?
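One common pattern is to apply Otsu's threshold and then clean up the mask with morphological post-processing. A minimal sketch, assuming the object of interest is the largest bright blob (the filename and the 200-pixel size cutoff are placeholders, not from this thread):
im = imread('example.png');   % hypothetical input image
if ndims(im) == 3
    im = rgb2gray(im);
end
level = graythresh(im);       % Otsu's method
bw = imbinarize(im, level);
bw = bwareaopen(bw, 200);     % drop small unwanted specks
bw = imfill(bw, 'holes');     % fill holes inside the object
bw = bwareafilt(bw, 1);       % keep only the largest blob
imshow(bw);
The size cutoff and the "largest blob" assumption are exactly the sort of parameters that need tuning per image set, which is why a single global threshold rarely survives a 1000-image batch unchanged.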
0 comments
Answers (5)
Image Analyst
18 October 2021
It often does not work well for images that do not have a nice well separated bimodal histogram. The triangle method works well for skewed histograms, like with a log-normal shape. I'm attaching my implementation.
imbinarize() has an 'adaptive' option that might work well for you.
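A minimal sketch of the 'adaptive' option (the filename and the sensitivity value are assumptions for illustration, not values from this thread):
im = imread('example.png');   % hypothetical grayscale image
bw = imbinarize(im, 'adaptive', 'Sensitivity', 0.5);
% For dark objects on a bright background, flip the polarity:
bw2 = imbinarize(im, 'adaptive', 'ForegroundPolarity', 'dark', ...
                 'Sensitivity', 0.5);
imshowpair(bw, bw2, 'montage');
Adaptive thresholding computes a local threshold per neighborhood, so it tolerates uneven illumination that breaks a single global Otsu threshold.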
7 comments
Image Analyst
18 August 2022
@Zara Khan I don't know. I haven't done it. If none of the paper links I gave you uses that particular method (deep learning), and you don't want to use any of the successful methods that they developed, used, and published, then you're on your own. I'm no further along than you are, and I don't plan on going into gesture research, so don't wait on me.
yanqi liu
27 October 2021
Sir, please check the following code to get some information:
clc; clear; close all;
% Sample images posted earlier in this thread
urls = {'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779463/img16.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779458/img6.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779453/img4.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779448/img2.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779443/img1.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779438/img1%20(2).png'};
for k = 1 : length(urls)
    im = imread(urls{k});
    if ndims(im) == 3
        im = rgb2gray(im);   % work on the grayscale image
    end
    % Stretch the contrast, then threshold adaptively (foreground is dark)
    im2 = imadjust(im, stretchlim(im), []);
    bw = imbinarize(im2, 'adaptive', 'ForegroundPolarity', 'dark', 'Sensitivity', 0.85);
    bw = bwareaopen(bw, 100);                % remove specks under 100 px
    bw = imopen(bw, strel('line', 9, 90));   % erase short vertical features
    bw = imclose(bw, strel('line', 15, 0));  % bridge small horizontal gaps
    bw = imfill(bw, 'holes');
    bw = bwareaopen(bw, 500);                % remove remaining small blobs
    % Keep only the blob whose mean gray level is highest
    [L, num] = bwlabel(bw);
    vs = zeros(1, num);
    for i = 1 : num
        vs(i) = mean(double(im2(L == i)));
    end
    [~, ind] = max(vs);
    bw(L ~= ind) = 0;
    % Bounding box of the surviving blob: [xLeft yTop width height]
    [r, c] = find(bw);
    rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
    figure; imshow(im, []);
    hold on;
    rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
end
6 comments
Image Analyst
20 August 2022
OK, well, that's what happens when people don't comment their code. What find does is locate the non-zero (white) pixels and put their row and column coordinates into the variables r and c.
Then, using min and max, she assumes there is just one blob, so min and max give you the left column, right column, top row, and bottom row. To use the rectangle function you need to pass it a position array in the form [xLeft, yTop, width, height], so those are the 4 expressions she put into this line of code:
rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
Now you can pass that in for the 'Position' argument to the rectangle() function like this:
rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
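The same bounding box can also be obtained with regionprops, which returns it directly in [xLeft, yTop, width, height] form. A short sketch, assuming bw already holds the single-blob mask from the code above:
props = regionprops(bw, 'BoundingBox');
rect = props(1).BoundingBox;   % [xLeft yTop width height]
rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2);
This avoids the implicit one-blob assumption: regionprops returns one BoundingBox per connected component, so multiple blobs produce multiple rectangles instead of one box spanning them all.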
yanqi liu
27 October 2021
Sir, using only basic methods may not be the best choice, so you may want to consider a deep learning method such as U-Net.
clc; clear; close all;
% Sample images posted earlier in this thread
urls = {'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779463/img16.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779458/img6.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779453/img4.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779448/img2.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779443/img1.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779438/img1%20(2).png'};
for k = 1 : length(urls)
    im = imread(urls{k});
    if ndims(im) == 3
        im = rgb2gray(im);   % work on the grayscale image
    end
    % Stretch the contrast, then threshold adaptively (foreground is dark)
    im2 = imadjust(im, stretchlim(im), []);
    bw = imbinarize(im2, 'adaptive', 'ForegroundPolarity', 'dark', 'Sensitivity', 0.85);
    bw = bwareaopen(bw, 100);                % remove specks under 100 px
    bw = imopen(bw, strel('line', 9, 90));   % erase short vertical features
    bw = imclose(bw, strel('line', 12, 0));  % bridge small horizontal gaps
    bw = imfill(bw, 'holes');
    bw = bwareaopen(bw, 500);                % remove remaining small blobs
    % Keep only the blob whose mean gray level is highest
    [L, num] = bwlabel(bw);
    vs = zeros(1, num);
    for i = 1 : num
        vs(i) = mean(double(im2(L == i)));
    end
    [~, ind] = max(vs);
    bw(L ~= ind) = 0;
    % Smooth the surviving blob and use it as a mask
    bw = logical(bw);
    bw = imfill(bw, 'holes');
    bw = imclose(bw, strel('disk', 15));
    % Bounding box (unused here; the rectangle overlay is commented out)
    [r, c] = find(bw);
    rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
    % Black out everything outside the mask and display the result
    im2 = im;
    im2(~bw) = 0;
    figure; imshow(im2, []);
    %hold on; rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
end
16 comments
Image Analyst
23 August 2022
bw = imopen(bw, strel('line', 9, 90));
bw = imclose(bw, strel('line', 12, 0));
It would have been good if she had commented all the lines of code, especially since the algorithm is long and fairly complicated. She's using line structuring elements because she wanted to filter preferentially along a certain direction rather than isotropically in all directions. The imopen with a vertical line of length 9 will get rid of foreground features that are shorter than 9 pixels in the vertical direction, such as thin horizontal tendrils. The imclose with a horizontal line will fill in gaps along horizontal edges of the blob.
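The directional behavior can be seen on a toy mask. A small sketch, using the same structuring-element sizes as the answer above (the mask itself is made up for illustration):
% Toy mask: a tall vertical bar plus a short speck
bw = false(40, 40);
bw(5:30, 10) = true;    % 26 px tall vertical bar -- survives the opening
bw(5:8, 25) = true;     % 4 px tall speck -- removed by the opening
bwOpened = imopen(bw, strel('line', 9, 90));        % keeps features >= 9 px tall
bwClosed = imclose(bwOpened, strel('line', 12, 0)); % bridges horizontal gaps < 12 px
imshowpair(bw, bwClosed, 'montage');
Swapping the angles (0 vs. 90) would instead preserve horizontal structures and bridge vertical gaps, which is why the direction must match the orientation of the features you want to keep.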
Zara Khan
27 August 2022
1 comment
Image Analyst
27 August 2022
I am not a gesture recognition researcher. But I know that any successful algorithm will not just be one page of code. You need to use a robust algorithm already developed by specialists in the area who have published their algorithms. Go here to see a list of them: