remove boundary contours from the image
Hi, I have attached an image, and I want to delete everything outside the middle circle.
Working from its binary data, I tried a few implementations that replace some of the 255 values with 0.
I would like to know if there is a specific way of removing particular contours.
Thank you in advance for your time.
Best regards,
MB

Answers (1)
Image Analyst
19 Jan 2020
Do you know the diameter of the inner circle? If so, just use the FAQ to create a circle mask, and mask it away:
mask(circleMask) = false;
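For reference, a minimal sketch of building such a circle mask with meshgrid(); the center and radius below are placeholder values, not from the thread, and would be replaced with the actual geometry of the inner circle:

```matlab
% Build a logical circle mask.  cx, cy, and radius are assumed values;
% substitute the real center and radius of your inner circle.
[rows, columns] = size(mask);
[x, y] = meshgrid(1 : columns, 1 : rows);
cx = columns / 2;   % Assumed center column.
cy = rows / 2;      % Assumed center row.
radius = 100;       % Assumed inner-circle radius in pixels.
circleMask = (x - cx).^2 + (y - cy).^2 <= radius^2;
% To keep only what is inside the circle, erase everything outside it:
mask(~circleMask) = false;
```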
Can I see the original image that you made this edge image from? It might be possible to get a mask using thresholding instead of edge detection. Edge detection is usually NOT the first thing you want to do to an image. For most images, thresholding is the way to go; phase contrast and DIC microscopy images are one exception where you might want to do edge detection.
35 Comments
Mammadbaghir Baghirzade
19 Jan 2020
Image Analyst
19 Jan 2020
Edited: Image Analyst
19 Jan 2020
OK, that might be a candidate for edge detection, since the gray levels inside the edge are the same as outside. How about if you threshold to get the dark surround and make a mask of that,
mask = grayImage < 50; % or whatever
and get the mode of the image inside that?
modeValue = mode(grayImage(~mask));
Then set the image in the corners to that gray level to make them be the same intensity as the outer circle.
grayImage(mask) = modeValue;
% Now do Canny edge detection.
What was the code you used to get the edge image? This will eliminate, or greatly reduce, the edges due to the outer circle, and you can then concentrate on just the inner circle, maybe using bwconvhull() or something.
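Put together, the steps above might look like this (the threshold of 50 is the guess from the comment, and the image filename is a stand-in):

```matlab
grayImage = imread('yourImage.png');   % Stand-in filename.
mask = grayImage < 50;                 % Mask of the dark surround (threshold is a guess).
modeValue = mode(grayImage(~mask));    % Most common gray level elsewhere in the image.
grayImage(mask) = modeValue;           % Flatten the dark corners to that level.
edgeImage = edge(grayImage, 'Canny');  % Now do Canny edge detection.
```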
Mammadbaghir Baghirzade
19 Jan 2020
Image Analyst
19 Jan 2020
Have you tried imfindcircles()?
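A sketch of how imfindcircles() might be applied here; the radius range and polarity are assumptions to adapt to the actual image:

```matlab
grayImage = imread('yourImage.png');           % Stand-in filename.
radiusRange = [20 100];                        % Assumed range of radii, in pixels.
[centers, radii] = imfindcircles(grayImage, radiusRange, ...
    'ObjectPolarity', 'bright');               % Or 'dark', depending on the circle.
imshow(grayImage);
viscircles(centers, radii, 'Color', 'r');      % Overlay the detected circles.
```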
Mammadbaghir Baghirzade
19 Jan 2020
Image Analyst
20 Jan 2020
You just need to know a range of circle radii, not the exact radius. It would also help if you masked out the outer circle first. Do you know anything about the range? Can the inner circle be anywhere from 0 to 100% of the outer circle, or is it in some range, like 5% to 75% of the outer one?
Mammadbaghir Baghirzade
20 Jan 2020
Image Analyst
20 Jan 2020
How about if you just scan the image across columns and delete the first and last white pixel in each column? That would delete the outer one.
[rows, columns] = size(mask);
for col = 1 : columns
    topRow = find(mask(:, col), 1, 'first');
    if ~isempty(topRow)
        mask(topRow, col) = false; % Erase topmost white pixel.
        bottomRow = find(mask(:, col), 1, 'last');
        mask(bottomRow, col) = false; % Erase bottommost white pixel.
    end
end
Then call bwareaopen() to get rid of other remaining small clutter blobs.
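For example (the 50-pixel minimum is an arbitrary cutoff; tune it to the size of the clutter in your image):

```matlab
mask = bwareaopen(mask, 50);  % Remove connected blobs smaller than 50 pixels.
```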
Mammadbaghir Baghirzade
20 Jan 2020
Image Analyst
20 Jan 2020
Very strange. Are you sure you were using the binary/logical edge image from the Canny process? And are you sure it's logical, not uint8 (gray scale)?
Mammadbaghir Baghirzade
20 Jan 2020
Mammadbaghir Baghirzade
21 Jan 2020
Edited: Mammadbaghir Baghirzade
21 Jan 2020
Mammadbaghir Baghirzade
21 Jan 2020
Image Analyst
21 Jan 2020
Use imabsdiff()
Mammadbaghir Baghirzade
21 Jan 2020
Image Analyst
21 Jan 2020
No, you don't use edge detection anymore. You threshold.
mask = grayImage > someValue; % Binarize the image.
% Take 2 largest blobs
mask = bwareafilt(mask, 2); % Or use bwareaopen().
% Get the convex hull
chImage = bwconvhull(mask, 'union');
areaInPixels = nnz(mask)
Mammadbaghir Baghirzade
21 Jan 2020
Mammadbaghir Baghirzade
21 Jan 2020
Image Analyst
21 Jan 2020
To that subtraction image:
diffImage = imabsdiff(originalImage, referenceEmptyImage);
mask = diffImage > someValue; % Binarize the image.
% Take 2 largest blobs
mask = bwareafilt(mask, 2); % Or use bwareaopen().
% Get the convex hull
chImage = bwconvhull(mask, 'union');
areaInPixels = nnz(mask)
Mammadbaghir Baghirzade
21 Jan 2020
Image Analyst
21 Jan 2020
Or you can just guess values of 1, 2, 3, 4, 5, etc. until you find one that looks good.
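As an alternative to guessing, Otsu's method via graythresh() can pick a threshold automatically; a sketch, assuming diffImage is a uint8 gray-scale image:

```matlab
level = graythresh(diffImage);    % Otsu threshold, normalized to [0, 1].
mask = diffImage > 255 * level;   % Scale back to the uint8 range and binarize.
```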
Mammadbaghir Baghirzade
21 Jan 2020
Image Analyst
22 Jan 2020
It looks like you are somehow still dealing with an RGB image. You need to make sure diffImage is a 2-D gray scale image, not an RGB image. If you do that, mask will be a 2-D binary/logical image and it should work. Right after you call imread(), just have this code:
% Get the dimensions of the image.
% numberOfColorChannels should be = 1 for a gray scale image, and 3 for an RGB color image.
[rows, columns, numberOfColorChannels] = size(grayImage)
if numberOfColorChannels > 1
    % It's not really gray scale like we expected - it's color.
    % Use weighted sum of ALL channels to create a gray scale image.
    % grayImage = rgb2gray(rgbImage);
    % ALTERNATE METHOD: Convert it to gray scale by taking only the green channel,
    % which in a typical snapshot will be the least noisy channel.
    grayImage = grayImage(:, :, 2); % Take green channel.
end
% Now it's gray scale with range of 0 to 255.
Mammadbaghir Baghirzade
22 Jan 2020
Mammadbaghir Baghirzade
23 Jan 2020
Image Analyst
23 Jan 2020
That gives you the number of white pixels, which can be considered one way to measure the area. And maybe that's fine for discriminating between two slightly different images.
Mammadbaghir Baghirzade
23 Jan 2020
Image Analyst
23 Jan 2020
You could, but most people don't. Most everybody understands that when you give a distance, the units are pixels (unless you're spatially calibrating to real-world units such as centimeters, as in my attached demo code), and that if you're talking about area, the units are also pixels, not pixels squared. It's just understood; people know what you mean from the context (length or area). I'm sure they'd also figure it out if you said pixels squared, but that would be unconventional terminology.
Mammadbaghir Baghirzade
23 Jan 2020
Image Analyst
23 Jan 2020
See attached demo.
Mammadbaghir Baghirzade
23 Jan 2020
Image Analyst
23 Jan 2020
Basically you need to define a spatial calibration factor, like 3 cm = 450 pixels or whatever. So you'd make a factor like
spatialCalibration = 3/450; % Factor to convert pixels to cm.
so if you multiply pixels * (cm/pixel) you get cm, because the pixels cancel out.
distanceInCm = distanceInPixels * spatialCalibration;
If you have an area, then you need to multiply by that factor squared
areaInSquareCM = areaInPixels * spatialCalibration^2;
to get the area in square cm.
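A worked example of the conversion (the calibration of 3 cm = 450 pixels is from the comment above; the pixel measurements are made-up illustrations):

```matlab
spatialCalibration = 3 / 450;                          % cm per pixel.
distanceInPixels = 900;                                % Hypothetical measured distance.
distanceInCm = distanceInPixels * spatialCalibration   % = 6 cm.
areaInPixels = 45000;                                  % Hypothetical measured area.
areaInSquareCM = areaInPixels * spatialCalibration^2   % = 2 square cm.
```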
Mammadbaghir Baghirzade
24 Jan 2020
Image Analyst
24 Jan 2020
Yes. The default spacing, if nothing is specified otherwise, is 1 pixel. It interpolates it because chances are that the locations will not fall exactly on pixel centers (unless the line is perfectly along a row or column).
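Assuming this exchange concerned improfile(), a minimal usage sketch; the image name and line endpoints are placeholders:

```matlab
grayImage = imread('yourImage.png');         % Stand-in filename.
x = [20 200];                                % Endpoint x coordinates (placeholders).
y = [30 180];                                % Endpoint y coordinates (placeholders).
profileValues = improfile(grayImage, x, y);  % Gray levels interpolated at ~1-pixel spacing.
plot(profileValues);                         % Intensity profile along the line.
```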
Mammadbaghir Baghirzade
26 Jan 2020