How to remove the background/background noise from the images that comprise my video?
Show older comments
Hi,
I have a video in which a cell moves along a channel. However, the background (the channel) is nearly as bright as the cell, so when I convert the video into frame images and turn them into grayscale, I cannot isolate the cell from its background very clearly, though I can faintly see it move against the background.
I tried frame subtraction: after subtracting the first frame (which does not contain the cell) from the remaining frames, I still get a lot of background noise. Thus, when I try locating the cell by identifying the maximum-brightness spot, I get the locations of several other points that are brighter than the cell.
Can you suggest what I can do?
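For reference, the frame-subtraction workflow described in the question can be sketched as follows. The file name and filter size are placeholders, and older MATLAB releases use read() instead of readFrame():

```matlab
% Sketch: subtract a cell-free reference frame, then find the brightest
% remaining spot. File name and filter size are assumptions to adapt.
v = VideoReader('cell_video.avi');          % hypothetical file name
bg = im2double(readFrame(v));               % first frame: channel only, no cell
if size(bg, 3) == 3, bg = rgb2gray(bg); end

while hasFrame(v)
    frame = im2double(readFrame(v));
    if size(frame, 3) == 3, frame = rgb2gray(frame); end
    d = abs(frame - bg);                    % remove the static background
    d = medfilt2(d, [5 5]);                 % suppress residual speckle noise
    [maxVal, idx] = max(d(:));              % brightest surviving spot
    [row, col] = ind2sub(size(d), idx);
    fprintf('Candidate cell at (%d, %d), intensity %.3f\n', row, col, maxVal);
end
```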
Accepted Answer
More Answers (3)
Image Analyst
2 Nov 2011
0 votes
I'd suggest you post a few frames of your video somewhere so people can make informed suggestions.
19 Comments
Yagnaseni Roy
3 Nov 2011
Walter Roberson
3 Nov 2011
http://www.mathworks.com/matlabcentral/answers/7924-where-can-i-upload-images-and-files-for-use-on-matlab-answers
Yagnaseni Roy
3 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
4 Nov 2011
Edited: DGM
13 Feb 2023
Image Analyst
4 Nov 2011
I'm not sure what you want to keep. Do you want the blue stuff to be uniform and the lines to "pop out" and be really visible?
Yagnaseni Roy
4 Nov 2011
Yagnaseni Roy
4 Nov 2011
Yagnaseni Roy
5 Nov 2011
Image Analyst
7 Nov 2011
Around 1200, 250 on which image? And do you have a grayscale version of these images rather than this pseudocolored image? It looks like you used imagesc(), which applies a strange colormap by default. Can you just save the image without the figure axes, tick marks, colormap, etc.?
Yagnaseni Roy
7 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
8 Nov 2011
Image Analyst
8 Nov 2011
Sorry - I still don't know which one you're trying to find. Can you annotate an image with an arrow pointing to the blob you want and tell me what's different about it from all the other blobs or dots, other than its location? Or is it the same, and the only way you know it's there is that it's the only thing that moves from frame to frame?
Yagnaseni Roy
9 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
9 Nov 2011
Image Analyst
11 Nov 2011
Why can't you just point to it with ginput? If you want, you can look in an area around the point to try to home in on it. It might be tedious for 480 frames, but at least you'll get it done. You'd have had it done by now if you'd done that. I can't really do much more without the video, and I don't really have time to do free private consulting for you, since this looks like it will take more than 10 minutes or so.
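The manual approach suggested here could be sketched like this, assuming frames is a hypothetical H-by-W-by-N grayscale image stack:

```matlab
% Click once on the cell in each frame; ginput returns the (x, y) you pick.
nFrames = 480;                              % per the thread
pos = zeros(nFrames, 2);                    % clicked (x, y) per frame
for k = 1:nFrames
    imshow(frames(:, :, k), []);            % frames: assumed image stack
    title(sprintf('Frame %d of %d: click on the cell', k, nFrames));
    [x, y] = ginput(1);                     % wait for one mouse click
    pos(k, :) = [x, y];
end
```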
Yagnaseni Roy
12 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
12 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
12 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
12 Nov 2011
Bjorn Gustavsson
3 Nov 2011
0 votes
In addition to medfilt2, you can also play around with wiener2 (Lee's sigma filter). Then there are a bunch of filter functions on the File Exchange. Two that I'd suggest you try are bilateral and SUSAN filters; there are a couple of each. There are also a few nonlinear diffusion filters - they are really fancy and powerful, but might be on the slow side when it comes to filtering a large number of frames. These two types of "more advanced" filters are my favourites outside of medfilt2 and wiener2, but tastes may vary.
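A quick way to compare the two built-in filters mentioned above on a single grayscale frame (the file name is an assumption, and montage with a cell array needs a reasonably recent release):

```matlab
I = im2double(imread('frame001.png'));      % hypothetical frame file
Imed = medfilt2(I, [5 5]);                  % median filter: removes speckle
Iwie = wiener2(I, [5 5]);                   % adaptive Wiener (sigma-type) smoothing
montage({I, Imed, Iwie});                   % side-by-side comparison
```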
Another point you might have to take into account (I couldn't see your images, so I'm really shooting in the dark here) is that you might have photo-response non-uniformity (PRNU) - pixel-to-pixel variation in sensitivity. You might be able to correct for that by calculating the average of the ratio medfilt2(frame,[5,5])./frame over all frames. That gives you a per-pixel correction factor for PRNU, which might make the filtering easier.
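That averaged-ratio correction could be sketched like this, assuming frames is an H-by-W-by-N double stack (the implicit expansion in the last line needs R2016b or later):

```matlab
[h, w, N] = size(frames);                   % frames: assumed image stack
gain = zeros(h, w);
for k = 1:N
    f = frames(:, :, k);
    gain = gain + medfilt2(f, [5 5]) ./ max(f, eps);  % avoid divide-by-zero
end
gain = gain / N;                            % average per-pixel correction factor
corrected = frames .* gain;                 % apply the same gain to every frame
```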
Then you might have a few bright or dark spots that eat up most of your grayscale range. You can overcome that by setting the intensity limits manually or by doing some histogram clipping automatically.
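The automatic histogram clipping could look like this; the clip fractions are a guess to tune:

```matlab
lims = stretchlim(I, [0.01 0.99]);          % saturate the darkest/brightest 1%
I2 = imadjust(I, lims, []);                 % stretch the rest to full range
imshowpair(I, I2, 'montage');               % before/after comparison
```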
I'm just tossing out a few ideas that I know can be useful; hopefully some of them help.
3 Comments
Yagnaseni Roy
4 Nov 2011
Yagnaseni Roy
4 Nov 2011
Yagnaseni Roy
4 Nov 2011
Edited: DGM
13 Feb 2023
i Venky
4 Nov 2011
0 votes
Use a top-hat or bottom-hat filter. It is used to reduce uneven illumination.
5 Comments
Yagnaseni Roy
4 Nov 2011
Edited: DGM
13 Feb 2023
Yagnaseni Roy
4 Nov 2011
I think you got me wrong. You performed the top-hat after you had processed the frames. You have to use imtophat before you process each frame. Sometimes the background intensity changes at different points because it doesn't have a constant value throughout. So before you process the frames (say, frame subtraction), try applying imtophat to the frames. It should give you a uniform intensity for the background. Now if you do frame subtraction, the backgrounds of the two frames will have the same intensity and will cleanly become zero.
I got the same problem once, and I did a top-hat and it worked. I am not sure if it would work for you; anyway, give it a try.
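A sketch of the top-hat flattening before frame subtraction, as described; frame1/frame2 and the disk radius are assumptions, and the structuring element should be larger than the cell but smaller than the illumination gradient:

```matlab
se = strel('disk', 15);                     % radius: tune to your cell size
flat1 = imtophat(frame1, se);               % flatten background of reference
flat2 = imtophat(frame2, se);               % flatten background of current frame
d = imabsdiff(flat1, flat2);                % backgrounds now cancel cleanly
```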
Yagnaseni Roy
7 Nov 2011
Moved: DGM
13 Feb 2023
Yagnaseni Roy
11 Nov 2011
Moved: DGM
13 Feb 2023