Image remapping using pixel values
How can I write code to map the pixel values other than black to new coordinates in the following image?
2 Answers
Udit06
8 Sep 2023
Hi Imran,
I understand that you want to map the non-black pixels of your image to new coordinates. You can follow these steps to do so:
- Find non-zero pixel coordinates in the image.
- Define the transformation function that takes the row and column indices as inputs and returns the new row and column indices. Apply this transformation function to each non-zero pixel coordinate.
- Initialize a new image with the same size as the original image and set all pixels to black (zero values). Assign the non-zero pixel values from the original image to the new coordinates in the new image.
Here is a code implementation of the steps above:
% Read the image from a given filePath
image = imread(filePath);

% Find the coordinates of all non-zero (non-black) pixels
[row, col] = find(image ~= 0);

% Map the pixel coordinates to new coordinates,
% where t is the transformation function
[new_row, new_col] = t(row, col);

% Create a new image and copy the non-zero pixel values across
new_image = zeros(size(image));
num_nonzero_pixels = numel(row);
for i = 1:num_nonzero_pixels
    new_image(new_row(i), new_col(i)) = image(row(i), col(i));
end
new_image = uint8(new_image);

% Display the new image
imshow(new_image);
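The transformation function `t` above is left abstract. As a hedged sketch (the offsets `dr` and `dc`, and the bounds clamping, are my own illustrative additions, not part of the original answer), a simple rigid translation could be defined like this:

```matlab
% Hypothetical transformation: shift every pixel down by dr rows and
% right by dc columns. Any mapping (row, col) -> (new_row, new_col)
% could be substituted here.
dr = 10;
dc = -25;
t = @(row, col) deal(row + dr, col + dc);
[new_row, new_col] = t(row, col);

% Discard pixels whose new coordinates fall outside the image, so the
% assignment loop does not index out of bounds.
inbounds = new_row >= 1 & new_row <= size(image,1) & ...
           new_col >= 1 & new_col <= size(image,2);
new_row = new_row(inbounds);
new_col = new_col(inbounds);
row     = row(inbounds);
col     = col(inbounds);
```

Without the clamping step, any transformation that pushes pixels past the image border will throw an indexing error in the copy loop.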
I hope this helps.
DGM
9 Sep 2023
Edited: DGM, 9 Sep 2023
This all started by asking what was probably the wrong question to begin with
... then continued by trying to undo exactly what was asked
... while ignoring the fact that arrays are prismatic
... all without clarifying why the pointless and impossible was presumed to be necessary.
This is the transformation as incompletely described in 1778250 and 1773635. It's just a rigid translation of rows. I'm not going to bother refining this, because there's no point.
% this is an image which has been filled in with black
% filling this image with black has accomplished nothing
inpict = imread('unnecessary.bmp');
% so instead of realizing that there was no purpose for the prior step
% we're going to keep this useless altered image and then try to get rid
% of the problem we created by shifting the image rows.
% what values replace the shifted values on the other side?
% it necessarily must be something, so why not use black again?
% why not just use the same black pixels?
mask = inpict ~= 0;
[~,c] = max(mask,[],2);  % column of the first nonzero pixel in each row
outpict = inpict;
for r = 1:size(inpict,1)
    outpict(r,:) = circshift(inpict(r,:),[0 -c(r)]);
end
imshow(outpict,'border','tight')
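To make the "what fills the gap" point concrete, here is an alternative sketch (my own, reusing `inpict` and `c` from the block above) that shifts each row without circshift's wraparound and pads the vacated side with explicit black pixels:

```matlab
% Alternative to circshift: shift each row left without wraparound,
% padding the vacated right-hand side with explicit black (zero) pixels.
% This makes it obvious that something has to fill the gap either way.
outpict2 = zeros(size(inpict), 'like', inpict);
for r = 1:size(inpict,1)
    n = size(inpict,2) - c(r);           % number of pixels that remain
    outpict2(r, 1:n) = inpict(r, c(r)+1:end);
end
```

The result is the same kind of image: the black pixels have not been removed, only relocated.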
This also accomplishes nothing. What was supposed to be accomplished? Nobody actually knows, but given that the stated goal was variously described as "remove the background", this obviously -- as already explained -- was never an answer to the problem, even at the most superficial level.
Would a projective transformation (or some other interpolation) allow us to stretch the central region to fill a rectangular space, thereby eliminating the background?
% a different RGB image
inpict = imread('bluefing.png');

% select foreground region
% (rgb2hsv returns a single MxNx3 array for an image input,
% so extract the H plane by indexing)
hsvpict = rgb2hsv(inpict);
H = hsvpict(:,:,1);
mask = H > 0.78 | H < 0.022;

% find mask extents in all rows
[~,c1] = max(mask,[],2);
nc = sum(mask,2);
c2 = c1 + nc - 1;

% interpolate each row like a maniac
% i'm going to do it this way simply as an extension of the prior example,
% not because i think it's a good idea.
outpict = im2gray(inpict); % i assume the output should be gray
outpict = im2double(outpict);
xout = linspace(0,1,size(outpict,2));
for r = 1:size(outpict,1)
    xin = linspace(0,1,nc(r));
    yin = outpict(r,c1(r):c2(r));
    outpict(r,:) = interp1(xin,yin,xout);
end
% show the stretched out blurry finger
% are blurry, nonuniformly deformed features worth measuring? idk
imshow(outpict,'border','tight')
Yes, but that's not what was described, and I have to question what useful and unadulterated information would be left.
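For comparison, an actual projective warp of the foreground quadrilateral onto a rectangular frame could be sketched with fitgeotform2d (fitgeotrans on older releases) and imwarp. The corner points and output size below are made-up assumptions; in practice they would have to be located from the mask:

```matlab
% Hypothetical corner points of the foreground region, as [x y] pairs
% in the order TL, TR, BR, BL. These values are invented for illustration.
movingPoints = [120 40; 520 60; 500 600; 90 580];

% Target corners: the full extent of a chosen output frame.
outsize = [640 480];                 % [rows cols], also an assumption
fixedPoints = [1 1; outsize(2) 1; outsize(2) outsize(1); 1 outsize(1)];

% Fit a projective transform and warp the image onto the output frame.
tform = fitgeotform2d(movingPoints, fixedPoints, "projective");
warped = imwarp(inpict, tform, 'OutputView', imref2d(outsize));
imshow(warped, 'border', 'tight')
```

Unlike the per-row interpolation, this at least applies one consistent transformation to the whole image, though it still deforms whatever is being measured.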
Would simple cropping work? What about rotating and then cropping? What about combinations of these operations? Again, nobody knows what's appropriate, because nobody has a clear description of the goal.
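If rotating and cropping turned out to be acceptable, one pass could be sketched from the mask's principal orientation and bounding box (reusing `inpict` and `mask` from the earlier example; the choice of measurements here is entirely an assumption about the goal):

```matlab
% Rotate the image so the masked region's major axis is horizontal,
% then crop to the rotated region's bounding box.
stats = regionprops(mask, 'Orientation');     % angle of the major axis
ang   = stats(1).Orientation;
rot   = imrotate(inpict, -ang, 'bilinear');
rmask = imrotate(mask, -ang);

% Crop to the rotated mask's bounding box.
bb = regionprops(rmask, 'BoundingBox');
cropped = imcrop(rot, bb(1).BoundingBox);
imshow(cropped, 'border', 'tight')
```

Whether this is "appropriate" still depends on the unstated goal; it merely shows that the operations are cheap once the goal is known.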
Nobody has asked whether it's necessary to make the finger fill the frame at all. If we assume that it is, what exactly should the output geometry be? Based on this other question, it's probably safe to assume that the input geometries are inconsistent and need to be made to match, so we probably shouldn't be generating our output to match the input geometry at all. Never is there any description of what the output geometry should be or how it's calculated if it's a derived quantity.
At this point, it's also not acceptably clear how this inconsistency should be resolved. Should the image be cropped or resized? If cropped, which portion gets retained? While 1783820 clearly says "resize", the prior questions have asked for things which don't help solve the problem, so I have to severely discount the suggestion.
This feels like a collage of XY problems and nothing much else.
1 Comment
Image Analyst
9 Sep 2023
Definitely an XY problem situation.
A lot of times people ask for something they think they need but really don't. I see no reason to do what the original poster asked. No reason at all. I'd need more context and a justification of why moving black pixels is a requirement to do image analysis on the finger region. It's highly likely they can be left where they are and analysis can still take place.
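As a sketch of that point: properties of the finger region can be measured directly through the mask, with the background pixels left exactly where they are (the mask and image variables are reused from the examples above, and the particular measurements are just illustrative choices):

```matlab
% Measure the masked region in place; no pixels need to be relocated.
% mask is the logical foreground mask from the earlier examples.
gray  = im2gray(inpict);
stats = regionprops(mask, gray, 'Area', 'Centroid', ...
                    'MeanIntensity', 'BoundingBox');

% Intensity statistics over only the foreground pixels:
fgvals = gray(mask);
fprintf('area = %d px, mean intensity = %.1f\n', ...
        stats(1).Area, mean(double(fgvals)));
```

Logical indexing with the mask restricts every computation to the region of interest, which is usually all that "removing the background" actually requires.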