What does the output of imregcorr mean?

Eric on 11 Mar 2015
Commented: Eric on 17 Mar 2015
I am trying to understand the output of imregcorr and could use some help. Below is the code I am working with. I have my own function called RegisterViaReddy that uses the technique explained in the reference of imregcorr to register images that differ in translation and rotation (I wrote my code before imregcorr was released). Unfortunately I cannot post RegisterViaReddy, but I understand its behavior so hopefully its details are not relevant.
Here is the sample code I am working with:
%%Start with a clean workspace
clear all;close all;clc;%#ok
%%Load image
fixedFull = double(imread('cameraman.tif'));
rows = 30:226;
cols = rows;
fixed = fixedFull(rows,cols);
%%Specify motion parameters
theta = 5;%degrees
rowshift = 1.65;%pixels
colshift = 5.32;%pixels
%%Create rotated/translated image
RT = @(img,colshift,rowshift,theta) imrotate( imtransform(img, maketform('affine', [1 0 0; 0 1 0; colshift rowshift 1]), 'bilinear', 'XData', [1 size(img,2)], 'YData', [1 size(img,1)], 'FillValues', 0),theta,'crop'); %#ok
movingFull = RT(fixedFull, colshift, rowshift, theta);
moving = movingFull(rows,cols);
%%Show both images
figure;
imshowpair(moving,fixed,'montage');
%%Register images
[rowshift1, colshift1, theta1, imgReg] = RegisterViaReddy(fixed, moving);
tform1 = imregcorr(moving, fixed, 'rigid');
The function handle RT first translates an image and then rotates it. The resulting image is the same size as the input image. The outputs of my own RegisterViaReddy function are
>> [rowshift1, colshift1, theta1]
ans =
-1.7600 -5.1000 -5.3402
These are nearly the opposites of the known rowshift, colshift, and theta parameters. I wrote my code this way so that
RT(moving,colshift1,rowshift1,theta1);
generates something that looks like the fixed image.
I do not understand how to get these parameters from the output of imregcorr (tform1). I understand that acosd(tform1.T(1,1)) is 5.1799 degrees, which is presumably the rotation angle. However, tform1.T is
0.9959 0.0903 0
-0.0903 0.9959 0
4.1423 -10.3337 1.0000
How do I extract meaningful translation parameters from this? I know I can generate something that looks like the fixed image using
imwarp(moving, tform1);
but the resulting array is 214x214 whereas fixed and moving are 197x197. Is there any way to get the translation offsets that I input from the output of imregcorr?
Thanks,
Eric
  1 Comment
Eric on 11 Mar 2015
I define the variables rows and cols above to create two images that are the same size but have translated/rotated fields-of-view. This simulates how a real imaging system behaves when its pointing angle is changed slightly (translating the image) and the sensor is rotated slightly.
-Eric


Accepted Answer

Alex Taylor on 16 Mar 2015
Edited: Alex Taylor on 16 Mar 2015
Eric,
I had been meaning to answer this question days ago and I finally came up for air. Hopefully this answer will still be of use.
There are several issues at play here:
1) The Image Processing Toolbox uses a different convention for the transformation matrix than many references you will find: the IPT matrix is the transpose of the textbook form, with points written as row vectors that are post-multiplied by the matrix:
% Define a pure translation and apply it to the input point (w,z) = (0,0)
tx = 1.65;
ty = 5.32;
T = [1 0 0; 0 1 0; tx ty 1];
w = 0;
z = 0;
xy = [w z 1]*T
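To make the transpose relationship concrete, here is a small check against the textbook (pre-multiply) form; Tref below is just an illustrative local variable, not anything defined by the toolbox:
% Continuing from the snippet above: same translation, two conventions
Tref = T.';                  % textbook form: translation in the last COLUMN
xyRow = [w z 1]*T            % IPT convention: row vector times T
xyColumn = Tref*[w; z; 1]    % textbook convention: transposed matrix times a column vector
% xyRow and xyColumn are transposes of one another; both map (0,0) to (tx,ty) = (1.65, 5.32)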
This means that for a rigid transformation, the tform object returned by imregcorr has a T matrix of the form:
tform.T = [cos(theta) sin(theta) 0; -sin(theta) cos(theta) 0; tx ty 1];
with the rotation in the upper-left 2x2 block and the translation in the last row.
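Given that layout, you can read the parameters straight out of the matrix. Here is a minimal sketch using the tform1 from your example (the variable names are illustrative; the sign of the recovered angle and the meaning of tx/ty follow the IPT post-multiply convention, so, as explained below, they will not simply equal the negatives of your rowshift/colshift/theta):
T = tform1.T;                       % 3x3 matrix returned by imregcorr
thetaEst = atan2d(T(1,2), T(1,1))   % rotation angle in degrees (about 5.18 for the matrix you posted)
txtyEst = T(3,1:2)                  % translation [tx ty], applied AFTER the rotation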
2) In the operation you are synthetically applying to your input image and then attempting to recover, you apply a translation via imtransform and THEN you perform a rotation by using imrotate.
The transformation matrix returned by imregtform represents an affine transformation consisting of a linear portion A (the upper-left 2x2 block, which includes rotation and scale) and an additive portion b (the last row, which applies the translation).
In an affine transformation, the linear part of the transformation is applied prior to the additive part of the transformation: T(x) = Ax + b;
All of this together means that the transformation returned by imregtform has a different interpretation than your implementation of Reddy's method: it corresponds to a rotation followed by a translation, not the other way around as you have defined it.
Another way of seeing this is to note that transformations can be composed by matrix multiplication, and that matrix multiplication is non-commutative, meaning that the order in which you apply rotation/translation matters.
theta = 5;
Trotate = [cosd(theta) -sind(theta) 0; sind(theta) cosd(theta) 0; 0 0 1];
Translate = [1 0 0; 0 1 0; 1.65 5.32 1];
Translate*Trotate
Trotate*Translate
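Multiplying these out makes the difference concrete (continuing the snippet above): both products share the same rotation block, but the translation rows differ, and in the translate-then-rotate order your RT handle uses, the translation itself gets rotated:
M1 = Translate*Trotate   % translate first, then rotate (the order used in RT)
M2 = Trotate*Translate   % rotate first, then translate
% The upper-left 2x2 rotation blocks of M1 and M2 are identical, but the last rows differ:
% M1(3,1:2) equals [1.65 5.32]*Trotate(1:2,1:2) (the translation has been rotated),
% while M2(3,1:2) is just [1.65 5.32].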
3) Take a look at the example for imregtform. It uses imwarp to apply a given affine transformation matrix and then demonstrates how this transformation can be recovered, effectively doing exactly what your code is trying to do.
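For reference, here is a minimal sketch of that recover-the-transformation step applied to your moving/fixed pair with imregtform; the 'monomodal' optimizer/metric configuration is just an assumed default here (see imregconfig for the options):
[optimizer, metric] = imregconfig('monomodal');                      % default settings for same-modality images
tformEst = imregtform(moving, fixed, 'rigid', optimizer, metric);    % estimate the moving-to-fixed transform
movingReg = imwarp(moving, tformEst, 'OutputView', imref2d(size(fixed)));  % resample into fixed's frame
figure, imshowpair(movingReg, fixed);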
Between the subscripting you are doing and your transformation operations, it's not immediately obvious to me what effective affine transformation matrix you are applying to your input image, so I'm not sure what the expected result is. Unfortunately, I don't have time to work that out right now.
4) If you want to use imwarp to apply a geometric transformation to an image and reference the output image to the coordinate system of another image, then you will need to use the 'OutputView' option of imwarp to guarantee that the output image is the same size as the fixed image. Again, this is shown in the examples for imregtform.
5) If you want to look at the result that imregcorr is giving as a registration and compare it to your implementation, here is the code to do that based on your example:
% Start with a clean workspace
clear all;close all;clc;%#ok
% Load image
fixedFull = double(imread('cameraman.tif'));
rows = 30:226;
cols = rows;
fixed = fixedFull(rows,cols);
% Specify motion parameters
theta = 5;%degrees
rowshift = 1.65;%pixels
colshift = 5.32;%pixels
% Create rotated/translated image
RT = @(img,colshift,rowshift,theta) imrotate( imtransform(img, maketform('affine', [1 0 0; 0 1 0; colshift rowshift 1]), 'bilinear', 'XData', [1 size(img,2)], 'YData', [1 size(img,1)], 'FillValues', 0),theta,'crop'); %#ok
movingFull = RT(fixedFull, colshift, rowshift, theta);
moving = movingFull(rows,cols);
% Show both images
figure;
imshowpair(moving,fixed,'montage');
% Register images
tform1 = imregcorr(moving, fixed, 'rigid');
movingReg = imwarp(moving,tform1,'OutputView',imref2d(size(fixed)));
figure, imshowpair(movingReg,fixed);
  3 Comments
Alex Taylor on 16 Mar 2015
Edited: Alex Taylor on 16 Mar 2015
Eric,
Here is a modified form of the example from the imregcorr help. It shows how to synthetically apply a desired translation and rotation using imwarp, and then how to recover that operation using imregcorr. (I'm not using code formatting below because it makes the lines wrap oddly.)
fixed = imread('cameraman.tif');
theta = 8;
S = 1.0;
tx = 4.2;
ty = 6.7;
tformFixedToMoving = affine2d([S.*cosd(theta) -S.*sind(theta) 0; S.*sind(theta) S.*cosd(theta) 0; tx ty 1]);
moving = imwarp(fixed,tformFixedToMoving,'OutputView',imref2d(size(fixed)));
% Add a bit of uniform noise to moving to make the problem harder.
moving = moving + uint8(10*rand(size(moving)));
tformMovingToFixedEstimate = imregcorr(moving,fixed);
figure, imshowpair(fixed,moving,'montage');
% Apply estimated geometric transform to moving. Specify 'OutputView' to
% get registered moving image that is the same size as the fixed image.
Rfixed = imref2d(size(fixed));
movingReg = imwarp(moving,tformMovingToFixedEstimate,'OutputView',Rfixed);
figure, imshowpair(fixed,movingReg,'montage');
tformFixedToMovingEst = invert(tformMovingToFixedEstimate);
tformFixedToMovingEst.T
angleErrorInDegrees = acosd(tformFixedToMovingEst.T(1,1)) - theta
translationError = [tx ty] - tformFixedToMovingEst.T(3,1:2)
Eric on 17 Mar 2015
Alex,
Thanks again for taking the time to look at this. The difficulty I would have in using this technique for simulations and comparing registration accuracies is that the moving image is zero over the portion of the field-of-view that does not overlap with that of the fixed image. In real-world imagery those pixels have information about the extended field-of-view. By zeroing out those pixels you make the registration problem considerably easier. Hence I was trying to use my "rows" and "cols" variables to create two arrays with different fields-of-view but accurate information everywhere rather than zero where the fields-of-view did not intersect.
Thanks again,
Eric
