What is the right use of MATLAB built-in functions for visual odometry?

3 views (last 30 days)
Canberk Suat Gurel on 26 March 2018
I am working on a visual odometry code in MATLAB. I am using the following example (estimateEssentialMatrix) to obtain the essential matrix. You can open this example by typing
openExample('vision/EstimateEssentialMatrixFromAPairOfImagesExample')
in the Command Window.
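For reference, the part of that example I am following looks roughly like this (a paraphrased sketch, not my exact code; the variable names are placeholders, and I1, I2 stand for two consecutive grayscale frames):
% Detect and match features between the two frames, then estimate E
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);
[features1, validPoints1] = extractFeatures(I1, points1);
[features2, validPoints2] = extractFeatures(I2, points2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = validPoints1(indexPairs(:,1));
matchedPoints2 = validPoints2(indexPairs(:,2));
[E, inliers] = estimateEssentialMatrix(matchedPoints1, matchedPoints2, cameraParams);
inlierPoints1 = matchedPoints1(inliers);
inlierPoints2 = matchedPoints2(inliers);
Then, I used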
[relativeOrientation,relativeLocation] = relativeCameraPose(E,cameraParams,inlierPoints1,inlierPoints2);
[rotationMatrix,translationVector] = cameraPoseToExtrinsics(relativeOrientation,relativeLocation);
to recover the rotation matrix and translation vector. Then, I concatenated the relative poses and plotted the translation vector (which indicates the location of the camera):
T_t = T_t + R_t * translationVector';   % accumulate camera position
R_t = R_t * rotationMatrix';            % accumulate camera orientation
location = vertcat(location,[T_t(1),T_t(3)]);   % keep the x and z components
plot3(location(:,1), zeros(size(location,1),1), location(:,2))
where initially,
R_t = eye(3);
T_t = [0;0;0];
location = [0,0];
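Putting the pieces together, the per-frame loop I am running looks roughly like this (a condensed sketch; numFrames and the feature-matching step are placeholders for my actual image-loading and matching code):
R_t = eye(3);
T_t = [0;0;0];
location = [0,0];
for k = 2:numFrames
    % ... load frames k-1 and k, match features, estimate E, get inlierPoints1/2 ...
    [relativeOrientation,relativeLocation] = relativeCameraPose(E,cameraParams,inlierPoints1,inlierPoints2);
    [rotationMatrix,translationVector] = cameraPoseToExtrinsics(relativeOrientation,relativeLocation);
    T_t = T_t + R_t * translationVector';   % accumulate camera position
    R_t = R_t * rotationMatrix';            % accumulate camera orientation
    location = vertcat(location,[T_t(1),T_t(3)]);
end
plot3(location(:,1), zeros(size(location,1),1), location(:,2))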
However, I am not getting the right result. I presume the issue is with the cameraParams object, which contains the parameters of the camera.
I used a function (ReadCameraModel) that was provided along with the dataset to obtain the camera intrinsics and undistortion LUT. The inputs and outputs of the function are as follows:
% ReadCameraModel - load camera intrinsics and undistortion LUT from disk
%
% [fx, fy, cx, cy, G_camera_image, LUT] = ReadCameraModel(image_dir, models_dir)
%
% INPUTS:
% image_dir: directory containing images for which camera model is required
% models_dir: directory containing camera models
%
% OUTPUTS:
% fx: horizontal focal length in pixels
% fy: vertical focal length in pixels
% cx: horizontal principal point in pixels
% cy: vertical principal point in pixels
% G_camera_image: transform that maps from image coordinates to the base frame of the camera.
% For monocular cameras, this is simply a rotation.
% For stereo cameras, this is a rotation and a translation to the left-most lens.
% LUT: undistortion lookup table. For an image of size w x h, LUT will be an
% array of size [w x h, 2], with a (u,v) pair for each pixel. Maps pixels
% in the undistorted image to pixels in the distorted image
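If I understand the LUT correctly, it would be applied to undistort a frame roughly like this (my own sketch, not code from the dataset; I am assuming the column-major layout of the LUT matches the image, and 'frame.png' is a placeholder file name):
I_dist = im2double(imread('frame.png'));                 % distorted input frame
u = reshape(LUT(:,1), size(I_dist,1), size(I_dist,2));   % horizontal sample positions
v = reshape(LUT(:,2), size(I_dist,1), size(I_dist,2));   % vertical sample positions
I_undist = interp2(I_dist, u, v);                        % sample the distorted image at the LUT positions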
I used the following code to get the cameraParams object:
[fx, fy, cx, cy, G_camera_image, LUT] = ReadCameraModel('./stereo/centre','./model');
K = [fx 0 cx; 0 fy cy; 0 0 1]; % intrinsic matrix of the camera
cameraParams = cameraParameters('IntrinsicMatrix',K);
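One thing I am not sure about: the documentation for cameraParameters writes the intrinsic matrix in the transposed (row-vector) convention, [fx 0 0; s fy 0; cx cy 1], so perhaps I should pass the transpose instead, e.g.
cameraParams = cameraParameters('IntrinsicMatrix',K'); % if the transposed convention is what is expected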
Is this the right way of getting the cameraParams object? If yes, what is it that I am doing wrong?

Answers (0)
