Match the coordinate systems of "triangulate" and "reconstructScene" with "disparitySGM"
Hello,
I have an image pair of a grid-ruled sheet of paper and the corresponding stereoParams, obtained from checkerboard calibration with the Stereo Camera Calibrator app using default settings. I apply the following two processing pipelines to the image pair (a rough code sketch follows the list):
Processing A: rectifyStereoImages (with stereoParams), disparitySGM, reconstructScene (with stereoParams)
Processing B: undistortImage on each image (with stereoParams.CameraParameters1 or CameraParameters2, respectively), detection of line intersections in both images, then triangulate (with stereoParams) to get the 3D locations of the intersections
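In code, the two pipelines look roughly like this (detectIntersections is a placeholder for my own line-intersection detector; I1 and I2 are the raw images):

% Processing A: dense reconstruction
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparitySGM(im2gray(J1), im2gray(J2));
xyzPoints = reconstructScene(disparityMap, stereoParams);

% Processing B: sparse triangulation
U1 = undistortImage(I1, stereoParams.CameraParameters1);
U2 = undistortImage(I2, stereoParams.CameraParameters2);
pts1 = detectIntersections(U1);   % N-by-2 intersection locations (placeholder)
pts2 = detectIntersections(U2);   % matched row-by-row with pts1
worldPoints = triangulate(pts1, pts2, stereoParams);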
I wish to display the point cloud from processing A and the 3D feature locations from processing B in the same 3D plot, such that they coincide. But they do not: there is an apparent rotation about the origin of coordinates (the optical center of camera 1) between the two. I show here the output of showExtrinsics(stereoParams) together with the point cloud (jet colormap in y) and the triangulated points (green). The triangulated points are where I expected them to be, whereas the point cloud lies to the right of the calibrated area, off the plane of symmetry between the cameras. In the experiment, both cameras' optical axes point at the center of the sheet of paper.
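For reference, this is roughly how I overlay the two results (variable names as in the sketch above):

pcshow(pointCloud(reshape(xyzPoints, [], 3)));  % dense cloud from processing A
hold on
plot3(worldPoints(:,1), worldPoints(:,2), worldPoints(:,3), ...
    'g.', 'MarkerSize', 12);                    % triangulated intersections
hold off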
When I apply the rotation matrix inv(stereoParams.RotationOfCamera2)^0.5 (the matrix square root of the inverse rotation) to the point cloud, the point cloud almost coincides with the feature locations, but not to my satisfaction. Here is a closeup (colormap in depth, triangulated points in orange):
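In code, the correction I tried looks like this (row-vector point convention assumed):

Rhalf = real(inv(stereoParams.RotationOfCamera2)^0.5);  % principal matrix square root
pts = reshape(xyzPoints, [], 3);                        % flatten M-by-N-by-3 to a point list
xyzRotated = reshape(pts * Rhalf, size(xyzPoints));     % rotate about the camera-1 origin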
I get a similar result when I convert the original rotation matrix to Euler angles, multiply them by -0.5, and convert back to a rotation matrix.
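The axis-angle equivalent, using the Computer Vision Toolbox conversions rather than Euler angles, would be something like:

rvec = rotationMatrixToVector(stereoParams.RotationOfCamera2);  % Rodrigues vector
RhalfAlt = rotationVectorToMatrix(-0.5 * rvec);                 % half of the inverse rotation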
Now I would like to understand exactly what the output coordinate systems of reconstructScene and triangulate are, including their orientation. And maybe somebody can disprove my assumption that the two methods should yield matching results.
Accepted Answer
Qu Cao, 17 August 2022
The point cloud generated by reconstructScene is in the rectified camera 1 coordinate system.
Starting in R2022a, you can use the additional output R1 of the rectifyStereoImages function to convert the reconstructed point cloud from the rectified camera 1 coordinate system to the original, unrectified camera 1 coordinate system used by the triangulate function.
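A minimal sketch of that conversion, assuming the R2022a output list of rectifyStereoImages (names other than the documented outputs are placeholders):

[J1, J2, reprojectionMatrix, camMatrix1, camMatrix2, R1, R2] = ...
    rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparitySGM(im2gray(J1), im2gray(J2));
xyzPoints = reconstructScene(disparityMap, reprojectionMatrix);

% R1 relates the unrectified and rectified camera-1 frames; apply it to
% the reconstructed points to bring them into the frame triangulate uses.
pts = reshape(xyzPoints, [], 3);
xyzUnrect = reshape(pts * R1, size(xyzPoints));  % use R1' if the convention is reversed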