Aligning a 3D stereo coordinate system with the local vertical and local horizontal

Using stereo vision, I am able to reconstruct the object under inspection.
After calibration, I know the coordinate system the stereo setup uses, as described here (it is defined with respect to the optical centre of Camera 1).
I have noticed that this coordinate system need not match the real-world one. For example, the local vertical of a place (which can be found using a spirit level) need not align with the vertical axis (y-axis) of my stereo setup's coordinate system.
Suppose I move my object only along the local vertical, with no change in its position along the local horizontal of that place. According to the stereo coordinate system used by the cameras, my object will have moved along the y-axis (obviously), but also along the camera's x-axis, which is not right: according to the real world, my object has not moved along the local horizontal at all!
How can I tackle this issue? I have aligned both the stereo setup and my object with the local vertical and local horizontal (using a spirit level). Still, when I move the object along the local vertical only, the x-coordinate reading also changes by a few mm. Any inputs regarding this would be appreciated.
1 Comment
Meghana Dinesh on 29 Jun 2015
Edited: Meghana Dinesh on 1 Jul 2015
I think it's similar to this question, but mine is in 3D. What are the relevant MATLAB functions I can use?
The coordinate system used by my Camera 1 is different from the real world's. For example, the y-axis of stereo Camera 1 is not the same as (not parallel to) the local vertical. This is a transformation between two 3D coordinate systems, right? How can I go about this?
BTW, I have read this. Slide #19 onwards addresses my issue.
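To make the question concrete: a transformation between two 3D coordinate systems is a rigid transform, i.e. a rotation plus a translation. A minimal sketch in MATLAB's row-vector convention (the names R, t, pA, pB are illustrative, not from this thread):
% A 3x3 rotation matrix R and a 1x3 translation vector t relate the frames:
pB = pA * R + t;        % map a 1x3 point from frame A into frame B
pA2 = (pB - t) * R';    % inverse map; R is orthonormal, so inv(R) = R'
The question that remains is how to obtain such an R and t for the camera frame.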


Accepted Answer

Dima Lisin on 29 Jun 2015
Edited: Dima Lisin on 29 Jun 2015
Hi Meghana,
Please keep in mind that in camera-based coordinates the X-Y plane is the image plane, which is inside the camera. It may well not be precisely aligned with the camera's outer casing.
If you need a world coordinate system not tied to the camera, then you can define one by placing a checkerboard in your scene. You can then use the extrinsics function to compute the transformation from the checkerboard's coordinates into the camera's coordinates.
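For instance, a minimal sketch using the Computer Vision System Toolbox (the image file name, the square size, and the stereoParams variable are assumptions for illustration, not from the original answer):
% Detect checkerboard corners in an image from Camera 1 (hypothetical file).
I = imread('board.png');
[imagePoints, boardSize] = detectCheckerboardPoints(I);
% Define the board's own coordinate system: points in its Z = 0 plane.
squareSize = 25; % square size in mm (assumed)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Rotation and translation from the checkerboard frame into Camera 1's frame,
% using Camera 1's intrinsics from the stereo calibration:
[R, t] = extrinsics(imagePoints, worldPoints, stereoParams.CameraParameters1);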
6 Comments
Meghana Dinesh on 9 Jul 2015
Edited: Meghana Dinesh on 9 Jul 2015
Oh! I did not know this. Thank you.
I want to clarify my understanding. Camera coordinates are the coordinate system used when I compute the point cloud. They depend on the physical orientation of the sensor and take the optical centre of Camera 1's image sensor as the origin.
[rotationMatrix,translationVector] = extrinsics(imagePoints,worldPoints,cameraParams)
Here, imagePoints are the checkerboard corner points, where the checkerboard is aligned with the local vertical and local horizontal in the orientation I want. Correct?
I have a doubt here: shouldn't worldPoints be the coordinates obtained after triangulation, so that the algorithm knows which values of the checkerboard points in the camera coordinate system should be mapped to the new world coordinate system? Instead, how does it get this information from generateCheckerboardPoints?
If I have this clarity in understanding, I will be able to use these functions more effectively.
Dima Lisin on 13 Jul 2015
Hi Meghana,
There is no triangulation here. The checkerboard simply defines a coordinate system. By calling generateCheckerboardPoints you effectively specify points in the Z=0 plane. The extrinsics function then gives you the rotation and translation between this new coordinate system and your camera's coordinate system.
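Concretely, once extrinsics has returned R and t, points triangulated in the camera frame can be mapped into the checkerboard-defined frame. A hedged sketch (cameraPts is an assumed M-by-3 matrix of triangulated points, one point per row):
% extrinsics uses the convention cameraPts = worldPts * R + t (row vectors),
% so the inverse mapping from camera coordinates to board coordinates is:
worldPts = bsxfun(@minus, cameraPts, t) * R'; % R is orthonormal, inv(R) = R'
% In this frame the checkerboard plane is Z = 0, so a board levelled with a
% spirit level defines the new vertical and horizontal directions.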
