What Is Lidar-Camera Calibration?

Lidar-camera calibration establishes correspondences between 3-D lidar points and 2-D camera data so that you can fuse the lidar and camera outputs.

Lidar sensors and cameras are widely used together for 3-D scene reconstruction in applications such as autonomous driving, robotics, and navigation. While a lidar sensor captures the 3-D structural information of an environment, a camera captures the color, texture, and appearance information. The lidar sensor and camera each capture data with respect to their own coordinate system.

Lidar-camera calibration consists of converting the data from a lidar sensor and a camera into the same coordinate system. This enables you to fuse the data from both sensors and accurately identify objects in a scene. This figure shows the fused data.

Lidar and camera data fused together

Lidar-camera calibration consists of intrinsic calibration and extrinsic calibration.

  • Intrinsic calibration — Estimate the internal parameters of the lidar sensor and camera.

    • Manufacturers calibrate the intrinsic parameters of their lidar sensors in advance.

    • You can use the estimateCameraParameters function to estimate the intrinsic parameters of the camera, such as focal length, lens distortion, and skew. For more information, see the Single Camera Calibration example; a minimal code sketch also follows this list.

    • You can also interactively estimate camera parameters using the Camera Calibrator app.

  • Extrinsic calibration — Estimate the external parameters of the lidar sensor and camera, such as location and orientation, to establish the relative rotation and translation between the sensors.
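
This is a minimal sketch of estimating the camera intrinsic parameters programmatically, assuming a folder of checkerboard calibration images. The folder name and square size are placeholders; substitute your own data.

    % Detect checkerboard corners in the calibration images
    % (hypothetical folder name).
    imds = imageDatastore(fullfile("calib","cameraImages"));
    [imagePoints,boardSize] = detectCheckerboardPoints(imds.Files);

    % Generate the world coordinates of the corners. The square size is
    % an assumed value; measure your own checkerboard.
    squareSize = 25; % millimeters
    worldPoints = generateCheckerboardPoints(boardSize,squareSize);

    % Estimate the camera parameters, including focal length, lens
    % distortion, and skew.
    I = readimage(imds,1);
    cameraParams = estimateCameraParameters(imagePoints,worldPoints, ...
        "ImageSize",[size(I,1) size(I,2)]);
    intrinsics = cameraParams.Intrinsics;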

Extrinsic Calibration of Lidar and Camera

The extrinsic calibration of a lidar sensor and camera estimates a rigid transformation between them that establishes a geometric relationship between their coordinate systems. This process uses standard calibration objects, such as planar boards with checkerboard patterns.
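
In other words, extrinsic calibration estimates a rotation matrix R and a translation vector t that map a 3-D point p_lidar in the lidar coordinate system to the corresponding point p_camera in the camera coordinate system:

    p_camera = R * p_lidar + t

Here, R is a 3-by-3 rotation matrix and t is a 3-by-1 translation vector; together they form the rigid transformation between the two sensors.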

This diagram shows the extrinsic calibration process for a lidar sensor and camera using a checkerboard.

Lidar camera calibration process

The programmatic workflow for extrinsic calibration consists of these steps; a minimal code sketch follows the list. Alternatively, you can use the Lidar Camera Calibrator app to interactively perform lidar-camera calibration.

  1. Extract the 3-D information of the checkerboard from both the camera and lidar sensor.

    1. To extract the 3-D checkerboard corners in world coordinates from the camera data, use the estimateCheckerboardCorners3d function.

    2. To extract the checkerboard plane from the lidar point cloud data, use the detectRectangularPlanePoints function.

  2. Use the checkerboard corners and planes to obtain the rigid transformation matrix, which consists of the rotation R and translation t. You can estimate the rigid transformation matrix by using the estimateLidarCameraTransform function. The function returns the transformation as a rigidtform3d object.

    Extrinsic parameters transforming from the lidar frame to the camera frame
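
This is a minimal sketch of the two steps above, assuming the calibration images and point clouds are stored as files and that intrinsics is a cameraIntrinsics object from camera calibration. The folder names, file extensions, and square size are placeholders.

    % Gather the calibration data (hypothetical folder names).
    imageFiles = dir(fullfile("calib","images","*.png"));
    imageFileNames = fullfile({imageFiles.folder},{imageFiles.name});
    ptCloudFiles = dir(fullfile("calib","pointClouds","*.pcd"));
    ptCloudFileNames = fullfile({ptCloudFiles.folder},{ptCloudFiles.name});

    squareSize = 81; % checkerboard square size in millimeters (assumed)

    % Step 1a: Extract the 3-D checkerboard corners, in world
    % coordinates, from the images.
    [imageCorners3d,checkerboardDimension,dataUsed] = ...
        estimateCheckerboardCorners3d(imageFileNames,intrinsics,squareSize);
    ptCloudFileNames = ptCloudFileNames(dataUsed); % keep matching frames

    % Step 1b: Extract the checkerboard plane from the point clouds.
    [lidarCheckerboardPlanes,framesUsed] = detectRectangularPlanePoints( ...
        ptCloudFileNames,checkerboardDimension,"RemoveGround",true);
    imageCorners3d = imageCorners3d(:,:,framesUsed); % keep matching corners

    % Step 2: Estimate the rigid transformation from the lidar frame to
    % the camera frame, returned as a rigidtform3d object.
    [tform,errors] = estimateLidarCameraTransform( ...
        lidarCheckerboardPlanes,imageCorners3d);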

You can use the transformation matrix to:

  • Project the lidar point cloud onto the camera image.

  • Fuse color information from the camera image onto the lidar point cloud.

  • Transfer object detections between the lidar and camera coordinate systems.
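
For example, this sketch projects lidar points onto an image with the projectLidarPointsOnImage function and colors the point cloud with the fuseCameraToLidar function, assuming the tform and intrinsics variables from the previous steps. The file names are placeholders.

    % Load one synchronized frame (hypothetical file names).
    I = imread(fullfile("calib","images","frame0001.png"));
    ptCloud = pcread(fullfile("calib","pointClouds","frame0001.pcd"));

    % Project the lidar points onto the image plane.
    imPts = projectLidarPointsOnImage(ptCloud,intrinsics,tform);
    figure
    imshow(I)
    hold on
    plot(imPts(:,1),imPts(:,2),".","MarkerSize",4)
    hold off

    % Fuse camera color onto the point cloud. fuseCameraToLidar expects
    % the camera-to-lidar transform, so invert the lidar-to-camera
    % transform.
    ptCloudColored = fuseCameraToLidar(I,ptCloud,intrinsics,invert(tform));
    figure
    pcshow(ptCloudColored)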

