Cocalibration Method for Both Omnidirectional and Pinhole Camera-LiDAR

This work was developed by Ziliang Miao, Buwei He, and Wenya Xie, supervised by Prof. Xiaoping Hong (ISEE-Lab, SDIM, SUSTech), and has been accepted by IEEE Robotics and Automation Letters (RA-L).

Pre-print Paper: https://arxiv.org/pdf/2301.12934.pdf

Demo Video: https://www.youtube.com/watch?v=Uh0C9VL9YEQ

0. Abstract

This paper presents a novel 3D mapping robot with an omnidirectional field-of-view (FoV) sensor suite composed of a non-repetitive LiDAR (Livox Mid-360) and an omnidirectional camera (please consult Huanjun Technology, panoholic). Thanks to the non-repetitive scanning nature of the LiDAR, an automatic targetless co-calibration method is proposed to simultaneously calibrate the intrinsic parameters of the omnidirectional camera and the extrinsic parameters between the camera and LiDAR, a crucial step for bringing color and texture information to the point clouds in surveying and mapping tasks. Comparisons and analyses are made against target-based intrinsic calibration and mutual information (MI)-based extrinsic calibration, respectively. With this co-calibrated sensor suite, the hybrid mapping robot integrates both an odometry-based mapping mode and a stationary mapping mode. Meanwhile, we propose a new coarse-to-fine mapping workflow: efficient, coarse mapping of the global environment in odometry-based mapping mode; viewpoint planning in the region-of-interest (ROI) based on the coarse map (building on our previous work); and navigating to each viewpoint to perform finer, more precise stationary scanning and mapping of the ROI. The fine map is stitched with the global coarse map, yielding a result that is more efficient than conventional stationary approaches and more precise than emerging odometry-based approaches.

1. Workflow

2. Co-calibration Results

3. Prerequisites

3.1. Ubuntu and ROS

Version: Ubuntu 18.04.

Version: ROS Melodic.

Please follow ROS Installation to install.

3.2. ceres-solver

Version: ceres-solver 2.1.0

Please follow Ceres-Solver Installation to install.

3.3. PCL

Version: PCL 1.7.4

Version: Eigen 3.3.4

Please follow PCL Installation to install.

3.4. OpenCV

Version: OpenCV 3.2.0

Please follow OpenCV Installation to install.

3.5. mlpack

Version: mlpack 3.4.2

Please follow mlpack Installation to install.

3.6. Livox SDK and Livox ROS Driver

The SDK and driver are used to communicate with the Livox LiDAR. Remember to install the Livox SDK before the Livox ROS Driver.
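The install order above can be sketched as the following shell commands; this follows the standard CMake/catkin flow from the official Livox-SDK repositories, but check their READMEs for your exact versions:

```shell
# Build and install the Livox SDK first (standard CMake flow).
git clone https://github.com/Livox-SDK/Livox-SDK.git
cd Livox-SDK/build
cmake .. && make && sudo make install
cd ../..

# Then build the Livox ROS driver inside a catkin workspace.
git clone https://github.com/Livox-SDK/livox_ros_driver.git ws_livox/src
cd ws_livox && catkin_make
```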

3.7. MindVision SDK

The SDK for the fisheye camera is available as the MindVision SDK.

4. Run Co-calibration

Sensors:

Currently, this co-calibration method only supports an omnidirectional camera paired with a non-repetitive scanning LiDAR (Livox).

If you want to calibrate a monocular camera with the Livox LiDAR, replace the omnidirectional camera model with a monocular (pinhole) camera model and modify the corresponding optimization parameters.

We will consider supporting other types of LiDAR in the future.

Data:

Create the data, (dataset_name), and cocalibration directories, following the structure ~/cocalibration/data/(dataset_name)/cocalibration.

Rename the accumulated non-repetitive scanned point cloud to "full_fov_cloud.pcd" and the HDR image to "hdr_image.bmp".

Put the two raw files into the ~/cocalibration/data/(dataset_name)/cocalibration directory.
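The data preparation steps above can be sketched as follows; "example_scene" is a placeholder dataset name, and the source paths of the raw files are placeholders too:

```shell
# Create the expected directory layout for one dataset.
DATASET=example_scene
ROOT="$HOME/cocalibration/data/$DATASET/cocalibration"
mkdir -p "$ROOT"

# Move the renamed raw files into place (source paths are placeholders):
# mv /path/to/full_fov_cloud.pcd "$ROOT/full_fov_cloud.pcd"
# mv /path/to/hdr_image.bmp "$ROOT/hdr_image.bmp"
```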

Config:

Modify the parameters in the config file, cocalibration.yaml.
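For orientation, the config might look like the sketch below. Every key name here is an illustrative assumption, not the repository's real schema; consult the actual cocalibration.yaml shipped with the package for the real parameter names:

```yaml
# Illustrative sketch only; all key names are assumptions.
dataset_name: example_scene          # matches data/(dataset_name)/
cloud_file: full_fov_cloud.pcd       # accumulated non-repetitive point cloud
image_file: hdr_image.bmp            # HDR omnidirectional image
intrinsics_init: [0.0, 0.0, 0.0]     # initial guess for camera intrinsics
extrinsics_init: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # initial camera-LiDAR extrinsics
```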

Commands:

```shell
cd ~/<catkin_workspace>
catkin_make
source ./devel/setup.bash
roslaunch cocalibration cocalibration.launch
```
