Journal metrics

  • CiteScore: 0.46
  • SNIP: 0.339
  • SJR: 0.199
  • IPP: 0.35
  • h5-index: 6
  • Scimago H index: 17
Adv. Radio Sci., 13, 209-215, 2015
© Author(s) 2015. This work is distributed under
the Creative Commons Attribution 3.0 License.
03 Nov 2015
Multi-view point cloud fusion for LiDAR based cooperative environment detection
B. Jaehn, P. Lindner, and G. Wanielik Professorship of Communications Engineering, Chemnitz University of Technology, Chemnitz, Germany
Abstract. A key component for automated driving is 360° environment detection. The recognition capabilities of modern sensors are always limited to their direct field of view. In urban areas, many objects occlude important areas of interest. Information captured by another sensor from a different perspective could resolve such occlusions. Furthermore, the ability to detect and classify various objects in the surroundings can be improved by taking multiple views into account.

In order to combine the data of two sensors into one coordinate system, a rigid transformation matrix has to be derived. The accuracy of modern, e.g. satellite-based, relative pose estimation systems is not sufficient to guarantee a suitable alignment. Therefore, this work uses a registration-based approach which aligns the environment data captured by two sensors from different positions. Thus, their relative pose estimate obtained by traditional methods is improved and the data can be fused.
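As a minimal sketch (not the authors' implementation), the rigid transformation described above can be assembled from a planar relative position and heading estimate as a 4x4 homogeneous matrix; the function names here are illustrative assumptions:

```python
import numpy as np

def pose2d_to_transform(x, y, heading_rad):
    """Build a 4x4 homogeneous transform from a planar pose:
    translation (x, y) and a rotation about the vertical axis.
    Height offset, roll and pitch are assumed negligible."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[0, 3] = x
    T[1, 3] = y
    return T

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homo.T).T[:, :3]
```

Such a coarse transform brings the second sensor's point cloud into the first sensor's frame, close enough for a registration step to refine the alignment.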

To support this, we present an approach which utilizes the uncertainty information of modern tracking systems to determine the possible field of view of the other sensor. Furthermore, it is estimated which parts of the captured data are directly visible to both sensors, taking occlusion and shadowing effects into account. Afterwards, a registration method based on the iterative closest point (ICP) algorithm is applied to that data in order to obtain an accurate alignment.
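The ICP refinement step mentioned above can be sketched as follows; this is a deliberately minimal point-to-point variant with brute-force nearest-neighbour matching, not the paper's exact method:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """SVD-based least-squares rigid alignment of paired points
    (the Kabsch step used inside each ICP iteration)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iterations=20):
    """Minimal point-to-point ICP: match each source point to its
    nearest destination point, then apply a rigid update."""
    cur = src.copy()
    for _ in range(iterations):
        # squared distances between every cur/dst pair (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
    return cur
```

ICP only converges when the initial misalignment is small relative to the point spacing, which is why the coarse pose estimate and the visibility filtering described above matter: they keep the nearest-neighbour correspondences meaningful.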

The contribution of the presented approach to the achievable accuracy is shown with the help of ground truth data from a LiDAR simulation within a 3-D crossroad model. The results show that a two-dimensional position and heading estimate is sufficient to initialize a successful 3-D registration process. Furthermore, it is shown which initial spatial alignment is necessary to obtain suitable registration results.

Citation: Jaehn, B., Lindner, P., and Wanielik, G.: Multi-view point cloud fusion for LiDAR based cooperative environment detection, Adv. Radio Sci., 13, 209-215, 2015.
Short summary
In the future, autonomous robots will share environment information captured by range sensors such as LiDAR or ToF cameras. This paper shows that two-dimensional position and heading information, e.g. obtained by GPS tracking methods, is enough to initialize a 3-D registration method using range images taken from different perspectives on different platforms (e.g. car and infrastructure). Thus, they will be able to explore their surroundings in a cooperative manner.