Journal of Transportation Systems Engineering and Information Technology ›› 2021, Vol. 21 ›› Issue (4): 72-81. DOI: 10.16097/j.cnki.1009-6744.2021.04.009

• Intelligent Transportation System and Information Technology •

  • Author biography: YU Zu-jun (1968- ), male, from Dangyang, Hubei Province; professor, Ph.D.

Vehicle Simultaneous Localization and Mapping Algorithm with Lidar-camera Fusion

YU Zu-jun* a,b, ZHANG Chen-guang a, GUO Bao-qing a,b

  a. College of Mechanical, Electronic and Control Engineering; b. Frontiers Science Center for Smart High-speed Railway System, Beijing Jiaotong University, Beijing 100044, China
  • Received: 2021-06-04 Revised: 2021-07-03 Accepted: 2021-07-05 Online: 2021-08-25 Published: 2021-08-23
  • Supported by:
    National Natural Science Foundation of China(52072026)



Abstract: Localization and mapping is the basis of autonomous vehicle driving in unknown environments. Since lidar relies heavily on the geometric features of the scene and visual images are vulnerable to light interference, SLAM (Simultaneous Localization And Mapping) algorithms that rely only on laser point clouds or on visual images show limitations in vehicle localization and mapping. This paper proposes a vehicle self-positioning algorithm based on lidar-camera fusion SLAM, which improves overall localization performance by combining the complementary advantages of the two sensors. To exploit the multi-source features, the laser point cloud is used at the front end of the algorithm to obtain the depth information of visual features, and the laser-visual features are fed into the pose estimation module in a loosely coupled way to improve robustness. To reduce the heavy computation caused by large-scale optimization of poses and feature points at the back end, two strategies are proposed: a balanced selection strategy based on keyframes and a sliding window, and a classification optimization strategy based on feature points and poses. Experimental results show that the average relative localization error of the proposed algorithm is 0.11 m and 0.002 rad, and the average resource utilization is 22.18% (CPU) and 21.5% (memory). Compared with the classical A-LOAM (Advanced implementation of LOAM) and ORB-SLAM2 (Oriented FAST and Rotated BRIEF SLAM2) algorithms, the proposed algorithm performs well in both accuracy and robustness.
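The front-end fusion step described above, in which the laser point cloud supplies depth for 2D visual features, can be sketched as a projection-and-association routine. The sketch below is illustrative only and is not the paper's actual implementation: the function name, the intrinsic matrix `K`, the extrinsic transform `T_cam_lidar`, and the nearest-neighbor search radius are all assumptions.

```python
# Hypothetical sketch of lidar-camera depth association: lidar points are
# projected into the image plane with a pinhole model, and each 2D visual
# feature takes the depth of the nearest projected lidar point within a
# pixel radius. All names and parameters are illustrative assumptions.
import numpy as np

def assign_depth(features_2d, lidar_points, K, T_cam_lidar, radius=3.0):
    """Attach metric depth to 2D visual features from nearby lidar points.

    features_2d  : (N, 2) pixel coordinates of visual features
    lidar_points : (M, 3) points in the lidar frame
    K            : (3, 3) camera intrinsic matrix
    T_cam_lidar  : (4, 4) lidar-to-camera extrinsic transform
    radius       : pixel search radius for association
    Returns an (N,) array of depths; np.nan where no lidar point is close.
    """
    # Transform lidar points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]  # keep points in front of the camera

    # Pinhole projection into pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    depths = np.full(len(features_2d), np.nan)
    for i, feat in enumerate(features_2d):
        d2 = np.sum((uv - feat) ** 2, axis=1)   # squared pixel distances
        j = np.argmin(d2)
        if d2[j] <= radius ** 2:
            depths[i] = pts_cam[j, 2]           # depth along the optical axis
    return depths
```

In a real front end the brute-force nearest-neighbor search would typically be replaced by a KD-tree over the projected points, and features left without depth (returned as NaN here) would still be usable as pure visual constraints in the loosely coupled pose estimator.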

Key words: intelligent transportation, multi-sensor fusion, vehicle self-positioning, lidar-camera fusion SLAM
