Journal of Transportation Systems Engineering and Information Technology, 2022, Vol. 22, Issue (2): 117-126. DOI: 10.16097/j.cnki.1009-6744.2022.02.011

• Intelligent Transportation Systems and Information Technology •

A Map-Aided Visual-Inertial Fusion Localization Method for Autonomous Vehicles

CHENG Jun, ZHANG Li-yan*, CHEN Qi-hong

  1. School of Automation, Wuhan University of Technology, Wuhan 430070, China
  • Received: 2021-10-26 Revised: 2021-12-03 Accepted: 2021-12-08 Online: 2022-04-25 Published: 2022-04-23
  • Author biography: CHENG Jun (1993- ), male, from Xianning, Hubei, China; Ph.D. candidate.
  • Supported by:
    National Key Research and Development Program of China (2019YFB1504703)

Abstract: Simultaneous Localization and Mapping (SLAM) based on visual sensors is widely used for localization in the autonomous driving field. The traditional method uses the onboard camera to characterize the surrounding environment of the vehicle while estimating the vehicle's own location; however, accuracy and robustness degrade when the vehicle moves fast. To solve this problem, this paper proposes a map-aided visual-inertial fusion localization method for autonomous driving. The method extends the ORB-SLAM2 (Oriented FAST and Rotated BRIEF SLAM2) framework with a map-saving function and splits map building and positioning into two independent modules. A map of environmental visual features is first built and saved at a low driving speed; in a second run, the onboard computer loads the previously built map to achieve high-precision and robust positioning. Since a graph-based optimization algorithm is adopted to integrate Inertial Measurement Unit (IMU) information in the map-building stage, the errors of the visual map are effectively corrected. Experiments on scenes from the KITTI dataset and on real-world scenes verified the good performance of the proposed method. The results indicate that the positioning error of the proposed method is 2.59, 2.61, and 2.73 m at speeds of 4, 8, and 16 m/s, respectively. The frame lost rate (FLR) and path lost rate (PLR) are 3.76% and 1.38%, 3.89% and 1.69%, and 4.27% and 1.84% for the three speed categories, respectively. Compared with the original ORB-SLAM2 framework, both the positioning accuracy and the robustness of the proposed method are improved.
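
To make the two-phase workflow concrete, the sketch below shows how such a map-save/reload pipeline could be driven through the public ORB-SLAM2 C++ interface. This is an illustrative sketch, not the authors' implementation: SaveMap and LoadMap are hypothetical placeholders for the paper's map-persistence extension (stock ORB-SLAM2 cannot serialize its map), whereas System, TrackStereo, ActivateLocalizationMode, and Shutdown are existing ORB-SLAM2 calls.

    // Illustrative sketch of the two-phase workflow from the abstract.
    // SaveMap()/LoadMap() are HYPOTHETICAL placeholders for the paper's
    // map-persistence extension; the remaining calls exist in stock ORB-SLAM2.
    #include <string>
    #include <vector>
    #include <opencv2/core/core.hpp>
    #include "System.h"  // ORB-SLAM2 public interface

    struct StereoFrame {
        cv::Mat left, right;  // rectified stereo image pair
        double timestamp;     // seconds
    };

    // Phase 1: drive slowly, build the visual-feature map, then persist it.
    void BuildAndSaveMap(const std::vector<StereoFrame>& frames,
                         const std::string& mapFile) {
        ORB_SLAM2::System slam("ORBvoc.txt", "stereo_settings.yaml",
                               ORB_SLAM2::System::STEREO, /*bUseViewer=*/false);
        for (const StereoFrame& f : frames) {
            slam.TrackStereo(f.left, f.right, f.timestamp);  // builds the map
        }
        // slam.SaveMap(mapFile);  // hypothetical: serialize keyframes/map points
        slam.Shutdown();
    }

    // Phase 2: second run at normal speed, localizing against the saved map.
    void LocalizeAgainstMap(const std::vector<StereoFrame>& frames,
                            const std::string& mapFile) {
        ORB_SLAM2::System slam("ORBvoc.txt", "stereo_settings.yaml",
                               ORB_SLAM2::System::STEREO, /*bUseViewer=*/false);
        // slam.LoadMap(mapFile);        // hypothetical: restore the prior map
        slam.ActivateLocalizationMode(); // real API: track only, no new mapping
        for (const StereoFrame& f : frames) {
            cv::Mat Tcw = slam.TrackStereo(f.left, f.right, f.timestamp);
            (void)Tcw;  // estimated camera pose expressed in the map frame
        }
        slam.Shutdown();
    }

The abstract does not give the exact optimization model, but a visual-inertial graph optimization of the kind it describes typically minimizes a joint cost of the form

    \min_{\mathcal{X}} \sum_{(i,j)} \left\| u_{ij} - \pi\!\left(T_i \, p_j\right) \right\|_{\Sigma_{ij}}^{2} + \sum_{k} \left\| r_{\mathrm{IMU}}\!\left(x_k, x_{k+1}\right) \right\|_{\Sigma_{k}}^{2}

where T_i is the pose of keyframe i, p_j is a map point observed at pixel u_{ij} through the camera projection \pi(\cdot), and r_{\mathrm{IMU}} is the IMU preintegration residual between consecutive states x_k and x_{k+1}. Jointly minimizing both residual sets during map building is what corrects the visual map error mentioned above.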

Key words: intelligent transportation, map-aided localization, visual-inertial fusion, autonomous vehicles

CLC Number: