Real-time LIDAR localization in natural and urban environments

Georgi Tinchev, Adrian Penate-Sanchez, Maurice Fallon

The International Journal of Robotics Research (IJRR) – IJR-19-3694

[arXiv] [Slides (TBA)]

Abstract

Localization is a key challenge in many robotics applications. In this work we explore LIDAR-based global localization in both urban and natural environments and develop a method suitable for online application. We present a method capable of achieving state-of-the-art performance while being three times faster than previous approaches, as well as occupying 70% less memory, with no significant loss of performance. Our approach leverages efficient deep learning architectures that learn compact point cloud descriptors directly from 3D data. The method matches the current scene against the prior map using an efficient feature-space representation of a set of segmented point clouds. We show that down-sampling in the inner layers of the network can significantly reduce computation time without sacrificing performance. We evaluate the proposed methods on nine scenarios from six datasets spanning urban, park, forest, and industrial environments. Our experiments demonstrate a threefold reduction in computation with only a marginal loss in localization frequency. The proposed learning method allows the full pipeline to run on robots with a limited computation payload, such as drones, quadrupeds, or UGVs, as it does not require a GPU at run time.
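
To make the two ideas in the abstract concrete, below is a minimal sketch (not the authors' implementation) of (1) a compact per-segment descriptor network with down-sampling in the inner layers to cut computation, and (2) feature-space matching of scene segments against prior-map segments by nearest neighbour. All names, layer sizes, the random down-sampling scheme, and the distance threshold are hypothetical illustrations, not the architecture or parameters used in the paper.

```python
# Hypothetical sketch of segment-descriptor learning + feature-space matching.
# This is NOT the paper's network; it only illustrates the concepts described above.
import torch
import torch.nn as nn
from scipy.spatial import cKDTree


class SegmentDescriptor(nn.Module):
    """PointNet-style encoder producing one compact descriptor per point-cloud segment."""

    def __init__(self, desc_dim: int = 16, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio  # fraction of points kept after the first block
        self.mlp1 = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, desc_dim))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) raw points of one segment
        feats = self.mlp1(points)                              # per-point features, (N, 32)
        # Down-sample inside the network: keep a subset of point features,
        # which reduces the cost of all subsequent layers.
        n_keep = max(1, int(self.keep_ratio * feats.shape[0]))
        idx = torch.randperm(feats.shape[0])[:n_keep]
        feats = self.mlp2(feats[idx])                          # (n_keep, desc_dim)
        return feats.max(dim=0).values                         # symmetric max-pool -> (desc_dim,)


def match_segments(scene_desc, map_desc, max_dist=2.0):
    """Nearest-neighbour matching of scene descriptors against a prior-map descriptor set."""
    tree = cKDTree(map_desc)             # built once per map, queried online on CPU
    dists, idx = tree.query(scene_desc)  # 1-NN in descriptor space
    return [(i, j) for i, (d, j) in enumerate(zip(dists, idx)) if d < max_dist]


if __name__ == "__main__":
    torch.manual_seed(0)
    net = SegmentDescriptor()
    # Fake data: 200 map segments and 20 scene segments of ~100 points each.
    map_desc = torch.stack([net(torch.randn(100, 3)) for _ in range(200)]).detach().numpy()
    scene_desc = torch.stack([net(torch.randn(100, 3)) for _ in range(20)]).detach().numpy()
    matches = match_segments(scene_desc, map_desc)
    print(f"{len(matches)} of 20 scene segments matched to the map")
```

Because matching reduces to nearest-neighbour queries over low-dimensional descriptors, the online stage can run on a CPU; only descriptor extraction involves the (small) network.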

RobotCar Dataset Preview

[Dataset Directory (TBA)] [Evaluation Scripts (TBA)]