Efficient LIDAR-based Global Localization

Georgi Tinchev, Adrian Penate-Sanchez, Maurice Fallon

Science Robotics

[arXiv] [Slides (TBA)]

Abstract

In this work we explore LIDAR-based global localization in both urban and natural environments and develop a method suitable for online robotics applications. Our approach leverages an efficient deep learning architecture capable of learning compact point cloud descriptors directly from 3D data. The method uses a feature space representation of a set of segmented point clouds to match observations of the current scene to a prior map. We show that down-sampling the inner layers of the network can significantly reduce computation time without sacrificing performance. We evaluate the proposed method on nine scenarios from six datasets, including urban, park, forest and industrial environments. Our experiments demonstrate that the method reduces computation by a factor of three and requires 70% less memory, with only a marginal loss in localization frequency compared to state-of-the-art approaches. Crucially, the proposed learning method does not require a GPU at run time and can thus be run on robots with limited computation and power payloads such as drones, quadrupeds or UGVs.
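The segment-matching step described above can be illustrated with a minimal sketch: given learned descriptors for segments in the prior map and in the current observation, each query segment is matched to its nearest map segment in feature space, keeping only matches below a distance threshold. The function name, descriptor dimensionality, and threshold below are hypothetical, for illustration only; they do not reproduce the paper's network or matching pipeline.

```python
import numpy as np

def match_segments(query_desc, map_desc, max_dist=0.5):
    """Match each query segment descriptor to its nearest map descriptor.

    Returns (query_idx, map_idx) pairs whose Euclidean feature-space
    distance falls below max_dist. A brute-force stand-in for the
    descriptor matching stage; a real system would use an efficient
    nearest-neighbour index over the map descriptors.
    """
    # Pairwise Euclidean distances: shape (num_query, num_map).
    d = np.linalg.norm(query_desc[:, None, :] - map_desc[None, :, :], axis=2)
    nn = d.argmin(axis=1)  # nearest map segment for each query segment
    keep = d[np.arange(len(query_desc)), nn] < max_dist
    return [(int(q), int(m)) for q, m in zip(np.flatnonzero(keep), nn[keep])]

# Toy example: 3 map segments, 2 query segments near map segments 0 and 2.
map_desc = np.array([[0.0, 0.0], [5.0, 5.0], [1.0, -1.0]])
query_desc = np.array([[0.1, 0.0], [1.0, -0.9]])
print(match_segments(query_desc, map_desc))  # → [(0, 0), (1, 2)]
```

The resulting correspondences would then feed a geometric verification step to recover the pose of the current observation relative to the map.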

RobotCar Dataset Preview

[Dataset Directory (TBA)]