Learning to See the Wood for the Trees:
Deep Laser Localization in Urban and Natural Environments on a CPU

Georgi Tinchev, Adrian Penate-Sanchez, Maurice Fallon

IEEE Robotics and Automation Letters/IEEE International Conference on Robotics and Automation (RA-L/ICRA) 2019. [arXiv]


Figure 1. PCA visualization of the feature space after training our model. Each sample represents a 3D point cloud segment; samples from the same class correspond to similar objects.



Localization in challenging, natural environments such as forests or woodlands is an important capability for many applications, from guiding a robot along a forest trail to monitoring vegetation growth with handheld sensors. In this work we explore laser-based localization in both urban and natural environments, suitable for online applications. We propose a deep learning approach capable of learning meaningful descriptors directly from 3D point clouds by comparing triplets (anchor, positive and negative examples). The approach learns a feature space representation for a set of segmented point clouds that are matched between the current and previous observations. Our learning method is tailored towards loop closure detection, resulting in a small model that can be deployed using only a CPU. This allows the full pipeline to run on robots with limited computational payload such as drones, quadrupeds or UGVs.
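The triplet comparison described above can be sketched as follows. This is a minimal illustration using Euclidean distances between descriptor vectors; the margin value and exact formulation are assumptions for illustration, not the paper's implementation:

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on descriptor vectors.

    Pulls the positive example towards the anchor and pushes the
    negative example at least `margin` further away. The margin of
    0.2 is an illustrative choice, not the paper's setting.
    """
    d_pos = math.dist(anchor, positive)  # anchor-positive distance
    d_neg = math.dist(anchor, negative)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)
```

During training, such a loss is minimized over many (anchor, positive, negative) triplets so that segments from the same object cluster together in the learned feature space, as visualized in Figure 1.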


Supplementary Material

Loss Evaluation

We trained two variants of our network in order to understand the impact of each loss term, using the MNIST dataset: 1) with the triplet loss only and 2) with both the triplet and pairwise losses. As our network requires raw point clouds instead of images, we transformed each digit image into a planar point cloud (Z = 0) by uniformly sampling 256 points from the white pixels (white represents the digit; the background is always black). The figures below present the ROC curve (left) and the Top-K accuracy (right), evaluated on the unseen portion of the MNIST dataset.
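The image-to-point-cloud conversion can be sketched as below. The function name, the threshold parameter, and sampling with replacement are assumptions for illustration; MNIST backgrounds are exactly zero, so any positive threshold separates digit from background:

```python
import random

def image_to_point_cloud(image, n_points=256, threshold=0):
    """Convert a 2D grayscale digit image (list of pixel rows) into a
    planar 3D point cloud: sample n_points uniformly from the
    foreground (non-black) pixels, with Z fixed to 0.

    `threshold` is a hypothetical parameter separating the digit from
    the background.
    """
    # Collect coordinates of all foreground (white) pixels.
    foreground = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold]
    # Sample with replacement so small digits still yield n_points points.
    picks = random.choices(foreground, k=n_points)
    return [(float(x), float(y), 0.0) for x, y in picks]
```

The resulting planar clouds can then be fed to the network exactly like segments extracted from a real 3D laser scan.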