Where did my robot go? Being robust and cautious during Laser-based Localization


Let’s imagine a scenario where a robot, walking on two or four legs or moving on wheels, is required to explore a cluttered environment containing corridors, constrictions, uneven terrain or staircases.
The robot would process proprioceptive inertial and leg/wheel odometry measurements, as well as exteroceptive observations from a 3D laser scanner. The odometry estimate would accumulate drift over time. This drift can be corrected using exteroceptive sensing via a scene registration procedure.

The Boston Dynamics humanoid Atlas during the DARPA Robotics Challenge finals – video credits: MIT Team.

Autonomy versus Safety: what are the Requirements?

Such a laser-based localization system needs to operate reliably, with a low failure rate, over long periods of time and in a variety of environments. The system should include auto-tuning capabilities and fail-safe operation modes so that it can adapt to different challenges. Reliable perception and localization are critically important for effective control, and for greater autonomy: the robot must maintain a consistent and precise reference to the terrain and objects in its environment.

The robot should be aware of its own 6 degrees-of-freedom pose with respect to the starting location at all times. If it gets lost, any plan made at start time is neither achievable nor safe anymore, and needs to be re-computed. Ideally, a planned trajectory should be executable without a human operator intervening to re-compute and re-plan.

… and the Challenges?

State-of-the-art laser-based localization systems have been demonstrated to achieve low drift over long distances, but they can still easily be made to fail in many real-world scenarios.

A cause of failure is the absence of geometric features which are necessary to constrain the alignment between two point clouds. For example, long corridors are unconstrained because of missing geometric features in the longitudinal dimension. Failures can also occur when passing through doorways, or due to occlusions which cause large variations in the volume scanned by consecutive sensor sweeps. We define “alignability” as a measure of the capacity for two point clouds to be aligned given their geometric constraints, e.g., mutually visible planar surfaces.

In many cases sensors have a limited field-of-view (FOV) because they are physically integrated within a robot’s chassis. With an obscured FOV, the degree of overlap between consecutive scans made by the sensor during motion is gradually reduced. Occlusions and constrictions in cluttered environments also introduce large variations in overlap. This is a major problem for point cloud registration algorithms, as they are sensitive to the degree of overlap between the input clouds. For these reasons, we believe there is a need for failure prediction which is robust to varying degrees of overlap.

Improving on Failure Awareness

Our previous work on laser-based localization proposed a strategy for non-incremental 3D scene registration, called Auto-tuned ICP (AICP). AICP extended a baseline Iterative Closest Point (ICP) implementation to more reliably register point clouds with reduced overlap by automatically tuning an outlier-rejection filter to account for the degree of overlap of the sensor’s footprint. This framework allowed accurate alignment to a single reference point cloud despite significant motion by the robot. But can we estimate how risky this alignment is?
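The overlap-driven tuning idea behind AICP can be sketched as follows. This is a minimal illustration, not the AICP implementation: the mapping from overlap to inlier ratio and the function names are our own assumptions, standing in for the tuned outlier-rejection filter.

```python
import numpy as np

def trimmed_icp_ratio(overlap_estimate, floor=0.35):
    """Map an estimated overlap fraction to a trimmed-ICP inlier
    ratio, clamped to a minimum (values are illustrative)."""
    return max(floor, min(1.0, overlap_estimate))

def trim_correspondences(match_distances, ratio):
    """Outlier rejection by trimming: keep only the closest
    `ratio` fraction of point matches."""
    k = max(1, int(len(match_distances) * ratio))
    return np.argsort(match_distances)[:k]
```

With low overlap, the inlier ratio is reduced accordingly, so points without a true counterpart in the other cloud are less likely to corrupt the error minimization.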

The AICP algorithm was initially implemented to trigger a reference point cloud update when overlap decreases below an empirical threshold, as alignment would become risky in that case. However, overlap and alignability individually are not sufficient to predict a failure in the scenario we placed our robot in.



An example of unconstrained alignment: side view of a robot crossing the doorway between rooms A and B while capturing point clouds at poses i and j. According to our metric the clouds overlap by as little as 15%, and alignability is low due to the absence of matched planes in the direction of the reader. At the top we show the raw point clouds from each location. At the bottom, our proposed method analyses overlap and alignability, the latter derived by matching planes common to both point clouds. These are used to infer the suitability of the alignment.





In this work we aim for continuously reliable localization by applying a registration correction to the robot state estimate depending on a predicted risk of alignment failure, which is based on both the overlap between the input point clouds and the alignability parameters. When the risk is high, we query a reference point cloud update and rely on proprioception until the next laser measurement becomes available.
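The decision logic above can be summarized in a short sketch. The threshold value and the function names are illustrative, not taken from the paper’s implementation:

```python
RISK_THRESHOLD = 0.5  # illustrative value; the paper uses a single fixed threshold

def localization_step(predicted_risk, odometry_pose, registration_correction):
    """One cycle of the risk-gated localization loop (sketch).

    High predicted risk: skip the laser correction, trust
    proprioception, and request a new reference point cloud.
    Low risk: apply the registration correction to the pose.
    """
    if predicted_risk >= RISK_THRESHOLD:
        return odometry_pose, "request_reference_update"
    return registration_correction(odometry_pose), "apply_correction"
```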

How do we measure Overlap?

We define the overlap between two point clouds using the initial estimated alignment between them (from wheel odometry, for example), their structural features, and information about free space which is directly available given the sensor’s origin. Two octree structures are constructed to model free and occupied space from the two clouds. The set of common voxels reflects what we intuitively think of as the volume of overlap between the clouds. The overlap parameter is then defined as the minimum percentage of points belonging to this volume between the two clouds.
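The idea can be sketched in a few lines. This is a simplified illustration, not the paper’s implementation: a flat voxel grid stands in for the octrees, and the free-space modeling from the sensor origin is omitted.

```python
import numpy as np

def occupied_voxels(points, voxel_size):
    """Occupied-voxel keys of a cloud (a flat voxel grid stands in
    for the paper's octree, for brevity)."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def overlap_parameter(cloud_a, cloud_b, voxel_size=0.1):
    """Intersect the occupied voxels of two pre-aligned clouds, then
    report the minimum fraction of either cloud's points that falls
    inside the shared volume."""
    common = occupied_voxels(cloud_a, voxel_size) & occupied_voxels(cloud_b, voxel_size)

    def fraction_inside(points):
        keys = map(tuple, np.floor(points / voxel_size).astype(int))
        return sum(k in common for k in keys) / len(points)

    return min(fraction_inside(cloud_a), fraction_inside(cloud_b))
```

Taking the minimum over the two clouds makes the parameter conservative: a small cloud fully contained in a much larger one does not count as high overlap for the larger cloud.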

The octree-based overlap parameter (on the right) corrects for some issues occurring in previous work, particularly when occlusions occur. On the left we illustrate a baseline FOV-based overlap parameter. Two point clouds (in blue and green) have been captured from subsequent sensor poses, the latter after crossing a doorway. The region of overlap, in red, is delimited by the sensor footprint. In this case the wall between rooms A and B occludes the view between the two poses and the region of overlap is over-estimated. On the right, the volume shown in red, identified by our octree-based overlap parameter, reflects what we intuitively think of as the volume of overlap between the point clouds.

How do we measure Alignability?

We derive an alignability metric which quantifies the degree to which point cloud alignment is constrained by exploiting the planar geometry commonly found in indoor environments. The alignment between a pair of 3D point clouds is well-constrained if the transform aligning them is constrained in all three dimensions. Intuitively, we envisage that at least three mutually visible non-parallel planes should exist.

Our strategy involves two main steps:

  1. we match locally planar distributions of points common to the input clouds and build a matrix N whose rows are the normal directions extracted from these planar patches;
  2. we treat N as the set of available constraints between the clouds, and compute alignability from Principal Component Analysis (PCA) on the row vectors of N.
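The two steps above can be sketched as follows, starting from the matched plane normals. The exact scaling of the score is our assumption, used here only to illustrate the PCA step:

```python
import numpy as np

def alignability(matched_normals):
    """Alignability from the matrix N of matched plane normals
    (one row per matched planar patch).

    PCA on the rows of N: the singular values measure how well the
    normals span all three directions. The ratio of the smallest to
    the largest spread, scaled to a percentage, is an illustrative
    score; the paper's exact scaling may differ.
    """
    N = np.asarray(matched_normals, dtype=float)
    if N.shape[0] < 3:
        return 0.0  # fewer than three planes cannot constrain 3D translation
    N = N / np.linalg.norm(N, axis=1, keepdims=True)  # unit normals
    s = np.linalg.svd(N, compute_uv=False)
    return 100.0 * s[-1] / s[0]
```

Three mutually orthogonal planes give the maximum score, while a set of parallel or coplanar normals collapses the smallest singular value to zero, flagging an unconstrained direction.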

Predicting Alignment Risk… from Overlap and Alignability

We learn a model for alignment risk using a Support Vector Classifier with a third-degree polynomial kernel, taking the overlap and alignability measures as input features. The model allows us to prevent registration failures when the geometry is unstable and the overlap is not uniform.
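A minimal version of this training setup, using scikit-learn, might look as follows. The data here is synthetic and the labelling rule is invented purely for illustration; in the paper the labels come from actual registration outcomes:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the labelled alignment samples:
# features are (overlap %, alignability %), label 1 = failure.
rng = np.random.default_rng(0)
n = 200
failures = rng.uniform([0, 0], [30, 20], size=(n, 2))      # low overlap AND low alignability
successes = rng.uniform([40, 30], [100, 100], size=(n, 2))
X = np.vstack([failures, successes])
y = np.array([1] * n + [0] * n)

# Third-degree polynomial SVC, as in the paper.
clf = SVC(kernel="poly", degree=3).fit(X, y)

def alignment_risk(overlap, align):
    """1 = risky alignment, 0 = safe (trained on synthetic labels)."""
    return int(clf.predict([[overlap, align]])[0])
```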



Alignment risk model learned from overlap and alignability estimates. The classifier has been trained on a set of 1200 binary labelled samples (1:failure/0:success). We show the predictions on the test set, where high risk of alignment failure (in red) is expected for low overlap and alignability values, following a polynomial relationship. We observe that using only one parameter (overlap or alignability) is not sufficient, e.g., just using a threshold at 5% on alignability would still accept all samples with overlap below 30%, which would risk a faulty alignment.



How is this approach different?

Geometry-driven approach: typically the stability of a registration is evaluated after the set of point matches has been selected, by Principal Component Analysis (PCA) of the covariance matrix used for error minimization. However, this analysis depends on the data association step of the registration algorithm, which is error-prone and itself a common cause of ICP registration failure. Our approach instead performs the analysis before point-wise data association, and is therefore independent of the registration method.
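For contrast, the classical post-association stability check described above can be sketched like this. It is a minimal illustration of the general technique, not a specific cited implementation; the function names and condition-number threshold are ours:

```python
import numpy as np

def constraint_matrix(points, normals):
    """6x6 point-to-plane information matrix built from matched
    points and surface normals: the classical post-association
    stability analysis."""
    H = np.zeros((6, 6))
    for p, n in zip(np.asarray(points, float), np.asarray(normals, float)):
        j = np.concatenate([np.cross(p, n), n])  # [rotational | translational]
        H += np.outer(j, j)
    return H

def is_degenerate(points, normals, cond_limit=1e3):
    """Flag an ill-conditioned constraint matrix, i.e. a nearly
    unobservable degree of freedom (threshold is illustrative)."""
    w = np.linalg.eigvalsh(constraint_matrix(points, normals))
    return w[0] < w[-1] / cond_limit
```

Note that this check can only run after correspondences exist, which is exactly the dependency our pre-association analysis avoids.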

One could argue that plane matching is a variant of data association. The data association step of ICP typically involves a local point search within each pair of patches. Instead, we analyse semantic representations (the matched plane patches), rather than points.

Alignability is not enough: we learn an alignment risk model based on both alignability and overlap, resulting in more robust predictions when the geometry is unstable and the overlap is not uniform.

Avoiding parameter tuning: during operation on different datasets we used a single fixed threshold for alignment risk. This threshold did not require fine-tuning, as it is the optimal one learned by our model.

About the system back-end: We believe that our model could facilitate the search of loop closures in a SLAM system, by sorting the candidates by alignment risk score prior to data association.

3D-view of the map reconstructed during exploration of the Informatics Forum building in Edinburgh. The robot navigates along a ∼ 180 m path while exploring the ground floor area, which includes two cluttered rooms, two wide atria with high ceilings and a corridor. The exploration involves passing through constrictions such as doorways. The trajectory estimated by our localization system using the proposed alignment risk (AR) for failure prediction goes from start to end, with no loop closures being performed. Along the path, AR captured and prevented all 21 alignment failures, mainly near the corridor, clutter and doorways. The pose error at the end is computed from the transform which aligns the last cloud onto the first one (in red and black) and amounts to 0.41% of the trajectory length.

More detailed information and discussions can be found in our paper.

This blog post provided an overview of our paper “Predicting Alignment Risk to Prevent Localization Failure” by Simona Nobili, Georgi Tinchev, and Maurice Fallon which was accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 2018.

Our video shows how this approach can help improve the reliability of laser-based localization during exploration of unknown man-made environments, when navigating along corridors, through constrictions and environmental clutter. It also illustrates how our overlap and alignability metrics are captured: