
Introspective Radar Odometry

How do we know when we don’t know?

This is an important question to answer in any situation where we need to navigate through our surroundings, and something any autonomous mobile robot needs to know too. We discuss this introspection capability and its importance to our radar-based navigation algorithms in our paper to be presented at ITSC 2019 – “What Could Go Wrong? Introspective Radar Odometry in Challenging Environments” by Roberto Aldera, Daniele De Martini, Matthew Gadd, and Paul Newman.

  • R. Aldera, D. De Martini, M. Gadd, and P. Newman, “What Could Go Wrong? Introspective Radar Odometry in Challenging Environments,” in IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019.

With introspection, our Radar Odometry (RO) system can now operate in environments it finds challenging and produce more reliable estimates, shown in the figure below against ground truth data from a GNSS/INS receiver. Tall trees and hedges in this example make motion estimation tricky – notice how in the radar scan we observe two parallel lines that leave the algorithm with insufficient constraints on how we’ve moved forward along the road.

A radar scan (giving an overhead view after one full rotation) and the corresponding camera image of the scene. This plot shows our estimated speed over time using radar (green), as compared to ground truth measurements (black). Our contribution leads to a reduction in both failure magnitude and frequency with estimates now more inclined to stay within the 2σ bounds.
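
To make the notion of a “failure” in a plot like this concrete, here is a minimal sketch of how speed estimates could be checked against such a band. The assumption that the band is centred on the GNSS/INS speed, and the NumPy array inputs, are illustrative and not taken from the paper.

```python
import numpy as np

def speed_failures(ro_speed, gt_speed, sigma):
    """Flag timestamps where the RO speed estimate leaves a 2-sigma band
    around the ground-truth (GNSS/INS) speed; returns a boolean mask and
    the magnitude of each flagged error."""
    error = np.abs(ro_speed - gt_speed)
    mask = error > 2.0 * sigma
    return mask, error[mask]

# Failure frequency is then mask.mean(); failure magnitude is err.max().
```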


So why are we using a radar? Can’t you figure out where you are and how you’re moving with cameras and lasers already? As I’ve engaged more with other researchers in this field who have experience in using vision or laser data, I find myself frequently having to explain why radar is not just a rubbish lidar. The short answer is to highlight the areas in which radar excels:

  • Impressive range: long-range feature detection (up to around 600 m away depending on observability) can be incredibly useful when operating in sparse environments where there aren’t many things nearby to use to navigate.
  • Weather indifference: come rain, fog, sleet, or snow, our 77 GHz radar is not bothered and will allow for reliable detection of those same features you would observe on a sunny day (or in pitch darkness).
  • Ruggedness: encased in a sealed unit, dust isn’t an issue as it would be if we had lenses, so we don’t have to worry about the dirt being thrown up during off-road use.

Shown below is our Navtech CTS-350X FMCW radar unit. For more on radar and what it offers over traditional sensors, I point you to my previous blog post: What About Radar?


The Navtech CTS-350X radar used by our navigation algorithms, pictured here in Iceland on the back of our customised off-road vehicle for some testing in harsh terrain. 


Prior to our recent work, our radar scan matching system had no means of knowing when it had produced a poor estimate of the relative motion between two scans. When it failed, the RO predictions could not be flagged as having come from poor matches, so validating performance required comparison with another odometry source. Without any notion of uncertainty, the system’s utility was limited.

RO fails in a number of identifiable situations:

  • Corridors: tall hedges in rural settings or long and straight roads in urban environments produce radar scans that lack distinctive features. It may be challenging to constrain the estimate of forward motion in environments that have these parallel rows of landmarks (see example from first figure).
  • Feature absence: the planar scans from our radar are dependent on the terrain being traversed. If we drive up a sudden slope, or down into a ditch, large swathes of the scene may disappear. The figure below shows an example from some off-road driving we did in Iceland.

These two scans were taken a quarter of a second apart on a traversal of an off-road track in Iceland. The vehicle is at the centre of each scan (small crosshairs), travelling to the left in each frame. On the left, the first scan shows a few trees, bushes and rocks that are observable ahead (first red arrow) while the vehicle is undergoing relatively little pitching. The next scan on the right shows the effect of pitching upwards, where features that had appeared in front of the vehicle have almost all disappeared, and the ground is now visible behind us (second red arrow).


Our radar scan matching algorithm makes use of a correspondence method whose principal eigenvector is used to associate points between scans. By sorting the components of this eigenvector and interpreting them as per-correspondence confidence scores, we were able to train a classifier to predict the quality of a match between two scans. By detecting these brittle matches, the motion estimation algorithm can defer to an alternative estimator (like a Kalman Filter) until the scan matching process has recovered from its period of poor performance.
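
As a rough illustration of that pipeline, the sketch below extracts a per-correspondence confidence score from the principal eigenvector of a pairwise compatibility matrix and feeds the sorted scores to a binary classifier. The compatibility matrix `C`, the fixed feature length, and the choice of an SVM are assumptions made for the example, not a description of our exact implementation.

```python
import numpy as np
from sklearn.svm import SVC

def principal_eigenvector(C, n_iters=100):
    """Power iteration on a symmetric, non-negative compatibility matrix
    whose entries score how mutually consistent two candidate landmark
    correspondences are."""
    v = np.full(C.shape[0], 1.0 / np.sqrt(C.shape[0]))
    for _ in range(n_iters):
        v = C @ v
        v /= np.linalg.norm(v)
    return v

def match_features(C, length=200):
    """Sort the eigenvector components (one confidence per candidate
    correspondence) and pad/truncate to a fixed-length feature vector."""
    scores = np.sort(principal_eigenvector(C))[::-1]
    feats = np.zeros(length)
    n = min(length, scores.size)
    feats[:n] = scores[:n]
    return feats

# Train on scan pairs labelled good/failed by comparison with ground truth:
# clf = SVC(probability=True).fit(X, y)   # X: stacked features, y: labels
# p_fail = clf.predict_proba(match_features(C)[None, :])[0, 1]
```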

By making these predictions of failures and deferring to an alternative estimator, we see 24.7% fewer motion estimation failures over the course of a 15.81 km rural test dataset that was particularly challenging for RO. Datasets from these rural areas were specifically selected as they included scenes where we encountered failures not often observed in other settings (RO is impressively robust and finding significant failures was a challenge in itself).
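
To show what “deferring to an alternative estimator” might look like in practice, here is a minimal gating sketch around a one-dimensional constant-velocity Kalman filter for forward speed. The noise values, threshold, and 1D state are placeholders; the actual system estimates full planar motion.

```python
class SpeedFallback:
    """Tiny 1D constant-velocity Kalman filter that stands in for RO
    whenever the classifier flags the scan match as likely to fail."""

    def __init__(self, process_noise=0.5, measurement_noise=0.1):
        self.v, self.p = 0.0, 1.0                  # speed estimate, variance
        self.q, self.r = process_noise, measurement_noise

    def predict(self):
        self.p += self.q                           # constant-velocity model
        return self.v

    def update(self, measured_speed):
        k = self.p / (self.p + self.r)             # Kalman gain
        self.v += k * (measured_speed - self.v)
        self.p *= 1.0 - k

def fused_speed(ro_speed, p_fail, kf, threshold=0.5):
    """Use the filter prediction when the match looks brittle, otherwise
    trust RO and feed it back into the filter."""
    prediction = kf.predict()
    if p_fail > threshold:
        return prediction                          # ignore the suspect RO estimate
    kf.update(ro_speed)
    return ro_speed
```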

Below are some qualitative results showing the gains from this new approach: the ground trace of the baseline RO (red) is “unwrapped” with each addition to the system (orange, blue), and finally, with the introspective component added, the Enhanced RO trace (green) now relatively closely resembles our ground truth measurements.


Ground traces from integrated odometry estimates show how, with each addition to our RO system, the traversed path on this challenging dataset approximates our ground truth trajectory more closely. For quantitative results, please see the results section in the full paper.
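
For anyone wanting to produce this kind of plot from their own odometry, the sketch below chains per-scan relative motions into a 2D ground trace. The (dx, dy, dθ) parameterisation and frame conventions are assumptions made for the example rather than our exact pipeline.

```python
import numpy as np

def unwrap_ground_trace(relative_poses):
    """Integrate body-frame relative motions (dx, dy, dtheta) from
    successive scan matches into a world-frame 2D trace for plotting
    against a GNSS/INS trajectory."""
    x = y = theta = 0.0
    trace = [(x, y)]
    for dx, dy, dtheta in relative_poses:
        # Rotate the body-frame increment into the world frame, then step.
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dtheta
        trace.append((x, y))
    return np.array(trace)
```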


Radar really is just getting started and the prospects for future work are rather exciting. If you’re interested in learning more about the work we’re doing in the Mobile Robotics Group with this sensor, I’d encourage you to have a look at the shortlist below of some of our previous radar-related publications. This post has provided a rough outline of our recent work and some of the problems we’re tackling as part of a larger thread of doing autonomy in some really difficult places. Keep an eye out for us at ITSC 2019!

Further reading on previous radar-related work:

  • R. Aldera, D. De Martini, M. Gadd, and P. Newman, “Fast Radar Motion Estimation with a Learnt Focus of Attention using Weak Supervision,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, 2019.
  • S. Cen and P. Newman, “Radar-only ego-motion estimation in difficult settings via graph matching,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, 2019.
  • S. H. Cen and P. Newman, “Precise Ego-Motion Estimation with Millimeter-Wave Radar under Diverse and Challenging Conditions,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2018.