Fast Radar Motion Estimation

Matt Gadd, Postdoctoral Research Assistant

This blog post provides an overview of our paper “Fast Radar Motion Estimation with a Learnt Focus of Attention using Weak Supervision” by Roberto Aldera, Daniele De Martini, Matthew Gadd, and Paul Newman which was recently accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 2019.

For a quick overview you can take a look at our video:

Why radar?

Radar is ideal for ego-motion estimation and localisation tasks as it is good at detecting stable environmental features under adverse weather and lighting conditions.

An unfiltered radar scan with returns which are not spatially coherent from multiple viewpoints.

However, radar measurements are complex for various reasons:

  1. The beam is not narrow and tightly focused,
  2. Returns are affected by various noise sources, and
  3. The interaction of the electromagnetic wave with the environment is more complex than for time-of-flight lasers.

Challenges

Using radar scans for precise ego-motion estimation means we have to deal with complex measurement patterns which are not intuitive. This boils down to a data association problem: “what detail in the last frame is relevant to the detail seen in this frame?” We wanted to make that task simpler.

Ideally, we want to operate with returns from artefacts in the scene that are visible from multiple views, excluding from the scan

  1. Noise effects,
  2. Dynamic objects, and
  3. Reflections from objects that are highly specular or not visible from multiple views.

With this design principle in mind, we built a filter, or classifier, that only passes through measurements which, in the context of the entire scene, are visible across wide baselines and are suitable for the data association stage.
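
As a rough illustration (not the paper's implementation), applying such a filter at runtime amounts to masking the polar power scan before it reaches the data association stage. The array shapes and the `filter_scan` helper below are assumptions made for this sketch.

    import numpy as np

    def filter_scan(scan: np.ndarray, keep_mask: np.ndarray) -> np.ndarray:
        """Suppress power returns that the classifier marks as unreliable.

        scan      : (n_azimuths, n_range_bins) array of raw radar power returns.
        keep_mask : boolean array of the same shape; True marks bins judged
                    to be visible across wide baselines.
        """
        assert scan.shape == keep_mask.shape
        return np.where(keep_mask, scan, 0.0)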

Our approach

We decided that manually labelling training data for this classifier was not practical: given how much radar data can change under small pose increments, it is not a trivial task to decide which detail in one scan is visible in another.

To avoid this, we created annotations in a weakly-supervised fashion, simply accumulating radar returns over a window of consecutive scans and looking for spatial coincidence, keeping the power bins that contain objects observed to be wide-baseline visible. To bootstrap this process we used an external ego-motion source, Visual Odometry (VO).
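
The sketch below gives a flavour of this labelling step under our own simplifying assumptions; the `weak_labels` helper, the threshold values, and the premise that the window of scans has already been warped into a common frame using the VO poses are all illustrative, not the paper's exact procedure.

    import numpy as np

    def weak_labels(warped_scans, power_thresh=0.6, min_views=4):
        """Mark cells that hold a strong return in at least `min_views` scans.

        warped_scans : list of (H, W) Cartesian power grids, each already
                       warped into the frame of the central scan using VO.
        Returns a boolean (H, W) mask of spatially coincident, wide-baseline
        visible cells, which can then be mapped back to the polar bins of the
        central scan to form its annotation.
        """
        hits = np.zeros_like(warped_scans[0], dtype=np.int32)
        for grid in warped_scans:
            hits += (grid > power_thresh).astype(np.int32)
        return hits >= min_views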

The following videos show the annotation procedure in action in both Cartesian and polar representations of the radar data.

This classification procedure is sufficient for prefiltering radar measurements. However, in practice we found that our implementation was slower than the baseline radar odometry (RO) system we wanted to use downstream, due to the number of scans we must keep in memory during the windowing procedure.

A complete radar scan in polar coordinates (range vs. azimuth).

Classification masks from the labelling procedure.

As such, we trained a popular segmentation network (U-Net) by providing it with image representations of the raw data in polar space and the corresponding annotated image obtained from the labelling procedure. Deployed on a modern GPU, this network satisfied our requirement for the rate of filtering mask production.
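
For readers who want a concrete picture of the training setup, here is a minimal sketch: it uses PyTorch with a small stand-in convolutional model in place of a full U-Net, and treats the per-bin keep/discard decision as binary segmentation with the weak annotations as targets. All names and hyperparameters are illustrative assumptions, not the values used in the paper.

    import torch
    import torch.nn as nn

    # Stand-in for a full U-Net: any encoder-decoder that emits one logit per
    # polar power bin would slot in here.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    )
    criterion = nn.BCEWithLogitsLoss()  # per-bin keep/discard decision
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    def training_step(polar_scan, weak_mask):
        """One optimisation step on a batch of polar scans.

        polar_scan : (B, 1, n_azimuths, n_range_bins) float tensor of power returns.
        weak_mask  : same-shaped float tensor of 0/1 weak annotations.
        """
        optimiser.zero_grad()
        logits = model(polar_scan)
        loss = criterion(logits, weak_mask)
        loss.backward()
        optimiser.step()
        return loss.item()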

Improving radar motion estimation rates

The figures below compare the quality of odometry estimation against the baseline radar odometry. Here, the vehicle has traversed approximately 9 km of a 10 km urban route in mixed traffic. Both the labeller and the network (gt and unet) rival baseline RO despite having access to significantly fewer features.

Profiles of translational and rotational velocities.

The frame-to-frame odometry estimation time for this trial – some instrumentation for which is shown below – is a marked improvement over the baseline and approaches that of the ground-truth annotation.

Profile of time taken to match consecutive radar scans while performing motion estimation.

This contribution is more than a neat efficiency improvement: by using our technique to reduce processing times, motion estimation can now be run in real time on a robot using radar alone.


© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
