21 Nov 2018
What about radar?
Radar is awesome.
When most people think of radar, they imagine a line sweeping around an old CRT display with its characteristic green hue – interrupted by approaching aircraft, or perhaps distant ships over the horizon. While our impressions of cameras and GPS have kept pace with advances in the hardware, our perception of radar is largely based on a sensor of the past. And even among roboticists, who should know a little more about the sensors available, this technology is underutilised.
However, radar has advanced far beyond the point of crude object detection and is changing the way mobile robots navigate. Navtech’s FMCW scanning radar allows us to observe the world in 400 equally-spaced azimuthal slices, each containing 2000 range measurements. This equates to a circular 360-degree scan of the environment, 500 metres all around, at a resolution of 0.25 metres per bin. These 800 000 readings update 4 times every second and tell us about the scene structure in great detail, making radar useful for far more than just detecting what large metallic objects might be approaching us. By observing the static environment itself, we can determine our pose in the world – as previously demonstrated and summarised here as Radar Odometry (RO).
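To make those numbers concrete, here is a minimal sketch in Python with NumPy. It is purely illustrative: the constants mirror the figures above, but the array layout is an assumption for exposition rather than any particular driver’s API.

```python
import numpy as np

# Nominal scan geometry, following the figures quoted above.
NUM_AZIMUTHS = 400      # equally-spaced angular positions per revolution
NUM_RANGE_BINS = 2000   # range measurements along each azimuth
BIN_SIZE_M = 0.25       # metres per range bin, giving a 500 m maximum range
SCAN_RATE_HZ = 4        # full revolutions per second

# One scan is naturally a 400 x 2000 array of power returns (assumed layout).
scan = np.zeros((NUM_AZIMUTHS, NUM_RANGE_BINS), dtype=np.float32)

# The angle and range represented by each cell follow directly from the layout.
azimuth_angles_rad = np.linspace(0.0, 2 * np.pi, NUM_AZIMUTHS, endpoint=False)
bin_ranges_m = np.arange(1, NUM_RANGE_BINS + 1) * BIN_SIZE_M   # 0.25 m ... 500 m

print(scan.size)          # 800000 readings per scan
print(bin_ranges_m[-1])   # 500.0 metres maximum range
```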
And while cameras, lidar, and GPS have been successful in pose estimation and localisation tasks, they fail under certain challenging conditions. Cameras are blinded by direct sunlight, lidar struggles in heavy rain, and GPS won’t work indoors. Although a multi-sensor approach lets us combine sensors with different failure modes to achieve robust autonomy, none of these offers something our radar does – reliable performance in any lighting and weather conditions, at any time of day or night, with no external infrastructure, indoors or on top of a volcano (more on this later).
As mentioned earlier, radar measurements are arranged in azimuths, each containing a fixed number of range bins. The radar takes 2000 measurements along a single azimuth at ranges spaced 0.25 metres apart, then rotates to the next azimuth and measures again, repeating this process across 400 angular positions per sweep. To image our radar data, each azimuth reading can be rendered as a line of pixels, where each column of the image represents a single azimuth and range bins (pixels) near the bottom of the image correspond to nearby objects (the right portion of the video shows this azimuth-versus-range image). Alternatively, for a more intuitive visualisation, we can wrap these azimuth readings around a centre point and arrange them into a Cartesian image that provides an overhead view of the scene in XY coordinates (left portion of the video). The following clip gives an idea of what our data looks like once it has been imaged:
Video showing both views of radar data from the Oxford 10k loop.
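To make the polar-to-Cartesian mapping described above concrete, here is a rough sketch of how such an overhead image could be produced. It uses a simple nearest-neighbour lookup, and the grid size and resolution are arbitrary choices for illustration, not the settings used to render the clip:

```python
import numpy as np

def polar_to_cartesian(scan, bin_size_m=0.25, cart_size_px=1000, cart_res_m=0.5):
    """Resample a (num_azimuths, num_range_bins) polar scan onto an overhead
    XY grid centred on the sensor, using nearest-neighbour lookup."""
    num_azimuths, num_bins = scan.shape

    # Metric coordinates of every output pixel, with the sensor at the centre.
    coords = (np.arange(cart_size_px) - cart_size_px / 2) * cart_res_m
    x, y = np.meshgrid(coords, -coords)           # image rows increase downwards

    # Map each pixel back to the polar cell it falls in.
    ranges = np.hypot(x, y)
    angles = np.mod(np.arctan2(x, y), 2 * np.pi)  # 0 rad pointing "up" in the image
    az_idx = np.round(angles / (2 * np.pi) * num_azimuths).astype(int) % num_azimuths
    bin_idx = np.round(ranges / bin_size_m).astype(int)

    # Pixels beyond the maximum range stay empty.
    cart = np.zeros((cart_size_px, cart_size_px), dtype=scan.dtype)
    valid = bin_idx < num_bins
    cart[valid] = scan[az_idx[valid], bin_idx[valid]]
    return cart
```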
It doesn’t take long to realise that working with radar data is not simple – the sensor interacts with objects in the environment in ways that are difficult to predict, making the resulting measurements complex. It isn’t immediately obvious which returns correspond to real objects, and which might just be reflections off the side of a nearby bus or lamp post. Have a look at these two frames, for example: taken 0.25 seconds apart, they show how sensitive radar is to relatively small pose increments:
Radar data is complex – notice how these sequential scans are sensitive to small pose increments.
Some of our latest research has addressed these complexities by using weak supervision to actively downsample the measurements and retain spatially coherent features. In other words, we’ve developed a filter that keeps only the measurements in the current frame that are likely to be observed in the next. This allows our radar navigation algorithms to run significantly faster than before – recall from earlier that there are 800 000 points which could potentially be considered during pose estimation, so knowing which of these to discard before we begin processing is really important.
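The real filter is learned from weakly supervised data, so the sketch below is only a stand-in to show the general shape of the step: score every return, threshold the scores, and hand only the survivors to the pose estimator. The scoring function here is a deliberately trivial placeholder (keep the strongest returns by power), not the model described above:

```python
import numpy as np

def filter_scan(scan, score_fn, keep_threshold=0.5):
    """Keep only the returns whose predicted 'usefulness' score exceeds a
    threshold. `score_fn` is a hypothetical placeholder for a learned model."""
    scores = score_fn(scan)                 # same shape as scan, values in [0, 1]
    mask = scores > keep_threshold
    az_idx, bin_idx = np.nonzero(mask)      # indices of the surviving returns
    return az_idx, bin_idx, scan[mask]

# Trivial stand-in "model": keep roughly the strongest 5% of returns by power.
def power_percentile_score(scan):
    cutoff = np.percentile(scan, 95)
    return (scan >= cutoff).astype(np.float32)

scan = np.random.rand(400, 2000).astype(np.float32)   # dummy scan for illustration
az_idx, bin_idx, power = filter_scan(scan, power_percentile_score)
print(len(power), "of", scan.size, "returns kept")
```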
Continuing to develop our interest in radar is a pretty sensible thing to do. If we can get around the difficulties associated with using this sensor, many of the challenges that make autonomy implausible in harsh environments fall away. Earlier in the year we travelled to Iceland to outfit a custom off-road truck with a few of our sensors, including radar. Data collection took us over some difficult terrain, ranging from dense forests in the pitch dark to the top of a volcano at sunrise (incidentally, this was the same volcano that erupted and shut down air traffic across Europe for a week in 2010 – Eyjafjallajökull, if anyone was curious). We crossed a few streams and bumped our heads more than once on the inside of the vehicle’s roof, returning with a fresh sense of what it’ll take to navigate in environments that aren’t as forgiving as urban Oxford.
(hint: we’ll need a radar.)
Driving up a glacier in Iceland to the Eyjafjallajökull volcano during a data collection run – just another day at the Oxford Robotics Institute (although admittedly this was a Friday).