Localisation answers the all-important question: “Where am I?” This is a fundamental requirement for mobile robots: to be useful, they must have a sense of place. Exactly how this question is framed, and indeed how it is answered, is an important facet of our work.

We place a great deal of emphasis on Infrastructure-Free Navigation – that is, figuring out where our vehicles (robots or cameras) are without having to modify the environment or depend on any bespoke hardware (like GPS) in the workspace.

Why is this so important? Well, if we can make our mobile machines independently capable, using only onboard sensors and computing, then they are inherently more flexible, more useful and, vitally, cheaper to use. This is important to us. Of course we have no problem using systems like GPS when they are available (so only in outdoor settings), but we will never depend on them entirely. That is too limiting.

In this research topic we examine large scale, long duration localisation, both indoors and outdoors, day and night, in rain, snow and even some sunshine.  In doing this we come across profound questions regarding spatial representations, dealing with stark scene changes (from midnight to midday) and handling slowly changing structure.

We typically use cameras or 2D lasers as sensors, as these are cheap and ubiquitous, but we have been known to use millimetre-wave radar as well.


Our latest research

Learning Place-Dependent Feature Detectors for Localisation Across Extreme Lighting and Weather Conditions

This work is about metric localisation across extreme lighting and weather conditions. The typical approach in robot vision is to use a point-feature-based system for localisation tasks. However, these systems typically fail when appearance changes are too drastic. This research takes a ...
Read More

Robust, Long-Term Visual Localisation using Illumination Invariance

This work is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem involves ...
Read More
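A core part of that correspondence problem is reducing the effect of scene illumination before matching features. A minimal sketch of the kind of one-channel illumination-invariant transform used in this line of work, written in Python with NumPy; the channel weighting `alpha` depends on the camera's spectral response, and the value below is purely illustrative:

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an RGB image (H x W x 3, positive values) to a one-channel
    image via a log-chromaticity combination of the colour channels.
    `alpha` should be derived from the camera's sensor response;
    0.48 here is an illustrative value, not a calibrated one."""
    rgb = np.clip(np.asarray(rgb, dtype=np.float64), 1e-6, None)  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # The weights sum to one across the log channels, so a uniform
    # brightness change (same scale on all channels) cancels out.
    return 0.5 + np.log(g) - alpha * np.log(b) - (1 - alpha) * np.log(r)
```

Because the log-channel weights sum to one, globally scaling the image brightness leaves the output unchanged, which is the property that makes matching across lighting changes more tractable.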

Distraction Suppression for Vision-Based Pose Estimation at City Scales

This work addresses the challenging problem of vision-based pose estimation in busy and distracting urban environments. By leveraging laser-generated 3D scene priors, we demonstrate how distracting objects of arbitrary types can be identified and masked in order to improve egomotion ...
Read More

Experience-Based Navigation

This work addresses the difficult problem of navigation in changing, dynamic environments. Assuming the world is static in appearance results in brittle mapping and localisation systems. Change comes from many sources (dynamic objects, time of day, weather, seasons) and over different time ...
Read More

Dealing with Shadows: Capturing Intrinsic Scene Appearance for Image-based Outdoor Localisation

In outdoor environments shadows are common. These typically strong visual features cause considerable change in the appearance of a place, and therefore confound vision-based localisation approaches. In this work we describe how to convert a colour image of the ...
Read More

Generation and Exploitation of Local Orthographic Imagery for Road Vehicle Localisation

This work performs visual localisation using synthesised local orthographic imagery. We exploit state-of-the-art stereo visual odometry (VO) on our survey vehicle to generate high-precision synthetic orthographic images of the road surface as would be seen from overhead ...
Read More

Continuous Vehicle Localisation Using Sparse 3D Sensing, Kernelised Renyi Distance and Fast Gauss Transforms

Abstract—This paper is about estimating a smooth, continuous-time trajectory of a vehicle relative to a prior 3D laser map. We pose the estimation problem as that of finding a sequence of Catmull-Rom splines which optimise the Kernelised Rényi Distance (KRD) ...
Read More
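The Catmull-Rom parameterisation mentioned above interpolates a smooth trajectory through a sequence of control poses. A minimal sketch of evaluating one spline segment in Python (the KRD objective and its Fast Gauss Transform acceleration are omitted entirely):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between control points
    p1 and p2 at parameter t in [0, 1]. The segment passes exactly
    through p1 (at t=0) and p2 (at t=1), with p0 and p3 shaping
    the tangents at the endpoints."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=np.float64) for p in (p0, p1, p2, p3))
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)
```

Chaining such segments over consecutive control points yields a C1-continuous trajectory, which is what makes a continuous-time formulation of the vehicle pose possible.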

FAB-MAP 3D: Topological Mapping with Spatial and Visual Appearance

Abstract— This paper describes a probabilistic framework for appearance based navigation and mapping using spatial and visual appearance data. Like much recent work on appearance based navigation we adopt a bag-of-words approach in which positive or negative observations of visual ...
Read More
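At its simplest, appearance-based place recognition with a bag-of-words model scores a live observation against learned per-place word statistics. A deliberately simplified naive-Bayes sketch in Python; FAB-MAP proper additionally models word co-occurrence with a Chow-Liu tree and reasons about detector error, both of which are omitted here:

```python
import numpy as np

def place_log_likelihoods(z, word_probs):
    """Score a live binary bag-of-words observation against each place.

    z          : (W,) binary vector, 1 if visual word w was observed.
    word_probs : (P, W) matrix, probability of observing word w at place p.
    Returns    : (P,) log-likelihood of z under each place's model,
                 assuming (naively) that words are independent."""
    p = np.clip(np.asarray(word_probs, dtype=np.float64), 1e-6, 1 - 1e-6)
    z = np.asarray(z, dtype=np.float64)
    return z @ np.log(p).T + (1.0 - z) @ np.log(1.0 - p).T
```

The place with the highest log-likelihood (after normalisation against a prior) is the loop-closure candidate; the simplification above is only meant to show the shape of the computation.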

LAPS – Localisation using Appearance of Prior Structure: 6-DOF Monocular Camera Localisation using Prior Pointclouds

Abstract— This paper is about pose estimation using monocular cameras with a 3D laser pointcloud as a workspace prior. We have in mind autonomous transport systems in which low cost vehicles equipped with monocular cameras are furnished with preprocessed 3D ...
Read More

Laser-only road-vehicle localization with dual 2D push-broom LIDARS and 3D priors

In this paper we consider long-term navigation using fixed 2D LIDARs. We consider how localization algorithms based on scan-matching - commonly used in indoor environments - are prone to failure when exposed to a challenging real-world outdoor environment. The driving motivation behind this work is to ...
Read More

Road vehicle localization with 2D push-broom lidar and 3D priors

In this paper we describe and demonstrate a method for precisely localizing a road vehicle using a single push-broom 2D laser scanner while leveraging a prior 3D survey. In contrast to conventional scan matching, our laser is oriented downwards, thus causing continual ground strike. Our ...
Read More

Vast Scale Outdoor Navigation Using Adaptive Relative Bundle Adjustment

Abstract - In this paper we describe a relative approach to simultaneous localisation and mapping, based on the insight that a continuous relative representation can make the problem tractable at large scales. First, it is well known that bundle adjustment is ...
Read More

Real-Time Bounded-Error Pose Estimation for Road Vehicles Using Vision

This paper is about online, constant-time pose estimation for road vehicles. We exploit both the state of the art in vision based SLAM and the wide availability of overhead imagery of road networks. We show that by formulating the ...
Read More

FARLAP: Fast Robust Localisation using Appearance Priors

This paper is concerned with large-scale localisation at city scales with monocular cameras. Our primary motivation lies with the development of autonomous road vehicles — an application domain in which low-cost sensing is particularly important. Here we present a method ...
Read More

Leveraging Experience for Long-Term LIDAR Localisation In Changing Cities

Successful approaches to autonomous vehicle localisation and navigation typically involve 3D LIDAR scanners and a static, curated 3D map, both of which are expensive to acquire and maintain. We propose an experience-based approach to matching a local 3D swathe built ...
Read More

Work Smart, Not Hard: Recalling Relevant Experiences for Vast-Scale but Time-Constrained Localisation

This paper is about life-long vast-scale localisation in spite of changes in weather, lighting and scene structure. Building upon our previous work in Experience-based Navigation, we continually grow and curate a visual map of the world that explicitly supports multiple ...
Read More
From Dusk till Dawn: Localisation at Night using Artificial Light Sources

Abstract—This paper is about localising at night in urban environments using vision. Despite it being dark exactly half of the time, surprisingly little attention has been given to this problem. A defining aspect of night-time urban scenes is the presence and effect of ...

By specifically detecting and matching lights in the live scene with those in a map, we are able to successfully localise. In addition to position in image space, we also take into account the expected appearance of each light, based on how far away it is. This greatly improves the robustness of our data association and serves to better inform our pose estimate.
Read More
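The light-matching step can be caricatured as nearest-neighbour data association in image space. A minimal Python sketch under simplifying assumptions: the real system also gates each match on the light's expected appearance given its range, and `max_px` below is an illustrative threshold, not a tuned parameter:

```python
import numpy as np

def match_lights(live, mapped, max_px=20.0):
    """Greedily associate detected lights in the live frame with
    lights in the map, using only image-space distance.

    live, mapped : (N, 2) and (M, 2) arrays of pixel coordinates.
    Returns      : list of (live_index, map_index) pairs."""
    matches = []
    used = set()  # map lights already claimed by an earlier match
    for i, p in enumerate(live):
        d = np.linalg.norm(mapped - p, axis=1)
        for j in used:
            d[j] = np.inf
        j = int(np.argmin(d))
        if d[j] <= max_px:
            matches.append((i, j))
            used.add(j)
    return matches
```

Adding an appearance gate (rejecting candidates whose predicted brightness or size disagrees with the detection) is what makes this kind of association robust enough to drive a pose estimate.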

Global LIDAR Localization

Localization using LIDAR has advantages over visual localization. In particular, LIDAR has a great degree of viewpoint and lighting invariance. It is, however, less informative. We have developed machine learning algorithms to more reliably detect places using ...
Read More