A2I


This is the category for all pages / news posts related to A2I.

For research topics and papers that fall under AI more broadly, use the Topics → AI category instead.

Alexander Mitchell

Alex joined ORI in 2019 and is supervised by Ioannis Havoutis and Ingmar Posner.


Learning Robust, Distraction-Free Radar Odometry from Pose Information

Masking by Moving: Learning Robust, Distraction-Free Radar Odometry from Pose Information. Abstract: This paper presents an end-to-end radar odometry system which delivers robust, real-time pose estimates based on a learned embedding space free of sensing artefacts and distractor objects. The system deploys a fully differentiable, correlation-based radar matching approach. This provides the same level of [...]
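As a rough illustration of what "correlation-based radar matching" means, the toy sketch below estimates the planar offset between two radar scans by cross-correlating them in the Fourier domain. This is only a minimal sketch of the generic technique, not the paper's implementation: the paper correlates learned, masked embeddings, whereas here raw 2D arrays stand in for them, and all array names and sizes are assumptions.

import numpy as np

def correlate_offset(scan_a, scan_b):
    """Estimate the (row, col) shift between two 2D scans via FFT cross-correlation."""
    f_a = np.fft.fft2(scan_a)
    f_b = np.fft.fft2(scan_b)
    corr = np.fft.ifft2(f_a * np.conj(f_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the array back to negative offsets.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Toy usage: a random "scan" and a copy shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
scan = rng.random((128, 128))
shifted = np.roll(scan, (3, -5), axis=(0, 1))
print(correlate_offset(shifted, scan))  # -> (3, -5)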


The Oxford Radar RobotCar Dataset

The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset. Abstract: In this paper we present The Oxford Radar RobotCar Dataset, a new dataset for researching scene understanding using Millimetre-Wave FMCW scanning radar data. The target application is autonomous vehicles, where this modality remains unencumbered by environmental conditions such as fog, [...]
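For readers new to scanning-radar data of this kind, the sketch below converts a polar power-return scan (azimuth by range bins) into a Cartesian bird's-eye-view grid using plain NumPy. The scan dimensions, resolution, and function name are illustrative placeholders, not the dataset's actual file format or SDK.

import numpy as np

def polar_to_cartesian(polar, cart_size=256, max_range_m=100.0):
    """Resample a polar radar scan (azimuth x range) onto a Cartesian grid."""
    n_azimuths, n_bins = polar.shape
    bin_size_m = max_range_m / n_bins
    # Metric x/y coordinates of every Cartesian cell, centred on the sensor.
    coords = (np.arange(cart_size) - cart_size / 2) * (2 * max_range_m / cart_size)
    xx, yy = np.meshgrid(coords, coords)
    ranges = np.sqrt(xx ** 2 + yy ** 2)
    azimuths = np.mod(np.arctan2(yy, xx), 2 * np.pi)
    # Nearest-neighbour lookup back into the polar scan.
    az_idx = np.clip((azimuths / (2 * np.pi) * n_azimuths).astype(int), 0, n_azimuths - 1)
    rng_idx = np.clip((ranges / bin_size_m).astype(int), 0, n_bins - 1)
    cart = polar[az_idx, rng_idx]
    cart[ranges > max_range_m] = 0.0  # cells outside the sensed area
    return cart

# Toy usage with a random "scan"; real scans would come from the dataset files.
cart_img = polar_to_cartesian(np.random.rand(400, 3000))
print(cart_img.shape)  # (256, 256)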


Deep Inverse Sensor Modelling in Radar

In the last decade, systems utilising cameras and lasers have been remarkably successful, increasing our expectations for what robotics might achieve in the decade to come. Our robots now need to see further, operating not only in environments where humans can operate, but also in environments where humans cannot! To this end, radar is a [...]


On the Limitations of Representing Functions on Sets

Our recent work on analysing a set of permutation-invariant neural network architectures is probably at the theoretical end of the spectrum of work we do at the A2I lab. Nevertheless, it is equally exciting, as it has concrete implications for real-world robotics, such as working with point clouds from lidars. [...]
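For readers unfamiliar with this family of models, the toy sketch below shows the standard sum-decomposition form of a permutation-invariant set network: an element-wise encoder followed by a pooling step and a decoder. It is a generic illustration of the architecture class being analysed, with made-up weights, not the analysis from the paper itself.

import numpy as np

rng = np.random.default_rng(0)

# A tiny sum-decomposable set network: rho(sum_i phi(x_i)).
# phi embeds each set element independently; summing the embeddings makes
# the output invariant to the ordering of the elements.
W_phi = rng.standard_normal((3, 16))   # element encoder (3-D points -> 16-D)
W_rho = rng.standard_normal((16, 1))   # decoder on the pooled representation

def set_network(points):
    """points: (n, 3) array, e.g. a small point cloud. Returns a scalar."""
    embedded = np.tanh(points @ W_phi)     # phi applied per element
    pooled = embedded.sum(axis=0)          # permutation-invariant pooling
    return np.tanh(pooled @ W_rho).item()  # rho

cloud = rng.standard_normal((100, 3))
shuffled = cloud[rng.permutation(len(cloud))]
print(np.isclose(set_network(cloud), set_network(shuffled)))  # True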


Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments. Abstract: We present a self-supervised approach to ignoring “distractors” in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each [...]
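As a hedged sketch of how a per-pixel ephemerality mask might be used, the snippet below down-weights the photometric residual wherever a pixel is likely to belong to a transient object before scoring a candidate motion estimate. The arrays, shapes, and weighting scheme are illustrative assumptions, not the paper's actual pipeline.

import numpy as np

def masked_photometric_error(img_ref, img_warped, ephemerality):
    """Photometric residual with likely-transient pixels down-weighted.

    img_ref, img_warped: (H, W) greyscale images; the warped image would come
    from reprojecting the reference frame under a candidate motion.
    ephemerality: (H, W) values in [0, 1]; 1 means "likely a distractor".
    """
    weights = 1.0 - ephemerality                 # trust static pixels
    residual = np.abs(img_ref - img_warped)
    return float((weights * residual).sum() / (weights.sum() + 1e-8))

# Toy usage with random images and a mask flagging a moving-object region.
rng = np.random.default_rng(0)
ref, warped = rng.random((2, 120, 160))
mask = np.zeros((120, 160))
mask[40:80, 60:120] = 1.0                        # pretend a vehicle occupies this block
print(masked_photometric_error(ref, warped, mask))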
