Topics


Imagine That! Leveraging Emergent Affordances for 3D Tool Synthesis

In this paper we explore the richness of information captured by the latent space of a vision-based generative model, and how to exploit it. The context of our work is an artificial agent's ability to perform task-focused tool synthesis for 3D reaching tasks based [...]


Visual-Inertial-Kinematic Odometry for Legged Robots (VILENS)

This blog post provides an overview of our recent ICRA 2020 paper, Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry. This is one paper in a series of works on state estimation described here. Introduction Many [...]


Advanced BIT* (ABIT*): Sampling-Based Planning with Advanced Graph-Search Techniques

Path planning is the problem of finding a continuous sequence of valid states from a start to a goal specification. Popular approaches in robotics include graph-based searches, such as A* [1], and sampling-based planners, such as Rapidly-exploring Random Trees (RRT) [2]. Both graph- and sampling-based approaches [...]
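The excerpt above contrasts graph searches such as A* with sampling-based planners such as RRT. As a point of reference, here is a minimal sketch of the classical A* search the post refers to, on a small 4-connected grid. This is not the ABIT* algorithm itself, only the standard graph-search building block; the grid, obstacle, and heuristic are illustrative choices.

```python
import heapq

def astar(start, goal, neighbours, h):
    """Classic A*: expand states in order of cost-so-far g plus heuristic h."""
    frontier = [(h(start), 0.0, start)]   # (f, g, state) priority queue
    best = {start: 0.0}                   # best known cost-to-come
    parent = {start: None}
    while frontier:
        _, g, s = heapq.heappop(frontier)
        if s == goal:                     # with an admissible, consistent h,
            path = []                     # the first goal pop is optimal
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        if g > best.get(s, float("inf")):
            continue                      # stale queue entry, skip
        for t, cost in neighbours(s):
            g2 = g + cost
            if g2 < best.get(t, float("inf")):
                best[t] = g2
                parent[t] = s
                heapq.heappush(frontier, (g2 + h(t), g2, t))
    return None                           # no valid path exists

# Toy problem: 4x4 grid with a wall at x=1 covering y in {0, 1, 2}
blocked = {(1, y) for y in range(3)}

def neighbours(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        t = (x + dx, y + dy)
        if 0 <= t[0] <= 3 and 0 <= t[1] <= 3 and t not in blocked:
            yield t, 1.0

goal = (3, 0)
h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])  # Manhattan, admissible
path = astar((0, 0), goal, neighbours, h)
```

The detour over the wall makes the optimal path 9 moves rather than the straight-line 3, which is exactly the kind of cost structure a heuristic search exploits.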


Under the Radar: Learning to Predict Robust Keypoints for Odometry Estimation and Metric Localisation in Radar

Abstract: This paper presents a self-supervised framework for learning to detect robust keypoints for odometry estimation and metric localisation in radar. By embedding a differentiable point-based motion estimator inside our architecture, we learn keypoint locations, scores and descriptors [...]
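The "differentiable point-based motion estimator" mentioned above can be illustrated with the standard closed-form least-squares alignment of matched 2D keypoints (the Kabsch/Procrustes solution). This is a generic sketch of that family of estimators, not the paper's exact module; the point is that every step (means, matrix products, SVD) is differentiable, so the estimator can sit inside a learned pipeline.

```python
import numpy as np

def estimate_rigid_2d(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t.

    P, Q: (2, N) arrays of matched keypoints, e.g. from two radar scans.
    Closed-form SVD (Kabsch/Procrustes) solution.
    """
    p0 = P.mean(axis=1, keepdims=True)          # centroids
    q0 = Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# Recover a known 30-degree rotation and a translation from noiseless matches
rng = np.random.default_rng(0)
P = rng.standard_normal((2, 40))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([[1.5], [-0.5]])
R_est, t_est = estimate_rigid_2d(P, R_true @ P + t_true)
```

With noiseless correspondences the estimate recovers the true transform to numerical precision; with noisy or learned matches it returns the least-squares fit.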


Permutation Invariance and Relational Reasoning in Multi-Object Tracking

Relational reasoning is the ability to model interactions and relations between objects. A machine learning model performing relational reasoning often has access to a list of object representations. The ordering of the list could carry information for the task at hand, but that is not necessarily the [...]
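A model that should ignore the ordering of its object list can be built with a symmetric aggregation over per-object encodings, in the style f(X) = rho(sum_i phi(x_i)). The sketch below is this standard construction, not the paper's tracking model; the weights and dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))    # toy "object list": 5 objects, 4-d each

W1 = rng.standard_normal((4, 8))   # per-object encoder phi (weights shared)
W2 = rng.standard_normal((8, 3))   # readout rho on the pooled summary

def set_encoder(X):
    """f(X) = rho(sum_i phi(x_i)); sum pooling makes f order-invariant."""
    phi = np.tanh(X @ W1)          # applied to every object independently
    pooled = phi.sum(axis=0)       # symmetric aggregation over the set
    return np.tanh(pooled @ W2)

out = set_encoder(X)
out_perm = set_encoder(X[rng.permutation(5)])
# out and out_perm agree regardless of how the object list is ordered
```

A recurrent or flattened-MLP encoder over the same list would not have this property, which is why the choice of aggregation matters when the ordering carries no information.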


VILENS

VILENS (Visual Inertial Legged Navigation System) is a factor-graph based odometry algorithm for legged robots that fuses leg odometry, vision, and IMU data. This algorithm was designed by David Wisth, Marco Camurri, and Maurice Fallon at the Oxford Robotics Institute (ORI). The papers describing this work are listed below. [...]


GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations

Generative latent-variable models are emerging as promising tools in robotics and reinforcement learning. Yet, even though tasks in these domains typically involve distinct objects, most state-of-the-art generative models do not explicitly capture the compositional nature of visual scenes. Two recent exceptions, MONet and IODINE, [...]
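The compositional assumption shared by this family of object-centric models is that each pixel is explained as a mixture over K object "slots", each contributing an appearance map weighted by a normalised mask. The following is a minimal sketch of that compositing step only, with random stand-in appearances and mask logits; it is illustrative of the decomposition, not the GENESIS architecture.

```python
import numpy as np

def compose_scene(appearances, mask_logits):
    """Mix K per-object appearance maps with softmax attention masks.

    appearances: (K, H, W, 3) per-slot RGB predictions.
    mask_logits: (K, H, W) unnormalised per-slot mask scores.
    Each pixel is a convex combination over the K object slots.
    """
    m = np.exp(mask_logits - mask_logits.max(axis=0))  # stable softmax
    masks = m / m.sum(axis=0)                          # normalise over slots
    scene = (masks[..., None] * appearances).sum(axis=0)
    return scene, masks

K, H, W = 3, 8, 8
rng = np.random.default_rng(2)
scene, masks = compose_scene(rng.random((K, H, W, 3)),
                             rng.standard_normal((K, H, W)))
```

In a full model the per-slot appearances and masks would be decoded from per-object latents rather than sampled at random, which is what makes the scene representation object-centric.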


Masking by Moving: Learning Robust, Distraction-Free Radar Odometry from Pose Information

Abstract: This paper presents an end-to-end radar odometry system which delivers robust, real-time pose estimates based on a learned embedding space free of sensing artefacts and distractor objects. The system deploys a fully differentiable, correlation-based radar matching approach. This provides the same level of [...]


The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset

Abstract: In this paper we present The Oxford Radar RobotCar Dataset, a new dataset for researching scene understanding using Millimetre-Wave FMCW scanning radar data. The target application is autonomous vehicles, where this modality remains unencumbered by environmental conditions such as fog, [...]


Scrutinizing and De-Biasing Intuitive Physics with Neural Stethoscopes

Predicting the stability of block towers is a popular task in the domain of intuitive physics. Previous work in this area focused on prediction accuracy, a one-dimensional performance measure. We provide a broader analysis of the learned physical understanding of the final model and how the learning [...]
