Perception

Visual-Inertial-Kinematic Odometry for Legged Robots (VILENS)

This blog post provides an overview of our recent ICRA 2020 paper, Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry: [bibtex key="2020ICRA_wisth"]. It is one paper in a series of works on state estimation described here. Introduction: Many [...]

VILENS

VILENS (Visual Inertial Legged Navigation System) is a factor-graph-based odometry algorithm for legged robots that fuses leg odometry, vision, and IMU data. The algorithm was designed by David Wisth, Marco Camurri, and Maurice Fallon at the Oxford Robotics Institute (ORI). The papers describing this work are listed below. [...]
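As a rough illustration of the factor-graph formulation, the sketch below fuses two sources of relative-pose measurements (standing in for leg odometry and visual odometry) in a small GTSAM pose graph. The noise values, measurements, and graph structure are illustrative assumptions only; the actual VILENS factor graph also fuses IMU data and preintegrated velocities from leg kinematics, which is not reproduced here.

```python
# Minimal pose-graph sketch (not the VILENS implementation): fuse two sources of
# relative-pose measurements, e.g. leg odometry and visual odometry, in GTSAM.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X  # X(i) = i-th base pose

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose with a prior.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Illustrative noise models: leg odometry is assumed noisier than visual odometry.
leg_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.10] * 3))
vis_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.04] * 3))

# Relative pose measured between consecutive keyframes by each modality.
leg_delta = gtsam.Pose3(gtsam.Rot3.Yaw(0.04), gtsam.Point3(0.52, 0.00, 0.01))
vis_delta = gtsam.Pose3(gtsam.Rot3.Yaw(0.05), gtsam.Point3(0.50, 0.01, 0.00))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), leg_delta, leg_noise))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), vis_delta, vis_noise))

# Initial guess and optimization.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.0, 0.0)))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))
```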

Global LIDAR Localization

Localization using LIDAR has advantages over visual localization: in particular, LIDAR has a high degree of viewpoint and lighting invariance. It is, however, less informative. We have developed machine learning algorithms to detect places more reliably, using segments as the basic unit within a point cloud. Efficient Segmentation and Mapping (ESM, RA-L 2019) [...]
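As a rough sketch of a segment-based pipeline, the example below clusters a point cloud into segments and matches simple hand-crafted shape descriptors against a map database. The DBSCAN segmentation, eigenvalue descriptor, and nearest-neighbour matching are placeholder choices, not the learned descriptors used in our work.

```python
# Sketch of segment-based place recognition on a LIDAR point cloud. This is a
# simplified stand-in for the learned approach: DBSCAN extracts segments and a
# hand-crafted eigenvalue descriptor replaces the learned segment descriptor.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def extract_segments(points, eps=0.5, min_points=30):
    """Cluster an (N, 3) point cloud into segments; label -1 is noise."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]

def segment_descriptor(segment):
    """Simple shape descriptor: sorted covariance eigenvalues plus bounding extent."""
    centered = segment - segment.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    extent = segment.max(axis=0) - segment.min(axis=0)
    return np.hstack([eigvals, extent])

def match_segments(query_cloud, map_descriptors):
    """Match query segments to a database of map segment descriptors."""
    nn = NearestNeighbors(n_neighbors=1).fit(map_descriptors)
    query_desc = [segment_descriptor(s) for s in extract_segments(query_cloud)]
    distances, indices = nn.kneighbors(np.array(query_desc))
    return list(zip(indices.ravel(), distances.ravel()))
```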

Visual Precis Generation using Coresets

Figure: an example of a coreset cluster, with the coreset point indicated in red. Highly similar views are clustered together, indicating that the coreset is effective at removing redundancy in the data.

Given an image stream, we demonstrate an online algorithm that selects the semantically important images that summarize the visual experience of a mobile [...]
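One simple way to picture coreset-style summarization is greedy selection of maximally dissimilar frames over image descriptors, as in the sketch below. This is an illustrative stand-in only; the paper's actual coreset construction and its guarantees are not reproduced here.

```python
# Sketch of summarizing an image stream by greedy k-center selection over image
# descriptors: each chosen frame is the one farthest from the current summary,
# so redundant (highly similar) views are skipped. Simplified illustration only.
import numpy as np

def summarize(descriptors, k):
    """Pick k representative frames from an (N, D) array of image descriptors."""
    chosen = [0]                                   # start from the first frame
    dists = np.linalg.norm(descriptors - descriptors[0], axis=1)
    for _ in range(1, k):
        nxt = int(np.argmax(dists))                # frame farthest from the summary
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(descriptors - descriptors[nxt], axis=1))
    return chosen

# Example: 500 frames with 128-D descriptors, summarized by 10 key frames.
frames = np.random.rand(500, 128)
print(summarize(frames, 10))
```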

Introspection

In the context of decision making in robotics, a classification framework that produces scores with inappropriate confidences will ultimately lead the robot to make dangerous decisions. To select a framework that will make the best decisions, we should pay careful attention to the ways in which it generates its scores. Precision [...]

Knowing When We Don’t Know: Introspective Classification for Mission-Critical Decision Making

Classification precision and recall have been widely adopted by roboticists as canonical metrics to quantify the performance of learning algorithms. This paper advocates that for robotics applications, which often involve mission-critical decision making, good performance according to these standard metrics is desirable but insufficient to appropriately characterise system performance. We introduce and motivate the importance [...]
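One concrete way to look beyond precision and recall is to check how well a classifier's scores are calibrated, for example with a reliability curve and the Brier score, as sketched below. This is a generic illustration of score calibration, not the specific introspection measure introduced in the paper.

```python
# Sketch: assessing a classifier beyond precision/recall by checking whether its
# scores are well calibrated (a reliability curve and the Brier score). Generic
# illustration only; the labels and scores here are synthetic.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

# y_true: ground-truth labels; y_score: classifier confidence in the positive class.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.7 + rng.normal(0.2, 0.2, size=1000), 0, 1)

frac_pos, mean_score = calibration_curve(y_true, y_score, n_bins=10)
print("Brier score:", brier_score_loss(y_true, y_score))
for m, f in zip(mean_score, frac_pos):
    print(f"mean score {m:.2f} -> empirical frequency {f:.2f}")
```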

Model-Free Dynamic Object Detection and Tracking with 2D Lidar

This project aims to detect and track moving objects with a 2D laser scanner, independently of their class and shape. In this work, a Bayesian framework is proposed in which the observations are raw laser measurements. The accompanying video shows the system's performance during a drive through central Oxford. Detected objects are marked [...]
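As a much-simplified illustration of Bayesian tracking from laser data, the sketch below runs a constant-velocity Kalman filter on object centroids extracted from successive scans. The paper's framework works on raw laser measurements and handles arbitrary shapes and data association; none of that is reproduced here, and all numbers are illustrative.

```python
# Simplified illustration of Bayesian tracking of a moving object observed by a
# 2D laser scanner: a constant-velocity Kalman filter over the object centroid.
# Assumes a centroid has already been extracted from each scan.
import numpy as np

dt = 0.1                                  # scan period [s]
F = np.array([[1, 0, dt, 0],              # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],               # only position is observed
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)                      # process noise (illustrative)
R = 0.05 * np.eye(2)                      # measurement noise (illustrative)

x = np.zeros(4)                           # initial state
P = np.eye(4)                             # initial covariance

def kalman_step(x, P, z):
    # Predict with the constant-velocity model.
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    # Update with the laser-derived position measurement z = [x, y].
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in np.array([[0.00, 0.00], [0.12, 0.01], [0.23, 0.02]]):   # fake centroids
    x, P = kalman_step(x, P, z)
print("estimated velocity:", x[2:])
```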

Can Priors Be Trusted? Learning to Anticipate Roadworks

This paper addresses the question of how much a previously obtained map of a road environment should be trusted for vehicle localisation during autonomous driving, by assessing the probability that roadworks are being traversed. We compare two formulations of a roadwork prior: one based on Gaussian Process (GP) classification and the other on a more [...]
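The sketch below shows the general shape of a GP-classification roadwork prior: given simple per-location features (hypothetical here), a Gaussian Process classifier returns the probability that roadworks are present. The features, kernel, and data are illustrative assumptions, not the formulation from the paper.

```python
# Sketch of a roadwork prior as a Gaussian Process classifier: given hypothetical
# per-location features (e.g. prior-map mismatch, detection counts), predict the
# probability that roadworks are being traversed. Generic GP classification only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy training data: feature vectors and whether roadworks were present (0/1).
X_train = np.array([[0.1, 0.0], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]])
y_train = np.array([0, 0, 1, 1, 1])

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.5))
gp.fit(X_train, y_train)

# Probability of roadworks at new locations along the route.
X_query = np.array([[0.15, 0.05], [0.85, 0.80]])
print(gp.predict_proba(X_query)[:, 1])
```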

How was your day? Online Visual Workspace Summaries using Incremental Clustering in Topic Space

Someday, mobile robots will operate continually. Day after day, they will receive a never-ending stream of images. In anticipation of this, this paper is about having a mobile robot generate apt and compact summaries of its life experience. We consider a robot moving around its environment, both revisiting and exploring, accruing [...]
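A minimal sketch of incremental clustering in topic space is given below: each image's topic vector either joins the nearest existing cluster or starts a new one, so the set of cluster centroids acts as a compact summary. The distance threshold and update rule are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch of incremental clustering in topic space: each incoming image's topic
# vector is assigned to the nearest existing cluster, or starts a new cluster if
# none is close enough. Threshold and distance are illustrative choices only.
import numpy as np

class IncrementalClusterer:
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.centroids = []          # running mean topic vector per cluster
        self.counts = []             # number of images per cluster

    def add(self, topic_vec):
        """Assign one topic vector online; return its cluster index."""
        if self.centroids:
            dists = [np.linalg.norm(topic_vec - c) for c in self.centroids]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Update the running mean of the matched cluster.
                self.counts[best] += 1
                self.centroids[best] += (topic_vec - self.centroids[best]) / self.counts[best]
                return best
        # No sufficiently similar cluster: start a new one.
        self.centroids.append(topic_vec.astype(float).copy())
        self.counts.append(1)
        return len(self.centroids) - 1

# Example: a stream of 20-topic distributions from an online topic model.
clusterer = IncrementalClusterer()
for _ in range(100):
    topics = np.random.dirichlet(np.ones(20))
    clusterer.add(topics)
print("summary size:", len(clusterer.centroids))
```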

Parsing Outdoor Scenes from Streamed 3D Laser Data Using Online Clustering and Incremental Belief Updates

In this paper, we address the problem of continually parsing a stream of 3D point cloud data acquired from a laser sensor mounted on a road vehicle. We leverage an online star clustering algorithm coupled with an incremental belief update in an evolving undirected graphical model. The fusion of these techniques allows the [...]
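For intuition, the sketch below implements a batch form of star clustering over segment feature vectors: sufficiently similar vertices are linked, and the highest-degree uncovered vertex repeatedly becomes a star centre whose neighbours become satellites. The similarity measure, threshold, and single-coverage simplification are illustrative; the paper uses an online variant coupled with belief updates, which is not reproduced here.

```python
# Sketch of star clustering over feature vectors (batch form; the paper uses an
# online variant). Vertices whose pairwise similarity exceeds a threshold are
# linked; the highest-degree uncovered vertex repeatedly becomes a star centre
# and its uncovered neighbours become satellites. All parameters are illustrative.
import numpy as np

def star_clusters(features, sigma=0.8):
    """Cluster (N, D) feature vectors; returns a list of (centre, satellites)."""
    # Cosine-similarity graph thresholded at sigma (self-links removed).
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    adj = (sim > sigma) & ~np.eye(len(features), dtype=bool)

    covered = np.zeros(len(features), dtype=bool)
    degrees = adj.sum(axis=1)
    stars = []
    while not covered.all():
        # Pick the uncovered vertex with the highest degree as the next centre.
        candidates = np.where(~covered)[0]
        centre = candidates[np.argmax(degrees[candidates])]
        satellites = np.where(adj[centre] & ~covered)[0]
        covered[centre] = True
        covered[satellites] = True
        stars.append((int(centre), satellites.tolist()))
    return stars

print(star_clusters(np.random.rand(30, 8)))
```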
