Perception answers the essential question of “What is around me?” Situational awareness is crucial for safe operation in real-world, dynamic environments.
In this research topic we examine how to equip machines with a semantic understanding of the world, and how to reliably recognise objects of interest across vast seasonal and environmental changes. Importantly, we also investigate how to augment these algorithms with an introspective capacity that predicts when they are uncertain or may fail. For this we consider the interactions with localisation and mapping systems in what we call the navigation-perception loop, which can lead to workspace-specific experts.
As mobile robotics spans many domains, we consider multiple modalities for perception including cameras, lasers, radars, and combinations thereof.
Abstract—The world we live in is labeled extensively for the benefit of humans. Yet, to date, robots have made little use of human readable text as a resource. In this paper we aim to draw attention to text as a ... Read More
Abstract—This paper presents a novel way to bias the sampling domain of stochastic planners by learning from example plans. We learn a generative model of a planner as a function of proximity to labeled objects in the workspace. Our motivation ... Read More
Abstract—This paper presents a novel semantic categorization method for 3D point cloud data using supervised, multi-class Gaussian Process (GP) classification. In contrast to other approaches, particularly Support Vector Machines, which are arguably the most widely used method for this ... Read More
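The contrast drawn above between GP classification and SVMs can be illustrated with a minimal sketch: unlike an SVM's raw margins, a GP classifier returns class probabilities directly. The features, labels, and kernel below are invented for demonstration and are not those of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy per-point 3D features (stand-ins for local surface statistics)
# for three hypothetical classes, e.g. ground / vegetation / building.
X_train = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(20, 3)) for c in (0.0, 1.5, 3.0)
])
y_train = np.repeat([0, 1, 2], 20)

# Multi-class GP classification with an RBF kernel.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X_train, y_train)

# The GP yields a full probability distribution over classes,
# which downstream decision making can act on.
probs = gpc.predict_proba(np.array([[1.4, 1.6, 1.5]]))
print(probs.round(2))
```

The probabilistic output is what makes the method attractive for the introspective, uncertainty-aware pipelines described in the topic overview.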
In this work, we are concerned with planning paths from overhead imagery. The novelty here lies in taking explicit account of uncertainty in terrain classification and spatial variation in terrain cost. The image is first classified using a multi-class Gaussian Process Classifier which ... Read More
We present a novel way to learn sampling distributions for sampling-based motion planners by making use of expert data. We learn an estimate (in a non-parametric setting) of sample densities around semantic regions of interest, and incorporate these learned distributions ... Read More
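A minimal, hypothetical sketch of the idea above: fit a non-parametric density estimate to expert samples gathered near a semantic region of interest, then mix draws from that density with uniform workspace sampling. The doorway location, bandwidth, and 50/50 mixing ratio are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)

# Pretend expert plans concentrated samples around a doorway at (2, 3).
expert_samples = rng.normal(loc=[2.0, 3.0], scale=0.4, size=(200, 2))
kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(expert_samples)

def draw_sample(bounds=((0.0, 5.0), (0.0, 5.0)), p_learned=0.5):
    """Sample from the learned density with probability p_learned,
    otherwise fall back to uniform sampling over the workspace."""
    if rng.random() < p_learned:
        return kde.sample(1, random_state=int(rng.integers(1 << 31)))[0]
    return np.array([rng.uniform(lo, hi) for lo, hi in bounds])

samples = np.array([draw_sample() for _ in range(1000)])
# The learned component pulls the empirical mean toward the doorway.
print(samples.mean(axis=0))
```

Retaining a uniform component preserves the completeness guarantees of the underlying sampling-based planner while focusing effort where expert plans concentrate.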
Abstract—Consider the task of a mobile robot autonomously navigating through an environment while detecting and mapping objects of interest using a noisy object detector. The robot must reach its destination in a timely manner, but is rewarded for correctly ... Read More
Abstract—This paper is a demonstration of how a robot can, through introspection and then targeted data retrieval, improve its own performance. It is a step in the direction of lifelong learning and adaptation and is motivated by the desire to ... Read More
Abstract— This paper is about generating plans over uncertain maps quickly. Our approach combines the ALT (A* search, landmarks and the triangle inequality) algorithm and risk heuristics to guide search over probabilistic cost maps. We build on previous work which ... Read More
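The ALT lower bound mentioned above can be sketched compactly: precompute exact distances from a few landmarks offline, then use the triangle inequality to bound the remaining distance to the goal during search. The toy graph and landmark choice below are invented; the risk heuristics for probabilistic cost maps are not reproduced.

```python
import heapq

# Undirected weighted graph as an adjacency dict (toy example).
graph = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}

def dijkstra(source):
    """Exact shortest-path distances from one node (landmark preprocessing)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

landmarks = {L: dijkstra(L) for L in ("a", "d")}  # computed offline

def alt_h(v, target):
    """Admissible lower bound from the triangle inequality:
    d(v, t) >= |d(L, t) - d(L, v)| for every landmark L."""
    return max(abs(d[target] - d[v]) for d in landmarks.values())

# The bound never exceeds the true distance, so A* guided by it stays optimal.
print(alt_h("a", "d"), dijkstra("a")["d"])
```

Because the bound is admissible, A* guided by it prunes aggressively without sacrificing optimality.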
Abstract— Understanding and analysing static or mobile surveillance cameras often requires knowledge of the scene and the camera placement. In this article, we provide a way to simplify the user’s task of understanding the scene by rendering the camera view ... Read More
This paper is about the autonomous acquisition of detailed 3D maps of a priori unknown environments using a stereo camera; it is about choosing where to go. Our approach hinges upon a boundary value constrained partial differential equation (PDE) – ... Read More
This project provides an end-to-end system for the detection of cars, pedestrians and bicyclists -- hazardous objects that could potentially change their motion state and whose detection is therefore key to successful autonomous driving. The video to the right shows typical ... Read More
Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; ... Read More
Abstract—In this paper, we address the problem of continually parsing a stream of 3D point cloud data acquired from a laser sensor mounted on a road vehicle. We leverage an online star clustering algorithm coupled with an incremental ... Read More
Someday, mobile robots will operate continually. Day after day, they will receive a never-ending stream of images. In anticipation of this, this paper is about having a mobile robot generate apt and compact summaries of its ... Read More
Abstract—This paper addresses the question of how much a previously obtained map of a road environment should be trusted for vehicle localisation during autonomous driving by assessing the probability that roadworks are being traversed. We compare two formulations of a ... Read More
This project aims to detect and track moving objects with a 2D laser scanner, independently of their class and shape. In this work, a Bayesian framework is proposed in which the observations are raw laser measurements. The video to the ... Read More
Classification precision and recall have been widely adopted by roboticists as canonical metrics to quantify the performance of learning algorithms. This paper advocates that for robotics applications, which often involve mission-critical decision making, good performance according to these standard metrics ... Read More
In the context of decision making in robotics, the use of a classification framework which produces scores with inappropriate confidences will ultimately lead to the robot making dangerous decisions. In order to select a framework which will make the best ... Read More
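The point made in the two entries above can be demonstrated with a small synthetic sketch: two classifiers with identical predicted labels (hence identical precision and recall) but differently calibrated confidences, separated by a proper scoring rule such as the Brier score. All data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=1000)

# Both classifiers predict the true label about 90% of the time...
correct = rng.random(1000) < 0.9
y_pred = np.where(correct, y_true, 1 - y_true)

# ...but A reports an honest 0.9 confidence, while B is overconfident.
conf_a = np.where(y_pred == 1, 0.9, 0.1)
conf_b = np.where(y_pred == 1, 0.999, 0.001)

def brier(conf, y):
    """Mean squared error between predicted probability and outcome;
    a proper scoring rule that penalises miscalibration."""
    return float(np.mean((conf - y) ** 2))

# Identical accuracy, so identical precision/recall; the Brier score
# exposes B's badly calibrated confidences.
print(brier(conf_a, y_true) < brier(conf_b, y_true))
```

For mission-critical decisions, classifier B's overconfident errors are exactly the dangerous case that standard metrics fail to flag.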
Given an image stream, we demonstrate an on-line algorithm that will select the semantically-important images that summarize the visual experience of a mobile robot. Our approach consists of data pre-clustering using coresets followed by a graph based incremental clustering procedure ... Read More
Learning to See the Wood for the Trees: Deep Laser Localization in Urban and Natural Environments on a CPU. Georgi Tinchev, Adrian Penate-Sanchez, Maurice Fallon. IEEE Robotics and Automation Letters / IEEE International Conference on Robotics and Automation (RA-L/ICRA), 2019. [arXiv] ... Read More
VILENS - Visual-Inertial Odometry for Legged Robots. VILENS (Visual Inertial Legged Navigation System) is a factor-graph-based odometry algorithm for legged robots that fuses leg odometry, vision, and IMU data. This algorithm was designed by David Wisth, Marco Camurri, and Maurice ... Read More