Learning from demonstration and reinforcement learning have been applied to many difficult problems in sequential decision making and control. In most settings, it is assumed that the demonstrations available are fixed. In this work, we consider learning from demonstration in the context of shared autonomy...
Our new all-weather platform pictured outside Blenheim Palace.

This blog post provides an overview of our paper, recently presented by Stephen Kyberd at the 12th Conference on Field and Service Robotics, Tokyo, Japan – “The Hulk: Design and Development of a Weather-proof Vehicle for Long-term Autonomy in Outdoor Environments” – as well as the ongoing work by our engineers and researchers in deploying this exciting new platform in challenging conditions and places. For more information, please take a look at the paper as well as the presentation. [bibtex key="2019FSR_kyberd"]

Why have we built this new platform?

At the ORI, we commonly approach robotics by:
- isolating the key questions that arise when fielding complex systems, and
- augmenting or inventing new techniques to solve them.

In the past we have had great success with driverless cars - self-driving vehicles equipped with ORI-developed autonomy software were tested successfully in public for the first [...]
How do we know when we don’t know? This is an important question to answer in any situation where we need to navigate through our surroundings, and one that any autonomous mobile robot needs to answer too. We discuss this introspection capability, and its importance to our radar-based navigation algorithms, in our paper to be presented at ITSC 2019 - “What Could Go Wrong? Introspective Radar Odometry in Challenging Environments” by Roberto Aldera, Daniele De Martini, Matthew Gadd, and Paul Newman. [bibtex key="2019ITSC_aldera"]

With introspection, our Radar Odometry (RO) system can now operate in environments it finds challenging and produce more reliable estimates, shown in the figure below against ground-truth data from a GNSS/INS receiver. Tall trees and hedges in this example make motion estimation tricky - notice how in the radar scan we observe two parallel [...]
Fast Radar Motion Estimation

This blog post provides an overview of our paper “Fast Radar Motion Estimation with a Learnt Focus of Attention using Weak Supervision” by Roberto Aldera, Daniele De Martini, Matthew Gadd, and Paul Newman, which was recently accepted for publication at the IEEE International Conference on Robotics and Automation (ICRA) 2019. [bibtex key="2019ICRA_aldera"] For a quick overview, you can take a look at our video.

Why radar?

Radar is ideal for ego-motion estimation and localisation tasks as it is good at detecting stable environmental features under adverse weather and lighting conditions.

An unfiltered radar scan with returns which are not spatially coherent from multiple viewpoints.

However, radar measurements are complex for various reasons:
- the beam is not narrow and tightly focused,
- returns are affected by various noise sources, and
- the interaction of the electromagnetic wave with the environment is more complex [...]
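The paper's approach (learning a focus of attention over the scan) is described in the publication itself; purely as an illustration of how ego-motion can be read out of consecutive scans from a spinning radar, the sketch below estimates the rotation between two polar scans by circular cross-correlation of their azimuth power profiles. Everything here - the function name, the scan layout, and the correlation trick itself - is an assumption for illustration, not the method from the paper.

```python
import numpy as np

def coarse_rotation_estimate(scan_a, scan_b):
    """Illustrative only: estimate the rotation between two polar radar scans.

    scan_a, scan_b: arrays of shape (n_azimuths, n_range_bins) of power
    returns, where scan_b is assumed to be scan_a seen after a pure rotation.
    Returns the azimuth shift (in radians) that best maps scan_a onto scan_b.
    """
    n_azimuths = scan_a.shape[0]
    # Collapse each scan to a 1-D power-vs-azimuth profile and remove the
    # mean so the correlation peak reflects structure, not overall power.
    profile_a = scan_a.sum(axis=1)
    profile_a = profile_a - profile_a.mean()
    profile_b = scan_b.sum(axis=1)
    profile_b = profile_b - profile_b.mean()
    # Circular cross-correlation via the FFT correlation theorem:
    # corr[k] = sum_m profile_a[m + k] * profile_b[m].
    corr = np.fft.ifft(np.fft.fft(profile_a) * np.conj(np.fft.fft(profile_b))).real
    shift = int(np.argmax(corr))
    # Wrap the best shift into (-n/2, n/2] azimuth bins.
    if shift > n_azimuths // 2:
        shift -= n_azimuths
    return -2.0 * np.pi * shift / n_azimuths
```

This only recovers a coarse rotation and ignores translation and the noise sources listed above - which is precisely why filtering out unreliable returns (the focus of the paper) matters for a full motion estimate.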
Over the past few months, one of our second-year DPhil students, Mark Finean, has been volunteering as a robotics mentor to a group of girls pursuing their A levels at St Paul’s Girls’ School (SPGS) in London. For a school project, they wanted to learn more about robots and investigate how robots could be used in schools, as well as the ways students could interact with them and use them to learn. To learn more about the current state of modern robotics and gain some hands-on experience, they spent a day at the ORI, where they were introduced to some of the research we conduct and tried their hands at programming the Toyota HSR. They were given a series of mini-challenges teaching them how to programme the HSR to perform movements and tasks. This culminated in them getting the robot to approach and pick up a bottle, followed [...]