Planning for Multiple Robots in Congested Environments

When most of us plan journeys, chances are that at some point we open Google Maps to find an array of colours telling us where we will probably experience traffic on our route. This allows us to plan our journeys accordingly, perhaps choosing a longer route with less traffic. This raises the question: if we can plan our journeys to avoid congested areas, why can't robots? This blog post provides an overview of our recent paper 'Multi-Robot Planning Under Uncertainty with Congestion-Aware Models', which addresses this problem: [bibtex key="2020AAMAS_street"]

What do we want to do?

Multi-robot systems are now deployed widely in warehouse logistics, agriculture and on our roads. Commonly, we wish to solve multi-robot path planning problems in these environments, where each robot has a goal location to [...]
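The Google Maps analogy above can be sketched in code: route over a graph whose edge costs are inflated by expected congestion, so a nominally longer but free-flowing route can win. This is only a toy single-robot illustration with made-up congestion factors, not the paper's multi-robot method for planning under uncertainty.

```python
import heapq

def plan_route(graph, start, goal):
    """Dijkstra search over congestion-inflated edge costs."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, travel_time, congestion in graph.get(node, []):
            if nxt not in seen:
                # Inflate the nominal travel time by the expected congestion.
                heapq.heappush(queue, (cost + travel_time * congestion, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical road network: edges are (neighbour, travel_time, congestion_factor).
network = {
    "A": [("B", 1.0, 3.0), ("C", 2.0, 1.0)],  # A->B is short but congested
    "B": [("D", 1.0, 3.0)],
    "C": [("D", 2.0, 1.0)],                   # longer route, free-flowing
}

cost, path = plan_route(network, "A", "D")
print(path, cost)  # the longer but uncongested route A->C->D wins at cost 4.0
```

With the congestion factors set to 1.0 everywhere, the search would pick the shorter route A->B->D instead; the only change is how edge costs are weighted.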
Advanced BIT* (ABIT*): Sampling-Based Planning with Advanced Graph-Search Techniques

Path planning is the problem of finding a continuous sequence of valid states from a start to a goal specification. Popular approaches in robotics include graph-based searches, such as A*, and sampling-based planners, such as Rapidly-exploring Random Trees (RRT). Both graph- and sampling-based approaches have characteristic strengths and weaknesses. Advanced BIT* (ABIT*) continues previous work to combine these strengths and mitigate these weaknesses using a unified planning paradigm. ABIT* achieves this by viewing the planning problem as the two subproblems of approximation and search. This perspective allows ABIT* to use advanced graph-search techniques on an anytime sampling-based approximation to quickly find initial solutions and almost-surely asymptotically converge to the global optimum.

https://www.youtube.com/watch?v=VFdihv8Lq2A

Full details of ABIT* can be found in the paper (https://arxiv.org/abs/2002.06589).

Approximation

ABIT* approximates the state space of a planning problem by sampling multiple batches of [...]
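The two-subproblem view can be made concrete with a toy anytime loop: grow a sampled approximation of the state space in batches, and re-run a graph search after each batch so the solution improves as the approximation densifies. This sketch uses plain Dijkstra over an r-disc graph in an obstacle-free unit square; ABIT* itself uses far more sophisticated edge queues, ordering heuristics, and collision checking, none of which are reproduced here.

```python
import heapq
import math
import random

def plan_with_batches(start, goal, n_batches=5, batch_size=50, radius=0.4, seed=0):
    """Toy anytime planner: alternate between approximation (sampling a batch)
    and search (shortest path on the implied r-disc graph), keeping the best
    solution cost found so far."""
    rng = random.Random(seed)
    samples = [start, goal]
    best = math.inf
    for _ in range(n_batches):
        # Approximation: add a batch of uniform samples in the unit square.
        samples += [(rng.random(), rng.random()) for _ in range(batch_size)]
        # Search: Dijkstra over edges shorter than the connection radius.
        dist = {start: 0.0}
        queue = [(0.0, start)]
        while queue:
            d, u = heapq.heappop(queue)
            if d > dist.get(u, math.inf):
                continue
            for v in samples:
                step = math.dist(u, v)
                if 0 < step <= radius and d + step < dist.get(v, math.inf):
                    dist[v] = d + step
                    heapq.heappush(queue, (d + step, v))
        best = min(best, dist.get(goal, math.inf))
        print(f"samples={len(samples):4d}  best cost={best:.3f}")
    return best

cost = plan_with_batches(start=(0.0, 0.0), goal=(1.0, 1.0))
```

As batches accumulate, the best cost approaches the true optimum (here the straight-line distance, sqrt(2)) from above, which is the anytime, almost-surely asymptotically optimal behaviour described in the post.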
Learning from demonstration and reinforcement learning have been applied to many difficult problems in sequential decision making and control. In most settings, it is assumed that the demonstrations available are fixed. In this work, we consider learning from demonstration in the context of shared autonomy...
Our new all-weather platform pictured outside Blenheim Palace. For more information, please take a look at the paper as well as the presentation.

This blog post provides an overview of our paper, recently presented by Stephen Kyberd at the 12th Conference on Field and Service Robotics, Tokyo, Japan – "The Hulk: Design and Development of a Weather-proof Vehicle for Long-term Autonomy in Outdoor Environments" – as well as the ongoing work by our engineers and researchers in deploying this exciting new platform in challenging conditions and places. [bibtex key="2019FSR_kyberd"]

Why have we built this new platform?

At the ORI, we commonly approach robotics by:
- isolating key questions when fielding complex systems, and
- augmenting or inventing new techniques to solve the problem.

In the past we have had great success with driverless cars - self-driving vehicles equipped with ORI-developed autonomy software were tested successfully in public for the first [...]
How do we know when we don't know?

This is an important question to answer in any situation where we need to navigate through our surroundings, and it is something any autonomous mobile robot needs to answer too. We discuss this introspection capability and its importance to our radar-based navigation algorithms in our paper to be presented at ITSC 2019 - "What Could Go Wrong? Introspective Radar Odometry in Challenging Environments" by Roberto Aldera, Daniele De Martini, Matthew Gadd, and Paul Newman. [bibtex key="2019ITSC_aldera"]

With introspection, our Radar Odometry (RO) system is now able to operate in environments it finds challenging and to produce more reliable estimates, shown here in the figure below against ground-truth data from a GNSS/INS receiver. Tall trees and hedges in this example make motion estimation tricky - notice how in the radar scan we observe two parallel [...]
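The flavour of "knowing when we don't know" can be illustrated with a deliberately simple stand-in: flag odometry estimates whose frame-to-frame change is physically implausible, and hold the last trusted value instead. The threshold and values below are entirely made up; the paper's detector is learned from the radar scan matcher's internal signals, not a hand-set jump test like this.

```python
def introspect(velocities, max_jump=1.5):
    """Toy introspection for an odometry stream: mark estimates whose change
    from the last trusted value exceeds max_jump (m/s) as unreliable, and
    fall back on that last trusted value for them."""
    trusted, flags = [], []
    last = velocities[0]
    for v in velocities:
        if abs(v - last) > max_jump:
            flags.append(True)    # don't trust this estimate
            trusted.append(last)  # hold the last good value instead
        else:
            flags.append(False)
            trusted.append(v)
            last = v
    return trusted, flags

# A spurious spike at index 2, as a failed scan match might produce.
vels = [1.0, 1.1, 8.0, 1.2, 1.3]
trusted, flags = introspect(vels)
print(trusted)  # [1.0, 1.1, 1.1, 1.2, 1.3]
print(flags)    # [False, False, True, False, False]
```

The point is only the structure: a second signal alongside the estimate that says how much to trust it, which a downstream navigation system can use to discount or discard bad frames.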