Team ORIon for RoboCup@Home 2018

ORIon is a new team created within the Oxford Robotics Institute (ORI) at the University of Oxford. The team consists of undergraduate and graduate students, robotics researchers, and faculty members of ORI, whose experience and support will be leveraged to create a RoboCup@Home team capable of delivering across the whole competition.

The Domestic Standard Platform League (DSPL) affords a tangible new domain in which existing ORI research can be applied, and one which provides new challenges for the group. The capabilities of our DSPL system will build upon those developed over the last four years within the EU STRANDS Project. Key members of this project (Nick Hawes, Lars Kunze, Bruno Lacerda) have recently moved to ORI and will actively engage with ORIon. The STRANDS Project deployed autonomous mobile robots (MetraLabs SCITOS A5s) in a range of human-populated environments for long durations. These robots provided real users with a range of services similar to the tasks required in the DSPL. The ROS-based STRANDS Core System (SCS) developed in the project will constitute an ideal basis for DSPL participation. The SCS is open source, and its use by Team ORIon will contribute to its maintenance and development for the benefit of the entire robotics community.

The Toyota Human Support Robot (HSR) will allow us to focus on developing the intelligence required to successfully complete the RoboCup@Home tasks, without the added burden of building and maintaining a custom platform. The STRANDS Core System, which was originally developed for MetraLabs robots, has recently been deployed on other platforms and will be ported to the Toyota HSR.

MEET THE TEAM

The team is led by Prof. Nick Hawes, who has an extensive background in intelligent autonomous robots that work with or for humans, and Dr. Ioannis Havoutis, an expert in combining motion planning with machine learning. The core of the team for 2018 will be ORI post-doctoral researchers (Lars Kunze, Bruno Lacerda) and senior PhD students (Paul Amayo, Julie Dequaire, Rowan Border), with more junior members (PhD students and undergraduates) being brought in as the team develops. ORIon will be further supported by ORI's hardware and software team, led by Senior Platform and Systems Engineer Stephen Kyberd.

Nick Hawes – Associate Professor of Engineering Science (Robotics)
Ioannis Havoutis – Departmental Lecturer in Robotics
Lars Kunze – Post-Doctoral Research Associate in Mobile Robotics
Bruno Lacerda – Post-Doctoral Research Assistant in Mobile Robotics
Stephen Kyberd – Senior Platform and Systems Engineer
Paul Amayo – 4th year PhD candidate
Rowan Border – 2nd year PhD candidate
Julie Dequaire – 4th year PhD candidate

Goals and Capabilities

Manipulation – Learning new skills

In the context of manipulation, the robot will require a number of key skills to successfully perform a wide variety of tasks that involve interaction with the environment or other agents, e.g. pushing buttons, turning handles, and grasping and passing objects.

The robots used in the STRANDS Project did not have manipulation capabilities, so the SCS does not currently provide software support for manipulation. To deliver these capabilities, we will start from ORIon member Lars Kunze's experience of knowledge-enabled manipulation. His work resulted in a system that could grasp an egg and make a pancake (a toy sketch of the idea follows the reference below):

  • Envisioning the Qualitative Effects of Robot Manipulation Actions using Simulation-Based Projections, Lars Kunze and Michael Beetz, Artificial Intelligence, Special Issue on AI and Robotics, 2015
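To illustrate the flavour of this approach, the following toy sketch projects candidate grasp forces forward and ranks them by predicted qualitative outcome. It is a minimal illustration only, not the cited system: the egg model, thresholds and names are invented, and the real work uses full physics-based simulation rather than hand-set rules.

    # Toy sketch of projecting qualitative effects of a grasp (illustrative only;
    # the cited work uses physics-based simulation, not hand-set thresholds).
    CRUSH_FORCE_N = 4.0   # hypothetical force above which the egg breaks
    SLIP_FORCE_N = 1.5    # hypothetical force below which the egg slips

    def project_effect(grip_force_n):
        """Map a candidate grip force to a predicted qualitative outcome."""
        if grip_force_n > CRUSH_FORCE_N:
            return "egg breaks"
        if grip_force_n < SLIP_FORCE_N:
            return "egg slips"
        return "egg held"

    # Keep only candidate forces whose projected outcome is acceptable.
    candidates = [1.0, 2.0, 3.0, 5.0]
    safe = [f for f in candidates if project_effect(f) == "egg held"]
    print("safe grip forces (N):", safe)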

We aim to augment this with the ability to learn and refine new skills as tasks change or as new tasks are added to the required repertoire. This capability will build on ORIon member Ioannis Havoutis' background in the learning, synthesis and control of complex motions. Skill representations are learnt from demonstration using a probabilistic generative encoding, and motion generation is formulated as an optimal control problem that adapts to changing task configurations online (a minimal sketch follows the references below):

  • Learning assistive teleoperation behaviours from demonstration, I. Havoutis and S. Calinon, IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), 2016
  • Supervisory teleoperation with online learning and optimal control, I. Havoutis and S. Calinon, IEEE International Conference on Robotics and Automation (ICRA), 2017
  • An approach for imitation learning on Riemannian manifolds, M. Zeestraten, I. Havoutis, J. Silverio, S. Calinon, and D. Caldwell, IEEE Robotics and Automation Letters, 2017
  • Learning task-space synergies using Riemannian geometry, M. Zeestraten, I. Havoutis, S. Calinon, and D. Caldwell, IEEE International Conference on Intelligent Robots and Systems (IROS), 2017
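For flavour, the sketch below encodes synthetic demonstrations with a Gaussian mixture model and reproduces the motion by conditioning on time (Gaussian mixture regression). It assumes scikit-learn is available; the demonstrations, component count and one-dimensional output are illustrative stand-ins for the multi-dimensional skill models of the papers above.

    # A minimal learning-from-demonstration sketch: fit a GMM over (time, position)
    # samples, then reproduce the skill by conditioning on time (GMR).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Synthetic demonstrations: five noisy reaching motions along a sine-shaped path.
    t = np.linspace(0.0, 1.0, 100)
    demos = np.vstack([
        np.column_stack([t, np.sin(np.pi * t) + 0.01 * rng.standard_normal(t.size)])
        for _ in range(5)
    ])

    # The probabilistic generative encoding: a GMM over the joint density p(t, x).
    gmm = GaussianMixture(n_components=5, covariance_type="full",
                          random_state=0).fit(demos)

    def gmr(query_t):
        """Condition the GMM on time to obtain the expected position E[x | t]."""
        means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
        # Responsibility of each component for the query time (constants cancel).
        h = np.array([
            w * np.exp(-0.5 * (query_t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
            for w, m, c in zip(weights, means, covs)
        ])
        h /= h.sum()
        # Conditional mean of each component, blended by responsibility.
        cond = [m[1] + c[1, 0] / c[0, 0] * (query_t - m[0])
                for m, c in zip(means, covs)]
        return float(np.dot(h, cond))

    # Reproduce the learnt skill at a few time steps.
    for q in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"t={q:.2f} -> x={gmr(q):+.3f}")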

Navigation

To enable robust navigation in all settings we take a hierarchical approach. The hierarchy is structured around a topological map in which discrete locations are connected by directed edges. Edges correspond to navigation actions the robot can perform to transition between locations. These may be standard move actions, social navigation, closed-loop controllers or teach-and-repeat paths. Choices between the actions are made by a Markov decision process-based planner which jointly optimises for success probability and completion time, using probabilistic models learnt online through experience. We ensure that the robot does not get stuck by employing a navigation layer which monitors the execution of the low-level edge actions and performs recovery behaviours (e.g. backtracking, HRI) to correct observed problems. This collection of techniques drove the STRANDS robots through over 360 km of autonomous navigation in human-populated environments (a minimal planner sketch follows the references below):

  • Now or Later? Predicting and Maximising Success of Navigation Actions from Long-Term Experience, J. Pulido Fentanes, B. Lacerda, T. Krajník, N. Hawes and M. Hanheide, IEEE International Conference on Robotics and Automation (ICRA), 2015
  • Optimal and Dynamic Planning for Markov Decision Processes with Co-Safe LTL Specifications, B. Lacerda, D. Parker and N. Hawes, IEEE International Conference on Intelligent Robots and Systems (IROS), 2014
  • The STRANDS Project: Long-Term Autonomy in Everyday Environments, N. Hawes et al., IEEE Robotics and Automation Magazine (RAM), 2017
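The following minimal sketch shows the core idea of planning over such a topological map: value iteration over expected time, where each edge's success probability and duration enter the cost and a failure incurs a penalty plus a retry. The map, probabilities and penalty are invented for illustration; the real planner learns these models online and also handles richer specifications such as co-safe LTL.

    # Minimal MDP-style edge selection over a topological map (illustrative map and
    # hand-set edge models; the real system learns these online from experience).
    # edges[source] -> list of (target, success_probability, duration_seconds)
    edges = {
        "hall":     [("kitchen", 0.95, 20.0), ("corridor", 0.99, 12.0)],
        "corridor": [("kitchen", 0.80, 15.0), ("hall", 0.99, 12.0)],
        "kitchen":  [],
    }
    GOAL, FAIL_PENALTY = "kitchen", 120.0  # seconds charged for a failed traversal

    def q_cost(state, edge, V):
        """Expected time of taking `edge`: traverse, then continue from the target
        on success, or pay a penalty and retry from `state` on failure."""
        target, p, duration = edge
        return duration + p * V[target] + (1 - p) * (FAIL_PENALTY + V[state])

    V = {s: 0.0 for s in edges}  # expected time-to-goal, refined by value iteration
    for _ in range(100):
        for s in edges:
            if s != GOAL and edges[s]:
                V[s] = min(q_cost(s, e, V) for e in edges[s])

    policy = {s: min(edges[s], key=lambda e: q_cost(s, e, V))[0]
              for s in edges if edges[s]}
    print({s: round(v, 1) for s, v in V.items()}, policy)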

Team ORIon will extend this framework to integrate ORI's visual teach-and-repeat paradigm, enabling the robot to navigate in areas where laser-based localisation is likely to result in imprecise navigation. We will also look to integrate some of ORI's previous 3D mapping work to increase the accuracy of the robot's environment representation (a teach-and-repeat sketch follows the references below):

  • A Unified Representation for Application of Architectural Constraints in Large-Scale Mapping, P. Amayo, P. Pinies, L. Paz, and P. Newman, IEEE International Conference on Robotics and Automation (ICRA), 2016
  • Work Smart, Not Hard: Recalling Relevant Experiences for Vast-Scale but Time-Constrained Localisation, C. Linegar, W. Churchill, and P. Newman, IEEE International Conference on Robotics and Automation (ICRA), 2015
  • Made to Measure: Bespoke Landmarks for 24-Hour, All-Weather Localisation with a Camera, C. Linegar, W. Churchill, and P. Newman, IEEE International Conference on Robotics and Automation (ICRA), 2016
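For flavour, the sketch below implements the repeat phase of a teach-and-repeat controller over recorded 2D waypoints: target the first taught waypoint beyond a lookahead distance and steer towards it. The waypoints, gains and odometry-frame coordinates are illustrative simplifications; the cited systems localise visually against stored camera experiences rather than metric waypoints.

    # Minimal repeat-phase sketch of teach-and-repeat over taught 2D waypoints
    # (illustrative; the real systems localise against stored visual experiences).
    import math

    taught_path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 1.5)]  # metres

    def repeat_step(pose, lookahead=0.8):
        """Return a (linear, angular) velocity command tracking the taught path."""
        x, y, theta = pose
        # Target the first taught waypoint at least `lookahead` metres away.
        target = next((w for w in taught_path
                       if math.hypot(w[0] - x, w[1] - y) >= lookahead),
                      taught_path[-1])
        heading_error = math.atan2(target[1] - y, target[0] - x) - theta
        # Wrap the error to (-pi, pi] before applying proportional control.
        heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
        return 0.2, 1.0 * heading_error  # constant forward speed, P-control heading

    print(repeat_step((0.0, -0.2, 0.0)))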

Semantic Vision

Learning and recognising objects during operation is a key task for a mobile service robot in human environments. Team ORIon will exploit the work done in the STRANDS Project on autonomous object learning, and on the recognition and modelling of previously unseen objects. The work is based on the meta-room approach, which builds dense RGB-D reconstructions of regions around locations in the robot's topological map. Objects are found by inspecting or differencing meta-rooms. Surfaces and possible objects found in meta-rooms become targets either for more detailed view planning leading to recognition, or for autonomous object learning (a differencing sketch follows the references below):

  • Using qualitative spatial relations for indirect object search, L. Kunze, K. Doreswamy, and N. Hawes, IEEE International Conference on Robotics and Automation (ICRA), 2014
  • Autonomous Learning of Object Models on a Mobile Robot, T. Faeulhammer, R. Ambrus, C. Burbridge, M. Zillich, J. Folkesson, N. Hawes, P. Jensfelt, and M. Vincze, IEEE Robotics and Automation Letters (RA-L), 2017
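The sketch below illustrates the differencing step on synthetic data: points in a new observation with no nearby counterpart in the reference meta-room become object candidates. It assumes numpy and scipy are available; the point clouds, the 2 cm threshold and the flat-table geometry are invented, and the real pipeline operates on registered dense RGB-D reconstructions.

    # Minimal meta-room differencing sketch: new points far from the reference
    # cloud are candidate objects (synthetic data; thresholds are illustrative).
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)

    # Reference meta-room: points on a table surface at height 0.70 m.
    table = np.column_stack([rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000),
                             np.full(2000, 0.70)])

    # New observation: the same table plus a small box-shaped object on top.
    box = np.column_stack([rng.uniform(0.4, 0.5, 200), rng.uniform(0.4, 0.5, 200),
                           rng.uniform(0.70, 0.85, 200)])
    observation = np.vstack([table + 0.002 * rng.standard_normal(table.shape), box])

    # Differencing: keep observed points with no reference point within 2 cm.
    dist, _ = cKDTree(table).query(observation)
    candidates = observation[dist > 0.02]
    print(f"{len(candidates)} candidate object points "
          f"centred at {candidates.mean(axis=0).round(2)}")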

Our recognition pipeline mixes top-down semantic reasoning with bottom-up appearance-based processing for scene understanding, jointly estimating object locations and categories based on qualitative spatial models. The object learning process can build detailed 3D models entirely without supervision. Previously unknown objects are processed with a mix of deep vision and semantic web technologies to provide the robot with an initial estimate of their identity (a fusion sketch follows the references below):

  • Combining Top-down Spatial Reasoning and Bottom-up Object Class Recognition for Scene Understanding, L. Kunze, C. Burbridge, M. Alberti, A. Tippur, J. Folkesson, P. Jensfelt, and N. Hawes, IEEE International Conference on Intelligent Robots and Systems (IROS), 2014
  • Bootstrapping Probabilistic Models of Qualitative Spatial Relations for Active Visual Object Search, L. Kunze, C. Burbridge, and N. Hawes, AAAI Spring Symposium on Qualitative Representations for Robots, 2014
  • Semantic Web-Mining and Deep Vision for Lifelong Object Discovery, J. Young, L. Kunze, V. Basile, E. Cabrio, and N. Hawes, IEEE International Conference on Robotics and Automation (ICRA), 2017
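A minimal sketch of the fusion idea: a bottom-up classifier score is combined with a top-down spatial prior (here, the likelihood of each category appearing on a table) to give a joint posterior over categories. The categories and all numbers are invented for illustration; the real system learns both models and reasons over qualitative spatial relations between multiple objects.

    # Toy fusion of bottom-up appearance scores with a top-down spatial prior
    # (hypothetical categories and hand-set numbers; both are learnt in practice).
    appearance = {"mug": 0.45, "bowl": 0.40, "ball": 0.15}     # classifier scores
    on_table_prior = {"mug": 0.60, "bowl": 0.35, "ball": 0.05}  # p(cat | "on table")

    # Bayes-style combination, renormalised over the candidate categories.
    posterior = {c: appearance[c] * on_table_prior[c] for c in appearance}
    z = sum(posterior.values())
    posterior = {c: p / z for c, p in posterior.items()}
    print(max(posterior, key=posterior.get), posterior)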

Robot System Integration

A substantial effort will be needed to develop behaviours that are robust to changes in the environment and to the noise typical of real-world scenarios. In this respect we will exploit our experience from the STRANDS Project and build upon the well-tested SCS. Given the similarity of the HSR to the SCITOS platform, and ORIon's familiarity with other platforms, we expect porting the SCS to the Toyota HSR to be straightforward. The SCS was developed to be expandable and is built with standard ROS components, which are also supported by the Toyota HSR.
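As an example of the kind of standard ROS component involved, the sketch below is a minimal rospy node that stops the base when an obstacle is close, of the sort that would sit at the bottom of the ported stack. The HSR topic names are assumptions used for illustration, not verified interfaces of our port.

    # Minimal rospy safety-stop node sketch; topic names below are assumed HSR
    # interfaces for illustration, not confirmed details of the SCS port.
    import rospy
    from geometry_msgs.msg import Twist
    from sensor_msgs.msg import LaserScan

    def on_scan(scan, pub):
        cmd = Twist()
        # Drive forward only while the closest laser reading is beyond 0.5 m.
        if min(scan.ranges) > 0.5:
            cmd.linear.x = 0.1
        pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("orion_safety_stop")
        pub = rospy.Publisher("/hsrb/command_velocity", Twist, queue_size=1)
        rospy.Subscriber("/hsrb/base_scan", LaserScan, on_scan, callback_args=pub)
        rospy.spin()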

Additionally, we are planning to build a mock-up of the “house” arena to allow us to run live robot trials and limit simulation use to the development phase. We aim to schedule recurring trials as our framework is developed, to ensure that robot behaviours are successful and to collect data on the success probabilities of tasks and sequences of tasks.
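As a sketch of the bookkeeping this enables, the snippet below turns per-task trial counts into smoothed success estimates and chains them for a task sequence, assuming independent outcomes. The task names and counts are illustrative.

    # Toy success-probability bookkeeping from repeated mock-arena trials
    # (illustrative task names and counts; outcomes assumed independent).
    trials = {  # task -> (successes, attempts)
        "navigate_to_kitchen": (18, 20),
        "detect_cup":          (14, 16),
        "grasp_cup":           (9, 15),
    }

    def p_success(task):
        s, n = trials[task]
        return (s + 1) / (n + 2)  # Laplace smoothing avoids 0% / 100% estimates

    sequence = ["navigate_to_kitchen", "detect_cup", "grasp_cup"]
    p_seq = 1.0
    for task in sequence:
        p_seq *= p_success(task)
    print(f"estimated sequence success: {p_seq:.2f}")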

Team ORIon will benefit from its members' many years of experience creating integrated robot systems. Team members (Julie Dequaire, Paul Amayo and Stephen Kyberd) contributed to the first public demonstration of a self-driving car in the UK, and all members have contributed to integrated robot systems demonstrated at science museums, public engagement events and trade shows across Europe. All of these systems integrate perception, planning and action in non-trivial ways. Such integration is central to producing a functional and reliable system, but can be incredibly challenging when trying to produce novel capabilities for robots in task environments which the team can only experience for a short time before a deadline. The team's joint experience of bringing diverse robot capabilities together for successful demos will enable it to start working effectively very quickly, and to deal with common team and system teething problems smoothly.

Our experience on systems which span the capability spectrum from low-level sensing to high-level cognition means that the diverse capabilities described above will be successfully integrated to produce a competitive entry in the DSPL.

  • The STRANDS Project: Long-Term Autonomy in Everyday Environments, N. Hawes et al., IEEE Robotics and Automation Magazine (RAM), 2017

Applicability and Re-usability of the System

The continued maintenance and development of the SCS will provide a well-tested software framework for mobile service robots. Continuing the practice started by the STRANDS Project, we will make extensions to the SCS available as open source software. This will enable our core framework to be reused by other groups. The validity of this approach has already been demonstrated by the reuse of the SCS at labs including the Intelligent Robots and Systems group at the Institute for Systems and Robotics, Lisbon, and the Honda Research Institute Europe. As described above, the majority of our technology has already been demonstrated in real, challenging service robot environments, showing that our approach is applicable in the real world.


RELATED MEDIA