The Oxford Robotics Institute has a long association with ICRA; Paul first published at ICRA 2000 in San Francisco. Twenty years later, ICRA 2020 is an opportunity for us to showcase our latest research.

Below is a complete list of the research we are presenting at this conference.

The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset

  • [PDF] D. Barnes, M. Gadd, P. Murcutt, P. Newman, and I. Posner, “The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, 2020.
    [Bibtex]
    @inproceedings{RadarRobotCarDatasetICRA2020,
    address = {Paris},
    author = {Barnes, Dan and Gadd, Matthew and Murcutt, Paul and Newman, Paul and Posner, Ingmar},
    title = {The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset},
    booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
    url = {https://arxiv.org/abs/1909.01300},
    pdf = {https://arxiv.org/pdf/1909.01300.pdf},
    year = {2020}
    }

Link to the webpage

In this paper we present The Oxford Radar RobotCar Dataset, a new dataset for researching scene understanding using Millimetre-Wave FMCW scanning radar data. The target application is autonomous vehicles where this modality remains unencumbered by environmental conditions such as fog, rain, snow, or lens flare, which typically challenge other sensor modalities such as vision and LIDAR.

The data were gathered in January 2019 over thirty-two traversals of a central Oxford route, spanning a total of 280 km of urban driving and encompassing a variety of weather, traffic, and lighting conditions. This 4.7 TB dataset consists of over 240,000 scans from a Navtech CTS350-X radar and 2.4 million scans from two Velodyne HDL-32E 3D LIDARs, along with data from six cameras, two 2D LIDARs, and a GPS/INS receiver. In addition, we release ground-truth optimised radar odometry to provide an additional impetus to research in this domain.
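
With two 3D LIDARs, six cameras, and a radar all logging on separate clocks, most uses of the dataset begin by associating scans across sensor streams by timestamp. A minimal sketch of nearest-timestamp association, using invented microsecond timestamps rather than the dataset's actual file naming:

```python
from bisect import bisect_left

def nearest_timestamp(sorted_ts, query):
    """Return the timestamp in sorted_ts closest to query (ties -> earlier)."""
    i = bisect_left(sorted_ts, query)
    if i == 0:
        return sorted_ts[0]
    if i == len(sorted_ts):
        return sorted_ts[-1]
    before, after = sorted_ts[i - 1], sorted_ts[i]
    return before if query - before <= after - query else after

# Hypothetical timestamps for two sensor streams.
radar_ts = [1000, 1250, 1500, 1750]
lidar_ts = [990, 1240, 1510, 1740, 1990]

# Pair each radar scan with the nearest LIDAR scan in time.
pairs = [(t, nearest_timestamp(lidar_ts, t)) for t in radar_ts]
```

The dataset's development kit provides its own parsing utilities; this is only the association idea in isolation.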

Under the Radar: Learning to Predict Robust Keypoints for Odometry Estimation and Metric Localisation in Radar

  • [PDF] D. Barnes and I. Posner, “Under the Radar: Learning to Predict Robust Keypoints for Odometry Estimation and Metric Localisation in Radar,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, 2020.
    [Bibtex]
    @inproceedings{UnderTheRadarICRA2020,
    address = {Paris},
    author = {Barnes, Dan and Posner, Ingmar},
    title = {Under the Radar: Learning to Predict Robust Keypoints for Odometry Estimation and Metric Localisation in Radar},
    booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
    url = {https://arxiv.org/abs/2001.10789},
    pdf = {https://arxiv.org/pdf/2001.10789.pdf},
    year = {2020}
    }

Link to the webpage

This paper presents a self-supervised framework for learning to detect robust keypoints for odometry estimation and metric localisation in radar. By embedding a differentiable point-based motion estimator inside our architecture, we learn keypoint locations, scores and descriptors from localisation error alone. This approach avoids imposing any assumption on what makes a robust keypoint and crucially allows them to be optimised for our application. Furthermore, the architecture is sensor-agnostic and can be applied to most modalities. We run experiments on 280 km of real-world driving from the Oxford Radar RobotCar Dataset and improve on the state of the art in point-based radar odometry, reducing errors by up to 45% whilst running an order of magnitude faster, simultaneously solving metric loop closures. Combining these outputs, we provide a framework capable of full mapping and localisation with radar in urban environments.
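
The differentiable point-based motion estimator at the heart of the method is, in spirit, a weighted least-squares rigid alignment of matched keypoints. A toy 2D version of that alignment step (a 2D analogue of the Kabsch algorithm, not the paper's implementation) can be sketched as:

```python
import math

def estimate_motion_2d(src, dst, weights):
    """Weighted least-squares rigid alignment of matched 2D keypoints."""
    wsum = sum(weights)
    cx_s = sum(w * p[0] for w, p in zip(weights, src)) / wsum
    cy_s = sum(w * p[1] for w, p in zip(weights, src)) / wsum
    cx_d = sum(w * q[0] for w, q in zip(weights, dst)) / wsum
    cy_d = sum(w * q[1] for w, q in zip(weights, dst)) / wsum
    # Accumulate the 2D cross-covariance terms of the centred point sets.
    sxx = sxy = 0.0
    for w, (px, py), (qx, qy) in zip(weights, src, dst):
        px, py = px - cx_s, py - cy_s
        qx, qy = qx - cx_d, qy - cy_d
        sxx += w * (px * qx + py * qy)   # "dot" term
        sxy += w * (px * qy - py * qx)   # "cross" term
    theta = math.atan2(sxy, sxx)
    # Translation maps the source centroid onto the destination centroid.
    tx = cx_d - (math.cos(theta) * cx_s - math.sin(theta) * cy_s)
    ty = cy_d - (math.sin(theta) * cx_s + math.cos(theta) * cy_s)
    return theta, (tx, ty)

# Recover a known 90-degree rotation plus translation from matched points.
src = [(1, 0), (0, 1), (-1, 0), (0, -1)]
t_true = (1.0, 2.0)
dst = [(-y + t_true[0], x + t_true[1]) for x, y in src]
theta, t = estimate_motion_2d(src, dst, [1.0] * 4)
```

Embedding a solver of this kind inside the network is what allows localisation error to be backpropagated to the keypoint locations, scores, and descriptors.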

Adaptively Informed Trees (AIT*): Fast Asymptotically Optimal Path Planning through Adaptive Heuristics

  • [PDF] M. P. Strub and J. D. Gammell, “Adaptively Informed Trees (AIT*): Fast Asymptotically Optimal Path Planning through Adaptive Heuristics,” in IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020.
    [Bibtex]
    @InProceedings{2020ICRA_strub1,
    author = {Strub, Marlin P. and Gammell, Jonathan D.},
    title = {Adaptively Informed Trees (AIT*): Fast Asymptotically Optimal Path Planning through Adaptive Heuristics},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    address = {Paris, France},
    year = {2020},
    Pdf = {https://arxiv.org/abs/2002.06599},
    }

Informed sampling-based planning algorithms exploit problem knowledge for better search performance. This knowledge is often expressed as heuristic estimates of solution cost and used to order the search. The practical improvement of this informed search depends on the accuracy of the heuristic.
Selecting an appropriate heuristic is difficult. Heuristics applicable to an entire problem domain are often simple to define and inexpensive to evaluate but may not be beneficial for a specific problem instance. Heuristics specific to a problem instance are often difficult to define or expensive to evaluate but can make the search itself trivial.
This paper presents Adaptively Informed Trees (AIT*), an almost-surely asymptotically optimal sampling-based planner based on BIT*. AIT* adapts its search to each problem instance by using an asymmetric bidirectional search to simultaneously estimate and exploit a problem-specific heuristic. This allows it to quickly find initial solutions and converge towards the optimum. AIT* solves the tested problems as fast as RRT-Connect while also converging towards the optimum.
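
The asymmetric bidirectional idea can be illustrated on an ordinary graph: a cheap reverse search from the goal yields cost-to-go values that the forward search can then use as an instance-specific heuristic. A simplified sketch (AIT* operates on a sampled approximation and updates its reverse search lazily, which this toy version omits):

```python
import heapq

def dijkstra_cost_to_go(adj, goal):
    """Reverse search from the goal: exact cost-to-go for every vertex,
    usable by a forward search as an admissible, problem-specific heuristic."""
    # adj maps u -> list of (v, cost); the graph here is undirected.
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in adj[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

# A hand-built toy graph (edge costs are invented).
adj = {
    "start": [("a", 2.0), ("b", 5.0)],
    "a": [("start", 2.0), ("goal", 2.0)],
    "b": [("start", 5.0), ("goal", 1.0)],
    "goal": [("a", 2.0), ("b", 1.0)],
}
h = dijkstra_cost_to_go(adj, "goal")
```

On the sampled graphs AIT* builds, this reverse search is inexpensive because it ignores expensive edge-collision checks; the forward search then pays for validation only along promising edges.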

Advanced BIT* (ABIT*): Sampling-Based Planning with Advanced Graph-Search Techniques

  • [PDF] M. P. Strub and J. D. Gammell, “Advanced BIT* (ABIT*): Sampling-Based Planning with Advanced Graph-Search Techniques,” in IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020.
    [Bibtex]
    @InProceedings{2020ICRA_strub2,
    author = {Strub, Marlin P. and Gammell, Jonathan D.},
    title = {Advanced BIT* (ABIT*): Sampling-Based Planning with Advanced Graph-Search Techniques},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    address = {Paris, France},
    year = {2020},
    Pdf = {https://arxiv.org/abs/2002.06589},
    }

Link to the webpage

Path planning is an active area of research essential for many applications in robotics. Popular techniques include graph-based searches and sampling-based planners. These approaches are powerful but have limitations. This paper continues work to combine their strengths and mitigate their limitations using a unified planning paradigm. It does this by viewing the path planning problem as the two subproblems of search and approximation and using advanced graph-search techniques on a sampling-based approximation. This perspective leads to Advanced BIT*. ABIT* combines truncated anytime graph-based searches, such as ATD*, with anytime almost-surely asymptotically optimal sampling-based planners, such as RRT*. This allows it to quickly find initial solutions and then converge towards the optimum in an anytime manner. ABIT* outperforms existing single-query, sampling-based planners on the tested problems in R^4 and R^8, and was demonstrated on real-world problems with NASA/JPL-Caltech.
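
One ingredient of ABIT*'s search ordering, heuristic inflation, is easy to show in isolation: inflating the heuristic by a factor eps >= 1 returns a solution sooner at the price of bounded suboptimality, and lowering eps towards 1 recovers the optimum. A toy sketch on a hand-built graph (not the truncated, batched search ABIT* actually performs):

```python
import heapq

def weighted_astar(adj, h, start, goal, eps):
    """A* with an inflated heuristic f = g + eps * h. With eps > 1 the goal
    can be popped earlier (suboptimally); eps = 1 is plain, optimal A*."""
    g = {start: 0.0}
    pq = [(eps * h[start], start)]
    while pq:
        _, u = heapq.heappop(pq)
        if u == goal:
            return g[u]
        for v, c in adj[u]:
            if g[u] + c < g.get(v, float("inf")):
                g[v] = g[u] + c
                heapq.heappush(pq, (g[v] + eps * h[v], v))
    return float("inf")

# Invented graph: the direct A->G edge looks attractive under inflation,
# but the optimal route goes A->B->C->G.
adj = {"A": [("B", 1.0), ("G", 3.5)], "B": [("C", 1.0)], "C": [("G", 1.0)], "G": []}
h = {"A": 3.0, "B": 2.0, "C": 1.0, "G": 0.0}  # admissible estimates
```

The inflated search terminates after expanding fewer vertices; repeating the search with a decreasing eps gives the anytime behaviour described above.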

A Framework for Learning from Demonstration with Minimal Human Effort

  • [PDF] M. Rigter, B. Lacerda, and N. Hawes, “A Framework for Learning from Demonstration with Minimal Human Effort,” Robotics and Automation Letters (RA-L), 2020.
    [Bibtex]
    @article{2020ICRA_rigter,
    author = {Rigter, Marc and Lacerda, Bruno and Hawes, Nick},
    title = {A Framework for Learning from Demonstration with Minimal Human Effort},
    journal = {Robotics and Automation Letters (RA-L)},
    year = {2020},
    pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/2020ICRA_rigter.pdf},
    }

Link to the webpage

We consider robot learning in the context of shared autonomy, where control of the system can switch between a human teleoperator and autonomous control. In this setting we address reinforcement learning, and learning from demonstration, where there is a cost associated with human time. This cost represents the human time required to teleoperate the robot, or recover the robot from failures. For each episode, the agent must choose between requesting human teleoperation, or using one of its autonomous controllers. In our approach, we learn to predict the success probability for each controller, given the initial state of an episode. This is used in a contextual multi-armed bandit algorithm to choose the controller for the episode. A controller is learnt online from demonstrations and reinforcement learning so that autonomous performance improves, and the system becomes less reliant on the teleoperator with more experience. We show that our approach to controller selection reduces the human cost to perform two simulated tasks and a single real-world task.
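
The controller-selection step reduces to comparing expected human-time costs. A deliberately simplified sketch (the paper uses a contextual bandit with learned success predictors; the controller names and costs here are invented, and exploration is omitted):

```python
def select_controller(success_probs, teleop_cost, recovery_cost):
    """Pick the option with the lowest expected human-time cost.
    An autonomous controller costs nothing when it succeeds but incurs
    recovery_cost when it fails; teleoperation always costs teleop_cost."""
    expected = {name: (1.0 - p) * recovery_cost for name, p in success_probs.items()}
    expected["teleop"] = teleop_cost
    return min(expected, key=expected.get)
```

For example, a controller predicted to succeed 90% of the time beats teleoperation when the expected recovery cost is below the teleoperation cost; a 50% controller does not.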

Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion

  • [PDF] S. Gangapurwala, A. Mitchell, and I. Havoutis, “Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion,” IEEE Robotics and Automation Letters, 2020.
    [Bibtex]
    @article{2020RAL_gangapurwala,
    author = {Siddhant Gangapurwala and Alexander Mitchell and Ioannis Havoutis},
    title = {Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion},
    journal = {IEEE Robotics and Automation Letters},
    year = 2020,
    month = may,
    pdf = {https://ihavoutis.github.io/publications/2020/ral2020gangapurwala.pdf},
    }

Deep reinforcement learning (RL) uses model-free techniques to optimize task-specific control policies. Despite having emerged as a promising approach for complex problems, RL is still hard to use reliably for real-world applications. Apart from challenges such as precise reward function tuning, inaccurate sensing and actuation, and non-deterministic response, existing RL methods do not guarantee behavior within required safety constraints that are crucial for real robot scenarios. In this regard, we introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO) for tracking base velocity commands while following the defined constraints. We introduce schemes which encourage state recovery into constrained regions in case of constraint violations. We present experimental results of our training method and test it on the real ANYmal quadruped robot. We compare our approach against the unconstrained RL method and show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
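
A common way to realise constrained policy optimization is a Lagrangian relaxation: the policy maximises reward minus lambda times the constraint cost, while lambda itself is adapted by gradient ascent on the violation. A toy sketch of that dual update (a generic scheme for illustration, not necessarily GCPO's exact rule; all numbers are invented):

```python
def dual_update(lam, constraint_cost, limit, lr):
    """Gradient ascent on the Lagrange multiplier: lam grows while the
    measured constraint cost exceeds its limit, and shrinks otherwise."""
    return max(0.0, lam + lr * (constraint_cost - limit))

lam = 0.0
for _ in range(3):
    # Hypothetical constant per-iteration constraint cost from policy rollouts.
    lam = dual_update(lam, constraint_cost=2.0, limit=1.0, lr=0.1)
```

As lambda grows, constraint-violating behaviour is penalised more heavily, steering the policy back into the feasible region.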

Reliable Trajectories for Dynamic Quadrupeds using Analytical Costs and Learned Initializations

  • [PDF] O. Melon, M. Geisert, D. Surovik, I. Havoutis, and M. Fallon, “Reliable Trajectories for Dynamic Quadrupeds using Analytical Costs and Learned Initializations,” in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.
    [Bibtex]
    @inproceedings{2020ICRA_melon,
    Title = {Reliable Trajectories for Dynamic Quadrupeds using Analytical Costs and Learned Initializations},
    Author = {Oliwier Melon and Mathieu Geisert and David Surovik and Ioannis Havoutis and Maurice Fallon},
    Booktitle = {IEEE Intl. Conf. on Robotics and Automation (ICRA)},
    Year = 2020,
    month = may,
    Pdf = {http://www.robots.ox.ac.uk/~mobile/drs/Papers/2020ICRA_melon.pdf},
    }

Dynamic traversal of uneven terrain is a major objective in the field of legged robotics. The most recent model predictive control approaches for these systems can generate robust dynamic motion of short duration; however, planning over a longer time horizon may be necessary when navigating complex terrain. A recently-developed framework, Trajectory Optimization for Walking Robots (TOWR), computes such plans but does not guarantee their reliability on real platforms, under uncertainty and perturbations. We extend TOWR with analytical costs to generate trajectories that a state-of-the-art whole-body tracking controller can successfully execute. To reduce online computation time, we implement a learning-based scheme for initialization of the nonlinear program based on offline experience. The execution of trajectories as long as 16 footsteps and 5.5 s over different terrains by a real quadruped demonstrates the effectiveness of the approach on hardware. This work builds toward an online system which can efficiently and robustly replan dynamic trajectories.
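
The learned-initialization idea can be caricatured as a lookup: store solved trajectories keyed by task descriptors, and seed the nonlinear program with the stored solution closest to the new task. A toy sketch with invented descriptors (the paper learns a regressor from offline experience rather than using raw nearest neighbours):

```python
def warm_start(memory, query):
    """Return the stored trajectory whose task descriptor is closest
    (Euclidean) to the query; used as the NLP's initial guess."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, trajectory = min(memory, key=lambda item: dist2(item[0], query))
    return trajectory

# Hypothetical task descriptors: (step length, terrain height difference).
memory = [
    ((0.3, 0.00), "flat_gait_seed"),
    ((0.3, 0.15), "step_up_seed"),
    ((0.5, 0.00), "long_stride_seed"),
]
```

A good initial guess puts the solver in the right basin of attraction, which is what cuts the online computation time of the trajectory optimization.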

Actively Mapping Industrial Structures with Information Gain-Based Planning on a Quadruped Robot

  • [PDF] Y. Wang, M. Ramezani, and M. Fallon, “Actively Mapping Industrial Structures with Information Gain-Based Planning on a Quadruped Robot,” in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.
    [Bibtex]
    @inproceedings{2020ICRA_wang,
    Title = {Actively Mapping Industrial Structures with Information Gain-Based Planning on a Quadruped Robot},
    Author = {Yiduo Wang and Milad Ramezani and Maurice Fallon},
    Booktitle = {IEEE Intl. Conf. on Robotics and Automation (ICRA)},
    Year = 2020,
    month = may,
    Pdf = {http://www.robots.ox.ac.uk/~mobile/drs/Papers/2020ICRA_wang.pdf},
    }

Link to the webpage


Online LiDAR-SLAM for Legged Robots with Robust Registration and Deep-Learned Loop Closure

  • [PDF] M. Ramezani, G. Tinchev, E. Iuganov, and M. Fallon, “Online LiDAR-SLAM for Legged Robots with Robust Registration and Deep-Learned Loop Closure,” in IEEE Robotics and Automation Letters, 2020.
    [Bibtex]
    @inproceedings{2020ICRA_ramezani,
    title = {Online LiDAR-SLAM for Legged Robots with Robust Registration and Deep-Learned Loop Closure},
    author = {Milad Ramezani and Georgi Tinchev and Egor Iuganov and Maurice Fallon},
    Booktitle = {IEEE Robotics and Automation Letters},
    year = 2020,
    month = may,
    Pdf = {http://www.robots.ox.ac.uk/~mobile/drs/Papers/2020ICRA_ramezani.pdf},
    }

Link to the webpage

In this paper, we present a 3D factor-graph LiDAR-SLAM system which incorporates a state-of-the-art deeply learned feature-based loop closure detector to enable a legged robot to localize and map in industrial environments. Point clouds are accumulated using an inertial-kinematic state estimator before being aligned using ICP registration. To close loops we use a loop proposal mechanism which matches individual segments between clouds. We trained a descriptor offline to match these segments. The efficiency of our method comes from carefully designing the network architecture to minimize the number of parameters such that this deep learning method can be deployed in real-time using only the CPU of a legged robot, a major contribution of this work. The odometry and loop closure factors are updated using pose graph optimization. Finally we present an efficient risk alignment prediction method which verifies the reliability of the registrations. Experimental results at an industrial facility demonstrated the robustness and flexibility of our system, including autonomously following paths derived from the SLAM map.
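
The loop-proposal step boils down to matching compact segment descriptors between the current cloud and the map. A toy sketch using cosine similarity over invented descriptor vectors (the actual descriptors come from the learned network):

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def propose_loop_closures(query_segments, map_segments, threshold):
    """Match each query segment to its most similar map segment; matches
    above the threshold become candidate loop closures for registration."""
    proposals = []
    for qi, q in enumerate(query_segments):
        best_i, best_s = max(
            ((mi, cosine(q, m)) for mi, m in enumerate(map_segments)),
            key=lambda t: t[1],
        )
        if best_s >= threshold:
            proposals.append((qi, best_i))
    return proposals
```

In the full system, the resulting candidate registrations are still verified (the risk alignment prediction above) before a loop closure factor enters the pose graph.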

Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry

  • [PDF] D. Wisth, M. Camurri, and M. Fallon, “Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry,” in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.
    [Bibtex]
    @inproceedings{2020ICRA_wisth,
    Title = {Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry},
    Author = {David Wisth and Marco Camurri and Maurice Fallon},
    Booktitle = {IEEE Intl. Conf. on Robotics and Automation (ICRA)},
    Year = 2020,
    month = may,
    Pdf = {http://www.robots.ox.ac.uk/~mobile/drs/Papers/2020ICRA_wisth.pdf},
    }

Link to the webpage

In this paper, we present a novel factor graph formulation to estimate the pose and velocity of a quadruped robot on slippery and deformable terrain. The factor graph introduces a preintegrated velocity factor that incorporates velocity inputs from leg odometry and also estimates related biases. From our experimentation we have seen that it is difficult to model uncertainties at the contact point such as slip or deforming terrain, as well as leg flexibility. To accommodate these effects and to minimize leg odometry drift, we extend the robot’s state vector with a bias term for this preintegrated velocity factor. The bias term can be accurately estimated thanks to the tight fusion of the preintegrated velocity factor with stereo vision and IMU factors, without which it would be unobservable. The system has been validated on several scenarios that involve dynamic motions of the ANYmal robot on loose rocks, slopes and muddy ground. We demonstrate a 26% improvement of relative pose error compared to our previous work and 52% compared to a state-of-the-art proprioceptive state estimator.
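
The role of the velocity bias can be seen in one dimension: if leg odometry over-reports velocity (for instance through foot slip), an independent displacement measurement pins down the bias. A toy sketch (the real factor estimates the bias jointly inside the factor graph, not in closed form; all numbers are invented):

```python
def preintegrate(velocities, dt, bias):
    """Preintegrated displacement from leg-odometry velocity samples,
    corrected by a slowly-varying bias term."""
    return sum((v - bias) * dt for v in velocities)

def estimate_bias(velocities, dt, true_displacement):
    """Closed-form 1D bias given an independent displacement measurement
    (in the paper this information comes from stereo vision and IMU factors)."""
    total_t = dt * len(velocities)
    return (sum(velocities) * dt - true_displacement) / total_t

# Leg odometry over-reports velocity by 0.2 m/s, e.g. due to foot slip.
v_leg = [1.2, 1.2, 1.2]
dt = 0.1
bias = estimate_bias(v_leg, dt, true_displacement=0.3)
```

Once the bias is estimated, the corrected preintegrated displacement agrees with the independent measurement, which is exactly why the bias is unobservable without that fusion.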

GaitMesh: controller-aware navigation meshes for long-range legged locomotion planning in multi-layered environments

  • [PDF] M. Brandao, O. B. Aladag, and I. Havoutis, “GaitMesh: controller-aware navigation meshes for long-range legged locomotion planning in multi-layered environments,” IEEE Robotics and Automation Letters, 2020.
    [Bibtex]
    @article{2020RAL_brandao,
    author = {Martim Brandao and Omer Burak Aladag and Ioannis Havoutis},
    title = {{GaitMesh}: controller-aware navigation meshes for long-range legged locomotion planning in multi-layered environments},
    journal = {IEEE Robotics and Automation Letters},
    year = 2020,
    month = may,
    Pdf = {http://www.robots.ox.ac.uk/~mobile/drs/Papers/2020RAL_brandao.pdf},
    }

Long-range locomotion planning is an important problem for the deployment of legged robots to real scenarios. Current methods used for legged locomotion planning often do not exploit the flexibility of legged robots, and do not scale well with environment size. In this paper we propose the use of navigation meshes for deployment in large-scale multi-floor sites. We leverage this representation to improve long-term locomotion plans in terms of success rates, path costs and reasoning about which gait-controller to use when. We show that NavMeshes achieve higher planning success rates than sampling-based planners, while being 400x faster to construct and at least 100x faster to plan with. The performance gap further increases when considering multi-floor environments. We present both a procedure for building controller-aware NavMeshes and a full navigation system that adapts to changes to the environment. We demonstrate the capabilities of the system in simulation experiments and in field trials at a real-world oil rig facility.
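
At planning time, a controller-aware navigation mesh is essentially a polygon-adjacency graph whose edges carry both a traversal cost and the gait suited to that crossing. A toy sketch over an invented two-floor mesh (real NavMesh construction and costs are far richer than this):

```python
import heapq

def plan(adj, start, goal):
    """Dijkstra over a polygon-adjacency graph; each edge carries the
    traversal cost and the gait controller chosen for that crossing."""
    dist = {start: 0.0}
    back = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost, gait in adj.get(u, []):
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                back[v] = (u, gait)
                heapq.heappush(pq, (dist[v], v))
    # Reconstruct the polygon sequence with the gait chosen per edge.
    path, node = [], goal
    while node != start:
        prev, gait = back[node]
        path.append((prev, node, gait))
        node = prev
    return dist[goal], list(reversed(path))

# Hypothetical mesh: flat floor polygons linked by a stairway polygon.
adj = {
    "floor1_a": [("floor1_b", 1.0, "trot"), ("stairs", 2.0, "walk")],
    "floor1_b": [("stairs", 4.0, "walk")],
    "stairs": [("floor2_a", 2.0, "walk")],
}
cost, path = plan(adj, "floor1_a", "floor2_a")
```

Because the graph has one node per polygon rather than thousands of samples, queries of this kind stay cheap even for large multi-floor sites.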

Kidnapped Radar: Topological Radar Localisation using Rotationally-Invariant Metric Learning

  • [PDF] S. Saftescu, M. Gadd, D. De Martini, D. Barnes, and P. Newman, “Kidnapped Radar: Topological Radar Localisation using Rotationally-Invariant Metric Learning,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, 2020.
    [Bibtex]
    @InProceedings{KidnappedRadarArXiv,
    author = {Saftescu, Stefan and Gadd, Matthew and De Martini, Daniele and Barnes, Dan and Newman, Paul},
    title = {{Kidnapped Radar: Topological Radar Localisation using Rotationally-Invariant Metric Learning}},
    booktitle = {{Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)}},
    url = {https://arxiv.org/abs/2001.09438},
    pdf = {https://arxiv.org/pdf/2001.09438.pdf},
    address = {Paris},
    year = {2020},
    }

This paper presents a system for robust, large-scale topological localisation using Frequency-Modulated Continuous-Wave (FMCW) scanning radar. We learn a metric space for embedding polar radar scans using CNN and NetVLAD architectures traditionally applied to the visual domain. However, we tailor the feature extraction to better suit the polar nature of radar scan formation using cylindrical convolutions, anti-aliasing blurring, and azimuth-wise max-pooling, all in order to bolster the rotational invariance. The enforced metric space is then used to encode a reference trajectory, serving as a map, which is queried for nearest neighbours (NNs) for recognition of places at run-time. We demonstrate the performance of our topological localisation system over the course of many repeat forays using the largest radar-focused mobile autonomy dataset released to date, totalling 280 km of urban driving, a small portion of which we also use to learn the weights of the modified architecture. As this work represents a novel application for FMCW radar, we analyse the utility of the proposed method via a comprehensive set of metrics which provide insight into the efficacy when used in a realistic system, showing improved performance over the root architecture even in the face of random rotational perturbation.
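
The effect of azimuth-wise max-pooling is straightforward to demonstrate: a rotation of the sensor cyclically permutes the azimuth rows of a polar scan, and a per-range-bin maximum over azimuths is unaffected by that permutation. A toy sketch on an invented scan:

```python
def azimuth_maxpool(scan):
    """Collapse the azimuth axis by taking, for each range bin, the
    maximum over all azimuths; rotating the sensor only permutes the
    azimuth rows, which leaves this per-column maximum unchanged."""
    n_bins = len(scan[0])
    return [max(row[j] for row in scan) for j in range(n_bins)]

# Toy polar "scan": 4 azimuths x 5 range bins.
scan = [
    [0, 3, 0, 1, 0],
    [2, 0, 0, 0, 5],
    [0, 0, 4, 0, 0],
    [1, 0, 0, 2, 0],
]
rotated = scan[2:] + scan[:2]  # a pure rotation cyclically shifts the rows
```

In the full architecture this pooling sits on top of cylindrical convolutions, so the learned features themselves respect the same circular structure.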

RSL-Net: Localising in Satellite Images from a Radar on the Ground

  • [PDF] [DOI] T. Y. Tang, D. De Martini, D. Barnes, and P. Newman, “RSL-Net: Localising in Satellite Images From a Radar on the Ground,” IEEE Robotics and Automation Letters, vol. 5, iss. 2, pp. 1087-1094, 2020.
    [Bibtex]
    @article{tang2020rsl,
    title={{RSL-Net: Localising in Satellite Images From a Radar on the Ground}},
    author={Tang, Tim Y and De Martini, Daniele and Barnes, Dan and Newman, Paul},
    journal={IEEE Robotics and Automation Letters},
    year={2020},
    volume={5},
    number={2},
    pages={1087-1094},
    keywords={Autonomous vehicle navigation;deep learning in robotics and automation;localization;range sensing},
    doi={10.1109/LRA.2020.2965907},
    ISSN={2377-3774},
    month={April},
    url={https://ieeexplore.ieee.org/document/8957240},
    pdf={https://ieeexplore.ieee.org/document/8957240},
    }

This letter is about localising a vehicle in an overhead image using FMCW radar mounted on a ground vehicle. FMCW radar offers extraordinary promise and efficacy for vehicle localisation. It is impervious to all weather types and lighting conditions. However, the complexity of the interactions between millimetre radar waves and the physical environment makes it a challenging domain. Infrastructure-free large-scale radar-based localisation is in its infancy. Typically here a map is built and suitable techniques, compatible with the nature of the sensor, are brought to bear. In this work we eschew the need for a radar-based map; instead we simply use an overhead image, a resource readily available everywhere. This letter introduces a method that not only naturally deals with the complexity of the signal type but does so in the context of cross modal processing.

Weakly Supervised Vehicle Detection using Radar Labels

  • [PDF] S. Chadwick and P. Newman, “Radar as a Teacher: Weakly Supervised Vehicle Detection using Radar Labels,” in IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020.
    [Bibtex]
    @InProceedings{2020ICRA_chadwick,
    author = {Chadwick, Simon and Newman, Paul},
    title = {Radar as a Teacher: Weakly Supervised Vehicle Detection using Radar Labels},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    address = {Paris, France},
    year = {2020},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/Relabel_ICRA2020.pdf},
    }

It has been demonstrated that the performance of an object detector degrades when it is used outside the domain of the data used to train it. However, obtaining training data for a new domain can be time consuming and expensive. In this work we demonstrate how a radar can be used to generate plentiful (but noisy) training data for image-based vehicle detection. We then show that the performance of a detector trained using the noisy labels can be considerably improved through a combination of noise-aware training techniques and relabelling of the training data using a second viewpoint. In our experiments, using our proposed process improves average precision by more than 17 percentage points when training from scratch and 10 percentage points when fine-tuning a pre-trained model.
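
The relabelling step can be caricatured with plain IoU filtering: keep a noisy radar-derived box only if it is corroborated by a detection from the second viewpoint. A toy sketch (the paper's actual procedure is more involved; box coordinates here are invented):

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def relabel(noisy_boxes, second_view_boxes, min_iou=0.5):
    """Keep a radar-generated training box only if a detection from the
    second viewpoint overlaps it sufficiently."""
    return [b for b in noisy_boxes
            if any(iou(b, d) >= min_iou for d in second_view_boxes)]
```

Filtering of this kind trades label recall for precision, which is the trade-off the noise-aware training techniques are designed to balance.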

Variational Inference for Predictive and Reactive Controllers

  • [PDF] M. Baioumy, M. Mattamala, and N. Hawes, “Variational Inference for Predictive and Reactive Controllers,” in ICRA 2020 Workshop on New advances in Brain-inspired Perception, Interaction and Learning, Paris, France, 2020.
    [Bibtex]
    @InProceedings{2020ICRA_baioumy,
    author = {Baioumy, Mohamed and Mattamala, Matias and Hawes, Nick},
    title = {Variational Inference for Predictive and Reactive Controllers},
    booktitle = {ICRA 2020 Workshop on New advances in Brain-inspired Perception, Interaction and Learning},
    address = {Paris, France},
    year = {2020},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/Brain-PIL_2020_paper_7.pdf},
    }

Active inference is a general framework for decision-making, prominent in neuroscience, that utilizes variational inference. Recent work in robotics has adopted this framework for control and state estimation; however, these approaches provide a form of ‘reactive’ control which fails to track fast-moving reference trajectories. In this work, we present a variational inference predictive controller. Given a reference trajectory, the controller uses its forward dynamic model to predict future states and chooses appropriate actions. Furthermore, we highlight limitations of the reactive controller, such as the dependency between estimation and control.
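
The reactive-versus-predictive distinction shows up even on a single-integrator toy plant tracking a ramp: a controller that aims at the current reference lags by one step forever, while one that aims at the model-predicted next reference tracks exactly. A minimal sketch (not the paper's variational formulation; the plant and gains are invented):

```python
def simulate(reference, lookahead):
    """Single-integrator plant x' = x + u tracking a reference sequence.
    lookahead=0 is the 'reactive' controller (aims at the current
    reference); lookahead=1 aims at the predicted next reference."""
    x, errors = 0.0, []
    for t in range(len(reference) - 1):
        target = reference[min(t + lookahead, len(reference) - 1)]
        u = target - x          # proportional control with unit gain
        x = x + u               # plant update
        errors.append(abs(reference[t + 1] - x))
    return errors

ramp = [float(t) for t in range(6)]   # a fast-moving (ramp) reference
reactive_err = simulate(ramp, lookahead=0)
predictive_err = simulate(ramp, lookahead=1)
```

The persistent one-step lag of the reactive controller is the failure mode described above; using the forward model to predict the reference removes it.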