Masking by Moving: Learning Robust, Distraction-Free Radar Odometry from Pose Information

Abstract – This paper presents an end-to-end radar odometry system that delivers robust, real-time pose estimates based on a learned embedding space free of sensing artefacts and distractor objects. The system deploys a fully differentiable, correlation-based radar matching approach, which provides the same level of interpretability as established scan-matching methods and allows for a principled derivation of uncertainty estimates. The system is trained in a (self-)supervised way using only previously obtained pose information as a training signal. Using 280km of urban driving data, we demonstrate that our approach outperforms the previous state of the art in radar odometry, reducing errors by up to 68% whilst running an order of magnitude faster.
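To make the correlation-based matching idea concrete, the sketch below estimates a planar translation between two masked Cartesian radar scans via dense cross-correlation followed by a soft-argmax, which is what makes the pipeline fully differentiable and yields a distribution usable for uncertainty. This is a minimal illustration in PyTorch, not the paper's implementation: the function name correlation_pose, the square-scan assumption, and the use of precomputed masks (in the paper these would come from a learned network) are all assumptions for the example.

    import torch
    import torch.nn.functional as F

    def correlation_pose(scan_a, scan_b, mask_a, mask_b):
        """Differentiable translation estimate via dense cross-correlation.

        scan_*: (1, 1, H, H) square Cartesian radar scans.
        mask_*: values in [0, 1] suppressing artefacts/distractors; in the
        paper's approach these would come from a network trained end-to-end
        against ground-truth poses (hypothetical inputs here).
        """
        a = scan_a * mask_a
        b = scan_b * mask_b
        # Using one masked scan as the convolution kernel scores every
        # spatial shift of the other: a dense cross-correlation volume.
        pad = b.shape[-1] // 2
        corr = F.conv2d(a, b, padding=pad)  # (1, 1, 2*pad+1, 2*pad+1)
        # A softmax over all shifts turns scores into a probability
        # distribution, which stays differentiable and doubles as an
        # uncertainty estimate over the match.
        prob = F.softmax(corr.flatten(), dim=0).view_as(corr)
        # Soft-argmax: the expected shift under that distribution.
        h, w = corr.shape[-2:]
        ys = torch.arange(h, dtype=prob.dtype).view(1, 1, h, 1) - h // 2
        xs = torch.arange(w, dtype=prob.dtype).view(1, 1, 1, w) - w // 2
        dy = (prob * ys).sum()  # shift in pixels; scale by grid resolution for metres
        dx = (prob * xs).sum()
        return dx, dy, prob

The full system also recovers rotation; a common way to extend a sketch like this, in the spirit of classical correlative scan matching, is to repeat the correlation over a discretised set of rotations of one scan and take the soft-argmax over the joint volume.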

Dataset – For this paper we use our recently released Radar Odometry Dataset.

Further Info – For more experimental details, please read our paper and watch the project video below.

  • [PDF] D. Barnes, R. Weston, and I. Posner, “Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information,” arXiv preprint arXiv:1909.03752, 2019 (link).
    [Bibtex]
    @article{MaskingByMovingArXiv,
      author  = {Barnes, Dan and Weston, Rob and Posner, Ingmar},
      title   = {Masking by Moving: Learning Distraction-Free Radar Odometry from Pose Information},
      journal = {arXiv preprint arXiv:1909.03752},
      url     = {https://arxiv.org/pdf/1909.03752},
      pdf     = {https://arxiv.org/pdf/1909.03752.pdf},
      year    = {2019}
    }