Mark Yourself: Road Marking Segmentation via Weakly-Supervised Annotations from Multimodal Data

Abstract – This paper presents a weakly-supervised learning system for real-time road marking detection in images of complex urban environments obtained from a monocular camera. We avoid expensive manual labelling by exploiting additional sensor modalities to generate large quantities of annotated images in a weakly-supervised way, which are then used to train a deep semantic segmentation network. At run time, the system detects road markings in real time across a variety of traffic situations and under different lighting and weather conditions, without relying on any preprocessing steps or predefined models. We achieve reliable qualitative performance on the Oxford RobotCar dataset, and demonstrate quantitatively on the CamVid dataset that exploiting these annotations significantly reduces the required labelling effort and improves performance.
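As a rough illustration of the weak-labelling idea (a minimal sketch only, not the paper's actual pipeline): road markings are highly retroreflective, so projecting LiDAR returns into the camera image and thresholding their reflectance yields candidate road-marking pixels without manual annotation. All function and parameter names below (`weak_marking_mask`, `refl_thresh`) are hypothetical, and the point cloud is assumed to already be expressed in the camera frame.

```python
import numpy as np

def weak_marking_mask(points_cam, reflectance, K, img_shape, refl_thresh=0.8):
    """Project 3D points (camera frame) into the image with intrinsics K,
    and label pixels whose LiDAR reflectance exceeds refl_thresh as
    candidate road-marking annotations.

    points_cam: (N, 3) array, reflectance: (N,) array in [0, 1].
    Returns a (H, W) uint8 mask with 1 at weakly-labelled marking pixels.
    """
    H, W = img_shape
    mask = np.zeros((H, W), dtype=np.uint8)

    # Keep only points in front of the camera.
    in_front = points_cam[:, 2] > 0
    pts = points_cam[in_front]
    refl = reflectance[in_front]

    # Pinhole projection: homogeneous image coordinates, then dehomogenise.
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)

    # Accept points that land inside the image and are strongly reflective.
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (refl > refl_thresh)
    mask[v[valid], u[valid]] = 1
    return mask
```

Masks produced this way are noisy (hence "weakly" supervised), but generated at scale they suffice as training targets for a segmentation network.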

Further Info – For more experimental details, please read our paper:

  • [PDF] T. Bruls, W. Maddern, A. A. Morye, and P. Newman, “Mark Yourself: Road Marking Segmentation via Weakly-Supervised Annotations from Multimodal Data,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018.
    [Bibtex]
    @InProceedings{2018ICRA_bruls,
      author    = {Tom Bruls and Will Maddern and Akshay A. Morye and Paul Newman},
      title     = {Mark Yourself: Road Marking Segmentation via Weakly-Supervised Annotations from Multimodal Data},
      booktitle = {2018 IEEE International Conference on Robotics and Automation (ICRA)},
      year      = {2018},
      address   = {Brisbane, Australia},
      month     = {May},
      pdf       = {http://www.robots.ox.ac.uk/~mobile/Papers/2018ICRA_bruls.pdf},
    }

For a quick overview, take a look at our video: