We have published a number of datasets. Under each heading you will find the associated paper, along with a link to the dataset's website where relevant. We also make some data from other publications available as-is.


New College Vision and Laser Dataset

  • [PDF] [DOI] M. Smith, I. Baldwin, W. Churchill, R. Paul, and P. Newman, “The New College Vision and Laser Data Set,” in The International Journal of Robotics Research 2009, vol. 28, no. 5, pp. 595-599, ISSN: 0278-3649, DOI: 10.1177/0278364909103911, keywords: “Data Paper and 3D Laser Acquisition and Fusing Vision and Laser” (link).
    [Bibtex]
    @Article{SmithEtAl:IJRR09,
    author = {Mike Smith and Ian Baldwin and Winston Churchill and Rohan Paul and Paul Newman},
    title = {The New College Vision and Laser Data Set},
    journal = {The International Journal of Robotics Research},
    year = {2009},
    volume = {28},
    number = {5},
    pages = {595--599},
    month = {May},
    issn = {0278-3649},
    note = {Data Papers -- Peer Reviewed Publication of High Quality Data Sets},
    bdsk-url-1 = {http://www.robots.ox.ac.uk/NewCollegeData/},
    bdsk-url-2 = {http://dx.doi.org/10.1177/0278364909103911},
    doi = {10.1177/0278364909103911},
    pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/IJRRDataPaper09.pdf},
    keywords = {Data Paper and 3D Laser Acquisition and Fusing Vision and Laser},
    url = {http://www.robots.ox.ac.uk/NewCollegeData/},
    }

FABMAP Multimedia Extension Dataset

FABMAP 10k and 100k word vocabularies can be requested here.

  • [PDF] [DOI] M. Cummins and P. Newman, “FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance,” in The International Journal of Robotics Research 2008, vol. 27, no. 6, pp. 647-665, ePrint: http://ijr.sagepub.com/cgi/reprint/27/6/647.pdf, DOI: 10.1177/0278364908090961, keywords: “FABMAP and Loop Closure Bayesian Appearance and FABMAP Dataset” (link).
    [Bibtex]
    @Article{CumminsIJRR08,
    author = {Mark Cummins and Paul Newman},
    title = {{FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance}},
    journal = {The International Journal of Robotics Research},
    year = {2008},
    volume = {27},
    number = {6},
    pages = {647--665},
    abstract = {This paper describes a probabilistic approach to the problem of recognizing
    places based on their appearance. The system we present is not limited
    to localization, but can determine that a new observation comes from
    a previously unseen place, and so augment its map. Effectively this
    is a SLAM system in the space of appearance. Our probabilistic approach
    allows us to explicitly account for perceptual aliasing in the environment--identical
    but indistinctive observations receive a low probability of having
    come from the same place. We achieve this by learning a generative
    model of place appearance. By partitioning the learning problem into
    two parts, new place models can be learned online from only a single
    observation of a place. The algorithm complexity is linear in the
    number of places in the map, and is particularly suitable for online
    loop closure detection in mobile robotics.},
    bdsk-url-1 = {http://ijr.sagepub.com/cgi/content/abstract/27/6/647},
    bdsk-url-2 = {http://dx.doi.org/10.1177/0278364908090961},
    doi = {10.1177/0278364908090961},
    eprint = {http://ijr.sagepub.com/cgi/reprint/27/6/647.pdf},
    pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/IJRR_2008_FabMap.pdf},
    keywords = {FABMAP and Loop Closure Bayesian Appearance and FABMAP Dataset},
    url = {http://ijr.sagepub.com/cgi/content/abstract/27/6/647},
    }
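
The abstract above outlines the core idea: a generative model of place appearance with an explicit "new place" hypothesis, evaluated over observations of visual words. The sketch below illustrates that idea with a deliberately simplified independent (naive-Bayes) observation model; the full FAB-MAP system instead uses a Chow-Liu tree over word co-occurrences, so all names, the uniform new-place likelihood, and the prior split here are illustrative assumptions only.

    import numpy as np

    def place_posterior(observation, place_models, prior, p_new=0.1):
        """Posterior over known places plus a trailing 'new place' hypothesis.

        observation  : (n_words,) binary vector of visual-word detections
        place_models : (n_places, n_words) per-place Bernoulli word probabilities
        prior        : (n_places,) prior over known places, summing to 1 - p_new
        Simplified naive-Bayes stand-in for FAB-MAP's Chow-Liu tree model.
        """
        obs = observation.astype(bool)
        # Log-likelihood of the observation under each known place model.
        log_lik = np.where(obs, np.log(place_models),
                           np.log1p(-place_models)).sum(axis=1)
        # Crude unseen-place likelihood: every word is an unbiased coin flip.
        log_lik_new = obs.size * np.log(0.5)
        log_post = np.append(np.log(prior) + log_lik, np.log(p_new) + log_lik_new)
        log_post -= log_post.max()          # stabilise before exponentiating
        post = np.exp(log_post)
        return post / post.sum()            # last entry = P(new place | obs)

    # Example: two known places over a four-word vocabulary.
    models = np.array([[0.9, 0.1, 0.8, 0.2],
                       [0.1, 0.9, 0.2, 0.8]])
    prior = np.array([0.45, 0.45])          # leaves 0.1 for the new-place hypothesis
    z = np.array([1, 0, 1, 0])
    print(place_posterior(z, models, prior))  # most mass on place 0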

Data used in “Navigating, Recognising and Describing Urban Spaces With Vision and Laser”

(ftp) Alog File For November 7th 2008 (180M)

Note: Alogs can be parsed with the software provided alongside The New College Vision and Laser Dataset (see above). If you are prompted for a username and password, use the following:
User: Anonymous
Pass: (leave blank)
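
For scripted downloads, this kind of anonymous FTP access can be driven from Python's standard library. The host and remote path below are placeholders (the real address sits behind the (ftp) link above), so substitute them before running:

    from ftplib import FTP

    # Placeholder host and path -- use the address behind the (ftp) link above.
    HOST = "example.robots.ox.ac.uk"
    REMOTE_PATH = "path/to/alog_2008_11_07.alog"

    ftp = FTP(HOST)
    ftp.login(user="Anonymous", passwd="")  # anonymous login, empty password
    with open("alog_2008_11_07.alog", "wb") as f:
        ftp.retrbinary(f"RETR {REMOTE_PATH}", f.write)
    ftp.quit()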

  • [PDF] [DOI] P. Newman, G. Sibley, M. Smith, M. Cummins, A. Harrison, C. Mei, I. Posner, R. Shade, D. Schroeter, L. Murphy, W. Churchill, D. Cole, and I. Reid, “Navigating, Recognising and Describing Urban Spaces With Vision and Laser,” in The International Journal of Robotics Research 2009, vol. 28, DOI: 10.1177/0278364909341483, keywords: “Urban Classification”.
    [Bibtex]
    @Article{NewmanEtAlIJRR09,
    author = {Paul Newman and Gabe Sibley and Mike Smith and Mark Cummins and Alastair Harrison and Christopher Mei and Ingmar Posner and Robbie Shade and Derik Schroeter and Liz Murphy and Winston Churchill and Dave Cole and Ian Reid},
    title = {Navigating, Recognising and Describing Urban Spaces With Vision and Laser},
    journal = {The International Journal of Robotics Research},
    year = {2009},
    volume = {28},
    month = {October},
    bdsk-url-1 = {http://dx.doi.org/10.1177/0278364909341483},
    doi = {10.1177/0278364909341483},
    pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/1406-1433 ijr-341483_Low.pdf},
    keywords = {Urban Classification, journal_posner},
    }

RobotCar Dataset

The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over the course of a year. The dataset covers many different combinations of weather, traffic and pedestrians, along with longer-term changes such as construction and roadworks.

You can find it here: http://robotcar-dataset.robots.ox.ac.uk/
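
Each traversal ships one directory per sensor plus a timestamps index for that sensor. As a minimal sketch, the helper below pairs timestamps with image files, assuming the commonly documented layout of one "<microsecond-timestamp> <chunk-id>" pair per line and frames stored as "<timestamp>.png"; check the dataset documentation for the authoritative format, and note the example paths are hypothetical:

    from pathlib import Path

    def indexed_images(sensor_dir: Path, timestamps_file: Path):
        """Yield (timestamp_us, image_path) pairs for one camera.

        Assumes each line of the timestamps file reads
        '<microsecond-timestamp> <chunk-id>' and that frames are stored
        as '<timestamp>.png' in sensor_dir -- verify against the docs.
        """
        for line in timestamps_file.read_text().splitlines():
            parts = line.split()
            if not parts:
                continue
            ts = int(parts[0])
            img = sensor_dir / f"{ts}.png"
            if img.exists():
                yield ts, img

    # Hypothetical traversal name and sensor layout:
    # for ts, img in indexed_images(Path("2014-05-06-12-54-54/stereo/centre"),
    #                               Path("2014-05-06-12-54-54/stereo.timestamps")):
    #     process(ts, img)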

  • [PDF] [DOI] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, “1 Year, 1000 km: The Oxford RobotCar dataset,” in The International Journal of Robotics Research 2017, vol. 36, no. 1, pp. 3-15, ePrint: http://dx.doi.org/10.1177/0278364916679498, DOI: 10.1177/0278364916679498 (link).
    [Bibtex]
    @Article{MaddernIJRR2016,
    author = {Maddern, Will and Pascoe, Geoffrey and Linegar, Chris and Newman, Paul},
    title = {1 Year, 1000 km: The Oxford RobotCar dataset},
    journal = {The International Journal of Robotics Research},
    year = {2017},
    volume = {36},
    number = {1},
    pages = {3--15},
    abstract = {We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average using the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including heavy rain, night, direct sunlight and snow. Road and building works over the period of a year significantly changed sections of the route from the beginning to the end of data collection. By frequently traversing the same route over the period of a year we enable research investigating long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments. The full dataset is available for download at: http://robotcar-dataset.robots.ox.ac.uk},
    doi = {10.1177/0278364916679498},
    eprint = {http://dx.doi.org/10.1177/0278364916679498},
    pdf = {http://robotcar-dataset.robots.ox.ac.uk/images/robotcar_ijrr.pdf},
    url = {http://dx.doi.org/10.1177/0278364916679498},
    }

Radar RobotCar Dataset

The Oxford Radar RobotCar Dataset is a new dataset for researching scene understanding using millimetre-wave FMCW scanning radar data. The target application is autonomous vehicles, where this modality remains unencumbered by environmental conditions such as fog, rain, snow, or lens flare, which typically challenge other sensor modalities such as vision and LIDAR.

You can find it here: http://ori.ox.ac.uk/datasets/radar-robotcar-dataset
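
Scanning radar returns arrive as a polar array of azimuths by range bins, and most scene-understanding pipelines first resample this into a Cartesian bird's-eye-view grid. Below is a minimal nearest-neighbour sketch of that conversion; the array shape, angular convention (row 0 at azimuth 0, increasing counter-clockwise) and resolution are illustrative assumptions, not the dataset's actual format:

    import numpy as np

    def polar_to_cartesian(scan, cart_size=500, max_range_m=100.0):
        """Resample a polar radar scan of shape (n_azimuths, n_range_bins)
        onto a square Cartesian grid with the sensor at the centre."""
        n_az, n_bins = scan.shape
        coords = np.linspace(-max_range_m, max_range_m, cart_size)
        xx, yy = np.meshgrid(coords, coords)
        rng = np.hypot(xx, yy)                         # metres from sensor
        az = np.mod(np.arctan2(yy, xx), 2 * np.pi)     # wrap to [0, 2*pi)
        az_idx = np.round(az / (2 * np.pi) * n_az).astype(int) % n_az
        rng_idx = np.round(rng / max_range_m * (n_bins - 1)).astype(int)
        inside = rng_idx < n_bins                      # grid corners exceed range
        cart = np.where(inside,
                        scan[az_idx, np.clip(rng_idx, 0, n_bins - 1)], 0.0)
        return cart

    # Example with a random stand-in scan (400 azimuths x 3768 range bins):
    bev = polar_to_cartesian(np.random.rand(400, 3768))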

  • [PDF] D. Barnes, M. Gadd, P. Murcutt, P. Newman, and I. Posner, “The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset,” arXiv preprint arXiv:1909.01300, 2019 (link).
    [Bibtex]
    @Article{RadarRobotCarDatasetArXiv,
    author = {Barnes, Dan and Gadd, Matthew and Murcutt, Paul and Newman, Paul and Posner, Ingmar},
    title = {The Oxford Radar RobotCar Dataset: A Radar Extension to the Oxford RobotCar Dataset},
    journal = {arXiv preprint arXiv:1909.01300},
    year = {2019},
    pdf = {https://arxiv.org/pdf/1909.01300.pdf},
    url = {https://arxiv.org/pdf/1909.01300},
    }

Vote3D – Example Dataset

An example dataset to run Vote3D on, demonstrating training and testing a detector with KITTI data: https://ori.ox.ac.uk/vote3d-example-dataset/
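
KITTI object labels are plain-text files with one object per line: type, truncation, occlusion, alpha, a 2D bounding box (4 values), 3D dimensions (3), 3D location (3) and rotation_y. A small parser sketch for those label files; the dataclass and field names are our own:

    from dataclasses import dataclass

    @dataclass
    class KittiObject:
        type: str          # e.g. 'Car', 'Pedestrian', 'Cyclist', 'DontCare'
        truncated: float   # 0.0 (fully in view) .. 1.0 (fully truncated)
        occluded: int      # 0 visible, 1 partly, 2 largely occluded, 3 unknown
        alpha: float       # observation angle [-pi, pi]
        bbox: tuple        # 2D box (left, top, right, bottom) in pixels
        dimensions: tuple  # 3D size (height, width, length) in metres
        location: tuple    # 3D position (x, y, z) in camera coordinates
        rotation_y: float  # yaw around the camera Y axis [-pi, pi]

    def read_kitti_labels(path):
        """Parse one KITTI label file into a list of KittiObject records."""
        objects = []
        with open(path) as f:
            for line in f:
                v = line.split()
                if not v:
                    continue
                objects.append(KittiObject(
                    type=v[0],
                    truncated=float(v[1]),
                    occluded=int(float(v[2])),
                    alpha=float(v[3]),
                    bbox=tuple(map(float, v[4:8])),
                    dimensions=tuple(map(float, v[8:11])),
                    location=tuple(map(float, v[11:14])),
                    rotation_y=float(v[14]),
                ))
        return objects

    # Example: cars = [o for o in read_kitti_labels("000000.txt") if o.type == "Car"]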