We have published a number of datasets. Under each heading you will find the associated paper, along with a link to the dataset's website where relevant. We also make some data from other publications available as-is.


New College Vision and Laser Dataset

  • [PDF] [DOI] M. Smith, I. Baldwin, W. Churchill, R. Paul, and P. Newman, “The New College Vision and Laser Data Set.” in The International Journal of Robotics Research 2009, vol. 28, pp. 595-599, ISSN: 0921-8890, DOI: 10.1177/0278364909103911, keywords: “Data Paper and 3D Laser Acquisition and Fusing Vision and Laser” (link).
    [Bibtex]
    @article{SmithEtAl:IJRR09,
    Author = {Mike Smith and Ian Baldwin and Winston Churchill and Rohan Paul and Paul Newman},
    Doi = {10.1177/0278364909103911},
    Issn = {0921-8890},
    Journal = {The International Journal of Robotics Research},
    Keywords = {Data Paper and 3D Laser Acquisition and Fusing Vision and Laser},
    Month = {May},
    Note = {Data Papers - Peer Reviewed Publication of High Quality Data Sets},
    Number = {5},
    Pages = {595 - 599},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/IJRRDataPaper09.pdf},
    Title = {The New College Vision and Laser Data Set},
    Url = {http://www.robots.ox.ac.uk/NewCollegeData/},
    Volume = {28},
    Year = {2009},
    Bdsk-Url-1 = {http://www.robots.ox.ac.uk/NewCollegeData/},
    Bdsk-Url-2 = {http://dx.doi.org/10.1177/0278364909103911}}

FABMAP Multimedia Extension Dataset

FABMAP 10k and 100k word vocabularies can be requested here.

  • [PDF] [DOI] M. Cummins and P. Newman, “FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance.” in The International Journal of Robotics Research 2008, vol. 27, pp. 647-665, ePrint: http://ijr.sagepub.com/cgi/reprint/27/6/647.pdf, DOI: 10.1177/0278364908090961, keywords: “FABMAP and Loop Closure Bayesian Appearance and FABMAP Dataset” (link).
    [Bibtex]
    @article{CumminsIJRR08,
    Abstract = {This paper describes a probabilistic approach to the problem of recognizing
    places based on their appearance. The system we present is not limited
    to localization, but can determine that a new observation comes from
    a previously unseen place, and so augment its map. Effectively this
    is a SLAM system in the space of appearance. Our probabilistic approach
    allows us to explicitly account for perceptual aliasing in the environment--identical
    but indistinctive observations receive a low probability of having
    come from the same place. We achieve this by learning a generative
    model of place appearance. By partitioning the learning problem into
    two parts, new place models can be learned online from only a single
    observation of a place. The algorithm complexity is linear in the
    number of places in the map, and is particularly suitable for online
    loop closure detection in mobile robotics.},
    Author = {Mark Cummins and Paul Newman},
    Doi = {10.1177/0278364908090961},
    Eprint = {http://ijr.sagepub.com/cgi/reprint/27/6/647.pdf},
    Journal = {The International Journal of Robotics Research},
    Keywords = {FABMAP and Loop Closure Bayesian Appearance and FABMAP Dataset},
    Number = {6},
    Pages = {647-665},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/IJRR_2008_FabMap.pdf},
    Title = {{FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance}},
    Url = {http://ijr.sagepub.com/cgi/content/abstract/27/6/647},
    Volume = {27},
    Year = {2008},
    Bdsk-Url-1 = {http://ijr.sagepub.com/cgi/content/abstract/27/6/647},
    Bdsk-Url-2 = {http://dx.doi.org/10.1177/0278364908090961}}
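
The abstract above describes recognising places purely from appearance, with an explicit “new place” hypothesis so the map can be augmented online. As a rough illustration of that idea only (FAB-MAP itself learns a Chow-Liu tree over word co-occurrences and a detector model, none of which appear here), the sketch below scores a binary bag-of-visual-words observation against each known place and against a new-place hypothesis using a naive-Bayes word model. All names and constants are made up for illustration.

    # Simplified appearance-only place recognition, in the spirit of FAB-MAP.
    # NOT the real model: visual words are treated as independent (naive Bayes)
    # rather than via a Chow-Liu tree, and the detector model is omitted.
    # All constants below are illustrative.
    import numpy as np

    VOCAB_SIZE = 128        # visual-word vocabulary size (illustrative)
    PRIOR_WORD_FREQ = 0.05  # marginal probability of a word firing (illustrative)
    P_NEW_PLACE = 0.1       # prior probability that an observation is a new place

    place_models = []       # one Bernoulli parameter vector per known place

    def observation_likelihood(z, word_probs):
        """P(z | place) under an independent-Bernoulli word model."""
        p = np.where(z == 1, word_probs, 1.0 - word_probs)
        return float(np.prod(p))

    def localise_or_add(z):
        """Posterior over 'same as place i' vs 'new place'; augments the map if new."""
        likelihoods = [observation_likelihood(z, m) for m in place_models]
        # New-place hypothesis: score the observation under an 'average' place model.
        new_place_lik = observation_likelihood(z, np.full(VOCAB_SIZE, PRIOR_WORD_FREQ))

        prior_old = (1.0 - P_NEW_PLACE) / max(len(place_models), 1)
        unnorm = np.array([l * prior_old for l in likelihoods] + [new_place_lik * P_NEW_PLACE])
        posterior = unnorm / unnorm.sum()

        if int(np.argmax(posterior)) == len(place_models):
            # New place wins: add a smoothed model learned from this single observation.
            place_models.append(np.clip(0.5 * z + 0.5 * PRIOR_WORD_FREQ, 1e-3, 1.0 - 1e-3))
        return posterior

    # Usage: feed binary word-occurrence vectors extracted from each image.
    z = (np.random.rand(VOCAB_SIZE) < PRIOR_WORD_FREQ).astype(float)
    print(localise_or_add(z))

The point of the sketch is only the structure of the decision: compare the observation likelihood under each existing place model against a new-place hypothesis, then update whichever wins.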

Data used in “Navigating, Recognising and Describing Urban Spaces With Vision and Laser”

(ftp) Alog File For November 7th 2008 (180M)

Note: Alogs can be parsed by the software available from The New College Vision and Laser Dataset page (see above). If you are prompted for a username and password, use the following:
User: Anonymous
Pass: (leave blank)
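
If you prefer to script the download, the same anonymous login works programmatically. Below is a minimal sketch using Python's standard ftplib; the host name and remote path are placeholders, since the actual address sits behind the “(ftp)” link above.

    # Minimal anonymous-FTP download sketch. The host and path are placeholders;
    # substitute the address behind the "(ftp)" link above.
    from ftplib import FTP

    HOST = "ftp.example.org"                 # placeholder FTP host
    REMOTE_PATH = "path/to/alog_file.alog"   # placeholder path to the alog

    ftp = FTP(HOST)
    ftp.login(user="Anonymous", passwd="")   # anonymous login, empty password
    with open("alog_file.alog", "wb") as f:
        ftp.retrbinary(f"RETR {REMOTE_PATH}", f.write)
    ftp.quit()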

  • [PDF] [DOI] P. Newman, G. Sibley, M. Smith, M. Cummins, A. Harrison, C. Mei, I. Posner, R. Shade, D. Schroeter, L. Murphy, W. Churchill, D. Cole, and I. Reid, “Navigating, Recognising and Describing Urban Spaces With Vision and Laser.” in The International Journal of Robotics Research 2009, vol. 28, DOI: 10.1177/0278364909341483, keywords: “Urban Classification, journal_posner”.
    [Bibtex]
    @article{NewmanEtAlIJRR09,
    Author = {Paul Newman and Gabe Sibley and Mike Smith and Mark Cummins and Alastair Harrison and Christopher Mei and Ingmar Posner and Robbie Shade and Derik Schroeter and Liz Murphy and Winston Churchill and Dave Cole and Ian Reid},
    Doi = {10.1177/0278364909341483},
    Journal = {The International Journal of Robotics Research},
    Keywords = {Urban Classification, journal_posner},
    Month = {October},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/1406-1433 ijr-341483_Low.pdf},
    Title = {Navigating, Recognising and Describing Urban Spaces With Vision and Laser},
    Volume = {28},
    Year = {2009},
    Bdsk-Url-1 = {http://dx.doi.org/10.1177/0278364909341483}}

RobotCar Dataset

The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of more than a year. The dataset captures many different combinations of weather, traffic and pedestrians, along with longer-term changes such as construction and roadworks.

You can find it here: http://robotcar-dataset.robots.ox.ac.uk/

  • [PDF] [DOI] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, “1 Year, 1000 km: The Oxford RobotCar dataset.” in The International Journal of Robotics Research 2017, vol. 36, pp. 3-15, ePrint: http://dx.doi.org/10.1177/0278364916679498, DOI: 10.1177/0278364916679498 (link).
    [Bibtex]
    @article{MaddernIJRR2016,
    author = {Maddern, Will and Pascoe, Geoffrey and Linegar, Chris and Newman, Paul},
    title = {1 Year, 1000 km: The Oxford RobotCar dataset},
    year = {2017},
    doi = {10.1177/0278364916679498},
    abstract ={We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average using the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including heavy rain, night, direct sunlight and snow. Road and building works over the period of a year significantly changed sections of the route from the beginning to the end of data collection. By frequently traversing the same route over the period of a year we enable research investigating long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments. The full dataset is available for download at: http://robotcar-dataset.robots.ox.ac.uk},
    URL = {http://dx.doi.org/10.1177/0278364916679498},
    eprint = {http://dx.doi.org/10.1177/0278364916679498},
    journal = {The International Journal of Robotics Research},
    volume = {36},
    number = {1},
    pages = {3--15},
    Pdf = {http://robotcar-dataset.robots.ox.ac.uk/images/robotcar_ijrr.pdf}
    }

Vote3D – Example Dataset

An example dataset to run Vote3D on, demonstrating how to train and test a detector with KITTI data: https://ori.ox.ac.uk/vote3d-example-dataset/
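
For orientation, KITTI's object-detection ground truth is stored as plain-text label files with one object per line: class, truncation, occlusion, observation angle, a 2D bounding box, 3D dimensions, 3D location in camera coordinates, and yaw. The sketch below parses such a file; the path in the usage example is a placeholder, and the code is illustrative only, not part of Vote3D.

    # Sketch: parse a KITTI object-detection label file (one object per line).
    # Illustrative only; the path in the usage example is a placeholder.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class KittiObject:
        obj_type: str       # e.g. 'Car', 'Pedestrian', 'Cyclist'
        truncated: float    # 0 (fully visible) .. 1 (fully truncated)
        occluded: int       # 0..3 occlusion state
        alpha: float        # observation angle [-pi, pi]
        bbox: List[float]   # 2D box: left, top, right, bottom (pixels)
        dims: List[float]   # 3D size: height, width, length (metres)
        loc: List[float]    # 3D centre in camera coordinates: x, y, z (metres)
        rotation_y: float   # yaw around the camera Y axis [-pi, pi]

    def read_label_file(path: str) -> List[KittiObject]:
        objects = []
        with open(path) as f:
            for line in f:
                v = line.split()
                objects.append(KittiObject(
                    obj_type=v[0],
                    truncated=float(v[1]),
                    occluded=int(float(v[2])),
                    alpha=float(v[3]),
                    bbox=[float(x) for x in v[4:8]],
                    dims=[float(x) for x in v[8:11]],
                    loc=[float(x) for x in v[11:14]],
                    rotation_y=float(v[14]),
                ))
        return objects

    # Usage (placeholder path):
    # for obj in read_label_file("training/label_2/000000.txt"):
    #     print(obj.obj_type, obj.loc)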