Much of our localisation, perception and mapping work uses multiple sensors. Sometimes this means a simple set of cameras, but frequently it means combinations of inertial, laser and camera sensors. These sensors are deployed across the physical vehicle, and we cannot be sure a priori of their exact locations relative to one another. This matters because without very accurate knowledge of this spatial calibration we cannot hope to leverage the advantages of multi-sensor perception.
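A small sketch of why this spatial calibration is so important. Here a point sensed in a laser's frame is mapped into a camera's frame by a rigid-body extrinsic transform, and we compare the true transform against one with just a one-degree yaw error; all names and numbers are illustrative assumptions, not our actual sensor layout:

```python
import numpy as np

def make_transform(yaw_rad, translation):
    """4x4 homogeneous rigid-body transform: yaw rotation plus translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

# Hypothetical laser-to-camera extrinsics, and a version with a 1-degree
# yaw error (the translation is identical in both).
T_true = make_transform(0.0, [0.5, 0.0, 0.2])
T_bad = make_transform(np.deg2rad(1.0), [0.5, 0.0, 0.2])

# A laser return 30 m ahead of the vehicle, in homogeneous coordinates.
point_laser = np.array([30.0, 0.0, 0.0, 1.0])

# Displacement of the point in the camera frame caused by the yaw error.
err = np.linalg.norm((T_true @ point_laser - T_bad @ point_laser)[:3])
print(f"{err:.2f} m displacement at 30 m range")
```

The point: a rotational error that looks tiny at the sensor (one degree) displaces a 30 m point by roughly half a metre, which is why even small extrinsic errors undermine multi-sensor fusion.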
There is also the issue of temporal calibration – how do we estimate the time lags between multiple sensors, and how do we track the rate at which those lags change?
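One common way to estimate such a lag, sketched below, is to cross-correlate two signals that observe the same underlying motion (for instance, yaw rate seen by an IMU and by visual odometry) and take the peak. This is a generic illustration under synthetic data, not a description of our own method; all names are assumptions:

```python
import numpy as np

def estimate_time_offset(sig_a, sig_b, dt):
    """Return the delay (seconds) of sig_b relative to sig_a via the
    peak of their cross-correlation. Assumes a common sample period dt."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    # Index of the correlation peak, relative to zero lag.
    lag_samples = np.argmax(corr) - (len(a) - 1)
    return lag_samples * dt

# Synthetic demo: both sensors see the same motion, sampled at 100 Hz,
# but sensor B is delayed by 5 samples (0.05 s).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
motion = np.sin(2 * np.pi * 0.5 * t)
sig_a = motion
sig_b = np.roll(motion, 5)  # delayed copy of the same motion

print(estimate_time_offset(sig_a, sig_b, dt))
```

In practice the signals are noisy and sampled at different rates, and the lag itself drifts, so a real system re-estimates this continually rather than once.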
These spatial and temporal calibration tasks are vital to our endeavour. We need to be able to calibrate any set of sensors, and to do so on a continual basis – we call this life-long calibration. We do not want to calibrate just once and hope for the best for all time to come. We want to be continually watching for opportunities to improve our calibrations and to spot changes in them.
Our latest research