Kinect Monte Carlo Localization (KMCL) localizes in real time within a simple plane-based 3D building model, using only RGB-D data from a low-cost Microsoft Kinect.
First, an entire building floor of the Stata Center was captured using a Kinect. From this data (and sensor poses from Lidar-based SLAM) a simple plane-based 3D model of about 1 MB was built using PCL. We then developed a robust algorithm for real-time localization within the model using a particle filter.
The approach works by generating simulated range images from the 3D model for a virtual camera placed at each particle pose. This correctly simulates the image formation process and allows for a disparity-parameterized likelihood function, which in turn lets the filter correctly utilize the very noisy RGB-D points up to 20 meters away – data that has usually been discarded until now.
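The idea behind a disparity-parameterized likelihood can be sketched as follows: Kinect depth noise grows roughly quadratically with range, but is approximately constant in disparity space, so comparing measured and simulated range images in disparity gives distant points a fair weighting. This is a minimal illustrative sketch, not the paper's implementation; the constants (`BASELINE_FOCAL`, `SIGMA_DISPARITY`) and the function names are assumptions.

```python
import numpy as np

# Hypothetical Kinect-like constants; the actual system's values may differ.
BASELINE_FOCAL = 0.075 * 580.0  # stereo baseline (m) x focal length (px)
SIGMA_DISPARITY = 0.5           # approx. constant noise in disparity space (px)

def depth_to_disparity(depth):
    """Convert a depth image (metres) to disparity (pixels)."""
    return BASELINE_FOCAL / np.maximum(depth, 1e-6)

def particle_log_likelihood(measured_depth, simulated_depth):
    """Score one particle by comparing measured and simulated range
    images in disparity space, where the sensor noise is roughly
    constant regardless of range."""
    d_meas = depth_to_disparity(measured_depth)
    d_sim = depth_to_disparity(simulated_depth)
    # Only score pixels with valid depth returns.
    valid = np.isfinite(measured_depth) & (measured_depth > 0)
    err = (d_meas - d_sim)[valid]
    # Gaussian log-likelihood with constant sigma in disparity space.
    return -0.5 * np.sum((err / SIGMA_DISPARITY) ** 2)
```

In depth space the same Gaussian would need a range-dependent sigma; working in disparity keeps the noise model simple even for returns 20 m away.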
Particles are propagated using the FOVIS Visual Odometry library – so no wheel odometry or IMU was required.
The approach is implicitly robust to dynamic objects and people in the environment, and given the use of an RGB-D sensor, such challenges could also be handled explicitly with dedicated detectors.
As the approach utilizes the GPU for scene rendering and depth buffering (via OpenGL), it is efficient, allowing real-time operation with hundreds of particles. This permits broad exploration of the particle cloud around the source location, resulting in robust 6-DOF localization. The approach is designed to be applicable not just to robotics but also to other applications such as wearable computing.
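Putting the pieces together, one filter update propagates each particle by the visual-odometry increment, re-weights it against the measured range image, and resamples. The sketch below is a hypothetical simplification (a 3-vector pose instead of full 6-DOF, a callback instead of GPU rendering); `localize_step`, `render_view`, and `likelihood` are names assumed here, not from the paper's code.

```python
import numpy as np

def localize_step(particles, vo_delta, measured_depth,
                  render_view, likelihood, motion_noise=0.02):
    """One Monte Carlo localization update (illustrative sketch).

    particles:  (n, d) array of poses (the real system uses 6-DOF poses)
    vo_delta:   (d,) pose increment from visual odometry (e.g. FOVIS)
    render_view: callable(pose) -> simulated range image (GPU in the
                 real system, via OpenGL depth buffering)
    likelihood: callable(measured, simulated) -> log-likelihood
    """
    n = len(particles)
    # 1. Propagate: apply the VO increment plus diffusion noise.
    particles = particles + vo_delta + \
        np.random.normal(0.0, motion_noise, particles.shape)
    # 2. Weight: score a simulated range image at each particle pose.
    logw = np.array([likelihood(measured_depth, render_view(p))
                     for p in particles])
    logw -= logw.max()                 # avoid underflow
    weights = np.exp(logw)
    weights /= weights.sum()
    # 3. Low-variance (systematic) resampling.
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]
```

Because each particle's weighting only needs a rendered depth image and a per-pixel comparison, the expensive step parallelizes naturally on the GPU, which is what makes hundreds of particles feasible in real time.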
- May 2012: Nominated for the ICRA 2012 best conference paper award (4 nominees from 2000 submissions).
- Maurice F. Fallon, Hordur Johannsson and John Leonard, ‘Efficient Scene Simulation for Robust Monte Carlo Localization using an RGB-D Camera’, ICRA, St. Paul, Minnesota, USA, May 2012.
Raw Kinect Video:
ICRA 2012 Overview Video and localizing a quadrotor flying: