Today, mobile robots are expected to carry out increasingly complex tasks in diverse, real-world environments. These tasks often require a degree of semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest: the robot must be able to detect these objects reliably in order to interpret the instructions correctly.
This paper presents an online planning algorithm that learns an explicit model of the spatial dependence of object detection and generates plans that maximise the expected detection performance, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We evaluate our approach on door and text detection tasks in indoor environments and demonstrate a significant improvement in detection performance during task execution over alternative methods, in both simulated and real-robot experiments.
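To make the role of spatial correlation concrete, the sketch below models the probability of at least one successful detection over a set of measurement locations. It is an illustrative toy, not the paper's learned sensor model: the exponential correlation function, the Kish-style effective sample size, and all parameter values (`p_single`, `length_scale`) are assumptions chosen for clarity. The key behaviour it captures is that repeated measurements at the same spot add no information, while spatially separated measurements approach the independent case.

```python
import math
from itertools import combinations

def corr(d, length_scale=2.0):
    # Hypothetical exponential correlation between two measurements
    # taken a distance d apart; length_scale is an assumed parameter.
    return math.exp(-d / length_scale)

def detection_prob(positions, p_single=0.6, length_scale=2.0):
    """Probability of at least one successful detection over a set of
    1-D measurement locations, discounting correlated (nearby)
    measurements via an effective sample size.  A simplified stand-in
    for a spatially correlated sensor model, not the paper's method."""
    n = len(positions)
    if n == 0:
        return 0.0
    if n == 1:
        return p_single
    # Average pairwise correlation over all measurement pairs.
    pairs = list(combinations(positions, 2))
    rho_bar = sum(corr(abs(a - b), length_scale) for a, b in pairs) / len(pairs)
    # Kish-style effective number of independent measurements:
    # n repeats at one spot (rho_bar = 1) collapse to a single sample.
    n_eff = n / (1.0 + (n - 1) * rho_bar)
    return 1.0 - (1.0 - p_single) ** n_eff

# Three measurements at the same location are worth one (returns p_single),
# while three well-separated measurements yield a much higher probability.
print(detection_prob([0.0, 0.0, 0.0]))
print(detection_prob([0.0, 10.0, 20.0]))
```

A planner that assumed independent measurements would overestimate the value of re-observing the same location; a model of this shape instead rewards viewpoints that add genuinely new information.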
The figure opposite illustrates (b) the trajectory executed by the robotic wheelchair shown in (a), following planned waypoints from start ‘S’ to goal ‘G’. Along the way, the robot discovers one true door (cyan). Near the goal, it detects two more candidate doors (red dots), detours to inspect them, and correctly concludes that they are not doors.
a) Robotic wheelchair platform equipped with onboard laser range scanners and a stereo camera
b) Trajectory executed during a real-world trial, following planned waypoints from ‘S’ to ‘G’