This blog post provides an overview of our paper “Inferring Road Boundaries Through and Despite Traffic” by Tarlan Suleymanov, Paul Amayo and Paul Newman, which has been accepted for publication at the 21st IEEE International Conference on Intelligent Transportation Systems (ITSC) 2018.
In the context of autonomous driving, road boundaries play a vital role as they legally and intentionally delimit driveable space. They provide information for navigation, path planning and mapping, and can be used as a reference structure for accurate lateral positioning of the vehicle on a road. Additionally, road boundary detection is a crucial component of Advanced Driving Assistance Systems (ADAS) such as parking assist systems. Knowing where the road ends is always useful, but other road users often block the view of the road boundaries. Driving in urban scenes requires innate reasoning about unseen regions, and our goal here is to trace out, in an image, the projection of road boundaries irrespective of whether or not the boundary is actually visible.
In recent years, machine learning has achieved state-of-the-art performance in segmentation and object detection problems. However, many existing image segmentation or object detection methods do not explicitly infer the geometry of road boundaries; rather, they segment the road itself. Road boundary detection from monocular images using deep learning has not been addressed in depth in the literature. The small width and elongated shape of road boundaries, and their lack of a clear form in either appearance or geometry, make detection challenging for state-of-the-art deep models. The presence of occluding obstacles makes this even more challenging. At this year’s International Conference on Intelligent Transportation Systems in Hawaii, I will present a deep learning based approach that relies on a single camera image and is capable of detecting visible road boundaries and estimating the positions of occluded ones behind other road users (cars, cyclists, pedestrians), without making assumptions about the structure, shape or colour of road boundaries or occluding obstacles. Our CNN-based models treat road boundary detection as a coupled problem, solving for visible and occluded curve partitions with a continuity constraint. Moreover, to train our deep models we propose a framework that enables swift accumulation of ground truth masks for visible and occluded targets.
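The coupled formulation above can be illustrated with a toy training objective: two per-pixel masks (visible and occluded boundary) are supervised separately, while a penalty ties them together by requiring their union to trace a continuous curve. Everything here — the function names, the binary cross-entropy terms, and the simple column-wise continuity penalty — is an illustrative sketch, not the loss actually used in the paper:

```python
import numpy as np

def coupled_boundary_loss(pred_visible, pred_occluded,
                          gt_visible, gt_occluded, lam=0.1):
    """Toy coupled loss for two per-pixel boundary masks.

    Illustrative only: per-class binary cross-entropy plus a
    continuity penalty on the union of the two predicted curves.
    """
    eps = 1e-7

    def bce(pred, gt):
        pred = np.clip(pred, eps, 1.0 - eps)
        return -np.mean(gt * np.log(pred) + (1 - gt) * np.log(1 - pred))

    # The union of visible and occluded predictions should form one
    # continuous curve: penalise image columns where neither class
    # responds, so occluded segments must "fill the gaps" left by
    # the visible ones.
    union = np.maximum(pred_visible, pred_occluded)
    column_response = union.max(axis=0)          # strongest response per column
    continuity_penalty = np.mean(1.0 - column_response)

    return (bce(pred_visible, gt_visible)
            + bce(pred_occluded, gt_occluded)
            + lam * continuity_penalty)
```

With this coupling, predicting only the visible partition and leaving the occluded gap empty is penalised more than explaining the whole curve, which is the intuition behind solving the two partitions jointly rather than independently.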
The sample outputs below demonstrate the ability of the proposed approach to seamlessly handle occlusion and predict the position of road boundaries in scenarios of low to no occlusion (left column), some occlusion (middle column) and full occlusion (right column). The blue pixels are the visible road boundaries, while the yellow pixels represent the occluded boundaries.
We also apply a model selection stage which associates measurements with a minimal set of geometric primitives (cubic curves) representing road boundaries, where the number of primitives is not known a priori.
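One simple way to sketch such a model-selection step is a greedy, RANSAC-style loop: repeatedly fit a cubic to random minimal samples of the boundary measurements, keep the cubic with the most inliers, refit it on those inliers, and remove them, until too few points remain. The thresholds, the greedy strategy and the `y = f(x)` parameterisation below are all illustrative assumptions; the paper's actual model selection may differ:

```python
import numpy as np

def fit_cubic_curves(points, inlier_tol=2.0, min_inliers=8,
                     n_trials=100, rng=None):
    """Greedily explain 2D boundary measurements with an a-priori
    unknown number of cubic curves y = f(x).  Illustrative sketch only.

    Returns a list of degree-3 polynomial coefficient arrays.
    """
    rng = np.random.default_rng(rng)
    remaining = np.asarray(points, dtype=float)
    curves = []
    while len(remaining) >= min_inliers:
        best_inliers = None
        for _ in range(n_trials):
            # A cubic has 4 coefficients, so 4 points form a minimal sample.
            sample = remaining[rng.choice(len(remaining), 4, replace=False)]
            try:
                coeffs = np.polyfit(sample[:, 0], sample[:, 1], 3)
            except np.linalg.LinAlgError:
                continue
            residuals = np.abs(np.polyval(coeffs, remaining[:, 0])
                               - remaining[:, 1])
            inliers = residuals < inlier_tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers is None or best_inliers.sum() < min_inliers:
            break  # no primitive explains enough of the remaining points
        # Refit the cubic on all of its inliers, then remove them.
        coeffs = np.polyfit(remaining[best_inliers, 0],
                            remaining[best_inliers, 1], 3)
        curves.append(coeffs)
        remaining = remaining[~best_inliers]
    return curves
```

Because the loop keeps spawning new cubics only while enough unexplained measurements remain, the number of primitives falls out of the data rather than being fixed in advance, which mirrors the a-priori-unknown model count in our selection stage.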