Latest From The Blog


Laser Localization

What's the problem? This is a blog post designed for non-technical people to understand the basics of localization. As such, we are not going to touch any code, but rather interactively present you with examples of how these systems run and what makes them tick. Localization is an essential part of any autonomous vehicle. Nowadays, any self-driving car uses a combination of AI-driven tools to understand where it is, given some previously obtained map. But how do you obtain a map? Using sensors, of course - all autonomous cars have multiple cameras and lasers that record the environment around them. Think of these as a collection of pictures, like the one on the left, or a collection of points in space (right). These pictures or clouds are stored on the on-board computers of each vehicle and processed. Okay, I got the map, but we [...]

November 30th, 2018 | Categories: ORI Blog

What about radar?

Radar is awesome. When most people think of radar, they imagine a line sweeping around an old CRT display with its characteristic green hue - interrupted by approaching aircraft, or perhaps distant ships over the horizon. While our mental picture of cameras and GPS has kept pace with advancing hardware, our perceptions of radar are largely based on a sensor of the past. And even among roboticists, who should know a little more about available sensors, this technology is underutilised. However, radar has advanced far beyond crude object detection and is changing the way mobile robots navigate. Navtech’s FMCW scanning radar allows us to observe the world in 400 equally-spaced azimuthal slices, each containing 2000 range measurements. This equates to a circular 360-degree scan of the environment, 500 metres all around, at a resolution of 0.25 metres per bin. These 800,000 readings [...]
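To make that geometry concrete, here is a minimal sketch of how one such scan might be handled. It assumes the scan arrives as a 400 × 2000 NumPy array of power returns (a simplification for illustration, not Navtech’s actual interface) and converts the bins above a power threshold into Cartesian points:

```python
import numpy as np

# Assumed scan layout, following the figures quoted above:
# 400 azimuths x 2000 range bins, 0.25 m per bin (500 m maximum range).
NUM_AZIMUTHS = 400
NUM_BINS = 2000
BIN_SIZE_M = 0.25

def polar_to_cartesian(scan: np.ndarray, power_threshold: float) -> np.ndarray:
    """Convert one polar radar scan (400 x 2000 power returns) into an
    (N, 2) array of x, y points, one per bin above the power threshold."""
    assert scan.shape == (NUM_AZIMUTHS, NUM_BINS)
    # Bearing of each of the 400 equally-spaced azimuthal slices, in radians.
    azimuths = np.linspace(0.0, 2.0 * np.pi, NUM_AZIMUTHS, endpoint=False)
    # Range to the centre of each 0.25 m bin.
    ranges = (np.arange(NUM_BINS) + 0.5) * BIN_SIZE_M
    az_idx, bin_idx = np.nonzero(scan > power_threshold)
    r, theta = ranges[bin_idx], azimuths[az_idx]
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

# Example with a random stand-in for one 360-degree scan.
scan = np.random.rand(NUM_AZIMUTHS, NUM_BINS)
points = polar_to_cartesian(scan, power_threshold=0.99)
print(points.shape)  # (N, 2) points, anywhere up to 500 m from the sensor
```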

November 21st, 2018 | Categories: ORI Blog

Mission Planning for Mobile Robots with Probabilistic Guarantees

Consider an office robot that must execute a range of tasks such as guiding visitors to offices; checking whether fire exits are clear; carrying and delivering items as requested by office users; and making announcements (“staff meeting in 10 minutes!”). A crucial requirement for such a robot is to be in the right place at the right time. One strand of research in the GOALS Lab addresses mission planning techniques for robots in this context. We have considered long-term (weeks to months) deployments of mobile service robots in human-populated environments, and the use of probabilistic model checking techniques to generate mobile robot policies with formal performance guarantees. Such guarantees typically cover notions such as safety, robustness and efficiency. Safety means that robot behaviour must not reach “bad” situations (e.g., colliding with something, driving down steps). Robustness means systems must be [...]
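To give a flavour of what a formal performance guarantee is, the toy sketch below (an illustration only, not the GOALS Lab’s tooling) computes, for a three-state Markov chain under a fixed policy, the probability that the robot eventually reaches its goal rather than a bad state. A probabilistic model checker verifies properties like “the goal is reached with probability at least 0.75” over far larger models:

```python
import numpy as np

# Toy Markov chain for a robot under a fixed policy.
# States: 0 = corridor, 1 = office (goal), 2 = blocked exit (bad).
P = np.array([
    [0.80, 0.15, 0.05],   # transition probabilities from the corridor
    [0.00, 1.00, 0.00],   # the goal state is absorbing
    [0.00, 0.00, 1.00],   # the bad state is absorbing
])
GOAL, BAD = 1, 2

def prob_reach_goal(P: np.ndarray, iters: int = 1000) -> np.ndarray:
    """Fixed-point iteration for P(eventually reach GOAL) from each state."""
    p = np.zeros(len(P))
    p[GOAL] = 1.0
    for _ in range(iters):
        p = P @ p                     # one-step lookahead
        p[GOAL], p[BAD] = 1.0, 0.0    # absorbing states keep their values
    return p

print(prob_reach_goal(P)[0])  # 0.75: "reach the goal with probability >= 0.75"
```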

September 26th, 2018 | Categories: GOALS, ORI Blog, Publications, yr_2019

Multimotion Visual Odometry

Just as we use our eyes to perceive and navigate the world, many autonomous vehicles and systems rely on cameras to observe their environment. As we move through the world, we watch as it seems to move past us. We’ve long understood how to accurately estimate this egomotion (i.e., the motion of the camera) relative to the static world from a sequence of images. This process, known as visual odometry (VO), is fundamental to robotic navigation. One of the most complex aspects of our world is its motion. Not only do our environments change over time, but most of the things we do (and want to automate) involve moving around and interacting with other dynamic objects and agents. Traditionally, VO systems ignore the dynamic parts of a scene, focusing only on the motion of the camera, but the ability to isolate and estimate [...]
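For readers curious what estimating egomotion looks like in practice, here is a minimal two-frame sketch using OpenCV. It implements the classical static-world flavour of VO, with assumed camera intrinsics, and is not the multimotion system this post describes:

```python
import cv2
import numpy as np

# Assumed pinhole camera intrinsics (focal length and principal point).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def estimate_egomotion(img1, img2):
    """Recover camera rotation R and (unit-scale) translation t between
    two consecutive 8-bit grayscale images."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outliers -- implicitly discarding, rather than
    # estimating, points that belong to moving objects.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Note how the dynamic parts of the scene are simply treated as outliers here; isolating and estimating those motions as well is exactly what the post is about.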

September 10th, 2018 | Categories: ORI Blog

Road Boundary Detection

This blog post provides an overview of our paper “Inferring Road Boundaries Through and Despite Traffic” by Tarlan Suleymanov, Paul Amayo and Paul Newman, which has been accepted for publication at the 21st IEEE International Conference on Intelligent Transportation Systems (ITSC) 2018. In the context of autonomous driving, road boundaries play a vital role, as they legally and intentionally delimit driveable space. They provide information for navigation, path planning and mapping, and can be used as a reference structure for accurate lateral vehicle positioning on a road. Additionally, road boundary detection is a crucial component of ADAS (Advanced Driver Assistance Systems) such as parking assist systems. Knowing where the road ends is always good, but road users block the view of the road boundaries. Driving in urban scenes requires innate reasoning about unseen regions, and our goal here is to trace out, in an image, the [...]

August 17th, 2018 | Categories: MRG Highlights, ORI Blog