
Multimotion Visual Odometry

Just as we use our eyes to perceive and navigate the world, many autonomous vehicles and systems rely on cameras to observe their environment. As we move through the world, we can watch as the world seems to move past us. We’ve long understood how to accurately estimate this egomotion (i.e., the motion of a camera) relative to the static world from a sequence of images. This process, known as visual odometry (VO), is fundamental to robotic navigation.

One of the most complex aspects of our world is its motion. Not only do our environments change over time, but most of the things we do (and want to automate) involve moving around and interacting with other dynamic objects and agents. Traditionally, VO systems ignore the dynamic parts of a scene, focusing only on the motion of the camera, but the ability to isolate and estimate each of the motions within a scene is essential for an autonomous agent to successfully navigate its environment. This presents a challenging chicken-and-egg problem: segmenting a scene into independent motions requires knowledge of those motions, but estimating the constituent motions in a scene requires knowledge of its segmentation.

To address this challenge, we developed a multimotion visual odometry (MVO) pipeline that applies state-of-the-art techniques to estimate trajectories for every motion in a scene. MVO extends the traditional VO pipeline with multimodel fitting algorithms and batch estimation techniques to simultaneously estimate the trajectories of all [...]
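The full MVO pipeline works on SE(3) trajectories with sophisticated multimodel fitting and batch estimation, which is well beyond a blog snippet. As a rough illustration of the underlying segment-then-estimate idea, though, here is a minimal 2D sketch using sequential RANSAC: repeatedly hypothesize a rigid motion from a small sample of point correspondences, collect its inliers as one motion label, and remove them before searching for the next motion. All function names, thresholds, and parameters below are illustrative choices, not part of MVO itself.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rigid motion (R, t) mapping src points onto dst
    points, via the Kabsch/Procrustes method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def sequential_ransac(src, dst, n_iters=200, tol=0.05, min_inliers=3, seed=None):
    """Greedily segment correspondences into rigid motions.

    Returns per-point labels (-1 = unassigned) and a list of (R, t) motions.
    """
    rng = np.random.default_rng(seed)
    remaining = np.arange(len(src))
    labels = -np.ones(len(src), dtype=int)
    motions = []
    while len(remaining) >= min_inliers:
        best = None
        for _ in range(n_iters):
            sample = rng.choice(remaining, size=2, replace=False)
            R, t = fit_rigid_2d(src[sample], dst[sample])
            err = np.linalg.norm(dst[remaining] - (src[remaining] @ R.T + t), axis=1)
            inliers = remaining[err < tol]
            if best is None or len(inliers) > len(best):
                best = inliers
        if best is None or len(best) < min_inliers:
            break
        # Refit the motion on all of its inliers, then remove them.
        motions.append(fit_rigid_2d(src[best], dst[best]))
        labels[best] = len(motions) - 1
        remaining = np.setdiff1d(remaining, best)
    return labels, motions
```

For example, feeding in a mix of static points and points sharing a common translation recovers two motion models and a consistent labelling. Note that this greedy approach is exactly what MVO improves upon: sequential RANSAC commits to one motion at a time, whereas MVO's multimodel fitting reasons about all motions jointly.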

September 10th, 2018 | ORI Blog