A2I


This category contains all pages and news posts related to A2I.

For research topics and papers that fall under AI, use the Topics → AI category instead.

Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects

We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout a sequence of frames, and can also generate future frames conditioned on the current frame, thereby simulating the expected motion of objects. [...]

2019-11-03

Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

We present a self-supervised approach to ignoring "distractors" in camera images for the purpose of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each [...]

2019-03-27

TACO: Learning Task Decomposition via Temporal Alignment for Control

Many advanced Learning from Demonstration (LfD) methods consider the decomposition of complex, real-world tasks into simpler sub-tasks. By reusing the corresponding sub-policies within and between tasks, they provide training data for each policy from different high-level tasks and compose them to perform novel ones. Existing [...]

2019-11-04