Publication (ECCV 2020): STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos

This article was published at the European Conference on Computer Vision (ECCV 2020).

Abstract

Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. Code and models are available at https://github.com/sabarim/STEm-Seg.
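To make the core idea concrete, the sketch below illustrates one simplified way such a clustering objective over a 3D spatio-temporal volume can be formulated: each instance is represented by the mean of its pixel embeddings across the whole clip, and a Gaussian kernel with a predicted bandwidth assigns every pixel a membership probability. This is a minimal PyTorch sketch under assumed tensor shapes, with a plain binary cross-entropy term standing in for the paper's actual losses; the function name `spatio_temporal_clustering_loss` and all shapes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def spatio_temporal_clustering_loss(embeddings, sigma, instance_masks):
    """Illustrative embedding-clustering loss over a video clip.

    embeddings:     (D, T, H, W) per-pixel embedding vectors
    sigma:          (T, H, W) per-pixel positive bandwidths
    instance_masks: (K, T, H, W) binary ground-truth masks, one per instance
    """
    D = embeddings.shape[0]
    emb = embeddings.reshape(D, -1)               # (D, THW)
    sig = sigma.reshape(-1)                       # (THW,)
    loss = 0.0
    for mask in instance_masks:
        m = mask.reshape(-1).float()              # (THW,)
        # Instance centroid: mean embedding over the entire clip,
        # so the same instance clusters across space AND time.
        center = (emb * m).sum(dim=1) / m.sum().clamp(min=1)  # (D,)
        s = (sig * m).sum() / m.sum().clamp(min=1)            # scalar bandwidth
        # Gaussian membership probability for every pixel in the volume.
        dist2 = ((emb - center[:, None]) ** 2).sum(dim=0)     # (THW,)
        prob = torch.exp(-dist2 / (2 * s ** 2 + 1e-8))
        # Pull this instance's pixels toward prob=1, push all others to 0.
        loss = loss + F.binary_cross_entropy(prob.clamp(1e-6, 1 - 1e-6), m)
    return loss / max(len(instance_masks), 1)
```

At inference, no ground-truth masks are available, so a proposal-free method of this kind instead seeds clusters from the predicted embeddings and bandwidths and assigns pixels by thresholding the same membership probability; see the repository linked above for the authors' actual procedure.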

Details

  • Title: STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos
  • Authors: Athar, Ali; Mahadevan, Sabarinath; Ošep, Aljoša; Leal-Taixé, Laura; Leibe, Bastian
  • Date of publication: 24 August 2020