RVMOS: Range-View Moving Object Segmentation Leveraged by Semantic and Motion Features

Jaeyeul Kim, Jungwan Woo, Sunghoon Im

Research output: Contribution to journal › Article › peer-review

38 Scopus citations

Abstract

Detecting traffic participants is an essential and long-standing problem in autonomous driving. Recently, the recognition of moving objects has emerged as a major issue in this field for safe driving. In this paper, we present RVMOS, a LiDAR Range-View-based Moving Object Segmentation framework that segments moving objects given a sequence of range-view images. In contrast to conventional methods, our network incorporates both motion and semantic features, which encode the motion of objects and their surrounding context, respectively. In addition, we introduce a new feature extraction module tailored to range-view images. Lastly, we introduce simple yet effective data augmentation methods: time interval modulation and zero residual image synthesis. With these contributions, we achieve 19% higher performance (mIoU) and 10% faster computation (34 FPS on an RTX 3090) than the state-of-the-art method on the SemanticKITTI benchmark. Extensive experiments demonstrate the effectiveness of our network design and data augmentation scheme.
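The abstract only names the motion cue and the two augmentations without defining them. As a rough illustration, the sketch below assumes the motion feature is a residual range image (the normalized per-pixel difference between the current range image and an earlier scan re-projected into the current sensor frame, as in prior range-view MOS work) and shows one plausible reading of "time interval modulation" and "zero residual image synthesis". All function names and the exact formula here are hypothetical illustrations, not the paper's definitions.

```python
import numpy as np

def residual_range_image(curr_range, past_range, eps=1e-6):
    """Normalized per-pixel range difference between two aligned range images.

    Assumption: past_range has already been re-projected into the current
    sensor frame (as in prior range-view MOS work); pixels with no return
    are encoded as 0. Inputs are float arrays of shape (H, W).
    """
    valid = (curr_range > 0) & (past_range > 0)
    res = np.zeros_like(curr_range)
    res[valid] = np.abs(curr_range[valid] - past_range[valid]) / (curr_range[valid] + eps)
    return res

def time_interval_modulation(range_seq, rng):
    """Hypothetical reading of 'time interval modulation': sample a random
    temporal gap so the network sees residuals computed over varying intervals.
    range_seq[0] is the current frame; later entries are earlier, aligned scans."""
    k = rng.integers(1, len(range_seq))
    return residual_range_image(range_seq[0], range_seq[k])

def zero_residual_synthesis(curr_range):
    """Hypothetical reading of 'zero residual image synthesis': pair the current
    frame with itself, yielding an all-zero residual; every point should then
    be predicted as static."""
    return residual_range_image(curr_range, curr_range)

# Usage sketch with random stand-in data (two 64x2048 range images).
rng = np.random.default_rng(0)
seq = [rng.uniform(0.0, 80.0, size=(64, 2048)) for _ in range(3)]
res = time_interval_modulation(seq, rng)
zero_res = zero_residual_synthesis(seq[0])
```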

Original language: English
Pages (from-to): 8044-8051
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 3
DOIs
State: Published - 1 Jul 2022

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Autonomous driving
  • LiDAR
  • Moving object segmentation
  • Perception
  • Range-view
