Abstract
In this paper, we present the first large-scale synthetic dataset for visual perception in disaster scenarios, and analyze state-of-the-art methods on multiple computer vision tasks with reference baselines. We simulated before- and after-disaster scenarios, such as fire and building collapse, for fifteen different locations in realistic virtual worlds. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground truth for semantic segmentation, depth, optical flow, surface normal estimation, and camera pose estimation. To create realistic disaster scenes, we manually augmented the effects with 3D models using physically based graphics tools. We use our dataset to train state-of-the-art methods and evaluate how well they recognize disaster situations and produce reliable results on virtual scenes as well as real-world images. The outputs of each task are then fed into the proposed visual odometry network to generate 3D maps of buildings on fire. Finally, we discuss challenges for future research.
Original language | English |
---|---|
Title of host publication | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 187-194 |
Number of pages | 8 |
ISBN (Electronic) | 9781728140049 |
DOIs | |
State | Published - Nov 2019 |
Event | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 - Macau, China |
Duration | 3 Nov 2019 → 8 Nov 2019 |
Publication series
Name | IEEE International Conference on Intelligent Robots and Systems |
---|---|
ISSN (Print) | 2153-0858 |
ISSN (Electronic) | 2153-0866 |
Conference
Conference | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 |
---|---|
Country/Territory | China |
City | Macau |
Period | 3/11/19 → 8/11/19 |
Bibliographical note
Publisher Copyright: © 2019 IEEE.