MVMO: A MULTI-OBJECT DATASET FOR WIDE BASELINE MULTI-VIEW SEMANTIC SEGMENTATION

Publication date
2022
Publisher
IEEE Computer Society
Abstract
We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions that additional viewpoints could help resolve. We therefore expect MVMO to propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.
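The abstract describes each scene as 25 wide-baseline views, each paired with per-pixel semantic labels. As a rough illustration of how such a scene could be consumed, the Python sketch below pairs RGB renders with their segmentation masks for all views of one scene; the directory layout, file names, and the load_scene helper are hypothetical and are not specified by this record.

# Minimal sketch of a multi-view scene loader for an MVMO-style layout.
# The directory structure and file names below are assumptions made for
# illustration; the actual MVMO release may organize files differently.
from pathlib import Path
from typing import List, Tuple

from PIL import Image  # requires pillow

NUM_VIEWS = 25  # each MVMO scene is captured from 25 camera locations


def load_scene(root: Path, scene_id: str) -> List[Tuple[Image.Image, Image.Image]]:
    """Return (RGB render, semantic mask) pairs for every view of one scene.

    Assumed (hypothetical) layout:
        root/<scene_id>/rgb/view_00.png ... view_24.png
        root/<scene_id>/sem/view_00.png ... view_24.png
    """
    pairs = []
    for v in range(NUM_VIEWS):
        rgb = Image.open(root / scene_id / "rgb" / f"view_{v:02d}.png").convert("RGB")
        sem = Image.open(root / scene_id / "sem" / f"view_{v:02d}.png")  # class-index mask
        pairs.append((rgb, sem))
    return pairs


if __name__ == "__main__":
    # Example usage with a hypothetical dataset root and scene id.
    views = load_scene(Path("/data/mvmo"), "scene_000001")
    print(f"Loaded {len(views)} views")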
Description
Publisher Copyright: © 2022 IEEE.
Citation
Alvarez-Gila, A., van de Weijer, J., Wang, Y. & Garrote, E. 2022, 'MVMO: A multi-object dataset for wide baseline multi-view semantic segmentation', in 2022 IEEE International Conference on Image Processing, ICIP 2022 - Proceedings, Proceedings - International Conference on Image Processing, ICIP, IEEE Computer Society, pp. 1166-1170, 29th IEEE International Conference on Image Processing, ICIP 2022, Bordeaux, France, 16/10/22. https://doi.org/10.1109/ICIP46576.2022.9897955
conference