%0 Generic
%A Alvarez-Gila, Aitor
%A van de Weijer, Joost
%A Wang, Yaxing
%A Garrote, Estibaliz
%T MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation
%J Proceedings - International Conference on Image Processing, ICIP
%D 2022
%@ 1522-4880
%U https://hdl.handle.net/11556/2498
%X We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions, and additional viewpoints can supply the occluded information. We therefore expect MVMO to propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.
%~