Improving tag transfer for image annotation using visual and semantic information

Publication date
2014
Publisher
IEEE Computer Society
Abstract
This paper addresses the problem of image annotation using a combination of visual and semantic information. Our model involves two stages: a Nearest Neighbor computation and a tag transfer stage that collects the final annotations. For the latter stage, several algorithms have been proposed in the past that use label information or implicitly include some visual features. In this paper we propose a novel algorithm for tag transfer that explicitly takes advantage of both semantic and visual information. We also present a structured training procedure based on a concept we call Image Networking: all the images in a training database are 'connected' visually and semantically, so it is possible to exploit these connections to learn the tag transfer parameters at annotation time. This learning is local to the test image and exploits the information obtained in the Nearest Neighbor computation stage. We demonstrate that our approach achieves state-of-the-art performance on the ImageCLEF2011 dataset.
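To make the two-stage structure concrete, the following is a minimal, illustrative sketch of a generic nearest-neighbor tag-transfer pipeline of the kind the abstract describes. It is not the authors' algorithm: the combined distance, the fixed weight alpha, and all function and variable names are assumptions made for this example; in the paper the trade-off between visual and semantic information is learned locally per test image rather than fixed.

import numpy as np

def annotate(test_vis, test_sem, train_vis, train_sem, train_tags,
             k=5, alpha=0.5, n_tags=5):
    """Toy two-stage annotation (illustrative only):
    stage 1 retrieves the k nearest training images under a
    hypothetical combined visual+semantic distance, stage 2 transfers
    their tags with distance-based weights."""
    # Stage 1: nearest-neighbor computation with a fixed blend of the
    # two modalities (the paper learns this weighting locally instead).
    d_vis = np.linalg.norm(train_vis - test_vis, axis=1)
    d_sem = np.linalg.norm(train_sem - test_sem, axis=1)
    dist = alpha * d_vis + (1.0 - alpha) * d_sem
    neighbors = np.argsort(dist)[:k]

    # Stage 2: tag transfer; closer neighbors contribute more weight.
    scores = {}
    for i in neighbors:
        w = 1.0 / (1.0 + dist[i])
        for tag in train_tags[i]:
            scores[tag] = scores.get(tag, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:n_tags]

In this sketch the final annotations are simply the top-scoring transferred tags; the learned, per-image parameters described in the abstract would replace the fixed alpha and the ad hoc neighbor weighting.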
Citation
Rodriguez-Vaamonde, S., Torresani, L., Espinosa, K. & Garrote, E. 2014, 'Improving tag transfer for image annotation using visual and semantic information', in 2014 12th International Workshop on Content-Based Multimedia Indexing, CBMI 2014, article 6849846, Proceedings - International Workshop on Content-Based Multimedia Indexing, IEEE Computer Society, 12th International Workshop on Content-Based Multimedia Indexing, CBMI 2014, Klagenfurt, Austria, 18/06/14. https://doi.org/10.1109/CBMI.2014.6849846
conference