%0 Generic
%A Rodriguez-Vaamonde, Sergio
%A Torresani, Lorenzo
%A Espinosa, Koldo
%A Garrote, Estibaliz
%T Improving tag transfer for image annotation using visual and semantic information
%J Proceedings - International Workshop on Content-Based Multimedia Indexing
%D 2014
%@ 1949-3991
%U https://hdl.handle.net/11556/1534
%X This paper addresses the problem of image annotation using a combination of visual and semantic information. Our model involves two stages: a Nearest Neighbor computation and a tag transfer stage that collects the final annotations. For the latter stage, several algorithms have been implemented in the past using label information or implicitly including some visual features. In this paper we propose a novel algorithm for tag transfer that explicitly exploits both semantic and visual information. We also present a structured training procedure based on a concept we call Image Networking: all the images in a training database are 'connected' visually and semantically, so these connections can be exploited to learn the tag transfer parameters at annotation time. This learning is local to the test image and exploits the information obtained in the Nearest Neighbor computation stage. We demonstrate that our approach achieves state-of-the-art performance on the ImageCLEF2011 dataset.
%~