Authors: Maurtua, Iñaki; Fernández, Izaskun; Tellaeche, Alberto; Kildal, Johan; Susperregi, Loreto; Ibarguren, Aitor; Sierra, Basilio
Date: 2017-07
Citation: Maurtua, I., Fernández, I., Tellaeche, A., Kildal, J., Susperregi, L., Ibarguren, A. & Sierra, B. 2017, 'Natural multimodal communication for human-robot collaboration', International Journal of Advanced Robotic Systems, vol. 14, no. 4, pp. 1-12. https://doi.org/10.1177/1729881417716043
ISSN: 1729-8806
Publisher copyright: © The Author(s) 2017.

Abstract: This article presents a semantic approach for multimodal interaction between humans and industrial robots to enhance the dependability and naturalness of the collaboration between them in real industrial settings. The fusion of several interaction mechanisms is particularly relevant in industrial applications in which adverse environmental conditions might affect the performance of vision-based interaction (e.g. poor or changing lighting) or voice-based interaction (e.g. environmental noise). Our approach relies on the recognition of speech and gestures for the processing of requests, dealing with information that can potentially be contradictory or complementary. For disambiguation, it uses semantic technologies that describe the robot characteristics and capabilities as well as the context of the scenario. Although the proposed approach is generic and applicable in different scenarios, this article explains in detail how it has been implemented in two real industrial cases in which a robot and a worker collaborate in assembly and deburring operations.

Pages: 12
Language: English
Access rights: open access (info:eu-repo/semantics/openAccess)
Title: Natural multimodal communication for human-robot collaboration
Type: journal article
DOI: 10.1177/1729881417716043
Keywords: Collaborative robots; Fusion; Multimodal interaction; Natural communication; Reasoning; Safe human-robot collaboration; Semantic web technologies; Software; Computer Science Applications; Artificial Intelligence; SDG 9 - Industry, Innovation, and Infrastructure
Scopus: http://www.scopus.com/inward/record.url?scp=85027442420&partnerID=8YFLogxK
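
The abstract describes fusing speech and gesture inputs that may be contradictory or complementary, with semantic knowledge about the robot used for disambiguation. The sketch below illustrates that general idea only; it is not the authors' implementation, and all names (Hypothesis, ROBOT_CAPABILITIES, fuse) and the dictionary standing in for a semantic model are illustrative assumptions.

    # Minimal Python sketch, under stated assumptions: merge a speech hypothesis and a
    # gesture hypothesis into one command, then validate it against a toy capability model.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Hypothesis:
        action: Optional[str]   # e.g. "pick" or "deburr"; None if not recognized
        target: Optional[str]   # e.g. "part_A"; None if not recognized
        confidence: float       # recognizer confidence in [0, 1]

    # Toy stand-in for a semantic model: actions the robot can perform and valid targets.
    ROBOT_CAPABILITIES = {
        "pick": {"part_A", "part_B"},
        "deburr": {"part_A"},
    }

    def fuse(speech: Hypothesis, gesture: Hypothesis) -> Optional[dict]:
        """Merge complementary fields, resolve contradictions by confidence,
        and reject requests the capability model cannot satisfy."""
        fused = {}
        for field in ("action", "target"):
            s, g = getattr(speech, field), getattr(gesture, field)
            if s and g and s != g:
                # Contradictory inputs: keep the more confident modality.
                fused[field] = s if speech.confidence >= gesture.confidence else g
            else:
                # Complementary inputs: take whichever modality provided a value.
                fused[field] = s or g
        action, target = fused["action"], fused["target"]
        if action not in ROBOT_CAPABILITIES or target not in ROBOT_CAPABILITIES[action]:
            return None  # semantically invalid request; the worker would be asked to clarify
        return fused

    # Example: speech supplies the action, a pointing gesture supplies the target.
    print(fuse(Hypothesis("pick", None, 0.8), Hypothesis(None, "part_A", 0.9)))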