Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

dc.contributor.authorBarredo Arrieta, Alejandro
dc.contributor.authorDíaz-Rodríguez, Natalia
dc.contributor.authorDel Ser, Javier
dc.contributor.authorBennetot, Adrien
dc.contributor.authorTabik, Siham
dc.contributor.authorBarbado, Alberto
dc.contributor.authorGarcia, Salvador
dc.contributor.authorGil-Lopez, Sergio
dc.contributor.authorMolina, Daniel
dc.contributor.authorBenjamins, Richard
dc.contributor.authorChatila, Raja
dc.contributor.authorHerrera, Francisco
dc.contributor.institutionTecnalia Research & Innovation
dc.contributor.institutionIA
dc.date.accessioned2024-07-24T12:04:27Z
dc.date.available2024-07-24T12:04:27Z
dc.date.issued2020-06
dc.descriptionPublisher Copyright: © 2019
dc.description.abstractIn the last few years, Artificial Intelligence (AI) has achieved notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to occur soon in Machine Learning, the whole community stands before the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that was not present in the previous hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second, dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.en
dc.description.sponsorshipAlejandro Barredo-Arrieta, Javier Del Ser and Sergio Gil-Lopez would like to thank the Basque Government for the funding support received through the EMAITEK and ELKARTEK programs. Javier Del Ser also acknowledges funding support from the Consolidated Research Group MATHMODE (IT1294-19) granted by the Department of Education of the Basque Government. Siham Tabik, Salvador Garcia, Daniel Molina and Francisco Herrera would like to thank the Spanish Government for its funding support (SMART-DaSCI project, TIN2017-89517-P), as well as the BBVA Foundation through its Ayudas Fundación BBVA a Equipos de Investigación Científica 2018 call (DeepSCOP project). This work was also funded in part by the European Union's Horizon 2020 research and innovation programme AI4EU under grant agreement 825619. We also thank Chris Olah, Alexander Mordvintsev and Ludwig Schubert for allowing us to borrow images for illustration purposes. Part of this overview is inspired by a preliminary work on the concept of Responsible AI: R. Benjamins, A. Barbado, D. Sierra, “Responsible AI by Design”, to appear in the Proceedings of the Human-Centered AI: Trustworthiness of AI Models & Data (HAI) track at AAAI Fall Symposium, DC, November 7–9, 2019 [386].
dc.description.statusPeer reviewed
dc.format.extent34
dc.identifier.citationBarredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R. & Herrera, F. 2020, 'Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI', Information Fusion, vol. 58, pp. 82-115. https://doi.org/10.1016/j.inffus.2019.12.012
dc.identifier.doi10.1016/j.inffus.2019.12.012
dc.identifier.issn1566-2535
dc.identifier.urihttps://hdl.handle.net/11556/3455
dc.identifier.urlhttp://www.scopus.com/inward/record.url?scp=85077515399&partnerID=8YFLogxK
dc.language.isoeng
dc.relation.ispartofInformation Fusion
dc.relation.projectIDDepartment of Education of the Basque Government
dc.relation.projectIDEuropean Union's Horizon 2020 research and innovation programme AI4EU
dc.relation.projectIDSpanish Government, TIN2017-89517-P
dc.relation.projectIDFundación BBVA, FBBVA
dc.relation.projectIDHorizon 2020 Framework Programme, H2020, 825619
dc.relation.projectIDEusko Jaurlaritza, IT1294-19
dc.rightsinfo:eu-repo/semantics/openAccess
dc.subject.keywordsAccountability
dc.subject.keywordsComprehensibility
dc.subject.keywordsData Fusion
dc.subject.keywordsDeep Learning
dc.subject.keywordsExplainable Artificial Intelligence
dc.subject.keywordsFairness
dc.subject.keywordsInterpretability
dc.subject.keywordsMachine Learning
dc.subject.keywordsPrivacy
dc.subject.keywordsResponsible Artificial Intelligence
dc.subject.keywordsTransparency
dc.subject.keywordsSoftware
dc.subject.keywordsSignal Processing
dc.subject.keywordsInformation Systems
dc.subject.keywordsHardware and Architecture
dc.titleExplainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AIen
dc.typejournal article