Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
dc.contributor.author | Ali, Sajid | |
dc.contributor.author | Abuhmed, Tamer | |
dc.contributor.author | El-Sappagh, Shaker | |
dc.contributor.author | Muhammad, Khan | |
dc.contributor.author | Alonso-Moral, Jose M. | |
dc.contributor.author | Confalonieri, Roberto | |
dc.contributor.author | Guidotti, Riccardo | |
dc.contributor.author | Del Ser, Javier | |
dc.contributor.author | Díaz-Rodríguez, Natalia | |
dc.contributor.author | Herrera, Francisco | |
dc.contributor.institution | IA | |
dc.date.issued | 2023-11 | |
dc.description | Publisher Copyright: © 2023 The Author(s) | |
dc.description.abstract | Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. It is often essential to understand the reasoning behind an AI model's decision-making, and the need for eXplainable AI (XAI) methods to improve trust in AI models has therefore arisen. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but no review has examined the assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of the current research and trends in this rapidly emerging area, together with a case study example. The study starts by explaining the background of XAI and common definitions, and by summarizing recently proposed XAI techniques for supervised machine learning. The review divides XAI techniques into four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics as well as open-source packages and datasets, along with future research directions. Then, the significance of explainability in terms of legal demands, user viewpoints, and application orientation is outlined, termed XAI concerns. This paper advocates tailoring explanation content to specific user types. The examination of XAI techniques and their evaluation was conducted by reviewing 410 critical articles, published between January 2016 and October 2022 in reputed journals, using a wide range of research databases as sources of information.
The article is aimed at XAI researchers who are interested in making their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data. | en |
dc.description.sponsorship | This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C1011198), the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) under the ICT Creative Consilience Program (IITP-2021-2020-0-01821), and the AI Platform to Fully Adapt and Reflect Privacy-Policy Changes project (No. 2022-0-00688). | |
dc.description.status | Peer reviewed | |
dc.identifier.citation | Ali, S, Abuhmed, T, El-Sappagh, S, Muhammad, K, Alonso-Moral, J M, Confalonieri, R, Guidotti, R, Del Ser, J, Díaz-Rodríguez, N & Herrera, F 2023, 'Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence', Information Fusion, vol. 99, 101805. https://doi.org/10.1016/j.inffus.2023.101805 | |
dc.identifier.doi | 10.1016/j.inffus.2023.101805 | |
dc.identifier.issn | 1566-2535 | |
dc.identifier.url | http://www.scopus.com/inward/record.url?scp=85159601901&partnerID=8YFLogxK | |
dc.language.iso | eng | |
dc.relation.ispartof | Information Fusion | |
dc.relation.projectID | Institute for Information & communications Technology Planning & Evaluation, 2022-0-00688, IITP-2021-2020-0-01821 | |
dc.relation.projectID | Ministry of Science, ICT and Future Planning, MSIP, 2021R1A2C1011198 | |
dc.relation.projectID | National Research Foundation of Korea, NRF | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.subject.keywords | AI principles | |
dc.subject.keywords | Data Fusion | |
dc.subject.keywords | Deep Learning | |
dc.subject.keywords | Explainable Artificial Intelligence | |
dc.subject.keywords | Interpretable machine learning | |
dc.subject.keywords | Post-hoc explainability | |
dc.subject.keywords | Trustworthy AI | |
dc.subject.keywords | XAI assessment | |
dc.subject.keywords | Software | |
dc.subject.keywords | Signal Processing | |
dc.subject.keywords | Information Systems | |
dc.subject.keywords | Hardware and Architecture | |
dc.title | Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence | en |
dc.type | journal article |