Browsing by Keyword "Explainability"
Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence (2022-03)
Holzinger, Andreas; Dehmer, Matthias; Emmert-Streib, Frank; Cucchiara, Rita; Augenstein, Isabelle; Del Ser, Javier; Samek, Wojciech; Jurisica, Igor; Díaz-Rodríguez, Natalia

Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming humans at certain tasks. There is no doubt that AI is important to improving human health in many ways and will disrupt various medical workflows in the future. To use AI to solve problems in medicine beyond the lab, in routine environments, we need to do more than merely improve the performance of existing AI methods. Robust AI solutions must be able to cope with imprecision and with missing and incorrect information, and must explain both the result and the process of how it was obtained to a medical expert. Using conceptual knowledge as a guiding model of reality can help to develop more robust, explainable, and less biased machine learning models that can ideally learn from less data. Achieving these goals will require an orchestrated effort that combines three complementary Frontier Research Areas: (1) Complex Networks and their Inference, (2) Graph Causal Models and Counterfactuals, and (3) Verification and Explainability Methods. The goal of this paper is to describe these three areas from a unified view and to show how information fusion, applied in a comprehensive and integrative manner, can not only bring these three areas together but also play a transformative role by bridging the gap between research and practical applications in the context of future trustworthy medical AI.
This makes it imperative to include ethical and legal aspects as a cross-cutting discipline, because all future solutions must not only be ethically responsible but also legally compliant.

Normalization Influence on ANN-Based Models Performance: A New Proposal for Features' Contribution Analysis (2021)
Nino-Adan, Iratxe; Portillo, Eva; Landa-Torres, Itziar; Manjarres, Diana; Tecnalia Research & Innovation

Artificial Neural Networks (ANNs) are weighted directed graphs of interconnected neurons widely employed to model complex problems. However, selecting the optimal ANN architecture and its training parameters is not enough to obtain reliable models. The data preprocessing stage is fundamental to improving the model's performance. Specifically, Feature Normalisation (FN) is commonly applied to remove the features' magnitude, aiming to equalise the features' contribution to model training. Nevertheless, this work demonstrates that the choice of FN method affects model performance. It is also well known that ANNs are commonly considered a "black box" due to their lack of interpretability. In this sense, several works aim to analyse the features' contribution to the network when estimating the output. However, these methods, specifically those based on the network's weights, such as Garson's or Yoon's methods, do not consider preprocessing factors, such as dispersion factors, previously employed to transform the input data. This work proposes a new features' relevance analysis method that incorporates the dispersion factors into weight-matrix analysis methods to infer each feature's actual contribution to the network output more precisely. In addition, the Proportional Dispersion Weights (PWD) are proposed as explanatory factors of similarity between models' performance results.
The conclusions from this work improve the understanding of the features' contribution to the model and thereby enhance feature selection strategies, which is fundamental for reliably modelling a given problem.
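The weight-based relevance analysis referred to in the abstract above can be illustrated with a short sketch. The snippet below implements Garson's classical algorithm for a single-hidden-layer network, extended with an optional per-feature dispersion adjustment in the spirit of what the abstract describes (dividing each input's weights by the dispersion factor used during normalisation, e.g. the feature's range under min-max scaling). The function name and the exact form of the adjustment are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def garson_importance(W_ih, w_ho, dispersion=None):
    """Garson's weight-based relative importance for a single-hidden-layer ANN.

    W_ih:       (n_hidden, n_inputs) input-to-hidden weight matrix
    w_ho:       (n_hidden,) hidden-to-output weights (single output neuron)
    dispersion: optional (n_inputs,) dispersion factors used in normalisation
                (illustrative adjustment, not the paper's exact method)
    Returns:    (n_inputs,) relative importances summing to 1.
    """
    W = np.abs(np.asarray(W_ih, dtype=float))
    if dispersion is not None:
        # Undo the per-feature scaling: a weight on a min-max-scaled input
        # corresponds to weight / range on the raw feature.
        W = W / np.asarray(dispersion, dtype=float)
    # Share of each input in each hidden neuron's total incoming weight
    shares = W / W.sum(axis=1, keepdims=True)
    # Weight each hidden neuron's shares by its (absolute) output connection
    contrib = (shares * np.abs(w_ho)[:, None]).sum(axis=0)
    return contrib / contrib.sum()
```

With symmetric weights and no dispersion adjustment, both inputs receive equal importance; supplying unequal dispersion factors shifts the ranking, which is the effect the proposed method aims to capture.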