Browsing by Keyword "Data Fusion"
Item: Data Harvesting, Curation and Fusion Model to Support Public Service Recommendations for e-Governments (SciTePress, 2018-01)
Sedrakyan, Gayane; De Vocht, Laurens; Alonso, Juncal; Escalante, Marisa; Orue-Echevarria, Leire; Mannens, Erik; Hammoudi, Slimane; Pires, Luis Ferreira; Selic, Bran; HPA; CIBERSEC&DLT; Tecnalia Research & Innovation

This work reports on early results from the CITADEL project, which aims at creating an ecosystem of best practices, tools, and recommendations to transform Public Administrations through more efficient, inclusive and citizen-centric services. The goal of the recommendations is to help governments find out why citizens stop using public services, and to use this information to re-adjust service provision and bring these citizens back. Furthermore, it will help identify why citizens are not using a given public service (due to affordability, accessibility, lack of knowledge, embarrassment, lack of interest, etc.) and, where appropriate, use this information to make public services more attractive, so that citizens start using them. While recommender systems can enhance experiences by providing targeted information, the entry barriers in terms of data acquisition are very high, often limiting recommender solutions to closed systems of user/context models. The main focus of this work is to provide an architectural model that allows harvesting data from various sources, curating datasets that originate from a multitude of formats, and fusing them into semantically enhanced data that contain key performance indicators for the utility of e-Government services.
The output can be further processed by analytics and/or recommender engines to suggest public service improvement needs.

Item: Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI (2020-06)
Barredo Arrieta, Alejandro; Díaz-Rodríguez, Natalia; Del Ser, Javier; Bennetot, Adrien; Tabik, Siham; Barbado, Alberto; Garcia, Salvador; Gil-Lopez, Sergio; Molina, Daniel; Benjamins, Richard; Chatila, Raja; Herrera, Francisco; Tecnalia Research & Innovation; IA

In the last few years, Artificial Intelligence (AI) has achieved notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to occur in Machine Learning, the entire community stands before the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that was not present in the previous hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions in the field of XAI, including a prospect toward what is yet to be reached. To this end, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers these prior conceptual propositions, with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second, dedicated taxonomy is built and examined in detail.
This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely a methodology for the large-scale implementation of AI methods in real organizations, with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, and also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias due to its lack of interpretability.
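The harvest, curate and fuse stages described in the first item above can be sketched as a minimal pipeline. This is only an illustration under stated assumptions: the source formats, field names (`service`, `users`, `eligible` and their variants) and the uptake KPI are hypothetical, not the CITADEL project's actual schema.

```python
# Hypothetical sketch of a harvest -> curate -> fuse flow; all field names
# and the "uptake" KPI are illustrative, not the project's real model.
import csv
import io
import json

def harvest(sources):
    """Pull raw records from heterogeneous sources (here: CSV and JSON payloads)."""
    records = []
    for fmt, payload in sources:
        if fmt == "csv":
            records.extend(csv.DictReader(io.StringIO(payload)))
        elif fmt == "json":
            records.extend(json.loads(payload))
    return records

def curate(records):
    """Normalise differing field names and types into one internal shape."""
    out = []
    for r in records:
        out.append({
            "service": (r.get("service") or r.get("public_service", "")).strip().lower(),
            "users": int(r.get("users") or r.get("active_users") or 0),
            "eligible": int(r.get("eligible") or r.get("eligible_citizens") or 0),
        })
    return out

def fuse(records):
    """Merge per-source records per service and derive a simple uptake KPI."""
    fused = {}
    for r in records:
        f = fused.setdefault(r["service"], {"users": 0, "eligible": 0})
        f["users"] += r["users"]
        f["eligible"] += r["eligible"]
    for f in fused.values():
        f["uptake_kpi"] = f["users"] / f["eligible"] if f["eligible"] else 0.0
    return fused

# Two sources describing the same service in different formats and vocabularies.
sources = [
    ("csv", "service,users,eligible\ntax filing,40,100\n"),
    ("json", '[{"public_service": "Tax Filing", "active_users": 10, "eligible_citizens": 50}]'),
]
kpis = fuse(curate(harvest(sources)))  # one fused record for "tax filing"
```

The point of the sketch is the shape of the architecture: each stage has a narrow contract, so new source formats only touch `harvest` and `curate`, while downstream analytics or recommender engines consume the fused KPI records unchanged.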
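As a concrete flavour of the model-agnostic, post-hoc explanation techniques that XAI surveys of this kind cover, the sketch below implements permutation feature importance: shuffle one feature's column and measure the drop in accuracy. The toy classifier and data are hypothetical, and the technique is one standard example from the XAI literature, not this article's specific proposal.

```python
# Minimal permutation feature importance on a toy classifier.
# Model and data are illustrative only.
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Drop in accuracy when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)  # break the feature's link to the targets
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Toy classifier that only ever looks at feature 0.
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)  # 0.0: feature 1 is ignored
```

Because the toy model ignores feature 1, shuffling that column cannot change any prediction, so its importance is exactly zero; this null-feature check is a common sanity test for explanation methods.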