Browsing by Keyword "trustworthy AI"
Now showing 1 - 2 of 2
Item: Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability (Institute of Electrical and Electronics Engineers Inc., 2024)
Rodriguez-Barroso, Nuria; Del Ser, Javier; Luzon, M. Victoria; Herrera, Francisco; IA
The rise of high-risk AI systems has led to escalating concerns, prompting regulatory efforts such as the recently approved EU AI Act. In this context, the development of responsible AI systems is crucial. To this end, trustworthy AI techniques target requirements (including transparency, privacy awareness and fairness) that contribute to the development of responsible, robust and safe AI systems. Among them, Federated Learning (FL) has emerged as a key approach to safeguarding data privacy while enabling the collaborative training of AI models. However, FL is prone to adversarial attacks, particularly Byzantine attacks, which aim to modify the behavior of the model. This work addresses this issue by proposing an eXplainable and Impartial Dynamic Defense against Byzantine Attacks (XI-DDaBA). This defense mechanism relies on robust aggregation operators and filtering techniques to mitigate the effects of adversarial attacks in FL, while providing explanations for its decisions and ensuring that clients with poor data quality are not discriminated against. Experimental simulations are discussed to assess the performance of XI-DDaBA against other baselines from the literature and to showcase the explanations it provides. Overall, XI-DDaBA aligns with the need for responsible AI systems in high-risk collaborative learning scenarios through the explainable and impartial provision of robustness against attacks.

Item: The Right to Be Forgotten in Artificial Intelligence: Issues, Approaches, Limitations and Challenges (Institute of Electrical and Electronics Engineers Inc., 2023)
Lobo, Jesus L.; Gil-Lopez, Sergio; Del Ser, Javier; IA
The Right to Be Forgotten is widely conceived as a fundamental right of the human being. It has become a subject of capital importance in domains where sensitive information is collected from individuals, requiring the provision of monitoring, governance and audit tools to control where such information is used. Artificial Intelligence models are no exception to this statement: since they are learned from data, this fundamental right should allow individuals to have their personal information erased from AI-based systems. However, the application of this right is not straightforward: what does erasing mean in the context of a model learned from data? Is it just a matter of removing the concerned data and retraining the models? This manuscript provides a brief overview of these and other issues, proposes a set of desiderata for technical advances in this direction, and outlines research directions for prospective studies.