%0 Generic
%A Rodriguez-Barroso, Nuria
%A Del Ser, Javier
%A Luzon, M. Victoria
%A Herrera, Francisco
%T Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability
%J IEEE International Conference on Fuzzy Systems
%D 2024
%@ 1098-7584
%U https://hdl.handle.net/11556/4817
%X The rise of high-risk AI systems has led to escalating concerns, prompting regulatory efforts such as the recently approved EU AI Act. In this context, the development of responsible AI systems is crucial. To this end, trustworthy AI techniques target requirements (including transparency, privacy awareness and fairness) that contribute to the development of responsible, robust and safe AI systems. Among them, Federated Learning (FL) has emerged as a key approach to safeguarding data privacy while enabling the collaborative training of AI models. However, FL is prone to adversarial attacks, particularly Byzantine attacks, which aim to modify the behavior of the model. This work addresses this issue by proposing an eXplainable and Impartial Dynamic Defense against Byzantine Attacks (XI-DDaBA). This defense mechanism relies on robust aggregation operators and filtering techniques to mitigate the effects of adversarial attacks in FL, while providing explanations for its decisions and ensuring that clients with poor data quality are not discriminated against. Experimental simulations assess the performance of XI-DDaBA against other baselines from the literature and showcase the explanations it provides. Overall, XI-DDaBA aligns with the need for responsible AI systems in high-risk collaborative learning scenarios through the explainable and impartial provision of robustness against attacks.