Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability

dc.contributor.author: Rodriguez-Barroso, Nuria
dc.contributor.author: Del Ser, Javier
dc.contributor.author: Luzon, M. Victoria
dc.contributor.author: Herrera, Francisco
dc.contributor.institution: IA
dc.date.accessioned: 2024-09-06T08:55:04Z
dc.date.available: 2024-09-06T08:55:04Z
dc.date.issued: 2024
dc.description: Publisher Copyright: © 2024 IEEE.
dc.description.abstract: The rise of high-risk AI systems has led to escalating concerns, prompting regulatory efforts such as the recently approved EU AI Act. In this context, the development of responsible AI systems is crucial. To this end, trustworthy AI techniques target requirements (including transparency, privacy awareness, and fairness) that contribute to the development of responsible, robust, and safe AI systems. Among them, Federated Learning (FL) has emerged as a key approach to safeguarding data privacy while enabling the collaborative training of AI models. However, FL is prone to adversarial attacks, particularly Byzantine attacks, which aim to modify the behavior of the model. This work addresses this issue by proposing an eXplainable and Impartial Dynamic Defense against Byzantine Attacks (XI-DDaBA). This defense mechanism relies on robust aggregation operators and filtering techniques to mitigate the effects of adversarial attacks in FL, while providing explanations for its decisions and ensuring that clients with poor data quality are not discriminated against. Experimental simulations assess the performance of XI-DDaBA against other baselines from the literature and showcase the explanations it provides. Overall, XI-DDaBA aligns with the need for responsible AI systems in high-risk collaborative learning scenarios through the explainable and impartial provision of robustness against attacks.
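Note: the abstract does not specify which aggregation operators or filtering rules XI-DDaBA uses. As a rough, non-authoritative sketch of the general defense pattern it describes (robust aggregation plus client filtering, with per-client scores that could feed explanations), the Python below combines a coordinate-wise trimmed mean with a median-distance filter. All function names, thresholds, and operator choices are illustrative assumptions, not the paper's method.

import numpy as np

def robust_aggregate(client_updates, trim_ratio=0.2):
    # Coordinate-wise trimmed mean: sort each parameter across clients and
    # drop the top/bottom `trim_ratio` fraction before averaging, bounding
    # the influence any single Byzantine client has on each coordinate.
    updates = np.stack(client_updates)            # shape: (n_clients, n_params)
    n = updates.shape[0]
    k = int(n * trim_ratio)
    sorted_updates = np.sort(updates, axis=0)
    kept = sorted_updates[k:n - k] if k > 0 else sorted_updates
    return kept.mean(axis=0)

def filter_clients(client_updates, z_thresh=2.5):
    # Flag clients whose update is far from the coordinate-wise median.
    # Distances are turned into robust z-scores via the median absolute
    # deviation (MAD), so honest-but-noisy clients (e.g., those with poor
    # data quality) are not over-penalized. The returned scores are the
    # per-client evidence an explainable defense could surface.
    updates = np.stack(client_updates)
    median_update = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median_update, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = 0.6745 * (dists - np.median(dists)) / mad
    keep = scores < z_thresh
    kept = [u for u, ok in zip(client_updates, keep) if ok]
    return kept, scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
    poisoned = [10.0 * np.ones(10)]               # crude model-poisoning update
    kept, scores = filter_clients(honest + poisoned)
    print("per-client scores:", np.round(scores, 2))
    print("aggregated update norm:", np.linalg.norm(robust_aggregate(kept)))

In this toy run the single scaled "poisoned" update receives a large robust z-score and is excluded before aggregation; the score itself is the kind of per-client justification an explainable, impartial defense could report back.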
dc.description.status: Peer reviewed
dc.identifier.citation: Rodriguez-Barroso, N., Del Ser, J., Luzon, M. V. & Herrera, F. 2024, 'Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability', in 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024 - Proceedings, IEEE International Conference on Fuzzy Systems, Institute of Electrical and Electronics Engineers Inc., 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024, Yokohama, Japan, 30/06/24. https://doi.org/10.1109/FUZZ-IEEE60900.2024.10611769
dc.identifier.doi: 10.1109/FUZZ-IEEE60900.2024.10611769
dc.identifier.isbn: 9798350319545
dc.identifier.issn: 1098-7584
dc.identifier.uri: https://hdl.handle.net/11556/4817
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85201570978&partnerID=8YFLogxK
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.relation.ispartof: 2024 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2024 - Proceedings
dc.relation.ispartofseries: IEEE International Conference on Fuzzy Systems
dc.rights: info:eu-repo/semantics/restrictedAccess
dc.subject.keywords: adversarial attacks
dc.subject.keywords: byzantine attacks
dc.subject.keywords: Federated Learning
dc.subject.keywords: safe AI
dc.subject.keywords: trustworthy AI
dc.subject.keywords: Software
dc.subject.keywords: Theoretical Computer Science
dc.subject.keywords: Artificial Intelligence
dc.subject.keywords: Applied Mathematics
dc.title: Defense Strategy against Byzantine Attacks in Federated Machine Learning: Developments towards Explainability
dc.type: conference output