FLA

End-to-end, pervasive, and continuous compliance checking for Federated Learning

Compliance-by-design for Federated Learning (FL) based on multi-faceted & multi-perspective assurance.

Project description

Federated Learning (FL) allows knowledge to be shared while keeping data private. Multiple actors train models collaboratively by applying privacy-preserving machine learning techniques, without having to share the raw data used for training.
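
To illustrate the basic mechanism, the following minimal sketch shows federated averaging (FedAvg) in plain NumPy: two hypothetical clients train a simple logistic-regression model on their own data and only share parameter updates, which a server averages; the raw records never leave the clients. All names, data shapes, and hyperparameters are illustrative assumptions and not part of the FLA system.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        # One client's local training step: logistic regression via gradient descent.
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid
            grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
            w -= lr * grad
        return w

    def federated_round(global_weights, clients):
        # One FL round: every client trains locally, the server averages the updates.
        updates = [local_update(global_weights, X, y) for X, y in clients]
        return np.mean(updates, axis=0)            # FedAvg aggregation

    # Two hypothetical clients whose data never leaves their side.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
               for _ in range(2)]
    w = np.zeros(4)
    for _ in range(10):
        w = federated_round(w, clients)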

However, FL can still be vulnerable to communication intercepts or to private data leakage through inference attacks. This is of particular relevance in highly regulated domains, where the trustworthiness of FL is critical to its practical adoption. Trustworthiness generally includes explicit information about the data, such as provenance or bias, and about its processing, such as consent, explainability, or fairness. From a legal perspective, trustworthiness is linked to lawfulness and compliance, which makes it necessary to assure compliance for each participant as well as for the overall federated model. Such assurance involves checking the FL system at design time and at run time, and mitigating risks.

In the project Federated Learning Architecture (FLA), fortiss designs an FL system that follows a privacy-preserving architecture and integrates cutting-edge privacy-enhancing techniques across all stages, including differential privacy and homomorphic encryption for learning models, practical anonymization, and tamper-proof record-keeping via a distributed ledger. Moreover, the system provides multi-faceted, multi-perspective, pervasive, and end-to-end formal guarantees of compliance based on a knowledge graph.
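
As an illustration of one of these privacy-enhancing building blocks, the sketch below shows a differentially private aggregation step in NumPy: each client's update is norm-clipped and calibrated Gaussian noise is added to the average. The clipping norm and noise multiplier are placeholder values chosen for the example, not parameters of the FLA implementation.

    import numpy as np

    def clip_update(update, clip_norm=1.0):
        # Bound each client's contribution so no single participant dominates the average.
        norm = np.linalg.norm(update)
        return update * min(1.0, clip_norm / (norm + 1e-12))

    def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.8, rng=None):
        # Average the clipped updates and add Gaussian noise scaled to the clipping bound.
        rng = rng if rng is not None else np.random.default_rng()
        clipped = [clip_update(u, clip_norm) for u in updates]
        mean = np.mean(clipped, axis=0)
        noise = rng.normal(0.0, noise_multiplier * clip_norm / len(updates), size=mean.shape)
        return mean + noise

In a full pipeline, such a step would replace the plain averaging in the FedAvg sketch above, while encryption and ledger-based record-keeping would cover the exchange and logging of the updates.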

fortiss evaluates its research on the use case of collaboratively training a feedback text classifier. The feedback consists of natural-language text provided by users of online public administration services. The task at hand is to assign feedback to the appropriate classes, which correspond to the responsible departments in German public administration.
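
A minimal, non-federated sketch of this classification task might look as follows; the feedback texts and department labels are invented examples, and the scikit-learn pipeline is only an illustrative stand-in for the classifier actually trained in the project.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented feedback texts and department labels for demonstration only.
    feedback = [
        "Der Online-Antrag für den Reisepass funktioniert nicht.",
        "Die Wartezeit im Bürgeramt war sehr lang.",
        "Das Formular zur Gewerbeanmeldung ist unverständlich.",
        "Die Website für die Kfz-Zulassung ist sehr übersichtlich.",
    ]
    departments = ["Passamt", "Bürgeramt", "Gewerbeamt", "Kfz-Zulassungsstelle"]

    # Bag-of-words features plus a linear classifier as a simple baseline.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    classifier.fit(feedback, departments)
    print(classifier.predict(["Mein Online-Antrag für den Reisepass wurde abgelehnt."]))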

Research contribution

In this project, fortiss contributes with three main results:

  • a set of architectural patterns for privacy-by-design FL,
  • a method for multi-faceted and multi-perspective assurance based on formalized claims,
  • a toolchain and a library of claims around the EU AI Act and GDPR for applying the developed method in practical use cases (see the sketch below).
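
The sketch below illustrates, under a deliberately simplified claim model, how formalized claims referencing the EU AI Act and GDPR could be checked against evidence collected from an FL system. The claim structure, the cited articles, and the evidence keys are illustrative assumptions and do not reflect the actual FLA claim library or toolchain.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Claim:
        subject: str        # system component the claim is about
        requirement: str    # regulatory requirement it must satisfy
        evidence_key: str   # evidence item that must be present to support the claim

    claims = [
        Claim("model_update_channel", "GDPR Art. 32 (security of processing)",
              "encryption_enabled"),
        Claim("training_pipeline", "EU AI Act Art. 10 (data governance)",
              "data_provenance_recorded"),
    ]

    def unsupported(claims, evidence):
        # Return all claims whose supporting evidence is missing or negative.
        return [c for c in claims if not evidence.get(c.evidence_key, False)]

    evidence = {"encryption_enabled": True, "data_provenance_recorded": False}
    print(unsupported(claims, evidence))   # -> the data-governance claim lacks evidence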

Funding

Project duration

01.01.2023 - 31.07.2023

Your contact

Mahdi Sellami

+49 89 3603522 171
sellami@fortiss.org

Project partner

Publications

  • 2024: Towards Assuring EU AI Act Compliance and Adversarial Robustness of LLMs. Tomas Bueno Momčilović, Beat Buesser, Giulio Zizzo, Mark Purcell and Dian Balta. In AI Act Workshop, 19th International Conference on Wirtschaftsinformatik, September 2024, Würzburg, Germany.
  • 2024: Towards Assurance of LLM Adversarial Robustness using Ontology-Driven Argumentation. Tomas Bueno Momčilović, Beat Buesser, Giulio Zizzo, Mark Purcell and Dian Balta. In xAI 2024: World Conference on eXplainable Artificial Intelligence, Valletta, Malta.
  • 2024: Emergent Needs in Assuring Security-Relevant Compliance of Information Systems. Tomas Bueno Momčilović and Dian Balta. In EICC 2024: European Interdisciplinary Cybersecurity Conference, Xanthi, Greece, pages 46–49, Association for Computing Machinery.
  • 2024: Challenges of Assuring Compliance of Information Systems in Finance. Tomas Bueno Momčilović and Dian Balta. In Software Quality as a Foundation for Security, SWQD 2024, volume 505 of Lecture Notes in Business Information Processing, pages 135–152, Springer.
  • 2023: Interaction Patterns for Regulatory Compliance in Federated Learning. Mahdi Sellami, Tomas Bueno Momčilović, Peter Kuhn and Dian Balta. In CIISR 2023: 3rd International Workshop on Current Information Security and Compliance Issues in Information Systems Research, co-located with the 18th International Conference on Wirtschaftsinformatik (WI 2023), September 18, 2023, Paderborn, Germany, pages 6–18, CEUR Workshop Proceedings.