Poster in Workshop: Workshop on Machine Learning Safety

Cryptographic Auditing for Collaborative Learning

Hidde Lycklama · Nicolas Küchler · Alexander Viand · Emanuel Opel · Lukas Burkhalter · Anwar Hithnawi


Abstract:

Collaborative machine learning paradigms based on secure multi-party computation have emerged in recent years as a compelling alternative for sensitive applications. These paradigms promise to unlock the potential of important data silos that are currently hard to access and compute across due to privacy concerns and regulatory policies (e.g., in the health and financial sectors). Although collaborative machine learning provides many privacy benefits, it sacrifices robustness: it opens the learning process to an active malicious participant who can covertly influence the model's behavior. As these systems are deployed for a range of sensitive applications, their robustness becomes increasingly important. To date, no compelling solution fully addresses the robustness of secure collaborative learning, so it is necessary to augment these systems with measures that strengthen their reliability at deployment time. This paper describes our efforts to develop privacy-preserving auditing mechanisms for secure collaborative learning. We focus on audits that trace the source of integrity issues back to the responsible party, providing a technical path towards accountability in these systems.
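The abstract does not describe the auditing protocol itself, but the core idea, cryptographically binding each participant to its contribution so that an integrity issue can later be attributed, can be illustrated with a simple hash-commitment sketch. Everything below (the commit/verify helpers, the toy byte-string updates, the party names) is a hypothetical illustration, not the paper's actual mechanism; in the paper's setting the checks would run under secure multi-party computation rather than by revealing updates in the clear.

# Minimal sketch of commitment-based attribution, assuming each party
# publishes a hiding, binding commitment to its model update at training time.
import hashlib
import secrets

def commit(update_bytes: bytes) -> tuple[bytes, bytes]:
    # Commit to an update with a fresh random opening: the commitment hides
    # the update until the opening is revealed, and binds the party to it.
    opening = secrets.token_bytes(32)
    digest = hashlib.sha256(opening + update_bytes).digest()
    return digest, opening

def verify(commitment: bytes, opening: bytes, update_bytes: bytes) -> bool:
    # Check that a revealed update matches the earlier commitment.
    return hashlib.sha256(opening + update_bytes).digest() == commitment

# Training time: each party contributes an update and a public commitment.
updates = {"party_a": b"grad:+0.10", "party_b": b"grad:-9.99"}  # toy updates
ledger = {}    # public transcript of commitments
openings = {}  # each party keeps its opening private until audited
for party, upd in updates.items():
    c, o = commit(upd)
    ledger[party] = c
    openings[party] = o

# Audit time: a suspect update surfaces (e.g., one that degraded the model),
# and the auditor traces it back to the committing party via the ledger.
suspect = b"grad:-9.99"
for party, c in ledger.items():
    if verify(c, openings[party], suspect):
        print(f"integrity issue traced to {party}")

Running the sketch prints that the suspect update traces to party_b: because the commitments are binding, no party can later disclaim an update it committed to, which is the accountability property the audits aim for.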
