Poster
Unravelling in Collaborative Learning
Aymeric Capitaine · Etienne Boursier · Antoine Scheid · Eric Moulines · Michael Jordan · El-Mahdi El-Mhamdi · Alain Durmus
East Exhibit Hall A-C #4302
Collaborative learning offers a promising avenue for leveraging decentralized data. However, collaboration in groups of learners is not a given, as strategic learners may act to maximize their own utility at the expense of the collective utility. In this work, we consider strategic agents who wish to train a model together but have sampling distributions of different quality. The collaboration is organized by a benevolent aggregator who gathers samples so as to maximize total welfare, but is unaware of data quality. This setting allows us to shed light on the deleterious effect of adverse selection in collaborative learning. More precisely, we demonstrate that when data quality indices are private, the coalition may undergo a phenomenon known as unravelling, wherein it shrinks to the point that it becomes empty or comprises only the worst agent. We show how this issue can be addressed without resorting to external transfers, by proposing a novel method inspired by accuracy shaping. This approach makes the truthful, optimal coalition an approximate pure Nash equilibrium with high probability despite the information asymmetry.
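To build intuition for the unravelling phenomenon described above, here is a minimal toy sketch (not the paper's model) in the spirit of Akerlof-style adverse selection: each agent holds a hypothetical private quality score, the aggregator can only credit members at the coalition's average quality, and agents whose private quality exceeds that pooled average opt out. Iterating this best-response step shows the coalition shrinking toward the worst agent.

import numpy as np

# Toy illustration only: unravelling under adverse selection.
# qualities[i] is agent i's private data-quality score (hypothetical values).
rng = np.random.default_rng(0)
qualities = np.sort(rng.uniform(0.0, 1.0, size=10))

coalition = list(range(len(qualities)))
while coalition:
    avg_quality = qualities[coalition].mean()
    # Agents whose private quality exceeds the pooled average prefer to leave.
    stayers = [i for i in coalition if qualities[i] <= avg_quality]
    if stayers == coalition:
        break
    coalition = stayers

print("surviving coalition (indices):", coalition)
print("their qualities:", qualities[coalition])
# Typically only the lowest-quality agent(s) remain: the coalition unravels.

This dynamic is purely illustrative; the paper's accuracy-shaping mechanism is designed precisely to prevent such a collapse without using external transfers.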