Poster
Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
Xinyi Xu · Lingjuan Lyu · Xingjun Ma · Chenglin Miao · Chuan Sheng Foo · Bryan Kian Hsiang Low

Thu Dec 09 12:30 AM -- 02:00 AM (PST)

In collaborative machine learning (CML), multiple agents pool their resources (e.g., data) for a common learning task. In realistic CML settings where the agents are self-interested rather than altruistic, they may be unwilling to share data or model information without adequate rewards. Furthermore, as the data/model information shared by the agents may differ in quality, it is important to design rewards that are fair to them so that they neither feel exploited nor are discouraged from sharing. In this paper, we adopt federated learning as the CML paradigm and propose a novel cosine gradient Shapley value (CGSV) to fairly evaluate the expected marginal contribution of each agent's uploaded model parameter update/gradient without needing an auxiliary validation dataset. Based on the CGSV, we design a novel training-time gradient reward mechanism with a fairness guarantee: the aggregated parameter update/gradient downloaded from the server is sparsified as the reward to each agent such that its quality is commensurate with that of the agent's uploaded parameter update/gradient. We empirically demonstrate the effectiveness of our fair gradient reward mechanism on multiple benchmark datasets in terms of fairness, predictive performance, and time overhead.
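To make the two ideas in the abstract concrete, the following is a minimal sketch, not the paper's actual algorithm: an agent's contribution is scored by the cosine similarity between its uploaded gradient and the server's aggregated gradient (a rough proxy for the CGSV, which is defined over expected marginal contributions), and each agent's reward is a sparsified copy of the aggregated gradient whose density is proportional to its relative score. All function and variable names are illustrative assumptions.

```python
import numpy as np

def cosine_contribution(agent_grads, agg_grad):
    """Score each agent by the cosine similarity between its uploaded
    gradient and the aggregated gradient (illustrative proxy for the
    cosine gradient Shapley value; not the paper's exact definition)."""
    scores = []
    for g in agent_grads:
        denom = np.linalg.norm(g) * np.linalg.norm(agg_grad) + 1e-12
        scores.append(float(np.dot(g, agg_grad) / denom))
    return scores

def sparsified_reward(agg_grad, score, max_score):
    """Reward an agent with a sparsified aggregated gradient: keep only
    the largest-magnitude entries, with the kept fraction proportional
    to the agent's score relative to the best-scoring agent."""
    frac = max(score, 0.0) / (max_score + 1e-12)
    k = max(1, int(round(frac * agg_grad.size)))
    reward = np.zeros_like(agg_grad)
    top = np.argsort(np.abs(agg_grad))[-k:]  # indices of the k largest entries
    reward[top] = agg_grad[top]
    return reward
```

Under this sketch, the top-contributing agent downloads the full aggregated gradient, while lower-scoring agents receive progressively sparser (lower-quality) versions, which is the commensurability property the mechanism aims to guarantee.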

Author Information

Xinyi Xu (National University of Singapore)
Lingjuan Lyu (the University of Melbourne)
Xingjun Ma (Deakin University)
Chenglin Miao (University of Georgia)
Chuan Sheng Foo (Institute for Infocomm Research)
Bryan Kian Hsiang Low (National University of Singapore)
