As recommender systems play an ever-larger role in our interactions with online content, the biases that plague these systems grow in their impact on our content consumption and creation. This work focuses on mitigating one such bias, popularity bias, as it relates to music recommendation. We formulate the problem of music recommendation as automatic playlist continuation. To harness the power of graph neural networks (GNNs), we define our recommendation space as a bipartite graph with songs and playlists as nodes and an edge between a song and a playlist indicating that the playlist contains the song. We then implement PinSage, a state-of-the-art graph-based recommender system, to perform link prediction. Finally, we integrate an individual fairness framework into the training regime of PinSage to learn fair representations that can be used to generate relevant recommendations.
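The playlist–song interaction graph described in the abstract can be sketched with toy data (all playlist and song names below are hypothetical, not from the paper's dataset). Song degree in this bipartite graph is exactly the long-tailed popularity signal that popularity bias amplifies:

```python
from collections import defaultdict

# Hypothetical toy data: each playlist mapped to the songs it contains.
playlists = {
    "road_trip": ["song_a", "song_b", "song_c"],
    "workout":   ["song_b", "song_c"],
    "study":     ["song_c"],
}

def bipartite_edges(playlists):
    """Return the (playlist, song) edges of the bipartite interaction graph."""
    return [(p, s) for p, songs in playlists.items() for s in songs]

def song_popularity(edges):
    """Song degree, i.e. the number of playlists containing each song.
    A long tail in this distribution is the source of popularity bias."""
    counts = defaultdict(int)
    for _, song in edges:
        counts[song] += 1
    return dict(counts)

edges = bipartite_edges(playlists)
pop = song_popularity(edges)
# song_c sits in all three playlists; song_a in only one.
```

A GNN recommender such as PinSage would learn node embeddings over this edge list and score unseen (playlist, song) pairs for link prediction; this sketch only shows the graph construction step.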
Author Information
Rebecca Salganik (Université de Montréal)
Fernando Diaz (Google)
Fernando Diaz is a research scientist at Google Brain Montréal. His research focuses on the design of information access systems, including search engines, music recommendation services, and crisis response platforms. He is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Previously, Fernando was the assistant managing director of Microsoft Research Montréal and a director of research at Spotify, where he helped establish its research organization on recommendation, search, and personalization. Fernando's work has received awards at SIGIR, WSDM, ISCRAM, and ECIR. He is the recipient of the 2017 British Computer Society Karen Spärck Jones Award. Fernando has co-organized workshops and tutorials at SIGIR, WSDM, and WWW. He has also co-organized several NIST TREC initiatives, WSDM (2013), the Strategic Workshop on Information Retrieval (2018), FAT* (2019), SIGIR (2021), and the CIFAR Workshop on Artificial Intelligence and the Curation of Culture (2019).
Golnoosh Farnadi (Mila)
More from the Same Authors
- 2021 : Artsheets for Art Datasets »
  Ramya Srinivasan · Emily Denton · Jordan Famularo · Negar Rostamzadeh · Fernando Diaz · Beth Coleman
- 2022 : Mitigating Online Grooming with Federated Learning »
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022 : Towards Private and Fair Federated Learning »
  Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi
- 2022 : Fair Targeted Immunization with Dynamic Influence Maximization »
  Nicola Neophytou · Golnoosh Farnadi
- 2022 : Early Detection of Sexual Predators with Federated Learning »
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022 : Privacy-Preserving Group Fairness in Cross-Device Federated Learning »
  Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi
- 2022 : Striving for data-model efficiency: Identifying data externalities on group performance »
  Esther Rolf · Ben Packer · Alex Beutel · Fernando Diaz
- 2022 Workshop: Cultures of AI and AI for Culture »
  Alex Hanna · Rida Qadri · Fernando Diaz · Nick Seaver · Morgan Scheuerman
- 2022 : Panel »
  Hannah Korevaar · Manish Raghavan · Ashudeep Singh · Fernando Diaz · Chloé Bakalar · Alana Shine
- 2022 : Q & A »
  Golnoosh Farnadi · Elliot Creager · Q. Vera Liao
- 2022 : Tutorial part 1 »
  Golnoosh Farnadi
- 2022 Tutorial: Algorithmic fairness: at the intersections »
  Golnoosh Farnadi · Q. Vera Liao · Elliot Creager
- 2022 Workshop: Algorithmic Fairness through the Lens of Causality and Privacy »
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
- 2021 Workshop: Algorithmic Fairness through the Lens of Causality and Robustness »
  Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike
- 2020 Workshop: Algorithmic Fairness through the Lens of Causality and Interpretability »
  Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz
- 2020 Tutorial: (Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems Q&A »
  Praveen Chandar · Fernando Diaz · Brian St. Thomas
- 2020 Poster: Counterexample-Guided Learning of Monotonic Neural Networks »
  Aishwarya Sivaraman · Golnoosh Farnadi · Todd Millstein · Guy Van den Broeck
- 2020 Tutorial: (Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems »
  Praveen Chandar · Fernando Diaz · Brian St. Thomas
- 2016 Demonstration: Project Malmo - Minecraft for AI Research »
  Katja Hofmann · Matthew A Johnson · Fernando Diaz · Alekh Agarwal · Tim Hutton · David Bignell · Evelyne Viegas