Awarded Paper Presentation
in
Workshop: Progress and Challenges in Building Trustworthy Embodied AI

Post-Hoc Attribute-Based Explanations for Recommender Systems

Sahil Verma · Anurag Beniwal · Narayanan Sadagopan · Arjun Seshadri

Keywords: [ interpretability ] [ post-hoc explainability ] [ explainability ] [ Explainability for Recommender Systems ]


Abstract:

Recommender systems are ubiquitous in our interactions with the digital world. Whether we are shopping for clothes, scrolling YouTube for exciting videos, or searching for restaurants in a new city, recommender systems power these services behind the scenes. Most large-scale recommender systems are huge models trained on extensive datasets, and they are black-boxes to both their developers and their end-users. Prior research has shown that presenting recommendations together with the reasons for them enhances the trust, scrutability, and persuasiveness of recommender systems. Recent literature on explainability has been inundated with works proposing algorithms to this end. Most of these works provide item-style explanations, i.e., 'We recommend item A because you bought item B.' We propose a novel approach that generates more fine-grained explanations based on the user's preferences over the attributes of the recommended items. We perform experiments on real-world datasets and demonstrate the efficacy of our technique in capturing users' preferences and using them to explain recommendations. We also propose ten new evaluation metrics and compare our approach against six baseline methods. We have also submitted this paper to the Trustworthy and Socially Responsible Machine Learning workshop at NeurIPS; that workshop is held on a different day and does not have proceedings.