Poster in Workshop: Trustworthy and Socially Responsible Machine Learning

Private Data Leakage via Exploiting Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems

Hanieh Hashemi · Wenjie Xiong · Liu Ke · Kiwan Maeng · Murali Annavaram · G. Edward Suh · Hsien-Hsin Lee


Abstract:

Deep learning-based recommendation models use sparse and dense features of a user to predict items that the user may like. Because these features carry the user's private information, service providers often protect them with memory encryption (e.g., with hardware support such as Intel SGX). However, even with such protection, an attacker may still learn which entries of a sparse feature are nonzero by observing the embedding table access pattern. In this work, we show that leaking only the positions of a sparse feature's nonzero entries is a serious privacy threat. Using the embedding table access pattern, we show that it is possible to identify or re-identify a user, or to extract sensitive attributes of a user. We further show that applying a hash function to anonymize the access pattern is not a sufficient defense, as the hash can be reverse-engineered in many cases.
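To make the leakage concrete, below is a minimal sketch of the attack surface the abstract describes. All names, table sizes, and the sum-pooling choice are illustrative assumptions, not the authors' code; the point is only that the rows read from the embedding table are exactly the nonzero positions of the user's sparse feature.

```python
# Minimal sketch (hypothetical names/sizes): an embedding lookup whose
# memory access pattern reveals a user's private sparse-feature entries.
import numpy as np

NUM_CATEGORIES = 1000   # embedding table rows (assumed size)
EMBED_DIM = 16          # embedding dimension (assumed size)

embedding_table = np.random.randn(NUM_CATEGORIES, EMBED_DIM)

def embed_sparse_feature(nonzero_indices):
    """Look up and pool embedding rows for a multi-hot sparse feature.

    Even if embedding_table lives in encrypted memory (e.g., inside an
    SGX enclave), the *addresses* of the rows read here can be visible
    to an access-pattern observer -- and those addresses are exactly
    nonzero_indices, i.e., the user's private categories.
    """
    rows = embedding_table[nonzero_indices]  # the observable accesses
    return rows.sum(axis=0)                  # sum-pooling (one common choice)

# Example: a user whose sparse feature is nonzero at categories 3, 42, 777.
pooled = embed_sparse_feature([3, 42, 777])
# An observer of the access pattern learns {3, 42, 777} without decrypting
# any data, which suffices for the (re-)identification attacks described.
```

Note also why per-index hashing is a weak fix, consistent with the abstract's claim: a deterministic hash maps each category to the same table row on every query, so it only relabels the accesses. An attacker who correlates access patterns across many queries, or with auxiliary data, can often recover the category-to-row mapping, i.e., reverse-engineer the hash.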
