Selective Preference Aggregation
Shreyas Kadekodi · Hayden McTavish · Berk Ustun
Abstract
Many applications in machine learning and decision making rely on procedures to aggregate human preferences. In such tasks, individuals express ordinal preferences over a set of items by voting, rating, or comparing them. We then aggregate these data into a ranking that reveals their collective preferences. Standard methods for preference aggregation are designed to return rankings that arbitrate conflicting preferences between individuals. In this work, we introduce a paradigm for \emph{selective aggregation} in which we abstain from comparison rather than arbitrate dissent. We summarize collective preferences as a \emph{selective ranking} -- i.e., a partial order that reflects all collective preferences on which at least $100\cdot(1 - \delta)\%$ of individuals agree, where $\delta$ is the allowed level of dissent. We develop algorithms to build selective rankings that achieve all possible trade-offs between comparability and disagreement, and derive formal guarantees on their recovery and robustness. We conduct an extensive set of experiments on real-world datasets to benchmark our approach and demonstrate its functionality. Selective rankings improve reliability under distribution shift and adversarial manipulation by exposing disagreement and abstaining on disputed pairs.
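To make the thresholding idea concrete, here is a minimal Python sketch (the function name, input format, and `dissent` parameter are assumptions for illustration, not the authors' algorithm): it keeps an ordered pair only when at least $100\cdot(1-\delta)\%$ of voters agree on it and abstains otherwise. Unlike the algorithms in the paper, this naive filter does not by itself guarantee the kept pairs form a valid partial order.

```python
from itertools import combinations

def selective_ranking(rankings, dissent):
    """Sketch: keep the ordered pair (a, b) iff at least
    100*(1 - dissent)% of voters rank a above b; abstain otherwise.

    rankings : list of lists, each a full ordering of the same items
               (most-preferred first).
    dissent  : allowed disagreement level, e.g. 0.1.
    """
    n = len(rankings)
    # positions[v][x] = rank of item x for voter v (smaller = more preferred)
    positions = [{item: i for i, item in enumerate(r)} for r in rankings]
    kept = []
    for a, b in combinations(rankings[0], 2):
        prefer_a = sum(pos[a] < pos[b] for pos in positions)
        if prefer_a >= (1 - dissent) * n:
            kept.append((a, b))       # collective preference: a over b
        elif n - prefer_a >= (1 - dissent) * n:
            kept.append((b, a))       # collective preference: b over a
        # otherwise: abstain from comparing a and b
    # NOTE: naive thresholding need not yield a transitive relation;
    # the paper's algorithms additionally enforce a valid partial order.
    return kept

# Example: 5 voters over 3 items. With dissent = 0.1, a pair is kept
# only if at least 90% of voters (here, all 5) agree on it.
votes = [["x", "y", "z"]] * 4 + [["y", "x", "z"]]
print(selective_ranking(votes, dissent=0.1))
# -> [('x', 'z'), ('y', 'z')]  (the x-vs-y pair is abstained on)
```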