Peer review systems, such as conference paper review, often suffer from miscalibration. Previous works on peer review calibration typically use only ordinal information or assume simplistic reviewer scoring functions such as linear functions. In practice, applications like academic conferences often rely on manual methods, such as open discussions, to mitigate miscalibration. It remains an important question to develop algorithms that can handle different types of miscalibration based on available prior knowledge. In this paper, we propose a flexible framework, namely \emph{least square calibration} (LSC), for selecting top candidates from peer ratings. Our framework provably achieves perfect calibration when reviewers use noiseless linear scoring functions, under mild assumptions, and also provides competitive calibration results when the scoring functions come from broader classes beyond linear functions and are subject to arbitrary noise. On our synthetic dataset, we empirically demonstrate that our algorithm consistently outperforms the baseline that selects top papers by highest average rating.
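The abstract does not spell out the algorithm, but the idea of least-squares calibration under a linear reviewer model can be illustrated with a minimal sketch. The model below is an assumption for illustration only (it is not necessarily the paper's actual formulation): each reviewer \(j\) reports \(y_{ij} = x_i + b_j\), where \(x_i\) is paper \(i\)'s quality and \(b_j\) is a reviewer bias. We recover the qualities by solving the least-squares normal equations, with a gauge penalty on \(\sum_j b_j\) to make the system full rank. The function names `solve` and `least_square_calibration` are hypothetical.

```python
def solve(A, v):
    """Gaussian elimination with partial pivoting; returns z with A z = v."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        z[r] = (M[r][n] - sum(M[r][c] * z[c] for c in range(r + 1, n))) / M[r][r]
    return z

def least_square_calibration(ratings, n_papers, n_reviewers):
    """Assumed additive-bias model y_ij = x_i + b_j (illustrative only).

    ratings: list of (paper_index, reviewer_index, score) triples.
    Returns the estimated paper qualities x_0, ..., x_{n_papers-1}.
    """
    n = n_papers + n_reviewers           # unknowns: qualities then biases
    N = [[0.0] * n for _ in range(n)]    # normal matrix A^T A
    rhs = [0.0] * n                      # right-hand side A^T y
    for i, j, y in ratings:              # each rating adds one residual row
        for u in (i, n_papers + j):
            for w in (i, n_papers + j):
                N[u][w] += 1.0
            rhs[u] += y
    # Gauge constraint: penalize (sum_j b_j)^2 so the system has full rank
    # (otherwise shifting all x_i up and all b_j down leaves residuals unchanged).
    for j in range(n_reviewers):
        for k in range(n_reviewers):
            N[n_papers + j][n_papers + k] += 1.0
    z = solve(N, rhs)
    return z[:n_papers]
```

On noiseless data from this additive model, the recovered qualities match the ground truth exactly, so ranking by the calibrated scores selects the true top papers even when raw average ratings would be distorted by reviewer bias.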
Author Information
Sijun Tan (University of Virginia)
Jibang Wu (University of Virginia)
Xiaohui Bei (Nanyang Technological University)
Haifeng Xu (University of Virginia)
More from the Same Authors
- 2021 Poster: The Limits of Optimal Pricing in the Dark
  Quinlan Dawkins · Minbiao Han · Haifeng Xu
- 2021 Poster: (Almost) Free Incentivized Exploration from Decentralized Learning Agents
  Chengshuai Shi · Haifeng Xu · Wei Xiong · Cong Shen
- 2020 Poster: Collapsing Bandits and Their Application to Public Health Intervention
  Aditya Mate · Jackson Killian · Haifeng Xu · Andrew Perrault · Milind Tambe
- 2019 Poster: Balancing Efficiency and Fairness in On-Demand Ridesourcing
  Nixie Lesmana · Xuan Zhang · Xiaohui Bei