We consider the issue of biases in scholarly research, specifically in peer review. There is a long-standing debate on whether exposing author identities to reviewers induces biases against certain groups, and our focus is on designing tests to detect the presence of such biases. Our starting point is a remarkable recent work by Tomkins, Zhang and Heavlin, which conducted a controlled, large-scale experiment to investigate the existence of biases in the peer reviewing of the WSDM conference. We present two sets of results in this paper. The first set of results is negative, and pertains to the statistical tests and the experimental setup used in the work of Tomkins et al. We show that the test employed therein does not guarantee control over the false alarm probability: under correlations between relevant variables, coupled with any of the following conditions, it can with high probability declare the presence of a bias when it is in fact absent: (a) measurement error, (b) model mismatch, (c) reviewer calibration. Moreover, we show that the setup of their experiment may itself inflate the false alarm probability if (d) bidding is performed in a non-blind manner or (e) a popular reviewer-assignment procedure is employed. Our second set of results is positive, in that we present a general framework for testing for biases in (single- vs. double-blind) peer review. We then present a hypothesis test with guaranteed control over the false alarm probability and non-trivial power even under conditions (a)--(c). Conditions (d) and (e) are more fundamental problems that are tied to the experimental setup and not necessarily related to the test.
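For readers unfamiliar with what "guaranteed control over the false alarm probability" means in this setting, the sketch below illustrates one standard way such a guarantee can be obtained: a paired sign-flip permutation test in Python. The pairing of single-blind and double-blind scores for the same papers, the one-sided alternative, and the function name `permutation_bias_test` are illustrative assumptions made here; this is not the test proposed in the paper, whose contribution is to retain such guarantees even under conditions (a)--(c), where a simple exchangeability argument like the one below may no longer apply.

```python
import numpy as np

def permutation_bias_test(single_blind_scores, double_blind_scores,
                          num_permutations=10000, alpha=0.05, seed=0):
    """Paired sign-flip permutation test for a shift between the scores
    that the same papers received under single-blind and double-blind review.

    Under the null hypothesis of no bias, the single-/double-blind labels are
    exchangeable within each paper, so the paired differences are symmetric
    about zero and the false alarm probability is at most alpha.
    """
    rng = np.random.default_rng(seed)
    diffs = (np.asarray(single_blind_scores, dtype=float)
             - np.asarray(double_blind_scores, dtype=float))
    observed = diffs.mean()  # bias against the group would push this below zero

    # Re-randomize the sign of each paired difference to sample the null distribution.
    hits = 0
    for _ in range(num_permutations):
        signs = rng.choice((-1.0, 1.0), size=diffs.size)
        if (signs * diffs).mean() <= observed:
            hits += 1

    p_value = (hits + 1) / (num_permutations + 1)
    return p_value, p_value <= alpha

# Example usage (hypothetical data): scores given to papers from the group
# under study by single-blind and double-blind reviewers, respectively.
# p, reject = permutation_bias_test(sb_scores, db_scores)
```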
Author Information
Ivan Stelmakh (Carnegie Mellon University)
Nihar Shah (Carnegie Mellon University)
Aarti Singh (Carnegie Mellon University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Spotlight: On Testing for Biases in Peer Review
  Fri. Dec 13th, 12:15 -- 12:20 AM, Room: West Exhibition Hall C + B3
More from the Same Authors
- 2021: CrowdSpeech and Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription
  Nikita Pavlichenko · Ivan Stelmakh · Dmitry Ustalov
- 2023 Poster: Counterfactual Evaluation of Peer-Review Assignment Strategies
  Martin Saveski · Steven Jecmen · Nihar Shah · Johan Ugander
- 2021 Poster: Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels
  Stefani Karp · Ezra Winston · Yuanzhi Li · Aarti Singh
- 2020 Poster: Preference-based Reinforcement Learning with Finite-Time Guarantees
  Yichong Xu · Ruosong Wang · Lin Yang · Aarti Singh · Artur Dubrawski
- 2020 Spotlight: Preference-based Reinforcement Learning with Finite-Time Guarantees
  Yichong Xu · Ruosong Wang · Lin Yang · Aarti Singh · Artur Dubrawski
- 2020 Poster: Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments
  Steven Jecmen · Hanrui Zhang · Ryan Liu · Nihar Shah · Vincent Conitzer · Fei Fang
- 2018 Poster: How Many Samples are Needed to Estimate a Convolutional Neural Network?
  Simon Du · Yining Wang · Xiyu Zhai · Sivaraman Balakrishnan · Russ Salakhutdinov · Aarti Singh
- 2018 Poster: Optimization of Smooth Functions with Noisy Observations: Local Minimax Rates
  Yining Wang · Sivaraman Balakrishnan · Aarti Singh
- 2017 Poster: Hypothesis Transfer Learning via Transformation Functions
  Simon Du · Jayanth Koushik · Aarti Singh · Barnabas Poczos
- 2017 Poster: Gradient Descent Can Take Exponential Time to Escape Saddle Points
  Simon Du · Chi Jin · Jason D Lee · Michael Jordan · Aarti Singh · Barnabas Poczos
- 2017 Spotlight: Gradient Descent Can Take Exponential Time to Escape Saddle Points
  Simon Du · Chi Jin · Jason D Lee · Michael Jordan · Aarti Singh · Barnabas Poczos
- 2017 Poster: On the Power of Truncated SVD for General High-rank Matrix Estimation Problems
  Simon Du · Yining Wang · Aarti Singh
- 2017 Poster: Noise-Tolerant Interactive Learning Using Pairwise Comparisons
  Yichong Xu · Hongyang Zhang · Aarti Singh · Artur Dubrawski · Kyle Miller
- 2016 Poster: Data Poisoning Attacks on Factorization-Based Collaborative Filtering
  Bo Li · Yining Wang · Aarti Singh · Yevgeniy Vorobeychik
- 2015: Tsybakov Noise Adaptive Margin-Based Active Learning
  Aarti Singh
- 2015 Poster: Differentially private subspace clustering
  Yining Wang · Yu-Xiang Wang · Aarti Singh
- 2013 Poster: Near-optimal Anomaly Detection in Graphs using Lovasz Extended Scan Statistic
  James L Sharpnack · Akshay Krishnamurthy · Aarti Singh
- 2013 Poster: Low-Rank Matrix and Tensor Completion via Adaptive Sampling
  Akshay Krishnamurthy · Aarti Singh
- 2013 Poster: Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean Separation
  Martin Azizyan · Aarti Singh · Larry Wasserman
- 2013 Poster: Cluster Trees on Manifolds
  Sivaraman Balakrishnan · Srivatsan Narayanan · Alessandro Rinaldo · Aarti Singh · Larry Wasserman
- 2012 Workshop: Algebraic Topology and Machine Learning
  Sivaraman Balakrishnan · Alessandro Rinaldo · Donald Sheehy · Aarti Singh · Larry Wasserman
- 2011 Poster: Minimax Localization of Structural Information in Large Noisy Matrices
  Mladen Kolar · Sivaraman Balakrishnan · Alessandro Rinaldo · Aarti Singh
- 2011 Poster: Noise Thresholds for Spectral Clustering
  Sivaraman Balakrishnan · Min Xu · Akshay Krishnamurthy · Aarti Singh
- 2011 Spotlight: Noise Thresholds for Spectral Clustering
  Sivaraman Balakrishnan · Min Xu · Akshay Krishnamurthy · Aarti Singh
- 2011 Spotlight: Minimax Localization of Structural Information in Large Noisy Matrices
  Mladen Kolar · Sivaraman Balakrishnan · Alessandro Rinaldo · Aarti Singh
- 2010 Oral: Identifying graph-structured activation patterns in networks
  James L Sharpnack · Aarti Singh
- 2010 Poster: Identifying graph-structured activation patterns in networks
  James L Sharpnack · Aarti Singh
- 2008 Poster: Unlabeled data: Now it helps, now it doesn't
  Aarti Singh · Rob Nowak · Jerry Zhu
- 2008 Oral: Unlabeled data: Now it helps, now it doesn't
  Aarti Singh · Rob Nowak · Jerry Zhu