Collective Data Bargaining for Fairness in Health Time Series AI
Abstract
We present collective data bargaining as a participatory mechanism for improving algorithmic fairness in health time series AI systems. Using gender bias in medical profession predictions as a proxy for broader fairness challenges, we introduce a three-phase pipeline: (1) baseline bias measurement with 95% confidence intervals, (2) collective bargaining with tipping-curve analysis, and (3) evaluation under realistic defense mechanisms. Experiments with health-specific prompts show that coordinated community contributions reduce gender bias by 31 percentage points without degrading model utility. These results provide experimental validation that collective data bargaining, often framed as a security concern, can be reframed as a civic mechanism for fairness in health AI, and they open new directions for community-driven governance of time series models.
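To make phase (1) concrete, the sketch below shows one way a baseline bias measurement with a 95% confidence interval could be computed; the prediction-rate-gap metric, the bootstrap procedure, and all names (e.g., gender_bias_gap) are illustrative assumptions rather than the paper's actual method.

import numpy as np

def gender_bias_gap(preds_female, preds_male, n_boot=10_000, seed=0):
    """Gap in positive-prediction rates between two groups, with a
    bootstrap 95% confidence interval.

    preds_female, preds_male: 1-D arrays of 0/1 model outputs (e.g.,
    whether a health-related prompt was answered with "physician")
    for prompts referring to female vs. male subjects.
    """
    rng = np.random.default_rng(seed)
    preds_female = np.asarray(preds_female)
    preds_male = np.asarray(preds_male)

    # Point estimate of the bias gap (in prediction-rate terms).
    point = preds_female.mean() - preds_male.mean()

    # Nonparametric bootstrap: resample each group with replacement.
    gaps = np.empty(n_boot)
    for i in range(n_boot):
        f = rng.choice(preds_female, size=preds_female.size, replace=True)
        m = rng.choice(preds_male, size=preds_male.size, replace=True)
        gaps[i] = f.mean() - m.mean()

    lo, hi = np.percentile(gaps, [2.5, 97.5])
    return point, (lo, hi)

if __name__ == "__main__":
    # Synthetic example only: a gap of roughly 31 percentage points.
    rng = np.random.default_rng(1)
    female = rng.binomial(1, 0.25, size=500)
    male = rng.binomial(1, 0.56, size=500)
    gap, ci = gender_bias_gap(female, male)
    print(f"bias gap = {gap:+.3f}, 95% CI = ({ci[0]:+.3f}, {ci[1]:+.3f})")

The same interval estimate can then serve as the reference point against which the post-bargaining bias reduction is compared in phases (2) and (3).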