Expo Talk Panel
Room R06-R09 (level 2)

Sound advancements in machine learning progress through a rigorous process from idea to baselines, peer review, publication, and reproduction. Rinse and repeat. This process, however, is fraught with three crises: reproducibility, robustness, and reviewers. The Cambrian explosion of advancements over the last year, coupled with the novel hurdles posed by evaluating generative AI, further exacerbates these crises. This not only stymies advancements in ML, but also erodes trust among practitioners and the general public in the capabilities of this technology. Fortunately, there’s an embarrassingly simple structural answer to these crises: parallelization. This talk will present what it looks like to apply parallelization to existing frameworks of empirical rigor and how to do so in practice at scale. A transparent, community-driven process for parallelizing empirical rigor has the potential to improve the quality and trustworthiness of machine learning research and the pace at which the field advances.
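As a loose illustration of the idea (a hypothetical sketch, not the speakers' framework): one concrete form of parallelized empirical rigor is replicating the same experiment across many random seeds concurrently and reporting the resulting score distribution rather than a single run. The `run_experiment` function below is an assumed stand-in for any training-and-evaluation routine.

```python
# Hypothetical sketch: parallelizing one ingredient of empirical rigor --
# replicating an experiment across many seeds concurrently and reporting
# a score distribution instead of a single cherry-picked number.
from concurrent.futures import ProcessPoolExecutor
from statistics import mean, stdev
import random


def run_experiment(seed: int) -> float:
    """Stand-in for any training-and-evaluation routine.

    Here it just returns a noisy score; in practice it would train a
    model with the given seed and return its evaluation metric.
    """
    rng = random.Random(seed)
    return 0.80 + rng.gauss(0.0, 0.02)


if __name__ == "__main__":
    seeds = range(20)
    # Each replication is independent, so the runs parallelize trivially.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(run_experiment, seeds))
    # Report the distribution, not a single run.
    print(f"mean={mean(scores):.3f} +/- {stdev(scores):.3f} (n={len(scores)})")
```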
