Following growing concerns with both harmful research impact and research conduct in computer science, including concerns with research published at NeurIPS, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.
These efforts reflect a recognition that existing research norms have failed to address the impacts of AI research, and they take place against the backdrop of a larger reckoning with the role of AI in perpetuating injustice. The changes have been met with both praise and criticism: some within and outside the community see them as a crucial first step towards integrating ethical reflection and review into the research process, fostering changes necessary to protect populations at risk of harm. Others worry that AI researchers are not well placed to recognize and reason about the potential impacts of their work, as effective ethical deliberation may require different expertise and the involvement of other stakeholders.
This debate reveals that even as the AI research community begins to grapple with the legitimacy of certain research questions and to reflect critically on its research practices, many open questions remain about how to ensure effective ethical oversight. This workshop therefore aims to examine how concerns with harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. The event complements other NeurIPS workshops this year devoted to normative issues in AI and builds on others from years past, but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.