How Should a Machine Learning Researcher Think About AI Ethics?
Amanda Askell · Abeba Birhane · Jesse Dodge · Casey Fiesler · Pascale N Fung · Hanna Wallach

Fri Dec 10 03:00 PM -- 04:00 PM (PST)

As machine learning becomes increasingly widespread in the real world, a growing set of well-documented potential harms has emerged that needs to be acknowledged and addressed. In particular, valid concerns about data privacy, algorithmic bias, automation risk, potential malicious uses, and more have highlighted the need for active consideration of critical ethical issues in the field. In light of this, there have been calls for machine learning researchers to actively consider not only the potential benefits of their research but also its potential negative societal impact, and to adopt measures that enable positive trajectories to unfold while mitigating the risk of harm.

However, grappling with ethics is still a difficult and unfamiliar problem for many in the field. A common difficulty in assessing ethical impact is its indirectness: most papers focus on general-purpose methodologies (e.g., optimization algorithms), whereas ethical concerns are more apparent when considering downstream applications (e.g., surveillance systems). Moreover, real-world impact (both positive and negative) often emerges from the cumulative progress of many papers, making it difficult to attribute that impact to any individual paper. Furthermore, standard research ethics mechanisms such as an Institutional Review Board (IRB) are not always a good fit for machine learning, and problematic research practices involving extensive environmental and labor costs or inappropriate data use are so ingrained in community norms that it can be difficult to articulate where to draw the line as expectations evolve.

How should machine learning researchers wrestle with these topics in their own research? In this panel, we invite the NeurIPS community to contribute questions stemming from their own research and other experiences, so that we can develop community norms around AI ethics and provide concrete guidance to individual researchers.

Author Information

Amanda Askell (Anthropic)
Abeba Birhane (University College Dublin, Ireland)
Jesse Dodge (Allen Institute for AI)
Casey Fiesler (University of Colorado Boulder)
Pascale N Fung (Hong Kong University of Science and Technology)
Pascale Fung (馮雁) (born 1966 in Shanghai, China) is a professor in the Department of Electronic & Computer Engineering and the Department of Computer Science & Engineering at the Hong Kong University of Science & Technology (HKUST). She is the director of the newly established, multidisciplinary Centre for AI Research (CAiRE) at HKUST. She is an elected Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her "contributions to human-machine interactions" and an elected Fellow of the International Speech Communication Association for "fundamental contributions to the interdisciplinary area of spoken language human-machine interactions".

Hanna Wallach (Microsoft)