

NeurIPS Code of Ethics

Preamble

 

The Code of Ethics aims to guide the NeurIPS community towards higher standards of ethical conduct as it pertains to research ethics and the broader societal and environmental impact of research submitted to NeurIPS. It outlines conference expectations about the ethical practices that must be adopted by submitting authors and by members of the program and organizing committees. The Code of Ethics complements the NeurIPS Code of Conduct, which focuses on professional conduct and research integrity issues, including plagiarism, fraud, and reproducibility concerns. The points described below also inform the NeurIPS Submission Checklist, which outlines more concrete communication requirements.

Potential Harms Caused by the Research Process 

 

Research involving human subjects or participants:

  • Fair Wages: All human research subjects or participants must receive appropriate compensation. If you make use of crowdsourcing or contract work for a particular task as part of your research project, you must pay at least the minimum hourly wage in the region where the work is carried out.
  • Research involving human participants: If the research presented involves direct interactions between the researchers and human participants, or between a technical system and human participants, authors are required to follow existing protocols at their institutions (e.g. human subject research accreditation, IRB review) and go through the relevant process. In cases where no formal process exists, they can undergo an equivalent informal process (e.g. review by their peers or an internal ethics review).
 

Data-related concerns:

The points listed below apply to all datasets used for submissions, both publicly available and internal.

  • Privacy: Datasets should minimize the exposure of any personally identifiable information, unless the individuals concerned have given informed consent to its inclusion.
  • Consent: Authors who create a dataset containing data from real people should obtain the explicit consent of the participants, or explain why they were unable to do so.
  • Deprecated datasets: Authors should take care to confirm with dataset creators that a dataset is still available for use. Datasets taken down by the original author (i.e. deemed obsolete or otherwise discontinued) should no longer be used, except for the purposes of audit or critical assessment. For an indication of known deprecated datasets, please refer to the NeurIPS list of deprecated datasets.
  • Copyright and Fair Use: While the norms of fair use and copyright in machine learning research are still evolving, authors must respect the terms of datasets that have defined licenses (e.g. CC 4.0, MIT, etc.).
  • Representative evaluation practice: When collecting new datasets or making decisions about which datasets to use, authors should assess and communicate the degree to which their datasets are representative of their intended population. Claims of diverse or universal representation should be substantiated by concrete evidence or examples.

Societal Impact and Potential Harmful Consequences

 

Authors should transparently communicate the known or anticipated consequences of their research, for instance via the paper checklist or a separate section in the submission.

The following specific areas are of particular concern:

 
  • Safety: Contributors should consider whether there are foreseeable situations in which their technology can be used to harm, injure or kill people through its direct application, side effects, or potential misuse. We do not accept research whose primary goal is to increase the lethality of weapons systems.
  • Security: Researchers should consider whether there is a risk that applications could open security vulnerabilities or cause serious accidents when deployed in real-world environments. If this is the case, they should take concrete steps to recommend or implement ways to protect against such security risks.
  • Discrimination: Researchers should consider whether the technology they developed can be used to discriminate, exclude, or otherwise negatively impact people, including impacts on the provision of services such as healthcare, education or access to credit.  
  • Surveillance: Researchers should consult local laws and legislation before collecting or analyzing any bulk surveillance data. Surveillance should not be used to predict protected categories, or be used in any way to endanger individual well-being.
  • Deception & Harassment: Researchers should communicate about whether their approach could be used to facilitate deceptive interactions that would cause harm such as theft, fraud, or harassment, and whether it could be used to impersonate public figures and influence political processes, or as a tool to promote hate speech or abuse.
  • Environment: Researchers should consider whether their research is going to negatively impact the environment by, e.g., promoting fossil fuel extraction, increasing societal consumption, or producing substantial amounts of greenhouse gases.
  • Human Rights: We prohibit circulation of any research work that builds upon or facilitates illegal activity, and we strongly discourage any work that could be used to deny people rights to privacy, speech, health, liberty, security, legal personhood, or freedom of conscience or religion.
  • Bias and fairness: Contributors should consider any suspected biases or limitations in the scope of performance of models or in the contents of datasets, and inspect these to ascertain whether they encode, contain, or exacerbate bias against people of a certain gender, race, sexuality, or other protected characteristics.

Impact Mitigation Measures 

 

We propose the following reflections and actions to help mitigate potentially harmful consequences of a research project.

 
  • Data and model documentation: Researchers should communicate the details of the dataset or the model as part of their submissions via structured templates.
  • Data and model licenses: If releasing data or models, authors should also provide licenses for them. These should include the intended use and limitations of these artifacts, in order to prevent misuse or inappropriate use.
  • Secure and privacy-preserving data storage & distribution: Authors should leverage privacy protocols, encryption, and anonymization to reduce the risk of data leakage or theft; a minimal illustrative sketch of one such anonymization step follows this list. Stronger measures should be employed for more sensitive data (e.g., biometric or medical data).
  • Responsible release and publication strategy: Models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, e.g. by requiring that users adhere to a code of conduct to access the model. Authors of papers exposing a security vulnerability in a system should follow the responsible disclosure procedures of the system owners.
  • Allowing access to research artifacts: When releasing research artifacts, it is important to make accessible the information required to understand these artifacts (e.g. the code, execution environment versions, weights, and hyperparameters of systems) to enable external scrutiny and auditing.
  • Disclose essential elements for reproducibility: Any work submitted to NeurIPS should be accompanied by the information sufficient for the reproduction of results described. This can include the code, data, model weights, and/or a description of the computational resources needed to train the proposed model or validate the results.
  • Ensure legal compliance: Ensure adequate awareness of regional legal requirements. This can be done, for instance, by consulting with law school clinics specializing in intellectual property and technology issues. Additional information is required from authors in cases where legal compliance could not be met because it would entail human rights violations (e.g. of freedom of expression, the right to work and education, or bodily autonomy).
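As a concrete illustration of the anonymization step mentioned under secure and privacy-preserving data storage & distribution, the following is a minimal sketch, assuming a tabular record with hypothetical "name" and "email" fields, of replacing direct identifiers with salted, keyed hashes before data are shared. It is not an endorsed or sufficient privacy mechanism on its own: keyed hashing of direct identifiers does not protect against re-identification from the remaining attributes, and sensitive data will generally require stronger measures.

```python
# Illustrative sketch only (standard library): pseudonymize direct identifiers
# with HMAC-SHA256 before a dataset is distributed. Field names and records
# are hypothetical, not part of any NeurIPS requirement.
import hashlib
import hmac
import secrets

# In practice the key should be generated once, stored securely, and never
# distributed with the data; here it is created ad hoc for illustration.
PSEUDONYMIZATION_KEY = secrets.token_bytes(32)


def pseudonymize(value: str) -> str:
    """Return a keyed hash (HMAC-SHA256) of a direct identifier."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()


def scrub_record(record: dict, pii_fields: tuple = ("name", "email")) -> dict:
    """Replace PII fields with pseudonyms; leave other fields untouched."""
    return {key: (pseudonymize(val) if key in pii_fields else val)
            for key, val in record.items()}


if __name__ == "__main__":
    raw = {"name": "Ada Lovelace", "email": "ada@example.org", "label": 1}
    print(scrub_record(raw))  # identifiers replaced, "label" preserved
```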

Violations

Violations of the Code of Ethics should be reported to hotline@neurips.cc. NeurIPS reserves the right to reject the presentation of scientific works that violate the Code of Ethics. Note that conference contributors are also obliged to adhere to additional ethical codes or review requirements arising from other stakeholders, such as funders and research institutions.

 

 

Further reading

 

  • Understanding licenses
  • Model and data documentation templates
  • Societal impact
  • Related endeavors
  • Related research communities