

Poster in Workshop: HCAI@NeurIPS 2022, Human Centered AI

Indexing AI Risks with Incidents, Issues, and Variants

Sean McGregor · Kevin Paeth · Khoa Lam

Keywords: [ ai governance ] [ hcai ] [ human-centered AI ] [ AI safety ] [ AI Fairness ] [ AI Incidents ] [ Ontology ] [ Database ] [ AI Issues ] [ AI Transparency ]


Abstract:

Two years after the public launch of the AI Incident Database (AIID) as a collection of harms or near harms produced by AI in the world, a backlog of "issues" that do not meet its incident ingestion criteria has accumulated in its review queue. Despite not passing the database's current criteria for incidents, these issues advance human understanding of where AI presents the potential for harm. Similar to databases in aviation and computer security, the AIID proposes to adopt a two-tiered system for indexing AI incidents (i.e., a harm or near-harm event) and issues (i.e., a risk of a harm event). Further, because some machine learning-based systems can produce a large number of incidents, the notion of an incident "variant" is also introduced. These proposed changes mark the transition of the AIID to a new version in response to lessons learned from editing 1,800+ incident reports, as well as additional reports that fall under the new category of "issue."
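The sketch below illustrates one way the proposed two-tiered index of incidents and issues, with variants linked to a parent incident, might be modeled. The class names, fields, and linkage shown here are illustrative assumptions for exposition, not the AIID's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RecordType(Enum):
    """Top-level categories in the proposed two-tiered index (illustrative)."""
    INCIDENT = "incident"  # a realized harm or near-harm event
    ISSUE = "issue"        # a reported risk of a harm event


@dataclass
class Report:
    """A single published report submitted to the database (hypothetical fields)."""
    url: str
    title: str


@dataclass
class IncidentRecord:
    """An indexed AI incident: a harm or near-harm produced by an AI system."""
    incident_id: int
    description: str
    reports: List[Report] = field(default_factory=list)
    record_type: RecordType = RecordType.INCIDENT


@dataclass
class IssueRecord:
    """An indexed AI issue: a risk of harm that does not yet meet the
    incident ingestion criteria."""
    issue_id: int
    description: str
    reports: List[Report] = field(default_factory=list)
    record_type: RecordType = RecordType.ISSUE


@dataclass
class Variant:
    """An incident variant: an additional occurrence attributed to an
    existing incident, linked to it rather than indexed separately."""
    parent_incident_id: int
    description: str
    reports: List[Report] = field(default_factory=list)
```

Under this sketch, a new submission that documents a realized harm becomes an IncidentRecord, one that documents only a risk becomes an IssueRecord, and repeated occurrences of an already-indexed incident are attached as Variant records pointing to the parent incident.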
