NIPS 2018 Expo Panel
Dec. 5, 2018
Governance of AI - a Human Rights Approach
Sponsor: Element AI
Mathieu Marcotte (Element AI), Genevieve Merat (Element AI)
As AI grows at an ever-faster pace, it raises questions about the social and ethical impacts of the technology. Several renowned experts have studied, and continue to study, the core ethical principles that should guide the development of AI. To draw a parallel with scientific research: ethical research to date - the equivalent of "fundamental research" - is well advanced, but "applied research" - the legal and regulatory tools needed to govern AI - is lagging behind.
Our panel will look at the latest legislative and regulatory approaches being considered to protect rights - the right to privacy, to equality, to procedural fairness, etc. - and to build public trust in AI.
Regulations and laws organize society, setting out the rules that everyone must abide by. In the case of AI, development has largely been unfettered by regulation - a laissez-faire approach - which has certainly produced unparalleled innovation. But as the risks inherent in AI become more apparent, and as public trust in AI and big data erodes, what regulatory approach should we take to continue supporting innovation while ensuring that AI's development remains human-centric? Is our current legal infrastructure capable of handling AI-related issues? What changes are immediately apparent?
- Marc-Étienne Ouimette, Director of Public Policy, Element AI
- Ed Santow, Australian Human Rights Commissioner
- JF Gagné, CEO, Element AI
- Eimear Farrell, Advocate and Adviser for Technology and Human Rights, Amnesty International