Poster in Workshop: Trustworthy and Socially Responsible Machine Learning

A Brief Overview of AI Governance for Responsible Machine Learning Systems

Navdeep Gill · Marcos Conde


Abstract:

Organizations of all sizes, across all industries and domains, are leveraging artificial intelligence (AI) technologies to address some of their biggest challenges in operations, customer experience, and beyond. However, due to the probabilistic nature of AI, its associated risks are far greater than those of traditional technologies. Research has shown that these risks range from regulatory and compliance, reputational, user-trust, and societal risks to financial and even existential ones. Depending on the nature and size of the organization, AI technologies can therefore pose significant risk if not used responsibly. This text presents a brief introduction to AI governance: a framework designed to oversee the responsible use of AI with the goal of preventing and mitigating risks. Such a framework not only helps manage risk but also helps organizations extract maximum value from AI projects and build consistency for organization-wide adoption of AI.