Workshop
Transparent and interpretable Machine Learning in Safety Critical Environments
Alessandra Tosi · Alfredo Vellido · Mauricio Álvarez

Fri Dec 08 08:00 AM -- 06:30 PM (PST) @ 204
Event URL: https://sites.google.com/view/timl-nips2017/home

The use of machine learning has become pervasive in our society, from specialized scientific data analysis to industry intelligence and practical applications with a direct impact on the public domain. This impact raises a range of social issues, including privacy, ethics, liability and accountability. This workshop aims to discuss the use of machine learning in safety critical environments, with special emphasis on three main application domains:
- Healthcare
- Autonomous systems
- Complaints and liability in data-driven industries
We aim to answer some of the following questions: How do we make our models more comprehensible and transparent? Should we always trust our decision-making processes? How do we involve field experts in making machine learning pipelines more practically interpretable from the viewpoint of the application domain?

Author Information

Alessandra Tosi (Mind Foundry)

Alessandra Tosi is a Machine Learning research scientist at Mind Foundry, an Oxford University spin-out company. Her research interests lie in the area of probabilistic models, with a particular focus on Gaussian Process based techniques and latent variable models. She is interested in the underlying geometry of probabilistic models, with special attention to the behaviour of metrics in probabilistic geometries. Her work places great emphasis on data visualization and the interpretability of these models.

Alfredo Vellido (Universitat Politècnica de Catalunya, UPC BarcelonaTech)
Mauricio Álvarez (University of Sheffield)
