Exhibitor Talk - MathWorks: Would you trust your AI model with your life?
Abstract
Generative AI has made headlines with fluent text and clever code, setting new benchmarks in creativity and performance. But in safety-critical domains like aerospace, automotive, and healthcare, the challenge is different: ensuring that AI systems behave reliably and safely under all possible operating conditions.
This talk explores the gap between academic breakthroughs and industrial trust, and shows how formal verification, explainability, and runtime assurance can turn black-box models into certifiable systems. Drawing on MathWorks’ experience helping engineers and scientists develop AI-enabled safety-critical systems, we’ll demonstrate how to verify robustness, detect out-of-distribution inputs, and meet emerging safety standards for models in PyTorch, ONNX, and MATLAB.
If your AI is heading into the real world, it’s time to ask yourself: Would you trust it with your life?