

Poster in Workshop: Progress and Challenges in Building Trustworthy Embodied AI

Addressing Mistake Severity in Neural Networks with Semantic Knowledge

Victoria Helus · Nathan Vaska · Natalie Abreu

Keywords: [ robustness ] [ semantic knowledge ] [ semantics ] [ artificial intelligence ] [ machine learning ] [ adversarial training ] [ CNNs ] [ Reasoning ] [ Deep Learning ]


Abstract:

Robustness in deep neural networks, and in machine learning algorithms in general, is an open research challenge. In particular, it is difficult to ensure that algorithmic performance is maintained on out-of-distribution inputs or anomalous instances that cannot be anticipated at training time. Embodied agents will be deployed in these conditions and are likely to make incorrect predictions; an agent will be viewed as untrustworthy unless it can maintain its performance in dynamic environments. Most robust training techniques aim to improve model accuracy on perturbed inputs; as an alternative form of robustness, we aim to reduce the severity of mistakes made by neural networks in challenging conditions. We leverage current adversarial training methods to generate targeted adversarial attacks during the training process in order to increase the semantic similarity between a model's predictions and the true labels of misclassified instances. Results demonstrate that our approach performs better with respect to mistake severity than standard and adversarially trained models. We also find an intriguing role that non-robust features play with regard to semantic similarity.
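The abstract's core ingredient is a *targeted* adversarial attack: instead of perturbing an input to maximize loss against its true label, the input is pushed toward a chosen target class (here, one semantically similar to the true label). The exact attack and architecture are not specified in the abstract; as a hypothetical illustration, the sketch below applies a single targeted FGSM-style step to a linear-softmax classifier in NumPy (the model, `targeted_fgsm` helper, and all parameters are assumptions, not the authors' implementation). A deep network would use the same idea with autodiff gradients.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def targeted_fgsm(x, W, b, target, eps):
    """One targeted FGSM step on a linear-softmax model.

    Perturbs x to *decrease* the cross-entropy loss of the chosen
    target class, i.e. steps against the gradient sign, so the model
    is nudged toward predicting `target`. Perturbation is bounded by
    eps in the L-infinity norm.
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    # For a linear-softmax model, d(CE)/dx = W^T (p - onehot).
    grad = W.T @ (p - onehot)
    return x - eps * np.sign(grad)

# Toy demonstration with random weights (illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 10))
b = rng.normal(size=5)
x = rng.normal(size=10)
target = 2  # stand-in for a class semantically similar to the true one

p_before = softmax(W @ x + b)[target]
x_adv = targeted_fgsm(x, W, b, target, eps=0.01)
p_after = softmax(W @ x_adv + b)[target]
# The target class's probability rises after the targeted step.
```

In the training scheme the abstract describes, such perturbed inputs would then be trained on with their original true labels, discouraging the model from drifting toward semantically *distant* classes when it does err.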
