

Workshop

Progress and Challenges in Building Trustworthy Embodied AI

Chen Tang · Karen Leung · Leilani Gilpin · Jiachen Li · Changliu Liu

Room 357

Recent advances in deep learning and artificial intelligence have equipped autonomous agents with increasing intelligence, enabling human-level performance on challenging tasks. In particular, agents with advanced intelligence have shown great potential for interacting and collaborating with humans (e.g., self-driving cars, industrial robot co-workers, smart homes, and domestic robots). However, the opaque nature of deep learning models makes it difficult to decipher an agent's decision-making process, preventing stakeholders from readily trusting autonomous agents, especially in safety-critical tasks involving physical human interaction. In this workshop, we bring together experts with diverse and interdisciplinary backgrounds to build a roadmap for developing and deploying trustworthy interactive autonomous systems at scale. Specifically, we aim to address the following questions:

1) What properties are required to build trust between humans and interactive autonomous systems? How can we assess and ensure these properties without compromising the expressiveness of the models and the performance of the overall system?

2) How can we develop and deploy trustworthy autonomous agents under an efficient and trustworthy workflow? How should we transition from development to deployment?

3) How can we define standard metrics to quantify trustworthiness from regulatory, theoretical, and experimental perspectives? How do we know that these trustworthiness metrics scale to the broader population?

4) What are the most pressing aspects and open questions in the development of trustworthy autonomous agents that interact with humans? Which research areas are ripe for academic research, and which are better suited for industry research?

Timezone: America/Los_Angeles

Schedule