Poster

Touch and Go: Learning from Human-Collected Vision and Touch

Fengyu Yang · Chenyang Ma · Jiacheng Zhang · Jing Zhu · Wenzhen Yuan · Andrew Owens

Hall J (level 1) #1024

Abstract:

The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of “in the wild” objects and scenes. We successfully apply our dataset to a variety of multimodal learning tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
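The first task, self-supervised visuo-tactile feature learning, can be illustrated with a minimal sketch of contrastive training on paired vision and touch frames. The encoder choices, embedding size, and the `VisuoTactileEncoder` / `info_nce` names below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: contrastive feature learning on paired visuo-tactile data.
# Architectures, dimensions, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class VisuoTactileEncoder(nn.Module):
    """Two ResNet-18 branches mapping RGB video frames and tactile sensor
    images into a shared, L2-normalized embedding space."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.vision = models.resnet18(weights=None)
        self.touch = models.resnet18(weights=None)
        self.vision.fc = nn.Linear(self.vision.fc.in_features, embed_dim)
        self.touch.fc = nn.Linear(self.touch.fc.in_features, embed_dim)

    def forward(self, rgb, tactile):
        z_v = F.normalize(self.vision(rgb), dim=-1)
        z_t = F.normalize(self.touch(tactile), dim=-1)
        return z_v, z_t


def info_nce(z_v, z_t, temperature: float = 0.07):
    """Symmetric InfoNCE loss: each visual embedding should match the tactile
    embedding recorded at the same moment, and vice versa."""
    logits = z_v @ z_t.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_v.size(0), device=z_v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    model = VisuoTactileEncoder()
    rgb = torch.randn(8, 3, 224, 224)      # egocentric video frames
    tactile = torch.randn(8, 3, 224, 224)  # paired tactile sensor readings
    z_v, z_t = model(rgb, tactile)
    print(info_nce(z_v, z_t).item())
```

The key design choice in this kind of setup is treating temporally aligned vision/touch pairs as positives and all other pairs in the batch as negatives, so the encoders learn features that transfer across the two modalities without manual labels.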
