Workshop
Touch Processing: a new Sensing Modality for AI
Roberto Calandra · Haozhi Qi · Mike Lambeta · Perla Maiolino · Yasemin Bekiroglu · Jitendra Malik
Room 214
This workshop aims to lay the foundations for AI/ML methods dedicated to studying touch, enabling future applications in areas such as robotics and AR/VR.
Schedule
Fri 6:45 a.m. - 7:00 a.m. | Opening Remarks | Roberto Calandra
Fri 7:00 a.m. - 7:30 a.m. | Talk | Chiara Bartolozzi - Neuromorphic Touch: from sensing to perception
Fri 7:30 a.m. - 8:00 a.m. | Poster Spotlights
Fri 8:00 a.m. - 9:00 a.m. | Poster Session + Coffee Break
Fri 9:00 a.m. - 9:30 a.m. | Talk | Jiajun Wu - Multi-Sensory Neural Objects: Modeling, Datasets, and Applications
Fri 9:30 a.m. - 10:00 a.m. | Talk | Satoshi Funabashi - Hand Morphology from Tactile Sensing with Spatial Deep Learning for Dexterous Tasks
Fri 10:00 a.m. - 11:30 a.m. | Lunch Break
Fri 11:30 a.m. - 12:00 p.m. | Talk | Edward (Ted) Adelson - Building fingers and hands with vision-based tactile sensing
Fri 12:00 p.m. - 12:30 p.m. | Talk | Nathan Lepora - Progress in real, simulated and sim2real optical tactile sensing
Fri 12:30 p.m. - 1:00 p.m. | Talk | Jeremy Fishel - Using touch to create human-like intelligence in general-purpose robots
Fri 1:00 p.m. - 2:00 p.m. | Poster Session + Coffee Break
Fri 2:00 p.m. - 2:30 p.m. | Talk | Veronica Santos - Tactile perception for human-robot systems
Fri 2:30 p.m. - 3:00 p.m. | Talk | Katherine J. Kuchenbecker - Haptic Intelligence
Fri 3:00 p.m. - 3:30 p.m. | Panel Discussion
Posters
Poster | Tactile Active Texture Recognition With Vision-Based Tactile Sensors
This paper investigates active sensing strategies that employ vision-based tactile sensors for robotic perception and classification of fabric textures. We formalize the active sampling problem in the context of tactile fabric recognition and provide an implementation of information-theoretic exploration strategies based on minimizing predictive entropy and variance of probabilistic neural network classifiers. By evaluating our method on a real robotic system, we find that the choice of the active exploration strategy has a relatively minor influence on the recognition accuracy as long as the objects are touched more than once. In a comparison study, while humans achieve 66.9% recognition accuracy, our best approach reaches 90.0%, showing that vision-based tactile sensors are highly effective for fabric recognition.
Alina Boehm · Tim Schneider · Boris Belousov · Alap Kshirsagar · Lisa Lin · Katja Doerschner · Knut Drewing · Constantin Rothkopf · Jan Peters
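As a rough illustration of the entropy-minimization strategy described in the abstract above, the following Python sketch scores candidate touches by the predictive entropy of a probabilistic classifier; `classifier`, `predict_proba`, and `candidate_touches` are placeholder names for illustration, not the authors' implementation.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a vector of class probabilities."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(probs * np.log(probs)))

def select_next_touch(classifier, tactile_history, candidate_touches):
    """Greedy entropy minimization: pick the candidate touch whose (predicted)
    observation leaves the classifier least uncertain about the fabric class."""
    scores = []
    for touch in candidate_touches:
        # The classifier is assumed to expose predict_proba(history) returning
        # per-class probabilities for the history extended by this touch.
        probs = classifier.predict_proba(tactile_history + [touch])
        scores.append(predictive_entropy(np.asarray(probs)))
    return candidate_touches[int(np.argmin(scores))]
```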
Poster | Tactile Sensing for Stable Object Placing
Placing objects on flat surfaces is a crucial skill for robots to master in household environments. Common object-placing approaches require either complete scene specifications or (extrinsic) vision systems, which occasionally suffer from occlusions. Rather than relying on indirect measurements, we propose a novel approach for stable object placing that leverages tactile feedback from an object grasp. We devise a neural architecture called PlaceNet that estimates a rotation matrix, resulting in a corrective gripper movement that aligns the object with the placing surface for the subsequent object manipulation. Our evaluation compares different sensing modalities to each other and PlaceNet to classical, non-learning approaches to assess whether a data-driven approach is indeed required. Applying PlaceNet to a set of unseen everyday objects reveals significant generalization of our proposed pipeline, suggesting that tactile sensing plays a vital role in the intrinsic understanding of robotic dexterous object manipulation. Code, models, and supplementary videos will be made available upon acceptance.
Luca Lach · Niklas Funk · Georgia Chalvatzaki · Robert Haschke · Jan Peters · Helge Ritter
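The corrective rotation mentioned above could, for instance, be predicted with the common 6D continuous rotation parameterization; the module below is a hypothetical stand-in for illustration, not the authors' PlaceNet architecture.

```python
import torch
import torch.nn as nn

class TinyRotationHead(nn.Module):
    """Maps a tactile feature vector to a 3x3 rotation matrix via the 6D
    continuous rotation representation (two vectors orthonormalized by
    Gram-Schmidt). Feature and hidden sizes are illustrative."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 6))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        a = self.mlp(feats).view(-1, 2, 3)                     # two raw 3D vectors
        b1 = nn.functional.normalize(a[:, 0], dim=-1)
        b2 = nn.functional.normalize(
            a[:, 1] - (b1 * a[:, 1]).sum(-1, keepdim=True) * b1, dim=-1)
        b3 = torch.cross(b1, b2, dim=-1)
        return torch.stack([b1, b2, b3], dim=-2)               # (batch, 3, 3)
```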
Poster | Transferring Tactile-based Continuous Force Control Policies from Simulation to Robot
The advent of tactile sensors in robotics has sparked many ideas on how robots can leverage direct contact measurements of their environment interactions to improve manipulation tasks. An important line of research in this regard is that of grasp force control, which aims to manipulate objects safely by limiting the amount of force exerted on the object. While prior works have either hand-modeled their force controllers, employed model-based approaches, or have not shown sim-to-real transfer, we propose a model-free deep reinforcement learning approach that is trained in simulation and then transferred to the robot without further fine-tuning. We therefore present a simulation environment that produces realistic normal forces, which we use to train continuous force control policies. An evaluation in which we compare against a baseline and perform an ablation study shows that our approach outperforms the hand-modeled baseline, and that our proposed inductive bias and domain randomization facilitate sim-to-real transfer.
Luca Lach · Robert Haschke · Davide Tateo · Jan Peters · Helge Ritter · Júlia Borràs · Carme Torras
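A toy sketch of the domain-randomization idea: each episode draws new contact stiffness and damping so a force-control policy cannot overfit to a single contact model. The environment below is purely illustrative and unrelated to the paper's simulator.

```python
import numpy as np

class RandomizedForceEnv:
    """Toy grasp-force environment: a spring-damper contact whose parameters
    are re-sampled every episode (domain randomization), with a reward that
    tracks a target normal force. Illustrative only."""

    def __init__(self, target_force: float = 2.0):
        self.target_force = target_force

    def reset(self, rng: np.random.Generator):
        self.stiffness = rng.uniform(200.0, 2000.0)   # N/m, randomized per episode
        self.damping = rng.uniform(1.0, 20.0)         # N*s/m
        self.penetration = 0.0
        return np.array([0.0, self.target_force])     # observation: [force, target]

    def step(self, gripper_velocity: float, dt: float = 0.01):
        self.penetration = max(0.0, self.penetration + gripper_velocity * dt)
        force = max(0.0, self.stiffness * self.penetration
                    + self.damping * gripper_velocity)
        reward = -abs(force - self.target_force)       # penalize force-tracking error
        return np.array([force, self.target_force]), reward
```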
Poster | An Embodied Biomimetic Model of Tactile Perception
Developing an artificial biomimetic model of tactile perception is a key goal for both practical applications in dexterous robotics and in our understanding of human touch. Here, we present a novel, embodied model of tactile perception that mimics physical and computational features of the peripheral and central nervous systems. We deploy the model on a grating discrimination task, showing how integrating sensory evidence over time improves perceptual accuracy. We also provide a method for learning evidence thresholds which optimise the speed-accuracy trade-off. Our model therefore provides a novel implementation of robotic tactile perception and sheds light on the computational features of this process in humans.
Luke Burguete · Thom Griffith · Nathan F Lepora
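The evidence-integration mechanism described above can be illustrated with a simple race-to-threshold rule over accumulated per-class log-likelihoods; the code below is a generic sketch, not the paper's biomimetic model.

```python
import numpy as np

def accumulate_to_threshold(sample_log_likelihoods: np.ndarray, threshold: float = 5.0):
    """sample_log_likelihoods: (T, K) per-sample, per-class log-likelihoods.
    Accumulate evidence over time and decide as soon as the leading class
    beats the runner-up by `threshold`; a larger threshold trades decision
    speed for accuracy."""
    evidence = np.zeros(sample_log_likelihoods.shape[1])
    for t, ll in enumerate(sample_log_likelihoods, start=1):
        evidence += ll
        top_two = np.sort(evidence)[-2:]               # [runner-up, leader]
        if top_two[1] - top_two[0] >= threshold:
            return int(np.argmax(evidence)), t         # decision and decision time
    return int(np.argmax(evidence)), len(sample_log_likelihoods)
```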
Poster | Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control
To improve the robustness of tactile robot control in unstructured environments, we propose and study a new concept: tactile saliency for robot touch, inspired by the human touch attention mechanism from neuroscience and the visual saliency prediction problem from computer vision. In analogy to visual saliency, this concept involves identifying key information in tactile images captured by a tactile sensor. While visual saliency datasets are commonly annotated by humans, manually labelling tactile images is challenging due to their counterintuitive patterns. To address this challenge, we propose a novel approach comprised of three interrelated networks: 1) a Contact Depth Network (ConDepNet), which generates a contact depth map to localize deformation in a real tactile image that contains target and noise features; 2) a Tactile Saliency Network (TacSalNet), which predicts a tactile saliency map to describe the target areas for an input contact depth map; and 3) a Tactile Noise Generator (TacNGen), which generates noise features to train the TacSalNet. Experimental results in contact pose estimation and edge-following in the presence of distractors showcase the accurate prediction of target features from real tactile images. Overall, our tactile saliency prediction approach gives robust sim-to-real tactile control in environments with unknown distractors. Videos for all the experiments are presented at: https://sites.google.com/view/tactilesaliency-anonymous/.
Yijiong Lin · Mauro Comi · Alex Church · Dandan Zhang · Nathan F Lepora
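At inference time the first two networks chain as sketched below (TacNGen is only needed offline, to synthesize noisy training data for TacSalNet); the function assumes already-trained torch modules and is purely illustrative.

```python
import torch

@torch.no_grad()
def predict_saliency(tactile_image: torch.Tensor, con_dep_net, tac_sal_net):
    """Raw tactile image -> contact depth map -> tactile saliency map.
    `con_dep_net` and `tac_sal_net` are assumed to be trained torch modules."""
    depth_map = con_dep_net(tactile_image)   # localize sensor deformation
    saliency_map = tac_sal_net(depth_map)    # keep target areas, suppress distractors
    return saliency_map
```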
Poster | Bi-Touch: Bimanual Tactile Manipulation with Sim-to-Real Deep Reinforcement Learning
Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. However, this topic is less explored than single-arm settings, partly due to the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces. Here we introduce a dual-arm tactile robotic system (Bi-Touch) based on the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot arms with low-cost high-resolution tactile sensors. We present a suite of bimanual manipulation tasks tailored towards tactile feedback: bi-pushing, bi-reorienting, and bi-gathering. To learn effective policies, we introduce appropriate reward functions for these tasks and propose a novel goal-update mechanism with deep reinforcement learning. We also apply these policies to real-world settings with a tactile sim-to-real approach. Our analysis highlights and addresses some challenges met during the sim-to-real application, e.g. the learned policy tended to squeeze an object in the bi-reorienting task due to the sim-to-real gap. Finally, we demonstrate the generalizability and robustness of this system by experimenting with different unseen objects with applied perturbations in the real world. Videos are available at https://sites.google.com/view/bitouch-anonymous/.
Yijiong Lin · Alex Church · Max Yang · Haoran Li · John Lloyd · Dandan Zhang · Nathan F Lepora
Poster | Robot Synesthesia: In-Hand Manipulation with Visuotactile Sensing
Executing contact-rich manipulation tasks necessitates the fusion of tactile and visual feedback. However, the distinct nature of these modalities poses significant challenges. In this paper, we introduce a system that leverages visual and tactile sensory inputs to enable dexterous in-hand manipulation. Specifically, we propose Robot Synesthesia, a novel point cloud-based tactile representation inspired by human tactile-visual synesthesia. This approach allows for the simultaneous and seamless integration of both sensory inputs, offering richer spatial information and facilitating better reasoning about robot actions. Comprehensive ablations are performed on how the integration of vision and touch can improve reinforcement learning and Sim2Real performance.
Ying Yuan · Haichuan Che · Yuzhe Qin · Binghao Huang · Zhao-Heng Yin · YI WU · Xiaolong Wang
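One simple way to realize a shared point-cloud representation is to tag every point with a one-hot modality feature before concatenating camera points and tactile contact points; the sketch below illustrates that idea and may differ from the paper's exact per-point features.

```python
import numpy as np

def fuse_visuotactile_points(visual_xyz: np.ndarray, tactile_xyz: np.ndarray) -> np.ndarray:
    """Concatenate (N_vis, 3) camera points and (N_tac, 3) tactile contact
    points into one (N_vis + N_tac, 5) cloud, where the last two columns are
    a one-hot modality flag so a point-cloud policy can reason over both."""
    vis = np.hstack([visual_xyz, np.tile([1.0, 0.0], (len(visual_xyz), 1))])
    tac = np.hstack([tactile_xyz, np.tile([0.0, 1.0], (len(tactile_xyz), 1))])
    return np.vstack([vis, tac])
```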
Poster | TouchSDF: A DeepSDF Approach for 3D Shape Reconstruction Using Vision-Based Tactile Sensing
Humans rely on their visual and tactile senses to develop a comprehensive 3D understanding of their physical environment. Recently, there has been a growing interest in manipulating objects using data-driven approaches that utilise high-resolution vision-based tactile sensors. However, 3D shape reconstruction using tactile sensing has lagged behind visual shape reconstruction because of limitations in existing techniques, including the inability to generalise over unseen shapes, absence of real-world testing and limited expressive capacity imposed by fixed topologies of graphs or meshes. To address these challenges, we propose TouchSDF, a Deep Learning approach for tactile 3D shape reconstruction that leverages the rich information provided by a vision-based tactile sensor and the expressivity of the implicit neural representation DeepSDF. This combination allows TouchSDF to reconstruct smooth and continuous 3D shapes from tactile inputs in simulation and real-world settings, opening up research avenues for robust 3D-aware representations and improved multimodal perception for robot manipulation.
Mauro Comi · Yijiong Lin · Alex Church · Laurence Aitchison · Nathan F Lepora
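A minimal DeepSDF-style decoder, mapping a shape latent code and a 3D query point to a signed distance, looks roughly as follows; the layer sizes are illustrative, not those of TouchSDF. The reconstructed surface is the zero level set of this function.

```python
import torch
import torch.nn as nn

class TinyDeepSDF(nn.Module):
    """Minimal DeepSDF-style decoder: (shape latent, xyz query) -> signed
    distance. TouchSDF conditions such a decoder on tactile-derived surface
    observations; dimensions here are purely illustrative."""

    def __init__(self, latent_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),          # bounded signed distance
        )

    def forward(self, latent: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([latent, xyz], dim=-1)).squeeze(-1)
```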
Poster | MimicTouch: Learning Human's Control Strategy with Multi-Modal Tactile Feedback
In the evolving landscape of robotics and automation, the application of touch processing is crucial, particularly in learning to execute intricate tasks like insertion. However, existing works focusing on tactile methods for insertion tasks predominantly rely on sensor data and do not utilize the rich insights provided by human tactile feedback. To utilize human sensations, existing learning-from-human methodologies predominantly leverage visual feedback, often overlooking the invaluable tactile insights that humans inherently employ to finish complex manipulations. Addressing this gap, we introduce MimicTouch, a novel framework that mimics a human's tactile-guided control strategy. In this framework, we initially collect multi-modal tactile datasets from human demonstrators, incorporating human tactile-guided control strategies for task completion. The subsequent step involves instructing robots through imitation learning using multi-modal sensor data and retargeted human motions. To further mitigate the embodiment gap between humans and robots, we employ online residual reinforcement learning on the physical robot. Through comprehensive experiments, we validate the safety of MimicTouch in transferring a latent policy learned through imitation learning from human to robot. This ongoing work will pave the way for a broader spectrum of tactile-guided robotic applications.
Kelin Yu · Yunhai Han · Matthew Zhu · Ye Zhao
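The residual reinforcement learning step mentioned above can be summarized as adding a small learned correction on top of the imitation-learned base action; the function names and scale factor below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def residual_action(base_policy, residual_policy, observation, scale: float = 0.1):
    """Compose the imitation-learned base action with a small learned residual
    correction, as in residual reinforcement learning. The residual policy is
    trained online on the physical robot while the base policy stays fixed."""
    base = np.asarray(base_policy(observation))
    correction = np.asarray(residual_policy(observation))
    return base + scale * correction
```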
Poster | Blind Robotic Grasp Stability Estimation Based on Tactile Measurements and Natural Language Prompts
We design and train a composition of neural network modules that predicts robotic grasp success based on tactile sensor measurements and natural language prompts identifying the object. We use a Franka Emika Panda robot arm equipped with two DIGIT sensors for grasping, and language descriptions generated by ChatGPT. Our short-term goal is to utilize this approach to improve the accuracy of a grasp stability estimator. The longer-term goal of this work is to enhance haptically driven robot control with language-based context, i.e. task-relevant information which might not be robustly inferred from vision.
Jan-Malte Giannikos · Oliver Kroemer · David Leins · Alexandra Moringen
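A hypothetical fusion head in the spirit described above: concatenate a tactile-image embedding with a language-prompt embedding and predict the probability of a stable grasp. The module composition and embedding sizes are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GraspSuccessHead(nn.Module):
    """Fuse a tactile-image embedding with a language-prompt embedding and
    output the probability that the grasp is stable (illustrative sketch)."""

    def __init__(self, tactile_dim: int = 256, text_dim: int = 384):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(tactile_dim + text_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, tactile_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        logits = self.mlp(torch.cat([tactile_emb, text_emb], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)      # P(grasp is stable)
```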
Poster | ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion
In this paper, we present ViHOPE, a framework for estimating the 6D pose of an in-hand object using visuotactile perception. In our framework, we employ a conditional Generative Adversarial Network to complete the shape of an in-hand object based on a volumetric representation. This completed shape is then utilized to estimate the 6D pose, demonstrating that our approach outperforms prior methods. We assess the effectiveness of our model by training and testing on a synthetic dataset. In both the visuotactile shape completion task and the visuotactile pose estimation task, our approach outperforms the state-of-the-art by a significant margin. We present our pivotal lesson learned: the value of explicitly completing object shapes. Furthermore, we ablate our framework to confirm gains from explicit shape completion and demonstrate that our framework produces models that are robust to sim-to-real transfer on a real-world robot platform.
Hongyu Li · Snehal Dikhale · Soshi Iba · Nawid Jamali
Poster | Curved Tactile Sensor Simulation with Hydroelastic Contacts in MuJoCo
This work builds upon prior research on the highly realistic simulation of tactile sensors integrated into MuJoCo, using hydroelastic contact surfaces. Our modifications introduce an additional layer of abstraction, allowing generalization to any sensor shape and overcoming the limitation of exclusively simulating flat surfaces. Using a fingertip sensor as an example, we demonstrate how our extension is able to successfully simulate curved sensor surfaces.
Florian Patzelt · David Leins · Robert Haschke