

Demonstration

3D Surface-to-Structure Translation with Deep Convolutional Networks

Takumi Moriya · Kazuyuki Saito

Pacific Ballroom Concourse #D4

Abstract:

Our demonstration shows a system that estimates internal body structures from 3D surface models using deep convolutional neural networks trained on CT (computed tomography) images of the human body. To image structures inside the body, we normally need a CT scanner or an MRI (Magnetic Resonance Imaging) scanner. However, assuming that the mutual information between the outer shape of the body and its inner structure is not zero, we can obtain an approximate internal structure from a 3D surface model based on an MRI and CT image database. This suggests that we could learn where and what kind of disease a person is likely to have simply by 3D-scanning the surface of the body. As a first prototype, we developed a system for estimating internal body structures from surface models based on the Visible Human Project DICOM CT datasets from the University of Iowa Magnetic Resonance Research Facility [1]. The estimation process given a surface model is shown in Figure 1. The input surface model is not limited to the human body. For instance, our method enables us to create a Stanford Armadillo model that has the internal structures of a human body.
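The core idea is an image-to-image translation network that maps a representation of the body surface to an estimate of the corresponding internal (CT-like) structure. The following is a minimal sketch of that idea, not the authors' actual implementation: the choice of PyTorch, the use of a rendered depth map as the surface encoding, the 256x256 resolution, and the encoder-decoder layer sizes are all assumptions made for illustration.

import torch
import torch.nn as nn

class SurfaceToCT(nn.Module):
    """Illustrative encoder-decoder CNN: surface depth map -> estimated CT slice."""
    def __init__(self):
        super().__init__()
        # Encoder: compress the 1 x 256 x 256 surface rendering into a latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        # Decoder: upsample back to a 1 x 256 x 256 estimated CT slice.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 128
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 128 -> 256
        )

    def forward(self, depth_map):
        return self.decoder(self.encoder(depth_map))

# One training step on paired (surface rendering, CT slice) data; in this sketch
# the tensors are random placeholders standing in for pairs that could be derived
# from CT volumes such as the Visible Human Project data.
model = SurfaceToCT()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

depth_batch = torch.rand(8, 1, 256, 256)  # placeholder surface renderings
ct_batch = torch.rand(8, 1, 256, 256)     # placeholder ground-truth CT slices

pred = model(depth_batch)
loss = loss_fn(pred, ct_batch)
loss.backward()
optimizer.step()

In practice such a model would be trained on many paired surface/CT examples and applied slice by slice (or with 3D convolutions) to reconstruct a full internal volume from the scanned surface.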
