Federated learning, which shares the weights of a neural network across clients, is gaining attention in the healthcare sector because it enables training on a large corpus of decentralized data while preserving data privacy. For example, it allows a neural network for COVID-19 diagnosis to be trained on chest X-ray (CXR) images across multiple hospitals without collecting patient CXR data in one place. Unfortunately, the exchange of weights quickly consumes the network bandwidth when a highly expressive network architecture is employed. So-called split learning partially solves this problem by dividing a neural network into a client part and a server part, so that the client part of the network requires less computation and bandwidth. However, it is not clear how to find the optimal split without sacrificing overall network performance. To amalgamate these methods and thereby maximize their distinct strengths, here we show that the Vision Transformer, a recently developed deep learning architecture with a straightforwardly decomposable configuration, is ideally suited for split learning without sacrificing performance. Even under a non-independent and identically distributed (non-IID) data distribution, which emulates a real collaboration between hospitals using CXR datasets from multiple sources, the proposed framework attained performance comparable to data-centralized training. In addition, with heterogeneous multi-task clients, the proposed framework improves individual task performance, including the diagnosis of COVID-19, while eliminating the need to share large weights with innumerable parameters. Our results affirm the suitability of the Transformer for collaborative learning in medical imaging and pave the way for future real-world implementations.
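To make the split concrete, below is a minimal PyTorch sketch (not the authors' released code) of how a ViT-style model decomposes into a lightweight client head (patch embedding), a heavy server body (Transformer encoder), and a client-side task tail. All module and variable names (ClientHead, ServerBody, ClientTail, smashed) are illustrative assumptions, and the layer sizes are standard ViT-Base defaults rather than values taken from the paper.

```python
# Minimal split-learning sketch, assuming a ViT-Base-like configuration.
import torch
import torch.nn as nn

class ClientHead(nn.Module):
    """Client side: patch embedding + positional encoding (cheap to run)."""
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))

    def forward(self, x):                                    # x: (B, 3, H, W)
        tokens = self.proj(x).flatten(2).transpose(1, 2)     # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos_embed

class ServerBody(nn.Module):
    """Server side: the expensive Transformer encoder shared by clients."""
    def __init__(self, dim=768, depth=12, heads=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):
        return self.encoder(tokens)

class ClientTail(nn.Module):
    """Client side: task-specific classifier on the [CLS] token."""
    def __init__(self, dim=768, num_classes=3):  # e.g. normal / other / COVID-19
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, tokens):
        return self.fc(self.norm(tokens[:, 0]))  # classify the CLS token

# One forward pass of split learning: only compact activation tensors
# (and their gradients on the backward pass) cross the network, instead
# of the full Transformer weights.
head, body, tail = ClientHead(), ServerBody(), ClientTail()
x = torch.randn(2, 3, 224, 224)     # a dummy CXR batch
smashed = head(x)                   # client -> server activations
features = body(smashed)            # runs on the server
logits = tail(features)             # back on the client for the task head
print(logits.shape)                 # torch.Size([2, 3])
```

With this decomposition, per-batch communication scales with the activation size rather than with the tens of millions of body parameters, and the shared server body is the natural place to aggregate knowledge across clients, which is the federated ingredient the abstract refers to.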
Author Information
Sangjoon Park (Korea Advanced Institute of Science and Technology)
Gwanghyun Kim (Korea Advanced Institute of Science and Technology)
Jeongsol Kim (KAIST)
Boah Kim (KAIST)
Jong Chul Ye (BISPL, KAIST)
More from the Same Authors
- 2022 : Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis
  Sangyun Lee · Hyungjin Chung · Jaehyeon Kim · Jong Chul Ye
- 2023 : Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
  Hyeonho Jeong · Jong Chul Ye
- 2023 Workshop: NeurIPS 2023 Workshop on Diffusion Models
  Bahjat Kawar · Valentin De Bortoli · Charlotte Bunne · James Thornton · Jiaming Song · Jong Chul Ye · Chenlin Meng
- 2023 Poster: Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models
  Geon Yeong Park · Jeongsol Kim · Beomsu Kim · Sang Wan Lee · Jong Chul Ye
- 2023 Poster: Direct Diffusion Bridge using Data Consistency for Inverse Problems
  Hyungjin Chung · Jeongsol Kim · Jong Chul Ye
- 2022 Poster: Energy-Based Contrastive Learning of Visual Representations
  Beomsu Kim · Jong Chul Ye
- 2022 Poster: Improving Diffusion Models for Inverse Problems using Manifold Constraints
  Hyungjin Chung · Byeongsu Sim · Dohoon Ryu · Jong Chul Ye
- 2021 Poster: Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Clean Images
  Kwanyoung Kim · Jong Chul Ye
- 2021 Poster: Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention
  Byung-Hoon Kim · Jong Chul Ye · Jae-Jin Kim