Oral Poster
E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection
Jiaqing Zhang · Mingxiang Cao · Xue Yang · Jie Lei · Weiying Xie · Daixun Li · Wenbo Huang · Yunsong Li
East Exhibit Hall A-C #4501
Oral presentation: Oral Session 4D: Machine Vision
Thu 12 Dec 3:30 p.m. PST — 4:30 p.m. PST
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST
Abstract:
Multimodal image fusion and object detection are crucial for autonomous driving. While current methods have advanced the fusion of texture details and semantic information, their complex training processes hinder broader application. To address this challenge, we introduce E2E-MFD, a novel end-to-end algorithm for multimodal fusion detection. E2E-MFD streamlines the pipeline, achieving high performance with a single training phase. It employs synchronous joint optimization across components to avoid suboptimal solutions tied to individual tasks. Furthermore, it applies a comprehensive optimization strategy to the gradient matrix of the shared parameters, ensuring convergence to an optimal fusion detection configuration. Extensive testing on multiple public datasets demonstrates E2E-MFD's superior capabilities, showcasing not only visually appealing image fusion but also strong detection results, including $\text{mAP}_{50}$ gains of 3.9\% and 2.0\% over state-of-the-art approaches on the horizontal object detection dataset M3FD and the oriented object detection dataset DroneVehicle, respectively.
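The abstract describes reconciling the fusion and detection objectives at the level of the shared parameters' gradients, but does not spell out the exact procedure. As a minimal illustrative sketch only, assuming a PCGrad-style conflict projection (the function name and the two example gradients below are hypothetical, not taken from the paper), synchronous joint optimization over shared parameters might resolve conflicting task gradients like this:

```python
import numpy as np

def align_gradients(g_fusion, g_det):
    """Combine two task gradients on shared parameters.

    If the gradients conflict (negative dot product), project each onto
    the normal plane of the other before summing, so the joint update
    does not favor one task at the other's expense.
    """
    g_f, g_d = g_fusion.copy(), g_det.copy()
    dot = np.dot(g_fusion, g_det)
    if dot < 0:  # conflicting directions: remove the opposing component
        g_f = g_fusion - dot / np.dot(g_det, g_det) * g_det
        g_d = g_det - dot / np.dot(g_fusion, g_fusion) * g_fusion
    return g_f + g_d  # combined update for the shared parameters

# Hypothetical conflicting gradients from the fusion and detection losses
g_fusion = np.array([1.0, 0.5])
g_det = np.array([-1.0, 0.2])
update = align_gradients(g_fusion, g_det)
```

After projection, neither projected gradient opposes the other original task's direction, which is the intuition behind optimizing the gradient matrix jointly rather than training the two tasks in separate phases.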