A Conflict-Guided Evidential Multimodal Fusion for Semantic Segmentation
Abstract
This article presents a novel and robust approach to semantic segmentation based on the fusion of different image modalities (conventional and non-conventional images). The robustness of fusion methods and their ability to tolerate sensor failures are crucial challenges for their deployment in real-world environments. It is essential to develop single fusion models that can operate even when certain modalities are absent at inference time. However, current fusion methods depend strongly on the RGB branch, resulting in significant performance losses when it is unavailable. To address this issue, we propose ECoLaF (Evidential Conflict-guided Late Fusion), a late-fusion method based on Dempster-Shafer theory. This method adaptively discounts the output of each modality according to its conflict with the others before fusing them. Experimental results show that our approach outperforms state-of-the-art methods in terms of robustness on the MCubeS and DeLiVER datasets, especially when the RGB sensor is not operational. This study offers new perspectives for improving the robustness of semantic segmentation in multimodal contexts. Code is available at https://github.com/deregnaucourtlucas/ECoLaF.
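To make the idea of conflict-guided discounting concrete, here is a minimal PyTorch sketch of Dempster-Shafer late fusion for per-pixel segmentation masses. Everything below is an illustrative assumption rather than the authors' exact ECoLaF implementation: the subjective-logic-style mass parameterisation, the discount factor alpha = 1 - conflict, the sequential combination order, and all function names (`masses_from_logits`, `ecolaf_style_fusion`, etc.) are hypothetical.

```python
import torch
import torch.nn.functional as F


def masses_from_logits(logits):
    # Map segmentation logits (B, K, H, W) to mass functions over the K
    # singleton classes plus Omega (total ignorance), using a common
    # subjective-logic parameterisation (an assumption, not the paper's):
    # e = ReLU(logits), m_k = e_k / S, m_Omega = K / S, S = sum(e) + K.
    K = logits.shape[1]
    evidence = F.relu(logits)
    S = evidence.sum(dim=1, keepdim=True) + K
    return evidence / S, K / S


def conflict(m1, o1, m2, o2):
    # Dempster conflict for singletons + Omega:
    # kappa = sum_{i != j} m1_i m2_j = (sum m1)(sum m2) - sum_i m1_i m2_i.
    return m1.sum(1, keepdim=True) * m2.sum(1, keepdim=True) \
        - (m1 * m2).sum(1, keepdim=True)


def discount(m, o, alpha):
    # Shafer discounting: keep a fraction alpha of the class masses and
    # transfer the remainder to Omega (ignorance).
    return alpha * m, 1.0 - alpha * (1.0 - o)


def dempster_combine(m1, o1, m2, o2):
    # Dempster's rule restricted to singletons + Omega: a singleton i gets
    # m1_i m2_i + m1_i o2 + o1 m2_i, Omega gets o1 o2, all renormalised
    # by 1 - kappa.
    kappa = conflict(m1, o1, m2, o2)
    m = m1 * m2 + m1 * o2 + o1 * m2
    o = o1 * o2
    norm = (1.0 - kappa).clamp_min(1e-8)
    return m / norm, o / norm


def ecolaf_style_fusion(logits_per_modality):
    # Conflict-guided late fusion: each modality is discounted, per pixel,
    # by how much it conflicts with the running fused belief, then combined.
    # This is one plausible scheme, not necessarily the paper's exact rule.
    masses = [masses_from_logits(l) for l in logits_per_modality]
    m_fused, o_fused = masses[0]
    for m, o in masses[1:]:
        alpha = 1.0 - conflict(m_fused, o_fused, m, o)  # high conflict -> heavy discount
        m, o = discount(m, o, alpha)
        m_fused, o_fused = dempster_combine(m_fused, o_fused, m, o)
    return m_fused, o_fused


if __name__ == "__main__":
    # Two hypothetical modality heads (e.g. RGB and thermal), 5 classes.
    rgb_logits = torch.randn(1, 5, 8, 8)
    thermal_logits = torch.randn(1, 5, 8, 8)
    m, o = ecolaf_style_fusion([rgb_logits, thermal_logits])
    prediction = m.argmax(dim=1)  # per-pixel class map; o is residual ignorance
```

Because discounting moves mass to Omega rather than redistributing it across classes, a heavily conflicting (e.g. failed) sensor contributes mostly ignorance and the combination is dominated by the remaining modalities, which is the robustness property the abstract claims.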