International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines



TITLE Modal Feature Disentanglement and Contribution Estimation for Multimodality Image Fusion (MFDCE-Fuse)
ABSTRACT Multimodality image fusion (MMIF) aims to combine complementary information from different modalities, such as salient objects and texture details, to improve image quality and information comprehensiveness. Most current MMIF methods adopt a "black-box" decoder to generate fused images, which limits interpretability and complicates training. To address these problems, MMIF is reformulated as a modality contribution estimation task through a novel self-supervised fusion network named MFDCE-Fuse.
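To make the contribution-estimation idea concrete, the sketch below fuses two aligned single-channel images by estimating per-pixel contribution weights and taking their weighted sum. This is an illustrative assumption, not the MFDCE-Fuse architecture: the paper learns contributions with a self-supervised network, whereas here a simple hand-crafted activity measure (gradient magnitude) stands in for the learned features, and the function names (`estimate_contributions`, `fuse`) are hypothetical.

```python
import numpy as np

def gradient_magnitude(img):
    """Hand-crafted activity measure standing in for learned modal features."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def estimate_contributions(act_a, act_b, temperature=1.0):
    """Softmax over per-pixel activity scores -> contribution weights.

    Returns an array of shape (2, H, W) whose two maps sum to 1 at every pixel,
    mirroring the idea that each modality contributes a normalized share.
    """
    scores = np.stack([act_a, act_b], axis=0) / temperature
    scores -= scores.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=0, keepdims=True)

def fuse(img_a, img_b):
    """Contribution-weighted fusion of two aligned single-channel images."""
    w = estimate_contributions(gradient_magnitude(img_a),
                               gradient_magnitude(img_b))
    return w[0] * img_a + w[1] * img_b
```

Because the weights are an explicit convex combination at every pixel, the fused value always lies between the two input values there, which is the kind of interpretability a black-box decoder lacks.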
AUTHOR D. SRIDHAR, T. SIRISHA, K. HEMA, G. MOKSHAGNA, M. PRUDHVI NARAYANA
U.G. Student, Department of ECE, SVIET Engineering College, Nandamuru, Pedana, Andhra Pradesh, India
Associate Professor, Department of ECE, SVIET Engineering College, Nandamuru, Pedana, Andhra Pradesh, India
VOLUME 182
DOI 10.15680/IJIRCCE.2026.1403129
PDF pdf/129_ Modal Feature Disentanglement and Contribution Estimation for Multimodality Image Fusion (MFDCE-Fuse).pdf
KEYWORDS
References [1] Zhao, Z., Bai, H., Zhang, J., et al. (2023). CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5906-5916.
[2] Xu, H., Ma, J., Jiang, J., et al. (2020). U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 502-518.
[3] Liu, J., Lin, R., Wu, G., et al. (2023). Coconet: Coupled contrastive learning network with multilevel feature ensemble for multimodality image fusion. International Journal of Computer Vision, 132(5), 1748-1775.
[4] Li, H., Wu, X. J., & Kittler, J. (2021). RFN-nest: An end-to-end residual fusion network for infrared and visible images. Information Fusion, 73, 72-86.
[5] Zhao, Z., Xu, S., Zhang, C., et al. (2020). DIDFuse: Deep image decomposition for infrared and visible image fusion. arXiv preprint arXiv:2003.09210.
[6] Ma, J., Yu, W., Liang, P., et al. (2019). FusionGAN: A generative adversarial network for infrared and visible image fusion. Information Fusion, 48, 11-26.
[7] Li, J., Huo, H., Li, C., et al. (2020). AttentionFGAN: Infrared and visible image fusion using attention-based generative adversarial networks. IEEE Transactions on Multimedia, 23, 1383-1396.
[8] Zhang, X., & Demiris, Y. (2023). Visible and infrared image fusion using deep learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-20.
[9] Zhao, Z., Bai, H., Zhu, Y., et al. (2023). Equivariant Multi-Modality Image Fusion. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Liu, Y., Chen, X., Ward, R. K., et al. (2016). Image fusion with convolutional sparse representation. IEEE Signal Processing Letters, 23(12), 1882-1886.
Copyright © IJIRCCE 2020. All rights reserved.