
Semantic Guided Infrared and Visible Image Fusion

Wei WU, Dazhi ZHANG, Jilei HOU, Yu WANG, Tao LU, Huabing ZHOU

Summary

In this letter, we propose a semantic-guided infrared and visible image fusion method, which trains a network to fuse different semantic objects with different fusion weights according to their own characteristics. First, we design appropriate fusion weights for each semantic object instead of for the whole image. Second, we employ semantic segmentation technology to obtain the semantic region of each object and generate dedicated weight maps for the infrared and visible images via the pre-designed fusion weights. Third, we feed the weight maps into the loss function to guide the image fusion process. The trained fusion network can generate fused images with better visual effect and a more comprehensive scene representation. Moreover, we can enhance the modal features of various semantic objects, benefiting subsequent tasks and applications. Experimental results demonstrate that our method outperforms the state of the art in terms of both visual effect and quantitative metrics.
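The core idea, per the summary, is that each semantic region contributes its own infrared/visible weighting to the training loss. The sketch below illustrates this in PyTorch under stated assumptions: the class IDs, the weight values in CLASS_WEIGHTS, the helper names build_weight_maps and semantic_fusion_loss, and the L1 intensity form of the loss are all illustrative, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' code) of semantic-guided fusion weighting.
# Class IDs, weight values, and the L1 loss form are illustrative assumptions.
import torch
import torch.nn.functional as F

# Hypothetical per-class fusion weights: class_id -> (infrared weight, visible weight),
# e.g. pedestrians lean on infrared, background leans on visible.
CLASS_WEIGHTS = {
    0: (0.3, 0.7),   # background
    1: (0.8, 0.2),   # person
    2: (0.5, 0.5),   # vehicle
}

def build_weight_maps(seg_map: torch.Tensor):
    """Turn an integer segmentation map (B, 1, H, W) into per-pixel
    infrared/visible weight maps of the same spatial size."""
    w_ir = torch.zeros_like(seg_map, dtype=torch.float32)
    w_vis = torch.zeros_like(seg_map, dtype=torch.float32)
    for cls, (wi, wv) in CLASS_WEIGHTS.items():
        mask = (seg_map == cls)
        w_ir[mask] = wi
        w_vis[mask] = wv
    return w_ir, w_vis

def semantic_fusion_loss(fused, ir, vis, seg_map):
    """Weighted intensity loss: each semantic region pulls the fused image
    toward infrared or visible content according to its own weights."""
    w_ir, w_vis = build_weight_maps(seg_map)
    return (w_ir * F.l1_loss(fused, ir, reduction="none")
            + w_vis * F.l1_loss(fused, vis, reduction="none")).mean()

if __name__ == "__main__":
    b, h, w = 2, 64, 64
    ir = torch.rand(b, 1, h, w)
    vis = torch.rand(b, 1, h, w)
    fused = (ir + vis) / 2                   # stand-in for the network output
    seg = torch.randint(0, 3, (b, 1, h, w))  # stand-in segmentation labels
    print(semantic_fusion_loss(fused, ir, vis, seg).item())
```

During training, the loss would be evaluated on the fusion network's output so that gradients push each semantic region toward the modality its weights favor; the actual loss terms and weight assignments used in the letter may differ.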

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E104-A No.12 pp.1733-1738
Publication Date
2021/12/01
Publicized
2021/06/10
Online ISSN
1745-1337
DOI
10.1587/transfun.2021EAL2020
Type of Manuscript
LETTER
Category
Image

Authors

Wei WU
  Hubei Key Laboratory of Intelligent Robot
Dazhi ZHANG
Research Institute of Nuclear Power Operation
Jilei HOU
  Hubei Key Laboratory of Intelligent Robot
Yu WANG
  Hubei Key Laboratory of Intelligent Robot
Tao LU
  Hubei Key Laboratory of Intelligent Robot
Huabing ZHOU
  Hubei Key Laboratory of Intelligent Robot
