In this letter, we propose a semantic-guided infrared and visible image fusion method that trains a network to fuse different semantic objects with different fusion weights according to their own characteristics. First, we design appropriate fusion weights for each semantic object rather than for the whole image. Second, we employ semantic segmentation to obtain the region of each object and generate dedicated weight maps for the infrared and visible images from the pre-designed fusion weights. Third, we feed the weight maps into the loss function to guide the fusion process. The trained fusion network generates fused images with better visual quality and a more comprehensive scene representation. Moreover, the modal features of the various semantic objects are enhanced, which benefits subsequent tasks and applications. Experimental results demonstrate that our method outperforms state-of-the-art methods in terms of both visual quality and quantitative metrics.
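The abstract only outlines the pipeline, so the following minimal sketch illustrates one plausible reading of it in PyTorch: a segmentation map of integer class ids is converted into per-pixel weight maps for the infrared and visible images, and those maps modulate the per-pixel terms of the fusion loss. The class list, the weight values, and the names CLASS_WEIGHTS, build_weight_maps, and semantic_guided_loss are illustrative assumptions, not the authors' actual settings or implementation.

# Sketch of a semantic-guided fusion loss, assuming PyTorch tensors and
# illustrative per-class weights; this is not the authors' implementation.
import torch
import torch.nn.functional as F

# Hypothetical per-class fusion weights: (infrared weight, visible weight).
# Thermal targets such as pedestrians lean on the infrared image, while
# texture-rich regions lean on the visible image.
CLASS_WEIGHTS = {
    0: (0.5, 0.5),   # background
    1: (0.8, 0.2),   # person (strong thermal signature)
    2: (0.3, 0.7),   # vegetation (rich visible texture)
}

def build_weight_maps(seg_map):
    """Turn a segmentation map (B, H, W) of class ids into per-pixel
    weight maps w_ir and w_vis, each of shape (B, 1, H, W)."""
    w_ir = torch.zeros_like(seg_map, dtype=torch.float32)
    w_vis = torch.zeros_like(seg_map, dtype=torch.float32)
    for cls, (a_ir, a_vis) in CLASS_WEIGHTS.items():
        mask = (seg_map == cls)
        w_ir[mask] = a_ir
        w_vis[mask] = a_vis
    return w_ir.unsqueeze(1), w_vis.unsqueeze(1)

def semantic_guided_loss(fused, ir, vis, w_ir, w_vis, lam=10.0):
    """Weight-map-guided loss: intensity terms modulated per pixel by the
    semantic weight maps, plus a plain gradient term for texture."""
    intensity = (w_ir * (fused - ir) ** 2 + w_vis * (fused - vis) ** 2).mean()
    # Simple finite-difference gradients to preserve visible-image texture.
    def grad(x):
        gx = x[..., :, 1:] - x[..., :, :-1]
        gy = x[..., 1:, :] - x[..., :-1, :]
        return gx, gy
    fgx, fgy = grad(fused)
    vgx, vgy = grad(vis)
    texture = F.l1_loss(fgx, vgx) + F.l1_loss(fgy, vgy)
    return intensity + lam * texture

In this reading, segmentation is only needed at training time to build the weight maps that shape the loss; once trained, the fusion network itself takes just the infrared and visible images as input.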
Wei WU
Hubei Key Laboratory of Intelligent Robot
Dazhi ZHANG
Research Institute of Nuclear Power Operation
Jilei HOU
Hubei Key Laboratory of Intelligent Robot
Yu WANG
Hubei Key Laboratory of Intelligent Robot
Tao LU
Hubei Key Laboratory of Intelligent Robot
Huabing ZHOU
Hubei Key Laboratory of Intelligent Robot
Wei WU, Dazhi ZHANG, Jilei HOU, Yu WANG, Tao LU, Huabing ZHOU, "Semantic Guided Infrared and Visible Image Fusion" in IEICE TRANSACTIONS on Fundamentals,
vol. E104-A, no. 12, pp. 1733-1738, December 2021, doi: 10.1587/transfun.2021EAL2020.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021EAL2020/_p
@ARTICLE{e104-a_12_1733,
author={Wei WU and Dazhi ZHANG and Jilei HOU and Yu WANG and Tao LU and Huabing ZHOU},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Semantic Guided Infrared and Visible Image Fusion},
year={2021},
volume={E104-A},
number={12},
pages={1733-1738},
keywords={},
doi={10.1587/transfun.2021EAL2020},
ISSN={1745-1337},
month={December},}
TY - JOUR
TI - Semantic Guided Infrared and Visible Image Fusion
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1733
EP - 1738
AU - Wei WU
AU - Dazhi ZHANG
AU - Jilei HOU
AU - Yu WANG
AU - Tao LU
AU - Huabing ZHOU
PY - 2021
DO - 10.1587/transfun.2021EAL2020
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E104-A
IS - 12
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - December 2021
ER -