The search functionality is under construction.

Keyword Search Result

[Keyword] visible image (2 hits)

Hits 1-2 of 2
  • Infrared and Visible Image Fusion via Hybrid Variational Model Open Access

    Zhengwei XIA  Yun LIU  Xiaoyun WANG  Feiyun ZHANG  Rui CHEN  Weiwei JIANG  

     
    LETTER-Image Processing and Video Processing

  Publicized:
    2023/12/11
      Vol:
    E107-D No:4
      Page(s):
    569-573

Infrared and visible image fusion combines thermal radiation information with textures to produce a high-quality fused image. In this letter, we propose a hybrid variational fusion model to achieve this end. Specifically, an ℓ0 term is adopted to preserve the highlighted targets with salient gradient variation in the infrared image, an ℓ1 term is used to suppress noise in the fused image, and an ℓ2 term is employed to keep the textures of the visible image. Experimental results demonstrate the superiority of the proposed variational model: our results exhibit sharper textures with less noise.
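    The abstract names three penalty terms but not their exact arguments or weights, so the following is only a plausible sketch of such a hybrid energy: the ℓ0 term counts pixels where the fused gradient departs from the infrared gradient, the ℓ1 term penalizes deviation from the mean of the two inputs (a stand-in noise term), and the ℓ2 term matches the visible image's gradients. The term arguments, the threshold `eps`, and the weights `lam0`/`lam1`/`lam2` are all assumptions, not the authors' formulation.

    ```python
    import numpy as np

    def grad(img):
        # Forward differences; append the last row/column so shapes match.
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def hybrid_energy(fused, ir, vis, lam0=1.0, lam1=0.1, lam2=1.0, eps=1e-3):
        """Hypothetical hybrid variational energy (a sketch, not the paper's model):
        l0 on the fused-vs-infrared gradient difference (keeps salient IR targets),
        l1 on deviation from the input mean (noise suppression stand-in),
        l2 on the fused-vs-visible gradient difference (keeps visible textures)."""
        fx, fy = grad(fused)
        ix, iy = grad(ir)
        vx, vy = grad(vis)
        l0_term = np.count_nonzero(np.abs(fx - ix) + np.abs(fy - iy) > eps)
        l1_term = np.abs(fused - 0.5 * (ir + vis)).sum()
        l2_term = ((fx - vx) ** 2 + (fy - vy) ** 2).sum()
        return lam0 * l0_term + lam1 * l1_term + lam2 * l2_term
    ```

    In an actual variational method this energy would be minimized over `fused` (e.g. by half-quadratic splitting, a common choice for ℓ0 terms); here it only illustrates how the three penalties divide the work between the two modalities.
    
    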

  • Semantic Guided Infrared and Visible Image Fusion

    Wei WU  Dazhi ZHANG  Jilei HOU  Yu WANG  Tao LU  Huabing ZHOU  

     
    LETTER-Image

  Publicized:
    2021/06/10
      Vol:
    E104-A No:12
      Page(s):
    1733-1738

In this letter, we propose a semantic guided infrared and visible image fusion method, which trains a network to fuse different semantic objects with different fusion weights according to their own characteristics. First, we design appropriate fusion weights for each semantic object instead of the whole image. Second, we employ semantic segmentation to obtain the region of each object and generate dedicated weight maps for the infrared and visible images from the pre-designed fusion weights. Third, we feed the weight maps into the loss function to guide the image fusion process. The trained fusion network generates fused images with better visual effect and a more comprehensive scene representation. Moreover, the modal features of the various semantic objects are enhanced, benefiting subsequent tasks and applications. Experimental results demonstrate that our method outperforms the state-of-the-art in terms of both visual effect and quantitative metrics.
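    The three steps above (per-class weights, weight-map generation from a segmentation, and a weighted loss) can be sketched as follows. The per-class weight dictionaries, the squared-error form of the loss, and the function names are illustrative assumptions; the paper's actual loss and weight design are not specified in the abstract.

    ```python
    import numpy as np

    def semantic_weight_maps(seg, class_weights_ir, class_weights_vis):
        """Expand per-class fusion weights into per-pixel weight maps.
        seg: integer label map from a semantic segmentation network.
        class_weights_*: dicts mapping class id -> fusion weight (assumed design)."""
        w_ir = np.vectorize(class_weights_ir.get)(seg).astype(float)
        w_vis = np.vectorize(class_weights_vis.get)(seg).astype(float)
        return w_ir, w_vis

    def guided_fusion_loss(fused, ir, vis, w_ir, w_vis):
        """Weighted pixel loss: each semantic region pulls the fused image toward
        the modality its class favours (a sketch, not the paper's exact loss)."""
        return (w_ir * (fused - ir) ** 2 + w_vis * (fused - vis) ** 2).mean()
    ```

    During training the weight maps act as spatially varying loss coefficients, so e.g. pedestrian regions can favour the infrared input while background regions favour the visible one, without changing the network architecture.
    
    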