Infrared and Visible Image Fusion via Hybrid Variational Model

Zhengwei XIA, Yun LIU, Xiaoyun WANG, Feiyun ZHANG, Rui CHEN, Weiwei JIANG

Summary:

Infrared and visible image fusion can combine thermal radiation information and textures to provide a high-quality fused image. In this letter, we propose a hybrid variational fusion model to achieve this goal. Specifically, an \(\ell_0\) term is adopted to preserve the highlighted targets with salient gradient variation in the infrared image, an \(\ell_1\) term is used to suppress the noise in the fused image, and an \(\ell_2\) term is employed to keep the textures of the visible image. Experimental results demonstrate the superiority of the proposed variational model: our results have sharper textures with less noise.

Publication
IEICE TRANSACTIONS on Information Vol.E107-D No.4 pp.569-573
Publication Date
2024/04/01
Publicized
2023/12/11
Online ISSN
1745-1361
DOI
10.1587/transinf.2023EDL8027
Type of Manuscript
LETTER
Category
Image Processing and Video Processing

1.  Introduction

Infrared images reflect the thermal radiation emitted by salient targets, but they usually lack textures and are susceptible to noise. Visible images, on the other hand, mainly record spectral information and contain rich textures of various objects. However, hidden targets are not easily captured in visible images because of obstacles and extreme environments. For these reasons, an effective fusion algorithm is needed to generate a complementary fused image with abundant textures from the visible image and salient target regions from the infrared image.

To obtain a high-quality fused image, numerous infrared and visible image fusion algorithms have been presented. These existing approaches can be roughly divided into three categories: multi-scale transform based methods [1]-[3], learning-based methods [4]-[6] and variational methods [7]-[9]. The first type of fusion algorithm usually adopts a multi-scale transformation strategy to decompose the input images into multiple layers at different scales and transfers the most effective information into a single fused image according to designed rules. However, a single classic transform may not suit both source images because the important information in the infrared and visible images is characterized by different representations. With the help of deep learning, numerous fusion networks such as FusionGAN [4] and RFN-Nest [5] have been presented and provide good fusion performance. Unfortunately, they depend essentially on the volume and diversity of the training dataset, and the training process requires extensive hardware resources. To overcome this dataset dependency, variational algorithms formulate the fusion problem as the minimization of a variational energy. Although previous regularization constraints can effectively combine the meaningful features of different source images into a fused image, they fail to consider noise interference, and hidden noise may be amplified in the fused image.

As we know, images captured by infrared and visible sensors inevitably contain noise. Most existing fusion algorithms ignore the effect of noise interference, and the fused results generally suffer from noise amplification. Therefore, our motivation is to fuse the salient targets from the infrared image and the abundant textures from the visible image while reducing the noise in the fused result. To achieve this goal, we propose a hybrid variational model consisting of a data fidelity term and three regularization terms to yield a high-quality fused image. The devised \(\ell_0\) and \(\ell_2\) norms respectively preserve the salient targets in the infrared image and the rich textures in the visible image, and an \(\ell_1\) norm is adopted to suppress noise in the fused image. Experimental results demonstrate that the proposed algorithm can provide a fused image with rich textures and thermal radiation information while avoiding noise interference.

2.  Proposed Variational Fusion Model

Given a registered pair consisting of a visible image \(v\) and an infrared image \(r\), a hybrid variational model is proposed to formulate the fusion problem as the minimization of the following energy function to obtain a fused image \(f\):

\[\begin{align} E\left( f \right) & = \lambda_1 \left\| f - v \right\|_2^2 + \lambda_2 \left\| \nabla f - \nabla r \right\|_0 \nonumber \\ & \hphantom{={}} + \lambda_3 \left\| \nabla f \right\|_1 + \lambda_4 \left\| \nabla f - G \right\|_2^2 \tag{1} \end{align}\]

where \(\lambda_1\), \(\lambda_2\), \(\lambda_3\) and \(\lambda_4\) are positive parameters that balance the different terms. \(\left\| \cdot \right\|_0\), \(\left\| \cdot \right\|_1\) and \(\left\| \cdot \right\|_2\) denote the \(\ell_0\), \(\ell_1\) and \(\ell_2\) norms, respectively. \(\nabla\) denotes the first-order gradient operator. \(G\) is the modified gradient of the visible image \(v\).

  • \(\left\| f - v \right\|_2^2\) constrains the fidelity between the fused image and the visible image, which can preserve the abundant details in the visible image.
  • \(\left\| \nabla f - \nabla r \right\|_0\) is used to preserve the salient structural gradients of the infrared image.
  • \(\left\| \nabla f \right\|_1\) corresponds to the TV constraint for noise suppression in the fused image.
  • \(\left\| \nabla f - G \right\|_2^2\) minimizes the distance between the gradient of the fused image and \(G\), which plays a role of strengthening the textures in the visible image. Similar to [10], the modified gradient \(G\) can be expressed as:
    \[\begin{equation*} G = \left( 1 + \lambda e^{-\left| \nabla v \right|/\sigma} \right) \circ \nabla v \tag{2} \end{equation*}\]
    where \(\lambda\) and \(\sigma\) control the degree of amplification. Generally, \(\lambda\) and \(\sigma\) are set as 0.5 and 5, respectively; a minimal numerical sketch of this computation follows the list.
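For concreteness, here is a minimal NumPy sketch of the gradient operator and the modified gradient of Eq. (2). The forward-difference discretization with periodic boundaries and the element-wise reading of \(\left| \nabla v \right|\) are our assumptions (the letter does not specify the discretization), and grad / modified_gradient are illustrative names:

```python
import numpy as np

def grad(u):
    """First-order forward differences with periodic boundaries.
    Returns a (2, H, W) array holding the x- and y-gradients."""
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    return np.stack([gx, gy])

def modified_gradient(v, lam=0.5, sigma=5.0):
    """Eq. (2): G = (1 + lam * exp(-|grad v| / sigma)) o grad v.
    |grad v| is applied element-wise here, one plausible reading."""
    g = grad(v)
    return (1.0 + lam * np.exp(-np.abs(g) / sigma)) * g
```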

As observed in Fig. 1, the final fused image \(f\) can be obtained by optimizing Eq. (1) iteratively. Our model can highlight the salient objects in the infrared image and capture the abundant textures while suppressing the noise.

Fig. 1  The scheme of the proposed variational fusion method.

2.1  Numerical Solver

Due to the \(\ell_0\) and \(\ell_1\) norm regularization terms, the hybrid variational model (1) is difficult to solve directly. Therefore, the alternating direction method of multipliers (ADMM) [11] is adopted to solve the fusion model (1). First, we introduce two auxiliary variables \(Z_1\) and \(Z_2\), and the objective function (1) can be rewritten as:

\[\begin{align} & E\left( {f,{Z_1},{Z_2}} \right) = \lambda_1 \left\| f - v \right\|_2^2 + \lambda_2 \left\| Z_1 \right\|_0 \nonumber \\ & \hphantom{E\left( {f,{Z_1},{Z_2}} \right) =} + \lambda_3 \left\| Z_2 \right\|_1 + \lambda_4 \left\| \nabla f - G \right\|_2^2 \tag{3} \\ & \textit{s.t.}\ Z_1 = \nabla f - \nabla r \ \textit{and}\ Z_2 = \nabla f \nonumber \end{align}\]

To remove the equality constraints, two Lagrange multipliers \(T_1\) and \(T_2\) are introduced into (3), and we obtain the following augmented Lagrangian function of (3):

\[\begin{align} & E\left( f, Z_1, Z_2, T_1, T_2 \right) = \lambda_1\left\| f - v \right\|_2^2 + \lambda_2\left\| Z_1 \right\|_0 \nonumber \\ & \qquad{} + \lambda_3 \left\| Z_2 \right\|_1 + \lambda_4 \left\| \nabla f - G \right\|_2^2 \nonumber \\ & \qquad{} + \Phi \left( T_1, Z_1 - \nabla f + \nabla r \right) + \Phi \left( T_2, Z_2 - \nabla f \right) \tag{4} \end{align}\]

where \(\Phi \left( P, Q \right) = \left\langle P, Q \right\rangle + \frac{\rho}{2}\left\| Q \right\|_2^2\). The function (4) can be optimized by minimizing the sub-problems with respect to \(f\), \(Z_1\) and \(Z_2\) while regarding the other variables as constants. Next, we provide the solutions of the three sub-problems at the \(k\)-th iteration.

\(f\) sub-problem: Ignoring the terms irrelevant to \(f\), the optimization function becomes:

\[\begin{align} & \operatorname*{arg\,min}_f \lambda_1 \left\| f - v \right\|_2^2 + \lambda_4 \left\| \nabla f - G \right\|_2^2 \nonumber \\ & \hphantom{\operatorname*{arg\,min}_f} + \Phi \left( T_1^{\left( k \right)}, Z_1^{\left( k \right)} - \nabla f + \nabla r \right) \nonumber \\ & \hphantom{\operatorname*{arg\,min}_f} + \Phi \left( T_2^{\left( k \right)}, Z_2^{\left( k \right)} - \nabla f \right) \tag{5} \end{align}\]

The above function (5) is a classic quadratic problem, which can be solved by setting its derivative to zero. To make the computation efficient, we employ the fast Fourier transform (FFT) to obtain the solution of function (5):

\[\begin{align} & f^{\left( k + 1 \right)} = F^{-1}\left( \frac{2\lambda_1 F\left( v \right) + 2\lambda_4 F\left( D^TG \right) + \rho^{\left( k \right)} H^{\left( k \right)}} {2\lambda_1 + \left( 2\lambda_4 + 2\rho^{\left( k \right)} \right)F\left( {{D^T}D} \right)} \right) \nonumber\\ \tag{6} \end{align}\]

where \(H^{\left( k \right)} = F\left( D^T\left( Z_1^{\left( k \right)} + Dr + \frac{T_1^{\left( k \right)}}{\rho^{\left( k \right)}} + Z_2^{\left( k \right)} + \frac{T_2^{\left( k \right)}}{\rho^{\left( k \right)}} \right) \right)\). \(F\) and \(F^{-1}\) are 2D FFT and inverse FFT operators. \(D\) denotes the discrete gradient operator.
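As a reference, the following NumPy sketch realizes the FFT solve of Eq. (6). It assumes forward differences with periodic boundary conditions, under which \(D^TD\) is diagonalized by the 2D FFT; all function and variable names are illustrative:

```python
import numpy as np

def fft_solve_f(v, G, Z1, Z2, T1, T2, r_grad, lam1, lam4, rho):
    """f sub-problem, Eq. (6): exact FFT solve of the quadratic in f.
    Gradient fields are (2, H, W) arrays as produced by grad() above."""
    H, W = v.shape

    def div(p):
        # D^T applied to a gradient field p = (px, py)
        px, py = p
        return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

    # Eigenvalues of D^T D for periodic forward differences: F(D^T D)
    wx = 2.0 * np.pi * np.arange(W) / W
    wy = 2.0 * np.pi * np.arange(H) / H
    DtD = (2.0 - 2.0 * np.cos(wx))[None, :] + (2.0 - 2.0 * np.cos(wy))[:, None]

    rhs = Z1 + r_grad + T1 / rho + Z2 + T2 / rho   # argument of H^(k)
    num = (2.0 * lam1 * np.fft.fft2(v)
           + 2.0 * lam4 * np.fft.fft2(div(G))
           + rho * np.fft.fft2(div(rhs)))
    den = 2.0 * lam1 + (2.0 * lam4 + 2.0 * rho) * DtD
    return np.real(np.fft.ifft2(num / den))
```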

\(Z_1\) sub-problem: Dropping the terms unrelated to \(Z_1\) leads to the following optimization problem:

\[\begin{equation*} \operatorname*{arg\,min}_{\mathrm{Z}_1} \lambda_2 \left\| {{Z_1}} \right\|_0 + \Phi \left( T_1^{\left( k \right)}, Z_1 - \nabla f^{\left( k + 1 \right)} + \nabla r \right) \tag{7} \end{equation*}\]

According to the analysis of \(\ell_0\) gradient sparsity term in [12], [13], the solution of function (7) can be obtained in an element-wise manner:

\[\begin{equation*} Z_1^{\left( {k + 1} \right)} = \left\{ \begin{array}{l@{}c@{}} 0, & \textit{if}\ \left( {{M^{\left( k \right)}}} \right)^2 \leq \frac{2\lambda_2}{\rho^{\left( k \right)}} \\ M^{\left( k \right)}, & \textit{otherwise} \end{array} \right. \tag{8} \end{equation*}\]

where \(M^{\left( k \right)} = \nabla f^{\left( k + 1 \right)} - \nabla r - \frac{T_1^{\left( k \right)}}{\rho^{\left( k \right)}}\).
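A minimal sketch of this hard-thresholding step, operating element-wise on the stacked gradient fields (names are illustrative):

```python
import numpy as np

def update_Z1(f_grad, r_grad, T1, lam2, rho):
    """Z1 sub-problem, Eq. (8): element-wise hard thresholding induced
    by the l0 term; small gradient differences are set to zero."""
    M = f_grad - r_grad - T1 / rho
    return np.where(M ** 2 <= 2.0 * lam2 / rho, 0.0, M)
```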

\(Z_2\) sub-problem: Collecting the terms involving \(Z_2\), the optimization function (4) becomes:

\[\begin{equation*} \operatorname*{arg\,min}_{\mathrm{Z}_2} \lambda_3 \left\| {{Z_2}} \right\|_1 + \Phi \left( T_2^{\left( k \right)}, Z_2 - \nabla f^{\left( k + 1 \right)} \right) \tag{9} \end{equation*}\]

The closed-form solution of function (9) is obtained by performing the soft shrinkage operation:

\[\begin{equation*} Z_2^{\left( k + 1 \right)} = \operatorname{sign}\left( t \right) \max \left( \left| t \right| - \frac{\lambda_3}{\rho^{\left( k \right)}}, 0 \right) \tag{10} \end{equation*}\]

where \(t = \nabla {f^{\left( k + 1 \right)}} - \frac{T_2^{\left( k \right)}}{\rho^{\left( k \right)}}\).
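The corresponding soft-shrinkage step can be sketched as follows (again element-wise, with illustrative names):

```python
import numpy as np

def update_Z2(f_grad, T2, lam3, rho):
    """Z2 sub-problem, Eq. (10): element-wise soft shrinkage from the l1 term."""
    t = f_grad - T2 / rho
    return np.sign(t) * np.maximum(np.abs(t) - lam3 / rho, 0.0)
```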

Finally, we update the multipliers (\(T_1\), \(T_2\)) and the scalar \(\rho\) through the following equations:

\[\begin{equation*} \left\{ \begin{array}{l@{}} T_1^{\left( k + 1 \right)} = T_1^{\left( k \right)} + \rho^{\left( k \right)} \left( Z_1^{\left( k + 1 \right)} - \nabla f^{\left( k + 1 \right)} + \nabla r \right) \\ {} T_2^{\left( k + 1 \right)} = T_2^{\left( k \right)} + \rho^{\left( k \right)} \left( Z_2^{\left( k + 1 \right)} - \nabla f^{\left( k + 1 \right)} \right) \\ {} \rho^{\left( k + 1 \right)} = \rho^{\left( k \right)}\mu,\ \mu > 1 \end{array} \right. \tag{11} \end{equation*}\]

The entire optimization procedure will be stopped when the maximum iteration (\(K=10\)) is reached.
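Putting the pieces together, the following sketch reuses the helper functions above to run the whole ADMM iteration. The initial value of \(\rho\) and the growth factor \(\mu = 1.5\) are our assumptions; the letter specifies only \(\mu > 1\) and \(K = 10\):

```python
import numpy as np

def fuse(v, r, lam1=1e-3, lam2=5.0, lam3=1e-3, lam4=5.0,
         rho=1.0, mu=1.5, K=10):
    """ADMM loop for the hybrid model, Eqs. (6), (8), (10) and (11).
    rho and mu are illustrative values; the letter states only mu > 1."""
    v = v.astype(float)
    r = r.astype(float)
    G = modified_gradient(v)            # Eq. (2)
    r_grad = grad(r)
    Z1 = np.zeros_like(r_grad)
    Z2 = np.zeros_like(r_grad)
    T1 = np.zeros_like(r_grad)
    T2 = np.zeros_like(r_grad)
    for _ in range(K):
        f = fft_solve_f(v, G, Z1, Z2, T1, T2, r_grad, lam1, lam4, rho)  # Eq. (6)
        f_grad = grad(f)
        Z1 = update_Z1(f_grad, r_grad, T1, lam2, rho)   # Eq. (8)
        Z2 = update_Z2(f_grad, T2, lam3, rho)           # Eq. (10)
        # Eq. (11): dual ascent on the multipliers, then grow rho
        T1 = T1 + rho * (Z1 - f_grad + r_grad)
        T2 = T2 + rho * (Z2 - f_grad)
        rho = rho * mu
    return f
```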

3.  Experimental Results and Analysis

In this section, experiments are conducted to demonstrate the superiority of the proposed algorithm. We compare our method with seven well-known fusion algorithms, including the ratio of low-pass pyramid (RP) [1], dual-tree complex wavelet transform (DTCWT) [14], multi-resolution singular value decomposition (MSVD) [15], cross bilateral filter (CBF) [2], gradient transfer fusion (GTF) [7], FusionGAN [4] and RFN-Nest [5]. The pairs of aligned infrared and visible images with different scenarios are mainly collected from the publicly available TNO dataset [16]. All experiments except the learning-based methods are performed in MATLAB R2019a on a laptop with 16 GB of memory and an Intel Core i5-8350U CPU. In our method, the parameters \(\lambda_1\), \(\lambda_2\), \(\lambda_3\) and \(\lambda_4\) are empirically set as \(10^{-3}\), 5, \(10^{-3}\) and 5.

As observed in Figs. 2-4, all the fusion algorithms can combine the textures and salient features into a single fused image to some extent, but the comparison methods still have some drawbacks. RP [1] obviously suffers from noise amplification, and its fused results are noisier. DTCWT [14] is prone to producing halos around depth-discontinuous edges. The textures in the results of MSVD [15] are unclear, and this method fails to capture details from the visible image. The results generated by CBF [2] contain artifacts, and the edge details are not injected into the fused image well. Although GTF [7] effectively acquires the thermal information in the infrared image, the textures of the visible image are not preserved. The results of the learning-based methods FusionGAN [4] and RFN-Nest [5] are blurred, especially around the infrared objects. In contrast, our method maintains abundant textures from the visible image and captures clear infrared targets from the infrared image.

Fig. 2  Fused results on “person” images. (a) and (b) are the aligned visible and infrared images. (c)-(j) are the fused results generated by RP [1], DTCWT [14], MSVD [15], CBF [2], GTF [7], FusionGAN [4], RFN-Nest [5] and our proposed method. (k)-(t) are the zoom-in results of red and yellow rectangles from (a)-(j), respectively.

Fig. 3  Fused results on “house” images. (a) and (b) are the aligned visible and infrared images. (c)-(j) are the fused results generated by RP [1], DTCWT [14], MSVD [15], CBF [2], GTF [7], FusionGAN [4], RFN-Nest [5] and our proposed method. (k)-(t) are the zoom-in results of red rectangles from (a)-(j), respectively.

Fig. 4  Fused results on “umbrella” images. (a) and (b) are the aligned visible and infrared images. (c)-(j) are the fused results generated by RP [1], DTCWT [14], MSVD [15], CBF [2], GTF [7], FusionGAN [4], RFN-Nest [5] and our proposed method. (k)-(t) are the zoom-in results of red and yellow rectangles from (a)-(j), respectively.

Moreover, the zoom-in results of the red and yellow rectangles are given in Figs. 2-4 (k-t). RP is subject to the influence of noise (see Figs. 2-4 (m)) and DTCWT suffers from halo artifacts around the edges (see Fig. 2 (n)). The results by MSVD lack details (see Figs. 2-4 (o)). CBF is susceptible to artifacts around the edges (see Figs. 2-4 (p)). The results provided by GTF look more like the input infrared image (see Fig. 3 (q)). FusionGAN and RFN-Nest suffer from blurring effects, resulting in fuzzy edges (see Figs. 3 (r)-(s)). In comparison, our fused results are visually more pleasant, with far fewer artifacts and less noise.

To objectively evaluate the performance of different fusion algorithms, we employ six widely used metrics for comparison, namely entropy (EN) [17], the modified fusion artifacts measure (\(N_{abf}\)) [18], visual information fidelity (VIF) [19], the sum of correlation differences (SCD) [20], the structural similarity index measure (SSIM) [21] and multi-scale SSIM (MS-SSIM) [22]. Generally, higher values of EN, VIF, SCD, SSIM and MS-SSIM and smaller values of \(N_{abf}\) represent better fused results. Table 1 reports the average scores of the six metrics on 30 image pairs. As Table 1 shows, our fused results obtain the best or second-best scores on all six objective indicators, which indicates that the proposed method achieves superior fusion performance.
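For reference, a minimal sketch of the EN metric under its standard definition, i.e., the Shannon entropy of the intensity histogram of an 8-bit image; the remaining metrics follow the cited papers:

```python
import numpy as np

def entropy(img, bins=256):
    """EN metric [17]: Shannon entropy (in bits) of the intensity
    histogram; assumes an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))
```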

Table 1  Quantitative comparisons of six metrics.

Figure 5 reveals the influence of the four parameters on the fused results. The empirical values of the four parameters are set as \(10^{-3}\), 5, \(10^{-3}\) and 5. In Fig. 5, as \(\lambda_1\) increases or \(\lambda_2\) decreases, the salient thermal features in the infrared image, such as the “person”, cannot be captured in the fused result. Increasing \(\lambda_3\) makes the fused result gradually smoother. A smaller \(\lambda_4\) strengthens the thermal information from the infrared image while weakening the textures from the visible image. In addition, we also report the effects of the four parameters on the objective metrics in Table 2. From Table 2, a larger \(\lambda_1\) (\(\lambda_1 = 10^{-1}\)) or a smaller \(\lambda_2\) (\(\lambda_2 = 0.5\)) leads to better EN values. A larger \(\lambda_3\) (\(\lambda_3 = 10^{-1}\)) offers better \(N_{abf}\) and SSIM values. With our empirical settings, the values of \(N_{abf}\), SSIM and MS-SSIM rank in the top three. Overall, the empirical values provide a good fused result in most cases.

Fig. 5  Influences of four parameters (\(\lambda_1\), \(\lambda_2\), \(\lambda_3\), \(\lambda_4\)) for our fused result.

Table 2  Influences of four parameters (\(\lambda_1\), \(\lambda_2\), \(\lambda_3\), \(\lambda_4\)) in Fig. 5.

To demonstrate the contribution of each regularization term in the proposed hybrid variational model (1), ablation studies are performed. First, to verify the effectiveness of the \(\ell_0\) regularization term (\(\left\| \nabla f - \nabla r \right\|_0\)) in Eq. (1), we remove it and obtain the following energy function:

\[\begin{equation*} E\left( f \right) = \lambda_1 \left\| f - v \right\|_2^2 + \lambda_3 \left\| \nabla f \right\|_1 + \lambda_4 \left\| \nabla f - G \right\|_2^2 \tag{12} \end{equation*}\]

Then, the \(\ell_2\) regularization term (\(\left\| \nabla f - G \right\|_2^2\)) helps transfer the gradients of the visible image into the fused image. To verify its effectiveness, we remove this regularization term as follows:

\[\begin{align} & E\left( f \right) = \lambda_1 \left\| f - v \right\|_2^2 \nonumber \\ & \hphantom{E\left( f \right) =} + \lambda_2 \left\| \nabla f - \nabla r \right\|_0 + \lambda_3 \left\| \nabla f \right\|_1 \tag{13} \end{align}\]

Finally, to demonstrate the contribution of the devised data term (\(\left\| f - v \right\|_2^2\)), we replace it with the data term \(\left\| {f - r} \right\|_2^2\), which gives the following model:

\[\begin{align} & E\left( f \right) = \lambda_1 \left\| f - r \right\|_2^2 + \lambda_2 \left\| \nabla f - \nabla r \right\|_0 \nonumber \\ & \hphantom{E\left( f \right) =} + \lambda_3 \left\| \nabla f \right\|_1 + \lambda_4 \left\| \nabla f - G \right\|_2^2 \tag{14} \end{align}\]

The fused results obtained by the different models are respectively shown in Fig. 6 (c)-(f). In Fig. 6 (c), the fused image fails to capture the details of the infrared image (see the red rectangle of Fig. 6 (c)), which shows that the \(\ell_0\) regularization term (\(\left\| \nabla f - \nabla r \right\|_0\)) contributes to extracting the details of the infrared image. The result in Fig. 6 (d) cannot effectively fuse the salient information of the visible image (see the red rectangle of Fig. 6 (d)). This is because the \(\ell_2\) regularization term (\(\left\| {\nabla f - G} \right\|_2^2\)) helps acquire the gradients of the visible image. As depicted in Fig. 6 (e), replacing the data term with \(\left\| {f - r} \right\|_2^2\) may result in a loss of details, as in the red rectangle of Fig. 6 (e). Compared to the above fused results, our full model can combine the salient objects of the infrared image with the abundant textures of the visible image to generate a complementary fused image.

Fig. 6  Comparisons of fused results generated by different models. (a) Visible image. (b) Infrared image. (c) Fused result by the model (12). (d) Fused result by the model (13). (e) Fused result by the model (14). (f) Fused result by our full model (1).

To examine the effectiveness of the TV regularization term (\(\left\| \nabla f \right\|_1\)), we remove it to obtain the following model:

\[\begin{align} & E\left( f \right) = \lambda_1 \left\| f - v \right\|_2^2 + \lambda_2 \left\| \nabla f - \nabla r \right\|_0 \nonumber \\ & \hphantom{E\left( f \right) =} + \lambda_4 \left\| \nabla f - G \right\|_2^2 \tag{15} \end{align}\]

To demonstrate the noise suppression ability of the regularization term (\(\left\| \nabla f \right\|_1\)), we add random Gaussian noise to the visible and infrared images and compare the performance of the model (15) and our full model (1), as shown in Fig. 7. The model (15) fails to alleviate the noise in the visible and infrared images because it lacks the TV constraint term. In comparison, the fused result generated by our full model (1) contains less noise, which demonstrates the effectiveness of the TV regularization term.
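This robustness test can be reproduced along the following lines, assuming zero-mean Gaussian noise on the 8-bit intensity scale (the letter does not state the noise level) and the fuse() sketch from Sect. 2.1; setting \(\lambda_3 = 0\) effectively disables the TV term and approximates model (15):

```python
import numpy as np

# v, r: aligned visible and infrared images as float arrays on the 8-bit scale
rng = np.random.default_rng(0)
sigma_n = 10.0   # illustrative noise standard deviation; not specified in the letter
v_noisy = np.clip(v + rng.normal(0.0, sigma_n, v.shape), 0, 255)
r_noisy = np.clip(r + rng.normal(0.0, sigma_n, r.shape), 0, 255)
f_full = fuse(v_noisy, r_noisy)              # full model (1), TV term active
f_no_tv = fuse(v_noisy, r_noisy, lam3=0.0)   # TV term removed, cf. model (15)
```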

Fig. 7  Comparisons of noise suppression performance. (a) Visible image with noise, (b) Infrared image with noise. (c) Fused result by the model (15). (d) Fused result by our full model (1).

4.  Conclusion

In this letter, we have proposed a hybrid variational model for infrared and visible image fusion. Unlike previous fusion algorithms, the proposed method constructs hybrid regularization constraints to simultaneously extract the salient gradients of target features in the infrared image and the gradients of abundant details in the visible image while suppressing noise amplification. Experimental results verify that our approach effectively generates fused images with highlighted targets and abundant textures.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 62301453, in part by the Southwest University Experimental Technology Research Project under Grant SYJ2023030, in part by the Natural Science Foundation of Henan, China under Grant 222300420582 and in part by the Xuchang University Science and Technology Research Project under Grant 2023ZD004.

References

[1] A. Toet, “Image fusion by a ratio of low-pass pyramid,” Pattern Recognit. Lett., vol.9, no.4, pp.245-253, 1989.

[2] B.K. Shreyamsha Kumar, “Image fusion based on pixel significance using cross bilateral filter,” Signal Image Video Process., vol.9, no.5, pp.1193-1204, 2015.

[3] Y. Chen, L. Cheng, H. Wu, F. Mo, and Z. Chen, “Infrared and visible image fusion based on iterative differential thermal information filter,” Opt. Lasers Eng., vol.148, 106776, 2022.

[4] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “FusionGAN: A generative adversarial network for infrared and visible image fusion,” Inform. Fusion, vol.48, pp.11-26, 2019.

[5] H. Li, X.-J. Wu, and J. Kittler, “RFN-Nest: An end-to-end residual fusion network for infrared and visible images,” Inform. Fusion, vol.73, pp.72-86, 2021.

[6] X. Liu, R. Wang, H. Huo, X. Yang, and J. Li, “An attention-guided and wavelet-constrained generative adversarial network for infrared and visible image fusion,” Infrared Phys. Technol., vol.129, 104570, 2023.

[7] J. Ma, C. Chen, C. Li, and J. Huang, “Infrared and visible image fusion via gradient transfer and total variation minimization,” Inform. Fusion, vol.31, pp.100-109, 2016.

[8] H. Zhang, X. Han, and H. Han, “Infrared and visible image fusion based on a rolling guidance filter,” (in Chinese) Infrared Technol., vol.44, no.6, pp.598-603, 2022.

[9] J. Chen, X. Li, and K. Wu, “Infrared and visible image fusion based on relative total variation decomposition,” Infrared Phys. Technol., vol.123, 104112, 2022.

[10] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, “Structure-revealing low-light image enhancement via robust retinex model,” IEEE Trans. Image Process., vol.27, no.6, pp.2828-2841, 2018.

[11] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol.3, no.1, pp.1-122, 2011.

[12] L. Xu, C. Lu, Y. Xu, and J. Jia, “Image smoothing via L0 gradient minimization,” Proc. 2011 SIGGRAPH Asia Conference, pp.1-12, 2011.

[13] Y. Liu, Z. Yan, J. Tan, and Y. Li, “Multi-purpose oriented single nighttime image haze removal based on unified variational retinex model,” IEEE Trans. Circuits Syst. Video Technol., vol.33, no.4, pp.1643-1657, 2023.

[14] J.J. Lewis, R.J. O'Callaghan, S.G. Nikolov, D.R. Bull, and N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” Inform. Fusion, vol.8, no.2, pp.119-130, 2007.

[15] V.P.S. Naidu, “Image fusion technique using multi-resolution singular value decomposition,” Def. Sci. J., vol.61, no.5, pp.479-484, 2011.

[16] A. Toet, “TNO Image Fusion Dataset,” https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029, 2014.

[17] J.W. Roberts, J.A. van Aardt, and F.B. Ahmed, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” J. Appl. Remote Sens., vol.2, no.1, 023522, 2008.

[18] B.K. Shreyamsha Kumar, “Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform,” Signal Image Video Process., vol.7, no.6, pp.1125-1143, 2013.

[19] H.R. Sheikh and A.C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process., vol.15, no.2, pp.430-444, 2006.

[20] V. Aslantas and E. Bendes, “A new image quality metric for image fusion: The sum of the correlations of differences,” AEU-Int. J. Electron. Commun., vol.69, no.12, pp.1890-1896, 2015.

[21] Z. Wang and A.C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett., vol.9, no.3, pp.81-84, 2002.

[22] K. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Trans. Image Process., vol.24, no.11, pp.3345-3356, 2015.

Authors

Zhengwei XIA
  Xuchang University
Yun LIU
  Southwest University
Xiaoyun WANG
  Xuchang University
Feiyun ZHANG
  Xuchang University
Rui CHEN
  Zhengzhou University of Light Industry
Weiwei JIANG
  Beijing University of Posts and Telecommunications
