
Open Access
Nuclear Norm Minus Frobenius Norm Minimization with Rank Residual Constraint for Image Denoising

Hua HUANG, Yiwen SHAN, Chuan LI, Zhi WANG


Summary

Image denoising is an indispensable step in many high-level tasks in image processing and computer vision. However, traditional low-rank minimization-based methods suffer from bias, since only the noisy observation is used to estimate the underlying clean matrix. To overcome this issue, a new low-rank minimization-based method, called nuclear norm minus Frobenius norm rank residual minimization (NFRRM), is proposed for image denoising. The proposed method transforms the ill-posed image denoising problem into rank residual minimization problems by exploiting the nonlocal self-similarity prior. The proposed NFRRM model estimates the underlying clean matrix accurately by treating each rank residual component flexibly. More importantly, the global optimum of the proposed NFRRM model can be obtained in closed form. Extensive experiments demonstrate that the proposed NFRRM method outperforms many state-of-the-art image denoising methods.

Publication
IEICE TRANSACTIONS on Information Vol.E107-D No.8 pp.992-1006
Publication Date
2024/08/01
Publicized
2024/04/09
Online ISSN
1745-1361
DOI
10.1587/transinf.2023EDP7265
Type of Manuscript
PAPER
Category
Fundamentals of Information Systems

1.  Introduction

Image denoising, which aims to recover the original image from its noisy observation, is a challenging and popular problem. It serves as an indispensable step in many high-level tasks in image processing and computer vision [1]-[8]. Owing to its importance and ill-posed nature, image denoising has drawn much research attention in recent years, and a variety of state-of-the-art methods have been proposed based on low-rank minimization [9]-[15], sparse representation [16]-[18], and deep learning [19]-[23].

Benefiting from the development of convex and nonconvex optimization, low-rank minimization has shown excellent performance in image denoising [24], [25]. Essentially, low-rank minimization is performed on each noisy matrix, which has a distinctly low-rank structure. After all of these matrices are denoised, the denoised image is obtained. In practice, the noisy matrices can be constructed by gathering similar structures in the observed image [26]. Mathematically, the low-rank minimization associated with an input noisy matrix, denoted as \(\mathbf{Y}\), is

\[\begin{align} \min_{\mathbf{X}} \frac{1}{2\sigma_n^2} \lVert \mathbf{Y} - \mathbf{X} \rVert_F^2 + \lambda \cdot \mathop{\mathrm{rank}}(\mathbf{X}), \tag{1} \end{align}\]

where \(\mathbf{X}\) is the estimated matrix, \(\lVert {}\cdot{} \rVert_F\) is the Frobenius norm, \(\mathop{\mathrm{rank}}({}\cdot{})\) is the rank function, \(\sigma_n\) is the standard deviation of the noise, and \(\lambda>0\) is the regularization parameter. Unfortunately, the rank minimization problem (1) is NP-hard and cannot be solved in polynomial time. Inspired by compressed sensing [27], [28], the nuclear norm is used to replace the rank function, leading to the following nuclear norm minimization (NNM) problem:

\[\begin{align} \min_{\mathbf{X}} \frac{1}{2\sigma_n^2} \lVert \mathbf{Y} - \mathbf{X} \rVert_F^2 + \lambda \lVert \mathbf{X} \rVert_{*}, \tag{2} \end{align}\]

where \(\lVert {}\cdot{} \rVert_{*}\) is the nuclear norm. As pointed out by Fazel [29], the nuclear norm is the convex envelope, i.e., the “best” convex approximation, of the rank of matrix \(\mathbf{X}\) on the set \(\lbrace \mathbf{X} \in \mathbb{R}^{m\times n}: \lVert \mathbf{X} \rVert_2 \le 1 \rbrace\), where \(\lVert \mathbf{X} \rVert_2\) is the spectral norm. Moreover, Candès et al. [30] proved that the underlying matrix can be recovered with high accuracy and large probability. Solving the convex NNM problem has two advantages. First, a local optimum is also the global optimum. Second, there are a number of efficient algorithms to solve it in polynomial time, such as the interior point method [31], singular value thresholding [32], and accelerated proximal gradient with line search [33]. Despite the theoretical guarantees and efficient solvers, NNM still has some limitations. As pointed out by [9], [34], the nuclear norm treats all singular values equally. This practice ignores the prior that singular values of different magnitudes have different importance and thus should be treated differently. Consequently, NNM can cause severe deviation between the recovered matrix and the optimum. To alleviate this problem, a number of nonconvex low-rank regularizers have been studied, including weighted norms [14], [15], the truncated nuclear norm [35], [36], and the capped nuclear norm [37], [38]. These nonconvex regularizers treat singular values more flexibly, and their superiority over the nuclear norm has been validated in many studies. The weighted nuclear norm [14] sets the weights to be inversely proportional to the magnitudes of the singular values. Therefore, larger singular values, which carry more information, are penalized less. The resultant weighted nuclear norm minimization (WNNM) method has been applied to image denoising, achieving significant improvements over the original NNM. More importantly, the global optimum of the WNNM model can be obtained in closed form, which makes WNNM highly efficient.
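
To make the NNM machinery concrete, below is a minimal sketch of the singular value thresholding operator [32] for a problem of the form (2); rescaling the fidelity term by \(\sigma_n^2\) makes the threshold \(\tau = \lambda\sigma_n^2\), and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def svt(Y: np.ndarray, tau: float) -> np.ndarray:
    """Closed-form minimizer of 0.5*||Y - X||_F^2 + tau*||X||_* (cf. [32])."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold every singular value equally
    return (U * s_shrunk) @ Vt            # reassemble the low-rank estimate
```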

However, Xie et al. [15] pointed out that the weighted nuclear norm still tends to over-penalize the dominant singular values, causing WNNM to obtain biased solutions. Accordingly, they proposed weighted Schatten \(p\)-norm minimization (WSNM), which achieved state-of-the-art denoising performance. However, solving WSNM is expensive, since it no longer admits a closed-form solution and requires dedicated iterative solvers [39].

One common defect of the aforementioned models is that the low-rank matrix, i.e., \(\mathbf{X}\) in (1), is estimated only from its noisy version \(\mathbf{Y}\). As the noise becomes stronger, it is more difficult to recognize similar structures and gather them into the noisy matrix. In other words, the low-rank nature of matrix \(\mathbf{Y}\) is weakened. Consequently, the recovery deviates from the optimum. To address this problem, dedicated constraints should be introduced. The rank residual constraint (RRC) [40] constructs a reference matrix for each noisy matrix. The reference matrices lead to a more accurate estimation of the underlying clean matrices. However, the RRC model treats rank residual components equally, over-penalizing the dominant ones. To overcome this drawback, Zhang et al. [41] replaced the nuclear norm regularizer in RRC with the weighted Schatten \(p\)-norm, and proposed the rank residual constraint with weighted Schatten \(p\)-norm (SRRC) model. The SRRC model further improves the denoising performance over RRC. However, it requires iterative solvers [39], which can be expensive when recovering large images.

Considering the drawbacks of the aforementioned models, in this paper we propose a new low-rank minimization model, called Nuclear norm minus Frobenius norm Rank Residual Minimization (NFRRM), and apply it to the image denoising problem. The proposed model has two major advantages. First, it treats different rank residual components more flexibly, so the underlying low-rank matrix can be estimated more accurately. Second, it admits a cheap closed-form solution [42], so it can be solved with high efficiency. By exploiting the nonlocal self-similarity [26] prior, the proposed NFRRM model is applied to image denoising. Extensive experiments demonstrate that the proposed NFRRM method outperforms several state-of-the-art image denoising methods.

2.  Related Works

2.1  Nonlocal Self-Similarity

As an ill-posed problem, image denoising has no unique solution, and the solutions vary discontinuously under small input changes. Therefore, prior knowledge should be introduced to regularize the solution space. Nonlocal self-similarity (NSS) [26] is one of the exemplary image priors: similar patterns recur across a natural image. By gathering these similar structures, a collection of low-rank matrices, called patch matrices, can be constructed. A low-rank minimization model is then applied to each patch matrix to estimate its underlying denoised version. Eventually, the denoised image is obtained by aggregating all denoised matrices.

Figure 1 depicts the detailed procedure for generating a patch matrix from the observed image. Concretely, given a noisy image \(Y\), a number of key patches are placed across it. These key patches are spaced an equal distance apart, and each spans \(p\times p\) pixels. For each key patch, its \(k\) most similar neighbors (i.e., patches) are identified via the \(k\)-nearest neighbors (\(k\)-NN) algorithm. The similar neighbors are searched for within a square region around the key patch, instead of the whole image. The distance between the key patch \(\mathbf{K}\in \mathbb{R}^{p\times p}\) and its \(i\)th neighbor \(\mathbf{P}_{i} \in \mathbb{R}^{p\times p}\) is measured by

\[\begin{align} d(\mathbf{K}, \mathbf{P}_{i}) = \lVert \mathbf{K}- \mathbf{P}_{i}\rVert_{F}, \tag{3} \end{align}\]

where \(i\in \lbrace 1,2,\ldots, S^2\rbrace\), and \(S\) is the side length of the square search region. After that, the chosen \(k\) most similar patches are stretched into column vectors, denoted as \(\mathbf{y}_{i} \in \mathbb{R}^{p^2}\) for \(i\in \lbrace 1,2,\ldots, k\rbrace\). These \(k\) vectors are stacked to form a patch matrix \(\mathbf{Y} = \mathbf{C} + \mathbf{N} \in \mathbb{R}^{p^2\times k}\), where \(\mathbf{C}\) and \(\mathbf{N}\) are the underlying clean matrix and the noise matrix, respectively. Note that the correspondence between key patches and patch matrices is one-to-one. Since patch matrix \(\mathbf{Y}\) groups similar structures, it exhibits a clear low-rank property. Therefore, low-rank minimization can be conducted on \(\mathbf{Y}\) to estimate the corresponding denoised version \(\mathbf{X}\), which should be as close to the ground truth \(\mathbf{C}\) as possible. In summary, the NSS prior bridges the gap between low-rank minimization and the image denoising problem, motivating numerous low-rank-based image denoising methods [13]-[15], [43], [44].
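
As an illustration of the NSS grouping just described, the following minimal sketch builds one patch matrix by scanning an \(S\times S\) search window around a key patch; function and parameter names are assumptions, not the authors' implementation, and the window is assumed to hold at least \(k\) candidate patches.

```python
import numpy as np

def patch_matrix(img, top, left, p=7, k=60, S=25):
    """Return the p^2 x k patch matrix for the key patch at (top, left)."""
    key = img[top:top+p, left:left+p]
    cands, dists = [], []
    r0, r1 = max(0, top - S // 2), min(img.shape[0] - p, top + S // 2)
    c0, c1 = max(0, left - S // 2), min(img.shape[1] - p, left + S // 2)
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            patch = img[r:r+p, c:c+p]
            dists.append(np.linalg.norm(key - patch))   # Frobenius distance, Eq. (3)
            cands.append(patch.reshape(-1))             # stretch into a p^2 vector
    order = np.argsort(dists)[:k]                       # indices of the k nearest patches
    return np.stack([cands[i] for i in order], axis=1)  # p^2 x k patch matrix
```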

Fig. 1  Image denoising based on integrating NSS and low-rank minimization.

2.2  The Constraint of Rank Residual

As discussed in the Introduction, traditional low-rank models produce defective results because they estimate the underlying clean matrix only from its noisy observation. To address this issue, dedicated constraints should be introduced to regularize the solutions. In [40], Zha et al. proposed a novel low-rank minimization-based method with a rank residual constraint (RRC) for image denoising and achieved competitive results.

Mathematically, the rank residual, denoted as \(\Gamma\), is defined as the difference between the estimated matrix \(\mathbf{X}\) and the underlying ground truth matrix \(\mathbf{C}\), that is, \(\Gamma \stackrel{\text{def}}{=} \mathbf{X} - \mathbf{C}\). However, the clean matrix \(\mathbf{C}\) is unavailable in the image denoising problem. Therefore, an accurate estimator should be introduced, for example, nonlocal means [45]. Denoting the estimated clean matrix as \(\hat{\mathbf{C}}\), the rank residual can be modified as

\[\begin{align} \Gamma \stackrel{\text{def}}{=} \mathbf{X} - \hat{\mathbf{C}}. \tag{4} \end{align}\]

Finally, a general form of low-rank minimization model with rank residual can be formulated as

\[\begin{align} \min_{\mathbf{X}} \frac{1}{2} \lVert\mathbf{Y} - \mathbf{X}\rVert_F^2 + \lambda r(\mathbf{X} - \hat{\mathbf{C}}), \tag{5} \end{align}\]

where \(r( {}\cdot{} )\) denotes a regularizer that promotes the rank residual to be low-rank. Which regularizer fits the rank residual best remains an open problem [41].

2.3  Nuclear Norm Minus Frobenius Norm

Nuclear norm Minus Frobenius norm (NNFN) is a nonconvex regularizer with multiple merits. Mathematically, the NNFN of a matrix \(\mathbf{X} \in \mathbb{R}^{m\times n}\) is defined as

\[\begin{align} \lVert \mathbf{X} \rVert_{*-F} &= \lVert \mathbf{X} \rVert_{*} - \alpha \lVert \mathbf{X} \rVert_{F} \nonumber\\ &= \sum_{i=1}^{l} \sigma_i(\mathbf{X}) - \alpha\Bigl( \sum_{i=1}^{l} \sigma_i(\mathbf{X})^2 \Bigr)^{ \frac{1}{2} }, \tag{6} \end{align}\]

where \(\alpha\ge 0\), \(l = \min(m,n)\), and \(\sigma_i(\mathbf{X})\) is the \(i\)th singular value of \(\mathbf{X}\). As shown in Fig. 2, NNFN treats singular values with great flexibility. More importantly, the optimization model with NNFN, which can be formulated as

\[\begin{align} \min_{\mathbf{X}} \frac{1}{2} \lVert \mathbf{Y} - \mathbf{X} \rVert_F^2 + \lambda \lVert \mathbf{X} \rVert_{*-F}, \tag{7} \end{align}\]

admits a global optimum in closed form [42], [46]. NNFN-based low-rank models have achieved promising performance in matrix completion [46], recommender systems [46], and color image denoising [13].
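
For reference, here is a minimal sketch of evaluating the NNFN value in Eq. (6); the helper name is illustrative.

```python
import numpy as np

def nnfn(X, alpha):
    """NNFN value of Eq. (6): nuclear norm minus alpha times the Frobenius norm."""
    s = np.linalg.svd(X, compute_uv=False)      # singular values of X
    return s.sum() - alpha * np.linalg.norm(s)  # ||X||_* - alpha * ||X||_F
```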

Fig. 2  The output pattern of the NNFN-based proximal operator with \(\lambda=10\). The horizontal axis represents the input, i.e., the \(i\)th singular value of the observed matrix \(\mathbf{Y}\). The vertical axis represents the output, i.e., the \(i\)th singular value of the output matrix \(\mathbf{X}\). A rational setting of \(\alpha\) boosts the flexibility of the shrinkage. Concretely, when \(\alpha=0\), NNFN reduces to the nuclear norm, so a constant shrinkage is imposed on \(\sigma_i(\mathbf{X})\). As \(\alpha\) becomes larger, NNFN shrinks the large singular values less, or even preserves them. When \(\alpha \rightarrow +\infty\), NNFN behaves like the capped nuclear norm, which only shrinks the small singular values.

3.  The Proposed Model

3.1  Problem Formulation

Image denoising aims to recover the original clean image \(C\) from its noisy observation \(Y\), which can be formulated as

\[\begin{align} Y = C + N, \tag{8} \end{align}\]

where \(N \in \mathbb{R}^{m\times n}\) is white Gaussian noise with each entry \(N_{ij} \sim \mathcal{N}(0, \sigma_{n}^2)\). Recovering \(C\) from (8) is severely ill-posed. Therefore, prior knowledge should be exploited to characterize the statistical features of the observed image \(Y\). In this paper, the NSS prior is introduced to gather similar structures, form the low-rank matrices, and deliver them to the proposed model. As shown in Fig. 3, two matrices are prepared for our model.

Fig. 3  The schematic framework of the proposed NFRRM method.

  1. Patch matrix
    Given the observed image \(Y \in \mathbb{R}^{H\times W}\), we place \(M\) key patches across it. Each key patch spans \(p\times p\) pixels, and neighboring key patches are \(s\) pixels apart. In this paper, we set \(s = \min(4, p-1)\). Hence we have

    \[\begin{align} M = \lceil (H-p)/s \rceil \times \lceil (W-p)/s \rceil, \tag{9} \end{align}\]
    where \(\lceil {}\cdot{} \rceil\) is the ceiling function. For each key patch, the \(k\) most similar patches are identified via the \(k\)-NN algorithm. The selected \(k\) similar patches are then stretched into column vectors, denoted as \(\mathbf{y}_{i} \in \mathbb{R}^{p^2}\) for \(i \in \lbrace 1,2,\ldots, k\rbrace\). The vectors are stacked to form a patch matrix, denoted as \(\mathbf{Y} \in \mathbb{R}^{p^2\times k}\). The detailed procedure for generating the patch matrix can be found in Sect. 2.1.

  2. Reference matrix
    To calculate the rank residual in (4), a reference matrix is constructed for each patch matrix. To recap, for each key patch, the \(k\) most similar patches are extracted by \(k\)-NN. For each similar patch \(\mathbf{P}_i\) (\(i\in \lbrace1,2,\ldots,k\rbrace\)), its contribution, i.e., weight, to the reference matrix is

    \[\begin{align} w_i = \frac{1}{\lVert \mathbf{w} \rVert_1} e^{-\frac{\Vert\mathbf{K} - \mathbf{P}_{i}\Vert_2}{h} }, \tag{10} \end{align}\]
    where \(\mathbf{w} = [w_1, w_2, \ldots, w_k]^\top\), \(\mathbf{K}\) is the key patch, and \(h\) is a constant. Intuitively, the weight \(w_i\) depends on the similarity between the key patch \(\mathbf{K}\) and the neighbor \(\mathbf{P}_i\). And the similarity is modeled by a decreasing function of the Euclidean distance. After obtaining the weight vector \(\mathbf{w}\), the reference matrix \(\hat{\mathbf{C}}\) can be calculated. Concretely, the \(j\)th column of \(\hat{\mathbf{C}}\) is given by (\(j\in \lbrace1,2,\ldots,k\rbrace\))
    \[\begin{align} &\hat{\mathbf{C}}(1{:}p^2, j)= \mathbf{Y}(1 {:} p^2, 1 {:} k-j+1) \cdot\mathbf{w}(1 {:} k-j+1), \tag{11} \end{align}\]
    where \(\mathbf{Y} \in \mathbb{R}^{p^2\times k}\) is the noisy patch matrix of key patch \(\mathbf{K}\), and the subscript in \(\mathbf{w}(1{:}k-j+1)\) means selecting the 1st through the \((k-j+1)\)th elements of \(\mathbf{w}\). The mechanism in (11) stems from nonlocal means [45], and the constructed reference matrix \(\hat{\mathbf{C}} \in \mathbb{R}^{p^2\times k}\) approximates the underlying clean patch matrix \(\mathbf{C}\) with sufficient accuracy. It is worth emphasizing that each key patch corresponds to exactly one patch matrix and one reference matrix; a sketch of this construction follows the list.
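
A minimal sketch of the reference-matrix construction, following Eqs. (10)-(11) literally (with the patch distance taken as the Frobenius norm, cf. Eq. (3)), might look as follows; names and the value of \(h\) are illustrative.

```python
import numpy as np

def reference_matrix(Y, key, neighbours, h=40.0):
    """Y: p^2 x k noisy patch matrix; neighbours: the k patches P_i matching Y's columns."""
    raw = np.array([np.exp(-np.linalg.norm(key - P) / h) for P in neighbours])
    w = raw / raw.sum()                # normalise so that the weights sum to 1, Eq. (10)
    k = Y.shape[1]
    C_hat = np.empty_like(Y)
    for j in range(k):                 # 0-based j here stands for j+1 in the paper
        C_hat[:, j] = Y[:, :k - j] @ w[:k - j]   # Eq. (11): first k-j+1 columns/weights
    return C_hat
```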

To estimate the clean patch matrix \(\mathbf{X}\) with high accuracy and efficiency, the Nuclear norm minus Frobenius norm Rank Residual Minimization (NFRRM) model is proposed. Mathematically, the NFRRM model can be formulated as

\[\begin{align} \min_{\mathbf{X}} \frac{1}{2\sigma_{n}^{2}} \lVert\mathbf{Y} - \mathbf{X}\rVert_F^2 + \lambda \lVert\mathbf{X} - \hat{\mathbf{C}}\rVert_{*-F}, \tag{12} \end{align}\]

where \(\sigma_n\) is the standard deviation of the noise, \(\lambda\) is the regularization parameter, and \(\mathbf{Y}\) and \(\hat{\mathbf{C}}\) are the input patch matrix and reference matrix, respectively.

After the patch matrix \(\mathbf{X} \in \mathbb{R}^{p^2\times k}\) has been estimated, it is decomposed back into \(k\) patches, which are then returned to their original locations. Finally, the denoised image is obtained by aggregating the results of all denoised patch matrices.

The above procedure is carried out for several rounds to obtain better denoising performance. Denote the input and output images at the \(t\)-th iteration as \(Y^{(t)}\) and \(X^{(t)}\), respectively. To reduce the method noise, the following iterative regularization is adopted:

\[\begin{align} Y^{(t)} = X^{(t-1)} + \delta(Y - X^{(t-1)}), \tag{13} \end{align}\]

where \(t\in \mathbb{N}_{+}\), and \(\delta\in[0,1)\) controls the step-back in consecutive iterations.
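
A minimal sketch of this outer loop is given below; `denoise_once`, standing for one NFRRM pass over all patch matrices, is an assumed callable, and `Y` is assumed to be a NumPy array.

```python
def iterative_regularization(Y, denoise_once, T=8, delta=0.1):
    """Outer loop of Eq. (13); denoise_once is one NFRRM pass over all patch matrices."""
    X = Y.copy()                   # X^(0): start from the noisy image
    for _ in range(T):
        Y_t = X + delta * (Y - X)  # Eq. (13): feed back part of the method noise
        X = denoise_once(Y_t)
    return X
```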

3.2  Optimization

The optimization problem in (12) can be equivalently rewritten as follows:

\[\begin{align} \min_{\mathbf{X}} \frac{1}{2} \lVert\bar{\mathbf{Y}} - \bar{\mathbf{X}}\rVert_F^2 + \lambda\sigma_{n}^{2} \lVert \bar{\mathbf{X}} \rVert_{*-F}, \tag{14} \end{align}\]

where \(\bar{\mathbf{Y}} = \mathbf{Y}-\hat{\mathbf{C}}\) and \(\bar{\mathbf{X}} = \mathbf{X}-\hat{\mathbf{C}}\). The following theorem shows that the global optimum of problem (14) can be obtained in closed form.

Theorem 1: Assume that \(\bar{\mathbf{Y}}\) admits singular value decomposition (SVD) as \(\mathbf{U}_{\bar{\mathbf{Y}}} \boldsymbol{\Sigma}_{\bar{\mathbf{Y}}} \mathbf{V}_{\bar{\mathbf{Y}}}^{\top}\), where \(\boldsymbol{\Sigma}_{\bar{\mathbf{Y}}} = \mathop{\mathrm{Diag}}(\boldsymbol{\sigma}(\bar{\mathbf{Y}}))\). Then the global optimum of problem (14) is

\[\begin{align} \mathbf{X}^{*} = \mathbf{U}_{\bar{\mathbf{Y}}} \mathop{\mathrm{Diag}}(\boldsymbol{\rho}^*) \mathbf{V}_{\bar{\mathbf{Y}}}^{\top} + \hat{\mathbf{C}}, \tag{15} \end{align}\]

with

\[\begin{align} \rho_{i}^{*} = \left(1 + \frac{\lambda \alpha \sigma_{n}^{2}}{\lVert \mathbf{s} \rVert_2} \right) \cdot s_i, \tag{16} \end{align}\]

where \(s_i= \max(\sigma_i(\bar{\mathbf{Y}}) - \lambda\sigma_{n}^{2}, 0)\) and \(\mathbf{s} = [s_1, s_2, \ldots, s_l]^\top\).

Proof 1: Assume that \(\bar{\mathbf{X}} \in \mathbb{R}^{p\times q}\) admits the SVD \(\mathbf{U}_{\bar{\mathbf{X}}} \boldsymbol{\Sigma}_{\bar{\mathbf{X}}} \mathbf{V}_{\bar{\mathbf{X}}}^{\top}\), with \(l = \min(p, q)\), and denote \(\lambda\sigma_n^2\) by \(\Lambda\). Then the loss term in problem (14) can be rewritten as

\[\begin{align} \frac{1}{2} \lVert \bar{\mathbf{Y}} - \bar{\mathbf{X}} \rVert_F^2 = \frac{1}{2} \bigl( \lVert \bar{\mathbf{Y}} \rVert_F^2 - 2\langle \bar{\mathbf{Y}}, \bar{\mathbf{X}} \rangle + \lVert \bar{\mathbf{X}} \rVert_F^2 \bigr). \tag{17} \end{align}\]

According to von Neumann's trace inequality [47], we have

\[\begin{align} \langle \bar{\mathbf{Y}}, \bar{\mathbf{X}} \rangle \le \mathop{\mathrm{Tr}}(\sigma(\bar{\mathbf{Y}})^{\top} \sigma(\bar{\mathbf{X}})). \tag{18} \end{align}\]

The equality occurs if and only if

\[\begin{align} \mathbf{U}_{\bar{\mathbf{Y}}} = \mathbf{U}_{\bar{\mathbf{X}}} \quad \mathrm{and} \quad \mathbf{V}_{\bar{\mathbf{Y}}} = \mathbf{V}_{\bar{\mathbf{X}}}. \tag{19} \end{align}\]

Then, for problem (14), we have

\[\begin{align} &\min_{\mathbf{X}} \frac{1}{2} \lVert\bar{\mathbf{Y}} - \bar{\mathbf{X}}\rVert_F^2 + \Lambda \lVert \bar{\mathbf{X}} \rVert_{*-F} \tag{20} \\ ={}& \min_{\mathbf{X}} \frac{1}{2}\lVert\bar{\mathbf{Y}}\rVert_F^2 - \mathop{\mathrm{Tr}}(\sigma(\bar{\mathbf{Y}})^{\top} \sigma(\bar{\mathbf{X}})) + \frac{1}{2}\lVert\bar{\mathbf{X}}\rVert_F^2 \notag \\ &\quad{} + \Lambda \Biggl( \sum_{i=1}^{l}\sigma_i(\bar{\mathbf{X}}) - \alpha \Bigl(\sum_{i=1}^{l}\sigma_i(\bar{\mathbf{X}})^2\Bigr)^{\frac{1}{2}} \Biggr) \tag{21} \\ ={}& \min_{\mathbf{X}} \sum_{i=1}^{l} \Bigl( \frac{1}{2}\sigma_i(\bar{\mathbf{X}})^2 - \sigma_i(\bar{\mathbf{Y}}) \cdot \sigma_i(\bar{\mathbf{X}}) + \Lambda \cdot \sigma_i(\bar{\mathbf{X}}) \Bigr) \notag \\ &\quad{} - \Lambda \alpha \Bigl( \sum_{i=1}^{l}\sigma_i(\bar{\mathbf{X}})^2 \Bigr)^{\frac{1}{2}} \tag{22} \end{align}\]

Note that the constant term \(\frac{1}{2}\lVert\bar{\mathbf{Y}}\rVert_F^2\) is dropped from (21) to (22), since it does not affect the minimizer. Denote the objective function of (22) as \(J(\boldsymbol{\sigma}(\bar{\mathbf{X}}))\). The minimum point of \(J(\boldsymbol{\sigma}(\bar{\mathbf{X}}))\), denoted as \(\boldsymbol{\rho}^* \in \mathbb{R}^{l}\), is given by the following first-order optimality condition:

\[\begin{align} \frac{\partial J}{\partial {\boldsymbol{\sigma}(\bar{\mathbf{X}})}} = \mathbf{0}, \tag{23} \end{align}\]

where \(\mathbf{0} \in \mathbb{R}^{l}\) is a zero vector. Formula (23) can be rewritten as

\[\begin{align} \Bigl(1 - \frac{\Lambda \alpha}{\lVert \boldsymbol{\sigma}(\bar{\mathbf{X}}) \rVert_{2}} \Bigr) \boldsymbol{\sigma}(\bar{\mathbf{X}}) = \boldsymbol{\sigma}(\bar{\mathbf{Y}}) - \Lambda\mathbf{1}, \tag{24} \end{align}\]

where \(\mathbf{1} \in \mathbb{R}^{l}\) is a vector with all elements equal to 1. The solution of (24) is

\[\begin{align} \rho_i^* = \Bigl(1 + \frac{\Lambda \alpha}{\lVert \mathbf{s} \rVert_2} \Bigr) \cdot s_i, \tag{25} \end{align}\]

where \(s_i = \max(\sigma_i(\bar{\mathbf{Y}}) - \Lambda, 0)\). Therefore, the global optimum of \(\bar{\mathbf{X}}\) is

\[\begin{align} \bar{\mathbf{X}}^* = \mathbf{U}_{\bar{\mathbf{Y}}} \mathop{\mathrm{Diag}}(\boldsymbol{\rho}^*) \mathbf{V}_{\bar{\mathbf{Y}}}^{\top}. \tag{26} \end{align}\]

Thus the global optimum of the original problem (12) is

\[\begin{align} \mathbf{X}^* = \bar{\mathbf{X}}^* + \hat{\mathbf{C}}. \tag{27} \end{align}\]
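
Based on Theorem 1, a minimal sketch of the closed-form NFRRM solver for problem (12) could read as follows; names are illustrative, and a guard is added for the degenerate case \(\lVert \mathbf{s} \rVert_2 = 0\).

```python
import numpy as np

def nfrrm_solve(Y, C_hat, lam, alpha, sigma_n):
    """Global optimum of problem (12) via Theorem 1 / Eqs. (25)-(27)."""
    Y_bar = Y - C_hat
    U, sig, Vt = np.linalg.svd(Y_bar, full_matrices=False)
    Lam = lam * sigma_n ** 2
    s = np.maximum(sig - Lam, 0.0)       # soft-thresholding, Eq. (25)
    s_norm = np.linalg.norm(s)
    if s_norm > 0:
        s *= 1.0 + Lam * alpha / s_norm  # NNFN rescaling factor, Eq. (16)
    return (U * s) @ Vt + C_hat          # Eqs. (26)-(27): rebuild and shift back
```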

Finally, the whole procedure of our image denoising method is summarized in Algorithm 1.

3.3  Complexity Analysis

We discuss the time complexity of the steps in the inner for-loop of Algorithm 1. Step 6 costs \(\mathcal{O}(S^2 \log S)\), where \(S\) is the side length of the square search region. Step 7 costs \(\mathcal{O}(p^4 k)\), where \(p\) is the side length of a patch and \(k\) is the number of most similar neighbors. Step 8 costs \(\mathcal{O}(p^2 k^2)\), since there are \(p^2 (1 + 2 + \cdots + k) = \frac{p^2}{2}k(k+1)\) entries to be calculated. Step 9 (SVD) costs \(\mathcal{O}(p^2 k^2)\). Steps 10 and 11 cost \(\mathcal{O}(p^4k + p^2k^2)\) and \(\mathcal{O}(p^2k)\), respectively. Among them, the dominant cost lies in step 8. Therefore, the total time complexity of the proposed method is \(\mathcal{O}(p^2 k^2 \cdot M \cdot T)\), and solving the proposed NFRRM model (i.e., steps 9 and 10) costs \(\mathcal{O}(p^2 k^2)\).

4.  Experimental Results

To validate the effectiveness of the proposed NFRRM method, extensive image denoising experiments are conducted. Seven state-of-the-art methods are chosen for comparison: block-matching and 3-D filtering (BM3D) [16], global image denoising (GLIDE) [48], optimal graph Laplacian regularization (OGLR) [49], group sparsity residual constraint with nonlocal priors (GSRC) [18], rank residual constraint (RRC) [40], SRRC [41], and nonconvex structural sparsity residual constraint (NSSRC) [50]. The codes of all competing methods are obtained from their authors, with default parameters kept. Both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to quantify denoising performance; higher PSNR and SSIM indicate better quality of the denoised image. The comparison is conducted on ten widely used images, whose thumbnails are shown in Fig. 4. The noisy observations are generated by zero-mean Gaussian noise with standard deviation \(\sigma_{n} \in \lbrace 20, 30, 40, 50, 75, 100 \rbrace\).
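
As a side note, this evaluation protocol can be reproduced with scikit-image (an assumed dependency; the paper does not state its implementation). A minimal sketch:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def add_noise(clean, sigma_n, seed=0):
    """Corrupt a clean image with zero-mean Gaussian noise of deviation sigma_n."""
    rng = np.random.default_rng(seed)
    return clean + rng.normal(0.0, sigma_n, clean.shape)

def evaluate(clean, denoised, data_range=255):
    """Return (PSNR, SSIM) of a denoised image against the ground truth."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=data_range)
    ssim = structural_similarity(clean, denoised, data_range=data_range)
    return psnr, ssim
```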

Fig. 4  Ten test images. The first row, from left-to-right: House, Straw, Lake, Starfish, and Kodim. The second row, from left-to-right: Monarch, Plants, Brodatz, Parrots, and Boat.

For the proposed NFRRM method, the parameters are set with respect to the noise level. Concretely, the patch size \(p\times p\) is set to \(6\times6\), \(7\times 7\), \(7\times 7\), \(7\times 7\), \(8\times 8\), and \(9\times 9\) for \(\sigma_n=20, 30, 40, 50, 75, 100\), respectively. The number of similar neighbors \(k\) is set to \(60\), \(60\), \(70\), \(80\), \(90\), and \(100\), respectively. The size of the search region is \(S\times S = 25\times 25\) for all noise levels, the iterative regularization parameter is \(\delta = 0.1\), and the upper bound on the number of iterations is \(T = 81\). The settings of \(\lambda\) and \(\alpha\) are listed in Table 3.

The PSNR results for all competing methods are presented in Table 1, with the best results highlighted in bold. The proposed NFRRM method achieves the highest PSNR in 42 out of 60 cases. Importantly, NFRRM outperforms the other rank residual-based methods, i.e., RRC and SRRC, in all 60 cases. The average improvements of NFRRM over RRC are 0.10 dB, 0.13 dB, 0.14 dB, 0.19 dB, 0.20 dB, and 0.38 dB for \(\sigma_n=20, 30, 40, 50, 75, 100\), respectively; the improvement becomes more significant as the noise grows. Moreover, NFRRM also outperforms the group sparse residual-based methods, i.e., GSRC and NSSRC.

The SSIM results are shown in Table 2. The proposed NFRRM achieves the best SSIM in 41 out of 60 cases. Concretely, NFRRM outperforms its counterparts, i.e., RRC and SRRC, in 53 out of 60 cases. The average improvements of NFRRM over SRRC are \(1.6\times 10^{-3}\), \(2.6\times 10^{-3}\), \(5.8\times 10^{-3}\), \(6.3\times 10^{-3}\), \(1.02\times 10^{-2}\), and \(1.83\times 10^{-2}\) for \(\sigma_n=20, 30, 40, 50, 75, 100\), respectively. When \(\sigma_n = 20\), NFRRM fails to achieve the best average SSIM, narrowly losing to NSSRC; however, it achieves the highest average SSIM under all other noise levels. In summary, the proposed NFRRM method not only achieves the highest PSNR and SSIM in most cases, but also significantly outperforms other state-of-the-art rank residual-based methods.

Table 1  PSNR (dB) results of all competing methods.

Table 2  SSIM results of all competing methods.

Table 3  The settings of \(\lambda\) and \(\alpha\).

The visual comparison between all competing methods is shown in Figs. 5-10. The proposed NFRRM removes the Gaussian noise completely in all six chosen cases, while preserving textures, edges, and image details well. Concretely, NFRRM presents a better recovery of the textures of the sea bed in Fig. 5, the eave in Fig. 7, and the multiple kinds of surfaces in Fig. 10. In contrast, GSRC, RRC, and NSSRC over-smooth those textures, as shown in their highlighted windows. In the highlighted windows of Fig. 8, NFRRM reconstructs more structure on the mouthpart of the monarch and the flowers in the bottom-left corner, whereas OGLR and GLIDE over-smooth the image and generate too many artifacts. In Fig. 6 and Fig. 9, NFRRM preserves the edges better, while BM3D, GLIDE, and OGLR fail to reconstruct those edges and details. In summary, the proposed NFRRM method exhibits strong denoising capability, recovering images with promising visual quality while achieving high PSNR and SSIM.

Fig. 5  Denoising results of Starfish with \(\sigma_n = 20\). The subtitle format is “method: PSNR, SSIM”.

Fig. 6  Denoising results of Lake with \(\sigma_n = 30\). The subtitle format is “method: PSNR, SSIM”.

Fig. 7  Denoising results of Kodim with \(\sigma_n = 40\). The subtitle format is “method: PSNR, SSIM”.

Fig. 8  Denoising results of Monarch with \(\sigma_n = 40\). The subtitle format is “method: PSNR, SSIM”.

Fig. 9   Denoising results of Plants with \(\sigma_n = 50\). The subtitle format is “method: PSNR, SSIM”.

Fig. 10   Denoising results of Brodatz with \(\sigma_n = 75\). The subtitle format is “method: PSNR, SSIM”.

Fig. 11  The effects of \(\lambda\). (a) \(\lambda=0.01\): \(\text{PSNR}=18.56\) dB, \(\text{SSIM}=0.3929\). (b) \(\lambda=1\): \(\text{PSNR}=27.97\) dB, \(\text{SSIM}=0.8329\). (c) \(\lambda=100\): \(\text{PSNR}=11.43\) dB, \(\text{SSIM}=0.5027\).

We also compare the proposed NFRRM model with a state-of-the-art deep learning model, BoostNet [51]. For fairness, we use one NFRRM model (\(\lambda=0.93\), \(\alpha=1.855\)) and one BoostNet model (trained exactly for \(\sigma_n = 50\)) to test all noise levels \(\sigma_n \in \lbrace 20,30,40,50,75,100 \rbrace\). The PSNR and SSIM results are listed in Table 4 and Table 5, respectively. For \(\sigma_n = 50\), NFRRM is narrowly inferior to BoostNet, obtaining higher PSNR and SSIM on 4 images versus BoostNet's 6. However, for \(\sigma_n \in \lbrace 20,30,40,75,100 \rbrace\), the proposed NFRRM outperforms BoostNet significantly, achieving higher PSNR and SSIM on all 50 test cases. Moreover, the denoising results of BoostNet (trained for \(\sigma_n = 50\)) degrade as the tested noise level moves away from 50. As shown in Fig. 12, BoostNet over-smooths the images corrupted by \(\sigma_n \in \lbrace 20,30,40 \rbrace\), while it leaves too much noise in the images corrupted by \(\sigma_n \in \lbrace 75,100 \rbrace\). On the contrary, the proposed NFRRM is much more robust, since the same NFRRM model obtains satisfactory denoising results at all six noise levels.

Table 4  PSNR results of a BoostNet model (trained for \(\sigma_n = 50\)) and a NFRRM model (\(\lambda=0.93\), \(\alpha=1.855\)).

Table 5  SSIM results of a BoostNet model (trained for \(\sigma_n = 50\)) and a NFRRM model (\(\lambda=0.93\), \(\alpha=1.855\)).

Fig. 12  Denoising results on “Starfish” of BoostNet model (trained for \(\sigma_n = 50\)) and NFRRM model (\(\lambda=0.93\), \(\alpha=1.855\)).

5.  Sensitivity Analyses of Hyper-Parameters

In this section, we discuss the sensitivity of \(\lambda\), \(\alpha\), and \(s\). All of the tests are carried out on the image “Starfish” corrupted by noise \(\sim \mathcal{N}(0, 30^2)\).

5.1  Regularization Parameter \(\lambda\)

The regularization parameter \(\lambda\) balances the fidelity term (\(\lVert \cdot \rVert_F^2\)) and the regularization term (\(\lVert \cdot \rVert_{*-F}\)). When \(\lambda\) is too small, the NFRRM model mostly minimizes the fidelity term, so the output image contains too much noise, as shown in Fig. 11 (a). When \(\lambda\) is too large, the NFRRM model mostly minimizes the regularization term; consequently, the output image is over-smoothed and loses many details, as shown in Fig. 11 (c).

Figure 13 investigates the sensitivity of \(\lambda\). As \(\lambda\) grows, the denoising performance (PSNR and SSIM) improves sharply and peaks at \(\lambda=1\); it then decreases as \(\lambda \rightarrow +\infty\). In this paper, suitable values of \(\lambda\) are determined experimentally.

Fig. 13  The effects of \(\lambda\) on the denoising performance of NFRRM. The corrupted observation is generated by adding zero-mean Gaussian noise with standard deviation \(\sigma_n = 30\) on image “Starfish”. Other parameters are all fixed.

5.2  Parameter \(\alpha\)

The parameter \(\alpha\) controls the shrinkage of singular values. When it is too small (\(\alpha \rightarrow 0\)), NNFN behaves like the nuclear norm, characterized by the green line in Fig. 2. In that case, the leading singular values are over-shrunk during the regularization process of the NFRRM model; consequently, the output image is over-smoothed, losing many details, as shown in Fig. 14 (b). When \(\alpha\) is too large (\(\alpha \rightarrow +\infty\)), too many singular values are shrunk to zero, characterized by the yellow line in Fig. 2; consequently, the output image contains noise and artifacts at the edges, as shown in Fig. 14 (d).

Fig. 14  The impact of \(\alpha\). (a) The corrupted observation (\(\sigma_n = 20\)), \(\text{PSNR} = 20.81\) dB, \(\text{SSIM} = 0.4731\). (b) \(\alpha = 0\): \(\text{PSNR} = 27.63\) dB, \(\text{SSIM} = 0.8292\). (c) \(\alpha = 2.4\): \(\text{PSNR} = 28.04\) dB, \(\text{SSIM} = 0.8349\). (d) \(\alpha = 10\): \(\text{PSNR} = 27.38\) dB, \(\text{SSIM} = 0.8258\).

Figure 15 investigates the sensitivity of \(\alpha\). As can be seen, the best \(\alpha\) lies in between these extremes. In this paper, suitable \(\alpha\) is also determined experimentally.

Fig. 15  PSNR and SSIM results on “Starfish” with different \(\alpha\).

An empirical strategy is devised to choose \(\alpha\) efficiently: if the corrupted image has a higher SSIM value (denoted as \(\mathit{SSIM}_0\)), a larger \(\alpha\) is preferred, and vice versa. Consider the ten images corrupted by \(\sigma_n=50\). Their initial SSIM values, i.e., \(\mathit{SSIM}_0\), are sorted in ascending order, and the ten images are divided into two groups accordingly, as shown in Table 6. For each corrupted image, we search for its “best \(\alpha\)”, i.e., the \(\alpha\) that makes NFRRM produce the highest SSIM on that image; the results are also listed in Table 6. As can be seen, there is a wide gap between the average “best \(\alpha\)” of the two groups, and the images in the second group tend to have larger “best \(\alpha\)”. This justifies the aforementioned empirical strategy, which is efficient since it only uses the SSIM value of the corrupted image, which is easily obtained.
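
A minimal sketch of this strategy is given below; the threshold and the two candidate values of \(\alpha\) are placeholders, since the paper only reports the group averages in Table 6.

```python
def choose_alpha(ssim0, threshold=0.5, alpha_low=1.0, alpha_high=2.5):
    """Pick alpha from the noisy image's SSIM_0; all three defaults are placeholders."""
    return alpha_high if ssim0 >= threshold else alpha_low
```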

Table 6  The best \(\alpha\) for each image.

5.3  The Interval of Key Patches \(s\)

Figure 16 investigates the sensitivity of \(s\). As \(s\) decreases (\(8\rightarrow 5\)), the denoising performance improves significantly, because more key patches are generated and hence the nonlocal self-similarity of the image is exploited more fully. However, as \(s\) becomes too small (\(5 \rightarrow 1\)), the running time increases sharply while the PSNR improves little. Therefore, \(s\) should be chosen judiciously to balance denoising performance and running time.

Fig. 16  Effects of \(s\) on denoising performance of NFRRM. Running time is in seconds.

6.  Conclusion

In this paper, a new image denoising method, called nuclear norm minus Frobenius norm rank residual minimization (NFRRM), was proposed. The proposed method converts the ill-posed image denoising problem into a nonconvex optimization problem by exploiting the image NSS prior. The sound NNFN regularizer was chosen to model the rank residual, allowing the proposed NFRRM model to treat different rank residual components with high flexibility. We showed that the global optimum of the nonconvex optimization problem can be obtained in closed form. Extensive experimental results demonstrated that the proposed method outperforms several state-of-the-art image denoising methods.

Acknowledgements

This work was supported by the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant KJQN202303413.

References

[1] Z. Liu, D. Hu, Z. Wang, J. Gou, and T. Jia, “LatLRR for subspace clustering via reweighted Frobenius norm minimization,” Expert Syst. Appl., vol.224, Art. no.119977, Aug. 2023.
CrossRef

[2] Y. Chang, L. Yan, T. Wu, and S. Zhong, “Remote sensing image stripe noise removal: From image decomposition perspective,” IEEE Trans. Geosci. Remote Sens., vol.54, no.12, pp.7018-7031, 2016.
CrossRef

[3] G. Bi, G. Si, Y. Zhao, B. Qi, and H. Lv, “Haze removal for a single remote sensing image using low-rank and sparse prior,” IEEE Trans. Geosci. Remote Sensing, vol.60, pp.1-13, Dec. 2022.
CrossRef

[4] X. Ding, C. Shen, T. Zeng, and Y. Peng, “SAB Net: A semantic attention boosting framework for semantic segmentation,” IEEE Trans. Neural Netw. Learn. Syst., pp.1-13, 2022.
CrossRef

[5] P.-H. Hsiao, F.-J. Chang, and Y.-Y. Lin, “Learning discriminatively reconstructed source data for object recognition with few examples,” IEEE Trans. Image Process., vol.25, no.8, pp.3518-3532, 2016.
CrossRef

[6] X. Jiang and J. Lai, “Sparse and dense hybrid representation via dictionary decomposition for face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol.37, no.5, pp.1067-1079, May 2015.
CrossRef

[7] S. Liao, A.K. Jain, and S.Z. Li, “Partial face recognition: Alignment-free approach,” IEEE Trans. Pattern Anal. Mach. Intell., vol.35, no.5, pp.1193-1205, May 2013.
CrossRef

[8] S. Shen, “Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes,” IEEE Trans. Image Process., vol.22, no.5, pp.1901-1914, May 2013.
CrossRef

[9] Z. Wang, Y. Liu, X. Luo, J. Wang, C. Gao, D. Peng, and W. Chen, “Large-scale affine matrix rank minimization with a novel nonconvex regularizer,” IEEE Trans. Neural Netw. Learn. Syst., vol.33, no.9, pp.4661-4675, Sept. 2022.
CrossRef

[10] Z. Wang, W. Wang, J. Wang, and S. Chen, “Fast and efficient algorithm for matrix completion via closed-form 2/3-thresholding operator,” Neurocomputing, vol.330, pp.212-222, Feb. 2019.
CrossRef

[11] Z. Wang, D. Hu, X. Luo, W. Wang, J. Wang, and W. Chen, “Performance guarantees of transformed Schatten-1 regularization for exact low-rank matrix recovery,” Int. J. Mach. Learn. Cybern., vol.12, pp.3379-3395, June 2021.
CrossRef

[12] Z. Wang, C. Gao, X. Luo, M. Tang, J. Wang, and W. Chen, “Accelerated inexact matrix completion algorithm via closed-form q-thresholding (q=1/2, 2/3) operator,” Int. J. Mach. Learn. Cybern., vol.11, pp.2327-2339, April 2020.
CrossRef

[13] Y. Shan, D. Hu, Z. Wang, and T. Jia, “Multi-channel nuclear norm minus Frobenius norm minimization for color image denoising,” Signal Process., vol.207, Art. no.108959, June 2023.
CrossRef

[14] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted nuclear norm minimization and its applications to low level vision,” Int. J. Comput. Vis., vol.121, pp.183-208, July 2017.
CrossRef

[15] Y. Xie, S. Gu, Y. Liu, W. Zuo, W. Zhang, and L. Zhang, “Weighted Schatten p-norm minimization for image denoising and background subtraction,” IEEE Trans. Image Process., vol.25, no.10, pp.4842-4857, Aug. 2016.
CrossRef

[16] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process., vol.16, no.8, pp.2080-2095, July 2007.
CrossRef

[17] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process., vol.15, no.12, pp.3736-3745, 2006.
CrossRef

[18] Z. Zha, X. Yuan, B. Wen, J. Zhou, and C. Zhu, “Group sparsity residual constraint with non-local priors for image restoration,” IEEE Trans. Image Process., vol.29, pp.8960-8975, 2020.
CrossRef

[19] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process., vol.26, no.7, pp.3142-3155, Feb. 2017.
CrossRef

[20] K. Zhang, W. Zuo, and L. Zhang, “FFDNet: Toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process., vol.27, no.9, pp.4608-4622, May 2018.
CrossRef

[21] M. Zhao, G. Cao, X. Huang, and L. Yang, “Hybrid transformer-CNN for real image denoising,” IEEE Signal Process. Lett., vol.29, pp.1252-1256, May 2022.
CrossRef

[22] F. Jia, L. Ma, Y. Yang, and T. Zeng, “Pixel-attention CNN with color correlation loss for color image denoising,” IEEE Signal Process. Lett., vol.28, pp.1600-1604, July 2021.
CrossRef

[23] Y. Song, Y. Zhu, and X. Du, “Grouped multi-scale network for real-world image denoising,” IEEE Signal Process. Lett., vol.27, pp.2124-2128, Nov. 2020.
CrossRef

[24] T. Liu, D. Hu, Z. Wang, J. Gou, and W. Chen, “Hyperspectral image denoising using nonconvex fraction function,” IEEE Geosci. Remote Sens. Lett., vol.20, pp.1-5, Art. no.5508105, Aug. 2023.
CrossRef

[25] Y. Shi, T. Liu, D. Hu, C. Li, and Z. Wang, “Nonconvex regularization with multi-weighted strategy for real color image denoising,” Int. J. Intell. Syst., vol.2023, Art. no.8813500, Sept. 2023.
CrossRef

[26] S. Wang, L. Zhang, and Y. Liang, “Nonlocal spectral prior model for low-level vision,” Asian Conf. Comput. Vis., pp.231-244, Nov. 2012.
CrossRef

[27] D.L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol.52, no.4, pp.1289-1306, 2006.
CrossRef

[28] E.J. Candes and M.B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag., vol.25, no.2, pp.21-30, 2008.
CrossRef

[29] M. Fazel, Matrix rank minimization with applications, Ph.D. thesis, Stanford University, 2002.

[30] E.J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?,” J. ACM, vol.58, no.3, Art. no.11, pp.1-37, 2011.
CrossRef

[31] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, U.K., 2004.
CrossRef

[32] J.-F. Cai, E.J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. Optim., vol.20, no.4, pp.1956-1982, March 2010.
CrossRef

[33] K. Toh and S. Yun, “An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems,” Pac. J. Optim., vol.6, no.3, pp.615-640, 2010.

[34] F. Nie, H. Huang, and C. Ding, “Low-rank matrix recovery via efficient Schatten p-norm minimization,” AAAI Conf. Artif. Intell., vol.26, no.1, pp.655-661, 2012.
CrossRef

[35] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He, “Fast and accurate matrix completion via truncated nuclear norm regularization,” IEEE Trans. Pattern Anal. Mach. Intell., vol.35, no.9, pp.2117-2130, 2013.
CrossRef

[36] T.-H. Oh, Y.-W. Tai, J.-C. Bazin, H. Kim, and I.S. Kweon, “Partial sum minimization of singular values in robust PCA: Algorithm and applications,” IEEE Trans. Pattern Anal. Mach. Intell., vol.38, no.4, pp.744-758, April 2016.
CrossRef

[37] Q. Sun, S. Xiang, and J. Ye, “Robust principal component analysis via capped norms,” 19th ACM SIGKDD Int. Conf. Knowl. Discov. Data Mining, New York, NY, USA, pp.311-319, Association for Computing Machinery, 2013.
CrossRef

[38] F. Zhang, Z. Yang, Y. Chen, J. Yang, and G. Yang, “Matrix completion via capped nuclear norm,” IET Image Process., vol.12, no.6, pp.959-966, 2018.
CrossRef

[39] W. Zuo, D. Meng, L. Zhang, X. Feng, and D. Zhang, “A generalized iterated shrinkage algorithm for non-convex sparse coding,” IEEE Int. Conf. Comput. Vis., pp.217-224, 2013.
CrossRef

[40] Z. Zha, X. Yuan, B. Wen, J. Zhou, J. Zhang, and C. Zhu, “From rank estimation to rank approximation: Rank residual constraint for image restoration,” IEEE Trans. Image Process., vol.29, pp.3254-3269, 2020.
CrossRef

[41] T. Zhang, D. Wu, and X. Mo, “The rank residual constraint model with weighted Schatten p-norm minimization for image denoising,” Circuits Syst. Signal Process., vol.42, pp.4740-4758, 2023.
CrossRef

[42] Y. Lou and M. Yan, “Fast L1-L2 minimization via a proximal operator,” SIAM J. Sci. Comput., vol.74, no.2, pp.767-785, 2018.
CrossRef

[43] J. Xu, L. Zhang, D. Zhang, and X. Feng, “Multi-channel weighted nuclear norm minimization for real color image denoising,” IEEE Int. Conf. Comput. Vis., pp.1105-1113, 2017.
CrossRef

[44] X. Huang, B. Du, and W. Liu, “Multichannel color image denoising via weighted Schatten p-norm minimization,” Int. Joint Conf. Artif. Intell., pp.637-644, 2020.
CrossRef

[45] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” IEEE Conf. Comput. Vis. Pattern Recognit., vol.2, pp.60-65, 2005.
CrossRef

[46] Y. Wang, Q. Yao, and J. Kwok, “A scalable, adaptive and sound nonconvex regularizer for low-rank matrix learning,” Int. World Wide Web Conf., pp.1798-1808, 2021.
CrossRef

[47] L. Mirsky, “A trace inequality of John von Neumann,” Monatshefte für Mathematik, vol.79, pp.303-306, 1975.
CrossRef

[48] H. Talebi and P. Milanfar, “Global image denoising,” IEEE Trans. Image Process., vol.23, no.2, pp.755-768, 2014.
CrossRef

[49] J. Pang and G. Cheung, “Graph Laplacian regularization for image denoising: Analysis in the continuous domain,” IEEE Trans. Image Process., vol.26, no.4, pp.1770-1785, 2017.
CrossRef

[50] Z. Zha, X. Yuan, B. Wen, J. Zhang, and C. Zhu, “Nonconvex structural sparsity residual constraint for image restoration,” IEEE Trans. Cybern., vol.52, no.11, pp.12440-12453, 2022.
CrossRef

[51] D.M. Vo, T.P. Le, D.M. Nguyen, and S.-W. Lee, “BoostNet: A boosted convolutional neural network for image blind denoising,” IEEE Access, vol.9, pp.115145-115164, 2021.
CrossRef

Authors

Hua HUANG
  Chongqing Vocational Institute of Engineering

received the M.S. degree in computer science and engineering from Chongqing University, Chongqing, China, in 2014. He is currently an Associate Professor with the Information Construction and Management Center, Chongqing Vocational Institute of Engineering, Chongqing, China.

Yiwen SHAN
  Southwest University

is currently working toward the B.S. degree in the College of Computer and Information Science, Southwest University, Chongqing, China. His research focuses on machine learning.

Chuan LI
  Chongqing College of International Business and Economics

received the M.S. degree in computer science and technology from Southwest University, Chongqing, China, in 2013. He is currently a Ph.D. student in the College of Computer and Information Science, Southwest University, Chongqing, China. His current research interests include data mining and education informatization.

Zhi WANG
  Chongqing College of International Business and Economics

received the B.S. and M.S. degrees in computer science and engineering, and the Ph.D. degree in statistics from Southwest University, Chongqing, China, in 2005, 2009, and 2020, respectively. He is currently an Associate Professor with the College of Computer and Information Science, Southwest University, Chongqing, China. He has published more than 20 academic articles in reputable journals and conferences, including IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Neural Networks and Learning Systems, IEEE Geoscience and Remote Sensing Letters, Neurocomputing, Signal Processing, Expert Systems With Applications, International Journal of Machine Learning and Cybernetics, International Journal of Intelligent Systems, IJCNN, PRICAI, and HPCC. His research interests include low-rank matrix/tensor recovery, machine learning, and convex/nonconvex optimization. He is a member of the IEEE.
