
This paper presents a convex-analytic framework for learning sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the ℓ1 norm to attain graphs with better interpretability. Specifically, we use the weakly convex minimax concave penalty (the difference between the ℓ1 norm and the Huber function), which is known to yield sparse solutions with lower estimation bias than the ℓ1 norm in regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper-triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function into the cost function. The problem can be solved efficiently by the primal-dual splitting method, for which admissible conditions for provable convergence are presented. Numerical examples show that the proposed method significantly outperforms existing graph learning methods with reasonable computation time.
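To make the penalty concrete, the following is a minimal Python sketch (not the authors' implementation) of the scalar minimax concave penalty written as the ℓ1 term minus the Huber function, together with its proximity operator, known as firm thresholding. The function names and the parameter choices (λ = 1, γ = 2) are illustrative only; the sketch shows the bias reduction relative to ℓ1 soft thresholding that the abstract refers to.

```python
import math

def huber(x, lam, gamma):
    """Huber function with threshold gamma*lam (quadratic near zero, linear far away)."""
    ax = abs(x)
    if ax <= gamma * lam:
        return ax * ax / (2.0 * gamma)
    return lam * ax - gamma * lam * lam / 2.0

def mcp_penalty(x, lam, gamma):
    """Minimax concave penalty = lam*|x| - Huber(x); constant for |x| > gamma*lam."""
    ax = abs(x)
    if ax <= gamma * lam:
        return lam * ax - ax * ax / (2.0 * gamma)
    return gamma * lam * lam / 2.0

def soft_threshold(x, lam):
    """Proximity operator of the l1 penalty lam*|x|: shrinks every coefficient by lam."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def firm_threshold(x, lam, gamma):
    """Proximity operator of the MCP (requires gamma > 1): firm thresholding."""
    ax = abs(x)
    if ax <= lam:
        return 0.0
    if ax <= gamma * lam:
        return math.copysign(gamma * (ax - lam) / (gamma - 1.0), x)
    return x  # large coefficients pass through unchanged (no estimation bias)

lam, gamma = 1.0, 2.0
# Both operators zero out small inputs, but only soft thresholding
# keeps shrinking large ones:
# soft_threshold(5.0, 1.0) -> 4.0 (biased by lam)
# firm_threshold(5.0, 1.0, 2.0) -> 5.0 (unbiased)
for x in (0.5, 1.5, 5.0):
    print(x, soft_threshold(x, lam), firm_threshold(x, lam, gamma))
```

The key property visible here is that the MCP agrees with the ℓ1 penalty near zero (preserving sparsity) but flattens out beyond γλ, so large coefficients are not shrunk; the paper exploits the fact that MCP = ℓ1 − Huber to recover overall convexity of the cost via an added quadratic term.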

- Publication
- IEICE TRANSACTIONS on Fundamentals Vol.E106-A No.1 pp.23-34

- Publication Date
- 2023/01/01

- Publicized
- 2022/07/01

- Online ISSN
- 1745-1337

- DOI
- 10.1587/transfun.2021EAP1153

- Type of Manuscript
- PAPER

- Category
- Graphs and Networks

Tatsuya KOYAKUMARU

Keio University

Masahiro YUKAWA

Keio University

Eduardo PAVEZ

University of Southern California

Antonio ORTEGA

University of Southern California

The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.


Tatsuya KOYAKUMARU, Masahiro YUKAWA, Eduardo PAVEZ, Antonio ORTEGA, "Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields" in IEICE TRANSACTIONS on Fundamentals,
vol. E106-A, no. 1, pp. 23-34, January 2023, doi: 10.1587/transfun.2021EAP1153.

Abstract: This paper presents a convex-analytic framework for learning sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the ℓ1 norm to attain graphs with better interpretability. Specifically, we use the weakly convex minimax concave penalty (the difference between the ℓ1 norm and the Huber function), which is known to yield sparse solutions with lower estimation bias than the ℓ1 norm in regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper-triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function into the cost function. The problem can be solved efficiently by the primal-dual splitting method, for which admissible conditions for provable convergence are presented. Numerical examples show that the proposed method significantly outperforms existing graph learning methods with reasonable computation time.

URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021EAP1153/_p


@ARTICLE{e106-a_1_23,

author={Tatsuya KOYAKUMARU and Masahiro YUKAWA and Eduardo PAVEZ and Antonio ORTEGA},

journal={IEICE TRANSACTIONS on Fundamentals},

title={Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields},

year={2023},

volume={E106-A},

number={1},

pages={23-34},

abstract={This paper presents a convex-analytic framework for learning sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the ℓ1 norm to attain graphs with better interpretability. Specifically, we use the weakly convex minimax concave penalty (the difference between the ℓ1 norm and the Huber function), which is known to yield sparse solutions with lower estimation bias than the ℓ1 norm in regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper-triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function into the cost function. The problem can be solved efficiently by the primal-dual splitting method, for which admissible conditions for provable convergence are presented. Numerical examples show that the proposed method significantly outperforms existing graph learning methods with reasonable computation time.},

keywords={},

doi={10.1587/transfun.2021EAP1153},

ISSN={1745-1337},

month={January},}


TY - JOUR

TI - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields

T2 - IEICE TRANSACTIONS on Fundamentals

SP - 23

EP - 34

AU - Tatsuya KOYAKUMARU

AU - Masahiro YUKAWA

AU - Eduardo PAVEZ

AU - Antonio ORTEGA

PY - 2023

DO - 10.1587/transfun.2021EAP1153

JO - IEICE TRANSACTIONS on Fundamentals

SN - 1745-1337

VL - E106-A

IS - 1

JA - IEICE TRANSACTIONS on Fundamentals

Y1 - January 2023

AB - This paper presents a convex-analytic framework for learning sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the ℓ1 norm to attain graphs with better interpretability. Specifically, we use the weakly convex minimax concave penalty (the difference between the ℓ1 norm and the Huber function), which is known to yield sparse solutions with lower estimation bias than the ℓ1 norm in regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper-triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function into the cost function. The problem can be solved efficiently by the primal-dual splitting method, for which admissible conditions for provable convergence are presented. Numerical examples show that the proposed method significantly outperforms existing graph learning methods with reasonable computation time.

ER -