In this study, we introduce shift-invariant sparse image representations using tree-structured dictionaries. Sparse coding is a generative signal model that approximates signals by linear combinations of atoms in a dictionary. Since a sparsity penalty is introduced during signal approximation and dictionary learning, the dictionary represents the primal structures of the signals. Under the shift-invariance constraint, the dictionary comprises translated structuring elements (SEs). The computational cost and the number of atoms in the dictionary increase with the number of SEs. In this paper, we propose an algorithm for shift-invariant sparse image representation in which SEs are learnt with a tree-structured approach. By using a tree-structured dictionary, we can reduce the computational cost of the image decomposition to the logarithmic order of the number of SEs. We also present experimental results on SE learning and on the use of our algorithm in image recovery applications.
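The claim that a tree-structured dictionary reduces the decomposition cost to the logarithmic order of the number of SEs can be illustrated with a toy sketch. The Python snippet below is not the authors' algorithm; it only contrasts an exhaustive O(K) correlation search over K structuring elements with a greedy descent through an assumed binary tree of SE centroids, which visits O(log K) nodes. The centroid-based tree layout, the correlation scoring, and all names are illustrative assumptions.

# Hedged sketch (not the authors' implementation): toy illustration of why a
# tree-structured dictionary cuts the per-atom search from O(K) to O(log K),
# where K is the number of structuring elements (SEs).
import numpy as np

def best_se_flat(patch, ses):
    # Exhaustive search: correlate the patch with every SE (O(K)).
    scores = [np.abs(np.vdot(se, patch)) for se in ses]
    return int(np.argmax(scores))

def build_tree(ses, indices=None):
    # Recursively split the SE index set in two; each node stores a mean "centroid".
    if indices is None:
        indices = list(range(len(ses)))
    centroid = np.mean([ses[i] for i in indices], axis=0)
    if len(indices) == 1:
        return {"centroid": centroid, "se_index": indices[0]}
    half = len(indices) // 2
    return {"centroid": centroid,
            "children": (build_tree(ses, indices[:half]),
                         build_tree(ses, indices[half:]))}

def best_se_tree(patch, tree):
    # Greedy descent: at each node follow the child whose centroid correlates
    # more strongly with the patch. Approximate, but only O(log K) comparisons.
    node = tree
    while "children" in node:
        left, right = node["children"]
        node = left if (np.abs(np.vdot(left["centroid"], patch))
                        >= np.abs(np.vdot(right["centroid"], patch))) else right
    return node["se_index"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, dim = 16, 8 * 8                      # 16 SEs, 8x8 patches flattened
    ses = [rng.standard_normal(dim) for _ in range(K)]
    ses = [se / np.linalg.norm(se) for se in ses]
    tree = build_tree(ses)
    patch = ses[5] + 0.05 * rng.standard_normal(dim)   # noisy copy of SE #5
    print("flat search :", best_se_flat(patch, ses))
    print("tree search :", best_se_tree(patch, tree))

In a shift-invariant setting the same idea applies per image position: the flat search corresponds to correlating every SE at every shift, while the tree search prunes most SEs before their shifted copies are ever evaluated.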
Makoto NAKASHIZUKA, Hidenari NISHIURA, Youji IIGUNI, "Shift-Invariant Sparse Image Representations Using Tree-Structured Dictionaries" in IEICE TRANSACTIONS on Fundamentals,
vol. E92-A, no. 11, pp. 2809-2818, November 2009, doi: 10.1587/transfun.E92.A.2809.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E92.A.2809/_p
@ARTICLE{e92-a_11_2809,
author={Makoto NAKASHIZUKA and Hidenari NISHIURA and Youji IIGUNI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Shift-Invariant Sparse Image Representations Using Tree-Structured Dictionaries},
year={2009},
volume={E92-A},
number={11},
pages={2809-2818},
keywords={},
doi={10.1587/transfun.E92.A.2809},
ISSN={1745-1337},
month={November},}
TY - JOUR
TI - Shift-Invariant Sparse Image Representations Using Tree-Structured Dictionaries
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 2809
EP - 2818
AU - Makoto NAKASHIZUKA
AU - Hidenari NISHIURA
AU - Youji IIGUNI
PY - 2009
DO - 10.1587/transfun.E92.A.2809
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E92-A
IS - 11
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - November 2009
ER -