
In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits from the sparsification under computational constraints. To this end, we formulate the task of algorithm design as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are formed by focusing on the fact that the energy of the sparsified error vector concentrates at the first few components. The numerical examples demonstrate that the proposed algorithm achieves convergence as fast as the computationally expensive method based on the optimization *without* the computational constraints.
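The abstract describes sparsifying an error vector and driving a gradient-type update with only its dominant components. The following is a rough, hypothetical sketch of that general idea only, not the authors' actual algorithm: the top-K selection rule, the normalization, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy illustration of error-vector sparsification in an affine-projection-style
# stochastic gradient update. NOT the algorithm of Yukawa & Utschick; every
# design choice below (top-K selection, normalization, step size) is assumed.

rng = np.random.default_rng(0)

N = 16        # filter length
P = 8         # number of stacked regressors (error-vector length)
K = 2         # components kept after sparsification
mu = 0.5      # step size

h_true = rng.standard_normal(N)      # unknown system to identify
w = np.zeros(N)                      # adaptive filter coefficients

x = rng.standard_normal(N + P - 1)   # input samples
# P x N regressor matrix: each row is a shifted window of the input.
X = np.array([x[p:p + N] for p in range(P)])
d = X @ h_true                       # desired responses (noise-free toy case)

e = d - X @ w                        # initial error vector
# Sparsify: zero all but the K largest-magnitude components, so the update
# cost is dominated by only a few terms.
drop = np.argsort(np.abs(e))[:-K]
e_sparse = e.copy()
e_sparse[drop] = 0.0

# Gradient-type update driven by the sparsified error vector.
w = w + mu * (X.T @ e_sparse) / (np.sum(X ** 2) + 1e-12)
```

Because only K of the P error components survive, the matrix-vector product `X.T @ e_sparse` touches just K rows of `X`, which is the kind of computational saving the abstract alludes to.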

- Publication
- IEICE TRANSACTIONS on Fundamentals Vol.E93-A No.2 pp.467-475

- Publication Date
- 2010/02/01

- Online ISSN
- 1745-1337

- DOI
- 10.1587/transfun.E93.A.467

- Type of Manuscript
- PAPER

- Category
- Digital Signal Processing

The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.


Masahiro YUKAWA, Wolfgang UTSCHICK, "A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints" in IEICE TRANSACTIONS on Fundamentals,
vol. E93-A, no. 2, pp. 467-475, February 2010, doi: 10.1587/transfun.E93.A.467.

Abstract: In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits from the sparsification under computational constraints. To this end, we formulate the task of algorithm design as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are formed by focusing on the fact that the energy of the sparsified error vector concentrates at the first few components. The numerical examples demonstrate that the proposed algorithm achieves convergence as fast as the computationally expensive method based on the optimization *without* the computational constraints.

URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E93.A.467/_p


@ARTICLE{e93-a_2_467,
  author={Masahiro Yukawa and Wolfgang Utschick},
  journal={IEICE TRANSACTIONS on Fundamentals},
  title={A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints},
  year={2010},
  volume={E93-A},
  number={2},
  pages={467-475},
  abstract={In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits from the sparsification under computational constraints. To this end, we formulate the task of algorithm design as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are formed by focusing on the fact that the energy of the sparsified error vector concentrates at the first few components. The numerical examples demonstrate that the proposed algorithm achieves convergence as fast as the computationally expensive method based on the optimization *without* the computational constraints.},
  doi={10.1587/transfun.E93.A.467},
  ISSN={1745-1337},
  month={February},
}


TY - JOUR

TI - A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints

T2 - IEICE TRANSACTIONS on Fundamentals

SP - 467

EP - 475

AU - Masahiro YUKAWA

AU - Wolfgang UTSCHICK

PY - 2010

DO - 10.1587/transfun.E93.A.467

JO - IEICE TRANSACTIONS on Fundamentals

SN - 1745-1337

VL - E93-A

IS - 2

JA - IEICE TRANSACTIONS on Fundamentals

Y1 - 2010/02/01

AB - In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits from the sparsification under computational constraints. To this end, we formulate the task of algorithm design as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are formed by focusing on the fact that the energy of the sparsified error vector concentrates at the first few components. The numerical examples demonstrate that the proposed algorithm achieves convergence as fast as the computationally expensive method based on the optimization *without* the computational constraints.

ER -