Apriori-based mining algorithms enumerate frequent patterns efficiently, but the sheer number of resulting patterns makes it difficult to apply subsequent learning tasks directly. Recently, efficient iterative methods have been proposed for mining discriminative patterns for classification and regression. These methods iteratively run a discriminative pattern mining algorithm and update example weights to emphasize examples that received large errors in the previous iteration. In this paper, we study a family of loss functions that induce sparsity in the example weights. Most of the resulting example weights become zero, so those examples can be eliminated from discriminative pattern mining, which significantly reduces the search space and the mining time. In computational experiments, we compare various loss functions in terms of the amount of sparsity they induce and the resulting speed-up.
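To make the weighting scheme described in the abstract concrete, here is a minimal sketch (not the authors' implementation) of one weighted mining iteration. It uses the hinge loss purely as one example of a loss that is flat for well-fit examples, and mine_best_pattern is a hypothetical callback standing in for the weighted discriminative pattern miner.

import numpy as np

def example_weights_hinge(margins):
    """Subgradient-based example weights for the hinge loss max(0, 1 - margin)."""
    # Examples whose margin is already at least 1 incur zero loss, hence zero weight.
    return np.where(margins < 1.0, 1.0, 0.0)

def mining_iteration(y, f, mine_best_pattern):
    """One iteration of weighted discriminative pattern mining (conceptual sketch)."""
    margins = y * f                      # current margins on the training examples
    w = example_weights_hinge(margins)   # sparse weights: zero for well-fit examples
    active = np.nonzero(w)[0]            # only these examples influence the search
    # Passing only the active examples shrinks the miner's search space whenever
    # most of the weights are zero.
    return mine_best_pattern(active, w[active] * y[active])

# Example with a dummy miner that just reports how many examples it was given
# (the mine_best_pattern interface is assumed here, not taken from the paper):
y = np.array([+1, -1, +1, +1, -1], dtype=float)
f = np.array([2.0, -1.5, 0.3, 0.9, 0.2])   # the first two examples already have margin >= 1
print(mining_iteration(y, f, lambda idx, wy: ("dummy-pattern", len(idx))))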
Hiroto SAIGO
Kyushu Institute of Technology
Hisashi KASHIMA
University of Tokyo
Koji TSUDA
Advanced Industrial Science and Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hiroto SAIGO, Hisashi KASHIMA, Koji TSUDA, "Fast Iterative Mining Using Sparsity-Inducing Loss Functions" in IEICE Transactions on Information and Systems, vol. E96-D, no. 8, pp. 1766-1773, August 2013, doi: 10.1587/transinf.E96.D.1766.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E96.D.1766/_p
@ARTICLE{e96-d_8_1766,
author={Hiroto SAIGO and Hisashi KASHIMA and Koji TSUDA},
journal={IEICE Transactions on Information and Systems},
title={Fast Iterative Mining Using Sparsity-Inducing Loss Functions},
year={2013},
volume={E96-D},
number={8},
pages={1766-1773},
doi={10.1587/transinf.E96.D.1766},
ISSN={1745-1361},
month={August},}
TY - JOUR
TI - Fast Iterative Mining Using Sparsity-Inducing Loss Functions
T2 - IEICE Transactions on Information and Systems
SP - 1766
EP - 1773
AU - Hiroto SAIGO
AU - Hisashi KASHIMA
AU - Koji TSUDA
PY - 2013
DO - 10.1587/transinf.E96.D.1766
JO - IEICE Transactions on Information and Systems
SN - 1745-1361
VL - E96-D
IS - 8
JA - IEICE Transactions on Information and Systems
Y1 - August 2013
ER -