Interpretability has become an important issue in the machine learning field, alongside the success of layered neural networks in various practical tasks. Since a trained layered neural network encodes a complex nonlinear relationship among a large number of parameters, it is difficult to understand how it achieves an input-output mapping for a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and to reveal the roles of hidden units in terms of their contribution to each principal task.
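The core operation the abstract describes, factorizing a non-negative matrix derived from a trained network into a small number of "principal task" components, can be sketched as follows. This is a minimal illustration of standard non-negative matrix factorization via multiplicative updates (Lee-Seung), not the paper's exact algorithm; the toy "task matrix" and all variable names are assumptions for demonstration.

```python
# Minimal NMF sketch: V (m x n, non-negative) is approximated as W @ H,
# where k << min(m, n). In the paper's setting, V could be a non-negative
# matrix of hidden-unit contributions to input-output pairs (illustrative
# interpretation, not the authors' exact construction).
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, seed=0):
    """Factorize V into non-negative W (m x k) and H (k x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Standard multiplicative updates; eps guards against division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative matrix: rows = hidden units, columns = input-output pairs.
V = np.abs(np.random.default_rng(1).normal(size=(8, 6)))
W, H = nmf(V, k=2)

# Each column of W weights the hidden units for one extracted component
# ("principal task"); H gives each component's footprint over the data.
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The number of components `k` controls how coarsely the inference mechanism is decomposed; the relative reconstruction error `rel_err` indicates how much of the original matrix the chosen components explain.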
Chihiro WATANABE
NTT Communication Science Laboratories
Kaoru HIRAMATSU
NTT Geospace Corporation, NEXTSITE Asakusa Building
Kunio KASHINO
NTT Communication Science Laboratories
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Chihiro WATANABE, Kaoru HIRAMATSU, Kunio KASHINO, "Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition" in IEICE TRANSACTIONS on Information,
vol. E103-D, no. 2, pp. 390-397, February 2020, doi: 10.1587/transinf.2019EDP7136.
Abstract: Interpretability has become an important issue in the machine learning field, alongside the success of layered neural networks in various practical tasks. Since a trained layered neural network encodes a complex nonlinear relationship among a large number of parameters, it is difficult to understand how it achieves an input-output mapping for a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and to reveal the roles of hidden units in terms of their contribution to each principal task.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDP7136/_p
@ARTICLE{e103-d_2_390,
author={Chihiro WATANABE and Kaoru HIRAMATSU and Kunio KASHINO},
journal={IEICE TRANSACTIONS on Information},
title={Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition},
year={2020},
volume={E103-D},
number={2},
pages={390-397},
abstract={Interpretability has become an important issue in the machine learning field, alongside the success of layered neural networks in various practical tasks. Since a trained layered neural network encodes a complex nonlinear relationship among a large number of parameters, it is difficult to understand how it achieves an input-output mapping for a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and to reveal the roles of hidden units in terms of their contribution to each principal task.},
keywords={},
doi={10.1587/transinf.2019EDP7136},
ISSN={1745-1361},
month={February},
}
TY - JOUR
TI - Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition
T2 - IEICE TRANSACTIONS on Information
SP - 390
EP - 397
AU - Chihiro WATANABE
AU - Kaoru HIRAMATSU
AU - Kunio KASHINO
PY - 2020
DO - 10.1587/transinf.2019EDP7136
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E103-D
IS - 2
JA - IEICE TRANSACTIONS on Information
Y1 - February 2020
AB - Interpretability has become an important issue in the machine learning field, alongside the success of layered neural networks in various practical tasks. Since a trained layered neural network encodes a complex nonlinear relationship among a large number of parameters, it is difficult to understand how it achieves an input-output mapping for a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping, and to reveal the roles of hidden units in terms of their contribution to each principal task.
ER -