Hiroki TANJI Ryo TANAKA Kyohei TABATA Yoshito ISEKI Takahiro MURAKAMI Yoshihisa ISHIDA
In this paper, we present update rules for convolutive nonnegative matrix factorization (NMF) whose cost functions are based on the squared Euclidean distance, the Kullback-Leibler (KL) divergence, and the Itakura-Saito (IS) divergence. We define an auxiliary function for each cost function and derive the corresponding update rule. We also apply this method to single-channel separation of speech signals. Experimental results showed that the convergence of our KL divergence-based method was better than that of the conventional method, and that our method achieved single-channel signal separation successfully.
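To make the KL divergence-based update rules concrete, here is a minimal sketch of standard (non-convolutive) NMF with multiplicative updates minimizing D(V || WH). This is an illustration of the classical rule only, not the paper's convolutive variant, which additionally sums over time-shifted copies of the basis matrix; all function and variable names are hypothetical.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates minimizing the KL divergence D(V || WH).

    Sketch of standard NMF; the paper's convolutive variant adds a
    time-shift index to W and to both update rules.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps   # nonnegative basis matrix
    H = rng.random((rank, m)) + eps   # nonnegative activation matrix
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps
        # Standard KL multiplicative update for W
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
        WH = W @ H + eps
        # Standard KL multiplicative update for H
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
    return W, H
```

Because each update multiplies by a nonnegative ratio, nonnegativity of W and H is preserved automatically, and the KL cost is non-increasing at every iteration.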
Masaki MURATA Hiroki TANJI Kazuhide YAMAMOTO Stijn DE SAEGER Yasunori KAKIZAWA Kentaro TORISAWA
In this study, we extracted articles describing problems, articles describing their solutions, and articles describing their causes from a Japanese Q&amp;A-style Web forum using supervised machine learning, with F-measures of 0.70, 0.86, and 0.56, respectively. We confirmed that these values are significantly better than their baselines. This extraction will be useful for constructing an application that searches for problems posed by users and displays their causes and potential solutions.
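For readers unfamiliar with the evaluation metric, the F-measures reported above are the harmonic mean of precision and recall. A minimal sketch (the helper name is hypothetical):

```python
def f_value(precision, recall):
    """F-measure (F1): harmonic mean of precision and recall.

    The harmonic mean rewards classifiers only when both precision
    and recall are high; a weakness in either drags the score down.
    """
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a classifier with precision 0.80 and recall 0.62 has an F-measure of about 0.70, the value reported for problem-article extraction.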
Hiroki TANJI Takahiro MURAKAMI
The design and adjustment of the divergence in audio applications using nonnegative matrix factorization (NMF) is still an open problem. In this study, to address this problem, we explore a representation of the divergence using neural networks (NNs). Rather than designing the divergence directly, our approach extends the multiplicative update algorithm (MUA), which estimates the NMF parameters, with NNs; we refer to the extended algorithm as the deep MUA (DeMUA) for NMF. Although the DeMUA represents the NMF algorithm itself, interestingly, the divergence is obtained from the incorporated NN. In addition, we propose theoretical guidelines for designing the incorporated NN so that it can be interpreted as a divergence. By appropriately designing the NN, MUAs based on existing divergences with a single hyper-parameter can be represented by the DeMUA. To train the DeMUA, we applied it to audio denoising and supervised signal separation. Our experimental results show that the proposed architecture can learn the MUA and the divergences in sparse denoising and speech separation tasks, and that the MUA based on generalized divergences with multiple parameters shows favorable performance on these tasks.
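The single-hyper-parameter divergence family mentioned above is typified by the beta-divergence, whose MUA interpolates between the Euclidean (β=2), KL (β=1), and IS (β=0) rules. The sketch below shows this fixed-form MUA; the DeMUA described in the abstract replaces such a hand-designed rule with one parameterized by a neural network. Function and variable names are hypothetical, not the paper's.

```python
import numpy as np

def mua_beta(V, rank, beta=1.0, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative update algorithm (MUA) for the beta-divergence.

    beta=2 recovers the squared-Euclidean rule, beta=1 the KL rule,
    and beta=0 the Itakura-Saito rule. Illustrative sketch only; the
    DeMUA learns the update's ratio term with a neural network
    instead of fixing it by a choice of beta.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        # Numerator/denominator of the standard beta-divergence update
        W *= ((WH ** (beta - 2) * V) @ H.T) / ((WH ** (beta - 1)) @ H.T + eps)
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ (WH ** (beta - 1)) + eps)
    return W, H
```

Setting beta between the canonical values (e.g. 0.5) is a common tuning knob in audio NMF, which is precisely the kind of manual divergence adjustment the DeMUA aims to learn from data.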