We propose a new method for improving the recognition performance of phonemes, speech emotions, and music genres using multi-task learning. When tasks are closely related, multi-task learning can improve the performance of each task by learning a common feature representation for all the tasks. However, the recognition tasks considered in this study demand speech and music input signals at different time scales, resulting in input features with different characteristics. In addition, no training dataset with multiple labels covering all information sources is available. Considering these issues, we conduct multi-task learning in a sequential training process in which each input feature carries a single label for one information source. A comparative evaluation confirms that the proposed multi-task learning method yields higher performance on all recognition tasks than the conventional approach of learning each task individually.
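The sketch below is a minimal, hedged illustration of the training scheme the abstract describes: a shared encoder provides the common feature representation, one classification head per task sits on top of it, and the shared parameters are updated sequentially with single-label batches from each task in turn. The framework (PyTorch), layer sizes, feature dimension, class counts, and the random data stand-in are all assumptions made for illustration; the paper's actual network, input features, and datasets are not specified here.

# Sketch of sequential multi-task training with a shared encoder (assumptions noted above).
import torch
import torch.nn as nn

# Assumed class counts per task: 39 phonemes, 4 emotions, 10 music genres.
TASKS = {"phoneme": 39, "emotion": 4, "genre": 10}
FEAT_DIM = 40      # assumed per-frame acoustic feature size (e.g., mel bands)
HIDDEN_DIM = 128   # assumed width of the shared representation

class SharedEncoder(nn.Module):
    """Common feature representation shared by all recognition tasks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, HIDDEN_DIM), nn.ReLU(),
            nn.Linear(HIDDEN_DIM, HIDDEN_DIM), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

encoder = SharedEncoder()
heads = nn.ModuleDict({t: nn.Linear(HIDDEN_DIM, n) for t, n in TASKS.items()})
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

def single_label_batch(task, batch_size=16):
    """Stand-in for a real corpus labelled for only one information source."""
    x = torch.randn(batch_size, FEAT_DIM)
    y = torch.randint(0, TASKS[task], (batch_size,))
    return x, y

# Sequential training: each step uses a batch labelled for one task only,
# so the shared encoder is updated in turn by every information source.
for epoch in range(3):
    for task in TASKS:                       # cycle through the tasks in sequence
        x, y = single_label_batch(task)
        logits = heads[task](encoder(x))     # shared features -> task-specific head
        loss = criterion(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch} task {task:8s} loss {loss.item():.3f}")

In the paper's setting, the shared representation would be learned from real phoneme-, emotion-, and genre-labelled corpora rather than the random stand-in batches used here; the sketch only shows how single-label data for different tasks can jointly train one shared encoder.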
Jae-Won KIM
Kwangwoon University
Hochong PARK
Kwangwoon University
Jae-Won KIM, Hochong PARK, "Multi-Task Learning for Improved Recognition of Multiple Types of Acoustic Information" in IEICE TRANSACTIONS on Information, vol. E104-D, no. 10, pp. 1762-1765, October 2021, doi: 10.1587/transinf.2021EDL8029.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDL8029/_p
@ARTICLE{e104-d_10_1762,
author={Jae-Won KIM and Hochong PARK},
journal={IEICE TRANSACTIONS on Information},
title={Multi-Task Learning for Improved Recognition of Multiple Types of Acoustic Information},
year={2021},
volume={E104-D},
number={10},
pages={1762-1765},
doi={10.1587/transinf.2021EDL8029},
ISSN={1745-1361},
month={October},
}
TY - JOUR
TI - Multi-Task Learning for Improved Recognition of Multiple Types of Acoustic Information
T2 - IEICE TRANSACTIONS on Information
SP - 1762
EP - 1765
AU - Jae-Won KIM
AU - Hochong PARK
PY - 2021
DO - 10.1587/transinf.2021EDL8029
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E104-D
IS - 10
JA - IEICE TRANSACTIONS on Information
Y1 - October 2021
ER -