Siyang YU Kazuaki KONDO Yuichi NAKAMURA Takayuki NAKAJIMA Masatake DANTSUJI
This article presents our investigation of learning-state estimation in e-learning under the condition that a learner's behaviors can be visually observed and recorded. In this research, we examined methods of adapting to a new learner for whom only a small amount of ground-truth data can be obtained.
Siyang YU Kazuaki KONDO Yuichi NAKAMURA Takayuki NAKAJIMA Masatake DANTSUJI
Self-paced e-learning offers far more freedom in time and place than traditional education, as well as greater diversity of learning contents, media, and tools. However, its limitations must not be ignored. The lack of information on learners' states is a serious issue that can lead to severe problems such as low learning efficiency, loss of motivation, and even dropping out of e-learning. We have designed a novel e-learning support system that visually observes learners' non-verbal behaviors, estimates their learning states, and can be easily integrated into practical e-learning environments. Three pairs of internal states closely related to learning performance were selected as recognition targets: concentration-distraction, difficulty-ease, and interest-boredom. In addition, we investigated the practical problem of estimating the learning states of a new learner whose characteristics are not known in advance. Experimental results show the potential of our system.
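The abstract describes adapting a learning-state estimator to a new learner from only a few labeled samples. One simple way such adaptation can work is to blend class statistics learned from generic training learners with the new learner's few ground-truth samples. The sketch below is a minimal illustration of that idea, not the paper's actual method: all features, class labels, and the nearest-centroid classifier are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical behavioral features (e.g., gaze stability, head motion) for
# generic training learners; two states: 0 = concentration, 1 = distraction.
X_gen = np.vstack([rng.normal([0.0, 0.0], 0.3, (100, 2)),
                   rng.normal([2.0, 2.0], 0.3, (100, 2))])
y_gen = np.array([0] * 100 + [1] * 100)

# A new learner whose features carry an individual bias, with only three
# labeled samples per state available for adaptation.
bias = np.array([1.0, -0.5])
X_few = np.vstack([rng.normal([0.0, 0.0], 0.3, (3, 2)),
                   rng.normal([2.0, 2.0], 0.3, (3, 2))]) + bias
y_few = np.array([0, 0, 0, 1, 1, 1])

def adapted_centroids(X_gen, y_gen, X_few, y_few, alpha=0.5):
    """Blend generic class centroids with the new learner's few samples."""
    cents = []
    for c in (0, 1):
        gen_c = X_gen[y_gen == c].mean(axis=0)
        few_c = X_few[y_few == c].mean(axis=0)
        cents.append((1 - alpha) * gen_c + alpha * few_c)
    return np.array(cents)

def predict(X, cents):
    # Assign each sample to the nearest adapted class centroid.
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1)

cents = adapted_centroids(X_gen, y_gen, X_few, y_few)

# Unseen samples from the same (biased) new learner.
X_test = np.vstack([rng.normal([0.0, 0.0], 0.3, (20, 2)),
                    rng.normal([2.0, 2.0], 0.3, (20, 2))]) + bias
y_test = np.array([0] * 20 + [1] * 20)
acc = (predict(X_test, cents) == y_test).mean()
```

The blending weight `alpha` controls how far the generic model is pulled toward the new learner; with very few labels, keeping some weight on the generic centroids guards against overfitting the handful of samples.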
Shohei YANO Haruhide HOKARI Shoji SHIMADA
Out-of-head sound localization via binaural earphones is indispensable for a virtual sound system. Achieving it requires measuring two transfer functions for each subject: the Spatial Sound Transfer Function (SSTF) and the Ear Canal Transfer Function (ECTF). It is well known that localization quality may be poor if the individual transfer functions are not accurately reproduced, because each subject has his/her own transfer functions. To implement a simpler model, it is important to clarify which function contains more individual information, the SSTF or the ECTF. We therefore introduce the quantity of "personal difference" to investigate the individuality contained in the SSTF and the ECTF. We measured the SSTF and ECTF of 60 subjects in a soundproofed room and analyzed the data using Principal Component Analysis (PCA) and three subjective assessment tests. This study finds that the ECTF differs more widely from person to person than the SSTF.
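The PCA comparison mentioned above can be illustrated with a small sketch: center each set of measured responses across subjects and compare how much across-subject variance each transfer function exhibits. Everything below is synthetic and hypothetical, constructed only to show the analysis shape, not the paper's measurements; the 60-subject count and the larger ECTF spread are taken from the abstract's setup and conclusion, not reproduced from real data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_freq = 60, 128

# Hypothetical magnitude responses over frequency bins: a shared mean shape
# plus per-subject deviations. The ECTF is given larger individual
# deviations than the SSTF, mimicking the reported finding, not its data.
mean_shape = np.sin(np.linspace(0, 3 * np.pi, n_freq))
sstf = mean_shape + 0.2 * rng.standard_normal((n_subjects, n_freq))
ectf = mean_shape + 1.0 * rng.standard_normal((n_subjects, n_freq))

def pca_spread(X):
    """Total across-subject variance and the share explained by PC1."""
    Xc = X - X.mean(axis=0)                  # center across subjects
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values of centered data
    var = s ** 2 / (len(X) - 1)              # PCA component variances
    return var.sum(), var[0] / var.sum()

tot_sstf, _ = pca_spread(sstf)
tot_ectf, _ = pca_spread(ectf)
# Larger total across-subject variance indicates more "personal difference"
# in that transfer function.
```

Running PCA on the centered subject-by-frequency matrix makes the comparison scale-aware: the component variances sum to the total across-subject variance, so the two transfer functions can be compared both in overall spread and in how concentrated that spread is along a few directions.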