This study explores significant eye-gaze features that can be used to estimate subjective difficulty while reading educational comics. Educational comics have grown rapidly as a promising way to teach difficult topics using illustrations and text. However, because comics pack varied information onto a single page, automatically detecting learner states such as subjective difficulty is hard with approaches like system-log-based detection, which is common in the Learning Analytics field. To address this problem, this study examined 28 eye-gaze features, including three newly proposed features, "Variance in Gaze Convergence," "Movement between Panels," and "Movement between Tiles," to estimate two degrees of subjective difficulty. We then ran an experiment in a simulated environment using Virtual Reality (VR) to collect gaze information accurately. We extracted features at two unit levels, page and panel, and evaluated the accuracy of each pattern in user-dependent and user-independent settings. Trained with a Support Vector Machine (SVM), our proposed features achieved average F1-scores of 0.721 (user-dependent) and 0.742 (user-independent) at the panel-unit level.
Kenya SAKAMOTO
Osaka University
Shizuka SHIRAI
Osaka University
Noriko TAKEMURA
Osaka University, Kyushu Institute of Technology
Jason ORLOSKY
Osaka University, Augusta University
Hiroyuki NAGATAKI
Osaka University
Mayumi UEDA
Osaka University, University of Marketing and Distribution Sciences
Yuki URANISHI
Osaka University
Haruo TAKEMURA
Osaka University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kenya SAKAMOTO, Shizuka SHIRAI, Noriko TAKEMURA, Jason ORLOSKY, Hiroyuki NAGATAKI, Mayumi UEDA, Yuki URANISHI, Haruo TAKEMURA, "Subjective Difficulty Estimation of Educational Comics Using Gaze Features," in IEICE Transactions on Information and Systems,
vol. E106-D, no. 5, pp. 1038-1048, May 2023, doi: 10.1587/transinf.2022EDP7100.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7100/_p
@ARTICLE{e106-d_5_1038,
author={Kenya SAKAMOTO and Shizuka SHIRAI and Noriko TAKEMURA and Jason ORLOSKY and Hiroyuki NAGATAKI and Mayumi UEDA and Yuki URANISHI and Haruo TAKEMURA},
journal={IEICE Transactions on Information and Systems},
title={Subjective Difficulty Estimation of Educational Comics Using Gaze Features},
year={2023},
volume={E106-D},
number={5},
pages={1038-1048},
keywords={},
doi={10.1587/transinf.2022EDP7100},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Subjective Difficulty Estimation of Educational Comics Using Gaze Features
T2 - IEICE Transactions on Information and Systems
SP - 1038
EP - 1048
AU - Kenya SAKAMOTO
AU - Shizuka SHIRAI
AU - Noriko TAKEMURA
AU - Jason ORLOSKY
AU - Hiroyuki NAGATAKI
AU - Mayumi UEDA
AU - Yuki URANISHI
AU - Haruo TAKEMURA
PY - 2023
DO - 10.1587/transinf.2022EDP7100
JO - IEICE Transactions on Information and Systems
SN - 1745-1361
VL - E106-D
IS - 5
JA - IEICE Transactions on Information and Systems
Y1 - 2023/05//
ER -