Accurately capturing user behaviors with an appropriate set of sensors is important when developing computationally cost-effective systems. This paper uses datasets recorded for fine-grained reading detection with the J!NS MEME, an eyewear device equipped with electrooculography (EOG), accelerometer, and gyroscope sensors. We train models for every possible combination of the three sensors, using both self-supervised and supervised learning, to understand which sensor settings are optimal. The results show that the EOG sensor alone performs roughly as well as the best-performing sensor combination. This provides insight into selecting appropriate sensors for fine-grained reading detection, enabling cost-effective computation.
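The core experiment described in the abstract is an exhaustive sweep over sensor combinations. The following is a minimal sketch of that idea, assuming per-sensor feature matrices extracted from windowed EOG, accelerometer, and gyroscope signals; the synthetic data, the logistic-regression classifier, and the cross-validation setup are illustrative placeholders, not the paper's actual pipeline (which also involves self-supervised pretraining).

# Minimal sketch (not the authors' pipeline): score every non-empty
# combination of the three J!NS MEME sensor modalities with a simple
# supervised classifier. Features and labels are synthetic placeholders.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 200  # hypothetical number of time windows

# Hypothetical per-sensor feature matrices (one row per window).
features = {
    "eog": rng.normal(size=(n_windows, 4)),
    "acc": rng.normal(size=(n_windows, 3)),
    "gyro": rng.normal(size=(n_windows, 3)),
}
labels = rng.integers(0, 2, size=n_windows)  # 1 = reading, 0 = not reading

# Enumerate all 2^3 - 1 = 7 non-empty sensor subsets and evaluate each.
sensors = list(features)
for k in range(1, len(sensors) + 1):
    for subset in combinations(sensors, k):
        X = np.hstack([features[s] for s in subset])
        score = cross_val_score(
            LogisticRegression(max_iter=1000), X, labels, cv=5
        ).mean()
        print(f"{'+'.join(subset):<15} accuracy={score:.3f}")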
Md. Rabiul ISLAM
Osaka Prefecture University, BSMRSTU
Andrew W. VARGO
Osaka Prefecture University
Motoi IWATA
Osaka Prefecture University
Masakazu IWAMURA
Osaka Prefecture University
Koichi KISE
Osaka Prefecture University
Md. Rabiul ISLAM, Andrew W. VARGO, Motoi IWATA, Masakazu IWAMURA, Koichi KISE, "Exploring Sensor Modalities to Capture User Behaviors for Reading Detection" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 9, pp. 1629-1633, September 2022, doi: 10.1587/transinf.2020ZDL0003.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2020ZDL0003/_p
@ARTICLE{e105-d_9_1629,
author={Md. Rabiul ISLAM and Andrew W. VARGO and Motoi IWATA and Masakazu IWAMURA and Koichi KISE},
journal={IEICE TRANSACTIONS on Information},
title={Exploring Sensor Modalities to Capture User Behaviors for Reading Detection},
year={2022},
volume={E105-D},
number={9},
pages={1629-1633},
doi={10.1587/transinf.2020ZDL0003},
ISSN={1745-1361},
month={September},
}
TY - JOUR
TI - Exploring Sensor Modalities to Capture User Behaviors for Reading Detection
T2 - IEICE TRANSACTIONS on Information
SP - 1629
EP - 1633
AU - Md. Rabiul ISLAM
AU - Andrew W. VARGO
AU - Motoi IWATA
AU - Masakazu IWAMURA
AU - Koichi KISE
PY - 2022
DO - 10.1587/transinf.2020ZDL0003
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2022
ER -