Takeshi SAGA Hiroki TANAKA Hidemi IWASAKA Satoshi NAKAMURA
Social Skills Training (SST) has been used for years to improve individuals' social skills and thereby help them lead better daily lives. In SST carried out by human trainers, a person's social skills level is usually evaluated through a verbal interview with the trainer. Although this evaluation is grounded in psychiatric knowledge and professional experience, its quality depends on the trainer's capabilities. Quantifiable metrics are therefore required to standardize such evaluations. The second edition of the Social Responsiveness Scale (SRS-2) offers a viable solution to this need because it has been extensively tested and standardized through empirical research. This paper describes the development of an automated method for evaluating a person's social skills level based on SRS-2. We use multimodal features, including BERT-based features, and achieve score estimation with a Pearson correlation coefficient of 0.76 when feature selection is applied. In addition, we examine the linguistic aspects of the BERT-based features through subjective evaluations. The BERT-based features show a strong negative correlation with human subjective ratings of fluency, appropriate word choice, and understandable speech structure.
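The abstract does not detail the estimation pipeline; as an illustration only, a minimal sketch of correlation-based feature selection followed by a least-squares fit, evaluated with the Pearson correlation coefficient, might look like the following (all function names and the toy data are assumptions for illustration, not the paper's actual setup):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1-D arrays."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_features(X, y, k):
    """Keep the k features whose |Pearson r| with the target is largest."""
    scores = np.array([abs(pearson_r(X[:, j], y)) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Toy data: 2 informative features among 10 noisy ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=100)

idx = select_features(X, y, k=2)            # indices of retained features
w, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)  # fit on selected features
r = pearson_r(X[:, idx] @ w, y)             # correlation of predictions with targets
```

Filtering features by their correlation with the target before fitting is one common, simple form of feature selection; the paper's multimodal, BERT-based features would replace the random toy matrix here.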
Feifan HAN Kazunori KOBAYASHI Safumi SUZUKI Hiroki TANAKA Hidenari FUJIKATA Masahiro ASADA
This paper theoretically presents that a terahertz (THz) oscillator using a resonant tunneling diode (RTD) and a rectangular cavity, which has previously been proposed, can radiate high output power by the impedance matching between RTD and load through metal-insulator-metal (MIM) capacitors. Based on an established equivalent-circuit model, an equation for output power has been deduced. By changing MIM capacitors, a matching point can be derived for various sizes of rectangular-cavity resonator. Simulation results show that high output power is possible by long cavity. For example, a high output power of 5 mW is expected at 1 THz.
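The paper's derived output-power equation is not reproduced here; as general background only, matching a load to a source maximizes the delivered power under the standard conjugate-matching condition:

```latex
% Conjugate matching: the load impedance equals the complex
% conjugate of the source impedance, maximizing delivered power.
Z_L = Z_s^{*}, \qquad
P_{\max} = \frac{|V_s|^{2}}{4\,\Re\{Z_s\}}
% (V_s is the rms Thevenin source voltage, Z_s its impedance)
```

In the oscillator described above, the MIM capacitors provide the tunable reactance that brings the RTD-load combination toward such a matching point.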
Hiroki TANAKA Sakriani SAKTI Graham NEUBIG Tomoki TODA Satoshi NAKAMURA
Non-verbal communication incorporating visual, audio, and contextual information is important for making sense of and navigating the social world. Individuals who struggle with social situations often have difficulty recognizing these non-verbal social signals. In this article, we propose NOCOA+ (Non-verbal COmmunication for Autism plus), a training tool that uses utterances in visual and audio modalities for non-verbal communication training. We describe the design of NOCOA+ and perform an experimental evaluation of its potential as a tool for computer-based training of non-verbal communication skills for people with social and communication difficulties. In a series of four experiments, we investigated 1) the effect of temporal context on the ability to recognize social signals in a testing context, 2) the effect of the presentation modality of a social stimulus on the ability to recognize non-verbal information, 3) the correlation between autistic traits as measured by the autism spectrum quotient (AQ) and non-verbal behavior recognition skills as measured by NOCOA+, and 4) the effectiveness of computer-based training in improving social skills. We found that context information was helpful for recognizing non-verbal behaviors and that the effect of presentation modality differed. The results also showed a significant relationship between the AQ communication and socialization scores and non-verbal communication skills, and that social skills were significantly improved through computer-based training.
Hiroki WATANABE Hiroki TANAKA Sakriani SAKTI Satoshi NAKAMURA
Brain-computer interfaces (BCIs) allow users to convey their intentions directly through brain signals. For example, an EEG-based spelling system lets users select letters on a display. Previous studies have also investigated decoding speech information, such as syllables or words, from single-trial brain signals recorded during speech comprehension or articulatory imagery. Such decoding realizes speech recognition with a relatively short time lag and without relying on a display. Previous magnetoencephalography (MEG) research showed that a template matching method could classify three English sentences by using phase patterns in theta oscillations. This method is based on the synchronization between speech rhythms and neural oscillations during speech processing; that is, theta oscillations synchronize with syllabic rhythms and low-gamma oscillations with phonemic rhythms. The present study aimed to adapt this classification method to a BCI application. To this end, (1) we investigated the performance of EEG-based classification of three Japanese sentences and (2) evaluated the generalizability of our models to other users. To improve accuracy, (3) we compared the performance of four classifiers: template matching (baseline), logistic regression, support vector machine, and random forest. In addition, (4) we propose novel features including phase patterns in a higher frequency range. Our proposed features were constructed to capture synchronization in the low-gamma band, namely (i) phases of EEG oscillations in the range of 2-50 Hz from all electrodes used for measuring EEG data (all) and (ii) phases selected on the basis of feature importance (selected). The classification results showed that, except for random forest, most classifiers performed similarly. Our proposed features improved the classification accuracy with statistical significance over the baseline feature, a phase pattern of neural oscillations in the range of 4-8 Hz from the right hemisphere. The best mean accuracy across folds was 55.9%, obtained by template matching trained on all features. We conclude that using phase information in a higher frequency band improves the performance of EEG-based sentence classification and that the model is applicable to other users.
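The exact template matching procedure is not given in the abstract; as an illustration only, a minimal sketch of phase-based template matching (class templates as circular means of training-trial phases, classification by mean cosine similarity; all names and the toy data are assumptions, not the study's pipeline) could look like:

```python
import numpy as np

def circ_mean(phases, axis=0):
    """Circular mean of phase angles (radians), computed per feature."""
    return np.angle(np.exp(1j * phases).mean(axis=axis))

def template_match(train, labels, trial):
    """Assign `trial` to the class whose phase template it is closest to.
    Similarity = mean cosine of the phase difference (1 = identical)."""
    classes = np.unique(labels)
    sims = [np.cos(trial - circ_mean(train[labels == c])).mean()
            for c in classes]
    return classes[int(np.argmax(sims))]

# Toy example: two "sentences" with distinct phase patterns plus jitter.
rng = np.random.default_rng(1)
base = {0: np.zeros(16), 1: np.full(16, np.pi / 2)}
train = np.vstack([base[c] + 0.3 * rng.normal(size=16)
                   for c in (0, 0, 0, 1, 1, 1)])
labels = np.array([0, 0, 0, 1, 1, 1])

pred = template_match(train, labels, base[1] + 0.3 * rng.normal(size=16))
```

Because phases are angles, templates must use the circular mean and distances a circular measure such as the cosine of the phase difference; an ordinary arithmetic mean would mishandle wrap-around at ±π.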
Kana MIYAMOTO Hiroki TANAKA Satoshi NAKAMURA
Music is often used for emotion induction because it can change people's emotions. However, since emotional responses to music differ from person to person, we propose an emotion induction system that generates music adapted to each individual. Our system automatically generates music suitable for emotion induction based on emotions predicted from an electroencephalogram (EEG). We examined three elements for constructing the system: 1) a music generator that creates music inducing emotions that resemble the inputs, 2) real-time emotion prediction from EEG, and 3) control of the music generator using the predicted emotions to produce music suitable for emotion induction. We constructed the proposed system from these elements and evaluated it. The results showed its effectiveness for inducing emotions and suggest that feedback loops that tailor stimuli to individuals can successfully induce emotions.