Chaima DHAHRI Kazunori MATSUMOTO Keiichiro HOASHI
Upcoming mood prediction plays an important role in topics such as bipolar depression disorder in psychology and health-related quality-of-life research and recommendation. The mood in this study is defined as the general emotional state of a user. In contrast to emotions, which are more specific and vary within a day, mood is described as having either a positive or negative valence [1]. We propose an autonomous system that predicts a user's upcoming mood based on their online activities over cyber, social, and physical spaces, without requiring extra devices or sensors. Recently, many researchers have relied on online social networks (OSNs) to detect user mood. However, existing work has focused on inferring the current mood, and only a few studies have addressed predicting the upcoming mood. For this reason, we define a new goal of predicting the upcoming mood. We first collected ground-truth data over two months from 383 subjects. Then, we studied the correlation between extracted features and user mood. Finally, we used these features to train two predictive systems: generalized and personalized. The results suggest a statistically significant correlation between tomorrow's mood and today's activities on OSNs, which can be used to develop a predictive system with an average accuracy of 70% and a recall of 75% for the correlated users. This performance increased to an average accuracy of 79% and a recall of 80% for active users who have more than 30 days of history data. Moreover, we showed that, for non-active users, referring to a generalized system can compensate for the lack of data at the early stage of the system; once enough data are available for a user, a personalized system can then individually predict the upcoming mood.
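The fallback strategy described above — use a shared generalized model until a user accumulates enough labeled history, then switch to a per-user personalized model — can be sketched as follows. This is not the authors' implementation: the 30-day threshold comes from the abstract, but `MajorityMoodModel` and the class names are illustrative stand-ins for whatever classifiers the paper actually trains.

```python
# Hedged sketch of generalized/personalized dispatch, assuming a
# 30-day activity threshold (from the abstract). Model internals are
# toy stand-ins, not the paper's classifiers.
from collections import Counter

HISTORY_THRESHOLD = 30  # days of labeled history required for personalization


class MajorityMoodModel:
    """Toy stand-in for a real classifier: predicts the most frequent mood."""

    def fit(self, moods):
        self.prediction = Counter(moods).most_common(1)[0][0]
        return self

    def predict(self):
        return self.prediction


class MoodPredictor:
    def __init__(self, generalized_model):
        self.generalized = generalized_model  # trained on pooled data
        self.personalized = {}  # user_id -> model fitted on that user's history

    def update_user(self, user_id, mood_history):
        # Only "active" users with enough history get a personalized model.
        if len(mood_history) > HISTORY_THRESHOLD:
            self.personalized[user_id] = MajorityMoodModel().fit(mood_history)

    def predict(self, user_id):
        # Fall back to the generalized model for users lacking history.
        model = self.personalized.get(user_id, self.generalized)
        return model.predict()
```

A new user with only a few days of data is routed to the generalized model, while a user with, say, 40 days of history receives predictions from a model fitted on their own data alone.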
Yasser MOHAMMAD Kazunori MATSUMOTO Keiichiro HOASHI
Activity recognition from sensors is a classification problem over time-series data. Some research in the area utilizes time- and frequency-domain handcrafted features that differ between datasets. A categorically different approach is to use deep learning methods for feature learning. This paper explores a middle ground in which an off-the-shelf feature extractor generates a large number of candidate time-domain features, followed by a feature selector designed to reduce the bias toward specific classification techniques. Moreover, this paper advocates the use of features that are mostly insensitive to sensor orientation and shows their applicability to the activity recognition problem. The proposed approach is evaluated on six publicly available datasets collected under various conditions using different experimental protocols, and it shows comparable or higher accuracy than state-of-the-art methods on most datasets, usually while using an order of magnitude fewer features.
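A standard example of an orientation-insensitive feature of the kind the abstract advocates is the magnitude of each 3-axis accelerometer sample, sqrt(ax² + ay² + az²): a rotation of the sensor changes the individual axis readings but not the vector's length, so statistics computed over the magnitude series are largely independent of how the device is worn. The sketch below illustrates the idea only; it is not the paper's actual feature set, and the function names are our own.

```python
# Hedged illustration of an orientation-invariant feature: the Euclidean
# magnitude of each (ax, ay, az) accelerometer sample. Rotating the sensor
# permutes/mixes the axes but leaves the magnitude unchanged.
import math


def magnitude_series(samples):
    """Map (ax, ay, az) triples to their orientation-invariant magnitudes."""
    return [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]


def mean_magnitude(samples):
    """Example window-level feature: mean of the magnitude series."""
    mags = magnitude_series(samples)
    return sum(mags) / len(mags)
```

For instance, a device resting flat reads roughly (0, 0, 9.8) m/s²; rotated onto its side it reads roughly (9.8, 0, 0), yet the magnitude feature is 9.8 in both cases.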