
Author Search Result

[Author] Jun SAKUMA (4 hits)

  • Study on Record Linkage of Anonymized Data

    Hiroaki KIKUCHI  Takayasu YAMAGUCHI  Koki HAMADA  Yuji YAMAOKA  Hidenobu OGURI  Jun SAKUMA  

     
    INVITED PAPER

    Vol: E101-A No:1, Page(s): 19-28

    Data anonymization is required before a big-data business can run effectively without compromising the privacy of the personal information it uses. It is not trivial to choose the best algorithm to anonymize given data securely for a given purpose. To assess the risk of the data being compromised accurately, utility and security must be balanced. Therefore, using common pseudo microdata, we propose a competition for the best anonymization and re-identification algorithms. This paper reports the results of the competition and an analysis of the effectiveness of the anonymization techniques. The results reveal a tradeoff between utility and security, with 20.9% of records re-identified on average.
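
    As a concrete illustration of the re-identification side of such a competition, the following sketch measures a re-identification rate by linking each anonymized record to its nearest original record. The numeric attributes, Euclidean distance, and noise-based anonymization are illustrative assumptions, not the competition's actual protocol.

    ```python
    import numpy as np

    def reidentification_rate(original, anonymized):
        # original, anonymized: (n, d) arrays; row i of `anonymized` is
        # assumed to derive from row i of `original`. Link each anonymized
        # record to its nearest original record and count correct matches.
        dists = np.linalg.norm(anonymized[:, None, :] - original[None, :, :], axis=2)
        matches = dists.argmin(axis=1)
        return (matches == np.arange(len(anonymized))).mean()

    # Toy usage: mild Gaussian noise as a stand-in for an anonymizer.
    rng = np.random.default_rng(0)
    orig = rng.random((100, 5))
    anon = orig + rng.normal(scale=0.01, size=orig.shape)
    print(reidentification_rate(orig, anon))
    ```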

  • CAMRI Loss: Improving the Recall of a Specific Class without Sacrificing Accuracy

    Daiki NISHIYAMA  Kazuto FUKUCHI  Youhei AKIMOTO  Jun SAKUMA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2023/01/23, Vol: E106-D No:4, Page(s): 523-537

    In real-world applications of multiclass classification models, misclassification in an important class (e.g., stop sign) can be significantly more harmful than in other classes (e.g., no parking). Thus, it is crucial to improve the recall of an important class while maintaining overall accuracy. For this problem, we found that improving the separation of important classes relative to other classes in the feature space is effective. Existing methods that add a class-sensitive penalty to the cross-entropy loss do not improve this separation. Moreover, methods designed to improve the separation between all classes are unsuitable for our purpose because they do not consider the important classes. To achieve the separation, we propose a loss function that explicitly imposes a penalty on the feature space, called class-sensitive additive angular margin (CAMRI) loss. CAMRI loss is expected to reduce the variance of the important class by penalizing the angle between the important class's features and the corresponding weight vectors in the feature space. In addition, concentrating the penalty on only the important class hardly sacrifices the separation of the other classes. Experiments on CIFAR-10, GTSRB, and AwA2 showed that CAMRI loss can improve the recall of a specific class without sacrificing accuracy. In particular, CAMRI loss improved the recall of GTSRB's second-worst class by 9% compared with training on cross-entropy loss.
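
    The core mechanism, an additive angular margin applied only to the important class, can be sketched as follows in PyTorch. The margin and scale values, class layout, and weight initialization are illustrative assumptions rather than the paper's exact formulation or settings.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ImportantClassAngularMarginLoss(nn.Module):
        # Sketch in the spirit of CAMRI loss: an additive angular margin is
        # applied only when the target is the designated important class.
        def __init__(self, in_features, num_classes, important_class,
                     margin=0.2, scale=16.0):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_classes, in_features))
            self.important_class = important_class
            self.margin = margin
            self.scale = scale

        def forward(self, features, labels):
            # Cosine of the angle between normalized features and class weights.
            cosine = F.linear(F.normalize(features, dim=1),
                              F.normalize(self.weight, dim=1))
            theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
            # Penalize the angle only at the target logit, and only for
            # samples whose label is the important class.
            target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
            penalize = target & (labels == self.important_class).unsqueeze(1)
            theta = torch.where(penalize, theta + self.margin, theta)
            return F.cross_entropy(self.scale * torch.cos(theta), labels)
    ```

    The margin shrinks the target logit for important-class samples during training, so the network must pull those features closer to the class weight vector to compensate; other classes see the plain softmax cross-entropy.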

  • Prediction with Model-Based Neutrality

    Kazuto FUKUCHI  Toshihiro KAMISHIMA  Jun SAKUMA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2015/05/15, Vol: E98-D No:8, Page(s): 1503-1516

    With recent developments in machine learning technology, predictions made by systems incorporating machine learning can have a significant impact on the lives and activities of individuals. In some cases, these predictions can unexpectedly result in unfair treatment of individuals. For example, if the results are highly dependent on personal attributes such as gender or ethnicity, hiring decisions might be discriminatory. This paper investigates the neutralization of a probabilistic model with respect to another probabilistic model, referred to as a viewpoint. We present a novel definition of neutrality for probabilistic models, η-neutrality, and introduce a systematic method that uses maximum likelihood estimation to enforce the neutrality of a prediction model. Our method can be applied to various machine learning algorithms, as demonstrated by η-neutral logistic regression and η-neutral linear regression.
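
    As a rough sketch of enforcing neutrality within maximum likelihood estimation, the code below fits a logistic regression whose negative log-likelihood is augmented with a penalty on the gap in mean predicted probability between two viewpoint groups. This demographic-parity-style penalty is a simplified stand-in for the paper's η-neutrality term; the function name, the binary viewpoint v, and the weight lam are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neutral_logistic_regression(X, y, v, lam=1.0):
        # X: (n, d) features, y: (n,) binary labels, v: (n,) binary viewpoint.
        # Maximize likelihood while penalizing dependence of predictions on v
        # (here: squared gap in mean predicted probability between v-groups).
        def objective(w):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            eps = 1e-12
            nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
            gap = p[v == 1].mean() - p[v == 0].mean()
            return nll + lam * gap ** 2
        w0 = np.zeros(X.shape[1])
        return minimize(objective, w0, method="L-BFGS-B").x
    ```

    Setting lam to zero recovers ordinary maximum likelihood; increasing it trades predictive fit for neutrality with respect to the viewpoint.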

  • Locally Differentially Private Minimum Finding

    Kazuto FUKUCHI  Chia-Mu YU  Jun SAKUMA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2022/05/11, Vol: E105-D No:8, Page(s): 1418-1430

    We investigate the problem of finding the minimum, in which each user holds a real value and we want to estimate the minimum of these values under the local differential privacy constraint. We reveal that this problem is fundamentally difficult: no consistent mechanism can be constructed for the worst case. Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to how easy the minimum is to estimate. As a measure of easiness, we introduce a parameter α that characterizes the fatness of the minimum-side tail of the user data distribution. We show that our mechanism achieves O((ln^6 N/(ε^2 N))^{1/(2α)}) error without knowledge of α, and that this rate is near-optimal in the sense that any mechanism incurs Ω((1/(ε^2 N))^{1/(2α)}) error. Furthermore, we demonstrate through empirical evaluations on synthetic datasets that our mechanism outperforms a naive mechanism. We also conduct experiments on the MovieLens dataset and a purchase history dataset and demonstrate that our algorithm achieves Õ((1/N)^{1/(2α)}) error adaptively to α.
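
    For intuition about the setting, the sketch below implements a naive baseline (not the paper's adaptive mechanism): binary-search a threshold t, with each user answering "is my value below t?" through randomized response, splitting the privacy budget ε evenly across the interactive rounds.

    ```python
    import numpy as np

    def ldp_min_naive(values, eps, lo=0.0, hi=1.0, rounds=8, rng=None):
        # Naive baseline: each round, every user reports "my value < t" via
        # randomized response; the per-round budget is eps / rounds.
        rng = np.random.default_rng() if rng is None else rng
        eps_r = eps / rounds
        p = np.exp(eps_r) / (np.exp(eps_r) + 1.0)  # truthful-report probability
        values = np.asarray(values)
        n = len(values)
        for _ in range(rounds):
            t = (lo + hi) / 2.0
            truth = values < t
            keep = rng.random(n) < p
            reports = np.where(keep, truth, ~truth)
            # Debias the noisy mean to estimate the fraction of users below t.
            frac = (reports.mean() - (1.0 - p)) / (2.0 * p - 1.0)
            if frac > 0.5 / n:  # crude evidence that some user lies below t
                hi = t
            else:
                lo = t
        return (lo + hi) / 2.0
    ```

    Because every user answers in every round, the budget split forces each answer to be very noisy, which is one reason such naive mechanisms degrade and an adaptive error rate is desirable.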