
Author Search Result

[Author] Tomoaki MIMOTO (3 hits)

  • Privacy-Preserving Correlation Coefficient

    Tomoaki MIMOTO  Hiroyuki YOKOYAMA  Toru NAKAMURA  Takamasa ISOHARA  Masayuki HASHIMOTO  Ryosuke KOJIMA  Aki HASEGAWA  Yasushi OKUNO  

     
    PAPER
    Publicized: 2023/02/08
    Vol: E106-D No:5
    Page(s): 868-876

    Differential privacy is a confidentiality metric that quantitatively guarantees the confidentiality of individuals. A noise criterion, called sensitivity, must be calculated when constructing a probabilistic disturbance mechanism that satisfies differential privacy. Depending on the statistical process, the sensitivity may be very large or even impossible to compute; as a result, the usefulness of the constructed mechanism may be severely limited, or direct construction may be impossible altogether. In this paper, we first discuss situations in which sensitivity is difficult to calculate, and then propose a variant of differential privacy that adds dummy data as a countermeasure. When the sensitivity under conventional differential privacy is calculable, a mechanism that satisfies the proposed metric simultaneously satisfies conventional differential privacy, and the relationship between the respective privacy parameters can be evaluated. Next, focusing on correlation coefficients as a case study of a statistical process whose sensitivity is difficult to calculate, we derive the sensitivity and propose a probabilistic disturbance mechanism that satisfies the proposed metric. Finally, we experimentally evaluate the noise required by the proposed and direct methods. The experiments show that privacy-preserving correlation coefficients can be derived with less noise than with direct methods.
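
    The direct route that this abstract contrasts against is the standard Laplace mechanism, which adds noise scaled to sensitivity/epsilon. Below is a minimal Python sketch of that direct baseline only, not the paper's dummy-data mechanism; the `sensitivity` value is a placeholder, since deriving a usable bound for the correlation coefficient is precisely the difficulty the paper addresses.

    ```python
    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng):
        """Release `value` with Laplace noise of scale sensitivity/epsilon."""
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    y = 0.8 * x + 0.6 * rng.normal(size=1000)
    r = np.corrcoef(x, y)[0, 1]  # true Pearson correlation

    # Placeholder sensitivity: a sound bound for the correlation
    # coefficient is part of what the paper derives.
    noisy_r = laplace_mechanism(r, sensitivity=0.05, epsilon=1.0, rng=rng)
    print(f"true r = {r:.3f}, noisy r = {noisy_r:.3f}")
    ```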

  • A Taxonomy of Secure Two-Party Comparison Protocols and Efficient Constructions

    Nuttapong ATTRAPADUNG  Goichiro HANAOKA  Shinsaku KIYOMOTO  Tomoaki MIMOTO  Jacob C. N. SCHULDT  

     
    PAPER-Cryptography and Information Security
    Vol: E102-A No:9
    Page(s): 1048-1060

    Secure two-party comparison plays a crucial role in many privacy-preserving applications, such as privacy-preserving data mining and machine learning. In particular, the available comparison protocols with the appropriate input/output configuration have a significant impact on the performance of these applications. In this paper, we first describe a taxonomy of secure two-party comparison protocols that allows us to describe the different configurations used for these protocols in a systematic manner. This taxonomy leads to a total of 216 types of comparison protocols. We then describe conversions among these types. While these conversions are based on known techniques and have explicitly or implicitly been considered previously, we show that a combination of them can be used to convert a perhaps lesser-known two-party comparison protocol by Nergiz et al. (IEEE SocialCom 2010) into a very efficient protocol in a configuration where the two parties hold shares of the values being compared and obtain a share of the comparison result. This setting is often used in multi-party computation protocols, and hence in many privacy-preserving applications as well. We furthermore implement the protocol and measure its performance. Our measurements suggest that the protocol outperforms previously proposed protocols for this input/output configuration when off-line pre-computation is not permitted.
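
    For intuition about the shared-input/shared-output configuration highlighted here, a toy Python sketch using additive secret sharing follows. The "in the clear" step stands in for the actual secure comparison and is not itself secure; the modulus and all names are illustrative assumptions, not details from the paper.

    ```python
    import secrets

    P = 2**61 - 1  # prime modulus for additive sharing (illustrative choice)

    def share(v):
        """Split v mod P into two additive shares."""
        s0 = secrets.randbelow(P)
        return s0, (v - s0) % P

    def reconstruct(s0, s1):
        return (s0 + s1) % P

    # Shared inputs: each party holds one share of x and one share of y.
    x, y = 42, 57
    x0, x1 = share(x)
    y0, y1 = share(y)

    # Stand-in for the protocol: compute b = [x < y] in the clear, then
    # re-share it. A real secure comparison yields the same shared output
    # without any party learning x, y, or b.
    b = 1 if reconstruct(x0, x1) < reconstruct(y0, y1) else 0
    b0, b1 = share(b)
    assert reconstruct(b0, b1) == b  # shared output reconstructs correctly
    ```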

  • Anonymization Technique Based on SGD Matrix Factorization

    Tomoaki MIMOTO  Seira HIDANO  Shinsaku KIYOMOTO  Atsuko MIYAJI  

     
    PAPER-Cryptographic Techniques
    Publicized: 2019/11/25
    Vol: E103-D No:2
    Page(s): 299-308

    Time-sequence data is high dimensional and contains a lot of information that can be utilized in various fields, such as insurance, finance, and advertising. Personal data, including time-sequence data, is converted into anonymized datasets, which must strike a balance between privacy and utility. In this paper, we consider low-rank matrix factorization as an anonymization method and evaluate its efficiency. We convert time-sequence datasets into matrices and evaluate both privacy and utility. The record IDs in time-sequence data are changed at regular intervals to reduce the re-identification risk. However, since individuals tend to behave in a similar fashion over periods of time, a risk of record linkage remains even when record IDs differ. Hence, we evaluate both the re-identification and linkage risks as privacy risks of time-sequence data. Our experimental results show that matrix factorization is a viable anonymization method and can achieve better utility than existing anonymization methods.
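
    A minimal sketch of the general idea, assuming nothing beyond the abstract: factor a record-by-time matrix into low-rank factors by SGD and release the reconstruction instead of the raw data. The rank, learning rate, and toy data below are illustrative assumptions, not parameters from the paper.

    ```python
    import numpy as np

    def sgd_factorize(M, rank=2, lr=0.01, reg=0.02, epochs=300, seed=0):
        """Factor M ~= U @ V.T by stochastic gradient descent."""
        rng = np.random.default_rng(seed)
        n, m = M.shape
        U = 0.1 * rng.standard_normal((n, rank))
        V = 0.1 * rng.standard_normal((m, rank))
        for _ in range(epochs):
            for i in range(n):
                for j in range(m):
                    ui = U[i].copy()  # keep old value for V's update
                    err = M[i, j] - ui @ V[j]
                    U[i] += lr * (err * V[j] - reg * ui)
                    V[j] += lr * (err * ui - reg * V[j])
        return U, V

    # Toy time-sequence data: rows are records, columns are time steps.
    rng = np.random.default_rng(1)
    M = rng.integers(0, 5, size=(8, 6)).astype(float)

    U, V = sgd_factorize(M)
    M_anon = U @ V.T  # low-rank reconstruction released in place of M
    print(np.round(M_anon, 2))
    ```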