
Author Search Result

[Author] Shuichi KATSUMATA (2 hits)

Results 1-2 of 2
  • Constructing Subspace Membership Encryption through Inner Product Encryption

    Shuichi KATSUMATA, Noboru KUNIHIRO

     
    PAPER
    Vol: E100-A  No: 9  Page(s): 1804-1815

    Subspace membership encryption (SME), a generalization of inner product encryption (IPE), was recently formalized by Boneh, Raghunathan, and Segev at Asiacrypt 2013. The main motivation for SME was that traditional predicate encryption schemes did not yield function privacy, a security notion introduced by Boneh et al. at Crypto 2013 that captures the privacy of the predicate associated with the secret key. Although they gave a generic construction of SME based on any IPE, we show that their construction of SME for small attribute spaces was incorrect, and we provide an attack that breaks attribute hiding security, a baseline security notion for predicate encryption that captures the privacy of the attribute associated with the ciphertext. We then propose a generalized construction of SME and prove that attribute hiding security cannot be achieved even in this newly defined setting. Finally, we further extend our generalized construction and propose an SME scheme that achieves the attribute hiding property even when the attribute space is small. In exchange, our proposed scheme does not yield function privacy, and the construction is rather inefficient. Although we did not succeed in constructing an SME scheme that yields both function privacy and attribute hiding security, ours is the first attribute hiding SME scheme whose attribute space is polynomial in the security parameter; moreover, we formalize a richer framework for constructing SMEs and uncover a trade-off-like relationship between the two security notions.
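
    As context for the IPE-to-SME relationship the abstract builds on, below is a rough, insecure Python sketch of the predicate level only; it is not the construction from the paper, and all names and the modulus are hypothetical. The idea it illustrates: IPE decryption succeeds exactly when the attribute and key vectors are orthogonal, and membership of an attribute in a subspace can be tested through inner products with vectors spanning the subspace's orthogonal complement. A real scheme evaluates these conditions inside a bilinear group and hides the attribute vector, which plain vectors obviously do not.

        # Toy, insecure illustration of the predicate structure behind IPE and SME.
        # Everything here is in the clear; a real scheme hides the attribute vector.

        P = 2**61 - 1  # prime modulus (hypothetical choice for this toy example)

        def inner_product(x, v):
            """Inner product of two equal-length vectors mod P."""
            return sum(a * b for a, b in zip(x, v)) % P

        def ipe_predicate(attribute, key_vector):
            """IPE functionality: decryption succeeds iff <attribute, key_vector> = 0 mod P."""
            return inner_product(attribute, key_vector) == 0

        def sme_predicate(attribute, complement_basis):
            """Subspace membership via IPE calls: the attribute lies in the subspace W
            iff it is orthogonal to every vector spanning W's orthogonal complement."""
            return all(ipe_predicate(attribute, w) for w in complement_basis)

        if __name__ == "__main__":
            # W = span{(1,0,0), (0,1,0)}; its orthogonal complement is span{(0,0,1)}.
            complement = [(0, 0, 1)]
            print(sme_predicate((3, 5, 0), complement))  # True: attribute inside W
            print(sme_predicate((3, 5, 7), complement))  # False: attribute outside W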

  • Model Inversion Attacks for Online Prediction Systems: Without Knowledge of Non-Sensitive Attributes

    Seira HIDANO, Takao MURAKAMI, Shuichi KATSUMATA, Shinsaku KIYOMOTO, Goichiro HANAOKA

     
    PAPER-Forensics and Risk Analysis
    Publicized: 2018/08/22  Vol: E101-D  No: 11  Page(s): 2665-2676

    The number of IT services that use machine learning (ML) algorithms is growing continuously and rapidly, and many of them are used in practice to make some type of prediction from personal data. Not surprisingly, due to this sudden boom in ML, the way personal data are handled in ML systems is starting to raise serious privacy concerns that were previously not considered. Recently, Fredrikson et al. [USENIX 2014] [CCS 2015] proposed a novel attack against ML systems called the model inversion attack, which aims to infer sensitive attribute values of a target user. In their work, for the model inversion attack to succeed, the adversary is required to obtain two types of information about the target user prior to the attack: the output value (i.e., prediction) of the ML system and all of the non-sensitive attribute values used to produce that output. Therefore, although the attack does raise new privacy concerns, it is not completely clear how much risk it actually incurs, since the adversary must know all of the non-sensitive values in advance. In particular, even though users may regard these values as non-sensitive, it may be difficult for the adversary to obtain all of them prior to the attack, which would render the attack infeasible. The goal of this paper is to quantify the risk of model inversion attacks when the non-sensitive attributes of a target user are not available to the adversary. To this end, we first propose a general model inversion (GMI) framework that models the amount of auxiliary information available to the adversary. Our framework captures the model inversion attack of Fredrikson et al. as a special case, while also capturing model inversion attacks that infer sensitive attributes without knowledge of the non-sensitive attributes. For the latter type of attack, we provide a general methodology for inferring the sensitive attributes of a target user without knowledge of the non-sensitive attributes. At a high level, we use the data poisoning paradigm in a conceptually novel way: we inject malicious data into the ML system in order to modify the internal ML model into a target ML model, a special type of ML model that allows one to perform model inversion attacks without knowledge of the non-sensitive attributes. Finally, following our general methodology, we cast ML systems that internally use linear regression models into our GMI framework and propose a concrete algorithm for model inversion attacks that does not require knowledge of the non-sensitive attributes. We demonstrate the effectiveness of our model inversion attack through experimental evaluation on two real data sets.
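
    To make the baseline setting concrete, below is a minimal Python sketch of the classical model inversion step against a linear regression model in the setting of Fredrikson et al., where the adversary knows the released prediction, the model parameters, and every non-sensitive attribute, and solves for the single unknown sensitive attribute. It is exactly this knowledge assumption that the paper above seeks to remove, and the sketch does not cover the paper's data poisoning methodology; the weights and data shown are hypothetical.

        # Minimal, illustrative model inversion against a linear regression model.
        # Setting: the adversary knows the prediction y, the model parameters, and
        # all non-sensitive attribute values, and solves for the one sensitive value.
        # Weights and data below are hypothetical.

        def invert_sensitive_attribute(y, weights, bias, known_attrs, sensitive_idx):
            """Solve y = bias + sum_i weights[i] * x[i] for the single unknown
            x[sensitive_idx]; known_attrs maps every other index to its value.
            Requires weights[sensitive_idx] != 0."""
            partial = bias + sum(weights[i] * v for i, v in known_attrs.items())
            return (y - partial) / weights[sensitive_idx]

        if __name__ == "__main__":
            weights = [0.8, -1.2, 2.5]   # learned coefficients (hypothetical)
            bias = 0.3
            x = [1.0, 4.0, 0.7]          # x[2] plays the role of the sensitive attribute
            y = bias + sum(w * v for w, v in zip(weights, x))   # prediction seen by the adversary

            known = {0: x[0], 1: x[1]}   # adversary's knowledge of the non-sensitive attributes
            recovered = invert_sensitive_attribute(y, weights, bias, known, sensitive_idx=2)
            print(round(recovered, 6))   # 0.7 -- exact recovery for a noiseless linear model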