
Author Search Result

[Author] Takao MURAKAMI (4 hits)

  • A General Framework and Algorithms for Score Level Indexing and Fusion in Biometric Identification

    Takao MURAKAMI  Kenta TAKAHASHI  Kanta MATSUURA  

     
    PAPER-Information Network

      Vol: E97-D No:3
      Page(s): 510-523

    Biometric identification has recently attracted attention because of its convenience: it requires neither a user ID nor a smart card. However, both the identification error rate and the response time increase as the number of enrollees grows. In this paper, we combine a score level fusion scheme and a metric space indexing scheme to improve the accuracy and response time of biometric identification, using only scores as information sources. We first propose a score level indexing and fusion framework which can be constructed from the following three schemes: (I) a pseudo-score based indexing scheme, (II) a multi-biometric search scheme, and (III) a score level fusion scheme which handles missing scores. A multi-biometric search scheme can be newly obtained by applying a pseudo-score based indexing scheme to multi-biometric identification. We then propose the NBS (Naive Bayes search) scheme as a multi-biometric search scheme and discuss its optimality with respect to the retrieval error rate. We evaluated our proposal using datasets of multiple fingerprints and of face scores from multiple matchers. The results showed that our proposal significantly improved accuracy over unimodal biometrics while reducing the average number of score computations on both datasets.
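
    The framework itself is specified in the paper; the following is only a minimal, hypothetical Python sketch of the pseudo-score based indexing idea, showing how ordering enrollees by scores against a few pivot templates can cut the number of score computations. The pivot-based distance, the early-termination rule, and all names here are illustrative assumptions, not the paper's NBS algorithm.

        # Sketch: rank enrollees by how closely their precomputed scores against a
        # few pivot templates match the query's scores against the same pivots,
        # then run full matching in that order and stop at the first accept.
        import numpy as np

        def rank_candidates(query_pivot_scores, enrolled_pivot_scores):
            # enrolled_pivot_scores: (n_enrollees, n_pivots), computed at enrollment
            d = np.linalg.norm(enrolled_pivot_scores - query_pivot_scores, axis=1)
            return np.argsort(d)  # most promising enrollees first

        def identify(query, enrollees, pivots, enrolled_pivot_scores, match, threshold):
            q_pivot = np.array([match(query, p) for p in pivots])
            for i in rank_candidates(q_pivot, enrolled_pivot_scores):
                if match(query, enrollees[i]) >= threshold:
                    return i   # early termination saves score computations
            return None        # rejected: no enrollee exceeded the threshold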

  • Modality Selection Attacks and Modality Restriction in Likelihood-Ratio Based Biometric Score Fusion

    Takao MURAKAMI  Yosuke KAGA  Kenta TAKAHASHI  

     
    PAPER-Biometrics

      Vol: E100-A No:12
      Page(s): 3023-3037

    The likelihood-ratio based score level fusion (LR fusion) scheme is known as one of the most promising multibiometric fusion schemes. This scheme verifies a user by computing a log-likelihood ratio (LLR) for each modality and comparing the total LLR to a threshold. In practice, genuine LLRs can tend to be less than 0 for some modalities (e.g., the user is a “goat”, who is inherently difficult to recognize, for some modalities, or suffers from temporary physical conditions such as injuries and illness). The LR fusion scheme can handle such cases by allowing the user to select a subset of modalities at the authentication phase and setting the LLRs corresponding to missing query samples to 0. A recent study, however, proposed a modality selection attack, in which an impostor inputs only query samples whose LLRs are greater than 0 (i.e., takes an optimal strategy), and proved that this attack degrades the overall accuracy even if the genuine user also takes this optimal strategy. In this paper, we investigate the impact of the modality selection attack in more detail. Specifically, we investigate whether the overall accuracy is improved by eliminating “goat” templates, whose LLRs tend to be less than 0 for genuine users, from the database (i.e., restricting modality selection). As an overall performance measure, we use the KL (Kullback-Leibler) divergence between the genuine score distribution and the impostor score distribution. We first prove that the modality restriction hardly increases the KL divergence when a user can select a subset of modalities (i.e., selective LR fusion). We then prove that the modality restriction increases the KL divergence when a user needs to input all biometric samples (i.e., non-selective LR fusion). We conduct experiments using three real datasets (NIST BSSR1 Set1, Biosecure DS2, and CASIA-Iris-Thousand), and discuss directions for multibiometric fusion systems.
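
    As a rough illustration only (not the paper's formal model), the following Python sketch shows the selective LR fusion rule with the LLRs of missing samples set to 0, and the impostor's optimal modality selection strategy; the function names and placeholder values are assumptions.

        # Selective LR fusion: missing modalities (None) contribute an LLR of 0,
        # i.e. they simply drop out of the sum.
        def lr_fusion_verify(llrs, threshold):
            total = sum(llr for llr in llrs if llr is not None)
            return total >= threshold

        # Modality selection attack: the impostor withholds every modality whose
        # LLR would be negative, which can only raise the total LLR.
        def attack_selection(candidate_llrs):
            return [llr if llr > 0 else None for llr in candidate_llrs]

    For example, an impostor whose per-modality LLRs would be [1.2, -3.0, 0.4] presents only the first and third samples, turning a total LLR of -1.4 into 1.6.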

  • Model Inversion Attacks for Online Prediction Systems: Without Knowledge of Non-Sensitive Attributes

    Seira HIDANO  Takao MURAKAMI  Shuichi KATSUMATA  Shinsaku KIYOMOTO  Goichiro HANAOKA  

     
    PAPER-Forensics and Risk Analysis

      Publicized: 2018/08/22
      Vol: E101-D No:11
      Page(s): 2665-2676

    The number of IT services that use machine learning (ML) algorithms is growing continuously and rapidly, and many of them are used in practice to make some type of prediction from personal data. Not surprisingly, due to this sudden boom in ML, the way personal data are handled in ML systems is starting to raise serious privacy concerns that were previously unconsidered. Recently, Fredrikson et al. [USENIX 2014] [CCS 2015] proposed a novel attack against ML systems called the model inversion attack, which aims to infer sensitive attribute values of a target user. In their work, for the model inversion attack to be successful, the adversary is required to obtain two types of information concerning the target user prior to the attack: the output value (i.e., prediction) of the ML system and all of the non-sensitive values used to learn the output. Therefore, although the attack does raise new privacy concerns, since the adversary is required to know all of the non-sensitive values in advance, it is not completely clear how much risk is incurred by the attack. In particular, even though users may regard these values as non-sensitive, it may be difficult for the adversary to obtain all of the non-sensitive attribute values prior to the attack, making the attack invalid. The goal of this paper is to quantify the risk of model inversion attacks when the non-sensitive attributes of a target user are not available to the adversary. To this end, we first propose a general model inversion (GMI) framework, which models the amount of auxiliary information available to the adversary. Our framework captures the model inversion attack of Fredrikson et al. as a special case, while also capturing model inversion attacks that infer sensitive attributes without knowledge of the non-sensitive attributes. For the latter attack, we provide a general methodology for inferring sensitive attributes of a target user without knowledge of the non-sensitive attributes. At a high level, we use the data poisoning paradigm in a conceptually novel way and inject malicious data into the ML system in order to modify the internal ML model into a target ML model: a special type of ML model which allows one to perform model inversion attacks without knowledge of the non-sensitive attributes. Finally, following our general methodology, we cast ML systems that internally use linear regression models into our GMI framework and propose a concrete algorithm for model inversion attacks that does not require knowledge of the non-sensitive attributes. We show the effectiveness of our model inversion attack through experimental evaluation using two real datasets.
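
    For intuition, the Python sketch below shows only the basic inversion step on a linear regression in the original setting of Fredrikson et al., where the non-sensitive attributes are known; the paper's actual contribution, a poisoning attack that removes this knowledge requirement, is not reproduced here. All names are illustrative assumptions.

        # With a linear model y = bias + w_s * x_s + w_ns . x_ns, an adversary who
        # knows the prediction y, the model weights, and the non-sensitive
        # attributes x_ns can solve exactly for the sensitive attribute x_s.
        import numpy as np

        def invert_sensitive(y, w_sensitive, w_nonsensitive, x_nonsensitive, bias):
            residual = y - bias - np.dot(w_nonsensitive, x_nonsensitive)
            return residual / w_sensitive  # recovered sensitive attribute value

    The paper's attack instead poisons the training data so that the learned model no longer requires x_ns for this inversion step.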

  • Information-Theoretic Performance Evaluation of Multibiometric Fusion under Modality Selection Attacks

    Takao MURAKAMI  Yosuke KAGA  Kenta TAKAHASHI  

     
    PAPER-Cryptography and Information Security

      Vol: E99-A No:5
      Page(s): 929-942

    The likelihood-ratio based score level fusion (LR-based fusion) scheme has attracted much attention, since it maximizes accuracy if the log-likelihood ratio (LLR) is accurately estimated. In reality, it can happen that a user cannot input some query samples due to temporary physical conditions such as injuries and illness. It can also happen that some modalities tend to cause false rejection (i.e., the user is a “goat” for these modalities). The LR-based fusion scheme can handle these situations by setting the LLRs corresponding to missing query samples to 0. In this paper, we refer to such a mode as a “modality selection mode” and address the issue of accuracy in this mode. Specifically, we provide the following contributions: (1) We first propose a “modality selection attack”, in which an impostor inputs only query samples whose LLRs are greater than 0 (i.e., takes an optimal strategy) to impersonate others. We also show that the impostor can perform this attack against the SPRT (Sequential Probability Ratio Test)-based fusion scheme, which is an extension of the LR-based fusion scheme to a sequential fusion scenario. (2) We then consider the case when both genuine users and impostors take this optimal strategy, and show that the overall accuracy in this case is worse than the case when they input all query samples. More specifically, we prove that the KL (Kullback-Leibler) divergence between the genuine distribution of integrated scores and that of an impostor, which can be compared with password entropy, is smaller in the former case. We also show to what extent the KL divergence is lost for each modality. (3) Finally, we evaluate to what extent the overall accuracy degrades using the NIST BSSR1 Set 2 and Set 3 datasets, and discuss directions for multibiometric applications based on the experimental results.
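
    A minimal Python sketch of the KL divergence measure used above, estimated from score samples; the histogram binning and smoothing constants are illustrative assumptions, not taken from the paper.

        # Estimate D_KL(genuine || impostor) from integrated-score samples via
        # shared-bin histograms with additive smoothing to avoid log(0).
        import numpy as np

        def kl_divergence_from_samples(genuine, impostor, bins=50, eps=1e-9):
            lo = min(genuine.min(), impostor.min())
            hi = max(genuine.max(), impostor.max())
            edges = np.linspace(lo, hi, bins + 1)
            p, _ = np.histogram(genuine, bins=edges)
            q, _ = np.histogram(impostor, bins=edges)
            p = (p + eps) / (p + eps).sum()  # smoothed, normalized distributions
            q = (q + eps) / (q + eps).sum()
            return float(np.sum(p * np.log(p / q)))

    A larger divergence means genuine and impostor scores are easier to separate, which is why the paper uses it as an overall accuracy measure comparable to password entropy.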