
Author Search Result

[Author] Jason Paul CRUZ (2 hits)

1-2 of 2 hits
  • APVAS: Reducing the Memory Requirement of AS_PATH Validation by Introducing Aggregate Signatures into BGPsec

    Ouyang JUNJIE  Naoto YANAI  Tatsuya TAKEMURA  Masayuki OKADA  Shingo OKAMURA  Jason Paul CRUZ  

     
    PAPER

      Publicized: 2023/01/11
      Vol: E106-A No.3
      Page(s): 170-184

    BGPsec, an extension of the Border Gateway Protocol (BGP) for Internet routing, uses digital signatures to guarantee the validity of routing information. However, attaching digital signatures to routing information consumes a large amount of memory in BGP routers, and this problem hinders the practical realization and deployment of BGPsec, leaving a gaping security hole in today's Internet. In this paper, we present APVAS (AS path validation based on aggregate signatures), a new protocol that reduces the memory consumption of routers running BGPsec when validating paths in routing information. APVAS relies on a novel aggregate signature scheme that compresses individually generated signatures into a single signature. Furthermore, we implement a prototype of APVAS on the BIRD Internet Routing Daemon and demonstrate its efficiency on actual BGP connections. Our results show that the routing tables of routers running BGPsec with APVAS consume 20% less memory than those running conventional BGPsec. We also confirm the effectiveness of APVAS in the real world by using 800,000 routes, which is equivalent to the full route information on a global scale.
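
    The abstract does not specify APVAS's aggregate signature construction, so the sketch below is only an illustration of the general idea rather than the paper's scheme: with standard BLS aggregation (here via the py_ecc library, whose G2ProofOfPossession API is assumed), the per-hop AS_PATH signatures that BGPsec would otherwise store individually collapse into a single constant-size signature, which is where the memory saving comes from. The AS numbers, prefix, and key seeds below are made up for the example.

      # Illustrative only: APVAS defines its own aggregate signature scheme; this
      # sketch substitutes standard BLS aggregation to show the memory-saving idea:
      # one aggregate signature replaces one signature per AS hop on the path.
      from py_ecc.bls import G2ProofOfPossession as bls

      # Hypothetical 3-hop AS_PATH; each AS signs the announcement it forwards.
      as_path = [65001, 65002, 65003]
      messages = [f"prefix=203.0.113.0/24 via AS{asn}".encode() for asn in as_path]

      secret_keys = [bls.KeyGen((1000 + i).to_bytes(32, "big")) for i in range(len(as_path))]
      public_keys = [bls.SkToPk(sk) for sk in secret_keys]

      # Conventional BGPsec-style storage: one signature per hop.
      per_hop_sigs = [bls.Sign(sk, msg) for sk, msg in zip(secret_keys, messages)]

      # Aggregate-signature storage: all hop signatures compressed into one.
      aggregate_sig = bls.Aggregate(per_hop_sigs)
      assert bls.AggregateVerify(public_keys, messages, aggregate_sig)

      print("per-hop signatures:", sum(len(s) for s in per_hop_sigs), "bytes")
      print("aggregate signature:", len(aggregate_sig), "bytes")

    With BLS, each individual signature is 96 bytes, so a path of length n shrinks from roughly 96n bytes of signatures to 96 bytes; the sketch only mirrors the principle the abstract describes for the signatures attached to BGPsec AS_PATH attributes.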

  • Efficient Secure Neural Network Prediction Protocol Reducing Accuracy Degradation

    Naohisa NISHIDA  Tatsumi OBA  Yuji UNAGAMI  Jason Paul CRUZ  Naoto YANAI  Tadanori TERUYA  Nuttapong ATTRAPADUNG  Takahiro MATSUDA  Goichiro HANAOKA  

     
    PAPER-Cryptography and Information Security

      Vol: E103-A No.12
      Page(s): 1367-1380

    Machine learning models inherently memorize significant amounts of information, and thus hiding not only the prediction process but also the trained model, i.e., model obliviousness, is desirable in the cloud setting. Several works achieved model obliviousness on the MNIST dataset, but datasets with more complicated samples, e.g., CIFAR-10 and CIFAR-100, are also used in actual applications such as face recognition. Secret sharing-based secure prediction for CIFAR-10 is difficult to achieve: when a deep architecture such as a CNN is used, the error introduced by secure computation becomes large and the accuracy deteriorates, while performing more precise calculations to recover accuracy requires a large amount of computation. Therefore, even if a conventional method is applied to a CNN as it is, the good results described in its paper cannot be obtained. In this paper, we propose two approaches to solve this problem. First, we propose a new protocol named Batch-normalized Activation that combines BatchNormalization and Activation. Because BatchNormalization involves real-number operations, its parameters must be converted into integers for secure computation, which introduces calculation errors and decreases accuracy. Our protocol eliminates these calculation errors and thus the accuracy degradation; it also simplifies the processing and reduces the amount of computation. Second, we explore an architecture that is both friendly to secure computation and highly accurate. Related works use low-accuracy, simple architectures, but in practice a high-accuracy architecture should be used, so we also explore such an architecture for the CIFAR-10 dataset. Our proposed protocol can compute a prediction on CIFAR-10 within 15.05 seconds with 87.36% accuracy while providing model obliviousness.
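
    The abstract does not describe how Batch-normalized Activation combines the two layers, so the following plaintext sketch only illustrates the flavor of such a folding under stated assumptions: assuming a sign/step activation (as in binarized networks) and integer-valued pre-activations, BatchNormalization followed by the activation reduces to a single integer comparison against a precomputed threshold, so the online computation involves no real-number arithmetic and hence no conversion error. All parameter values are invented for the example, and no secret sharing is performed here.

      # Plaintext illustration only (no secret sharing). Assuming a sign activation,
      # sign(gamma * (x - mu) / sigma + beta) == sign(x - tau) for gamma > 0, where
      # tau = mu - beta * sigma / gamma can be precomputed offline; for gamma < 0
      # the comparison direction simply flips.
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical integer-valued pre-activations and BatchNormalization parameters.
      x = rng.integers(-128, 128, size=1000)
      gamma, beta = 1.7, -0.4   # learned scale and shift (gamma > 0 here)
      mu, sigma = 3.2, 5.9      # running mean and standard deviation

      # Reference: BatchNormalization followed by a sign activation, in real arithmetic.
      reference = np.sign(gamma * (x - mu) / sigma + beta)

      # Folded form: because x is integer-valued, the real threshold tau can be
      # rounded up to an integer, so the online phase is a single integer comparison.
      tau = mu - beta * sigma / gamma
      tau_int = int(np.ceil(tau))
      folded = np.where(x >= tau_int, 1.0, -1.0)

      assert np.array_equal(reference, folded)
      print("BN + sign activation folded into the single comparison x >=", tau_int)

    In a secret-sharing setting the comparison would be carried out on shares, but the point of the folding is that the BatchNormalization parameters never need to be approximated in fixed point during the online prediction.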