Hanae NOZAKI, Atsushi SHIMBO, Shinichi KAWAMURA
This paper proposes a new algorithm that achieves roughly a two-fold speedup of modular exponentiation implemented with Montgomery multiplication based on Residue Number Systems (RNS). In RNS Montgomery multiplication, performance is dominated by two base transformations. To process these base transformations in parallel, i.e., by "duplicate processing," we present two procedures for RNS Montgomery multiplication in which the RNS bases a and b are interchanged, and we perform them alternately in the modular exponentiation iteration. In an implementation study, a 1.87-times speedup was obtained for 1024-bit modular multiplication. The proposed RNS Montgomery multiplication algorithm also has the advantage of achieving performance as if the upper limit on the number of parallel processing units were doubled.
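As an illustration of the two interchanged procedures, the following is a minimal Python sketch of RNS Montgomery multiplication over two toy coprime bases. The moduli, operand values, and helper names are illustrative assumptions; exact CRT reconstruction stands in for the paper's fast base transformation, and the parallel scheduling of the two transformations ("duplicate processing") is not reproduced.

```python
# Minimal sketch of RNS Montgomery multiplication with interchangeable bases.
# Assumptions: toy coprime moduli; exact CRT reconstruction in place of the
# paper's fast base transformation; no parallel ("duplicate") scheduling.
from math import prod

def to_rns(x, base):
    return [x % m for m in base]

def from_rns(residues, base):
    """Exact CRT reconstruction (stands in for a base transformation)."""
    M = prod(base)
    x = 0
    for r, m in zip(residues, base):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def rns_montgomery_mul(x, y, N, base_main, base_aux):
    """Return x*y*M^{-1} mod N with M = prod(base_main).

    Swapping base_main and base_aux gives the interchanged procedure,
    whose Montgomery factor is prod(base_aux) instead of prod(base_main).
    """
    M = prod(base_main)
    s = x * y
    # t = s * (-N^{-1}) mod M, channel-wise in the main base.
    t_res = [(sr * (m - pow(N, -1, m))) % m
             for sr, m in zip(to_rns(s, base_main), base_main)]
    t = from_rns(t_res, base_main)        # base transformation 1: main -> aux
    # u = (s + t*N) / M, channel-wise in the auxiliary base (exact division,
    # realized as multiplication by M^{-1} mod each channel modulus).
    u_res = [((sr + (t % m) * N) * pow(M, -1, m)) % m
             for sr, m in zip(to_rns(s, base_aux), base_aux)]
    u = from_rns(u_res, base_aux)         # base transformation 2: aux -> main
    return u % N

if __name__ == "__main__":
    N = 2011                              # toy odd modulus
    A = [13, 17, 19, 23]                  # RNS base a
    B = [29, 31, 37, 41]                  # RNS base b
    x, y = 1234, 987
    MA, MB = prod(A), prod(B)
    # Procedure 1 (base a as Montgomery base) and procedure 2 (bases interchanged).
    assert rns_montgomery_mul(x, y, N, A, B) == (x * y * pow(MA, -1, N)) % N
    assert rns_montgomery_mul(x, y, N, B, A) == (x * y * pow(MB, -1, N)) % N
    print("both interchanged procedures verified")
```

Alternating the two procedures across the iterations of the square-and-multiply loop is what allows the base transformations of consecutive multiplications to be executed in parallel in the proposed scheme.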
In the field of machine learning security, the application of side-channel analysis such as correlation power/electromagnetic analysis (CPA/CEMA) is expanding as an attack surface, especially for edge devices. Aiming to evaluate the leakage resistance of neural network (NN) model parameters, i.e., weights and biases, we conducted a feasibility study of CPA/CEMA on floating-point (FP) operations, which are the basic operations of NNs. This paper proposes approaches for recovering weights and biases using CPA/CEMA on multiplication and addition operations, respectively. Taking the characteristics of the IEEE 754 representation into account is essential to achieving high precision and efficiency in the recovery. We show that CPA/CEMA on FP operations requires approaches different from those of traditional CPA/CEMA on cryptographic implementations such as AES.
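As a rough illustration of the weight-recovery idea, below is a minimal Python sketch of CPA against a single-precision FP multiplication, using a Hamming-weight model of the IEEE 754 encoding of the product. The traces are simulated, and the noise level, candidate grid, and parameter values (e.g. true_weight) are illustrative assumptions rather than the paper's measurement setup.

```python
# Minimal CPA sketch against one FP multiplication (simulated traces).
# Assumption: leakage = Hamming weight of the IEEE 754 bits of the product,
# plus Gaussian noise; the candidate grid and noise level are illustrative.
import struct
import numpy as np

def f32_hamming_weight(x):
    """Hamming weight of the IEEE 754 single-precision encoding of x."""
    bits = struct.unpack("<I", struct.pack("<f", float(x)))[0]
    return bin(bits).count("1")

rng = np.random.default_rng(0)
true_weight = 0.8125                                   # secret NN weight to recover
inputs = rng.uniform(-1.0, 1.0, 2000)                  # known inputs to the multiply

# Simulated power traces for the operation input * true_weight.
traces = np.array([f32_hamming_weight(x * true_weight) for x in inputs], float)
traces += rng.normal(0.0, 2.0, traces.shape)           # measurement noise

# Coarse candidate grid; a real attack would refine sign, exponent, and mantissa.
candidates = np.arange(-1.0, 1.0001, 0.0625)

best, best_rho = None, -1.0
for w in candidates:
    hyp = np.array([f32_hamming_weight(x * w) for x in inputs], float)
    if hyp.std() == 0:                                 # skip degenerate hypotheses
        continue
    rho = abs(np.corrcoef(hyp, traces)[0, 1])          # Pearson correlation (CPA)
    if rho > best_rho:
        best, best_rho = w, rho

print(f"recovered weight {best:.4f} with |rho| = {best_rho:.3f}")
```

In the same spirit, bias recovery targets the FP addition by replacing the hypothetical intermediate with the IEEE 754 encoding of the sum.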