
Keyword Search Result

[Keyword] SIS(3079hit)

161-180hit(3079hit)

  • Model Reverse-Engineering Attack against Systolic-Array-Based DNN Accelerator Using Correlation Power Analysis Open Access

    Kota YOSHIDA  Mitsuru SHIOZAKI  Shunsuke OKURA  Takaya KUBOTA  Takeshi FUJINO  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    152-161

    A model extraction attack is a security issue in deep neural networks (DNNs). Information on a trained DNN model is an attractive target for an adversary not only in terms of intellectual property but also of security. Thus, an adversary tries to reveal the sensitive information contained in a trained DNN model from machine-learning services. Previous studies on model extraction attacks assumed that the victim provides a machine-learning cloud service and the adversary accesses the service through formal queries. However, when a DNN model is implemented on an edge device, an adversary can physically access the device and try to reveal the sensitive information contained in the implemented DNN model. We call these physical model extraction attacks model reverse-engineering (MRE) attacks to distinguish them from attacks on cloud services. Power side-channel analyses are often used in MRE attacks to reveal the internal operation from power consumption or electromagnetic leakage. Previous studies, including ours, evaluated MRE attacks against several types of DNN processors with power side-channel analyses. In this paper, information leakage from a systolic array, which is used as the matrix multiplication unit in DNN processors, is evaluated. We utilized correlation power analysis (CPA) for the MRE attack and revealed the weight parameters of a DNN model from the systolic array. Two types of systolic arrays were implemented on a field-programmable gate array (FPGA) to demonstrate that CPA reveals weight parameters from those systolic arrays. In addition, we applied an extended analysis approach called “chain CPA” for more robust analysis against the systolic arrays. Our experimental results indicate that an adversary can reveal trained model parameters from a DNN accelerator even if the DNN model parameters on the off-chip bus are protected with data encryption. Countermeasures against side-channel leaks will be important for implementing a DNN accelerator on an FPGA or an application-specific integrated circuit (ASIC).
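
    As a rough illustration of the CPA step, the sketch below correlates a Hamming-weight leakage hypothesis with simulated power traces to recover one multiplier weight. The traces, the leakage model, and all parameters are illustrative assumptions, not the authors' measurement setup.

    ```python
    # Minimal CPA sketch: recover an 8-bit weight from simulated traces of a
    # multiply-accumulate cell, assuming Hamming-weight leakage plus noise.
    import numpy as np

    rng = np.random.default_rng(0)
    SECRET_WEIGHT = 43                        # weight the attacker wants to recover
    inputs = rng.integers(0, 256, size=2000)  # known inputs fed to the array

    def hamming_weight(values):
        return np.array([bin(int(v)).count("1") for v in values])

    # Simulated traces: leakage = HW(weight * input) + Gaussian noise.
    traces = hamming_weight((SECRET_WEIGHT * inputs) & 0xFFFF) \
             + rng.normal(0, 2.0, inputs.size)

    # Correlate the hypothetical leakage of every weight guess with the traces.
    corr = [abs(np.corrcoef(hamming_weight((g * inputs) & 0xFFFF), traces)[0, 1])
            for g in range(256)]
    print("recovered weight:", int(np.argmax(corr)))  # 43 with high probability
    ```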

  • Solving the MQ Problem Using Gröbner Basis Techniques

    Takuma ITO  Naoyuki SHINOHARA  Shigenori UCHIYAMA  

     
    PAPER

      Vol:
    E104-A No:1
      Page(s):
    135-142

    The multivariate public key cryptosystem (MPKC) is one of the major post-quantum cryptosystems (PQC), and the National Institute of Standards and Technology (NIST) recently selected four MPKCs as candidates in its PQC standardization process. The security of MPKC depends on the hardness of solving systems of algebraic equations over finite fields. In particular, the multivariate quadratic (MQ) problem is that of solving such a system consisting of quadratic polynomials, and it is regarded as an important research subject in cryptography. In the Fukuoka MQ challenge project, the hardness of the MQ problem is discussed, and algorithms for solving the MQ problem and the computational results obtained by these algorithms are reported. Algorithms for computing Gröbner bases are used as the main tools for solving the MQ problem. For example, the F4 algorithm and the M4GB algorithm have succeeded in solving many instances of the MQ problem provided by the project. In this paper, based on the F4-style algorithm, we present an efficient algorithm for solving MQ problems with the dense polynomials generated in the Fukuoka MQ challenge project. We experimentally show that our algorithm requires less computational time and memory for these MQ problems than the F4 and M4GB algorithms. Using our algorithm, we succeeded in solving the Type II and Type III problems of the Fukuoka MQ challenge with 37 variables in both problems.
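
    For readers unfamiliar with the MQ problem itself, the toy sketch below generates a small quadratic system over GF(2) and solves it by exhaustive search; it only illustrates the problem statement and does not reproduce the paper's F4-style Gröbner-basis algorithm.

    ```python
    # Toy MQ instance over GF(2): m quadratic equations in n variables,
    # solved by brute force (feasible only for tiny n).
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 6, 8
    A = rng.integers(0, 2, size=(m, n, n))    # quadratic coefficients
    b = rng.integers(0, 2, size=(m, n))       # linear coefficients
    x_secret = rng.integers(0, 2, size=n)
    c = np.array([(x_secret @ A[k] @ x_secret + b[k] @ x_secret) % 2
                  for k in range(m)])         # right-hand sides

    def satisfies(x):
        x = np.array(x)
        return all((x @ A[k] @ x + b[k] @ x) % 2 == c[k] for k in range(m))

    solutions = [x for x in itertools.product((0, 1), repeat=n) if satisfies(x)]
    print("solutions:", solutions)            # includes tuple(x_secret)
    ```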

  • Body Part Connection, Categorization and Occlusion Based Tracking with Correction by Temporal Positions for Volleyball Spike Height Analysis

    Xina CHENG  Ziken LI  Songlin DU  Takeshi IKENAGA  

     
    PAPER-Vision

      Vol:
    E103-A No:12
      Page(s):
    1503-1511

    The spike height of volleyball players is important in volleyball analysis as a quantitative criterion for evaluating players' motions. It not only provides rich information to audiences in live broadcasts of sports events but also contributes to evaluating and improving player performance in strategy analysis and player training. In volleyball game scenes, the high similarity between hands, deformation, and occlusion are the three main problems that affect the acquisition of spike height. To solve these problems, this paper proposes an observation model based on body part connection, categorization, and occlusion, together with a temporal-position-based correction method. First, skin-pixel-filter-based connection detection addresses the high similarity between hands by judging whether a hand is connected to the spiking player. Second, the body part categorization based observation uses the probability distribution map of the hand to determine the category of each body part, solving the deformation problem. Third, the occlusion part detection based observation eliminates the influence of views with occluded body parts by detecting occluded views with a trained body-part classifier. Finally, the temporal position based correction combines the estimated results, which refer to historical positions, with the posterior result to obtain an optimal result weighted by degree of confidence. The experiments are based on videos of the final and semi-final games of the 2014 Japan Inter High School Men's Volleyball tournament in Tokyo Metropolitan Gymnasium, comprising 196 spike sequences from 4 teams. The experimental results show that the spike height is successfully detected in 93.37% of the test sequences, with an average height error of 5.96 cm.

  • PCA-LDA Based Color Quantization Method Taking Account of Saliency

    Yoshiaki UEDA  Seiichi KOJIMA  Noriaki SUETAKE  

     
    LETTER-Image

      Vol:
    E103-A No:12
      Page(s):
    1613-1617

    In this letter, we propose a color quantization method based on saliency. In the proposed method, the salient colors are selected as representative colors preferentially by using saliency as weights. Through experiments, we verify the effectiveness of the proposed method.
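
    The core idea, letting salient pixels pull the representative colors, can be illustrated with a plain saliency-weighted k-means quantizer; this is a hypothetical stand-in, not the letter's PCA-LDA method.

    ```python
    # Saliency-weighted k-means: cluster centers are weighted means, so
    # high-saliency pixels dominate the choice of representative colors.
    import numpy as np

    def weighted_kmeans(colors, saliency, k=8, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = colors[rng.choice(len(colors), k, replace=False)].astype(float)
        for _ in range(iters):
            labels = np.argmin(((colors[:, None] - centers) ** 2).sum(-1), axis=1)
            for j in range(k):
                mask = labels == j
                if mask.any():                # saliency-weighted cluster mean
                    w = saliency[mask][:, None]
                    centers[j] = (colors[mask] * w).sum(0) / w.sum()
        return centers

    pixels = np.random.default_rng(2).integers(0, 256, (5000, 3)).astype(float)
    saliency = np.random.default_rng(3).random(5000)   # per-pixel saliency
    print(weighted_kmeans(pixels, saliency).round(1))  # 8 representative colors
    ```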

  • Predicting Violence Rating Based on Pairwise Comparison

    Ying JI  Yu WANG  Jien KATO  Kensaku MORI  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2020/08/28
      Vol:
    E103-D No:12
      Page(s):
    2578-2589

    With the rapid development of multimedia, violent videos can easily be accessed in games, movies, websites, and so on. Identifying violent videos and rating their violence extent is of great importance for media filtering and child protection. Many previous studies only address the problems of violence scene detection and violent action recognition, yet the violence rating problem remains unsolved. In this paper, we present a novel video-level rating prediction method to estimate violence extent automatically. It has two main characteristics: (1) a two-stream network is fine-tuned to construct effective representations of violent videos; (2) a violence rating prediction machine is designed to learn the strength relationship among different videos. Furthermore, we present a novel violent video dataset with a total of 1,930 human-involved violent videos designed for violence rating analysis. Each video is annotated with 6 fine-grained objective attributes, which are considered to be closely related to violence extent. The ground truth of the violence rating is given by a pairwise comparison method. The dataset is evaluated in terms of both stability and convergence. Experimental results on this dataset demonstrate the effectiveness of our method compared with state-of-the-art classification methods.
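
    One standard way to turn pairwise "more violent" judgments into scalar ratings is the Bradley-Terry model, sketched below with toy comparison counts; the paper's exact aggregation scheme may differ.

    ```python
    # Bradley-Terry ratings from pairwise comparisons via the classic
    # minorization-maximization update.
    import numpy as np

    def bradley_terry(wins, iters=200):
        """wins[i, j] = times video i was judged more violent than video j."""
        n = wins.shape[0]
        p = np.ones(n)
        for _ in range(iters):
            for i in range(n):
                den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                          for j in range(n) if j != i)
                if den > 0:
                    p[i] = wins[i].sum() / den
            p /= p.sum()
        return p                              # larger value = higher rating

    wins = np.array([[0, 8, 9],               # toy counts for three videos
                     [2, 0, 7],
                     [1, 3, 0]])
    print(bradley_terry(wins))
    ```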

  • Performance Analysis of the Interval Algorithm for Random Number Generation in the Case of Markov Coin Tossing Open Access

    Yasutada OOHAMA  

     
    PAPER-Shannon Theory

      Vol:
    E103-A No:12
      Page(s):
    1325-1336

    In this paper we analyze the interval algorithm for random number generation proposed by Han and Hoshi in the case of Markov coin tossing. We first establish an explicit representation of the interval algorithm based on the representation of real numbers on the interval [0,1) in terms of number systems. Next, using this representation, we give a rigorous analysis of the interval algorithm. We discuss the difference between the expected number of coin tosses in the interval algorithm and the upper bound derived by Han and Hoshi, and show that this difference can be characterized explicitly using the established representation of the interval algorithm.
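
    The basic version of the Han-Hoshi interval algorithm, with an i.i.d. biased coin rather than the Markov coin analyzed in the paper, can be sketched as follows.

    ```python
    # Interval algorithm sketch: a biased coin repeatedly splits the current
    # subinterval of [0,1); a symbol is emitted once the subinterval fits
    # entirely inside one cell of the target partition.
    import random

    def interval_algorithm(target_probs, coin_prob=0.6, rng=random):
        bounds = [0.0]
        for q in target_probs:                # target partition of [0,1)
            bounds.append(bounds[-1] + q)
        lo, hi = 0.0, 1.0
        while True:
            mid = lo + (hi - lo) * coin_prob  # split by the coin distribution
            if rng.random() < coin_prob:
                hi = mid
            else:
                lo = mid
            for k in range(len(target_probs)):
                if bounds[k] <= lo and hi <= bounds[k + 1]:
                    return k                  # interval fits one target cell

    samples = [interval_algorithm([0.2, 0.5, 0.3]) for _ in range(10000)]
    print([samples.count(k) / 10000 for k in range(3)])  # ~ [0.2, 0.5, 0.3]
    ```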

  • A Fault Detection and Diagnosis Method for Via-Switch Crossbar in Non-Volatile FPGA

    Ryutaro DOI  Xu BAI  Toshitsugu SAKAMOTO  Masanori HASHIMOTO  

     
    PAPER

      Vol:
    E103-A No:12
      Page(s):
    1447-1455

    FPGAs that exploit via-switches, a kind of non-volatile resistive RAM, for crossbar implementation are attracting attention due to their high integration density and energy efficiency. The via-switch crossbar is responsible for signal routing in the interconnections by changing the on/off-states of via-switches. To verify the via-switch crossbar functionality after manufacturing, fault testing that checks whether the via-switches can be turned on/off normally is essential. This paper confirms that a general differential-pair comparator successfully discriminates the on/off-states of via-switches, and clarifies the fault modes of a via-switch by transistor-level SPICE simulation that injects stuck-on/off faults into the atom switches and varistors, where a via-switch consists of two atom switches and two varistors. We then propose a fault diagnosis methodology for via-switches in the crossbar that diagnoses the fault modes according to the difference in comparator response between normal and faulty via-switches. The proposed method achieves 100% fault detection by checking the comparator responses after turning the via-switch on/off. When a via-switch contains a single faulty component, the fault diagnosis ratio, i.e., the ratio of cases in which the faulty varistor or atom switch inside the faulty via-switch is exactly identified, is 100%; with up to two faults, the fault diagnosis ratio is 79%.

  • Compressed Sensing Framework Applying Independent Component Analysis after Undersampling for Reconstructing Electroencephalogram Signals Open Access

    Daisuke KANEMOTO  Shun KATSUMATA  Masao AIHARA  Makoto OHKI  

     
    PAPER-Biometrics

      Publicized:
    2020/06/22
      Vol:
    E103-A No:12
      Page(s):
    1647-1654

    This paper proposes a novel compressed sensing (CS) framework for reconstructing electroencephalogram (EEG) signals. A feature of this framework is the application of independent component analysis (ICA) to remove the interference from artifacts after undersampling in a data processing unit. Therefore, we can remove the ICA processing block from the sensing unit. In this framework, we used a random undersampling measurement matrix to suppress the Gaussianity. The developed framework, in which the discrete cosine transform basis and orthogonal matching pursuit were used, was evaluated using raw EEG signals with a pseudo-model of an eye-blink artifact. The normalized mean square error (NMSE) and correlation coefficient (CC), obtained as the average of 2,000 results, were compared to quantitatively demonstrate the effectiveness of the proposed framework. The evaluation results for the NMSE and CC showed that the proposed framework could remove the interference from the artifacts under a high compression ratio.
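
    The named building blocks (random undersampling, a DCT basis, and orthogonal matching pursuit) combine in a few lines, as sketched below; the synthetic signal is a stand-in for EEG data, and the ICA artifact-removal stage is omitted.

    ```python
    # Compressed-sensing sketch: random undersampling + DCT basis + OMP.
    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(5)
    N, M = 256, 96                                  # signal length, kept samples
    Psi = idct(np.eye(N), norm="ortho", axis=0)     # x = Psi @ s with s sparse
    s_true = np.zeros(N)
    s_true[7], s_true[21] = 1.0, 0.5                # two active DCT atoms
    x = Psi @ s_true                                # synthetic "EEG" signal

    keep = np.sort(rng.choice(N, M, replace=False)) # random undersampling
    A, y = Psi[keep], x[keep]

    def omp(A, y, sparsity=10):
        residual, support = y.copy(), []
        coef = np.zeros(0)
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
            if np.linalg.norm(residual) < 1e-12:
                break
        s = np.zeros(A.shape[1])
        s[support] = coef
        return s

    x_hat = Psi @ omp(A, y)
    print("NMSE:", np.sum((x - x_hat) ** 2) / np.sum(x ** 2))  # ~ 0
    ```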

  • DNN-Based Full-Band Speech Synthesis Using GMM Approximation of Spectral Envelope

    Junya KOGUCHI  Shinnosuke TAKAMICHI  Masanori MORISE  Hiroshi SARUWATARI  Shigeki SAGAYAMA  

     
    PAPER-Speech and Hearing

      Publicized:
    2020/09/03
      Vol:
    E103-D No:12
      Page(s):
    2673-2681

    We propose a speech analysis-synthesis and deep neural network (DNN)-based text-to-speech (TTS) synthesis framework using Gaussian mixture model (GMM)-based approximation of full-band spectral envelopes. GMMs have excellent properties as acoustic features in statistical parametric speech synthesis. Each Gaussian function of a GMM fits the local resonance of the spectrum. The GMM retains the fine structure of the spectral envelope and achieves high controllability. However, since conventional speech analysis methods (i.e., GMM parameter estimation) have been formulated for narrow-band speech, they degrade the quality of synthetic speech. Moreover, a DNN-based TTS synthesis method using GMM-based approximation has not been formulated in spite of its excellent expressive ability. Therefore, we employ peak-picking-based initialization for full-band speech analysis to provide better initialization for the iterative estimation of the GMM parameters. We introduce not only the prediction error of GMM parameters but also the reconstruction error of the spectral envelopes as objective criteria for training the DNN. Furthermore, we propose a method for multi-task learning based on minimizing these errors simultaneously. We also propose a post-filter based on variance scaling of the GMM for our framework to enhance the synthetic speech. Experimental results from evaluating our framework indicated that 1) the initialization method of our framework outperformed the conventional one in the quality of analysis-synthesized speech; 2) introducing the reconstruction error in DNN training significantly improved the synthetic speech; 3) our variance-scaling-based post-filter further improved the synthetic speech.
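
    The combination of peak-picking initialization and iterative GMM fitting can be shown on a toy spectral envelope; the envelope, the frequency axis, and the EM details below are illustrative assumptions, not the paper's full-band analysis.

    ```python
    # Fit Gaussians to a fake 3-formant envelope: initialize means at
    # spectral peaks, then run weighted EM with the envelope as a density.
    import numpy as np
    from scipy.signal import find_peaks

    freq = np.linspace(0, 1, 512)             # normalized frequency axis
    env = (np.exp(-0.5 * ((freq - 0.10) / 0.02) ** 2)
           + 0.6 * np.exp(-0.5 * ((freq - 0.35) / 0.04) ** 2)
           + 0.3 * np.exp(-0.5 * ((freq - 0.70) / 0.05) ** 2))

    peaks, _ = find_peaks(env, height=0.1)
    mu = freq[peaks]                          # peak-picking initialization
    sigma = np.full_like(mu, 0.03)
    w = np.full_like(mu, 1.0 / len(mu))
    dens = (env / env.sum())[:, None]         # envelope as a weight function

    for _ in range(50):                       # weighted EM updates
        resp = w * np.exp(-0.5 * ((freq[:, None] - mu) / sigma) ** 2) / sigma
        resp /= resp.sum(axis=1, keepdims=True)
        g = (resp * dens).sum(axis=0)         # soft counts per Gaussian
        mu = (resp * dens * freq[:, None]).sum(axis=0) / g
        sigma = np.sqrt((resp * dens * (freq[:, None] - mu) ** 2).sum(axis=0) / g)
        w = g / g.sum()

    print(np.round(mu, 3))                    # means near 0.10, 0.35, 0.70
    ```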

  • Formulation of a Test Pattern Measure That Counts Distinguished Fault-Pairs for Circuit Fault Diagnosis

    Tsutomu INAMOTO  Yoshinobu HIGAMI  

     
    PAPER

      Vol:
    E103-A No:12
      Page(s):
    1456-1463

    In this paper, we aim to develop technologies for circuit fault diagnosis and propose a formulation of a test pattern measure for it. Given a faulty circuit, fault diagnosis is the task of deducing the locations of the faults that occurred in the circuit. Fault diagnosis is executed in software before the failure analysis by which engineers inspect physical defects, and it helps to improve the manufacturing process that yielded the faulty circuits. The heart of fault diagnosis is to distinguish between candidate faults by using test patterns, which are applied to the circuit-under-diagnosis (CUD), and thus test patterns that can distinguish as many faults as possible need to be generated. This fact motivates us to consider a test pattern measure based on the number of fault-pairs that become distinguished by a test pattern. To the best of the authors' knowledge, that measure requires computational time of complexity order O(N_F^2), where N_F denotes the number of candidate faults. Since N_F is generally large for real industrial circuits, the computational time of the measure is long even when a high-performance computer is used. The formulation proposed in this paper makes it possible to calculate the measure in computational complexity O(N_F log N_F), and thus the measure becomes useful for test pattern selection in fault diagnosis. In computational experiments, the effectiveness of the formulation is demonstrated by sample computational times of the measure calculated with the traditional and the proposed formulae, and through comparisons between several greedy heuristics that are based on the measure.
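
    One plausible reading of how the count becomes computable in O(N_F log N_F) is to group faults by their simulated response signature (a sort or hash) and subtract within-group pairs from the total; the sketch below uses made-up signatures, and the paper's exact formulation may differ.

    ```python
    # Count fault-pairs distinguished by a test pattern: two faults are
    # distinguished iff their response signatures differ, so the count is
    # C(n, 2) minus the pairs inside each identical-signature group.
    from collections import Counter

    def distinguished_pairs(signatures):
        n = len(signatures)
        same = sum(g * (g - 1) // 2 for g in Counter(signatures).values())
        return n * (n - 1) // 2 - same

    # Four faults; faults 0 and 2 respond identically, so one pair is lost.
    print(distinguished_pairs(["0110", "1010", "0110", "0001"]))  # 5
    ```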

  • On the Signal-to-Noise Ratio for Boolean Functions

    Yu ZHOU  Wei ZHAO  Zhixiong CHEN  Weiqiong WANG  Xiaoni DU  

     
    LETTER-Cryptography and Information Security

      Publicized:
    2020/05/25
      Vol:
    E103-A No:12
      Page(s):
    1659-1665

    The notion of the signal-to-noise ratio (SNR), proposed by Guilley et al. in 2004, is a property that attempts to characterize the resilience of (n, m)-functions F=(f1,...,fm) (cryptographic S-boxes) against differential power analysis. However, the study of the SNR of a single Boolean function remains an important direction. In this paper, we give tight upper and lower bounds on the SNR of any (balanced) Boolean function. We also deduce some tight upper bounds on the SNR of balanced Boolean functions satisfying the propagation criterion. Moreover, we obtain an SNR relationship between an n-variable Boolean function and its two (n-1)-variable decomposition functions. Meanwhile, we give SNR(f⊞g) and SNR(f⊡g) for any balanced Boolean functions f, g. Finally, we give a lower bound on SNR(F), determined by SNR(fi) (1≤i≤m), for an (n, m)-function F=(f1,f2,…,fm).
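
    For a single Boolean function (m = 1), the SNR can be computed from the Walsh spectrum. The sketch below uses SNR(f) = 2^(2n) * (sum_a W_f(a)^4)^(-1/2), our reading of the definition by Guilley et al., on an arbitrary example function.

    ```python
    # Walsh spectrum via the fast Walsh-Hadamard transform, then SNR.
    import numpy as np

    def walsh_spectrum(truth_table):
        w = np.where(np.array(truth_table) == 0, 1.0, -1.0)  # (-1)^f(x)
        h = 1
        while h < len(w):
            for i in range(0, len(w), 2 * h):
                a = w[i:i + h].copy()
                b = w[i + h:i + 2 * h].copy()
                w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
            h *= 2
        return w

    def snr(truth_table):
        n = int(np.log2(len(truth_table)))
        W = walsh_spectrum(truth_table)
        return 2 ** (2 * n) / np.sqrt(np.sum(W ** 4))

    f = [0, 1, 1, 0, 1, 0, 0, 1]      # n = 3, f(x) = x0 XOR x1 XOR x2
    print(snr(f))                     # 1.0 for this linear function
    ```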

  • High Level Congestion Detection from C/C++ Source Code for High Level Synthesis Open Access

    Masato TATSUOKA  Mineo KANEKO  

     
    PAPER

      Vol:
    E103-A No:12
      Page(s):
    1437-1446

    High level synthesis (HLS) is a source-code-driven Register Transfer Level (RTL) design tool, and the performance, power consumption, and area of a generated RTL are limited partly by the description of the HLS input source code. In order to break through this kind of limitation and obtain a further optimized RTL, optimization of the input source code is indispensable. Routing congestion is one such problem for which we need to consider refining the HLS input source code. In this paper, we propose a novel HLS flow that improves the code by detecting congested parts directly from the HLS input source code, without using physical logic synthesis, and regenerating the input source code for HLS. In our approach, the origin of the wire congestion is detected from the HLS input source code by applying pattern matching on the Program Dependence Graph (PDG) constructed from the source code, and the possibility of wire congestion is reported.

  • Design and Performance Analysis of a Skin-Stretcher Device for Urging Head Rotation

    Takahide ITO  Yuichi NAKAMURA  Kazuaki KONDO  Espen KNOOP  Jonathan ROSSITER  

     
    PAPER-Human-computer Interaction

      Publicized:
    2020/08/03
      Vol:
    E103-D No:11
      Page(s):
    2314-2322

    This paper introduces a novel skin-stretcher device for gently urging head rotation. The device pulls and/or pushes the skin on the user's neck by using servo motors. The user is induced to rotate his/her head based on the sensation caused by the local stretching of the skin. This mechanism informs the user when and how much head rotation is requested; however, it does not force head rotation, i.e., it allows the user to ignore the stimuli and maintain voluntary movements. We implemented a prototype device and analyzed the performance of the skin stretcher as a human-in-the-loop system. Experimental results define its fundamental characteristics, such as input-output gain, settling time, and other dynamic behaviors. For example, the input-output gain is stable under the same installation condition but varies between users.

  • Validation Measurement of Hybrid Propagation Analysis Suitable for Airport Surface in VHF Band and Its Application to Realistic Situations

    Ryosuke SUGA  Satoshi KURODA  Atsushi KEZUKA  

     
    PAPER

      Publicized:
    2020/04/10
      Vol:
    E103-C No:11
      Page(s):
    582-587

    The authors previously proposed a hybrid electromagnetic field analysis method suitable for an airport surface. In this paper, the hybrid method is validated by measurements using a 1/50 scale model of an airport, considering several building layouts and sloping ground. The measured power distributions agreed with the analyzed ones within 5 dB except at null points, and the null positions of the distribution were also estimated within an error of one wavelength.

  • Testing Homogeneity for Normal Mixture Models: Variational Bayes Approach

    Natsuki KARIYA  Sumio WATANABE  

     
    PAPER-Information Theory

      Vol:
    E103-A No:11
      Page(s):
    1274-1282

    The test of homogeneity for normal mixtures has been used in various fields, but its theoretical understanding is limited because the parameter set of the null hypothesis corresponds to singular points in the parameter space. In this paper, we shed light on this issue from a new perspective, variational Bayes, and offer a theory for testing homogeneity based on it. In conventional theory, the stochastic behavior of the variational free energy, which is necessary for constructing a hypothesis test, has remained unknown. We clarify it for the first time and construct a new test based on it. Numerical experiments show the validity of our results.

  • Contact Current Density Analysis Inside Human Body in Low-Frequency Band Using Geometric Multi-Grid Solver

    Masamune NOMURA  Yuki NAKAMURA  Hiroo TARAO  Amane TAKEI  

     
    PAPER

      Publicized:
    2020/03/24
      Vol:
    E103-C No:11
      Page(s):
    588-596

    This paper describes the effectiveness of the geometric multi-grid method in a current density analysis using a numerical human body model. The scalar potential finite difference (SPFD) method is used as a numerical method for analyzing the current density inside a human body due to contact with charged objects in a low-frequency band, and research has been conducted on methods for solving the large-scale simultaneous equations arising from the SPFD method faster. In previous research, the block incomplete Cholesky conjugate gradients (ICCG) method was proposed as an effective method for solving the simultaneous equations faster. However, even when the block ICCG method is used, many iterations are still needed. Therefore, in this study, we focus on the geometric multi-grid method as a way to solve this problem. We develop a geometric multi-grid solver and evaluate its performance by comparing it with the block ICCG method in terms of computation time and number of iterations. The results show that the number of iterations needed for the geometric multi-grid method is much smaller than that for the block ICCG method. In addition, the computation time is much shorter, depending on the number of threads and the number of coarse grids. Also, by using multi-color ordering, the parallel performance of the geometric multi-grid method can be greatly improved.
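
    The coarse-grid correction that cuts the iteration count can be illustrated with a minimal two-grid cycle for the 1-D Poisson equation; the paper's SPFD problem is three-dimensional and far larger, so this is only a structural sketch.

    ```python
    # Two-grid cycle for -u'' = f on [0,1] with damped-Jacobi smoothing.
    import numpy as np

    def jacobi(u, f, h, iters, omega=2 / 3):
        for _ in range(iters):
            u[1:-1] = ((1 - omega) * u[1:-1]
                       + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
        return u

    def two_grid(u, f, h):
        u = jacobi(u, f, h, 3)                          # pre-smoothing
        r = np.zeros_like(u)                            # residual r = f - Au
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2
        rc = r[::2].copy()                              # restrict to coarse grid
        ec = jacobi(np.zeros_like(rc), rc, 2 * h, 50)   # coarse-grid "solve"
        e = np.interp(np.arange(len(u)),
                      np.arange(0, len(u), 2), ec)      # prolongate the error
        return jacobi(u + e, f, h, 3)                   # post-smoothing

    n = 65
    h = 1.0 / (n - 1)
    x = np.linspace(0, 1, n)
    f = np.pi ** 2 * np.sin(np.pi * x)                  # exact u = sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid(u, f, h)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```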

  • The Absolute Consistency Problem for Relational Schema Mappings with Functional Dependencies

    Yasunori ISHIHARA  Takashi HAYATA  Toru FUJIWARA  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2020/08/06
      Vol:
    E103-D No:11
      Page(s):
    2278-2288

    This paper discusses a static analysis problem, called absolute consistency problem, for relational schema mappings. A given schema mapping is said to be absolutely consistent if every source instance has a corresponding target instance. Absolute consistency is an important property because it guarantees that data exchange never fails for any source instance. Originally, for XML schema mappings, the absolute consistency problem was defined and its complexity was investigated by Amano et al. However, as far as the authors know, there are no known results for relational schema mappings. In this paper, we focus on relational schema mappings such that both the source and the target schemas have functional dependencies, under the assumption that mapping rules are defined by constant-free tuple-generating dependencies. In this setting, we show that the absolute consistency problem is in coNP. We also show that it is solvable in polynomial time if the tuple-generating dependencies are full and the size of the left-hand side of each functional dependency is bounded by some constant. Finally, we show that the absolute consistency problem is coNP-hard even if the source schema has no functional dependency and the target schema has only one; or each of the source and the target schemas has only one functional dependency such that the size of the left-hand side of the functional dependency is at most two.

  • Proposing High-Smart Approach for Content Authentication and Tampering Detection of Arabic Text Transmitted via Internet

    Fahd N. AL-WESABI  

     
    PAPER-Information Network

      Publicized:
    2020/07/17
      Vol:
    E103-D No:10
      Page(s):
    2104-2112

    The security and reliability of Arabic text exchanged via the Internet have become a challenging area for the research community. Arabic text is very sensitive to modification by malicious attacks, and it is easy to make changes to diacritics, i.e., Fat-ha, Kasra, and Damma, which represent the syntax of the Arabic language and whose alteration can change the meaning. In this paper, a Hybrid of Natural Language Processing and Zero-Watermarking Approach (HNLPZWA) is proposed for the content authentication and tampering detection of Arabic text. The HNLPZWA approach embeds and detects the watermark logically, without altering the original text document to embed a watermark key. A fifth-level-order word mechanism based on a hidden Markov model is integrated with digital zero-watermarking techniques to improve on the tampering detection accuracy of approaches proposed in the previous literature. The fifth-level-order Markov model is used as a natural language processing technique to analyze the Arabic text. Moreover, the approach extracts features of the interrelationships between contexts of the text, utilizes the extracted features as watermark information, and validates them later against the attacked Arabic text to detect any tampering. HNLPZWA has been implemented using PHP with the VS Code IDE. The tampering detection accuracy of HNLPZWA is evaluated experimentally using four datasets of varying lengths under insertion, reordering, and deletion attacks at multiple random locations. The experimental results show that HNLPZWA is sensitive to all kinds of tampering attacks, with a high level of detection accuracy.
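
    A much-simplified analogue of the idea, character-level Markov transition counts used as a zero-watermark and compared against a received copy, is sketched below; the paper's fifth-level word-order model and Arabic-specific processing are not reproduced.

    ```python
    # Zero-watermark sketch: the "watermark" is a bag of order-2 character
    # transitions extracted from the original text, never embedded in it.
    from collections import Counter

    def markov_signature(text, order=2):
        return Counter(text[i:i + order + 1] for i in range(len(text) - order))

    def tampering_score(original_sig, received_text, order=2):
        received_sig = markov_signature(received_text, order)
        keys = set(original_sig) | set(received_sig)
        diff = sum(abs(original_sig[k] - received_sig[k]) for k in keys)
        return diff / sum(original_sig.values())  # 0.0 means no change found

    sig = markov_signature("an example sentence to protect")
    print(tampering_score(sig, "an example sentence to protect"))   # 0.0
    print(tampering_score(sig, "an examples sentence to protect"))  # > 0
    ```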

  • Recent Advances in Practical Secure Multi-Party Computation Open Access

    Satsuya OHATA  

     
    INVITED PAPER-cryptography

      Vol:
    E103-A No:10
      Page(s):
    1134-1141

    Secure multi-party computation (MPC) allows a set of parties to compute a function jointly while keeping their inputs private. MPC has been actively studied, and there are many research results in both the theoretical and practical fields. In this paper, we introduce the basics of MPC and show recent practical advances. We first explain the settings, security notions, and cryptographic building blocks of MPC. Then, we show and discuss the current state of higher-level secure protocols, privacy-preserving data analysis, and frameworks/compilers for implementing MPC applications at low cost.
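
    A building block that many practical MPC frameworks rest on is additive secret sharing; the minimal two-party sketch below shows shares of a sum being computed locally with no communication, and is not any specific protocol from the survey.

    ```python
    # Two-party additive secret sharing over Z_{2^32}.
    import secrets

    MOD = 2 ** 32

    def share(x):
        r = secrets.randbelow(MOD)
        return r, (x - r) % MOD        # party 0 holds r, party 1 holds x - r

    def reconstruct(s0, s1):
        return (s0 + s1) % MOD

    a0, a1 = share(20)
    b0, b1 = share(22)
    # Addition of shared values is local: each party adds its own shares.
    print(reconstruct((a0 + b0) % MOD, (a1 + b1) % MOD))  # 42
    ```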

  • Local Riesz Pyramid for Faster Phase-Based Video Magnification

    Shoichiro TAKEDA  Megumi ISOGAI  Shinya SHIMIZU  Hideaki KIMATA  

     
    PAPER

      Publicized:
    2020/06/22
      Vol:
    E103-D No:10
      Page(s):
    2036-2046

    Phase-based video magnification methods can magnify and reveal subtle motion changes invisible to the naked eye. In these methods, each image frame in a video is decomposed into an image pyramid, and subtle motion changes are then detected as local phase changes with arbitrary orientations at each pixel and each pyramid level. One problem with this process is the long computational time needed to calculate the local phase changes, which makes high-speed video magnification difficult. Recently, a decomposition technique called the Riesz pyramid has been proposed that detects only local phase changes in the dominant orientation. This technique removes the arbitrariness of orientations and lowers the over-completeness, thus achieving high-speed processing. However, as the resolution of the input video increases, a large amount of data must be processed, requiring a long computational time. In this paper, we focus on the correlation of local phase changes between adjacent pyramid levels and present a novel decomposition technique called the local Riesz pyramid that enables faster phase-based video magnification by automatically processing the minimum sufficient set of local image areas at several pyramid levels. Through this minimum pyramid processing, our proposed phase-based video magnification method using the local Riesz pyramid achieves good magnification results within a short computational time.
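
    A single-level, FFT-based Riesz transform is enough to show the local amplitude and phase that each pyramid level monitors; the test image and the crude band-pass step below are illustrative assumptions, not the authors' local pyramid.

    ```python
    # Riesz pair via frequency-domain filters -i*wx/|w| and -i*wy/|w|,
    # then local amplitude and local phase per pixel.
    import numpy as np

    def riesz_local_phase(img):
        band = img - img.mean()                      # crude band-pass
        fy = np.fft.fftfreq(band.shape[0])[:, None]
        fx = np.fft.fftfreq(band.shape[1])[None, :]
        r = np.hypot(fx, fy)
        r[0, 0] = 1.0                                # avoid division by zero
        F = np.fft.fft2(band)
        r1 = np.real(np.fft.ifft2(-1j * fx / r * F))
        r2 = np.real(np.fft.ifft2(-1j * fy / r * F))
        q = np.hypot(r1, r2)                         # quadrature magnitude
        return np.hypot(band, q), np.arctan2(q, band)  # amplitude, phase

    y, x = np.mgrid[0:64, 0:64]
    img = np.cos(2 * np.pi * x / 16.0)               # vertical stripes
    amplitude, phase = riesz_local_phase(img)
    print(amplitude.shape, phase[32, :4])
    ```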
