
Author Search Result

[Author] Takeshi FUJINO (18 hits)

1-18 of 18 hits
  • Hierarchical-Masked Image Filtering for Privacy-Protection

    Takeshi KUMAKI  Takeshi FUJINO  

     
    PAPER-Privacy, anonymity, and fundamental theory

    Publicized: 2017/07/21
    Vol: E100-D No:10
    Page(s): 2327-2338

    This paper presents a hierarchical-masked image filtering method for privacy protection. Cameras are widely used for various applications, e.g., crime surveillance, environment monitoring, and marketing. However, invasion of privacy has become a serious social problem, especially regarding the use of surveillance cameras. Many surveillance cameras point at many people; thus, a large amount of private information about our daily activities is under surveillance. However, surveillance cameras currently on the market and those in related research often have complicated or institutional masking functionality for privacy protection. To overcome this problem, a Hierarchical-Masked image Filtering (HMF) method is proposed, which provides an unmasking (mask reversal) capability, is applicable to current surveillance camera systems for protecting private information, and satisfies privacy-protection requirements. This method has five main features: recovery of the original image from only the masked image and a cipher key, hierarchical mask-level control using parameters for the length of a pseudorandom number, robustness against malicious attackers, fast processing on an embedded processor, and applicability of the mask operation to current surveillance camera systems. Previous studies have had difficulty providing all of these features. To evaluate HMF on actual equipment, an HMF-based prototype system is developed that mainly consists of a USB web camera, an ultra-compact single-board computer, and a notebook PC. Through experiments, it is confirmed that the proposed method achieves mask-level control and is robust against attacks. The increase in processing time of the HMF-based prototype system compared with a conventional non-masking system is only about 1.4%. This paper also reports on a comparison of the proposed method with conventional privacy-protection methods and on the favorable responses of people toward the HMF-based prototype system both domestically and abroad. Therefore, the proposed HMF method can be applied to embedded systems such as those equipped with surveillance cameras for protecting privacy.
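
    The abstract does not spell out the filter construction itself, but the core property it claims (a masked image that only the key holder can reverse, with the mask strength controlled by the length of a pseudorandom number) can be illustrated with a minimal sketch. The XOR-stream construction, function names, and the bit-length interpretation of "mask level" below are illustrative assumptions, not the published HMF algorithm.

```python
import numpy as np

def keystream(key: int, n: int, level: int) -> np.ndarray:
    """Pseudorandom byte stream derived from a cipher key.
    'level' limits how many low-order bits of each pixel get masked,
    standing in for the hierarchical mask-level control."""
    rng = np.random.default_rng(key)            # toy PRNG; a real system would use a cipher
    mask = (1 << level) - 1
    return rng.integers(0, 256, n, dtype=np.uint8) & mask

def mask_image(img: np.ndarray, key: int, level: int) -> np.ndarray:
    """Reversible masking: XOR each pixel with the key-derived stream."""
    ks = keystream(key, img.size, level).reshape(img.shape)
    return img ^ ks

def unmask_image(masked: np.ndarray, key: int, level: int) -> np.ndarray:
    """Only the masked image and the cipher key are needed to recover the original."""
    return mask_image(masked, key, level)        # XOR is its own inverse

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(unmask_image(mask_image(img, 42, 6), 42, 6), img)
```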

  • A Low Power Embedded DRAM Macro for Battery-Operated LSIs

    Takeshi FUJINO  Akira YAMAZAKI  Yasuhiko TAITO  Mitsuya KINOSHITA  Fukashi MORISHITA  Teruhiko AMANO  Masaru HARAGUCHI  Makoto HATAKENAKA  Atsushi AMO  Atsushi HACHISUKA  Kazutami ARIMOTO  Hideyuki OZAKI  

     
    PAPER-Power Optimization

    Vol: E86-A No:12
    Page(s): 2991-3000

    A low-power 16 Mb embedded DRAM (eDRAM) macro is fabricated using 0.15 µm logic-based embedded DRAM process technology. A 0.5 µm2 CUB (Capacitor Under Bit-line) DRAM cell is newly developed for this process. Novel start-up and dynamic fuse-data loading circuits are developed to realize easy customization of memory capacities with minimum area penalty. A new write-mask control circuit using a write-gate sense amplifier is adopted in order to apply a column shift-redundancy circuit. Various low-power technologies, including a unique "non-precharge read-data bus" method, are applied. In the test chip adopting the new process technology and the three original circuit-design techniques, random column operation at 166 MHz and a data retention power of 123 µW are demonstrated at a 1.5 V power supply.

  • Fault Injection Attacks Utilizing Waveform Pattern Matching against Neural Networks Processing on Microcontroller Open Access

    Yuta FUKUDA  Kota YOSHIDA  Takeshi FUJINO  

     
    PAPER

    Publicized: 2021/09/22
    Vol: E105-A No:3
    Page(s): 300-310

    Deep learning applications have often been processed in the cloud or on servers, but for applications that require privacy protection and real-time processing, the execution environment is moving to edge devices. Edge devices that implement a neural network (NN) are physically accessible to an attacker, so physical attacks are a risk. Fault attacks on these devices are capable of misleading classification results and can lead to serious accidents. We therefore focus on the softmax function and evaluate a fault attack using a clock glitch against an NN implemented on an 8-bit microcontroller. The clock glitch is used for fault injection, and the injection timing is controlled by monitoring the power waveform. A specific waveform is enrolled in advance, and the glitch timing pulse is generated by a sum of absolute differences (SAD) matching algorithm. Misclassification can be achieved by appropriately injecting glitches triggered by pattern detection. We propose a countermeasure against fault injection attacks that utilizes randomization of the power waveform. The SAD matching is disabled by random-number initialization of the summation register of the softmax function.
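
    As a rough illustration of the SAD matching step described above, the sketch below slides an enrolled reference waveform over a measured power trace and returns the offset where the glitch-injection pulse would be triggered. The trace shapes and the threshold are hypothetical, not measurements from the paper.

```python
import numpy as np

def sad_trigger(trace, reference, threshold):
    """Return the sample index where the enrolled pattern best matches the live trace,
    i.e. where the glitch pulse would be generated, or None if no match is good enough."""
    m = len(reference)
    best_idx, best_sad = None, np.inf
    for i in range(len(trace) - m + 1):
        sad = np.abs(trace[i:i + m] - reference).sum()    # sum of absolute differences
        if sad < best_sad:
            best_idx, best_sad = i, sad
    return best_idx if best_sad < threshold else None

# toy example: the enrolled pattern is buried in a noisy trace around sample 500
rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 3 * np.pi, 64))
trace = rng.normal(0, 0.2, 1024)
trace[500:564] += reference
print(sad_trigger(trace, reference, threshold=20.0))      # ~500
```

    The countermeasure in the abstract works by randomizing the power waveform (a random initial value in the softmax summation register), which is exactly the kind of change that breaks this pattern match.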

  • Adversarial Scan Attack against Scan Matching Algorithm for Pose Estimation in LiDAR-Based SLAM Open Access

    Kota YOSHIDA  Masaya HOJO  Takeshi FUJINO  

     
    PAPER

    Publicized: 2021/10/26
    Vol: E105-A No:3
    Page(s): 326-335

    Autonomous robots are controlled using physical information acquired by various sensors. The sensors are susceptible to physical attacks, which tamper with the observed values and interfere with control of the autonomous robots. Recently, sensor spoofing attacks targeting the downstream algorithms that use sensor data have become a large threat. In this paper, we introduce a new attack against the LiDAR-based simultaneous localization and mapping (SLAM) algorithm. The attack uses an adversarial LiDAR scan to fool the pose graph and the generated map. The adversary calculates a falsification amount for deceiving pose estimation and physically injects the spoofed distances into the LiDAR. The falsification amount is calculated by a gradient method against a cost function of the scan matching algorithm. The SLAM algorithm then generates a wrong map from the deceived movement path estimated by scan matching. We evaluated our attack on two typical scan matching algorithms, iterative closest point (ICP) and the normal distributions transform (NDT). Our experimental results show that SLAM can be fooled by tampering with the scan, and that simple odometry sensor fusion is not a sufficient countermeasure. We argue that it is important to detect or prevent tampering with LiDAR scans and to notice inconsistencies in sensors caused by physical attacks.
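
    A minimal sketch of the attack idea follows: treat the scan-matching cost (here a simple point-to-point, ICP-style squared-distance sum, not the exact cost used in the paper) as a function of the injected range falsification, and push each spoofed LiDAR range along the cost gradient so that matching at the true pose degrades. The cost function, finite-difference gradient, and step sizes are illustrative assumptions.

```python
import numpy as np

def icp_cost(scan, ref):
    """Sum of squared nearest-neighbour distances (toy stand-in for the ICP/NDT cost)."""
    d = np.linalg.norm(scan[:, None, :] - ref[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def adversarial_ranges(ranges, angles, ref, step=0.01, iters=50, eps=1e-3):
    """Follow a finite-difference gradient of the matching cost w.r.t. each LiDAR range
    so that the cost at the true pose grows and scan matching drifts to a deceived pose."""
    r = ranges.copy()
    for _ in range(iters):
        grad = np.zeros_like(r)
        base = np.c_[r * np.cos(angles), r * np.sin(angles)]
        c0 = icp_cost(base, ref)
        for i in range(len(r)):
            rp = r.copy()
            rp[i] += eps
            pts = np.c_[rp * np.cos(angles), rp * np.sin(angles)]
            grad[i] = (icp_cost(pts, ref) - c0) / eps
        r += step * np.sign(grad)
    return r

angles = np.linspace(-np.pi / 4, np.pi / 4, 30)
ranges = np.full(30, 5.0)                                  # true ranges to a wall 5 m away
ref = np.c_[5.0 * np.cos(angles), 5.0 * np.sin(angles)]    # map points seen at the true pose
spoofed = adversarial_ranges(ranges, angles, ref)
```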

  • Experimental Study of Fault Injection Attack on Image Sensor Interface for Triggering Backdoored DNN Models Open Access

    Tatsuya OYAMA  Shunsuke OKURA  Kota YOSHIDA  Takeshi FUJINO  

     
    PAPER

    Publicized: 2021/10/26
    Vol: E105-A No:3
    Page(s): 336-343

    A backdoor attack is a type of attack method that induces deep neural network (DNN) misclassification. An adversary mixes poison data, which consist of images tampered with adversarial marks at specific locations and labeled with adversarial target classes, into a training dataset. The backdoored model classifies only images with adversarial marks into an adversarial target class and all other images into the correct classes. However, the attack performance degrades sharply when the location of the adversarial marks is slightly shifted. An adversarial mark that induces the misclassification of a DNN is usually applied when a picture is taken, so the backdoor attack will have difficulty succeeding in the physical world because the adversarial mark position fluctuates. This paper proposes a new approach in which an adversarial mark is applied using fault injection on the mobile industry processor interface (MIPI) between an image sensor and the image recognition processor. Two independent attack drivers are electrically connected to the MIPI data lane in our attack system. While almost all image signals are transferred from the sensor to the processor without tampering (the attack signals of the two drivers cancel each other), the adversarial mark is injected into a given location of the image signal by activating the attack signal generated by the two attack drivers. In an experiment, the DNN was implemented on a Raspberry Pi 4 to classify MNIST handwritten images transferred from the image sensor over the MIPI. The adversarial mark successfully appeared in a specific small part of the MNIST images using our attack system. The success rate of the backdoor attack using this adversarial mark was 91%, which is much higher than the 18% rate achieved using conventional input-image tampering.
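
    The backdoor mechanism itself is conventional; as a point of reference, a minimal sketch of how poison data are usually built (a small mark stamped at a fixed location and the sample relabelled to the target class) is shown below. The mark shape, position, and target class are hypothetical; the paper's contribution is injecting such a mark electrically over the MIPI rather than in the image file.

```python
import numpy as np

def stamp_mark(img, top, left, size=4):
    """Place a small white square (the adversarial mark) at a fixed location."""
    out = img.copy()
    out[top:top + size, left:left + size] = 255
    return out

def poison(images, labels, target, top, left):
    """Backdoor training data: marked images relabelled to the adversarial target class."""
    marked = np.stack([stamp_mark(im, top, left) for im in images])
    return marked, np.full_like(labels, target)

# e.g. 28x28 MNIST-like images, mark near the lower-right corner, target class 7
imgs = np.zeros((10, 28, 28), dtype=np.uint8)
labels = np.arange(10)
poison_imgs, poison_labels = poison(imgs, labels, target=7, top=22, left=22)
```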

  • Model Reverse-Engineering Attack against Systolic-Array-Based DNN Accelerator Using Correlation Power Analysis Open Access

    Kota YOSHIDA  Mitsuru SHIOZAKI  Shunsuke OKURA  Takaya KUBOTA  Takeshi FUJINO  

     
    PAPER

    Vol: E104-A No:1
    Page(s): 152-161

    A model extraction attack is a security issue in deep neural networks (DNNs). Information on a trained DNN model is an attractive target for an adversary not only in terms of intellectual property but also of security. Thus, an adversary tries to reveal the sensitive information contained in the trained DNN model from machine-learning services. Previous studies on model extraction attacks assumed that the victim provides a machine-learning cloud service and the adversary accesses the service through formal queries. However, when a DNN model is implemented on an edge device, adversaries can physically access the device and try to reveal the sensitive information contained in the implemented DNN model. We call these physical model extraction attacks model reverse-engineering (MRE) attacks to distinguish them from attacks on cloud services. Power side-channel analyses are often used in MRE attacks to reveal the internal operation from power consumption or electromagnetic leakage. Previous studies, including ours, evaluated MRE attacks against several types of DNN processors with power side-channel analyses. In this paper, information leakage from a systolic array, which is used as the matrix multiplication unit in DNN processors, is evaluated. We utilized correlation power analysis (CPA) for the MRE attack and reveal the weight parameters of a DNN model from the systolic array. Two types of systolic arrays were implemented on a field-programmable gate array (FPGA) to demonstrate that CPA reveals weight parameters from those systolic arrays. In addition, we applied an extended analysis approach called “chain CPA” for robust CPA against the systolic arrays. Our experimental results indicate that an adversary can reveal trained model parameters from a DNN accelerator even if the DNN model parameters on the off-chip bus are protected with data encryption. Countermeasures against side-channel leaks will be important for implementing a DNN accelerator on an FPGA or application-specific integrated circuit (ASIC).
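
    For orientation, correlation power analysis ranks candidate values by the Pearson correlation between measured power samples and a leakage model computed from known inputs. The sketch below recovers one 8-bit weight using a Hamming-weight model of the weight-times-input product; the quantization, leakage model, and simulated traces are illustrative assumptions, not the exact intermediate values targeted in the paper.

```python
import numpy as np

HW = np.array([bin(i).count("1") for i in range(256)])     # Hamming-weight lookup table

def cpa_recover_weight(inputs, traces):
    """Return the 8-bit weight candidate whose hypothetical leakage correlates best
    with the measured power samples (one sample per multiply-accumulate here)."""
    best_w, best_corr = None, -1.0
    for w in range(1, 256):                    # w = 0 gives a constant hypothesis; skip it
        hyp = HW[(w * inputs) & 0xFF]          # Hamming weight of the product
        corr = abs(np.corrcoef(hyp, traces)[0, 1])   # Pearson correlation coefficient
        if corr > best_corr:
            best_w, best_corr = w, corr
    return best_w

# toy experiment: simulate leakage of the true weight plus Gaussian noise
rng = np.random.default_rng(1)
true_w = 173
inputs = rng.integers(0, 256, 2000)
traces = HW[(true_w * inputs) & 0xFF] + rng.normal(0, 1.0, inputs.size)
print(cpa_recover_weight(inputs, traces))                   # 173 with high probability
```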

  • Unified Coprocessor Architecture for Secure Key Storage and Challenge-Response Authentication

    Koichi SHIMIZU  Daisuke SUZUKI  Toyohiro TSURUMARU  Takeshi SUGAWARA  Mitsuru SHIOZAKI  Takeshi FUJINO  

     
    PAPER-Hardware Based Security

    Vol: E97-A No:1
    Page(s): 264-274

    In this paper we propose a unified coprocessor architecture that, by using a Glitch PUF and a block cipher, efficiently unifies the functions necessary for secure key storage and challenge-response authentication. Based on the fact that a Glitch PUF uses random logic for the purpose of generating glitches, the proposed architecture is designed around a block cipher circuit such that its round functions can be shared with the Glitch PUF as its random logic. As a concrete example, a circuit structure using a Glitch PUF and an AES circuit is presented, and evaluation results for its implementation on an FPGA are provided. In addition, a physical random number generator using the same circuit is proposed. Evaluation results from the two major test suites for randomness, NIST SP 800-22 and Diehard, are provided, showing that the physical random number generator passes both test suites.
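
    For context, the simplest of the NIST SP 800-22 tests mentioned above, the frequency (monobit) test, fits in a few lines; this is the standard textbook formulation and only a quick sanity check, not the full evaluation performed in the paper.

```python
import math
import random

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test; a p-value >= 0.01 is a pass."""
    n = len(bits)
    s_obs = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# example: bits that would come from the physical random number generator under test
bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_frequency_test(bits))
```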

  • High Uniqueness Arbiter-Based PUF Circuit Utilizing RG-DTM Scheme for Identification and Authentication Applications

    Mitsuru SHIOZAKI  Kota FURUHASHI  Takahiko MURAYAMA  Akitaka FUKUSHIMA  Masaya YOSHIKAWA  Takeshi FUJINO  

     
    PAPER

    Vol: E95-C No:4
    Page(s): 468-477

    Silicon physical unclonable functions (PUFs) have been proposed to exploit inherent characteristics caused by process variations, such as transistor size and threshold voltage, in order to produce an inexpensive and tamper-resistant device for applications such as IC identification, authentication, and key generation. We have focused on the arbiter-PUF, which utilizes the relative delay-time difference between equivalent paths. The conventional arbiter-PUF has a technical issue: low uniqueness caused by non-uniformity in response generation. To enhance the uniqueness, a novel arbiter-based PUF utilizing the Response Generation according to the Delay Time Measurement (RG-DTM) scheme is proposed. In the conventional arbiter-PUF, the response 0 or 1 is assigned according to a single threshold of the relative delay-time difference. In contrast, in the RG-DTM PUF the response 0 or 1 is assigned according to multiple thresholds of the relative delay-time difference. The conventional and RG-DTM PUFs were designed and fabricated with 0.18 µm CMOS technology. The Hamming distances (HDs) between different chips, which indicate uniqueness, were calculated from 256-bit responses to identical challenges on each chip. The ideal distribution of HDs, which indicates high uniqueness, is achieved in the RG-DTM PUF using 16 thresholds of the relative delay-time difference. The generative stability, which is the fluctuation of responses in the same environment, and the environmental stability, which is the change of responses in different environments, were also evaluated. There is a trade-off between high uniqueness and high stability; however, the experimental data show that the RG-DTM PUF has an extremely smaller false-matching probability in identification compared to the conventional PUF.
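
    A schematic sketch of the difference between the two response rules, assuming the relative delay difference is available as a signed quantity: the conventional arbiter-PUF compares it to a single threshold, while the RG-DTM scheme divides the delay-difference range into multiple time domains allocated alternately to 0 and 1. The domain boundaries below are illustrative, not the fabricated chip's values.

```python
import numpy as np

def arbiter_response(delay_diff):
    """Conventional arbiter-PUF: a single threshold at zero."""
    return 1 if delay_diff > 0 else 0

def rg_dtm_response(delay_diff, boundaries):
    """RG-DTM: the delay difference falls into one of several time domains,
    and the domains are allocated alternately to response 0 and response 1."""
    domain = int(np.searchsorted(boundaries, delay_diff))
    return domain % 2

def inter_chip_hd(resp_a, resp_b):
    """Hamming distance between two chips' responses to identical challenges
    (ideally close to half the response length for good uniqueness)."""
    return int(np.count_nonzero(resp_a != resp_b))

# 16 domains -> 15 interior boundaries (illustrative, evenly spaced in picoseconds)
boundaries = np.linspace(-70, 70, 15)
print(arbiter_response(12.3), rg_dtm_response(12.3, boundaries))
```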

  • Via Programmable Structured ASIC Architecture “VPEX3” and CAD Design System

    Ryohei HORI  Taisuke UEOKA  Taku OTANI  Masaya YOSHIKAWA  Takeshi FUJINO  

     
    PAPER-Physical Level Design

    Vol: E95-A No:12
    Page(s): 2182-2190

    A low-cost and low-power via-programmable structured ASIC architecture named “VPEX3” and a VPEX3-specific CAD system are developed. In the VPEX3 architecture, which is an improved version of the older VPEX and VPEX2 architectures, an arbitrary logic function including sequential logic can be programmed by three via layers. The logic elements (LEs) of VPEX3 are 60% smaller than those of the previous VPEX2, which can be programmed by two via layers. In this paper, we describe a global architecture named Logic Array Block (LAB) composed of LE matrices. The clock lines are buffered in the buffering regions on the left and right sides of the LAB. Next, a VPEX3-specific CAD system utilizing the academic placement tool “CAPO” and the “FGR” global router is developed. Since these tools were originally designed for ASICs, we developed CAD tools for supporting a structured ASIC architecture. In particular, we developed a detailed router that assigns via positions on the via-programmable routing fabric. Our CAD system successfully converts an HDL design to GDS-II data including the via-1, 2, and 3 layouts, and LVS and DRC verification on the GDS-II data is successfully achieved. The performance of the VPEX3 architecture and the CAD system is evaluated using ISCAS benchmark circuits. The developed CAD system is used to successfully design a test chip composed of 130,110 LEs.

  • Improved Via-Programmable Structured ASIC VPEX3 and Its Evaluation

    Ryohei HORI  Tatsuya KITAMORI  Taisuke UEOKA  Masaya YOSHIKAWA  Takeshi FUJINO  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E95-A No:9
    Page(s): 1518-1528

    Various kinds of structured ASICs have been proposed that can customize logic functions using a few photomasks, which decreases the initial cost, especially that of expensive photomasks. In the past, we developed a via-programmable structured ASIC, “VPEX2” (Via Programmable logic device using EXclusive-or array), that is capable of changing logics on two via layers (the 1st and 3rd vias). The logic element (LE) of VPEX2 is composed of an EXOR gate and two NOT gates. However, the VPEX2 architecture has two important penalties: an area penalty of 5-6 times that of an ASIC, and wiring congestion caused by detouring wires to avoid I/O terminals. In this paper, we propose a new architecture, “VPEX3”, in order to achieve a practical structure. In VPEX3, we applied three techniques to decrease the area penalty and raise the wiring efficiency: (1) the LE area is reduced by approximately 60% by omitting one NOT gate in the LE and reducing the gate width, (2) the number of configurable logic functions on a single LE is increased from 13 to 22 by introducing a “flexible AOI gate” technique, and (3) flexible I/O terminals are realized by introducing the 2nd via as a programmable layer. Furthermore, a delay model for via-programmable wiring is necessary in order to evaluate the via-programmable wiring architecture against standard-cell ASICs. We extracted the wiring-delay characteristics from a ring-oscillator test circuit using both normal wiring and via-programmable wiring. These three new techniques and the via-programmable wiring-delay model revealed that the area-delay product of VPEX3 is as small as twice that of an ASIC. Chip-cost estimation among FPGA, VPEX2, VPEX3, and ASIC revealed that VPEX3 is the most cost-effective architecture for systems-on-chip (SoCs) whose production volume is from one thousand to several tens of thousands of units.
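
    The cost comparison in the last sentence follows from a simple NRE-plus-unit-cost model; the sketch below shows the form of that calculation with purely made-up numbers (the paper's actual cost figures are not reproduced here), illustrating how the cheapest option shifts with production volume.

```python
def total_cost(nre, unit_cost, volume):
    """Chip-cost model: one-time NRE (masks, design) plus a per-unit cost."""
    return nre + unit_cost * volume

# hypothetical parameters: an FPGA has no mask NRE but a high unit cost, a
# via-programmable device needs only a few custom masks, a full ASIC a full mask set
candidates = {
    "FPGA":  (0.0, 60.0),
    "VPEX3": (20_000.0, 10.0),
    "ASIC":  (500_000.0, 5.0),
}
for volume in (1_000, 10_000, 100_000):
    best = min(candidates, key=lambda name: total_cost(*candidates[name], volume))
    print(volume, best)       # with these numbers: VPEX3, VPEX3, ASIC
```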

  • Adversarial Black-Box Attacks with Timing Side-Channel Leakage

    Tsunato NAKAI  Daisuke SUZUKI  Fumio OMATSU  Takeshi FUJINO  

     
    PAPER

    Vol: E104-A No:1
    Page(s): 143-151

    Artificial intelligence (AI), especially deep learning (DL), has been remarkable and has been applied to various industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack to craft AEs using only processing time, which is side-channel information of DNNs, without using training data, model architecture and parameters, substitute models, or output probability. While several existing black-box attacks use output probability, our attack exploits a relationship between the number of activated nodes and the processing time of DNNs. In our attack, the perturbations for AEs are decided by the differential processing time according to the input data. We show experimental results in which our attack's AEs increase the number of activated nodes and effectively cause misclassification into one of the incorrect labels. In addition, the experimental results highlight that our attack can evade gradient-masking countermeasures, which mask output probability to prevent the crafting of AEs by several black-box attacks.
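
    A minimal sketch of the principle, using a toy ReLU network whose simulated "processing time" is simply its count of activated nodes; the real attack measures wall-clock inference time on the target, and the timing proxy, random search, and step size below are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 100)), rng.normal(size=(10, 64))

def forward_with_timing(x):
    """Toy DNN whose timing proxy is the number of activated ReLU nodes,
    mirroring the observation that more active nodes mean longer inference."""
    h = np.maximum(W1 @ x, 0.0)
    return W2 @ h, int(np.count_nonzero(h))

def craft_ae(x, eps=0.05, iters=200):
    """Black-box crafting: keep only random perturbations that increase the observed
    processing time (no gradients, parameters, or output probabilities are used)."""
    adv = x.copy()
    _, t = forward_with_timing(adv)
    for _ in range(iters):
        cand = np.clip(adv + eps * rng.choice([-1.0, 1.0], size=adv.shape), 0.0, 1.0)
        _, t_new = forward_with_timing(cand)
        if t_new > t:
            adv, t = cand, t_new
    return adv

x = rng.random(100)
adv = craft_ae(x)
print(np.argmax(forward_with_timing(x)[0]), np.argmax(forward_with_timing(adv)[0]))
```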

  • A 0.18 µm 32 Mb Embedded DRAM Macro for 3-D Graphics Controller

    Akira YAMAZAKI  Takeshi FUJINO  Kazunari INOUE  Isamu HAYASHI  Hideyuki NODA  Naoya WATANABE  Fukashi MORISHITA  Katsumi DOSAKA  Yoshikazu MOROOKA  Shinya SOEDA  Kazutami ARIMOTO  Setsuo WAKE  Kazuyasu FUJISHIMA  Hideyuki OZAKI  

     
    PAPER-Electronic Circuits

    Vol: E85-C No:9
    Page(s): 1697-1708

    A 23.3 mm2 32 Mb embedded DRAM (eDRAM) macro has been fabricated using 0.18 µm triple-well 4-metal embedded DRAM process technology to realize an accelerated 3-D graphics controller. The array architecture, using a dual-port sense amplifier, achieves a column access latency of two cycles at 222 MHz and a peak data rate of 14.2 GB/s with 4 macros. The process cost has been kept low by using VT-MOS circuit technology and taking advantage of a characteristic of the dual-gate-oxide process technology. A tRAC of 11.6 ns at 2.0 V is achieved using a 'pre-detect redundancy' circuit.

  • Exploring Effect of Residual Electric Charges on Cryptographic Circuits: Extended Version

    Mitsuru SHIOZAKI  Takeshi SUGAWARA  Takeshi FUJINO  

     
    PAPER

    Publicized: 2022/09/15
    Vol: E106-A No:3
    Page(s): 281-293

    We study a new transistor-level side-channel leakage caused by charges trapped between stacked transistors, namely residual electric charges (RECs). Building leakage models is important in designing countermeasures against side-channel attacks (SCAs). Conventional work showed that even a transistor-level leakage is measurable with a local electromagnetic measurement. One example is the current-path leak [1], [2]: an attacker can distinguish the number of transistors in the current path activated during a signal transition. Addressing this issue, Sugawara et al. proposed to use a mirror circuit that has the same number of transistors on its possible current paths. We show that this countermeasure is insufficient by demonstrating a new transistor-level leakage, caused by RECs, that is not covered in the previous work. RECs can carry the history of a gate's state over multiple clock cycles and change the gate's electrical behavior. We experimentally verify that RECs cause exploitable side-channel leakage. We also propose a countermeasure against REC leaks and designed advanced encryption standard-128 (AES-128) circuits using IO-masked dual-rail read-only memory with a 180-nm complementary metal-oxide-semiconductor (CMOS) process. We compared the resilience of our AES-128 circuits against EMA attacks with and without our countermeasure. We further extend the study of RECs to physically unclonable functions (PUFs) and demonstrate that RECs affect the performance of arbiter and ring-oscillator PUFs through experiments using our custom chips fabricated with 180- and 40-nm CMOS processes.

  • Profiling Deep Learning Side-Channel Attacks Using Multi-Label against AES Circuits with RSM Countermeasure

    Yuta FUKUDA  Kota YOSHIDA  Hisashi HASHIMOTO  Kunihiro KURODA  Takeshi FUJINO  

     
    PAPER

    Publicized: 2022/09/08
    Vol: E106-A No:3
    Page(s): 294-305

    Deep learning side-channel attacks (DL-SCAs) have been actively studied in recent years. In DL-SCAs, deep neural networks (DNNs) are trained to predict the internal states of the cryptographic operation from side-channel information such as power traces. It is important to select suitable DNN output labels expressing the internal states for successful DL-SCAs. We focus on the multi-label method proposed by Zhang et al. for hardware-implemented advanced encryption standard (AES). They used the power traces supplied in the AES-HD public dataset and reported revealing a single key byte under conditions in which the target key was the same as the key used for DNN training (the profiling key). In this paper, we discuss an improvement for revealing all 16 key bytes under practical conditions in which the target key is different from the profiling key. We prepared a hardware-implemented AES without SCA countermeasures on an ASIC as the experimental environment. First, our experimental results show that the DNN using multi-label does not learn the side-channel leakage sufficiently from power traces acquired with only one key. Second, we report that the DNN using multi-label learns most of the side-channel leakage by using three kinds of profiling keys, and all 16 target key bytes are successfully revealed even if the target key is different from the profiling keys. Finally, we applied the proposed method, DL-SCA using multi-label and three profiling keys, against hardware-implemented AES with the rotating S-boxes masking (RSM) countermeasure. The experimental result shows that all 16 key bytes are successfully revealed using only 2,000 attack traces. We also studied the reasons for the high performance of the proposed method against the RSM countermeasure and found that the information from the weak bits is effectively exploited.
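
    For orientation, "multi-label" here means the network predicts several binary labels at once (for example, the bits of an intermediate value) from a single trace. A minimal PyTorch sketch of such a profiling model is given below; the trace length, layer sizes, and the choice of labelling the 8 bits of one intermediate byte are assumptions for illustration, not the exact labels used by Zhang et al. or in this paper.

```python
import torch
import torch.nn as nn

class MultiLabelSCA(nn.Module):
    """Profiling model: one power trace in, eight independent bit probabilities out."""
    def __init__(self, trace_len=1000, n_bits=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(trace_len, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_bits),            # one logit per label (bit)
        )

    def forward(self, x):
        return self.net(x)

model = MultiLabelSCA()
criterion = nn.BCEWithLogitsLoss()            # standard loss for multi-label training
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# dummy profiling batch: traces and the 8 bits of the targeted intermediate byte
traces = torch.randn(32, 1000)
bits = torch.randint(0, 2, (32, 8)).float()
optimizer.zero_grad()
loss = criterion(model(traces), bits)
loss.backward()
optimizer.step()
```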

  • Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface

    Tatsuya OYAMA  Kota YOSHIDA  Shunsuke OKURA  Takeshi FUJINO  

     
    PAPER

    Publicized: 2023/09/26
    Vol: E107-A No:3
    Page(s): 344-354

    Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack method on image-classification systems using deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported, which are a threat to the traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack method for generating a noise area on images by superimposing an electrical signal on the mobile industry processor interface and showed that it can generate a single adversarial mark that triggers a backdoor attack on the input image. Building on this, we propose a misclassification attack method on DNNs that creates AEs by adding small perturbations to multiple places on the image through fault injection. The perturbation positions for the AEs are pre-calculated against the target traffic-sign image, which will be captured during future driving. In simulation, a perturbation covering 5.2% to 5.5% of a specific image that induces misclassification into the target label was calculated. In the experiments, we confirmed that the traffic-sign-recognition DNN on a Raspberry Pi successfully misclassified the target traffic sign when it was captured while the fault injection was applied. In addition, we created robust AEs that cause misclassification of images with varying positions and sizes by adding a common perturbation. We also propose a method to reduce the amount of perturbation in the robust AEs. Our results demonstrate successful misclassification of the captured image with a high attack success rate even when the position and size of the captured image are slightly changed.
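
    A compact sketch of the pre-calculation step under simplifying assumptions: a differentiable model of the classifier is available offline, the perturbation is confined to a fixed pixel mask covering roughly 5% of the image, and a signed-gradient update moves the image toward the target label. The toy linear "model" and all parameters below are illustrative, not the traffic-sign DNN from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32))                 # toy linear classifier over a 32x32 image

def loss_grad(x, target):
    """Gradient of the targeted softmax cross-entropy loss w.r.t. the input (toy model)."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return W.T @ (p - np.eye(10)[target])          # d(loss)/dx for softmax + cross-entropy

def craft_masked_ae(x, target, mask, eps=0.02, iters=100):
    """Only pixels selected by 'mask' (the pre-calculated injection positions) are
    perturbed, keeping the perturbed area to a few percent of the image."""
    adv = x.copy()
    for _ in range(iters):
        adv -= eps * np.sign(loss_grad(adv, target)) * mask   # step toward the target label
        adv = np.clip(adv, 0.0, 1.0)
    return adv

x = rng.random(32 * 32)
mask = (rng.random(32 * 32) < 0.05).astype(float)  # roughly 5% of the pixels
adv = craft_masked_ae(x, target=3, mask=mask)
print(np.argmax(W @ x), np.argmax(W @ adv))        # predicted label before / after
```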

  • Regular Fabric of Via Programmable Logic Device Using EXclusive-or Array (VPEX) for EB Direct Writing

    Akihiro NAKAMURA  Masahide KAWARASAKI  Kouta ISHIBASHI  Masaya YOSHIKAWA  Takeshi FUJINO  

     
    PAPER

    Vol: E91-C No:4
    Page(s): 509-516

    The photo-mask cost of standard-cell-based ASICs has increased so prohibitively that low-volume-production LSIs are difficult to fabricate due to the high non-recurring engineering (NRE) cost, including the mask cost. Recently, user-programmable devices such as FPGAs have started to be used for low-volume consumer products. However, FPGAs cannot replace ASICs for general purposes because of their lower speed performance and higher power consumption. In this paper, we propose a user-programmable architecture called VPEX (Via Programmable logic device using EXclusive-or array), in which the hardware logic can be programmed by changing the layout patterns on two via layers. The logic element (LE) of VPEX consists of complex-gate-type EXclusive-OR (EXOR) and inverter (NOT) gates. A single LE can implement 12 logic functions, including NOT, buffer (BUF), all 2-input logic functions, 3-input AOI21, and an inverted-output multiplexer (MUXI), by changing the via-1 layout pattern. Furthermore, the via-1 layout is optimized for high-throughput EB direct writing, so mask-less programming can be realized in VPEX. We compared the area, speed, and power consumption of VPEX with those of standard-cell-based ASICs and FPGAs. As a result, the speed performance of VPEX was much better than that of FPGAs and about 1.3-1.6 times worse than that of standard cells. We believe that the combination of the VPEX architecture and EB direct writing is the best solution for low-volume-production LSIs.

  • Development of Compression Tolerable and Highly Implementable Watermarking Method for Mobile Devices

    Takeshi KUMAKI  Kei NAKAO  Kohei HOZUMI  Takeshi OGURA  Takeshi FUJINO  

     
    LETTER-Information Network

    Vol: E97-D No:3
    Page(s): 593-596

    This paper reports on the image-compression tolerability and high implementability of a novel watermarking method that uses a morphological wavelet transform based on max-plus algebra. This algorithm is suitable for embedded low-power processors in mobile devices. For an objective and unified evaluation of the capability of the proposed watermarking algorithm, we focus on a watermarking contest presented by the IHC, which belongs to the IEICE, and investigate the image quality and tolerance against JPEG compression attacks. In the experiments for this contest, six benchmark images processed by the proposed watermarking method are compressed to reduce their file size to 1/10, 1/20, or less of the original, and the error rate of the embedded data is 0%; thus, the embedded data can be completely extracted. The PSNR value is up to 54.66 dB in these experiments. Furthermore, even when the smallest image size of 0.49 MB is attained and the PSNR value becomes about 52 dB, the proposed algorithm maintains very high quality with an error rate of 0%. Additionally, on an ARM processor, the processing time of the proposed watermarking is about 416.4 and 4.6 times faster than that of DCT and HWT, respectively. As a result, the proposed watermarking method achieves effective processing capability for mobile processors.
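
    The exact transform and embedding rule of the proposed method are not reproduced in the abstract; as an orientation aid, the sketch below implements a one-level 1-D morphological Haar wavelet in the max-plus style (approximation = pairwise maximum, detail = pairwise difference) with exact integer reconstruction. A watermark bit could then be embedded by modifying selected detail coefficients before the inverse transform; the paper's specific embedding and JPEG-robustness measures are not shown here.

```python
import numpy as np

def morph_haar_forward(x):
    """One-level morphological Haar transform (max-plus flavour):
    approximation = max of each sample pair, detail = signed difference of the pair."""
    x0, x1 = x[0::2].astype(int), x[1::2].astype(int)
    return np.maximum(x0, x1), x0 - x1

def morph_haar_inverse(approx, detail):
    """Exact reconstruction: the larger sample of each pair equals the approximation."""
    x0 = np.where(detail >= 0, approx, approx + detail)
    x1 = np.where(detail >= 0, approx - detail, approx)
    out = np.empty(2 * len(approx), dtype=int)
    out[0::2], out[1::2] = x0, x1
    return out

# integer-only max/min arithmetic keeps the transform cheap on embedded mobile processors
row = np.array([10, 12, 200, 180, 33, 33, 90, 70])
approx, detail = morph_haar_forward(row)
assert np.array_equal(morph_haar_inverse(approx, detail), row)
```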

  • Security Evaluation of RG-DTM PUF Using Machine Learning Attacks

    Mitsuru SHIOZAKI  Kousuke OGAWA  Kota FURUHASHI  Takahiko MURAYAMA  Masaya YOSHIKAWA  Takeshi FUJINO  

     
    PAPER-Hardware Based Security

    Vol: E97-A No:1
    Page(s): 275-283

    In modern hardware security applications, silicon physical unclonable functions (PUFs) are of interest for their potential use as a unique identity or secret key generated from inherent characteristics caused by process variations. However, arbiter-based PUFs, which utilize the relative delay-time difference between equivalent paths, have a security issue in that the generated challenge-response pairs (CRPs) can be predicted by a machine learning attack. We previously proposed the RG-DTM PUF, in which a response is decided from divided time domains allocated to response 0 or 1, to improve the uniqueness of the conventional arbiter-PUF with a small circuit. However, its resistance against machine learning attacks has not yet been studied. In this paper, we evaluate the resistance against machine learning attacks using a support vector machine (SVM) and logistic regression (LR) in both simulations and measurements, and compare the RG-DTM PUF with the conventional arbiter-PUF and with the XOR arbiter-PUF, which strengthens the resistance by XORing the outputs of multiple arbiter-PUFs. In numerical simulations, the prediction rates using both SVM and LR were above 90% within 1,000 training CRPs on the arbiter-PUF. The machine learning attack using the SVM could never predict responses on the XOR arbiter-PUF with more than six arbiter-PUFs, whereas the prediction rate eventually reached 95% using the LR and many training CRPs. On the RG-DTM PUF, when the division number of the time domains was over eight, the prediction rates using the SVM were equal to the probability of guessing at random. The machine learning attack using LR has the potential to predict responses, although an adversary would need to steal a significant number of CRPs. However, the resistance can be strengthened exponentially with an increase in the division number, just as with the XOR arbiter-PUF; over one million CRPs are required to attack the 16-divided RG-DTM PUF. The differences between the RG-DTM PUF and the XOR arbiter-PUF relate to the area penalty and the power penalty. Specifically, the XOR arbiter-PUF has to make up for its resistance against machine learning attacks by increasing the circuit area, while the RG-DTM PUF is resistant against machine learning attacks with less area and power penalty, since only capacitors are added to the conventional arbiter-PUF. We also attacked RG-DTM PUF chips, which were fabricated with 0.18-µm CMOS technology, to evaluate the effect of physical variations and unstable responses. The resistance against machine learning attacks was related to the delay-time difference distribution, but unstable responses had little influence on the attack results.
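
    As background on how such model-building attacks are mounted, the sketch below simulates a conventional arbiter-PUF with the standard additive delay model, maps each challenge to the usual parity feature vector, and fits a scikit-learn logistic-regression model to the collected CRPs. The simulation parameters are illustrative, and the RG-DTM response rule itself is not modelled here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_STAGES = 64
w = rng.normal(size=N_STAGES + 1)            # hidden stage-delay parameters of the PUF

def parity_features(challenges):
    """Standard arbiter-PUF feature map: phi_i = prod_{j >= i} (1 - 2*c_j), plus a bias term."""
    signs = 1 - 2 * challenges               # map challenge bits {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

def puf_response(challenges):
    """Additive delay model: the response is the sign of a linear function of the features."""
    return (parity_features(challenges) @ w > 0).astype(int)

# collect CRPs and train the model-building attack
C_train = rng.integers(0, 2, (1000, N_STAGES))
C_test = rng.integers(0, 2, (10000, N_STAGES))
clf = LogisticRegression(max_iter=2000)
clf.fit(parity_features(C_train), puf_response(C_train))
print(clf.score(parity_features(C_test), puf_response(C_test)))   # typically well above 0.9
```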