
Keyword Search Result

[Keyword] (42807 hits)

Results 10121-10140 of 42807

  • Power Noise Measurements of Cryptographic VLSI Circuits Regarding Side-Channel Information Leakage

    Daisuke FUJIMOTO  Noriyuki MIURA  Makoto NAGATA  Yuichi HAYASHI  Naofumi HOMMA  Takafumi AOKI  Yohei HORI  Toshihiro KATASHITA  Kazuo SAKIYAMA  Thanh-Ha LE  Julien BRINGER  Pirouz BAZARGAN-SABET  Shivam BHASIN  Jean-Luc DANGER  

     
    PAPER

      Vol:
    E97-C No:4
      Page(s):
    272-279

    Power supply noise waveforms within cryptographic VLSI circuits in a 65 nm CMOS technology are captured by using an on-chip voltage waveform monitor (OCM). The waveforms exhibit the correlation of dynamic voltage drops with internal logical activities during Advanced Encryption Standard (AES) processing, which causes side-channel information leakage regarding secret key bytes. Correlation Power Analysis (CPA) is an attack method that extracts such information leakage from the waveforms. The frequency components of power supply noise contributing to the leakage are shown to be localized in an extremely low frequency region. The level of information leakage is strongly associated with the size of the increment of dynamic voltage drops against the Hamming distance in the AES processing. The time window where the leakage most likely occurs is clearly designated within a single clock cycle in the final stage of AES processing. The on-chip power supply noise measurements unveil facts about side-channel information leakage that are hidden behind traditional CPA with on-board sensing of the power supply current through a 1-ohm resistor.
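
    As a rough illustration of the CPA procedure the abstract refers to, the sketch below correlates a last-round Hamming-distance hypothesis with a set of measured traces. It is a generic CPA outline under simplifying assumptions (ShiftRows is ignored and the inverse S-box is left as a placeholder), not the paper's measurement setup; all names are illustrative.

        import numpy as np

        # Placeholder: fill with the 256-entry AES inverse S-box in practice.
        INV_SBOX = np.zeros(256, dtype=np.uint8)

        def hamming_weight(x):
            # Hamming weight of each uint8 element.
            return np.unpackbits(x[:, None], axis=1).sum(axis=1)

        def cpa_last_round(traces, ciphertexts, byte_idx):
            # traces: (N, T) float power samples; ciphertexts: (N, 16) uint8.
            c = ciphertexts[:, byte_idx]
            t_c = traces - traces.mean(axis=0)
            t_norm = np.linalg.norm(t_c, axis=0)
            rho = np.empty((256, traces.shape[1]))
            for k in range(256):
                # Hypothesis: register bit flips between the ciphertext byte
                # and the pre-final-round state, under key-byte guess k.
                h = hamming_weight(c ^ INV_SBOX[c ^ k]).astype(float)
                h -= h.mean()
                rho[k] = (h @ t_c) / (np.linalg.norm(h) * t_norm)
            return rho  # the guess maximizing |rho| is the key candidate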

  • NAND Phase Change Memory with Block Erase Architecture and Pass-Transistor Design Requirements for Write and Disturbance

    Koh JOHGUCHI  Kasuaki YOSHIOKA  Ken TAKEUCHI  

     
    PAPER

      Vol:
    E97-C No:4
      Page(s):
    351-359

    In this paper, we propose an optimum access method for a phase change memory (PCM) with NAND strings: a PCM with a block erase interface. The method, which combines a SET block erase operation with fast RESET programming, is proposed because the slow SET operation limits the access time of conventional PCM. The measurement results show that the SET-ERASE operation completes successfully, whereas the RESET-ERASE operation is incomplete owing to the serial connection. As a result, the block erase interface with the SET-ERASE and RESET program method realizes a 7.7 times faster write speed than a conventional RAM interface, owing to the long SET time. We also give pass-transistor design guidelines for PCM with NAND strings, and investigate the write-capability and write-disturb problems. The ERASE operation for the proposed device structure can be realized with the same current as the SET operation of a single cell. For the pass transistor, an on-current about 4.4 times larger than the minimum RESET current of a single cell is needed to carry out the RESET operation and to avoid the write-disturb problem. The SET programming method is also verified for a conventional RAM interface; the experimental results show that the write-capability and write-disturb problems are negligible.

  • Hypersphere Sampling for Accelerating High-Dimension and Low-Failure Probability Circuit-Yield Analysis

    Shiho HAGIWARA  Takanori DATE  Kazuya MASU  Takashi SATO  

     
    PAPER

      Vol:
    E97-C No:4
      Page(s):
    280-288

    This paper proposes a novel and efficient method, termed hypersphere sampling, to estimate circuit yields with low failure probabilities and a large number of variability sources. Importance sampling using a mean-shift Gaussian mixture distribution as the alternative distribution is used for yield estimation, and the proposed method determines the shift locations of the Gaussian distributions. The method bisects cones whose bases are parts of hyperspheres in order to locate probabilistically important failure regions; finding these regions accelerates the convergence of importance sampling. Clustering of the failure samples determines the required number of Gaussian distributions. Successful static random access memory (SRAM) yield estimations of 6- to 24-dimensional problems are presented. The number of Monte Carlo trials is reduced by 2-5 orders of magnitude compared with conventional Monte Carlo simulation methods.
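
    To make the importance-sampling machinery concrete, here is a minimal sketch of a mean-shift Gaussian mixture estimator for a small failure probability. The shift vectors stand in for the output of the hypersphere-sampling step and the failure indicator is user-supplied; both are assumptions, and this is not the paper's implementation.

        import numpy as np

        def is_fail_prob(fails, mus, n_samples, rng):
            # fails: maps (n, d) samples -> boolean failure array;
            # mus: (k, d) shift locations (stand-ins for the
            # hypersphere-sampling output); components share identity
            # covariance, so Gaussian normalizing constants cancel.
            k, d = mus.shape
            comp = rng.integers(k, size=n_samples)       # pick a component
            x = rng.standard_normal((n_samples, d)) + mus[comp]
            p = np.exp(-0.5 * (x ** 2).sum(axis=1))      # nominal N(0, I)
            q = np.mean([np.exp(-0.5 * ((x - m) ** 2).sum(axis=1))
                         for m in mus], axis=0)          # mixture proposal
            return np.mean(fails(x) * p / q)             # weighted estimate

        rng = np.random.default_rng(0)
        mus = np.array([[4.0, 0.0], [0.0, 4.0]])         # illustrative shifts
        fails = lambda x: (x > 3.9).any(axis=1)          # toy failure region
        print(is_fail_prob(fails, mus, 100000, rng))     # ~9.6e-5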

  • A New Hybrid Approach for Privacy Preserving Distributed Data Mining

    Chongjing SUN  Hui GAO  Junlin ZHOU  Yan FU  Li SHE  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:4
      Page(s):
    876-883

    As distributed data mining techniques are widely used in a variety of fields, the privacy of sensitive data has attracted increasing attention in recent years. Our major concern in privacy-preserving distributed data mining is the accuracy of the mining results while privacy is preserved. For horizontally partitioned data, this paper presents a new hybrid algorithm for privacy-preserving distributed data mining. The main idea of the algorithm is to combine the method of random orthogonal matrix transformation with the proposed secure multi-party matrix product protocol to achieve zero loss of accuracy in most data mining implementations.
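
    A minimal sketch of the random orthogonal transformation idea (not the paper's secure multi-party protocol): because an orthogonal matrix preserves inner products and distances, a party can release transformed rows without changing the results of many distance-based mining algorithms. The shared seed and names are assumptions.

        import numpy as np

        def random_orthogonal(d, seed):
            # Haar-distributed orthogonal matrix via QR of a Gaussian matrix.
            g = np.random.default_rng(seed).standard_normal((d, d))
            q, r = np.linalg.qr(g)
            return q * np.sign(np.diag(r))  # column-sign fix for uniformity

        Q = random_orthogonal(5, seed=42)   # seed shared by parties (assumption)
        rows = np.random.default_rng(1).standard_normal((10, 5))  # one party's data
        perturbed = rows @ Q.T              # what the party releases
        # Inner products (hence Euclidean distances) are preserved exactly:
        assert np.allclose(rows @ rows.T, perturbed @ perturbed.T)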

  • Effect of Multivariate Cauchy Mutation in Evolutionary Programming

    Chang-Yong LEE  Yong-Jin PARK  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E97-D No:4
      Page(s):
    821-829

    In this paper, we apply a mutation operation based on a multivariate Cauchy distribution to fast evolutionary programming and analyze its effect on various function optimizations. Conventional fast evolutionary programming incorporates the univariate Cauchy mutation in order to overcome the slow convergence rate of the canonical Gaussian mutation. For a mutation of n variables, while the conventional method utilizes n independent random variables from a univariate Cauchy distribution, the proposed method adopts n mutually dependent random variables that satisfy a multivariate Cauchy distribution. Owing to the mutual dependence among variables, the multivariate Cauchy distribution naturally has higher probabilities of generating random variables in inter-variable regions than the univariate Cauchy distribution. This implies that the multivariate Cauchy random variable enhances the search capability, especially for a large number of correlated variables, and is therefore more appropriate for optimization problems characterized by interdependence among variables. In this sense, the proposed mutation possesses the advantages of both the univariate Cauchy and Gaussian mutations. The proposed mutation is tested on various types of real-valued function optimizations. We empirically find that it outperforms the conventional Cauchy and Gaussian mutations in the optimization of functions having correlations among variables, whereas the conventional mutations show better performance on functions of uncorrelated variables.
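
    For concreteness, one standard way to draw a multivariate Cauchy vector (a multivariate t distribution with one degree of freedom) is to divide a Gaussian draw by the square root of a single shared chi-square(1) variable, which couples all coordinates. The sketch below contrasts this with n independent univariate Cauchy draws; the mutation scale is an assumption.

        import numpy as np

        def multivariate_cauchy(n, rng):
            # Multivariate t with 1 degree of freedom.
            z = rng.standard_normal(n)      # N(0, I) draw
            w = rng.chisquare(df=1)         # one chi-square(1) scalar shared
            return z / np.sqrt(w)           # by all coordinates -> dependence

        def univariate_cauchy(n, rng):
            return rng.standard_cauchy(n)   # n independent Cauchy draws

        rng = np.random.default_rng(7)
        parent = np.zeros(10)
        eta = 0.1                           # mutation scale (assumption)
        offspring = parent + eta * multivariate_cauchy(10, rng)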

  • A High-Frame-Rate Vision System with Automatic Exposure Control

    Qingyi GU  Abdullah AL NOMAN  Tadayoshi AOYAMA  Takeshi TAKAKI  Idaku ISHII  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:4
      Page(s):
    936-950

    In this paper, we present a high-frame-rate (HFR) vision system that can automatically control its exposure time by executing brightness histogram-based image processing in real time at a high frame rate. Our aim is to obtain high-quality HFR images for robust image processing of high-speed phenomena even under dynamically changing illumination, such as lamps flickering at 100 Hz, corresponding to an AC power supply at 50/60 Hz. Our vision system can simultaneously calculate a 256-bin brightness histogram for an 8-bit gray image of 512×512 pixels at 2000 fps by implementing a brightness histogram calculation circuit module as parallel hardware logic on an FPGA-based high-speed vision platform. Based on the HFR brightness histogram calculation, our method realizes automatic exposure (AE) control of 512×512 images at 2000 fps using our proposed AE algorithm. The proposed AE algorithm maximizes the number of pixels in the effective range of the brightness histogram, excluding overly dark and bright pixels, to improve the dynamic range of the captured image without over- and under-exposure. The effectiveness of our HFR system with AE control is evaluated according to experimental results for several scenes with illumination flickering at 100 Hz, which is too fast for the human eye to perceive.
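
    A minimal sketch of histogram-based automatic exposure control in the spirit described above: count under- and over-exposed pixels in the brightness histogram and nudge the exposure time to balance them. The proportional update rule and range thresholds are assumptions, not the paper's exact AE algorithm.

        import numpy as np

        LOW, HIGH = 16, 239                   # effective 8-bit range (assumption)

        def ae_update(frame, exposure, gain=0.2):
            # frame: 8-bit gray image; returns the next exposure time.
            hist = np.bincount(frame.ravel(), minlength=256)
            n = hist.sum()
            under = hist[:LOW].sum() / n      # fraction of too-dark pixels
            over = hist[HIGH + 1:].sum() / n  # fraction of too-bright pixels
            # Lengthen exposure when dark pixels dominate, shorten otherwise;
            # at 2000 fps this loop settles within a few frames.
            return exposure * (1.0 + gain * (under - over))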

  • A New Non-data Aided Frequency Offset Estimation Method for OFDM Based Device-to-Device Systems

    Kyunghoon WON  Dongjun LEE  Wonjun HWANG  Hyung-Jin CHOI  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:4
      Page(s):
    896-904

    D2D (Device-to-Device) communication has received considerable attention in recent years as one of the key technologies for future communication systems. Among typical D2D communication systems, FlashLinQ (FLQ) adopts single-tone OFDM (Orthogonal Frequency Division Multiplexing) transmission, which enables wide-sense discovery and distributed channel-aware link scheduling. Although synchronization based on a CES (Common External Source) is basically assumed in FLQ, a means to support devices when they are unable to use a CES is still necessary. In most OFDM systems, CFO (Carrier Frequency Offset) induces ICI (Inter-Channel Interference), which drastically degrades overall system performance. Especially in D2D systems, ICI can be amplified by the different path losses between links, so precise estimation and correction of the CFO is very important. Many CFO estimation algorithms based on DA (Data Aided) and NDA (Non-Data Aided) approaches have been proposed for OFDM systems, but there are several constraints on frequency synchronization in D2D systems. Therefore, in this paper, we propose a new NDA-CFO estimation method for OFDM-based D2D systems. The proposed method is based on the characteristics of the single-tone OFDM signal and is composed of two estimation stages: initial estimation and feedback estimation. In the initial estimation, the CFO estimate is obtained by using two correlation results within a symbol, and the estimation range can be adaptively defined by the distance between the two correlation windows. In the feedback estimation, the distance between the two correlation windows is gradually increased by reusing the estimated CFO and the correlation results, so a more precise CFO estimate is obtained. A numerical analysis and performance evaluation verify that the proposed method has a large estimation range and achieves precise estimation performance compared with the conventional methods.
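
    The two-window correlation idea can be illustrated with a generic correlation-phase estimator (not the paper's exact method): the phase rotation of the correlation between two windows separated by dist samples reveals the frequency offset, and enlarging dist sharpens the estimate while narrowing its unambiguous range, which is why a coarse-then-refine structure helps.

        import numpy as np

        def estimate_cfo(rx, win, dist, fs):
            # Phase of the correlation between two length-`win` windows
            # spaced `dist` samples apart. Unambiguous for |f| < fs/(2*dist):
            # a larger dist is finer but narrower, hence coarse-then-refine.
            c = np.vdot(rx[:win], rx[dist:dist + win])  # sum conj(w0) * w1
            return np.angle(c) * fs / (2 * np.pi * dist)

        fs, f_off = 1e6, 500.0               # illustrative sample rate / offset
        n = np.arange(2048)
        rx = np.exp(2j * np.pi * f_off * n / fs)        # noiseless single tone
        print(estimate_cfo(rx, win=512, dist=512, fs=fs))  # ~500.0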

  • Unambiguous Tracking Method Based on a New Combination Function for BOC Signals

    Lan YANG  Zulin WANG  Qin HUANG  Lei ZHAO  

     
    PAPER-Navigation, Guidance and Control Systems

      Vol:
    E97-B No:4
      Page(s):
    923-929

    The auto-correlation function (ACF) of Binary Offset Carrier (BOC) modulated signals has multiple peaks, which raises the problem of ambiguity in acquisition and tracking. In this paper, the ACF is split into several sub-correlation functions (SCFs) by dividing the integration period of the ACF into several parts. A pseudo correlation function (PCF) is then constructed from the SCFs through a combination function that eliminates all side-peaks. The unambiguous tracking method based on the PCF achieves better code phase tracking accuracy than conventional methods in an AWGN environment. It requires only half the computation cost of Bump-Jumping (BJ) and nearly a quarter of that of the Double-Estimator, although it offers slightly less accurate tracking than BJ and the Double-Estimator in a multipath environment. Moreover, the method suits all kinds of BOC signals without any auxiliary correlators.

  • A 7-bit 1-GS/s Flash ADC with Background Calibration

    Sanroku TSUKAMOTO  Masaya MIYAHARA  Akira MATSUZAWA  

     
    PAPER

      Vol:
    E97-C No:4
      Page(s):
    298-307

    A 7-bit 1-GS/s flash ADC using two-bit active interpolation and background offset calibration is proposed and tested. It achieves background calibration using 36 pre-amplifiers with 139 comparators. To cancel the offset, two pre-amplifiers and 12 comparators are taken offline in turn while the others are operating. A two-bit active interpolation design and an offset cancellation scheme are implemented in the latch stage. The interpolation and background calibration significantly reduce the analog input signal load as well as the reference voltage load. Fabricated in a 90 nm CMOS process, the proposed ADC consumes 95 mW from a 1.2 V power supply.

  • File and Task Abstraction in Task Workflow Patterns for File Recommendation Using File-Access Log Open Access

    Qiang SONG  Takayuki KAWABATA  Fumiaki ITOH  Yousuke WATANABE  Haruo YOKOTA  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    634-643

    The number of files in file systems has increased dramatically in recent years, and office workers spend much time and effort searching for the documents required for their jobs. To reduce these costs, we propose a new method for recommending files and operations on them. Existing recommendation technologies, such as collaborative filtering, suffer from two problems. First, they can only work with documents that have been accessed in the past, so they cannot make recommendations when only newly generated documents are input. Second, because they match access sequences strictly, they cannot easily handle sequences involving similar or differently ordered elements; such minor variations should be ignored. In our proposed method, we introduce the concepts of abstract files, as groups of similar files used for a similar purpose; abstract tasks, as groups of similar tasks; and frequent abstract workflows, grouped from similar workflows, which are sequences of abstract tasks. In experiments using real file-access logs, we confirmed that our proposed method could extract workflow patterns with longer sequences and higher support-count values, which are more suitable as recommendations. In addition, the F-measure for the recommendation results improved significantly, from 0.301 to 0.598, compared with a method that did not use the concepts of abstract tasks and abstract workflows.

  • Time Graph Pattern Mining for Network Analysis and Information Retrieval Open Access

    Yasuhito ASANO  Taihei OSHINO  Masatoshi YOSHIKAWA  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    733-742

    Graph pattern mining has played important roles in network analysis and information retrieval. However, the temporal characteristics of networks have not been sufficiently considered. We propose time graph pattern mining as a new concept of graph mining that reflects the temporal information of a network. We conduct two case studies of time graph pattern mining: extensively discussed topics on blog sites, and a book recommendation network. Through these case studies, we ascertain that time graph pattern mining has numerous possibilities as a novel means of information retrieval and network analysis reflecting both structural and temporal characteristics.

  • Online Inference of Mixed Membership Stochastic Blockmodels for Network Data Streams Open Access

    Tomoki KOBAYASHI  Koji EGUCHI  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    752-761

    Many kinds of data can be represented as a network or graph. It is crucial to infer the latent structure underlying such a network and to predict unobserved links in the network. Mixed Membership Stochastic Blockmodel (MMSB) is a promising model for network data. Latent variables and unknown parameters in MMSB have been estimated through Bayesian inference with the entire network; however, it is important to estimate them online for evolving networks. In this paper, we first develop online inference methods for MMSB through sequential Monte Carlo methods, also known as particle filters. We then extend them for time-evolving networks, taking into account the temporal dependency of the network structure. We demonstrate through experiments that the time-dependent particle filter outperformed several baselines in terms of prediction performance in an online condition.
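
    As background, the sketch below shows the generic bootstrap particle filter that sequential Monte Carlo methods build on; it is not the MMSB-specific inference, and the toy transition and likelihood models are assumptions.

        import numpy as np

        def particle_filter(obs, n, transition, likelihood, init, rng):
            particles = init(n, rng)                    # draw from the prior
            for y in obs:
                particles = transition(particles, rng)  # propagate
                w = likelihood(y, particles)            # weight by observation
                w /= w.sum()
                particles = particles[rng.choice(n, n, p=w)]  # resample
                yield particles.mean()                  # posterior-mean estimate

        # Toy model: latent Gaussian random walk observed with noise 0.5.
        rng = np.random.default_rng(0)
        true_x = np.cumsum(rng.standard_normal(50))
        obs = true_x + 0.5 * rng.standard_normal(50)
        est = list(particle_filter(
            obs, 500,
            transition=lambda p, r: p + r.standard_normal(p.shape),
            likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.5) ** 2),
            init=lambda n, r: r.standard_normal(n),
            rng=rng))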

  • Analyzing Information Flow and Context for Facebook Fan Pages Open Access

    Kwanho KIM  Josué OBREGON  Jae-Yoon JUNG  

     
    LETTER

      Vol:
    E97-D No:4
      Page(s):
    811-814

    With the recent growth of online social network services such as Facebook and Twitter, people can easily share information with each other by writing posts or commenting on others' posts. In this paper, we first suggest a method of discovering the information flows of posts on Facebook and their underlying contexts by incorporating process mining and text mining techniques. Based on comments collected from Facebook, the experimental results illustrate how the proposed method can be applied to analyze the information flows and contexts of posts on social network services.

  • New Metrics for Prioritized Interaction Test Suites

    Rubing HUANG  Dave TOWEY  Jinfu CHEN  Yansheng LU  

     
    PAPER-Software Engineering

      Vol:
    E97-D No:4
      Page(s):
    830-841

    Combinatorial interaction testing has been well studied in recent years, and has been widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults that are caused by parameter interactions. Due to some constraints in practical applications (e.g. limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed. Consequently, many strategies have been proposed to guide interaction test suite prioritization. It is, therefore, important to be able to evaluate the different interaction test sequences created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (APCCλ for short), which assesses the rate of interaction coverage at a strength λ (level of interaction among parameters) covered by a given interaction test sequence S. However, APCCλ has two drawbacks: firstly, it has two requirements (that all test cases in S be executed, and that all possible λ-wise parameter value combinations be covered by S); and secondly, it can only use a single strength λ (rather than multiple strengths) to evaluate the interaction test sequence, which means that it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. To overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can distinguish different interaction test sequences, and hence can be used to compare different test prioritization strategies.
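
    To illustrate the quantity that APCC-style metrics average, the sketch below computes the cumulative rate of λ-wise value-combination coverage as a test sequence executes. It assumes every parameter takes the same number of values and is not the paper's exact APCCλ formula.

        from itertools import combinations
        from math import comb

        def coverage_curve(tests, n_values, strength):
            # tests: list of tuples (one value per parameter);
            # n_values: values per parameter (uniform, an assumption);
            # strength: interaction strength lambda.
            k = len(tests[0])
            total = comb(k, strength) * n_values ** strength
            covered, curve = set(), []
            for t in tests:
                for pos in combinations(range(k), strength):
                    covered.add((pos, tuple(t[i] for i in pos)))
                curve.append(len(covered) / total)
            return curve  # averaging this curve yields an APCC-like value

        # 3 binary parameters, pairwise (lambda = 2) coverage:
        print(coverage_curve([(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 0, 1)],
                             n_values=2, strength=2))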

  • Automatic Rectification of Processor Design Bugs Using a Scalable and General Correction Model

    Amir Masoud GHAREHBAGHI  Masahiro FUJITA  

     
    PAPER-Dependable Computing

      Vol:
    E97-D No:4
      Page(s):
    852-863

    This paper presents a method for the automatic rectification of design bugs in processors. Given a golden sequential instruction-set architecture model of a processor and its erroneous detailed cycle-accurate model at the micro-architecture level, we perform symbolic simulation and property checking, combined with concrete simulation, iteratively to detect the buggy location and its corresponding fix. We use the truth-table model of the function that is required for correction, which is a very general model. Moreover, we do not represent the truth-table explicitly in the design; instead, we use only the required minterms, which are obtained from the output of our backend formal engine. This way, we avoid adding any new variables to represent the truth-table, so our correction model remains scalable in the number of truth-table inputs, even though the truth-table itself could grow exponentially. We have shown the effectiveness of our method on a complex out-of-order superscalar processor supporting atomic execution of instructions. Our method reduces the model size for correction by 6.0x and the total correction time by 12.6x, on average, compared with our previous work.

  • An Improved Video Identification Scheme Based on Video Tomography

    Qing-Ge JI  Zhi-Feng TAN  Zhe-Ming LU  Yong ZHANG  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E97-D No:4
      Page(s):
    919-927

    In recent years, with the popularization of video collection devices and the development of the Internet, it has become easy to copy original digital videos and quickly distribute illegal copies through the Internet. Upholding copyright has therefore become a critical task that requires a technical solution, and copy detection, or video identification, is an increasingly important and challenging problem. The problem addressed here is to identify a given video clip in a given set of video sequences. In this paper, an extension to the video identification approach based on video tomography is presented. First, the feature extraction process is modified to enhance the reliability of the shot signature while keeping its size unchanged. Then, a new similarity measurement between two shot signatures is proposed to address the problem that arises in the original approach when the query shot is short. In addition, the query scope is extended from a single shot to a clip (several consecutive shots) by giving a new definition of similarity between two clips and describing a search algorithm that saves much of the computation cost. Experimental results show that the proposed approach is more suitable for identifying short shots than the original approach. The clip query approach performs well in the experiments and also shows strong robustness to data loss.

  • Data Filter Cache with Partial Tag Matching for Low Power Embedded Processor

    Ju Hee CHOI  Jong Wook KWAK  Seong Tae JHANG  Chu Shik JHON  

     
    LETTER-Computer System

      Vol:
    E97-D No:4
      Page(s):
    972-975

    Filter caches have been studied as an energy-efficient solution. They achieve energy savings via selective access to the L1 cache, but severely decrease system performance. A filter cache system should therefore adopt components that balance execution delay against energy savings. In this letter, we analyze the legacy filter cache system and propose the Data Filter Cache with Partial Tag Cache (DFPC) as a new solution. The proposed DFPC scheme reduces the energy consumption of the L1 data cache without impairing system performance at all. Simulation results show that DFPC provides 46.36% energy savings without any performance loss.

  • A Novel Joint Rate Distortion Optimization Scheme for Intra Prediction Coding in H.264/AVC

    Qingbo WU  Jian XIONG  Bing LUO  Chao HUANG  Linfeng XU  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E97-D No:4
      Page(s):
    989-992

    In this paper, we propose a novel joint rate distortion optimization (JRDO) model for intra prediction coding. The spatial prediction dependency is exploited by modeling the distortion propagation with a linear fitting function, and a novel JRDO-based Lagrange multiplier (LM) is derived from this model. To adapt to the distortion propagation characteristics of different blocks, we also introduce a generalized multiple Lagrange multiplier (MLM) framework in which several candidate LMs are used in the RDO process. Experimental results show that our proposed JRDO-MLM scheme is superior to the H.264/AVC encoder.
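
    For reference, the sketch below shows the generic Lagrangian mode decision that rate distortion optimization builds on: choose the intra prediction mode minimizing D + λR. The JRDO-specific multiplier derivation and the MLM candidate set are not reproduced here; the modes, distortions, rates, and λ value are illustrative.

        def best_mode(candidates, lam):
            # candidates: (mode, distortion, rate) triples; lam: Lagrange
            # multiplier. Minimize the Lagrangian cost J = D + lam * R.
            return min(candidates, key=lambda c: c[1] + lam * c[2])

        modes = [("DC", 120.0, 10), ("H", 90.0, 18), ("V", 100.0, 12)]
        print(best_mode(modes, lam=4.0))    # -> ('V', 100.0, 12)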

  • Interleaved k-NN Classification and Bias Field Estimation for MR Image with Intensity Inhomogeneity

    Jingjing GAO  Mei XIE  Ling MAO  

     
    LETTER-Biological Engineering

      Vol:
    E97-D No:4
      Page(s):
    1011-1015

    k-NN classification has been applied to classify normal tissues in MR images. However, the intensity inhomogeneity of MR images leads conventional k-NN classification into significant misclassification errors. This letter proposes a new interleaved method, which combines k-NN classification and bias field estimation in an energy minimization framework, to simultaneously overcome the misclassification limitation of conventional k-NN classification and correct the bias field of the observed images. Experiments demonstrate the effectiveness and advantages of the proposed algorithm.

  • VAWS: Constructing Trusted Open Computing System of MapReduce with Verified Participants Open Access

    Yan DING  Huaimin WANG  Lifeng WEI  Songzheng CHEN  Hongyi FU  Xinhai XU  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    721-732

    MapReduce is commonly used as a parallel massive data processing model. When deploying it as a service over open systems, the computational integrity of the participants becomes an important issue because of untrustworthy workers. Current duplication-based solutions can effectively counter non-collusive attacks, yet most of them require a centralized worker to re-compute additional sampled tasks to defend against collusive attacks, which makes that worker a bottleneck. In this paper, we explore a trusted worker scheduling framework, named VAWS, to detect collusive attackers and assure the integrity of data processing without extra re-computation. Based on the historical results of verification, we construct an Integrity Attestation Graph (IAG) in VAWS to identify malicious mappers and remove them from the framework. To further improve the efficiency of identification, a verification-couple selection method guided by the IAG is introduced to detect the potential accomplices of a confirmed malicious worker. Theoretical analysis proves the effectiveness of the proposed method in improving system performance. Intensive experiments show that the accuracy of VAWS is over 97% and that the computational overhead approaches the ideal value of 2 as the number of map tasks increases in our scheme.
