Keyword Search Result

[Keyword] SI (16314 hits)

Results 4181-4200 of 16314

  • Power Noise Measurements of Cryptographic VLSI Circuits Regarding Side-Channel Information Leakage

    Daisuke FUJIMOTO  Noriyuki MIURA  Makoto NAGATA  Yuichi HAYASHI  Naofumi HOMMA  Takafumi AOKI  Yohei HORI  Toshihiro KATASHITA  Kazuo SAKIYAMA  Thanh-Ha LE  Julien BRINGER  Pirouz BAZARGAN-SABET  Shivam BHASIN  Jean-Luc DANGER  

     
    PAPER

      Vol:
    E97-C No:4
      Page(s):
    272-279

    Power supply noise waveforms within cryptographic VLSI circuits fabricated in a 65 nm CMOS technology are captured by using an on-chip voltage waveform monitor (OCM). The waveforms exhibit the correlation of dynamic voltage drops with internal logical activities during Advanced Encryption Standard (AES) processing, and they cause side-channel information leakage regarding secret key bytes. Correlation Power Analysis (CPA) is an attack method that extracts such information leakage from the waveforms. The frequency components of power supply noise contributing to the leakage are shown to be localized in an extremely low frequency region. The level of information leakage is strongly associated with the size of the increment of dynamic voltage drops with respect to the Hamming distance in the AES processing. The time window where the leakage most likely occurs is clearly identified within a single clock cycle in the final stage of AES processing. The on-chip power supply noise measurements unveil facts about side-channel information leakage that lie behind traditional CPA with on-board sensing of power supply current through a 1 Ω resistor.
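
    Since CPA is central to the analysis above, here is a minimal sketch of last-round CPA against AES, correlating a Hamming-distance model with measured traces for each key-byte guess. The trace/ciphertext layout and the simplification of ignoring ShiftRows are illustrative assumptions, not the authors' measurement setup.

    ```python
    import numpy as np

    def cpa_last_round(traces, ciphertexts, byte_pos, sbox_inv):
        """traces: (n, t) float power samples; ciphertexts: (n, 16) uint8;
        sbox_inv: AES inverse S-box as a 256-entry uint8 array (assumed)."""
        tr_c = traces - traces.mean(axis=0)      # center once, reuse per guess
        best_corr = np.zeros(256)
        for key_guess in range(256):
            # Hamming distance between the ciphertext byte and the state byte
            # before the final SubBytes, under this key guess (ShiftRows ignored).
            st = sbox_inv[ciphertexts[:, byte_pos] ^ key_guess]
            hd = np.unpackbits((st ^ ciphertexts[:, byte_pos])[:, None],
                               axis=1).sum(axis=1).astype(float)
            hd_c = hd - hd.mean()
            denom = np.sqrt((hd_c ** 2).sum() * (tr_c ** 2).sum(axis=0))
            corr = np.abs(hd_c @ tr_c) / denom   # correlation at each sample
            best_corr[key_guess] = corr.max()    # peak over the time window
        return int(best_corr.argmax())           # most likely key byte
    ```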

  • Hypersphere Sampling for Accelerating High-Dimension and Low-Failure Probability Circuit-Yield Analysis

    Shiho HAGIWARA  Takanori DATE  Kazuya MASU  Takashi SATO  

     
    PAPER

      Vol:
    E97-C No:4
      Page(s):
    280-288

    This paper proposes a novel and efficient method, termed hypersphere sampling, to estimate circuit yields with low failure probability and a large number of variability sources. Importance sampling using a mean-shift Gaussian mixture distribution as the alternative distribution is used for yield estimation, and the proposed method determines the shift locations of the Gaussian distributions. The method bisects cones whose bases are parts of hyperspheres in order to locate probabilistically important failure regions; identifying these regions accelerates the convergence of importance sampling. Clustering of the failure samples determines the required number of Gaussian distributions. Successful static random access memory (SRAM) yield estimations of 6- to 24-dimensional problems are presented. The number of Monte Carlo trials is reduced by two to five orders of magnitude compared with conventional Monte Carlo simulation methods.
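
    A minimal sketch of the importance-sampling step described above, with a mean-shift Gaussian mixture as the alternative distribution; the failure checker `fail` and the shift locations `mus` (standing in for the hypersphere-search results) are hypothetical inputs.

    ```python
    import numpy as np

    def failure_prob_is(fail, mus, dim, n_samples, seed=0):
        """fail(x) -> bool wraps a circuit simulation; mus: (k, dim) shifts."""
        rng = np.random.default_rng(seed)
        k = len(mus)
        comp = rng.integers(k, size=n_samples)           # pick mixture component
        x = mus[comp] + rng.standard_normal((n_samples, dim))

        def log_gauss(x, mu):                            # up to a common constant
            return -0.5 * ((x - mu) ** 2).sum(axis=1)

        log_p = log_gauss(x, np.zeros(dim))              # nominal N(0, I)
        log_q = np.logaddexp.reduce(
            np.stack([log_gauss(x, m) for m in mus]), axis=0) - np.log(k)
        w = np.exp(log_p - log_q)                        # importance weights
        ind = np.array([fail(xi) for xi in x], dtype=float)
        return float((w * ind).mean())                   # unbiased P(fail) estimate
    ```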

  • A High-Frame-Rate Vision System with Automatic Exposure Control

    Qingyi GU  Abdullah AL NOMAN  Tadayoshi AOYAMA  Takeshi TAKAKI  Idaku ISHII  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:4
      Page(s):
    936-950

    In this paper, we present a high-frame-rate (HFR) vision system that can automatically control its exposure time by executing brightness-histogram-based image processing in real time at a high frame rate. Our aim is to obtain high-quality HFR images for robust image processing of high-speed phenomena even under dynamically changing illumination, such as lamps flickering at 100 Hz, corresponding to an AC power supply at 50/60 Hz. Our vision system can simultaneously calculate a 256-bin brightness histogram for an 8-bit gray image of 512×512 pixels at 2000 fps by implementing a brightness histogram calculation circuit module as parallel hardware logic on an FPGA-based high-speed vision platform. Based on the HFR brightness histogram calculation, our method realizes automatic exposure (AE) control of 512×512 images at 2000 fps using our proposed AE algorithm. The algorithm maximizes the number of pixels in the effective range of the brightness histogram, excluding overly dark and bright pixels, to improve the dynamic range of the captured image without over- or under-exposure. The effectiveness of our HFR system with AE control is evaluated through experiments on several scenes with illumination flickering at 100 Hz, which is too fast for the human eye to see.
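
    A minimal sketch of the histogram-driven AE idea: adjust exposure so that the pixel count inside an effective brightness range is maximized. The range bounds and multiplicative step are illustrative assumptions, not the parameters of the authors' FPGA implementation.

    ```python
    import numpy as np

    def ae_update(gray_u8, exposure_us, lo=16, hi=239, step=1.1):
        hist = np.bincount(gray_u8.ravel(), minlength=256)  # 256-bin histogram
        n = hist.sum()
        under = hist[:lo].sum() / n       # fraction of overly dark pixels
        over = hist[hi + 1:].sum() / n    # fraction of overly bright pixels
        if over > under:
            return exposure_us / step     # saturation dominates: expose less
        if under > over:
            return exposure_us * step     # darkness dominates: expose more
        return exposure_us                # histogram already well placed
    ```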

  • Computationally Efficient Estimation of Squared-Loss Mutual Information with Multiplicative Kernel Models

    Tomoya SAKAI  Masashi SUGIYAMA  

     
    LETTER-Fundamentals of Information Systems

      Vol:
    E97-D No:4
      Page(s):
    968-971

    Squared-loss mutual information (SMI) is a robust measure of the statistical dependence between random variables. The sample-based SMI approximator called least-squares mutual information (LSMI) has been demonstrated to be useful in machine learning tasks such as dimension reduction, clustering, and causal inference. The original LSMI approximates the pointwise mutual information by a kernel model, a linear combination of kernel basis functions located on paired data samples. Although LSMI was proved to achieve the optimal approximation accuracy asymptotically, its approximation capability is limited when the sample size is small, due to an insufficient number of kernel basis functions. Increasing the number of kernel basis functions can mitigate this weakness, but a naive implementation of this idea significantly increases the computation cost. In this article, we show that the computational complexity of LSMI with the multiplicative kernel model, which locates kernel basis functions on unpaired data samples so that the number of basis functions is the square of the sample size, is the same as that of the plain kernel model. We experimentally demonstrate that LSMI with the multiplicative kernel model is more accurate than that with plain kernel models in small-sample cases, with only a mild increase in computation time.
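
    For orientation, a minimal sketch of LSMI with the plain kernel model on one-dimensional variables, using Gaussian kernels and one common plug-in form of the SMI estimate; kernel widths and the regularization parameter would normally be tuned by cross-validation.

    ```python
    import numpy as np

    def lsmi(x, y, sigma_x=1.0, sigma_y=1.0, lam=1e-3):
        n = len(x)
        kx = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma_x ** 2))
        ky = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sigma_y ** 2))
        h = (kx * ky).mean(axis=0)              # h_l = (1/n) sum_i k_l(x_i) l_l(y_i)
        H = (kx.T @ kx) * (ky.T @ ky) / n ** 2  # Hadamard-product structure
        alpha = np.linalg.solve(H + lam * np.eye(n), h)
        return 0.5 * h @ alpha - 0.5            # plug-in SMI estimate
    ```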

  • Sparsity Regularized Affine Projection Adaptive Filtering for System Identification

    Young-Seok CHOI  

     
    LETTER-Fundamentals of Information Systems

      Vol:
    E97-D No:4
      Page(s):
    964-967

    A new type of affine projection (AP) algorithm that incorporates the sparsity condition of a system is presented. To exploit the sparsity of the system, a weighted l1-norm regularization is imposed on the cost function of the AP algorithm. By minimizing the cost function using subgradient calculus and choosing two distinct weightings for the l1-norm, two stochastic-gradient-based sparsity-regularized AP (SR-AP) algorithms are developed. Experimental results show that the SR-AP algorithms outperform their conventional AP counterparts in identifying sparse systems.
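
    A minimal sketch of one sparsity-regularized AP update in the zero-attractor style; since the abstract does not spell out the two weightings, a plain sign(w) attractor stands in for the weighted l1 subgradient term.

    ```python
    import numpy as np

    def sr_ap_update(w, U, d, mu=0.5, delta=1e-3, rho=5e-4):
        """U: (N, P) matrix of the P most recent inputs; d: their desired outputs."""
        e = d - U.T @ w                               # a-priori error vector
        P = U.shape[1]
        w = w + mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(P), e)
        return w - rho * np.sign(w)                   # zero attractor (l1 subgradient)
    ```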

  • Rapid Acquisition Assisted by Navigation Data for Inter-Satellite Links of Navigation Constellation

    Xian-Bin LI  Yue-Ke WANG  Jian-Yun CHEN  Shi-ce NI  

     
    PAPER-Navigation, Guidance and Control Systems

      Vol:
    E97-B No:4
      Page(s):
    915-922

    Introducing inter-satellite ranging and communication links into a Global Navigation Satellite System (GNSS) can improve its performance. In view of the highly dynamic characteristics of the inter-satellite link (ISL) signal of a navigation constellation and the requirement for rapid but reliable acquisition, we utilize navigation data, a resource specific to navigation satellites, to assist signal acquisition. In this paper, we introduce a method that uses navigation data to aid signal acquisition in three aspects: the search space, the search algorithm, and the detector structure. First, an iteration method to calculate the search space is presented. Then the most efficient algorithm is selected by comparing the computational complexity of different search algorithms. Finally, using the navigation data, we propose a method that keeps the detection probability constant by adjusting the number of non-coherent integrations. An analysis shows that, with the assistance of navigation data, the computing cost of ISL signal acquisition can be reduced significantly, while acquisition speed is enhanced and the detection probability is stabilized.
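
    A generic sketch of why navigation-data assistance shrinks the acquisition search space (not the paper's iteration method): ephemeris-based predictions bound the Doppler and code-phase uncertainties, and the grid size scales with both. All figures below are illustrative assumptions.

    ```python
    def search_grid_size(dopp_unc_hz, code_unc_chips, dopp_bin_hz, code_step_chips):
        dopp_bins = max(1, round(2 * dopp_unc_hz / dopp_bin_hz))
        code_bins = max(1, round(2 * code_unc_chips / code_step_chips))
        return dopp_bins * code_bins   # cells to test per acquisition attempt

    # Assisted: predicted relative motion bounds Doppler to a few hundred Hz.
    assisted = search_grid_size(200, 5, 50, 0.5)
    # Cold search: +/-10 kHz Doppler and the whole 1023-chip code phase.
    cold = search_grid_size(10_000, 1023 / 2, 50, 0.5)
    print(assisted, cold)              # far fewer cells with assistance
    ```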

  • Past and Future Technology for Mixed Signal LSI Open Access

    Kenichi HATASAKO  Tetsuya NITTA  Masami HANE  Shigeto MAEGAWA  

     
    INVITED PAPER

      Vol:
    E97-C No:4
      Page(s):
    238-244

    This paper discusses mixed signal LSI technology with embedded power transistors. Trends in mixed signal LSI technology are explained first; the technology has advanced with the help of fine fabrication technology and SOI technology. The BEOL transistor, a new development that uses InGaZnO (IGZO) as its TFT channel material, is then introduced. The BEOL transistor is one future device that enables 3D ICs and chip-shrinking technology.

  • Performance Improvement of Database Compression for OLTP Workloads

    Ki-Hoon LEE  

     
    LETTER-Data Engineering, Web Information Systems

      Vol:
    E97-D No:4
      Page(s):
    976-980

    As data volumes explode, data storage costs become a large fraction of total IT costs, and compression can reduce them substantially. However, database compression is generally considered unsuitable for write-intensive workloads. In this paper, we provide a comprehensive solution that improves the performance of compressed databases for write-intensive OLTP workloads. We find that storing data too densely in compressed pages incurs many future page splits, which require exclusive locks. To avoid this lock contention, we reduce page splits by sacrificing a couple of percent of the space savings: we reserve enough space in each compressed page for future record updates and prevent page merges that are likely to incur page splits in the near future. Experimental results using the TPC-C benchmark and MySQL/InnoDB show that our method gives 1.5 times higher throughput with 33% space savings compared with the uncompressed counterpart, and 1.8 times higher throughput with only 1% more space compared with the state-of-the-art compression method developed by Facebook.
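
    A toy sketch of the slack-reservation idea: keep a headroom of a few percent in each compressed page so that record growth is absorbed in place instead of forcing a split (and its exclusive lock). The page model and thresholds are illustrative, not InnoDB internals.

    ```python
    PAGE_SIZE = 8192                      # compressed page capacity (assumed)
    RESERVE = int(0.02 * PAGE_SIZE)       # sacrifice ~2% space savings as headroom

    def insert_record(page, record):
        """page: {'used': int, 'records': list} -- a toy page model."""
        if page["used"] + len(record) > PAGE_SIZE - RESERVE:
            return "split"                # caller splits the page (exclusive lock)
        page["records"].append(record)
        page["used"] += len(record)
        return "ok"
    ```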

  • A Technique of Femtocell Searching in Next-Generation Mobile Communication Systems Using Synchronization Signals

    Yeong Jun KIM  Tae Hwan HONG  Yong Soo CHO  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:4
      Page(s):
    817-825

    In this paper, a new technique is proposed to reduce the frequency of cell searches by user equipment (UE) in the presence of femtocells. A new common signal (CS) and a separate set of primary synchronization signals (PSSs) are employed to facilitate efficient cell search in a next-generation LTE-based system. The velocity of the UE is also utilized to determine the cell search mode. A slow UE recognizes the presence of femtocells using the CS, so that it can search separately for macrocells and femtocells. A fast UE does not search for femtocells, since femtocell coverage is restricted to a small region; instead, it detects the macrocell boundary using the PSSs transmitted from neighboring macrocells, so that it searches for macrocells only at the macrocell boundary. The effects of the CS and UE velocity on the number of cell searches are analyzed, and the performance of the proposed technique is evaluated by computer simulations.
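
    A minimal sketch of the velocity-dependent search-mode decision described above; the speed threshold and the function interface are illustrative assumptions.

    ```python
    SPEED_THRESHOLD_KMH = 30   # assumed boundary between "slow" and "fast" UEs

    def choose_search_mode(speed_kmh, cs_detected, at_macro_boundary):
        if speed_kmh < SPEED_THRESHOLD_KMH:
            # Slow UE: the common signal reveals nearby femtocells, so
            # macrocell and femtocell searches are run separately.
            return "macro+femto" if cs_detected else "macro"
        # Fast UE: femtocell coverage is too small to be useful, so search
        # macrocells only, and only near the macrocell boundary (via PSSs).
        return "macro" if at_macro_boundary else "none"
    ```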

  • Textual Approximation Methods for Time Series Classification: TAX and l-TAX Open Access

    Abdulla Al MARUF  Hung-Hsuan HUANG  Kyoji KAWAGOE  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    798-810

    A great deal of work has been conducted on time series classification and similarity search over the past decades. However, classifying a time series with high accuracy is still difficult in applications such as ubiquitous or sensor systems. In this paper, a novel textual approximation of a time series, called TAX, is proposed to achieve highly accurate time series classification. l-TAX, an extended version of TAX that shows promising classification accuracy over TAX and other existing methods, is also proposed. We provide a comprehensive comparison between TAX and l-TAX and discuss the benefits of both methods. Both TAX and l-TAX transform a time series into a textual structure by means of existing document retrieval methods and bioinformatics algorithms. In TAX, a time series is represented as a document-like structure, whereas l-TAX uses a sequence of textual symbols. This paper provides a comprehensive overview of the textual approximation and the techniques used by TAX and l-TAX.
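
    To make the idea of a textual approximation concrete, here is a piecewise-mean symbolization in the spirit of symbolic representations such as SAX; it is not the exact TAX procedure, and all sizes are illustrative.

    ```python
    import numpy as np

    def to_symbols(series, n_segments=8, alphabet="abcdefgh"):
        series = (series - series.mean()) / (series.std() + 1e-12)  # z-normalize
        means = np.array([seg.mean() for seg in np.array_split(series, n_segments)])
        # Map each segment mean to a letter via equal-width bins over its range.
        edges = np.linspace(means.min(), means.max(), len(alphabet) + 1)[1:-1]
        return "".join(alphabet[int(i)] for i in np.searchsorted(edges, means))

    print(to_symbols(np.sin(np.linspace(0, 4 * np.pi, 128))))  # an 8-letter "word"
    ```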

  • A Soft-Decision Recursive Decoding Algorithm Using Iterative Bounded-Distance Decoding for u|u+v Codes

    Hitoshi TOKUSHIGE  

     
    LETTER-Coding Theory

      Vol:
    E97-A No:4
      Page(s):
    996-1000

    A soft-decision recursive decoding algorithm (RDA) is proposed for the class of binary linear block codes generated recursively by the u|u+v construction. It is well known that Reed-Muller (RM) codes belong to this class. A code in this class can be decomposed into left and right components. At each recursion level of the RDA, if the component is decomposable, the RDA is performed for the left component and then for the cosets generated from the left decoding result and the right component; the result of this level is obtained by concatenating the left and right decoding results. If the component is indecomposable, a proposed iterative bounded-distance decoding algorithm is performed. Computer simulations were conducted to evaluate the RDA for RM codes over an additive white Gaussian noise channel with binary phase-shift keying modulation. The results show that the block error rates of the RDA are relatively close to those of maximum-likelihood decoding for the third-order RM code of length 2^6, and better than those of Chase II decoding for the third-order RM codes of lengths 2^6 and 2^7 and the fourth-order RM code of length 2^8.
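
    A structural sketch of the recursion described above, operating on bit log-likelihood ratios (LLRs); `code` with its `decomposable`/`left`/`right` attributes and the leaf decoder `bdd_decode` (standing in for the proposed iterative bounded-distance decoder) are hypothetical interfaces.

    ```python
    import numpy as np

    def rda(llr, code, bdd_decode):
        if not code.decomposable:
            return bdd_decode(llr, code)            # leaf: bounded-distance decoding
        half = len(llr) // 2
        u_hat = rda(llr[:half], code.left, bdd_decode)   # decode left component
        # Coset step: the second half carries u + v, so strip the decoded u
        # by flipping LLR signs where u_hat = 1, then decode the right component.
        v_hat = rda(llr[half:] * (1 - 2 * u_hat), code.right, bdd_decode)
        return np.concatenate([u_hat, (u_hat + v_hat) % 2])  # (u | u + v)
    ```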

  • Predicting Political Orientation of News Articles Based on User Behavior Analysis in Social Network Open Access

    Jun-Gil KIM  Kyung-Soon LEE  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    685-693

    News articles usually present a biased viewpoint on contentious issues, potentially causing social problems. To mitigate this media bias, we propose a novel framework for predicting the orientation of a news article by analyzing user behavior in Twitter. Highly active users tend to show consistent behavior patterns in a social network, retweeting users who share their viewpoints on contentious issues. The bias ratio of highly active users is measured to predict the orientation of users, and the political orientation of a news article is then predicted based on the bias ratio of users, mutual retweeting, and opinion analysis of tweet documents. The analysis of user behavior shows that 88.82% of users have a bias ratio of 1, indicating that most users have a distinctive orientation. Our prediction method based on user orientation achieved 88.6% accuracy, a significant improvement over SVM classification. These results show that the proposed method is effective in social networks.
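
    A minimal sketch of the bias ratio described above: the share of a user's retweets that goes to the dominant side, so that a value of 1 means the user retweets one side exclusively. The labels and threshold are illustrative assumptions.

    ```python
    from collections import Counter

    def bias_ratio(retweeted_orientations):
        counts = Counter(retweeted_orientations)   # e.g. {'left': 9, 'right': 1}
        return max(counts.values()) / sum(counts.values())

    def predict_user_orientation(retweeted_orientations, threshold=0.8):
        counts = Counter(retweeted_orientations)
        side = max(counts, key=counts.get)         # dominant side
        return side if bias_ratio(retweeted_orientations) >= threshold else "neutral"
    ```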

  • Microphone Classification Using Canonical Correlation Analysis

    Jongwon SEOK  Keunsung BAE  

     
    LETTER-Multimedia Environment Technology

      Vol:
    E97-A No:4
      Page(s):
    1024-1026

    Canonical correlation analysis (CCA) is applied to extract features for microphone classification. We utilized the coherence between near-silence regions. Experimental results show the promise of canonical correlation features for microphone classification.
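
    A minimal sketch of extracting canonical correlation features from two sets of frame-level spectral observations (e.g., near-silence regions of two recordings); the feature layout is an illustrative assumption, not the letter's exact setup.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def cca_features(X, Y, n_components=4):
        """X, Y: (n_frames, n_bins) spectra from near-silence regions."""
        Xc, Yc = CCA(n_components=n_components).fit_transform(X, Y)
        # The per-component canonical correlations serve as features.
        return np.array([np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
                         for k in range(n_components)])
    ```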

  • Parameterized Multisurface Fitting for Multi-Frame Superresolution

    Hongliang XU  Fei ZHOU  Fan YANG  Qingmin LIAO  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E97-D No:4
      Page(s):
    1001-1003

    We propose a parameterized multisurface fitting method for multi-frame super-resolution (SR) processing. A parameter assumed for the unknown high-resolution (HR) pixel is used in the multisurface fitting, so that each surface fitted at a low-resolution (LR) pixel is an expression of this parameter. The final SR result is obtained by fusing the sampled values from these surfaces in a maximum a posteriori fashion. Experimental results demonstrate the superiority of the proposed method.

  • A Framework to Integrate Public Information into Runtime Safety Analysis for Critical Systems

    Guoqi LI  

     
    LETTER-Dependable Computing

      Vol:
    E97-D No:4
      Page(s):
    981-983

    Today's large and complicated safety-critical systems need to keep changing to accommodate ever-changing objectives and environments. Accordingly, runtime analysis for safe reconfiguration or evaluation is currently a hot topic in the field, and acquiring information about the external environment is crucial for such analysis. With the rapid development of web services, mobile networks, and ubiquitous computing, abundant real-time environmental information is available on the Internet. To integrate this public information into the runtime safety analysis of critical systems, this paper puts forward a framework that can be implemented with open-source, cross-platform modules and, encouragingly, is applicable to various safety-critical systems.

  • Facial Expression Recognition Based on Facial Region Segmentation and Modal Value Approach

    Gibran BENITEZ-GARCIA  Gabriel SANCHEZ-PEREZ  Hector PEREZ-MEANA  Keita TAKAHASHI  Masahide KANEKO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:4
      Page(s):
    928-935

    This paper presents a facial expression recognition algorithm based on segmentation of a face image into four facial regions (eyes-eyebrows, forehead, mouth, and nose). In order to unify the different results obtained from combinations of facial regions, a modal value approach that takes the most frequent decision of the classifiers is proposed. The robustness of the algorithm is also evaluated under partial occlusion, using four different types of occlusion (half left, half right, eyes, and mouth occlusion). The proposed method employs a sub-block eigenphases algorithm that uses the phase spectrum and principal component analysis (PCA) for feature vector estimation, and the feature vectors are fed to a support vector machine (SVM) for classification. Experimental results show that the modal value approach raises the average recognition rate to more than 90%, and that performance remains high even under partial occlusion because occluded parts are excluded from the feature extraction process.
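
    A minimal sketch of the modal value fusion: each facial region's classifier votes, the most frequent label wins, and occluded regions are simply left out of the vote. Region and label names are illustrative.

    ```python
    from collections import Counter

    def modal_value_decision(region_predictions):
        # e.g. {'eyes-eyebrows': 'happy', 'forehead': 'happy',
        #       'mouth': 'surprise', 'nose': 'happy'} -> 'happy'
        return Counter(region_predictions.values()).most_common(1)[0][0]

    def classify_with_occlusion(region_predictions, occluded_regions):
        visible = {r: p for r, p in region_predictions.items()
                   if r not in occluded_regions}    # exclude occluded regions
        return modal_value_decision(visible)
    ```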

  • Mapping Articulatory-Features to Vocal-Tract Parameters for Voice Conversion

    Narpendyah Wisjnu ARIWARDHANI  Masashi KIMURA  Yurie IRIBE  Kouichi KATSURADA  Tsuneo NITTA  

     
    PAPER-Speech and Hearing

      Vol:
    E97-D No:4
      Page(s):
    911-918

    In this paper, we propose voice conversion (VC) based on mapping articulatory features (AF) to vocal-tract parameters (VTP). An artificial neural network (ANN) is applied to map AF to VTP and thereby convert a speaker's voice to a target speaker's voice. The proposed system is not only text-independent, requiring no parallel utterances between source and target speakers, but can also be used with an arbitrary source speaker; that is, our approach does not require source-speaker data to build the VC model. We also focus on using only a small amount of target-speaker training data. For comparison, a baseline system based on the Gaussian mixture model (GMM) approach is constructed. Experimental results with a small amount of training data show that the converted voice of our approach is intelligible and carries the speaker individuality of the target speaker.
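
    A minimal sketch of the AF-to-VTP mapping as a small feed-forward network; the feature dimensions, network size, and random stand-in data are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    N_AF, N_VTP = 15, 10                    # assumed AF / VTP dimensions
    rng = np.random.default_rng(0)
    AF_train = rng.random((2000, N_AF))     # per-frame articulatory features
    VTP_train = rng.random((2000, N_VTP))   # target speaker's VTP per frame

    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    net.fit(AF_train, VTP_train)            # learn the AF -> VTP mapping

    # At conversion time, AFs from an arbitrary source speaker are mapped to
    # target-speaker VTPs, from which speech is synthesized (not shown).
    vtp_pred = net.predict(rng.random((1, N_AF)))
    ```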

  • QoS Analysis for Service Composition by Human and Web Services Open Access

    Donghui LIN  Toru ISHIDA  Yohei MURAKAMI  Masahiro TANAKA  

     
    PAPER

      Vol:
    E97-D No:4
      Page(s):
    762-769

    The availability of more and more Web services provides great variety for users designing service processes. However, there are situations in which services or service processes cannot meet users' requirements in functional QoS dimensions (e.g., translation quality in a machine translation service). In such cases, composing Web services and human tasks is a possible alternative solution. However, analyses of such practical efforts have rarely been reported in previous research, most of which focuses on the technology of embedding human tasks in software environments. This study therefore analyzes the effects of composing Web services and human activities, using a case study in the domain of language services with large-scale experiments. From the experiments and analysis, we find that (1) service implementation variety can be greatly increased by composing Web services and human activities to satisfy users' QoS requirements; (2) the functional QoS of a Web service can be significantly improved by introducing human activities with limited cost and execution time, provided the human activities are of sufficient quality; and (3) the multiple QoS attributes of a composite service are affected in different ways by different levels of human activity quality.

  • An Efficient Beamforming Algorithm for Large-Scale Phased Arrays with Lossy Digital Phase Shifters

    Shunji TANAKA  Tomohiko MITANI  Yoshio EBIHARA  

     
    PAPER-Antennas and Propagation

      Vol:
    E97-B No:4
      Page(s):
    783-790

    An efficient beamforming algorithm for large-scale phased arrays with lossy digital phase shifters is presented. This problem, which arises in microwave power transmission from solar power satellites, is to maximize the array gain in a desired direction while taking the gain loss of the phase shifters into account. In this paper, the problem is first formulated as a discrete optimization problem, which is then decomposed into element-wise subproblems by the real rotation theorem. Based on this approach, a polynomial-time algorithm to solve the problem numerically is constructed, and its effectiveness is verified by numerical simulations.
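
    A minimal sketch of the element-wise decomposition: by the real rotation theorem, |z| = max over theta of Re(e^(-j*theta) z), so for each candidate theta the best discrete phase setting can be chosen independently per element. The loss model and grids are illustrative assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def beamform(steer_phase, settings, loss, n_theta=360):
        """steer_phase: (K,) desired phases; settings: (S,) discrete shifter
        phases; loss: (S,) amplitude loss per setting (0 < loss <= 1)."""
        best_gain, best_choice = -np.inf, None
        for th in np.linspace(0, 2 * np.pi, n_theta, endpoint=False):
            # For fixed theta the objective splits per element: maximize
            # Re(e^(-j*th) * loss * e^(j*(setting - steer_phase))) element-wise.
            contrib = loss[None, :] * np.cos(settings[None, :]
                                             - steer_phase[:, None] - th)
            gain = contrib.max(axis=1).sum()
            if gain > best_gain:
                best_gain, best_choice = gain, contrib.argmax(axis=1)
        return best_choice, best_gain      # per-element setting index, array gain
    ```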

  • SegOMP: Sparse Recovery with Fewer Measurements

    Li ZENG  Xiongwei ZHANG  Liang CHEN  Weiwei YANG  

     
    LETTER-Digital Signal Processing

      Vol:
    E97-A No:3
      Page(s):
    862-864

    Presented is a new measurement and reconstruction framework for Compressed Sensing (CS), aimed at reducing the number of measurements required for faithful reconstruction. A sparse vector is segmented into several sparser subvectors, and each of these is randomly sensed. For recovery, we reconstruct the subvectors individually and assemble them to obtain the original signal. We show that the proposed scheme, referred to as SegOMP, yields a higher probability of exact recovery in theory, and it requires a much smaller number of measurements to achieve the same reconstruction quality as canonical greedy algorithms. Extensive experiments verify the validity of SegOMP and demonstrate its potential.
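
    A minimal sketch of the SegOMP pipeline: split the sparse vector into segments, sense each segment with its own random matrix, recover each segment with textbook OMP, and reassemble. Sizes are illustrative, and the plain OMP below is a stand-in for the letter's exact recovery settings.

    ```python
    import numpy as np

    def omp(A, y, k):
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.abs(A.T @ residual).argmax()))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n_seg, seg_len, k_per_seg, m_per_seg = 4, 64, 2, 16
    x = np.zeros(n_seg * seg_len)
    for s in range(n_seg):                      # k_per_seg nonzeros per segment
        pos = s * seg_len + rng.choice(seg_len, k_per_seg, replace=False)
        x[pos] = rng.standard_normal(k_per_seg)

    rec = []
    for s in range(n_seg):                      # sense and recover segment-wise
        seg = x[s * seg_len:(s + 1) * seg_len]
        A = rng.standard_normal((m_per_seg, seg_len)) / np.sqrt(m_per_seg)
        rec.append(omp(A, A @ seg, k_per_seg))
    print(np.linalg.norm(np.concatenate(rec) - x))   # near-zero recovery error
    ```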
