Keyword Search Result

[Keyword] CTI (8214 hits)

Showing results 3681-3700 of 8214

  • Scan Chain Ordering to Reduce Test Data for BIST-Aided Scan Test Using Compatible Scan Flip-Flops

    Hiroyuki YOTSUYANAGI  Masayuki YAMAMOTO  Masaki HASHIZUME  

     
    PAPER

      Vol:
    E93-D No:1
      Page(s):
    10-16

    In this paper, a scan chain ordering method for BIST-aided scan test is proposed to reduce test data volume and test application time. In this work, we use a simple LFSR without a phase shifter as the PRPG and configure scan chains from compatible sets of flip-flops while taking into account the correlations among flip-flops in the LFSR. The method reduces the number of inverter codes required to invert the bits in PRPG patterns that conflict with ATPG patterns. Experimental results for several benchmark circuits demonstrate the feasibility of our test method.
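
    As a rough illustration of the setting only (not the authors' ordering algorithm), the sketch below generates one pattern from a simple Fibonacci LFSR and counts how many specified ATPG bits conflict with it; each conflicting bit is one that would need an inverter code. The LFSR taps, seed, and ATPG cube are made-up example values.

```python
# Illustrative sketch (not the paper's method): one PRPG pattern from a simple
# Fibonacci LFSR, and the number of specified ATPG bits that conflict with it.

def lfsr_pattern(seed, taps, length):
    """Generate `length` pseudo-random bits from a Fibonacci LFSR.
    `seed` is a list of 0/1 state bits; `taps` are state indices XORed for feedback."""
    state = list(seed)
    bits = []
    for _ in range(length):
        bits.append(state[-1])                 # output bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]              # shift in the feedback bit
    return bits

def count_conflicts(prpg_bits, atpg_pattern):
    """atpg_pattern is a string over {'0', '1', 'X'}; 'X' is a don't-care.
    Each conflicting specified bit would require one inverter code."""
    return sum(1 for p, a in zip(prpg_bits, atpg_pattern)
               if a != 'X' and p != int(a))

# Hypothetical example: 4-bit LFSR, 8-cell scan chain, ATPG cube with don't-cares.
prpg = lfsr_pattern(seed=[1, 0, 0, 1], taps=[3, 2], length=8)
print(prpg, count_conflicts(prpg, "1XX0X10X"))
```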

  • Practical Password Recovery Attacks on MD4 Based Prefix and Hybrid Authentication Protocols

    Yu SASAKI  Lei WANG  Kazuo OHTA  Kazumaro AOKI  Noboru KUNIHIRO  

     
    PAPER-Hash Function

      Vol:
    E93-A No:1
      Page(s):
    84-92

    In this paper, we present practical password recovery attacks against two challenge-and-response authentication protocols using MD4. For attacks on protocols, the number of queries is one of the most important factors because the opportunity for an attacker to ask queries is very limited in real protocols. When responses are computed as MD4(Password||Challenge), which is called the prefix approach, previous work needs to ask 2^37 queries to recover a password. Asking 2^37 queries in real protocols is almost impossible. In our attack, to recover up to 8-octet passwords, we only need 1 time the amount of eavesdropping, 17 queries, and 2^34 off-line MD4 computations. To recover up to 12-octet passwords, we only need 2^10 times the amount of eavesdropping, 2^10 queries, and 2^41 off-line MD4 computations. When responses are computed as MD4(Password||Challenge||Password), which is called the hybrid approach, previous work needs to ask 2^63 queries, while in our attack, up to 8-octet passwords are practically recovered with 2^8 times the amount of eavesdropping, 2^8 queries, and 2^39 off-line MD4 computations. Our idea is to guess part of the password so that we can simulate the values of intermediate chaining variables from observed hash values. This enables us to use a short local collision that occurs with very high probability, and thus the number of queries becomes practical.
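
    For readers unfamiliar with the two constructions, the following minimal sketch shows how a response is formed in the prefix and hybrid approaches; it illustrates the protocol only, not the recovery attack. It assumes the local OpenSSL build still exposes MD4 through hashlib.new('md4') (recent builds may relegate MD4 to the legacy provider).

```python
# Minimal sketch of the two challenge-response constructions analysed in the paper.
import hashlib
import os

def prefix_response(password: bytes, challenge: bytes) -> bytes:
    # prefix approach: response = MD4(Password || Challenge)
    return hashlib.new('md4', password + challenge).digest()

def hybrid_response(password: bytes, challenge: bytes) -> bytes:
    # hybrid approach: response = MD4(Password || Challenge || Password)
    return hashlib.new('md4', password + challenge + password).digest()

challenge = os.urandom(16)                     # server-chosen random challenge
print(prefix_response(b"8octetPW", challenge).hex())
print(hybrid_response(b"8octetPW", challenge).hex())
```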

  • Global Nonlinear Optimization Based on Wave Function and Wave Coefficient Equation

    Hideki SATOH  

     
    PAPER-Nonlinear Problems

      Vol:
    E93-A No:1
      Page(s):
    291-301

    A method was developed for deriving the approximate global optimum of a nonlinear objective function with multiple local optimums. The objective function is expanded into a linear wave coefficient equation, so the problem of maximizing the objective function is reduced to that of maximizing a quadratic function with respect to the wave coefficients. Because a wave function expressed by the wave coefficients is used in the algorithm for maximizing the quadratic function, the algorithm is equivalent to a full search algorithm, i.e., one that searches in parallel for the global optimum in the whole domain of definition. Therefore, the global optimum is always derived. The method was evaluated for various objective functions, and computer simulation showed that a good approximation of the global optimum for each objective function can always be obtained.

  • Filter Size Determination of Moving Average Filters for Extended Differential Detection of OFDM Preambles

    Minjoong RIM  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E92-B No:12
      Page(s):
    3953-3956

    OFDM (Orthogonal Frequency Division Multiplexing) is widely used in wideband wireless communication systems due to its excellent performance. One of the most important operations in OFDM receivers is preamble detection. This paper addresses a general form of extended differential detection, which combines differential detection with a moving average filter. It also presents a filter size determination method that achieves satisfactory performance in various channel environments.
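
    A generic sketch of the idea follows, assuming a preamble built from two identical halves with repetition spacing L and a moving-average window W; both are arbitrary example values, not the paper's filter-size rule.

```python
# Generic differential preamble detection followed by a moving-average filter.
import numpy as np

def differential_metric(r, L, W):
    """r: complex baseband samples; L: spacing between repeated preamble parts;
    W: moving-average filter size. Returns a normalized detection metric."""
    d = r[:-L] * np.conj(r[L:])                           # differential products
    kernel = np.ones(W) / W
    num = np.abs(np.convolve(d, kernel, mode='valid'))    # moving average
    energy = np.convolve(np.abs(r[:-L]) ** 2, kernel, mode='valid')
    return num / np.maximum(energy, 1e-12)

# Example: a preamble made of two identical halves of length L, buried in noise.
rng = np.random.default_rng(0)
L = 64
half = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
preamble = np.concatenate([half, half])
noise = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * 0.1
signal = np.concatenate([noise(200), preamble, noise(200)])
metric = differential_metric(signal, L=L, W=L)
print(int(np.argmax(metric)))                  # peaks near the preamble start (200)
```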

  • Rapid Design Space Exploration of a Reconfigurable Instruction-Set Processor

    Farhad MEHDIPOUR  Hamid NOORI  Koji INOUE  Kazuaki MURAKAMI  

     
    PAPER-Embedded, Real-Time and Reconfigurable Systems

      Vol:
    E92-A No:12
      Page(s):
    3182-3192

    Multitude parameters in the design process of a reconfigurable instruction-set processor (RISP) may lead to a large design space and remarkable complexity. Quantitative design approach uses the data collected from applications to satisfy design constraints and optimize the design goals while considering the applications' characteristics; however it highly depends on designer observations and analyses. Exploring design space can be considered as an effective technique to find a proper balance among various design parameters. Indeed, this approach would be computationally expensive when the performance evaluation of the design points is accomplished based on the synthesis-and-simulation technique. A combined analytical and simulation-based model (CAnSO**) is proposed and validated for performance evaluation of a typical RISP. The proposed model consists of an analytical core that incorporates statistics collected from cycle-accurate simulation to make a reasonable evaluation and provide a valuable insight. CAnSO has clear speed advantages and therefore it can be used for easing a cumbersome design space exploration of a reconfigurable RISP processor and quick performance evaluation of slightly modified architectures.

  • Detection and Classification of Invariant Blurs

    Rachel Mabanag CHONG  Toshihisa TANAKA  

     
    PAPER-Imaging

      Vol:
    E92-A No:12
      Page(s):
    3313-3320

    A new algorithm for simultaneously detecting and identifying invariant blurs is proposed. It is mainly based on the behavior of extrema values in an image. It is computationally simple and fast, making it suitable for preprocessing, especially in practical imaging applications. One benefit of employing this method is the elimination of unnecessary processing, since unblurred images are separated from the blurred ones that require deconvolution. Additionally, it can improve reconstruction performance by properly identifying the blur type so that a more effective blur-specific deconvolution algorithm can be applied. Experimental results on natural images and their synthetically blurred versions show the characteristics and validity of the proposed method. Furthermore, it can be observed that feature selection makes the method more efficient and effective.
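
    As a crude illustration of the underlying intuition only (not the authors' detector or classifier), blurring suppresses local extrema, so even the density of 3x3 extrema separates a sharp image from its blurred version:

```python
# Crude illustration: the fraction of pixels that are local extrema in a 3x3
# window drops sharply after blurring. This is a toy proxy, not the paper's algorithm.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, gaussian_filter

def extrema_density(img):
    """Fraction of pixels equal to the max or min of their 3x3 neighbourhood."""
    img = img.astype(float)
    mx = maximum_filter(img, size=3)
    mn = minimum_filter(img, size=3)
    return ((img == mx) | (img == mn)).mean()

rng = np.random.default_rng(1)
sharp = rng.random((256, 256))                 # synthetic "sharp" image
blurred = gaussian_filter(sharp, sigma=2.0)    # synthetically blurred version
print(extrema_density(sharp), extrema_density(blurred))  # blurred value is much lower
```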

  • A Reordering Model Using a Source-Side Parse-Tree for Statistical Machine Translation

    Kei HASHIMOTO  Hirofumi YAMAMOTO  Hideo OKUMA  Eiichiro SUMITA  Keiichi TOKUDA  

     
    PAPER-Machine Translation

      Vol:
    E92-D No:12
      Page(s):
    2386-2393

    This paper presents a reordering model using a source-side parse-tree for phrase-based statistical machine translation. The proposed model is an extension of IST-ITG (imposing source tree on inversion transduction grammar) constraints. In the proposed method, the target-side word order is obtained by rotating nodes of the source-side parse-tree. We modeled the node rotation, monotone or swap, using word alignments based on a training parallel corpus and source-side parse-trees. The model efficiently suppresses erroneous target word orderings, especially global orderings. Furthermore, the proposed method conducts a probabilistic evaluation of target word reorderings. In English-to-Japanese and English-to-Chinese translation experiments, the proposed method resulted in a 0.49-point improvement (29.31 to 29.80) and a 0.33-point improvement (18.60 to 18.93) in word BLEU-4 compared with IST-ITG constraints, respectively. This indicates the validity of the proposed reordering model.
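
    The reordering operation itself is simple to sketch: every internal node of the binary source-side parse tree is either kept monotone or swapped. The paper learns the swap probabilities from word alignments; in the toy example below the decisions are supplied by hand, and the tree and words are invented.

```python
# Toy sketch of node rotation on a binary source-side parse tree.

def reorder(tree, decisions):
    """tree: a word (str) or a (left, right) pair of subtrees.
    decisions: iterator yielding True (swap children) or False (monotone),
    consumed in pre-order over the internal nodes."""
    if isinstance(tree, str):
        return [tree]
    left, right = tree
    swap = next(decisions)
    a, b = reorder(left, decisions), reorder(right, decisions)
    return b + a if swap else a + b

# English-like source tree (S (NP he) (VP (V reads) (NP books))) as nested tuples.
source_tree = ("he", ("reads", "books"))
# Swapping inside the VP but not at the top node yields an SOV-like target order.
print(reorder(source_tree, iter([False, True])))   # ['he', 'books', 'reads']
```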

  • Objective Evaluation of Components of Colour Distortions due to Image Compression

    Amal PUNCHIHEWA  Jonathan ARMSTRONG  Seiichiro HANGAI  Takayuki HAMAMOTO  

     
    PAPER-Evaluation

      Vol:
    E92-A No:12
      Page(s):
    3307-3312

    This paper presents a novel approach to analysing the colour bleeding caused by image compression. This is achieved by isolating two components of colour bleeding and evaluating them separately. Although these specific components of colour bleeding have not been studied in great detail in the past, with the use of a synthetic test pattern similar to the colour bars used to test analogue television transmissions, we have successfully isolated and evaluated "colour blur" and "colour ringing" as two separate components of the colour bleeding artefact. We have also developed metrics for these artefacts and applied them in a series of trials assessing the colour reproduction performance of a JPEG codec and a JPEG2000 codec, both as implemented by IrfanView. The algorithms developed to measure these artefact metrics proved to be effective tools for evaluating and benchmarking the performance of similar codecs, or different implementations of the same codecs.
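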

  • A Study of Inherent Pen Input Modalities for Precision Parameter Manipulations during Trajectory Tasks

    Yizhong XIN  Xiangshi REN  

     
    PAPER-Human-computer Interaction

      Vol:
    E92-D No:12
      Page(s):
    2454-2461

    Adjustment of a certain parameter in the course of performing a trajectory task such as drawing or gesturing is a common manipulation in pen-based interaction. Since pen tip information is confined to x-y coordinate data, such concurrent parameter adjustment is not easily accomplished in devices using only a pen tip. This paper comparatively investigates the performance of inherent pen input modalities (Pressure, Tilt, Azimuth, and Rolling) and Key Pressing with the non-preferred hand used for precision parameter manipulation during pen sliding actions. We elaborate our experimental design framework here and conduct experimentation to evaluate the effect of the five techniques. Results show that Pressure enabled the fastest performance along with the lowest error rate, while Azimuth exhibited the worst performance. Tilt showed slightly faster performance and achieved a lower error rate than Rolling. However, Rolling achieved the most significant learning effect on Selection Time and was favored over Tilt in subjective evaluations. Our experimental results afford a general understanding of the performance of inherent pen input modalities in the course of a trajectory task in HCI (human computer interaction).

  • Find the 'Best' Solution from Multiple Analog Topologies via Pareto-Optimality

    Yu LIU  Masato YOSHIOKA  Katsumi HOMMA  Toshiyuki SHIBUYA  

     
    PAPER-Device and Circuit Modeling and Analysis

      Vol:
    E92-A No:12
      Page(s):
    3035-3043

    This paper presents a novel method that uses a multi-objective optimization algorithm to automatically find the best solution from a topology library of analog circuits. First, the method extracts the Pareto-front of each topology in the library by SPICE simulation. Then the Pareto-front of the topology library is derived from the individual Pareto-fronts of the topologies, following a theorem we prove. The best solution, defined as the point on the library's Pareto-front nearest to the specification, is then calculated from equations derived from the collinearity theorem. After a local search using the Nelder-Mead method maps the calculated best solution back to the design variable space, the non-dominated best solution is obtained. Compared to traditional approaches using single-objective optimization algorithms, this work can efficiently find the best non-dominated solution from multiple topologies for different specifications without additional time-consuming optimization iterations. Experiments demonstrate that the method is feasible and practical in actual analog designs, especially for uncertain or variant multi-dimensional specifications.
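
    The selection step can be sketched as follows, assuming all objectives are to be minimized and using plain Euclidean distance to the specification as a stand-in for the paper's collinearity-based calculation and Nelder-Mead refinement; the topologies and numbers are hypothetical.

```python
# Sketch: merge per-topology Pareto fronts, keep the non-dominated points, and
# pick the point nearest to the specification (all objectives minimized).
import numpy as np

def pareto_front(points):
    """Keep the non-dominated points of an (n, d) array under minimization."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)

# Hypothetical per-topology fronts in (power, noise) space, plus a specification.
fronts = {
    "topology_A": np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 2.5]]),
    "topology_B": np.array([[1.5, 4.0], [3.0, 2.0], [5.0, 1.5]]),
}
library_front = pareto_front(np.vstack(list(fronts.values())))
spec = np.array([2.5, 2.5])
best = library_front[np.argmin(np.linalg.norm(library_front - spec, axis=1))]
print(library_front, best)
```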

  • Hash Functions and Information Theoretic Security

    Nasour BAGHERI  Lars R. KNUDSEN  Majid NADERI  Søren S. THOMSEN  

     
    LETTER-Cryptography and Information Security

      Vol:
    E92-A No:12
      Page(s):
    3401-3403

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.

  • Accurate Systematic Hot-Spot Scoring Method and Score-Based Fixing Guidance Generation

    Yonghee PARK  Junghoe CHOI  Jisuk HONG  Sanghoon LEE  Moonhyun YOO  Jundong CHO  

     
    LETTER-Device and Circuit Modeling and Analysis

      Vol:
    E92-A No:12
      Page(s):
    3082-3085

    Research on predicting and removing lithographic hot-spots has become prevalent in the semiconductor industry and is known to be one of the most difficult challenges in achieving high-quality detection coverage. To provide physical design implementation with the designer's preferences for fixing hot-spots, this paper presents a novel and accurate hot-spot detection method, a so-called "leveling and scoring" algorithm based on a weighted combination of image quality parameters (normalized image log-slope (NILS), mask error enhancement factor (MEEF), and depth of focus (DOF)) from lithography simulation. In our algorithm, the hot-spot scoring function, which accounts for severity level, is first calibrated with process window qualification, and least-squares regression is then used to calibrate the weighting coefficients of each image quality parameter. Once the scoring function has been calibrated against wafer results, our method can be applied to future designs using the same process. Using this calibrated scoring function, we can generate fixing guidance and rules to detect hot-spot areas by locating the edge bias value that leads to a hot-spot-free score level. Finally, we integrate the hot-spot fixing guidance into a layout editor to provide a designer-friendly environment. Applying our method to memory devices at the 60 nm node and below, we attained sufficient process window margin for high-yield mass production.
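
    The calibration idea can be sketched as an ordinary least-squares fit of the weighting coefficients against known severity levels; all numbers below are hypothetical placeholders, not measured data.

```python
# Sketch: fit weights of a linear hot-spot score
#   score = w1*NILS + w2*MEEF + w3*DOF + w0
# against known severity levels by least squares.
import numpy as np

# image-quality parameters per sampled location: [NILS, MEEF, DOF (um)]
X = np.array([
    [1.2, 4.5, 0.08],
    [1.8, 3.0, 0.12],
    [2.5, 2.2, 0.20],
    [3.1, 1.8, 0.25],
])
severity = np.array([3.0, 2.0, 1.0, 0.0])      # severity levels from PW qualification

A = np.hstack([X, np.ones((len(X), 1))])       # add a bias column
w, *_ = np.linalg.lstsq(A, severity, rcond=None)

def hotspot_score(nils, meef, dof):
    return w[:3] @ np.array([nils, meef, dof]) + w[3]

print(w, hotspot_score(1.5, 3.8, 0.10))
```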

  • Robust Spectrum Sensing Algorithms for Cognitive Radio Application by Using Distributed Sensors

    Yohannes D. ALEMSEGED  Chen SUN  Ha Nguyen TRAN  Hiroshi HARADA  

     
    PAPER-Spectrum Sensing

      Vol:
    E92-B No:12
      Page(s):
    3616-3624

    Due to advances in software radio and RF technology, cognitive radio (CR) has become an enabling technology for realizing dynamic spectrum access through its spectrum sensing and reconfiguration capabilities. Robust and reliable spectrum sensing is a key factor in discovering spectrum opportunities. A single cognitive radio often fails to provide such reliable information because of its inherent sensitivity limitation. Primary signals that are subject to detection by cognitive radios may become weak due to several factors such as fading and shadowing. One approach to overcoming this problem is to perform spectrum sensing with multiple CRs or multiple spectrum sensors. This approach is known as distributed sensing because sensing is carried out through the cooperation of spatially distributed sensors. In distributed sensing, sensors perform spectrum sensing and forward the results to a destination where data fusion is carried out. Depending on the channel conditions between sensors (sensor-to-sensor channel) and between the sensor and the radio (user channel), we explore different spectrum sensing algorithms in which sensors provide the sensing information either cooperatively or independently. Moreover, we investigate sensing schemes based on soft information combining (SC) and hard information combining (HC). Finally, we propose a two-stage detection scheme that uses both SC and HC. The newly proposed detection scheme is shown to provide improved performance compared to sensing based on either HC or SC alone. Computer simulation results are provided to illustrate the performance of the different sensing algorithms.
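
    A minimal sketch of the two basic fusion rules follows, using generic per-sensor energy detection; the detector, thresholds, and the k-out-of-N vote are illustrative assumptions and do not reproduce the paper's two-stage scheme.

```python
# Toy soft (SC) and hard (HC) information combining over simulated sensor energies.
import numpy as np

rng = np.random.default_rng(2)

def sensor_energy(primary_present, snr_db, n_samples=128):
    """Average received energy at one sensor (unit-variance noise)."""
    samples = rng.standard_normal(n_samples)
    if primary_present:
        amp = 10 ** (snr_db / 20.0)
        samples = samples + amp * rng.standard_normal(n_samples)
    return np.mean(samples ** 2)

def soft_combining(energies, threshold):
    return np.sum(energies) > threshold        # fuse raw energies, then decide

def hard_combining(energies, local_threshold, k):
    votes = np.sum(np.array(energies) > local_threshold)
    return votes >= k                          # "k out of N" voting rule

# Assumed thresholds, chosen by hand for this toy example.
energies = [sensor_energy(True, snr_db=-5) for _ in range(8)]
print(soft_combining(energies, threshold=8 * 1.2),
      hard_combining(energies, local_threshold=1.2, k=4))
```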

  • Extended Relief-F Algorithm for Nominal Attribute Estimation in Small-Document Classification

    Heum PARK  Hyuk-Chul KWON  

     
    PAPER-Document Analysis

      Vol:
    E92-D No:12
      Page(s):
    2360-2368

    This paper presents an extended Relief-F algorithm for nominal attribute estimation, for application to small-document classification. Relief algorithms are general and successful instance-based feature-filtering algorithms for data classification and regression. Many improved Relief algorithms have been introduced to address the problems of redundant and irrelevant noisy features and the limitations of the algorithms on multiclass datasets. However, these algorithms have only rarely been applied to text classification, because the numerous features in multiclass datasets lead to great time complexity. Therefore, in considering their application to text feature filtering and classification, we presented an extended Relief-F algorithm for numerical attribute estimation (E-Relief-F) in 2007. However, we found limitations and some problems with it. Therefore, in this paper, we identify additional problems with Relief algorithms for text feature filtering, including the negative influence on computed similarities and weights caused by the small number of features in an instance, the absence of nearest hits and misses for some instances, and great time complexity. We then propose a new extended Relief-F algorithm for nominal attribute estimation (E-Relief-Fd) to solve these problems, and we apply it to small text-document classification. In experiments, we used the algorithm to estimate feature quality on various datasets, applied it to classification, and compared its performance with existing Relief algorithms. The experimental results show that the new E-Relief-Fd algorithm offers better performance than previous Relief algorithms, including E-Relief-F.
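
    For reference, here is a compact sketch of the baseline Relief-F weight update for nominal attributes (with a single nearest hit/miss per class) that E-Relief-F and E-Relief-Fd extend; the toy data are made up.

```python
# Baseline Relief-F for nominal attributes: diff() is 0 for equal values, 1 otherwise.
import numpy as np

def relief_f(X, y, n_iterations=None):
    """X: (n, d) array of nominal values, y: (n,) class labels. Returns feature weights."""
    n, d = X.shape
    n_iterations = n_iterations or n
    classes, counts = np.unique(y, return_counts=True)
    priors = dict(zip(classes, counts / n))
    w = np.zeros(d)
    rng = np.random.default_rng(0)
    for _ in range(n_iterations):
        i = rng.integers(n)
        diffs = (X != X[i]).astype(float)            # nominal diff to every instance
        dist = diffs.sum(axis=1)
        dist[i] = np.inf                             # exclude the instance itself
        for c in classes:
            mask = (y == c)
            j = np.argmin(np.where(mask, dist, np.inf))   # nearest hit or nearest miss
            if c == y[i]:
                w -= diffs[j] / n_iterations
            else:
                factor = priors[c] / (1.0 - priors[y[i]])
                w += factor * diffs[j] / n_iterations
    return w

X = np.array([[0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1]])
y = np.array([0, 0, 1, 1])
print(relief_f(X, y))   # the first two attributes should get the largest weights
```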

  • Diversity Order Analysis of Dual-Hop Relaying with Partial Relay Selection

    Vo Nguyen Quoc BAO  Hyung Yun KONG  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E92-B No:12
      Page(s):
    3942-3946

    In this paper, we study the performance of dual-hop relaying in which the best relay, selected by partial relay selection, helps the source-destination link overcome channel impairments. Specifically, closed-form expressions for the outage probability, symbol error probability, and achievable diversity gain are derived using the statistical characteristics of the signal-to-noise ratio. Numerical investigation shows that the system achieves a diversity order of two regardless of the number of relays and also confirms the correctness of the analytical results. Furthermore, the performance loss due to partial relay selection is investigated.
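
    A Monte Carlo sketch of the setup follows, assuming Rayleigh fading, a min-SNR approximation for the relayed path, and selection combining of the direct and relayed links at the destination; these are illustrative modelling assumptions, while the paper's analysis is closed-form and may use a different combiner.

```python
# Toy outage simulation for dual-hop relaying with partial relay selection.
import numpy as np

def outage_probability(avg_snr_db, n_relays, snr_threshold_db=0.0, trials=200_000):
    rng = np.random.default_rng(3)
    avg = 10 ** (avg_snr_db / 10.0)
    gamma_th = 10 ** (snr_threshold_db / 10.0)
    # exponentially distributed instantaneous SNRs (Rayleigh fading)
    hop1 = rng.exponential(avg, size=(trials, n_relays))
    best = np.argmax(hop1, axis=1)                    # partial selection: hop 1 only
    hop2 = rng.exponential(avg, size=(trials, n_relays))
    relayed = np.minimum(hop1[np.arange(trials), best],
                         hop2[np.arange(trials), best])
    direct = rng.exponential(avg, size=trials)        # source-destination link
    combined = np.maximum(direct, relayed)            # selection combining (assumed)
    return np.mean(combined < gamma_th)

for n in (1, 2, 4):
    print(n, outage_probability(avg_snr_db=10, n_relays=n))
```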

  • Image Restoration Based on Adaptive Directional Regularization

    Osama AHMED OMER  Toshihisa TANAKA  

     
    PAPER-Processing

      Vol:
    E92-A No:12
      Page(s):
    3344-3354

    This paper addresses problems that appear in restoration algorithms utilizing both Tikhonov and bilateral total variation (BTV) regularization. The former regularization assumes that the prior information has a Gaussian distribution, which fails at edges, while the latter depends strongly on the selected bilateral filter parameters. To overcome these problems, we propose a locally adaptive regularization. In the proposed algorithm, we use general directional regularization functions with adaptive weights. The adaptive weights are estimated from local patches based on the properties of the partially restored image. Unlike Tikhonov regularization, the method avoids smoothing across edges by using adaptive weights. In addition, unlike BTV regularization, the proposed regularization function does not depend on parameter selection. The convexity conditions as well as the convergence conditions are derived for the proposed algorithm.

  • A Robust Spectrum Sensing Method Based on Maximum Cyclic Autocorrelation Selection for Dynamic Spectrum Access

    Kazushi MURAOKA  Masayuki ARIYOSHI  Takeo FUJII  

     
    PAPER-Spectrum Sensing

      Vol:
    E92-B No:12
      Page(s):
    3635-3643

    Spectrum sensing is an important function for dynamic spectrum access (DSA) type cognitive radio systems to detect opportunities for sharing the spectrum with a primary system. The key requirements for spectrum sensing are stability in controlling the probability of false alarm as well as good detection performance for the primary signals. However, false alarms can be triggered by noise uncertainty at the secondary devices or by unknown interference signals from other secondary systems in realistic radio environments. This paper proposes a spectrum sensing method that is robust against such uncertainties; it is a kind of cyclostationary feature detection (CFD) approach. Our proposed method, referred to as maximum cyclic autocorrelation selection (MCAS), compares the peak and non-peak values of the cyclic autocorrelation function (CAF) to detect primary signals, where the non-peak values are the CAF values calculated at cyclic frequencies between the peaks. In MCAS, the desired probability of false alarm can be obtained by setting the number of non-peak values. In addition, multiple peak values are combined in MCAS to obtain a noise reduction effect and coherent combining gain. Through computer simulations, we show that MCAS can control the probability of false alarm under conditions of noise uncertainty and interference. Furthermore, our method achieves better performance with much less computational complexity than conventional CFD methods.
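
    A simplified sketch of the comparison behind MCAS: the CAF is evaluated at candidate peak cyclic frequencies of the primary signal and at cyclic frequencies between them, and the primary is declared present when the peak value clearly exceeds the non-peak level. The signal model, lag, and ratio threshold below are illustrative assumptions, not the paper's exact test statistic.

```python
# Cyclic autocorrelation of a rectangular-pulse BPSK signal: peak vs. non-peak values.
import numpy as np

def caf(x, alpha, tau):
    """Cyclic autocorrelation R(alpha, tau) = mean_n x[n] * conj(x[n+tau]) * e^{-j2*pi*alpha*n}."""
    n = np.arange(len(x) - tau)
    return np.mean(x[:len(x) - tau] * np.conj(x[tau:]) * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(4)
T = 16                                           # samples per BPSK symbol
symbols = rng.choice([-1.0, 1.0], size=256)
signal = np.repeat(symbols, T) + 0.5 * rng.standard_normal(256 * T)

tau = T // 2
peak_alphas = [k / T for k in (1, 2, 3)]         # cyclic frequencies of the symbol rate
nonpeak_alphas = [(k + 0.5) / T for k in (1, 2, 3)]
peak = max(abs(caf(signal, a, tau)) for a in peak_alphas)
nonpeak = max(abs(caf(signal, a, tau)) for a in nonpeak_alphas)
print(peak, nonpeak, peak > 3.0 * nonpeak)       # simple ratio test (assumed threshold)
```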

  • Moments Added Statistical Shape Model for Boundary Extraction

    Haechul CHOI  Ho Chul SHIN  Si-Woong LEE  Yun-Ho KO  

     
    LETTER-Pattern Recognition

      Vol:
    E92-D No:12
      Page(s):
    2524-2526

    In this paper, we propose a method for extracting an object boundary from a low-quality image such as an infrared one. To take full advantage of a training set, the overall shape is modeled by incorporating statistical characteristics of moments into the point distribution model (PDM). Furthermore, a differential equation for the moment of overall shape is derived for shape refinement, which leads to accurate and rapid deformation of a boundary template toward real object boundary. The simulation results show that the proposed method has better performance than conventional boundary extraction methods.

  • A 3-D Packaging Technology with Highly-Parallel Memory/Logic Interconnect

    Yoichiro KURITA  Koji SOEJIMA  Katsumi KIKUCHI  Masatake TAKAHASHI  Masamoto TAGO  Masahiro KOIKE  Koujirou SHIBUYA  Shintaro YAMAMICHI  Masaya KAWANO  

     
    PAPER-Electronic Components

      Vol:
    E92-C No:12
      Page(s):
    1512-1522

    A three-dimensional semiconductor package structure with inter-chip connections was developed for broadband data transfer and low-latency electrical communication between a high-capacity memory and a logic device, interconnected by a feedthrough interposer (FTI) featuring a 10 µm scale fine-wiring pattern and ultra-fine-pitch through vias. This technology combines the wide-band memory accessibility of a system-on-chip (SoC) with the memory-capacity scalability of a system-in-package (SiP), made possible by fabricating the memory and logic individually on independent chips. It can improve performance through wider memory bandwidth and a reduction in the power consumed by inter-chip communications. This paper describes the concept, structure, process, and experimental results of prototypes of this package, called SMAFTI (SMArt chip connection with FeedThrough Interposer). This paper also reports the results of fundamental reliability tests of this novel inter-chip connection structure and board-level interconnectivity tests.

  • P3HT/Al Organic/Inorganic Heterojunction Diodes Investigated by I-V and C-V Measurements

    Fumihiko HIROSE  Yasuo KIMURA  Michio NIWANO  

     
    PAPER-Fundamentals for Nanodevices

      Vol:
    E92-C No:12
      Page(s):
    1475-1478

    The electrical characteristics of P3HT/aluminum organic/inorganic heterojunction diodes were investigated by current-voltage (I-V) and capacitance-voltage (C-V) measurements. The I-V measurements exhibited the current rectification inherent in a Schottky diode, suggesting their suitability as rectification diodes in organic flexible circuits. The C-V analysis indicated that a depletion layer is generated in the P3HT film under reverse bias. The flat-band voltage analysis suggested that interfacial charge affects the built-in potential of the diodes. The Al/P3HT heterojunction can be used not only for rectification diodes but also as gate junctions for junction-type field-effect or static-induction transistors.
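
    For readers who want to reproduce this kind of built-in potential extraction, a generic Mott-Schottky analysis (1/C^2 linear in the bias voltage) is sketched below with made-up numbers; it is a textbook procedure, not necessarily the authors' exact flat-band analysis.

```python
# Generic Mott-Schottky extraction of the built-in potential from C-V data:
# 1/C^2 = 2*(V_bi - V)/(q*eps*N*A^2), so a linear fit of 1/C^2 vs. V gives V_bi
# from the voltage-axis intercept.
import numpy as np

def built_in_potential(voltage, capacitance):
    """Fit 1/C^2 = slope*V + intercept and return V_bi = -intercept/slope."""
    inv_c2 = 1.0 / capacitance ** 2
    slope, intercept = np.polyfit(voltage, inv_c2, 1)
    return -intercept / slope

# Hypothetical data consistent with V_bi = 0.6 V (capacitance in farads).
v = np.linspace(-2.0, 0.0, 11)                   # reverse-bias sweep
c = 1.0 / np.sqrt(3e21 * (0.6 - v))              # synthetic C-V curve
print(built_in_potential(v, c))                  # ~0.6
```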
