
Keyword Search Result

[Keyword] ATI (18690 hits)

Results 10621-10640 of 18690 hits

  • Impersonation Attacks on Key Agreement Protocols Resistant to Denial of Service Attacks

    Kyung-Ah SHIM  

     
    LETTER-Application Information Security
    Vol: E89-D No:7  Page(s): 2306-2309

    Hirose and Yoshida proposed an authenticated key agreement protocol based on the intractability of the Computational Diffie-Hellman problem. Recently, Hirose and Matsuura pointed out that Hirose and Yoshida's protocol is vulnerable to Denial-of-Service (DoS) attacks, and they proposed two key agreement protocols that are resistant to these DoS attacks. Their protocols are the first authenticated key agreement protocols resistant to both the storage exhaustion attack and the CPU exhaustion attack. In this paper, we show that Hirose and Matsuura's DoS-resistant key agreement protocols and Hirose and Yoshida's key agreement protocol are vulnerable to impersonation attacks, and we make suggestions for improvements.
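
    For context, the sketch below shows a plain (unauthenticated) Diffie-Hellman exchange in the Computational Diffie-Hellman setting that these protocols build on. It is only an illustration with a toy-sized prime; it is not the Hirose-Yoshida or Hirose-Matsuura protocol, which add authentication and DoS countermeasures on top of such an exchange.

      import secrets

      p = 2**127 - 1                     # a Mersenne prime; toy-sized, not secure
      g = 3

      a = secrets.randbelow(p - 2) + 2   # initiator's ephemeral secret
      b = secrets.randbelow(p - 2) + 2   # responder's ephemeral secret
      A = pow(g, a, p)                   # value sent to the responder
      B = pow(g, b, p)                   # value sent to the initiator

      # Both sides derive the same value; without authentication an active
      # attacker can substitute A or B, which is the kind of impersonation the
      # letter studies against the authenticated, DoS-resistant variants.
      assert pow(B, a, p) == pow(A, b, p)
      print("shared secret established")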

  • A Visual Inspection System Based on Trinarized Broad-Edge and Gray-Scale Hybrid Matching

    Haruhisa OKUDA  Manabu HASHIMOTO  Miwako HIROOKA  Kazuhiko SUMI  

     
    PAPER-Image Inspection
    Vol: E89-D No:7  Page(s): 2068-2075

    In industrial manufacturing, visual pattern inspection is an important task for preventing the inclusion of incorrect parts. There is demand for inspection methods that can cope with positional and rotational misalignment as well as illumination changes. In this paper, we propose a discrimination method called Trinarized broad-edge and Gray-scale Hybrid Matching (TGHM). The method achieves high reliability through gray-scale cross-correlation, which has high pattern discrimination capability, combined with high-speed position and rotation alignment that exploits the trinarized broad-edge representation, which offers high data compressibility and robustness to illumination variation. In an example in which the method is applied to the mis-collation inspection equipment of a bookbinding machine, the processing speed is 24,000 sheets/hour, the error detection rate is 100.0%, and the false-alarm rate is less than 0.002%, confirming that the method is practical.
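
    As a rough illustration of the two-stage idea (coarse alignment on a compact trinarized edge representation, then verification by gray-scale correlation), the following Python/NumPy sketch uses a simple gradient trinarization and an exhaustive translation search; the thresholds, window sizes, and edge operator are illustrative assumptions, not the authors' TGHM implementation.

      import numpy as np

      def trinarize_edges(img, thr=10.0):
          # Horizontal gradient trinarized into {-1, 0, +1} (a stand-in for broad edges).
          g = np.zeros_like(img, dtype=float)
          g[:, 1:] = img[:, 1:] - img[:, :-1]
          return np.sign(g) * (np.abs(g) > thr)

      def ncc(a, b):
          # Normalized cross-correlation of two equally sized gray-scale patches.
          a = a - a.mean()
          b = b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def match(template, image, search=5):
          th, tw = template.shape
          t_tri = trinarize_edges(template)
          # Stage 1: coarse position search using the trinarized representation.
          best = max(((dy, dx) for dy in range(search) for dx in range(search)),
                     key=lambda p: (t_tri * trinarize_edges(
                         image[p[0]:p[0] + th, p[1]:p[1] + tw])).sum())
          # Stage 2: confirm the candidate with gray-scale cross-correlation.
          dy, dx = best
          return best, ncc(template, image[dy:dy + th, dx:dx + tw])

      rng = np.random.default_rng(0)
      scene = rng.uniform(0, 255, (40, 40))
      tmpl = scene[3:23, 2:22].copy()
      print(match(tmpl, scene))      # offset found and its gray-scale correlation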

  • Effects of Localized Distribution of Terminals and Mobility on Performance Improvement by Direct Communication

    Tatsuya KABASAWA  Keisuke NAKANO  Yuta TANAKA  Ikuo SATO  Masakazu SENGOKU  Shoji SHINODA  

     
    PAPER
    Vol: E89-A No:7  Page(s): 1940-1949

    We investigated the performance improvement obtained in a cellular system by introducing direct communication between terminals. Previous research has indicated that direct communication uses channels efficiently; however, this is not always the case. We studied two factors that affect how much the efficiency improves. One is the distribution of terminals: we defined some typical distributions with localized terminals and analyzed how differences between these distributions affect the performance improvement obtained by direct communication. The other factor is the mobility of terminals, because mobility shortens the length of time during which terminals are directly connected. We analyzed how mobility affects the performance improvement obtained by direct communication, using theoretical techniques for the analyses.

  • Optimal Synthesis of a Class of 2-D Digital Filters with Minimum L2-Sensitivity and No Overflow Oscillations

    Takao HINAMOTO  Ken-ichi IWATA  Osemekhian I. OMOIFO  Shuichi OHNO  Wu-Sheng LU  

     
    PAPER-Digital Signal Processing
    Vol: E89-A No:7  Page(s): 1987-1994

    The minimization problem of an L2-sensitivity measure subject to L2-norm dynamic-range scaling constraints is formulated for a class of two-dimensional (2-D) state-space digital filters. First, the problem is converted into an unconstrained optimization problem by using linear-algebraic techniques. Next, the unconstrained optimization problem is solved by applying an efficient quasi-Newton algorithm with a closed-form formula for gradient evaluation. The resulting coordinate transformation matrix is then used to synthesize the optimal 2-D state-space filter structure that minimizes the L2-sensitivity measure subject to the L2-norm dynamic-range scaling constraints. Finally, a numerical example is presented to illustrate the utility of the proposed technique.
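
    The abstract's optimization pattern, eliminating the scaling constraints and then applying a quasi-Newton method, can be illustrated with a generic stand-in objective; the sketch below does not reproduce the actual L2-sensitivity measure or the closed-form gradient, and the matrices involved are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      n = 3
      A = np.diag([0.5, 0.3, 0.2]) + 0.05 * np.ones((n, n))   # toy state matrix

      def objective(x):
          # Stand-in "sensitivity" cost over a coordinate transformation T
          # (not the paper's L2-sensitivity measure).
          T = x.reshape(n, n)
          Ti = np.linalg.inv(T)
          At = Ti @ A @ T
          return np.sum(At * At) + np.sum(T * T) + np.sum(Ti * Ti)

      x0 = np.eye(n).ravel()                                   # start from the identity
      res = minimize(objective, x0, method="BFGS")             # quasi-Newton search
      print(round(res.fun, 4))
      print(res.x.reshape(n, n).round(3))                      # optimized transformation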

  • Estimation of the Visibility Distance by Stereovision: A Generic Approach

    Nicolas HAUTIERE  Raphael LABAYRADE  Didier AUBERT  

     
    PAPER-Intelligent Transport Systems
    Vol: E89-D No:7  Page(s): 2084-2091

    An atmospheric visibility measurement system capable of quantifying the most common operating range of onboard exteroceptive sensors would be a key component of driving assistance systems. This information can be used to adapt sensor operation and processing or to alert the driver that the onboard assistance system is momentarily inoperative. Moreover, a system capable of either detecting the presence of fog or estimating visibility distances constitutes a driving aid in itself. In this paper, we first present a review of different optical sensors likely to measure the visibility distance. We then present our stereovision-based technique for estimating what we call the "mobilized visibility distance", the distance to the most distant object on the road surface having a contrast above 5%. This definition is, in fact, very close to the definition of the meteorological visibility distance proposed by the International Commission on Illumination (CIE). The method combines the computation of a depth map of the vehicle environment using the "v-disparity" approach with the computation of local contrasts above 5%. Both methods are described separately, and then their combination is detailed. A qualitative evaluation is performed using different video sequences. Finally, a static quantitative evaluation is performed using reference targets installed on a dedicated test site.
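
    A much-simplified sketch of the final fusion step is given below: given a per-pixel depth map (e.g., from the v-disparity stereo stage) and a local-contrast map, the mobilized visibility distance is taken as the depth of the most distant road-surface pixel whose contrast exceeds 5%. The road mask and the synthetic data are assumptions for illustration; the contrast computation and v-disparity stages themselves are not reproduced.

      import numpy as np

      def mobilized_visibility(depth_m, contrast, road_mask, thr=0.05):
          # Most distant road-surface point whose local contrast exceeds the threshold.
          valid = road_mask & (contrast > thr) & np.isfinite(depth_m)
          return float(depth_m[valid].max()) if valid.any() else 0.0

      rng = np.random.default_rng(1)
      depth = rng.uniform(5.0, 400.0, (120, 160))            # metres, e.g. from stereo
      contrast = np.clip(1.0 - depth / 300.0, 0.0, 1.0)       # fog: contrast fades with depth
      road = np.zeros((120, 160), dtype=bool)
      road[60:, :] = True                                     # lower half assumed to be road
      print(round(mobilized_visibility(depth, contrast, road), 1), "m visible")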

  • VLSI Design of a Fully-Parallel High-Throughput Decoder for Turbo Gallager Codes

    Luca FANUCCI  Pasquale CIAO  Giulio COLAVOLPE  

     
    PAPER-Digital Signal Processing
    Vol: E89-A No:7  Page(s): 1976-1986

    The most powerful channel coding schemes, namely those based on turbo codes and low-density parity-check (LDPC) Gallager codes, have in common the principle of iterative decoding. However, the relative coding structures and decoding algorithms are substantially different. This paper presents a 2048-bit, rate-1/2 soft decision decoder for a new class of codes known as Turbo Gallager Codes. These codes are turbo codes with properly chosen component convolutional codes such that they can be successfully decoded by means of the decoding algorithm used for LDPC codes, i.e., the belief propagation algorithm working on the code Tanner graph. These coding schemes are important in practical terms for two reasons: (i) they can be encoded as classical turbo codes, giving a solution to the encoding problem of LDPC codes; (ii) they can also be decoded in a fully parallel manner, partially overcoming the routing congestion bottleneck of parallel decoder VLSI implementations thanks to the locality of the interconnections. The implemented decoder can support up to 1 Gbit/s data rate and performs up to 48 decoding iterations ensuring both high throughput and good coding gain. In order to evaluate the performance and the gate complexity of the decoder VLSI architecture, it has been synthesized in a 0.18 µm standard-cell CMOS technology.
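
    The message-passing principle the decoder relies on can be illustrated with a toy min-sum belief-propagation decoder on a small Tanner graph; the parity-check matrix below is an arbitrary small example, not the 2048-bit Turbo Gallager code, and the fully parallel VLSI mapping is outside the scope of the sketch.

      import numpy as np

      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 0, 0, 1, 1],
                    [0, 0, 1, 1, 0, 1]])                  # toy 4x6 parity-check matrix

      def decode(llr, iters=20):
          m, n = H.shape
          msg = np.zeros((m, n))                          # check-to-variable messages
          hard = llr < 0
          for _ in range(iters):
              total = llr + msg.sum(axis=0)               # variable-node update
              v2c = (total - msg) * H                     # exclude each edge's own message
              for i in range(m):                          # check-node update (min-sum)
                  idx = np.flatnonzero(H[i])
                  for j in idx:
                      others = v2c[i, idx[idx != j]]
                      msg[i, j] = np.prod(np.sign(others)) * np.abs(others).min()
              hard = (llr + msg.sum(axis=0)) < 0
              if not (H @ hard % 2).any():                # all parity checks satisfied
                  break
          return hard.astype(int)

      llr = np.full(6, 4.0)                               # all-zero codeword sent (LLR > 0)
      llr[2] = -1.0                                       # one unreliable / flipped position
      print(decode(llr))                                  # expect the all-zero codeword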

  • High-Speed Calculation of Worst-Case Link Delays in the EDD Connection Admission Control Scheme

    Tokumi YOKOHIRA  Kiyohiko OKAYAMA  

     
    PAPER-Network
    Vol: E89-B No:7  Page(s): 2012-2022

    The EDD connection admission control scheme has been proposed for supporting real-time communication in packet-switched networks. In this scheme, when a connection establishment request occurs, the worst-case link delay on each link along the connection is calculated to determine whether the request can be accepted. In order to calculate the worst-case link delay, a check called the point schedulability check must be performed for each of a set of discrete time instants (checkpoints). Therefore, when there are many checkpoints, the worst-case link delay calculation is time-consuming. We previously proposed a high-speed calculation method that finds checkpoints for which the point schedulability check need not be performed and removes such unnecessary checkpoints in advance, before a connection establishment request occurs; the check is then performed only for the remaining checkpoints after the request occurs. However, that method is not very effective when the maximum packet length in the network is large, because it can then find only a few unnecessary checkpoints. This paper proposes a new high-speed calculation method. We relax the condition that determines whether the point schedulability check can be skipped for each checkpoint in our previous method and derive a new condition for finding unnecessary checkpoints. Using the proposed method based on the new condition, we can remove more checkpoints than with our previous method. Numerical examples obtained by extensive simulation show that the proposed method can attain a speedup of up to about 50 times.
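
    The structure of the speed-up, pruning checkpoints offline once and then running the per-request point schedulability check only over the survivors, is sketched below in a purely illustrative form; the demand function, capacity, and skip condition are hypothetical stand-ins, not the EDD formulas or the paper's pruning condition.

      def prune(checkpoints, can_skip):
          # Offline phase: discard checkpoints marked unnecessary by some condition.
          return [t for t in checkpoints if not can_skip(t)]

      def admissible(checkpoints, demand, capacity):
          # Online phase: point schedulability check at each remaining checkpoint.
          return all(demand(t) <= capacity * t for t in checkpoints)

      # Toy instance: three flows described by (period, per-period demand).
      flows = [(3.0, 1.0), (5.0, 1.5), (8.0, 2.0)]
      demand = lambda t: sum(c * (t // p + 1) for p, c in flows)
      checkpoints = sorted({k * p for p, _ in flows for k in range(1, 6)})

      kept = prune(checkpoints, can_skip=lambda t: t > 20.0)   # stand-in skip condition
      verdict = "accepted" if admissible(kept, demand, capacity=2.0) else "rejected"
      print(len(checkpoints), "->", len(kept), "checkpoints;", verdict)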

  • Iterative QRM-MLD with Pilot-Assisted Decision Directed Channel Estimation for OFDM MIMO Multiplexing

    Koichi ADACHI  Riaz ESMAILZADEH  Masao NAKAGAWA  

     
    PAPER
    Vol: E89-A No:7  Page(s): 1892-1902

    Multiple-input multiple-output (MIMO) multiplexing has recently been attracting considerable attention as a means of increasing the transmission rate in a limited bandwidth. In MIMO multiplexing, the signals transmitted simultaneously from different transmit antennas must be separated and detected at the receiver. Maximum likelihood detection with QR-decomposition and the M-algorithm (QRM-MLD) can achieve good performance while keeping the computational complexity low. However, when the number of surviving symbol replica candidates in the M-algorithm is set to be small, the performance of QRM-MLD degrades compared to that of MLD because of wrong selection of the surviving symbol replica candidates. Furthermore, when channel estimation is inaccurate, accurate signal ranking and QR-decomposition cannot be carried out. In this paper, we propose an iterative QRM-MLD with decision-directed channel estimation to improve the packet error rate (PER) performance. In the proposed QRM-MLD, decision-feedback data symbols are used for channel estimation in addition to pilot symbols in order to improve the channel estimation accuracy. Signal detection and channel estimation are then carried out in an iterative fashion. Computer simulation results show that the proposed QRM-MLD reduces the required average received Eb/N0 for a PER of 10⁻² by about 1.2 dB compared to the conventional method using orthogonal pilot symbols only.
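
    The detection core can be sketched compactly for a toy 2×2 MIMO system with QPSK: QR-decompose the channel, then run the M-algorithm (a breadth-limited tree search) from the last layer upwards, keeping the M best partial candidates. The pilot-assisted, decision-directed channel-estimation iteration of the paper is omitted, and the perfect channel knowledge assumed below is an illustrative simplification.

      import numpy as np

      QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

      def qrm_mld(y, H, M=2):
          nt = H.shape[1]
          Q, R = np.linalg.qr(H)
          z = Q.conj().T @ y
          cands = [((), 0.0)]                       # (symbols for layers nt-1..k, metric)
          for layer in range(nt - 1, -1, -1):
              new = []
              for syms, metric in cands:
                  fixed = np.array(syms[::-1])      # already-decided layers layer+1..nt-1
                  for s in QPSK:
                      interference = R[layer, layer + 1:] @ fixed if len(fixed) else 0.0
                      e = abs(z[layer] - R[layer, layer] * s - interference) ** 2
                      new.append((syms + (s,), metric + e))
              cands = sorted(new, key=lambda c: c[1])[:M]   # keep the M best candidates
          return np.array(cands[0][0][::-1])        # best candidate, reordered to layer 0..nt-1

      rng = np.random.default_rng(2)
      H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
      x = QPSK[[0, 3]]
      y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
      print(qrm_mld(y, H, M=2).round(3))            # detected symbols
      print(x.round(3))                             # transmitted symbols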

  • A Simplified Autocorrelation-Based Single Frequency Estimator

    Young-Hwan YOU  Dae-Ki HONG  Sung-Jin KANG  Jang-Yeon LEE  Jin-Woong CHO  

     
    LETTER-Wireless Communication Technologies
    Vol: E89-B No:7  Page(s): 2096-2098

    This letter proposes a low-complexity single frequency estimator for flat fading channels. The simplified estimator decreases the number of computations in the calculation of the autocorrelation function (AF) when compared to AF-based conventional estimators. The simplified estimator yields a comparable estimation performance to the existing estimators, while retaining the same frequency range.
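
    For reference, the baseline AF-based estimator that this letter simplifies can be written in a few lines: the frequency of a single complex tone is estimated from the angle of the lag-1 sample autocorrelation. This is the standard estimator, not the proposed low-complexity variant.

      import numpy as np

      def af_freq_estimate(x, fs):
          r1 = np.mean(x[1:] * np.conj(x[:-1]))       # lag-1 autocorrelation
          return np.angle(r1) * fs / (2 * np.pi)      # unambiguous over +/- fs/2

      fs, f0, n = 1000.0, 123.4, 256
      t = np.arange(n) / fs
      rng = np.random.default_rng(3)
      noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
      x = np.exp(2j * np.pi * f0 * t) + noise
      print(round(af_freq_estimate(x, fs), 2), "Hz")  # close to 123.4 Hz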

  • A Tool Platform Using an XML Representation of Source Code Information

    Katsuhisa MARUYAMA  Shinichiro YAMAMOTO  

     
    PAPER-Software Engineering
    Vol: E89-D No:7  Page(s): 2214-2222

    Recent IDEs have become extensible tool platforms but do not concern themselves with how the tools running on them collaborate with each other. They compel developers to use proprietary representations or the classical abstract syntax tree (AST) to build source code tools. Although these representations contain sufficient information, they are neither portable nor extensible. This paper proposes a tool platform that manages commonly used, fine-grained information about Java source code by using an XML representation. Our representation is suitable for developing tools that browse and manipulate actual source code, since the original code is annotated with tags based on its structure and retained within the tags. Additionally, it exposes information resulting from global semantic analysis, which is never provided by the typical AST. Our proposed platform allows developers to extend the representation for the purpose of sharing or exchanging various kinds of information about the source code, and also enables them to build new tools by using existing XML utilities.
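
    The flavour of such an annotated representation can be conveyed with a toy example in which the original source text is retained verbatim inside structural tags; the element and attribute names below are hypothetical, not the platform's actual schema.

      import xml.etree.ElementTree as ET

      # Hypothetical tags wrapping a trivial piece of Java code; the original text
      # is kept inside the elements so tools can still see and manipulate the real code.
      root = ET.Element("class", name="Hello")
      method = ET.SubElement(root, "method", name="main", visibility="public")
      stmt = ET.SubElement(method, "statement")
      stmt.text = 'System.out.println("Hello");'
      print(ET.tostring(root, encoding="unicode"))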

  • Particle Swarm Optimization Algorithm for Energy-Efficient Cluster-Based Sensor Networks

    Tzay-Farn SHIH  

     
    PAPER
    Vol: E89-A No:7  Page(s): 1950-1958

    Cluster-based routing protocols have attracted increasing attention as a way to reduce the traffic load and extend the system's lifetime. In cluster-based sensor networks, energy can be conserved by combining redundant data from nearby sensors at cluster head nodes before forwarding the data to the destination. The lifespan of the whole network can also be extended by clustering the sensor nodes and aggregating their data. In this paper, we propose a cluster-based routing protocol that uses the location information of sensors to assist in network clustering. Our protocol partitions the entire network into several clusters by a particle swarm optimization (PSO) clustering algorithm. In each cluster, a cluster head is selected to handle data aggregation or compression for nearby sensor nodes. For this clustering technique, the correct selection of the number of clusters is challenging and important. To cope with this issue, an energy dissipation model is used in our protocol to automatically estimate the optimal number of clusters. Several variations of the PSO clustering algorithm are proposed to improve the performance of our protocol. Simulation results show that the performance of our protocol is better than that of other protocols.
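
    The clustering step can be illustrated with a small PSO in which each particle encodes K candidate cluster-head positions and the fitness is the total sensor-to-nearest-head distance; the energy dissipation model, the automatic estimation of the number of clusters, and the routing protocol itself are not reproduced, and all parameter values below are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      sensors = rng.uniform(0, 100, (60, 2))             # sensor coordinates in a 100x100 field
      K, P, iters = 4, 20, 100                           # clusters, particles, iterations

      def fitness(heads):                                # heads: (K, 2) candidate positions
          d = np.linalg.norm(sensors[:, None, :] - heads[None, :, :], axis=2)
          return d.min(axis=1).sum()                     # total distance to nearest head

      pos = rng.uniform(0, 100, (P, K, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      g = pbest[pbest_f.argmin()].copy()                 # global best particle

      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
          pos = np.clip(pos + vel, 0, 100)
          f = np.array([fitness(p) for p in pos])
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          g = pbest[pbest_f.argmin()].copy()

      print(round(pbest_f.min(), 1))                     # best total distance found
      print(g.round(1))                                  # corresponding cluster-head layout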

  • Removal of Adherent Waterdrops from Images Acquired with a Stereo Camera System

    Yuu TANAKA  Atsushi YAMASHITA  Toru KANEKO  Kenjiro T. MIURA  

     
    PAPER-Stereo and Multiple View Analysis
    Vol: E89-D No:7  Page(s): 2021-2027

    In this paper, we propose a new method that can remove view-disturbing noises from stereo images. One of the thorny problems in outdoor surveillance by a camera is that adherent noises such as waterdrops on the protective glass surface in front of the lens disturb the view from the camera. Therefore, we propose a method for removing adherent noises from stereo images taken with a stereo camera system. Our method is based on stereo measurement and utilizes the disparities between the stereo image pair. Positions of noises in the images can be detected by comparing the disparities measured from the stereo images with the distance between the stereo camera system and the glass surface. The true disparities of image regions hidden by noises can be estimated from the property that disparities are generally similar to those around the noises. Finally, we can remove the noises from the images by replacing the affected regions with the textures of corresponding image regions obtained by referring to these disparities. Experimental results show the effectiveness of the proposed method.
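
    A schematic sketch of the detection and replacement steps follows: pixels whose measured disparity matches the disparity of the protective glass are flagged as adherent noise, and texture is fetched from the other image at the position implied by the surrounding scene disparity. The disparity sign convention, tolerance, and synthetic data are assumptions for illustration.

      import numpy as np

      def detect_adherent(disparity, glass_disparity, tol=1.0):
          # Noise pixels: measured disparity coincides with the glass-surface disparity.
          return np.abs(disparity - glass_disparity) < tol

      def fill_from_other_view(left, right, mask, scene_disparity):
          # Replace masked pixels with texture from the right image, shifted by the
          # scene disparity (x_right = x_left - d convention assumed).
          out = left.copy()
          ys, xs = np.nonzero(mask)
          src_x = np.clip(xs - int(round(scene_disparity)), 0, left.shape[1] - 1)
          out[ys, xs] = right[ys, src_x]
          return out

      h, w = 60, 80
      scene = np.tile(np.arange(w, dtype=float), (h, 1))   # ideal view of the scene
      left = scene.copy()
      right = np.roll(scene, -4, axis=1)                   # scene disparity of 4 pixels
      left[20:30, 30:40] = 0.0                             # waterdrop hides this region
      disp = np.full((h, w), 4.0)
      disp[20:30, 30:40] = 40.0                            # there, disparity equals the glass plane
      mask = detect_adherent(disp, glass_disparity=40.0)
      restored = fill_from_other_view(left, right, mask, scene_disparity=4.0)
      print(int(mask.sum()), "pixels replaced; max error",
            float(np.abs(restored[20:30, 30:40] - scene[20:30, 30:40]).max()))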

  • A Hierarchical Classification Method for US Bank-Notes

    Tatsuhiko KAGEHIRO  Hiroto NAGAYOSHI  Hiroshi SAKO  

     
    PAPER-Pattern Discrimination and Classification
    Vol: E89-D No:7  Page(s): 2061-2067

    This paper describes a method for the classification of bank-notes. The algorithm has three stages and classifies bank-notes with very low error rates and at high speed. To achieve the very low error rates, the result of classification is checked in the final stage by using features different from those used in the first two stages. High-speed processing is mainly achieved by the hierarchical structure, which leads to low computational costs. In an evaluation on 32,850 samples of US bank-notes, with the same number used for training, the algorithm classified all samples correctly without a single error. We statistically estimate the worst-case classification error rate to be 3.1×10⁻⁹.

  • Robust Active Shape Model Using AdaBoosted Histogram Classifiers and Shape Parameter Optimization

    Yuanzhong LI  Wataru ITO  

     
    PAPER-Shape Models
    Vol: E89-D No:7  Page(s): 2117-2123

    Active Shape Model (ASM) has been shown to be a powerful tool to aid the interpretation of images, especially in face alignment. ASM local appearance model parameter estimation is based on the assumption that residuals between model fit and data have a Gaussian distribution. Moreover, to generate an allowable face shape, ASM truncates the coefficients of the shape principal components to bounds determined by the eigenvalues. In this paper, an algorithm for modeling local appearances, called AdaBoosted ASM, and a shape parameter optimization method are proposed. For modeling the local appearances, we describe a novel method based on AdaBoosted histogram classifiers, in which the Gaussian distribution assumption is not necessary. For the shape parameter optimization, we describe an inadequacy in how ASM controls shape parameters and a novel method for resolving it. Experimental results demonstrate that the AdaBoosted histogram classifiers greatly improve the robustness of landmark displacement, and that the shape parameter optimization effectively solves the inadequacy of ASM's shape constraint.

  • HEMT CCD Matched Filter for Spread Spectrum Communication

    Takahiro SUGIYAMA  Eiji NISHIMORI  Satoru ONO  Kiyoshi KAWAGUCHI  Atsushi NAKAGAWA  

     
    PAPER-Millimeter-Wave Devices
    Vol: E89-C No:7  Page(s): 959-964

    An HEMT CCD (charge-coupled device) matched filter for spread-spectrum communication was developed. For higher data rates, it was fabricated as a two-phase CCD based on HEMT technology. It operates at 1.6 GHz, and its calculated data rate is 100 Mbps with a PN data length of 16 bits (PN data rate of 1.6 GHz). It attains a charge transfer efficiency (CTE) of 0.975 at 2 GHz. The HEMT CCD matched filter dissipates 173 mW from a 10-Vp-p supply, and its chip size is 0.96 × 1.03 mm. It will thus be useful for optical communication and other high-data-rate applications utilizing spread-spectrum (SS) communication.
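
    What the device computes can be sketched behaviourally in software as a sliding correlation of the incoming chip stream against a 16-chip PN code, whose periodic peaks mark code alignment; the PN sequence and signal levels below are arbitrary examples, not those of the reported chip.

      import numpy as np

      pn = np.array([1, -1, 1, 1, -1, -1, 1, -1,
                     1, 1, 1, -1, -1, 1, -1, -1], dtype=float)   # example 16-chip code

      def matched_filter(chips, code):
          # output[k] = inner product of the code with chips[k : k + len(code)]
          return np.correlate(chips, code, mode="valid")

      rng = np.random.default_rng(5)
      data_bits = np.array([1, -1, 1, 1], dtype=float)            # each bit spread by the PN code
      chips = np.concatenate([b * pn for b in data_bits])
      chips = chips + 0.2 * rng.standard_normal(chips.size)       # additive noise
      out = matched_filter(chips, pn)
      print(np.abs(out[::16]).round(1))                           # peaks (about 16) at each bit boundary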

  • A Very Low Power 10 MHz CMOS Continuous-Time Bandpass Filter with On-Chip Automatic Tuning

    Gholamreza Zareh FATIN  Mohammad GHADAMI  

     
    PAPER-Electronic Circuits
    Vol: E89-C No:7  Page(s): 1089-1096

    A second-order CMOS continuous-time bandpass filter with a tunable 4-12 MHz center frequency (fc) is presented. The design uses a new second-order block based on the Gm-C method. This Gm-C filter achieves a dynamic range of 30 dB for 1% IM3 and a Q of 58 at 12 MHz, while dissipating only 10.5 mW from a 3.3 V power supply in a 0.35 µm CMOS process. The on-chip indirect automatic tuning circuit uses a phase-locked loop that sets the filter center frequency with reference to an external clock.
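
    The target response itself (as opposed to the Gm-C circuit and its PLL tuning loop, which are analog) can be checked with a textbook second-order bandpass transfer function using an fc and Q in the range quoted in the abstract; this is a behavioural illustration only.

      import numpy as np
      from scipy import signal

      fc, Q = 10e6, 58.0                                   # illustrative values within the quoted range
      w0 = 2 * np.pi * fc
      bandpass = signal.TransferFunction([w0 / Q, 0.0],    # H(s) = (w0/Q)s / (s^2 + (w0/Q)s + w0^2)
                                         [1.0, w0 / Q, w0 ** 2])

      f = np.logspace(6, 8, 201)                           # 1 MHz .. 100 MHz
      w, mag, _ = signal.bode(bandpass, 2 * np.pi * f)
      peak = int(np.argmax(mag))
      print(round(f[peak] / 1e6, 2), "MHz peak,", round(mag[peak], 2), "dB")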

  • Secret Key Agreement from Correlated Source Outputs Using Low Density Parity Check Matrices

    Jun MURAMATSU  

     
    PAPER-Information Theory
    Vol: E89-A No:7  Page(s): 2036-2046

    This paper deals with a secret key agreement problem from correlated random numbers. It is proved that there is a pair of linear matrices that yields a secret key agreement in the situation wherein a sender, a legitimate receiver, and an eavesdropper have access to correlated random numbers. A relation between the coding problem of correlated sources and the secret key agreement problem from correlated random numbers is also discussed.

  • Skeletons and Asynchronous RPC for Embedded Data and Task Parallel Image Processing

    Wouter CAARLS  Pieter JONKER  Henk CORPORAAL  

     
    PAPER-Parallel and Distributed Computing
    Vol: E89-D No:7  Page(s): 2036-2043

    Developing embedded parallel image processing applications is usually a very hardware-dependent process, often using the single instruction multiple data (SIMD) paradigm and requiring deep knowledge of the processors used. Furthermore, the application is tailored to a specific hardware platform, and if the chosen hardware does not meet the requirements, it must be rewritten for a new platform. We have proposed the use of design space exploration [9] to find the most suitable hardware platform for a certain application. This requires a hardware-independent program, and we use algorithmic skeletons [5] to achieve this, while exploiting the data parallelism inherent to low-level image processing. However, since different operations run best on different kinds of processors, we need to exploit task parallelism as well. This paper describes how we exploit task parallelism using an asynchronous remote procedure call (RPC) system optimized for low-memory and sparsely connected systems such as smart cameras. It uses a futures [16]-like model to present a normal imperative C interface to the user, in which the skeleton calls are implicitly parallelized and pipelined. Simulation provides the task dependency graph and performance numbers for the mapping, which can be done at run time to facilitate data-dependent branching. The result is an easy-to-program, platform-independent framework that shields the user from the parallel implementation and mapping of the application, while efficiently utilizing on-chip memory and interconnect bandwidth.
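
    The futures-style call model can be approximated in ordinary Python with a thread pool: a skeleton call returns immediately with a future, a dependent call accepts futures as arguments, and the pipeline only resolves when a result is actually needed. This is a loose analogy to the paper's C-based framework, and all names below are hypothetical.

      from concurrent.futures import ThreadPoolExecutor

      pool = ThreadPoolExecutor(max_workers=4)

      def skeleton(op):
          # Wrap an operation so that calling it returns a future instead of blocking.
          def call(*args):
              def resolve_and_run():
                  concrete = [a.result() if hasattr(a, "result") else a for a in args]
                  return op(*concrete)
              return pool.submit(resolve_and_run)
          return call

      threshold = skeleton(lambda img, t: [[1 if p > t else 0 for p in row] for row in img])
      count = skeleton(lambda img: sum(sum(row) for row in img))

      image = [[10, 200, 30], [250, 40, 180]]
      binary = threshold(image, 128)     # returns immediately (a future)
      total = count(binary)              # consumes the previous future; still non-blocking
      print(total.result())              # forces the pipeline; prints 3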

  • A 3D Feature-Based Binocular Tracking Algorithm

    Guang TIAN  Feihu QI  Masatoshi KIMACHI  Yue WU  Takashi IKETANI  

     
    PAPER-Tracking
    Vol: E89-D No:7  Page(s): 2142-2149

    This paper presents a 3D feature-based binocular tracking algorithm for tracking people in crowded indoor scenes. The algorithm consists of a two-stage 3D feature point grouping method and a robust 3D feature-based tracking method. The two-stage 3D feature point grouping method uses a kernel-based ISODATA method to detect people accurately even when partial or almost full occlusion occurs among people in the surveillance area. The robust 3D feature-based tracking method combines an interacting multiple model (IMM) method with a cascaded multiple-feature data association method. It not only manages the generation and disappearance of trajectories, but also deals with interactions between people and tracks maneuvering people. Experimental results demonstrate the robustness and efficiency of the proposed framework. It runs in real time and is not sensitive to variations in the frame-to-frame interval. It can also handle occlusion and performs well in cases where people rotate and wriggle.

  • Multi-Dimensional Mappings of M-ary Constellations for BICM-ID Systems

    Nghi H. TRAN  Ha H. NGUYEN  

     
    LETTER-Coding Theory
    Vol: E89-A No:7  Page(s): 2088-2091

    This paper studies bit-interleaved coded modulation with iterative decoding (BICM-ID) systems that employ multi-dimensional mappings of M-ary constellations to improve the error performance over Rayleigh fading channels. Based on the analytical evaluations of the asymptotic bit error probability (BEP), the distance criteria for the mapping designs can be obtained. A binary switching algorithm (BSA) is then applied to find the optimal mappings with respect to the asymptotic performance. Simulation and analytical results show that the use of multi-dimensional mappings of M-ary constellations can significantly improve the error performance.
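
    The general shape of a binary switching algorithm (BSA) pass can be sketched as follows: starting from an initial assignment of bit labels to constellation points, repeatedly accept the label swap that decreases a cost function until no swap helps. The cost used below (Hamming distance weighted by inverse squared Euclidean distance) is only a stand-in for the asymptotic-BEP-based criterion of the paper, and the constellation is an arbitrary 8-point example.

      import numpy as np
      from itertools import combinations

      pts = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j,
                      3 + 1j, 3 - 1j, -3 + 1j, -3 - 1j]) / np.sqrt(5)   # arbitrary 8-point set

      def cost(labels):
          # Stand-in design cost: bit (Hamming) distance over squared Euclidean distance.
          c = 0.0
          for i, j in combinations(range(len(pts)), 2):
              hamming = bin(labels[i] ^ labels[j]).count("1")
              c += hamming / abs(pts[i] - pts[j]) ** 2
          return c

      labels = list(range(len(pts)))            # initial (natural) labeling
      improved = True
      while improved:
          improved = False
          base = cost(labels)
          for i, j in combinations(range(len(labels)), 2):
              labels[i], labels[j] = labels[j], labels[i]        # try a swap
              if cost(labels) < base:
                  improved, base = True, cost(labels)            # keep a beneficial swap
              else:
                  labels[i], labels[j] = labels[j], labels[i]    # undo
      print(labels, round(cost(labels), 3))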
