
Keyword Search Result


  • Robustness in Supervised Learning Based Blind Automatic Modulation Classification

    Md. Abdur RAHMAN  Azril HANIZ  Minseok KIM  Jun-ichi TAKADA  

     
    PAPER-Wireless Communication Technologies
    Vol: E96-B No:4, Page(s): 1030-1038

    Automatic modulation classification (AMC) involves extracting a set of unique features from the received signal. The accuracy and uniqueness of these features, together with an appropriate classification algorithm, determine the overall performance of an AMC system. The accuracy of any modulation feature is usually limited by the lack of knowledge of signal parameters such as the carrier frequency and symbol rate. Most papers do not sufficiently consider these impairments and therefore do not directly target practical applications. The AMC system proposed herein is trained with probable input signals, and the appropriate decision tree is chosen to achieve robust classification. Six unique features are used to classify eight analog and digital modulation schemes that are widely used by low-frequency mobile emergency radios around the globe. The proposed algorithm improves the classification performance of AMC, especially in the low-SNR regime.
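
The abstract describes a feature-then-decision-tree pipeline. As a purely illustrative sketch (the paper's six features, thresholds, and tree are not given here; the features and cut points below are hypothetical), such a classifier can look like:

```python
# Hypothetical sketch: feature-based decision-tree AMC (not the paper's exact
# six features or thresholds, which the abstract does not enumerate).
import numpy as np

def extract_features(iq):
    """Toy statistical features of a complex baseband signal `iq`."""
    amp = np.abs(iq)
    phase = np.unwrap(np.angle(iq))
    inst_freq = np.diff(phase)
    return np.array([
        np.std(amp) / (np.mean(amp) + 1e-12),   # amplitude variation
        np.std(inst_freq),                      # instantaneous-frequency spread
        np.mean(amp**4) / (np.mean(amp**2)**2 + 1e-12),  # kurtosis-like statistic
    ])

def classify(features):
    """Toy fixed decision tree over the features above (illustrative thresholds)."""
    amp_var, freq_spread, kurt = features
    if freq_spread > 0.5:        # strong frequency variation -> FM-like
        return "FM"
    if amp_var > 0.3:            # strong envelope variation -> AM/ASK-like
        return "AM"
    return "PSK"                 # otherwise assume a constant-envelope digital scheme
```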

  • A Sub-100 mW Dual-Core HOG Accelerator VLSI for Parallel Feature Extraction Processing for HDTV Resolution Video

    Kosuke MIZUNO  Kenta TAKAGI  Yosuke TERACHI  Shintaro IZUMI  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO  

     
    PAPER
    Vol: E96-C No:4, Page(s): 433-443

    This paper describes a Histogram of Oriented Gradients (HOG) feature extraction accelerator that features a VLSI-oriented HOG algorithm with early classification in Support Vector Machine (SVM) classification, a dual-core architecture for parallel feature extraction and multiple-object detection, and a detection-window-size scalable architecture with a reconfigurable MAC array for processing objects of several shapes. To achieve the low power consumption required for mobile applications, early classification efficiently reduces the amount of computation in SVM classification with no accuracy degradation. The dual-core architecture enables parallel feature extraction within one frame for high-speed or low-power computing, and detects multiple objects simultaneously with low power consumption by sharing HOG features. Objects of several shapes (vertically long, horizontally long, and square) can be detected through cooperation between the two cores. The proposed methods provide processing capability for HDTV-resolution video (1920×1080 pixels) at 30 frames per second (fps). The test chip, fabricated in 65 nm CMOS technology, occupies 4.2×2.1 mm² and contains 502 Kgates and 1.22 Mbit of on-chip SRAM. Simulated data show 99.5 mW power consumption at 42.9 MHz and 1.1 V.
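
For readers unfamiliar with HOG, a minimal software reference for the per-cell gradient histograms follows. This is generic background only, not the paper's VLSI-oriented algorithm, early-classification scheme, or dual-core mapping:

```python
# Minimal software sketch of HOG cell histograms (illustrative reference only).
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Gradient-orientation histograms per cell for a grayscale image `img`."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]          # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]          # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation in [0, 180)
    h_cells, w_cells = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((h_cells, w_cells, bins))
    for i in range(h_cells):
        for j in range(w_cells):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)           # magnitude-weighted orientation vote
    return hist
```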

  • Valid Digit and Overflow Information to Reduce Energy Dissipation of Functional Units in General Purpose Processors

    Kazuhito ITO  Takuya NUMATA  

     
    PAPER
    Vol: E96-C No:4, Page(s): 463-472

    To reduce dynamic energy dissipation in CMOS LSIs, it is effective to reduce how frequently signal values change. In this paper, a data representation carrying valid-digit and lower-digit overflow information is proposed to suppress unnecessary signal changes in the integer functional units and registers of general-purpose processors. Experimental results show that the proposed method reduces energy dissipation by 9.8% for benchmark programs.
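
As a rough illustration of the idea (the paper's actual encoding and hardware are not reproduced; the 16-bit narrow width and flag format below are assumptions), knowing that both operands fit within the lower digits lets an adder leave the upper bits untouched and expose only an overflow flag:

```python
# Illustrative sketch only: tag each operand with its valid-digit width and perform
# a narrow addition plus an overflow flag when both operands fit in the lower bits,
# so upper-bit signal activity can be avoided.
def valid_width(x):
    """Number of low-order bits needed to represent x (x assumed non-negative)."""
    return max(1, x.bit_length())

def add_with_valid_digits(a, b, narrow=16):
    if valid_width(a) <= narrow and valid_width(b) <= narrow:
        s = (a + b) & ((1 << narrow) - 1)      # narrow add: upper bits stay idle
        overflow = (a + b) >> narrow != 0      # lower-digit overflow information
        return s, overflow
    return a + b, False                        # fall back to full-width addition

print(add_with_valid_digits(0x00FF, 0x0001))   # (256, False): upper bits never toggle
print(add_with_valid_digits(0xFFFF, 0x0001))   # (0, True): overflow flagged instead
```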

  • Self Synchronous Circuits for Robust Operation in Low Voltage and Soft Error Prone Environments

    Benjamin DEVLIN  Makoto IKEDA  Kunihiro ASADA  

     
    PAPER
    Vol: E96-C No:4, Page(s): 518-527

    In this paper we show that self-synchronous circuits can provide robust operation in both soft-error-prone and low-voltage operating environments. Self-synchronous circuits are shown to be self-checking: a soft error will either cause a detectable error or halt operation of the circuit. A watchdog circuit is proposed to autonomously detect dual-rail '11' errors and prevent their propagation, with measurements in 65 nm CMOS showing seamless operation from 1.6 V down to 0.37 V. Compared to a system without the watchdog, circuit size and energy per operation increase by 6.9% and 16%, respectively, while error tolerance to noise improves by 83% and 40% at 1.2 V and 0.4 V, respectively. A circuit that uses the dual-pipeline circuit style as redundancy against permanent faults is also presented, and 40 nm CMOS measurement results show correct operation with throughputs of 1.2 GHz and 810 MHz at 1.1 V before and after disabling a faulty pipeline stage, respectively.
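
For context, here is a toy model of the dual-rail encoding the watchdog monitors (illustrative background only; the paper's detector is a transistor-level 65 nm circuit). Each logical bit travels on two rails, and the '11' combination never occurs in normal operation, so observing it signals a soft error:

```python
# Toy model of dual-rail code checking. Each bit is a (true, false) rail pair:
# (0,0) = spacer, (0,1) = logic 0, (1,0) = logic 1, (1,1) = illegal.
def watchdog(rails):
    """Return 'error' if any rail pair is in the illegal '11' state (e.g. after a soft error)."""
    for t, f in rails:
        if t == 1 and f == 1:
            return "error"        # detectable error: halt or contain propagation
    return "ok"

print(watchdog([(0, 1), (1, 0), (0, 0)]))   # ok
print(watchdog([(0, 1), (1, 1)]))           # error
```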

  • Design of CMOS Low-Noise Analog Circuits for Particle Detector Pixel Readout LSIs

    Fei LI  Masaya MIYAHARA  Akira MATSUZAWA  

     
    PAPER
    Vol: E96-C No:4, Page(s): 568-576

    This paper describes the analysis and design of low-noise analog circuits for Qpix, a readout LSI with a new architecture. In contrast to conventional readout LSIs that use the TOT (time-over-threshold) method, Qpix measures the deposited charge directly as well as the timing information. A preamplifier with a two-stage op amp and current-copy output buffers is proposed to realize these functions; it is configured to implement a charge-sensitive amplifier (CSA) and a trans-impedance amplifier (TIA). Design issues related to the CSA are analyzed, including the gain requirement of the op amp, the stability and compensation of the two-stage cascode op amp, noise performance estimation, the required ADC resolution, and the time response. The offset calibration method in the TIA, which improves the charge-detection sensitivity, is also presented, together with some design principles for these analog circuits. To verify the theoretical analysis, a 400-pixel high-speed readout LSI, Qpix v.1, was designed and fabricated in a 180 nm CMOS process. Calculations and SPICE simulations show that the total output noise is about 0.31 mV (rms) at the output of the CSA and that the offset voltage is less than 4 mV at the output of the TIA. These are attractive figures for an experimental particle detector using the Qpix v.1 chip as its readout LSI.
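
As textbook background for the "gain requirement of the op amp" mentioned above (these are the standard first-order CSA relations, not values or equations taken from the paper):

```latex
% Ideal-CSA relations: an input charge Q integrated on the feedback capacitor C_f
% gives the output step
V_{\mathrm{out}} \approx \frac{Q}{C_f},
\qquad
\text{charge-transfer efficiency} \;=\; \frac{(1+A)\,C_f}{C_{\mathrm{det}} + (1+A)\,C_f},
% so the open-loop gain A must satisfy (1+A) C_f >> C_det for the conversion gain
% Q/C_f to be insensitive to the detector capacitance C_det.
```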

  • 25 Gb/s 150-m Multi-Mode Fiber Transmission Using a CMOS-Driven 1.3-µm Lens-Integrated Surface-Emitting Laser

    Daichi KAWAMURA  Toshiaki TAKAI  Yong LEE  Kenji KOGO  Koichiro ADACHI  Yasunobu MATSUOKA  Norio CHUJO  Reiko MITA  Saori HAMAMURA  Satoshi KANEKO  Kinya YAMAZAKI  Yoshiaki ISHIGAMI  Toshiki SUGAWARA  Shinji TSUJI  

     
    BRIEF PAPER-Lasers, Quantum Electronics
    Vol: E96-C No:4, Page(s): 615-617

    We describe 25-Gb/s error-free transmission over multi-mode fiber (MMF) using a transmitter based on a 1.3-µm lens-integrated surface-emitting laser (LISEL) and a CMOS laser-diode driver (LDD). The transmitter achieves 25-Gb/s error-free transmission over 30-m MMF under the overfilled-launch condition and over 150-m MMF with a power penalty of less than 1.0 dB under the underfilled-launch condition.

  • Application of an Artificial Fish Swarm Algorithm in Symbolic Regression

    Qing LIU  Tomohiro ODAKA  Jousuke KUROIWA  Hisakazu OGURA  

     
    PAPER-Fundamentals of Information Systems
    Vol: E96-D No:4, Page(s): 872-885

    An artificial fish swarm algorithm (AFSA) for solving symbolic regression problems is introduced in this paper. In the proposed AFSA, artificial fish (AF) individuals represent candidate solutions, which are encoded with the gene expression scheme used in GEP. To evaluate AF individuals, a penalty-based fitness function, in which the node number of the parse tree is treated as a constraint, is designed so as to obtain a solution expression that not only fits the given data well but is also compact. Several important concepts are defined, including distance, partners, congestion degree, and feature code. Based on these concepts, four behaviors are designed, namely randomly moving, preying, following, and avoiding behavior, and their formalized descriptions are presented. Exhaustive simulation results demonstrate that the proposed algorithm not only obtains high-quality solution expressions but also exhibits remarkable robustness and quick convergence.
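
The penalty-based fitness can be illustrated with a short sketch. The abstract states only that the parse-tree node count acts as a constraint; the exact penalty form, weights, and node limit below are assumptions for illustration:

```python
# Hedged sketch of a penalty-based fitness for symbolic regression:
# data-fitting error plus a penalty when the candidate parse tree is too large.
import math

def fitness(expr_fn, node_count, data, node_limit=30, penalty=10.0):
    """Lower is better: RMS error on the data plus a size penalty beyond node_limit."""
    err = 0.0
    for x, y in data:
        try:
            err += (expr_fn(x) - y) ** 2
        except (OverflowError, ZeroDivisionError, ValueError):
            return math.inf                      # invalid candidate expression
    err = math.sqrt(err / len(data))
    return err + penalty * max(0, node_count - node_limit)

# Example: candidate f(x) = x*x + 1 with 5 parse-tree nodes on toy data.
data = [(x, x * x + 1.0) for x in range(-5, 6)]
print(fitness(lambda x: x * x + 1.0, 5, data))   # ~0.0: fits well and is compact
```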

  • Failure Microscope: Precisely Diagnosing Routing Instability

    Hongjun LIU  Baokang ZHAO  Xiaofeng HU  Dan ZHAO  Xicheng LU  

     
    PAPER-Information Network
    Vol: E96-D No:4, Page(s): 918-926

    Root cause analysis of BGP updates is the key to debugging and troubleshooting BGP routing problems. However, it is challenging to precisely diagnose the cause and the origin of routing instability. In this paper, we are the first to distinguish link-failure events from policy-change events based on BGP updates from a single vantage point, by analyzing the closed loops formed by intersecting all transient paths during instability together with the length variation of the stable paths after instability. Once link-failure events are recognized, their origins are precisely inferred with 100% accuracy. Simulations show that our method effectively distinguishes link-failure events from link-restoration and policy-related events and reduces the size of the candidate set of origins.

  • A Bayesian Framework Using Multiple Model Structures for Speech Recognition

    Sayaka SHIOTA  Kei HASHIMOTO  Yoshihiko NANKAKU  Keiichi TOKUDA  

     
    PAPER-Speech and Hearing
    Vol: E96-D No:4, Page(s): 939-948

    This paper proposes an acoustic modeling technique for speech recognition based on a Bayesian framework using multiple model structures. The aim of the Bayesian approach is to obtain good predictions of observations by marginalizing all variables related to the generative process. Although the effectiveness of marginalizing model parameters has recently been reported in speech recognition, most of these systems use only “one” model structure, e.g., one choice of HMM topology, number of states and mixtures, type of state output distribution, and parameter-tying structure. However, this is insufficient to represent the true model distribution, because in most practical cases such a family of models does not include the true distribution. One solution to this problem is to use multiple model structures. Although several approaches using multiple model structures have already been proposed, a consistent integration of multiple model structures based on the Bayesian approach has not been achieved in speech recognition. This paper focuses on integrating multiple phonetic decision trees based on the Bayesian framework in HMM-based acoustic modeling. The proposed method is derived from a new marginal likelihood function that includes the model structure as a latent variable in addition to the HMM state sequences and model parameters, and the posterior distributions of these latent variables are obtained using the variational Bayesian method. Furthermore, to improve the optimization algorithm, the deterministic annealing EM (DAEM) algorithm is applied to the training process. The proposed method effectively utilizes multiple model structures, especially in the early stage of training, which leads to better predictive distributions and improved recognition performance.
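
A hedged sketch of the kind of marginal likelihood described above, in our own notation rather than the paper's: the model structure m, the HMM state sequences Z, and the model parameters Λ are all treated as latent variables and marginalized, and variational Bayes maximizes a lower bound on the log evidence:

```latex
p(O) \;=\; \sum_{m} P(m) \int \sum_{Z} p(O, Z \mid \Lambda, m)\, p(\Lambda \mid m)\, d\Lambda ,
\qquad
\log p(O) \;\ge\;
\mathbb{E}_{q(m)\,q(Z \mid m)\,q(\Lambda \mid m)}
\!\left[ \log \frac{p(O, Z, \Lambda, m)}{q(m)\,q(Z \mid m)\,q(\Lambda \mid m)} \right].
% The variational posteriors q(.) over structures, state sequences, and parameters
% are optimized to tighten the bound (variational Bayesian method).
```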

  • Unified Time-Frequency OFDM Transmission with Self Interference Cancellation

    Changyong PAN  Linglong DAI  Zhixing YANG  

     
    PAPER-Communication Theory and Signals
    Vol: E96-A No:4, Page(s): 807-813

    Time-domain synchronous orthogonal frequency division multiplexing (TDS-OFDM) has higher spectral efficiency than standard cyclic-prefix OFDM (CP-OFDM) because it replaces the random CP with a known training sequence (TS), which can also be used for synchronization and channel estimation. However, TDS-OFDM suffers from performance loss over fading channels because iterative interference cancellation has to be used to remove the mutual interference between the TS and the useful data. To solve this problem, a novel TS-based OFDM transmission scheme, referred to as unified time-frequency OFDM (UTF-OFDM), is proposed, in which the time-domain TS and the frequency-domain pilots are carefully designed so that the interference from the TS to the data is naturally avoided without any reconstruction. The proposed UTF-OFDM based flexible frame structure supports effective channel estimation and reliable channel equalization, while imposing a significantly lower complexity than the TDS-OFDM system at the cost of a slightly reduced spectral efficiency. Simulation results demonstrate that the proposed UTF-OFDM substantially outperforms the existing TDS-OFDM in terms of the achievable bit error rate.

  • A Bag-of-Features Approach to Classify Six Types of Pulmonary Textures on High-Resolution Computed Tomography Open Access

    Rui XU  Yasushi HIRANO  Rie TACHIBANA  Shoji KIDO  

     
    PAPER-Computer-Aided Diagnosis
    Vol: E96-D No:4, Page(s): 845-855

    Computer-aided diagnosis (CAD) systems for diffuse lung diseases (DLD) are required to help radiologists read high-resolution computed tomography (HRCT) scans. An important task in developing such CAD systems is to make computers automatically recognize typical pulmonary textures of DLD on HRCT. In this work, we propose a bag-of-features based method for classifying six kinds of DLD patterns: consolidation (CON), ground-glass opacity (GGO), honeycombing (HCM), emphysema (EMP), nodular (NOD), and normal tissue (NOR). To apply the bag-of-features method successfully to this task, we focus on designing suitable local features and the classifier. Considering that the pulmonary textures are characterized not only by CT values but also by shapes, we propose a set of local features consisting of statistical measures computed from both CT values and the eigenvalues of Hessian matrices. Additionally, we design a support vector machine (SVM) classifier by optimizing parameters related to both the kernel and the soft-margin penalty constant. We collected 117 HRCT scans from 117 subjects for experiments. Three experienced radiologists reviewed the data, and the regions on which they agreed that typical textures existed were used to generate 3009 3D volumes of interest (VOIs) of size 32×32×32. These VOIs were separated into two sets: one for training and parameter tuning, and the other for evaluation. The overall recognition accuracy of the proposed method was 93.18%. The precisions/sensitivities for each texture were 96.67%/95.08% (CON), 92.55%/94.02% (GGO), 97.67%/99.21% (HCM), 94.74%/93.99% (EMP), 81.48%/86.03% (NOD), and 94.33%/90.74% (NOR). Additionally, experimental results showed that the proposed method performed better than four baseline methods, including two state-of-the-art methods for classifying DLD textures.
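
A hedged sketch of a generic bag-of-features pipeline for 3D texture VOIs follows. The specific statistical and Hessian-eigenvalue features, codebook size, and tuned SVM parameters of the paper are not reproduced; the descriptors and values below are placeholders:

```python
# Generic bag-of-features sketch: local descriptors -> k-means codebook -> word
# histogram per VOI -> RBF SVM. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def local_features(voi, step=8):
    """Toy local descriptors: mean/std of CT values in sub-blocks of a 32^3 VOI."""
    feats = []
    for x in range(0, voi.shape[0], step):
        for y in range(0, voi.shape[1], step):
            for z in range(0, voi.shape[2], step):
                block = voi[x:x+step, y:y+step, z:z+step]
                feats.append([block.mean(), block.std()])  # the paper also uses Hessian eigenvalues
    return np.array(feats)

def bof_histogram(voi, codebook):
    words = codebook.predict(local_features(voi))           # assign descriptors to visual words
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def train(vois, labels, n_words=64):
    all_feats = np.vstack([local_features(v) for v in vois])
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_feats)
    X = np.array([bof_histogram(v, codebook) for v in vois])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)  # C and gamma would be tuned
    return codebook, clf
```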

  • A Cost-Effective Selective TMR for Coarse-Grained Reconfigurable Architectures Based on DFG-Level Vulnerability Analysis

    Takashi IMAGAWA  Hiroshi TSUTSUI  Hiroyuki OCHI  Takashi SATO  

     
    PAPER
    Vol: E96-C No:4, Page(s): 454-462

    This paper proposes a novel method to determine a priority for applying selective triple modular redundancy (selective TMR) against single-event upsets (SEUs) to achieve cost-effective, reliable implementation of application circuits on coarse-grained reconfigurable architectures (CGRAs). The priority is determined by estimating the vulnerability of each node in the data flow graph (DFG) of the application circuit. The estimation is based on a weighted sum of node parameters that characterize the impact of an SEU in the node on the output data. This method requires neither time-consuming placement-and-routing processes nor extensive fault simulations for various triplication patterns, which allows the set of nodes to be triplicated for minimizing the vulnerability under a given area constraint to be identified at an early stage of the design flow. Therefore, the proposed method enables efficient design-space exploration of reliability-oriented CGRAs and their applications.
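
The selection idea can be sketched as a weighted-sum score followed by greedy selection under an area budget. The parameter names, weights, and area model below are illustrative assumptions, not the paper's actual cost model:

```python
# Hedged sketch: score each DFG node by a weighted sum of its parameters, then
# greedily triplicate the most vulnerable nodes until the area budget is spent.
def vulnerability(node, weights):
    """Weighted sum of node parameters that characterize SEU impact on the output."""
    return sum(weights[k] * node[k] for k in weights)

def select_tmr_nodes(nodes, weights, area_budget):
    """nodes: list of dicts with an 'area' field plus the scored parameters."""
    ranked = sorted(nodes, key=lambda n: vulnerability(n, weights), reverse=True)
    selected, used = [], 0
    for n in ranked:
        extra = 2 * n["area"]             # triplication adds roughly two copies (voter ignored)
        if used + extra <= area_budget:
            selected.append(n["name"])
            used += extra
    return selected

nodes = [
    {"name": "mul0", "area": 4, "fanout": 3, "liveness": 0.9},
    {"name": "add1", "area": 1, "fanout": 1, "liveness": 0.2},
    {"name": "add2", "area": 1, "fanout": 5, "liveness": 0.7},
]
weights = {"fanout": 1.0, "liveness": 2.0}    # hypothetical weighting
print(select_tmr_nodes(nodes, weights, area_budget=10))
```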

  • All-Zero Block-Based Optimization for Quadtree-Structured Prediction and Residual Encoding in High Efficiency Video Coding

    Guifen TIAN  Xin JIN  Satoshi GOTO  

     
    PAPER-Digital Signal Processing
    Vol: E96-A No:4, Page(s): 769-779

    High Efficiency Video Coding (HEVC) outperforms H.264 High Profile with a bitrate saving of about 43%, mostly because block sizes for hybrid prediction and residual encoding are chosen recursively using a quadtree structure. Nevertheless, the exhaustive quadtree-based partitioning is not always necessary. This paper takes advantage of all-zero residual blocks at every quadtree depth to accelerate the prediction and residual encoding processes. First, we derive a near-sufficient condition to detect variable-sized all-zero blocks (AZBs); for these blocks, the discrete cosine transform (DCT) and quantization can be skipped. Next, using the derived condition, we propose an early termination technique to reduce the complexity of motion estimation (ME). More significantly, we present a two-dimensional AZB-based pruning technique to constrain prediction units (PUs) that contribute negligibly to rate-distortion (RD) performance. Experiments on a wide range of videos, with resolutions ranging from 416×240 to 4K×2K, show that the proposed scheme can reduce the computational complexity of the HEVC encoder by up to 70.46% (50.34% on average), with slight losses in peak signal-to-noise ratio (PSNR) and bitrate. The proposal also outperforms other state-of-the-art methods by achieving greater complexity reduction and better bitrate performance.
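
The general shape of AZB early termination can be sketched as follows. The threshold function is a placeholder, not the near-sufficient condition derived in the paper:

```python
# Hedged illustration of all-zero-block (AZB) early skipping: if a residual block
# passes a cheap pre-quantization test, DCT and quantization are skipped and the
# block is coded as all-zero.
import numpy as np

def is_all_zero_block(residual, qstep, margin=0.75):
    """Cheap SAD-based test: small residual energy relative to the quantization step."""
    sad = np.abs(residual).sum()
    return sad < margin * qstep * residual.size   # placeholder threshold form

def encode_block(residual, qstep, dct_and_quant):
    if is_all_zero_block(residual, qstep):
        return np.zeros_like(residual)            # skip transform/quantization entirely
    return dct_and_quant(residual, qstep)
```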

  • A Comb Filter with Adaptive Notch Gain and Bandwidth

    Yosuke SUGIURA  Arata KAWAMURA  Youji IIGUNI  

     
    PAPER-Digital Signal Processing
    Vol: E96-A No:4, Page(s): 790-795

    This paper proposes a new adaptive comb filter that automatically designs its own characteristics. The comb filter is used to eliminate periodic noise from an observed signal. Designing a comb filter involves three important factors: the notch frequency, the notch gain, and the notch bandwidth. The notch frequencies are the null frequencies, which are aligned at equally spaced intervals. The notch gain controls how strongly the observed signal is attenuated at the notch frequencies, and the notch bandwidth controls the width of the attenuated band around each notch frequency. We previously proposed a comb filter that adjusts the notch gain adaptively to eliminate periodic noise. In this paper, to eliminate periodic noise whose frequencies fluctuate, we propose a comb filter that adapts the notch gain and the notch bandwidth simultaneously. Simulation results show the effectiveness of the proposed adaptive comb filter.
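
One common comb-notch parameterization with independent notch gain and bandwidth controls is given below as textbook-style background; it is not necessarily the exact filter structure of this paper:

```latex
H(z) \;=\; \frac{1+g}{2} \;+\; \frac{1-g}{2}\,A(z),
\qquad
A(z) \;=\; \frac{a - z^{-N}}{1 - a\,z^{-N}} \quad (\text{all-pass},\ |a|<1).
% Notches lie at \omega_k = 2\pi k / N, where |H| = g (the notch gain);
% between notches |H| \approx 1, and a -> 1 narrows the notches (bandwidth control).
```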

  • Evolutionarily and Neutrally Stable Strategies in Multicriteria Games

    Tomohiro KAWAMURA  Takafumi KANAZAWA  Toshimitsu USHIO  

     
    PAPER-Concurrent Systems
    Vol: E96-A No:4, Page(s): 814-820

    Evolutionary stability has been discussed as a fundamental issue in single-criterion games. We extend evolutionarily and neutrally stable strategies to multicriteria games. Keeping in mind that a payoff in a multicriteria game is given by a vector, we provide several concepts, based on partial orders of payoff vectors, that coincide in single-criterion games. We also investigate the hierarchical structure of the proposed evolutionarily and neutrally stable strategies. Shapley introduced concepts such as strong and weak equilibria; we discuss the relationship between these equilibria and our proposed notions of evolutionary stability.

  • Whitelisting for Critical IT-Based Infrastructure

    YoungHwa JANG  InCheol SHIN  Byung-gil MIN  Jungtaek SEO  MyungKeun YOON  

     
    LETTER-Network Management/Operation
    Vol: E96-B No:4, Page(s): 1070-1074

    Critical infrastructures are falsely believed to be safe when they are isolated from the Internet. However, the recent appearance of Stuxnet demonstrated that isolated networks are no longer safe. We observe that a better intrusion detection scheme can be established based on the unique features of critical infrastructures. In this paper, we propose a whitelist-based detection system. Network-level and application-level whitelists are proposed and combined to form a novel cross-layer whitelist. Through experiments, we confirm that the proposed whitelists detect attack packets exactly, which cannot be achieved by existing schemes.
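
The cross-layer idea can be illustrated with a small sketch: a packet must satisfy both a network-level rule (which endpoints may talk, over which protocol and port) and an application-level rule (which commands that pair may issue). The rule format, fields, and example entries below are illustrative, not the paper's actual specification:

```python
# Hedged sketch of a cross-layer whitelist check for control-network traffic.
NETWORK_WHITELIST = {
    ("10.0.0.5", "10.0.0.9", "tcp", 502),       # e.g., HMI -> PLC over Modbus/TCP
}
APPLICATION_WHITELIST = {
    ("10.0.0.5", "10.0.0.9"): {"read_coils", "read_holding_registers"},
}

def allowed(src, dst, proto, port, command):
    if (src, dst, proto, port) not in NETWORK_WHITELIST:
        return False                              # unknown flow: flag as attack traffic
    return command in APPLICATION_WHITELIST.get((src, dst), set())

print(allowed("10.0.0.5", "10.0.0.9", "tcp", 502, "read_coils"))    # True
print(allowed("10.0.0.5", "10.0.0.9", "tcp", 502, "write_coil"))    # False: not whitelisted
```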

  • Mode-Matching Analysis of a Coaxially-Driven Finite Monopole Based on a Variable Bound Approach

    Young Seung LEE  Seung Keun PARK  

     
    PAPER-Antennas and Propagation
    Vol: E96-B No:4, Page(s): 994-1000

    The problem of a finite monopole antenna driven by a coaxial cable is revisited. On the basis of a variable bound approach, the radiated field around a monopole antenna can be represented in terms of the discrete modal summation. This theoretical model allows us to avoid the difficulties experienced when dealing with integral equations having different wavenumber spectra and ensures a solution in a convergent series form so that it is numerically efficient. The behaviors of the input admittance and the current distribution to characterize the monopole antenna are shown for different coaxial-antenna geometries and also compared with other existing results.

  • Outage Channel Capacity of Direct/Cooperative AF Relay Switched SC-FDMA Using Spectrum Division/Adaptive Subcarrier Allocation

    Masayuki NAKADA  Tatsunori OBARA  Tetsuya YAMAMOTO  Fumiyuki ADACHI  

     
    PAPER-Wireless Communication Technologies
    Vol: E96-B No:4, Page(s): 1001-1011

    In this paper, a direct/cooperative relay-switched single-carrier frequency-division multiple access (SC-FDMA) scheme using the amplify-and-forward (AF) protocol and spectrum division/adaptive subcarrier allocation (SDASA) is proposed. With SDASA, the transmit SC signal spectrum is divided into sub-blocks, and a different set of subcarriers (resource block) is adaptively allocated to each sub-block according to the channel conditions of the mobile terminal (MT)-relay station (RS), RS-base station (BS), and MT-BS links. Cooperative relaying does not always provide higher capacity than direct communication, so switching between direct communication and cooperative relaying is performed depending on the channel conditions of the MT-RS, RS-BS, and MT-BS links. We evaluate the achievable channel capacity by Monte Carlo numerical computation. It is shown that the proposed scheme can reduce the transmit power by about 6.0 (2.0) dB compared to direct communication (cooperative AF relaying) for a 1%-outage capacity of 3.0 bps/Hz.

  • Robust and Accurate Image Expansion Algorithm Based on Double Scattered Range Points Migration for UWB Imaging Radars

    Shouhei KIDERA  Tetsuo KIRIMOTO  

     
    PAPER-Sensing
    Vol: E96-B No:4, Page(s): 1061-1069

    UWB (Ultra Wideband) radar offers great promise for advanced near field sensors due to its high range resolution. In particular, it is suitable for rescue or resource exploration robots, which need to identify a target in low visibility or acoustically harsh environments. Recently, radar algorithms that actively coordinate multiple scattered components have been developed to enhance the imaging range beyond what can be achieved by synthesizing a single scattered component. Although we previously developed an accurate algorithm for imaging shadow regions with low computational complexity using derivatives of observed ranges for double scattered signals, the algorithm yields inaccurate images under the severe interference situations that occur with complex-shaped or multiple objects or in noisy environments. This is because small range fluctuations arising from interference or random noises can produce non-negligible image degradation owing to inaccuracy in the range derivative calculation. As a solution to this difficulty, this paper proposes a novel imaging algorithm that does not use the range derivatives of doubly scattered signals, and instead extracts a feature of expansive distributions of the observed ranges, using a unique property inherent to the doubly scattering mechanism. Numerical simulation examples of complex-shaped or multiple targets are presented to demonstrate the distinct advantage of the proposed algorithm which creates more accurate images even for complicated objects or in noisy situations.

  • FOREWORD Open Access

    Masahiko YOSHIMOTO  

     
    FOREWORD
    Vol: E96-C No:4, Page(s): 403-403