
Keyword Search Results

Showing 9401-9420 of 42807 hits for [Keyword]

  • Signal Detection for EM-Based Iterative Receivers in MIMO-OFDM Mobile Communications

    Kazushi MURAOKA  Kazuhiko FUKAWA  Hiroshi SUZUKI  Satoshi SUYAMA  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:11
      Page(s):
    2480-2490

    Joint signal detection and channel estimation based on the expectation-maximization (EM) algorithm has been investigated for multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) mobile communications over fast-fading channels. The previous work in [20] developed a channel estimation method suited to the EM-based iterative receiver. However, it remained possible for unreliable received signals to be used repeatedly during the iterative process. To improve the EM-based iterative receiver further, this paper proposes spatial removal, derived from the perspective of a message-passing algorithm on factor graphs. Spatial removal estimates the channel of a target antenna using detected signals obtained from the received signals of all antennas other than the target, which avoids the repeated use of unreliable received signals in consecutive signal detection and channel estimation. Appropriate applications of spatial removal are also discussed so as to exploit both the removal effect and spatial diversity. Computer simulations under fast-fading conditions demonstrate that these applications improve the packet error rate (PER) of the EM-based receiver thanks to both the removal effect and spatial diversity.
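
    The leave-one-out structure behind spatial removal can be sketched in a few lines. This is an illustrative least-squares toy, not the authors' EM receiver; the channel, symbols, and noise level below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_sym = 4, 64
h_true = rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)
s = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n_sym))   # QPSK symbols
noise = 0.1 * (rng.standard_normal((n_rx, n_sym))
               + 1j * rng.standard_normal((n_rx, n_sym)))
y = h_true[:, None] * s[None, :] + noise                   # received signals

def estimate_channel(y, s_detected, target):
    """Least-squares channel estimate for one antenna. In the spatial-removal
    idea, s_detected for the target antenna would be obtained from the received
    signals of all OTHER antennas, so the target's own (possibly unreliable)
    signal is not re-used; here we pass the true symbols for illustration."""
    return (y[target] @ np.conj(s_detected)) / np.real(s_detected @ np.conj(s_detected))

h_hat = np.array([estimate_channel(y, s, k) for k in range(n_rx)])
print(np.abs(h_hat - h_true).max())    # small estimation error
```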

  • Spatial Division Transmission without Signal Processing for MIMO Detection Utilizing Two-Ray Fading

    Ken HIRAGA  Kazumitsu SAKAMOTO  Maki ARAI  Tomohiro SEKI  Tadao NAKAGAWA  Kazuhiro UEHARA  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E97-B No:11
      Page(s):
    2491-2501

    This paper presents a spatial division (SD) transmission method based on two-ray fading that dispenses with the high signal-processing cost of multiple-input and multiple-output (MIMO) detection and with narrow-beamwidth antennas. We show the optimum array geometries as functions of the transmission distance to provide a concrete array design method. Moreover, we clarify the achievable channel capacity considering reflection coefficients that depend on the polarization, incident angle, and dielectric constant. When the ground surface is conductive, the channel capacity for two- and three-element arrays is doubled and tripled, respectively, over that of free-space propagation. We also clarify the application limit of this method for a dielectric ground by analyzing the channel capacity's dependence on the dielectric constant. With this method, the increased channel capacity of SD transmission can be obtained merely by placing the antennas of wireless transceiver sets that have only SISO (single-input and single-output) capability in a two-ray propagation environment. Using formulations presented in this paper for the first time, and discussing the adoption of polarization multiplexing, we clarify the antenna geometries of SD transmission systems using polarization multiplexing for up to six streams.

  • Correction of Dechirp Distortion in Long-Distance Target Imaging with LFMCW-ISAR

    Wen CHANG  Zenghui LI  Jian YANG  Chunmao YEH  

     
    PAPER-Sensing

      Vol:
    E97-B No:11
      Page(s):
    2552-2559

    Combining linear frequency modulation continuous wave (LFMCW) radar with inverse synthetic aperture radar (ISAR) is well suited to imaging long-distance targets thanks to its long range and high resolution. In this paper, we identify and study the dechirp distortion phenomenon (DDP) that arises when imaging long-distance targets with a dechirp-on-receive LFMCW radar. When the targets are very far from the radar, the maximum delay time is no longer much smaller than a single sweep duration, and dechirp distortion is triggered because the distance of the target is unknown in an LFMCW-ISAR system. DDP cannot be ignored in long-distance imaging because double images of a target appear in the frequency domain, which reduces resolution and degrades image quality. A novel LFMCW-ISAR signal model is established to analyze DDP and its negative effects on long-distance target imaging. Using the proportionately distributed energy of the double images, the authors propose a method to correct the dechirp distortion. The applicable scope of the proposed method is also discussed. Simulation results validate the theoretical analysis and the effectiveness of the proposed method.
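
    The dechirp geometry behind this distortion can be illustrated with simple arithmetic. The radar parameters below are assumed for illustration and are not taken from the paper.

```python
# Illustrative LFMCW parameters (assumptions, not the paper's values)
c = 3e8                 # speed of light, m/s
B = 100e6               # sweep bandwidth, Hz
T = 1e-3                # sweep duration, s
K = B / T               # chirp slope, Hz/s

def beat_frequency(range_m):
    """Dechirp-on-receive maps round-trip delay to a beat frequency K*tau."""
    tau = 2 * range_m / c
    return K * tau

# Short range: the delay is a tiny fraction of the sweep, dechirp is clean.
print(beat_frequency(1e3))         # 1 km target

# Long range: the delay is no longer negligible versus the sweep duration,
# so part of each sweep mixes two chirp segments -- the source of the
# "double image" distortion the paper corrects.
tau_far = 2 * 15e3 / c             # 15 km target
print(tau_far / T)                 # non-negligible fraction of the sweep
```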

  • Thickness of Crystalline Layer of Rubbed Polyimide Film Characterized by Grazing Incidence X-ray Diffractions with Multi Incident Angles

    Ichiro HIROSAWA  Tomoyuki KOGANEZAWA  Hidenori ISHII  

     
    BRIEF PAPER

      Vol:
    E97-C No:11
      Page(s):
    1089-1092

    The thickness of the crystalline layer induced by annealing after rubbing at the surface of a polyimide film for liquid crystal displays was estimated to be 3-5 nm by grazing-incidence X-ray diffraction with multiple incident angles. The agreement between the thickness of the crystalline layer and that of the initially oriented layer suggests that the polymer orientation induced by rubbing precedes crystallization by annealing. Furthermore, no in-plane smectic ordering was found in the bottom 20 nm region of the polyimide film.

  • An Accident Severity Classification Model Based on Multi-Objective Particle Swarm Optimization

    Chunlu WANG  Chenye QIU  Xingquan ZUO  Chuanyi LIU  

     
    PAPER-Artificial Intelligence, Data Mining

      Vol:
    E97-D No:11
      Page(s):
    2863-2871

    Reducing accident severity is an effective way to improve road safety. The literature on accident severity analysis has two main shortcomings: most studies measure classifier quality by classification accuracy, which is not appropriate for unbalanced datasets, and the results are often difficult for users to interpret. To address these drawbacks, a novel multi-objective particle swarm optimization (MOPSO) method is proposed to identify the contributing factors that affect accident severity. By employing the Pareto dominance concept, MOPSO automatically obtains a set of Pareto optimal rules, without any pre-defined thresholds or variables, and the rules are then used to form a non-ordered classifier. The accident data of Beijing between 2008 and 2010 are used to build the model. The proposed approach is compared with several rule learning algorithms. The results show that it generates a set of accurate and comprehensible rules that indicate the relationship between risk factors and accident severity.
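
    The Pareto dominance test that underpins this kind of rule selection can be sketched as follows; the rule objectives shown are hypothetical, not from the paper's dataset.

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective and
    strictly better in at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical rules scored by two error objectives (lower is better),
# e.g. (1 - sensitivity, 1 - specificity)
rules = [(0.2, 0.4), (0.3, 0.3), (0.25, 0.5), (0.1, 0.6)]
print(pareto_front(rules))   # (0.25, 0.5) is dominated by (0.2, 0.4)
```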

  • Reducing Speech Noise for Patients with Dysarthria in Noisy Environments

    Woo Kyeong SEONG  Ji Hun PARK  Hong Kook KIM  

     
    PAPER-Speech and Hearing

      Vol:
    E97-D No:11
      Page(s):
    2881-2887

    Dysarthric speech results from damage to the central nervous system involving the articulator, which can mainly be characterized by poor articulation due to irregular sub-glottal pressure, loudness bursts, phoneme elongation, and unexpected pauses during utterances. Since dysarthric speakers have physical disabilities due to the impairment of their nervous system, they cannot easily control electronic devices. For this reason, automatic speech recognition (ASR) can be a convenient interface for dysarthric speakers to control electronic devices. However, the performance of dysarthric ASR severely degrades when there is background noise. Thus, in this paper, we propose a noise reduction method that improves the performance of dysarthric ASR. The proposed method selectively applies either a Wiener filtering algorithm or a Kalman filtering algorithm according to the result of voiced or unvoiced classification. Then, the performance of the proposed method is compared to a conventional Wiener filtering method in terms of ASR accuracy.
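
    A minimal sketch of the selective-filtering idea follows, assuming a toy energy/zero-crossing voiced-unvoiced decision. The paper's actual classifier, and which filter it assigns to which class, are not specified here; both the thresholds and the voiced-to-Kalman mapping are assumptions.

```python
import numpy as np

def wiener_gain(snr_prior):
    """Classic per-bin Wiener filter gain: H = SNR / (1 + SNR)."""
    return snr_prior / (1.0 + snr_prior)

def select_filter(frame, energy_thresh=0.01, zcr_thresh=0.3):
    """Toy voiced/unvoiced decision from frame energy and zero-crossing
    rate; real systems use more robust classifiers."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    voiced = energy > energy_thresh and zcr < zcr_thresh
    return "kalman" if voiced else "wiener"   # assignment is an assumption

t = np.linspace(0, 0.02, 320, endpoint=False)
voiced_frame = 0.5 * np.sin(2 * np.pi * 150 * t)   # periodic, low ZCR
rng = np.random.default_rng(1)
unvoiced_frame = 0.05 * rng.standard_normal(320)   # noise-like, high ZCR
print(select_filter(voiced_frame), select_filter(unvoiced_frame))
```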

  • Partial Volume Correction on ASL-MRI and Its Application on Alzheimer's Disease Diagnosis

    Wenji YANG  Wei HUANG  Shanxue CHEN  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E97-D No:11
      Page(s):
    2912-2918

    Arterial spin labeling (ASL) is a non-invasive magnetic resonance imaging (MRI) method that can provide direct and quantitative measurements of cerebral blood flow (CBF) of scanned patients. ASL can be utilized as an imaging modality to detect Alzheimer's disease (AD), as brain atrophy of AD patients can be revealed by low CBF values in certain brain regions. However, partial volume effects (PVE), which is mainly caused by signal cross-contamination due to voxel heterogeneity and limited spatial resolution of ASL images, often prevents CBF in ASL from being precisely measured. In this study, a novel PVE correction method is proposed based on pixel-wise voxels in ASL images; it can well handle with the existing problems of blurring and loss of brain details in conventional PVE correction methods. Dozens of comparison experiments and statistical analysis also suggest that the proposed method is superior to other PVE correction methods in AD diagnosis based on real patients data.

  • Evaluation of Agile Software Development Method for Carrier Cloud Service Platform Development

    Yoji YAMATO  Naoko SHIGEMATSU  Norihiro MIURA  

     
    LETTER-Software Engineering

      Publicized:
    2014/08/19
      Vol:
    E97-D No:11
      Page(s):
    2959-2962

    In this paper, we evaluate an agile software development method for carrier Cloud service platform development. Agile software development is generally said to suit small-scale projects, but we adopted it for a development effort with more than 30 members. When we adopted agile software development, we enabled automatic regression tests for each iteration so that we could launch our Cloud service sufficiently quickly. We compared and evaluated software reliability growth curves, regression test effort, and bug causes against waterfall development.

  • A Fixed-Point Global Tone Mapping Operation for HDR Images in the RGBE Format

    Toshiyuki DOBASHI  Tatsuya MUROFUSHI  Masahiro IWAHASHI  Hitoshi KIYA  

     
    PAPER

      Vol:
    E97-A No:11
      Page(s):
    2147-2153

    A global tone mapping operation (TMO) for high dynamic range (HDR) images with fixed-point arithmetic is proposed and evaluated in this paper. A TMO generates a low dynamic range (LDR) image from an HDR image by compressing its dynamic range. Since an HDR image is generally expressed in a floating-point data format, a TMO must also handle floating-point data even though the resultant LDR image is integer data. The proposed method treats a floating-point number as two 8-bit integers corresponding to the exponent part and the mantissa part, and applies tone mapping to these integers separately. Moreover, the method conducts all calculations in the tone mapping with fixed-point arithmetic only. As a result, the method reduces both memory and computational costs. The evaluation shows that the proposed method reduces memory usage by 81.25%. The experimental results show that the proposed method with fixed-point arithmetic is 23.1 times faster than the conventional method with floating-point arithmetic. Furthermore, the PSNR of LDR images obtained by the proposed method is comparable to that of the conventional method, despite the reduced computational and memory costs.
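
    The exponent/mantissa split at the heart of such a method can be sketched as follows. This is a generic RGBE-style shared-exponent decomposition for a positive value, not the paper's exact fixed-point pipeline.

```python
import math

def to_exp_mant(x):
    """Split a positive float into an 8-bit mantissa and an exponent,
    in the spirit of the RGBE format: x ~= (m / 256) * 2**e."""
    m, e = math.frexp(x)            # x = m * 2**e with m in [0.5, 1)
    return int(m * 256), e          # 8-bit mantissa in [128, 255]

def from_exp_mant(m, e):
    """Reconstruct the approximate value from the two integer parts."""
    return (m / 256.0) * 2.0 ** e

x = 1234.56
m, e = to_exp_mant(x)
approx = from_exp_mant(m, e)
print(m, e, approx)                 # reconstruction within mantissa precision
```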

  • Distribution of Attention in Augmented Reality: Comparison between Binocular and Monocular Presentation Open Access

    Akihiko KITAMURA  Hiroshi NAITO  Takahiko KIMURA  Kazumitsu SHINOHARA  Takashi SASAKI  Haruhiko OKUMURA  

     
    INVITED PAPER

      Vol:
    E97-C No:11
      Page(s):
    1081-1088

    This study investigated the distribution of attention over frontal space in augmented reality (AR). We conducted two experiments comparing binocular and monocular observation of a presented AR image. According to a previous study, when participants observed an AR image in monocular presentation, they perceived it as more distant than in binocular vision. We therefore predicted that attention would need to shift between the AR image and the background in the binocular observation but not in the monocular one, enabling an observer to distribute visual attention across a wider space under monocular observation. In the experiments, participants performed two tasks concurrently to measure the size of the useful field of view (UFOV). One was letter/number discrimination with an AR image presented in the central field of view (the central task); the other was luminance change detection with dots presented in the peripheral field of view (the peripheral task). A depth difference existed between the AR image and the location of the peripheral task in Experiment 1 but not in Experiment 2. The results of Experiment 1 indicated that the UFOV became wider in the monocular observation than in the binocular observation. In Experiment 2, the size of the UFOV in the monocular observation was equivalent to that in the binocular observation. It becomes difficult for a participant to observe stimuli on the background under binocular observation when there is a depth difference between the AR image and the background. These results indicate that monocular presentation in AR is superior to binocular presentation, and that even under the conditions most favorable to binocular presentation, monocular presentation is equivalent in terms of the UFOV.

  • Trojan Vulnerability Map: An Efficient Metric for Modeling and Improving the Security Level of Hardware

    Mahmoud BAKHSHIZADEH  Ali JAHANIAN  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E97-A No:11
      Page(s):
    2218-2226

    Hardware Trojans and other unwanted hardware modifications are considered a major challenge in many commercial and secure applications, and their detection and prevention have emerged as important requirements in such systems. In this paper, a new concept, the Trojan Vulnerability Map, is introduced to model the immunity of various regions of hardware against hardware attacks. Placement and routing algorithms are then proposed that use the Trojan Vulnerability Map to improve the immunity of the hardware. Experimental results show that the proposed placement and routing algorithms reduce hardware vulnerability by 25.65% and 4.08%, respectively. These benefits come at the cost of negligible total wire length and delay overhead.

  • A Copyright- and Privacy-Protected Image Trading System Using Fingerprinting in Discrete Wavelet Domain with JPEG 2000

    Wannida SAE-TANG  Shenchuan LIU  Masaaki FUJIYOSHI  Hitoshi KIYA  

     
    PAPER

      Vol:
    E97-A No:11
      Page(s):
    2107-2113

    In this paper, a compression-friendly copyright- and privacy-protected image trading system is proposed. In an image trading system, both the copyright of the image and the consumer's privacy are important; in addition, the system should remain compatible with existing image compression standards. In the proposed method, for privacy protection, the content provider (CP) multiplies the discrete wavelet transformed (DWTed) coefficients of an image by random signs to generate a visually encrypted image. The proposed visually protected images can be compressed efficiently using JPEG 2000, which also compresses images in the DWTed domain. For copyright protection, the trusted third party (TTP) applies digital fingerprinting to the image in the encrypted domain. In the conventional system, the amplitude-only image (AOI), i.e., the inversely transformed amplitude spectra of an image, is used for privacy protection. Since the AOI consists of real numbers, it has to be quantized before compression for storage and transmission, so quantization errors cannot be avoided in the conventional system. The proposed method instead applies the digital fingerprint in the DWTed domain, so clipping errors when the TTP decodes the image are avoided. In addition, only a seed number input to a pseudo random number generator is shared between the CP and the consumer, whereas an extra image must be shared in the conventional systems. Experimental results show that the proposed system is efficient in terms of privacy protection, compression performance, quality of fingerprinted images, and correct fingerprint extraction.

  • FOREWORD

    Hiroshi YASUKAWA  Akira ASANO  

     
    FOREWORD

      Vol:
    E97-A No:11
      Page(s):
    2095-2096
  • Research and Modeling on Performance Evaluation of IEEE 802.15.6

    Yali WANG  Lan CHEN  Chao LV  

     
    PAPER-Network

      Vol:
    E97-B No:11
      Page(s):
    2378-2385

    IEEE 802.15.6 provides PHY and MAC layer profiles for wearable and implanted Wireless Body Area Networks (WBANs). The critical requirements of QoS guarantees and ultra-low power are severe challenges when implementing IEEE 802.15.6. In this paper, the key problem in the IEEE 802.15.6 standard of how to allocate the EAP (Exclusive Access Phase) is addressed for the first time. An analysis of network performance indicates that allocating too much EAP does not noticeably or effectively improve traffic performance. However, since EAP allocation plays an important role in guaranteeing quality of service, a customized and quantitative EAP allocation solution is proposed. Simulation results show that the solution obtains the optimal network performance. Furthermore, estimation models for delay and energy are developed, which help to design a WBAN according to application requirements and to analyze network performance according to traffic characteristics. The models are simple, effective, and reasonably accurate: compared with NS2 simulations of IEEE 802.15.6, their means closely match and the correlation coefficient exceeds 0.95. This work solves crucial practical problems in using IEEE 802.15.6 and will promote the wide application of WBANs.

  • Cross-Dialectal Voice Conversion with Neural Networks

    Weixun GAO  Qiying CAO  Yao QIAN  

     
    PAPER-Speech and Hearing

      Vol:
    E97-D No:11
      Page(s):
    2872-2880

    In this paper, we use neural networks (NNs) for cross-dialectal (Mandarin-Shanghainese) voice conversion using recordings of bi-dialectal speakers. The system employs a nonlinear mapping function, trained on parallel Mandarin features of the source and target speakers, to convert the source speaker's Shanghainese features to those of the target speaker. This study investigates three training aspects: a) frequency warping, which is supposed to be language independent; b) pre-training, which drives the weights to a better starting point than random initialization and can be regarded as unsupervised feature learning; and c) sequence training, which minimizes sequence-level errors and matches the objectives used in training and conversion. Experimental results show that the performance of cross-dialectal voice conversion is close to that of intra-dialectal conversion. This benefit likely comes from the strong learning capabilities of NNs, e.g., exploiting feature correlations between the fundamental frequency (F0) and the spectrum. Both objective measures, log spectral distortion (LSD) and the root mean squared error (RMSE) of F0, show that pre-training and sequence training outperform frame-level mean square error (MSE) training. The naturalness of the converted Shanghainese speech and its similarity to the target Mandarin speech are significantly improved.

  • MVP-Cache: A Multi-Banked Cache Memory for Energy-Efficient Vector Processing of Multimedia Applications

    Ye GAO  Masayuki SATO  Ryusuke EGAWA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  

     
    PAPER-Computer System

      Publicized:
    2014/08/22
      Vol:
    E97-D No:11
      Page(s):
    2835-2843

    Vector processors have significant advantages for next-generation multimedia applications (MMAs). One advantage is that vector processors can achieve high data transfer performance by using a high-bandwidth memory sub-system, resulting in high sustained computing performance. However, a high-bandwidth memory sub-system usually incurs enormous costs in chip area, power, and energy consumption. These costs are too expensive for commodity computer systems, which are the main execution platform of MMAs. This paper proposes a new multi-banked cache memory for commodity computer systems, called MVP-cache, to expand the potential of vector architectures on MMAs. Unlike conventional multi-banked cache memories, which employ one tag array and one data array per sub-cache, MVP-cache associates one tag array with multiple independent data arrays of small-sized cache lines. In this way, MVP-cache reduces the static power consumption of its tag arrays. MVP-cache also achieves high efficiency on short vector data transfers because the data transfers of each data array are controlled independently, improving transfer flexibility.

  • Parallelization of Dynamic Time Warping on a Heterogeneous Platform

    Yao ZHENG  Limin XIAO  Wenqi TANG  Lihong SHANG  Guangchao YAO  Li RUAN  

     
    LETTER-Algorithms and Data Structures

      Vol:
    E97-A No:11
      Page(s):
    2258-2262

    The dynamic time warping (DTW) algorithm is widely used in time series similarity search. As DTW has quadratic time complexity, the time taken for similarity search is the bottleneck for virtually all time series data mining algorithms. In this paper, we present a parallel approach to DTW on a heterogeneous platform with a graphics processing unit (GPU). To exploit fine-grained data-level parallelism, we propose a specific parallel decomposition of DTW. Furthermore, we introduce an optimization technique called diamond tiling to improve the utilization of threads. Results show that our approach substantially reduces computational time.
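
    For reference, the quadratic-time DTW recurrence being parallelized looks like this; cells on the same anti-diagonal are mutually independent, which is what GPU decompositions such as the letter's diamond tiling exploit.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # D[i, j] depends only on the previous row/column/diagonal cell,
            # so all cells with the same i + j can be computed in parallel.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))   # warping absorbs the repeated 2
```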

  • FOREWORD Open Access

    Katsuhiro SHIMANO  

     
    FOREWORD

      Vol:
    E97-B No:11
      Page(s):
    2251-2251
  • Multi-Access Selection Algorithm Based on Joint Utility Optimization for the Fusion of Heterogeneous Wireless Networks

    Lina ZHANG  Qi ZHU  Shasha ZHAO  

     
    PAPER

      Vol:
    E97-B No:11
      Page(s):
    2269-2277

    Network selection is one of the hot issues in the fusion of heterogeneous wireless networks (HWNs). However, most previous works consider selecting only a single access network, which wastes the other available network resources, and rarely take multi-access into account. To make full use of the available coexisting networks, this paper proposes a novel multi-access selection algorithm based on joint utility optimization for users with multi-mode terminals. First, the algorithm adopts the exponential smoothing method (ESM) to obtain smoothed values of the received signal strength (RSS). We then derive a network joint utility function under constraints on bandwidth and the number of networks, trading off network benefit against cost. Finally, Lagrange multiplier and dual optimization methods are used to maximize the joint utility, and users select multiple networks according to the optimal user-network association matrix. The simulation results show that the proposed algorithm optimizes the network joint utility, improves throughput, effectively reduces the number of vertical handoffs, and ensures Quality of Service (QoS).
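
    The ESM smoothing step can be sketched in a few lines; the smoothing factor and the RSS samples below are illustrative, not taken from the paper.

```python
def exp_smooth(samples, alpha=0.3):
    """First-order exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    Damps short-term RSS fluctuations before network selection."""
    s = samples[0]
    out = [s]
    for x in samples[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

rss = [-70, -90, -68, -72, -69]   # dBm, with one deep fade at the 2nd sample
print(exp_smooth(rss))            # the fade is damped rather than followed
```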

  • Real-Time MAC Protocol Based on Coding-Black-Burst in Wireless Sensor Networks

    Feng YU  Lei WANG  Dan GAO  Yingguan WANG  Xiaolin ZHANG  

     
    LETTER-Communication Theory and Signals

      Vol:
    E97-A No:11
      Page(s):
    2279-2282

    In this paper, a real-time medium access control (MAC) protocol based on a coding-black-burst mechanism with low latency and high energy efficiency is proposed for wireless sensor networks. The Black-Burst (BB) mechanism is used to provide real-time access; however, when the traffic load is heavy, BB causes considerable energy loss and latency because of its large length. In our coding-black-burst-based protocol, a binary coding mechanism is applied to BB to reduce the energy consumption and latency by at least L-2(log2 L+1) for an L-length BB. The new mechanism also gives priority in channel access to real-time traffic with longer waiting delays. Theoretical analysis and simulation results indicate that our protocol provides low end-to-end delay and high energy efficiency for real-time communication.
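
    The claimed saving can be checked with simple arithmetic. The slot accounting below is a hedged reading of the letter's 2(log2 L + 1) figure, not its exact protocol timing.

```python
import math

def plain_bb_slots(priority):
    """Uncoded black-burst: the burst length encodes priority directly,
    so a contention can occupy up to L slots."""
    return priority

def coded_bb_slots(L):
    """Binary-coded BB: roughly log2(L) bits, each signalled in about
    two slots -- one reading of the letter's 2*(log2 L + 1) figure."""
    return 2 * (math.log2(L) + 1)

L = 64
saving = L - coded_bb_slots(L)    # the letter's lower bound L - 2(log2 L + 1)
print(coded_bb_slots(L), saving)  # 14 coded slots versus up to 64 uncoded
```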
