
Keyword Search Result

[Keyword] Al (20498 hits)

4601-4620 hits (20498 hits)

  • Model-Based Contract Testing of Graphical User Interfaces

    Tugkan TUGLULAR  Arda MUFTUOGLU  Fevzi BELLI  Michael LINSCHULTE  

     
    PAPER-Software Engineering

      Publicized:
    2015/03/19
      Vol:
    E98-D No:7
      Page(s):
    1297-1305

    Graphical User Interfaces (GUIs) are critical for the security, safety, and reliability of software systems. Injection attacks, for instance via SQL, succeed because of insufficient input validation and can be avoided if contract-based approaches, such as Design by Contract, are followed in the software development lifecycle of GUIs. This paper proposes a model-based testing approach for detecting GUI data contract violations, which may result in serious failures such as system crashes. A contract-based model of GUI data specifications is used to develop test scenarios and to serve as a test oracle. The technique introduced uses multi-terminal binary decision diagrams, which are designed as an integral part of decision-table-augmented event sequence graphs, to implement a GUI testing process. A case study, which validates the presented approach on a port scanner written in the Java programming language, is presented.
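    Contract-based input validation of the kind the abstract credits with blocking injection attacks can be sketched as a simple precondition check. This is a minimal illustration, not the authors' technique; the `set_port_field` name and the port-number field are hypothetical examples.

```python
def require(predicate, message):
    """Raise a contract violation when a precondition does not hold."""
    if not predicate:
        raise ValueError("precondition violated: " + message)

def set_port_field(value: str) -> int:
    # Contract for a hypothetical GUI port-number field:
    # the input must be numeric and within the valid TCP port range.
    require(value.isdigit(), "input must be numeric")
    port = int(value)
    require(1 <= port <= 65535, "port must be in 1..65535")
    return port
```

    A model-based tester can derive both satisfying and violating inputs from such contracts, using the contract itself as the test oracle.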

  • Design of q-Parallel LFSR-Based Syndrome Generator

    Seung-Youl KIM  Kyoung-Rok CHO  Je-Hoon LEE  

     
    BRIEF PAPER

      Vol:
    E98-C No:7
      Page(s):
    594-596

    This paper presents a new parallel architecture for the syndrome generator of a high-speed BCH (Bose-Chaudhuri-Hocquenghem) decoder. In particular, the proposed parallel syndrome generators are based on an LFSR (linear feedback shift register) architecture to achieve high throughput without significant area overhead. From the experimental results, the proposed approach achieves 4.60 Gbps using 0.25-µm standard CMOS technology, which is much faster than the conventional byte-wise GFM-based counterpart. The high throughput is due to the well-tuned hardware implementation using an unfolding transformation.
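    The bit-serial LFSR that such syndrome generators parallelize performs polynomial division over GF(2); a q-parallel design unrolls q of these clock steps into combinational logic. The sketch below uses the illustrative generator g(x) = x⁴ + x + 1 of the (15,11) binary BCH/Hamming code, not the code from the paper.

```python
def lfsr_remainder(bits, g_full=0b10011, deg=4):
    """Bit-serial polynomial division over GF(2), MSB first.

    Each loop iteration models one LFSR clock; a q-parallel syndrome
    generator would unroll q of these steps per clock cycle.
    """
    state = 0
    for b in bits:
        state = (state << 1) | b
        if (state >> deg) & 1:
            state ^= g_full  # subtract (XOR) the generator polynomial
    return state

# Systematic encoding: append the remainder of m(x)*x^4 as parity bits.
msg = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]          # 11 message bits
rem = lfsr_remainder(msg + [0] * 4)
parity = [(rem >> i) & 1 for i in range(3, -1, -1)]
codeword = msg + parity                           # 15-bit codeword
```

    A valid codeword leaves the register at zero (zero syndrome), while a single bit error yields a nonzero syndrome that the decoder can act on.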

  • Accurate Coherent Change Detection Method Based on Pauli Decomposition for Fully Polarimetric SAR Imagery

    Ryo OYAMA  Shouhei KIDERA  Tetsuo KIRIMOTO  

     
    PAPER-Sensing

      Vol:
    E98-B No:7
      Page(s):
    1390-1395

    Microwave imaging techniques, particularly synthetic aperture radar (SAR), produce high-resolution terrain surface images regardless of weather conditions. Focusing on a feature of complex SAR images, coherent change detection (CCD) approaches have been developed in recent decades that can detect invisible changes in the same region by applying phase interferometry to pairs of complex SAR images. On the other hand, various techniques for polarimetric SAR (PolSAR) image analysis have been developed, since fully polarimetric data often include valuable information that cannot be obtained from single-polarization observations. Against this background, various coherent change detection methods based on fully polarimetric data have been proposed. However, the detection accuracies of these methods often degrade in low signal-to-noise ratio (SNR) situations because the signal levels of cross-polarized components are lower than those of co-polarized ones. To overcome this problem, this paper proposes a novel CCD method that introduces the Pauli decomposition and weights each component by its respective SNR. Experimental data obtained in an anechoic chamber show that the proposed method significantly enhances the receiver operating characteristic (ROC) performance compared with a conventional approach.
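    The two building blocks the abstract names, the Pauli decomposition of the scattering matrix and the sample coherence between two acquisitions, can be sketched as follows. The scattering values are made-up toy data; the full method's SNR weighting and detection threshold are not reproduced here.

```python
def pauli_vector(s_hh, s_vv, s_hv):
    """Pauli scattering vector: surface, double-bounce, volume components."""
    c = 1 / 2 ** 0.5
    return [c * (s_hh + s_vv), c * (s_hh - s_vv), c * 2 * s_hv]

def coherence(k1, k2):
    """Sample coherence between two complex pixel sequences (0..1)."""
    num = abs(sum(a * b.conjugate() for a, b in zip(k1, k2)))
    den = (sum(abs(a) ** 2 for a in k1) * sum(abs(b) ** 2 for b in k2)) ** 0.5
    return num / den

# Toy pixels: (S_hh, S_vv, S_hv) per pixel, two acquisitions of an unchanged scene.
before = [(1 + 1j, 0.5 + 0j, 0.1 + 0.2j), (0.8 - 0.3j, 0.4 + 0.1j, 0.05 + 0j)]
after = before
k_before = [pauli_vector(*p) for p in before]
k_after = [pauli_vector(*p) for p in after]
# Coherence per Pauli channel; an unchanged scene gives coherence 1.
gammas = [coherence([k[c] for k in k_before], [k[c] for k in k_after])
          for c in range(3)]
```

    In a CCD map, coherence near 1 indicates no change; the proposed method would combine the three per-channel coherences with SNR-dependent weights.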

  • Automatic Detection of the Carotid Artery Location from Volumetric Ultrasound Images Using Anatomical Position-Dependent LBP Features

    Fumi KAWAI  Satoshi KONDO  Keisuke HAYATA  Jun OHMIYA  Kiyoko ISHIKAWA  Masahiro YAMAMOTO  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2015/04/13
      Vol:
    E98-D No:7
      Page(s):
    1353-1364

    We propose a fully automatic method for detecting the carotid artery in volumetric ultrasound images as a preprocessing stage for building three-dimensional images of the structure of the carotid artery. The proposed detector utilizes support vector machine classifiers to discriminate between carotid-artery and non-carotid-artery images using two kinds of LBP-based features, switching between these features depending on the anatomical position along the carotid artery. We evaluate our proposed method using actual clinical cases. The detection accuracies are 100%, 87.5%, and 68.8% for the common, internal, and external carotid artery sections, respectively.
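    The LBP (local binary pattern) features the detector builds on encode each pixel's 3×3 neighborhood as an 8-bit code. A minimal sketch of the basic operator (the paper's position-dependent variants are not reproduced):

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: each neighbor, taken clockwise from
    the top-left corner, contributes a 1 bit when it is >= the center."""
    center = patch[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise neighbor order
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code
```

    Histograms of these codes over an image region form the feature vector fed to the SVM classifiers.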

  • Software Maintenance Evaluation of Agile Software Development Method Based on OpenStack

    Yoji YAMATO  Shinichiro KATSURAGI  Shinji NAGAO  Norihiro MIURA  

     
    LETTER-Software Engineering

      Publicized:
    2015/04/20
      Vol:
    E98-D No:7
      Page(s):
    1377-1380

    We evaluated the software maintenance of an open-source cloud platform system we developed using an agile software development method. We previously reported a rapid service launch using the agile software development method in spite of large-scale development. For this study, we analyzed inquiries and the defect removal efficiency of our recently developed software throughout one year of operation. We found that the defect removal efficiency was 98%, which indicates that we achieved sufficient quality in spite of large-scale agile development. In terms of the maintenance process, we answered all inquiries within three business days and conducted version upgrades quickly. Thus, we conclude that software maintenance under agile software development is effective.
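    Defect removal efficiency (DRE) is the share of all defects caught before release. A minimal sketch of the metric; the counts below are hypothetical numbers chosen only to be consistent with the reported 98% figure, not the paper's data.

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE (%): defects removed before release over all defects found."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

# Hypothetical counts: 98 defects removed during development,
# 2 reported during one year of operation.
dre = defect_removal_efficiency(98, 2)
```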

  • Time Difference Estimation Based on Blind Beamforming for Wideband Emitter

    Sen ZHONG  Wei XIA  Lingfeng ZHU  Zishu HE  

     
    LETTER-Dependable Computing

      Publicized:
    2015/04/13
      Vol:
    E98-D No:7
      Page(s):
    1386-1390

    In localization systems based on time difference of arrival (TDOA), multipath fading and interference sources deteriorate localization performance. To address this situation, TDOA estimation based on blind beamforming in the frequency domain is proposed. An additional constraint is designed for blind beamforming based on maximum power collecting (MPC). The relationship between the weight coefficients of the beamformer and the TDOA is revealed; according to this relationship, the TDOA is estimated by the discrete Fourier transform (DFT). The efficiency of the proposed estimator is demonstrated by simulation results.
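    The DFT-based step behind TDOA estimation can be illustrated with a classic identity: a circular delay appears as a linear phase in the spectrum, so the inverse DFT of the cross-spectrum is the circular cross-correlation, peaking at the delay. This is a generic sketch of that principle, not the paper's blind-beamforming estimator; the pulse signal is made up.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def estimate_tdoa(x, y):
    """Delay of y relative to x: the inverse DFT of conj(X[k]) * Y[k]
    is the circular cross-correlation, which peaks at the TDOA."""
    X, Y = dft(x), dft(y)
    r = idft([a.conjugate() * b for a, b in zip(X, Y)])
    return max(range(len(r)), key=lambda m: r[m].real)

x = [0.0] * 16
x[2], x[3], x[4] = 1.0, 2.0, 1.0          # a short pulse
d = 3
y = [x[(t - d) % 16] for t in range(16)]  # circularly delayed copy
```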

  • Learning Deep Dictionary for Hyperspectral Image Denoising

    Leigang HUO  Xiangchu FENG  Chunlei HUO  Chunhong PAN  

     
    LETTER-Pattern Recognition

      Publicized:
    2015/04/20
      Vol:
    E98-D No:7
      Page(s):
    1401-1404

    Using traditional single-layer dictionary learning methods, it is difficult to reveal the complex structures hidden in hyperspectral images. Motivated by deep learning techniques, a deep dictionary learning approach is proposed for hyperspectral image denoising, consisting of hierarchical dictionary learning, feature denoising, and fine-tuning. Hierarchical dictionary learning helps uncover the hidden factors in the spectral dimension, and fine-tuning helps preserve the spectral structure. Experiments demonstrate the effectiveness of the proposed approach.

  • Inter-Cell Interference Coordination Method Based on Coordinated Inter-Cell Interference Power Control in Uplink

    Kenichi HIGUCHI  Yoshiko SAITO  Seigo NAKAO  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Vol:
    E98-B No:7
      Page(s):
    1357-1362

    We propose an inter-cell interference coordination (ICIC) method that employs inter-cell coordinated transmission power control (TPC) based on inter-cell interference power in addition to conventional received signal power-based TPC in the cellular uplink. We assume orthogonal multiple-access as is used in 3GPP LTE. In the proposed method, an ICIC effect similar to that for conventional fractional frequency reuse (FFR) is obtained. This is achieved by coordinating the allowable inter-cell interference power level at the appropriate frequency blocks within the system bandwidth among neighboring cells in a semi-static manner. Different from conventional FFR, since all users within a cell can access all the frequency blocks, the reduction in multiuser diversity gain is abated. Computer simulation results show that the proposed method enhances both the cell-edge and average user throughput simultaneously compared to conventional universal frequency reuse (UFR) and FFR.

  • Method of Spread Spectrum Watermarking Using Quantization Index Modulation for Cropped Images

    Takahiro YAMAMOTO  Masaki KAWAMURA  

     
    PAPER-Data Engineering, Web Information Systems

      Publicized:
    2015/04/16
      Vol:
    E98-D No:7
      Page(s):
    1306-1315

    We propose a method of spread spectrum digital watermarking with quantization index modulation (QIM) and evaluate the method on the basis of the IHC evaluation criteria. The spread spectrum technique can make watermarks robust by using spread codes. Since watermarks can have redundancy, messages can be decoded from a degraded stego-image. Under the IHC evaluation criteria, it is necessary to decode the messages without the original image. To do so, we propose a method in which watermarks are generated by the spread spectrum technique and embedded by QIM, an embedding method that allows decoding without the original image. The IHC evaluation criteria include JPEG compression and cropping as attacks. JPEG compression is lossy, so errors occur in the watermarks. Since watermarks in stego-images fall out of synchronization due to cropping, the position of the embedded watermarks may be unclear, and detecting this position is needed during decoding. Therefore, both error correction and synchronization are required for digital watermarking methods. As a countermeasure against cropping, the original image is divided into segments to embed watermarks, and each segment is further divided into 8×8-pixel blocks. A watermark is embedded into a DCT coefficient of each block by QIM. To synchronize during decoding, the proposed method uses the correlation between watermarks and spread codes. After synchronization, watermarks are extracted by QIM, and messages are then estimated from the watermarks. The proposed method was evaluated on the basis of the IHC evaluation criteria, which require a PSNR higher than 30 dB. Ten 1920×1080 rectangular regions were cropped from each stego-image, and 200-bit messages were decoded from these regions; their BERs were calculated to assess the tolerance. As a result, the BERs were less than 1.0%, and the average PSNR was 46.70 dB, so our method achieved high image quality under the IHC evaluation criteria. In addition, the proposed method was evaluated using StirMark 4.0, which showed that it is robust not only to JPEG compression and cropping but also to additive noise and Gaussian filtering. Moreover, the method has the advantage that detection time is short, since synchronization is processed in 8×8-pixel blocks.
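    The QIM embedding step the abstract relies on quantizes a coefficient onto one of two interleaved lattices, offset by half the quantization step; the decoder recovers the bit without the original image by checking which lattice is closer. A minimal scalar sketch (the step size `delta` is an illustrative value, not one from the paper):

```python
def qim_embed(coeff, bit, delta=8.0):
    """Embed one bit by quantizing a coefficient onto one of two
    interleaved lattices, offset from each other by delta / 2."""
    offset = bit * delta / 2.0
    return delta * round((coeff - offset) / delta) + offset

def qim_decode(coeff, delta=8.0):
    """Decode blindly: pick the lattice whose nearest point is closer."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```

    Decoding stays correct as long as the distortion added by attacks such as JPEG compression stays below delta / 4, which is why the redundancy of the spread spectrum layer is still needed on top.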

  • RX v2: Renesas's New-Generation MCU Processor

    Sugako OTANI  Hiroyuki KONDO  

     
    PAPER

      Vol:
    E98-C No:7
      Page(s):
    544-549

    RXv2 is the new generation of Renesas's processor architecture for microcontrollers with high-capacity flash memory. An enhanced instruction set and a pipeline structure with an advanced fetch unit (AFU) provide an effective balance between power consumption and high processing performance. Enhanced instructions, such as DSP functions and floating-point operations, and a five-stage dual-issue pipeline synergistically boost the performance of digital signal applications: the RXv2 processor delivers 1.9-3.7 times the cycle performance of the RXv1 in these applications. The decrease in the number of flash memory accesses achieved by the AFU is the dominant factor in reducing power consumption. The AFU of RXv2 benefits from a branch target cache, which occupies a comparatively smaller area than typical cache systems. High code density also lowers power consumption by reducing instruction memory bandwidth: the implementation of RXv2 delivers up to a 46% reduction in static code size and up to a 30% reduction in dynamic code size relative to RISC architectures. RXv2 reaches 4.0 CoreMark per MHz, operates at up to 240 MHz, and delivers approximately 2.2-5.7 times the power efficiency of the RXv1. The RXv2 microprocessor achieves high computing performance in applications such as building automation, medical devices, motor control, e-metering, and home appliances, which demand higher memory capacity, frequency, and processing performance.

  • Visual Speech Recognition Using Weighted Dynamic Time Warping

    Kyungsun LEE  Minseok KEUM  David K. HAN  Hanseok KO  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2015/04/09
      Vol:
    E98-D No:7
      Page(s):
    1430-1433

    It is unclear whether Hidden Markov Model (HMM) or Dynamic Time Warping (DTW) mapping is more appropriate for visual speech recognition when only small data samples are available. In this letter, the two approaches are compared in terms of sensitivity to the amount of training samples and computing time, with the objective of determining the tipping point. The limited training data problem is addressed by exploiting straightforward template matching via weighted DTW. The proposed framework refines DTW by adjusting the warping paths with judiciously injected weights to ensure a smooth diagonal path for accurate alignment without added computational load. The proposed WDTW is evaluated on three databases (two in the public domain and one developed in-house) for visual recognition performance. Subsequent experiments indicate that the proposed WDTW significantly enhances the recognition rate compared to the DTW and HMM based algorithms, especially under limited data samples.
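    A weighted DTW of the general kind described can be sketched by scaling the step costs so that off-diagonal (insertion/deletion) moves are penalized slightly more than diagonal matches, biasing the warping path toward the diagonal. The weight values here are illustrative assumptions, not the letter's tuned weights.

```python
def wdtw(a, b, w_diag=1.0, w_side=1.05):
    """DTW distance with step weights; w_side > w_diag penalizes
    horizontal/vertical moves, nudging the path toward the diagonal."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = min(D[i - 1][j - 1] + w_diag * cost,
                          D[i - 1][j] + w_side * cost,
                          D[i][j - 1] + w_side * cost)
    return D[n][m]
```

    Since the weights only rescale costs inside the same dynamic program, the computational load is unchanged relative to plain DTW.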

  • Variable Data-Flow Graph for Lightweight Program Slicing and Visualization

    Yu KASHIMA  Takashi ISHIO  Shogo ETSUDA  Katsuro INOUE  

     
    PAPER-Software Engineering

      Publicized:
    2015/03/17
      Vol:
    E98-D No:6
      Page(s):
    1194-1205

    To understand the behavior of a program, developers often need to read source code fragments in various modules. System-dependence-graph-based (SDG) program slicing is a good candidate for supporting the investigation of data-flow paths among modules, as an SDG can show the data dependences of focused program elements. However, this technique has two problems. First, constructing an SDG requires heavyweight analysis, so SDGs are not suitable for daily use. Second, the results of SDG-based program slicing are difficult to visualize, as they contain many vertices. In this research, we propose variable data-flow graphs (VDFG) for use in program slicing techniques. In contrast to an SDG, a VDFG is created by lightweight analysis because several approximations are used. Furthermore, we propose using the fractal value to visualize VDFG-based program slices in order to reduce graph complexity for visualization purposes. We performed three experiments that demonstrate the accuracy of VDFG-based program slicing with the fractal value, the size of a visualized program slice, and the effectiveness of our tool for source code reading.
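    The core of slicing over a variable data-flow graph is a reachability traversal: starting from a target variable, collect every variable its value transitively depends on. This is a generic sketch, not the paper's VDFG construction; the dependency graph below is a hypothetical example.

```python
from collections import deque

def backward_slice(deps, target):
    """Backward slice on a variable data-flow graph:
    deps maps each variable to the variables its value depends on."""
    seen = {target}
    work = deque([target])
    while work:
        v = work.popleft()
        for u in deps.get(v, []):
            if u not in seen:
                seen.add(u)
                work.append(u)
    return seen

# Hypothetical graph: total depends on price and tax; tax on price and rate.
deps = {"total": ["price", "tax"], "tax": ["price", "rate"],
        "price": [], "rate": []}
```

    The traversal is linear in the graph size, which is what makes a lightweight, approximate graph attractive for everyday use.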

  • Optimization Methods for Nop-Shadows Typestate Analysis

    Chengsong WANG  Xiaoguang MAO  Yan LEI  Peng ZHANG  

     
    PAPER-Dependable Computing

      Publicized:
    2015/02/23
      Vol:
    E98-D No:6
      Page(s):
    1213-1227

    In recent years, hybrid typestate analysis has been proposed to eliminate unnecessary monitoring instrumentations for runtime monitors at compile time. Nop-shadows Analysis (NSA) is one of these hybrid typestate analyses. Before generating residual monitors, NSA performs a data-flow analysis that is intra-procedurally flow-sensitive and partially context-sensitive to improve runtime performance. Although NSA is precise, there are some cases in which it has little effect. In this paper, we propose three optimizations to further improve the precision of NSA. The first two optimizations filter interferential states of objects when determining whether a monitoring instrumentation is necessary. The third optimization refines the inter-procedural data-flow analysis induced by method invocations. We have integrated our optimizations into Clara and conducted extensive experiments on the DaCapo benchmark. The experimental results demonstrate that our first two optimizations can remove further unnecessary instrumentations after the original NSA in more than half of the cases, without significant overhead. In addition, all the instrumentations can be removed in two cases, which implies that these programs satisfy the typestate property and are free of runtime monitoring. It came as a surprise to us that the third optimization is effective in only 8.7% of the cases. Finally, we analyze the experimental results and discuss why our optimizations fail to further eliminate unnecessary instrumentations in some special situations.

  • Inequality-Constrained RPCA for Shadow Removal and Foreground Detection

    Hang LI  Yafei ZHANG  Jiabao WANG  Yulong XU  Yang LI  Zhisong PAN  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2015/03/02
      Vol:
    E98-D No:6
      Page(s):
    1256-1259

    State-of-the-art background subtraction and foreground detection methods still face a variety of challenges, including illumination changes, camouflage, dynamic backgrounds, shadows, and intermittent object motion. Detection of foreground elements via the robust principal component analysis (RPCA) method and its extensions based on low-rank and sparse structures has achieved good performance in many scenes of datasets such as Changedetection.net (CDnet); however, the conventional RPCA method does not handle shadows well. To address this issue, we propose an approach that considers observed video data as the sum of three parts, namely a low-rank background, sparse moving objects, and moving shadows. Next, we cast inequality constraints on the basic RPCA model and use an alternating direction method of multipliers (ADMM) framework combined with Rockafellar multipliers to derive a closed-form solution of the shadow-matrix sub-problem. Our experiments have demonstrated that our method works effectively on challenging datasets that contain shadows.
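    In ADMM solvers for RPCA-style models, the sparse (moving-object) term is typically updated element-wise with the soft-thresholding (shrinkage) operator. This is a minimal sketch of that standard primitive only, not the authors' inequality-constrained model or their closed-form shadow solution.

```python
def soft_threshold(x, lam):
    """Shrinkage operator: solves min_s 0.5*(s - x)^2 + lam*|s|.
    Shrinks x toward zero by lam, zeroing anything smaller than lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

    Applied to the residual between a frame and the estimated background, it keeps only entries whose magnitude exceeds the sparsity weight, which is how small noise is separated from genuine foreground.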

  • Comparative Study of Open-Loop Transmit Diversity Schemes with Four Antennas in DFT-Precoded OFDMA Using Turbo FDE and Iterative Channel Estimation

    Lianjun DENG  Teruo KAWAMURA  Hidekazu TAOKA  Mamoru SAWAHASHI  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E98-B No:6
      Page(s):
    1065-1077

    This paper presents comprehensive comparisons on the block error rate (BLER) performance of rate-one open-loop (OL) transmit diversity schemes with four antennas for discrete Fourier transform (DFT)-precoded Orthogonal Frequency Division Multiple Access (OFDMA). One candidate scheme employs a quasi-orthogonal (QO) - space-time block code (STBC) in which four-branch minimum mean-square error (MMSE) combining is achieved at the cost of residual inter-code interference (ICI). Another candidate employs a combination of the STBC and selection transmit diversity called time switched transmit diversity (TSTD) (or frequency switched transmit diversity (FSTD)). We apply a turbo frequency domain equalizer (FDE) associated with iterative decision-feedback channel estimation (DFCE) using soft-symbol estimation to reduce channel estimation (CE) error. The turbo FDE includes an ICI canceller to reduce the influence of the residual ICI for the QO-STBC. Based on link-level simulation results, we show that a combination of the STBC and TSTD (or FSTD) is suitable as a four-antenna OL transmit diversity scheme for DFT-precoded OFDMA using the turbo FDE and iterative DFCE.

  • Performance Analysis and Optimum Resource Allocation in Mobile Multihop Relay System

    Taejoon KIM  Seong Gon CHOI  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E98-B No:6
      Page(s):
    1078-1085

    This paper analyzes the performance of a mobile multihop relay (MMR) system which uses intermediate mobile relay stations (RSs) to increase service coverage area and capacity of a communication system. An analytical framework for an MMR system is introduced, and a scheme for allocating the optimum radio resources to an MMR system is presented. It is very challenging to develop an analytical framework for an MMR system because more than two wireless links should be considered in analyzing the performance of such a system. Here, the joint effect of a finite queue length and an adaptive modulation and coding (AMC) scheme in both a base station (BS) and an RS are considered. The traffic characteristics from BS to RS are analyzed, and a three-dimensional finite-state Markov chain (FSMC) is built for the RS which considers incoming traffic from the BS as well. The RS packet loss rate and the RS average throughput are also derived. Moreover, maximum throughput is achieved by optimizing the amount of radio resources to be allocated to the wireless link between a BS and an RS.

  • Comments on “New Constructions of Perfect 8-QAM+/8-QAM Sequences”

    Fanxin ZENG  

     
    LETTER-Information Theory

      Vol:
    E98-A No:6
      Page(s):
    1334-1338

    In Xu, Chen, and Liu's letter, two constructions producing perfect 8-QAM+/8-QAM sequences were given. We show that their constructions are equivalent to Zeng et al.'s constructions under a unit-constant transform. Since the autocorrelation of a perfect sequence is invariant under a unit-constant transform, Xu et al.'s constructions are a special case of Zeng et al.'s constructions.
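    The invariance the comment rests on is easy to verify numerically: multiplying a sequence by a unit constant c leaves each periodic autocorrelation term multiplied by c·conj(c) = 1. The chirp-like sequence below is an arbitrary example, not one of the constructions in question.

```python
import cmath

def periodic_autocorrelation(s, tau):
    """Periodic autocorrelation of a complex sequence at shift tau."""
    n = len(s)
    return sum(s[t] * s[(t + tau) % n].conjugate() for t in range(n))

s = [cmath.exp(2j * cmath.pi * (t * t) / 8.0) for t in range(8)]  # toy sequence
c = cmath.exp(1j * 0.7)        # any unit constant, |c| = 1
s2 = [c * x for x in s]        # unit-constant transform of s
```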

  • Another Optimal Binary Representation of Mosaic Floorplans

    Katsuhisa YAMANAKA  Shin-ichi NAKANO  

     
    LETTER

      Vol:
    E98-A No:6
      Page(s):
    1223-1224

    Recently, a compact code for mosaic floorplans with ƒ inner faces was proposed by He. The length of the code is 3ƒ-3 bits, which is asymptotically optimal. In this paper, we propose a new code for mosaic floorplans with ƒ inner faces including k boundary faces. The length of our code is at most $3f - \frac{k}{2} - 1$ bits. Hence our code is shorter than or equal to the code by He, except for a few small floorplans with k=ƒ≤3. Coding and decoding can be done in O(ƒ) time.

  • Multi-Task Object Tracking with Feature Selection

    Xu CHENG  Nijun LI  Tongchi ZHOU  Zhenyang WU  Lin ZHOU  

     
    LETTER-Image

      Vol:
    E98-A No:6
      Page(s):
    1351-1354

    In this paper, we propose an efficient tracking method that is formulated as a multi-task reverse sparse representation problem. The proposed method learns the representations of all tasks jointly using a customized APG method within several iterations. To reduce the computational complexity, the proposed tracking algorithm starts from a feature selection scheme that chooses a suitable number of features from the object and background in the dynamic environment. Based on the selected features, multiple templates are constructed with a few candidates. The candidate with the highest similarity to the object templates is taken as the final tracking result. In addition, we present a template update scheme to capture appearance changes of the object, while keeping several earlier templates in the positive template set unchanged to alleviate the drifting problem. Both qualitative and quantitative evaluations demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods.

  • Far-Field Pattern Reconstruction Using an Iterative Hilbert Transform

    Fan FAN  Tapan K. SARKAR  Changwoo PARK  Jinhwan KOH  

     
    PAPER-Antennas and Propagation

      Vol:
    E98-B No:6
      Page(s):
    1032-1039

    A new approach to reconstructing the missing part of antenna far-field patterns is presented in this paper. The pattern can be reconstructed by utilizing the iterative Hilbert transform, which is based on the relationship between the real and imaginary parts given by the Hilbert transform. A moving-average filter is used to reduce errors in the restored signal as well as the computational load. Under the constraint of the causality of the current source in space, we could successfully reconstruct the data. Several examples dealing with line-source antennas and antenna arrays are simulated to illustrate the applicability of this approach.
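    Iterative reconstruction schemes of this family alternate between two projections: enforce the known samples in one domain and enforce a support (here, band-limit) constraint in the transform domain, in the style of the Papoulis-Gerchberg algorithm. The sketch below illustrates that generic principle on a band-limited toy signal with missing samples; it is not the paper's Hilbert-transform formulation, and the signal, band limit, and missing-sample set are all assumptions.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

n, band = 32, 4
true = [math.cos(2 * math.pi * t / n) + 0.5 * math.cos(2 * math.pi * 3 * t / n)
        for t in range(n)]
missing = {10, 11, 21}
est = [0.0 if t in missing else true[t] for t in range(n)]

for _ in range(300):
    X = dft(est)
    # projection 1: enforce the band limit in the spectral domain
    X = [X[k] if (k <= band or k >= n - band) else 0 for k in range(n)]
    est = [v.real for v in idft(X)]
    # projection 2: re-impose the known samples in the original domain
    for t in range(n):
        if t not in missing:
            est[t] = true[t]
```

    Because the signal is genuinely band-limited and only a few samples are missing, the alternating projections contract geometrically and the gap is filled to high accuracy.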
