
Keyword Search Result

[Keyword] Z (5900 hits)

Showing results 1841-1860 of 5900

  • Polarization Dispersion Characteristics of Propagation Paths in Urban Mobile Communication Environment Open Access

    Tetsuro IMAI  Koshiro KITAO  

     
    PAPER-Radio Propagation

      Vol:
    E96-B No:10
      Page(s):
    2380-2388

    In order to employ Multiple-Input Multiple-Output (MIMO) techniques, multiple antenna branches are necessary, and as a consequence the installation space requirements increase. Since installation space is limited, much attention is now focused on utilizing polarization characteristics in MIMO configurations to relax these requirements; this is called Orthogonal Polarization-MIMO in this paper. To accurately evaluate the performance of Orthogonal Polarization-MIMO, a channel model that can handle the polarization dispersion characteristics of propagation paths is essential. Up to now, the spatial-temporal dispersion characteristics of paths have been investigated in detail, but there are only a few reports on polarization dispersion characteristics. In this paper, we propose a new power profile for the rotational polarized angle as an evaluation model for polarization dispersion, and clarify the power profile through analysis of measurement data from an urban macrocell environment.

  • Quantum Steganography with High Efficiency with Noisy Depolarizing Channels

    Xin LIAO  Qiaoyan WEN  Tingting SONG  Jie ZHANG  

     
    LETTER-Cryptography and Information Security

      Vol:
    E96-A No:10
      Page(s):
    2039-2044

    Quantum steganography sends secret quantum information through a quantum channel such that an unauthorized user is not aware of the existence of the secret data. A depolarizing channel can hide quantum information by disguising it as channel errors of a quantum error-correcting code. We improve the efficiency of quantum steganography over noisy depolarizing channels by modifying the twirling procedure and adding quantum teleportation. The proposed scheme not only meets the requirements of quantum steganography but also achieves higher efficiency.

  • Frame Synchronization for Depth-Based 3D Video Using Edge Coherence

    Youngsoo PARK  Taewon KIM  Namho HUR  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E96-D No:9
      Page(s):
    2166-2169

    A method of frame synchronization between the color video and the depth-map video for depth-based 3D video, using edge coherence, is proposed. We find a synchronized pair of frames by computing the maximum number of overlapping edge pixels between the color video and the depth-map video in regions of temporal frame difference. The experimental results show that the proposed method can be used for synchronization of depth-based 3D video and that it is robust against Gaussian noise with σ less than 30 and against H.264/AVC video compression with QP less than 44.
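The edge-coherence criterion above reduces to counting co-occurring edge pixels between a color frame and candidate depth-map frames. A minimal sketch in pure Python, with hypothetical binary edge maps as nested lists (not the authors' implementation, which also restricts the count to regions of temporal frame difference):

```python
def edge_overlap(edges_a, edges_b):
    """Count pixels that are edges (nonzero) in both binary edge maps."""
    return sum(
        1
        for row_a, row_b in zip(edges_a, edges_b)
        for a, b in zip(row_a, row_b)
        if a and b
    )

def best_matching_frame(color_edges, depth_edge_frames):
    """Return the index of the depth frame whose edge map overlaps most
    with the color frame's edge map -- the synchronized pair."""
    scores = [edge_overlap(color_edges, d) for d in depth_edge_frames]
    return max(range(len(scores)), key=scores.__getitem__)
```

In the paper the edge maps would come from an edge detector applied to the color frame and the depth map; here they are toy inputs.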

  • Horizontal Spectral Entropy with Long-Span of Time for Robust Voice Activity Detection

    Kun-Ching WANG  

     
    LETTER-Speech and Hearing

      Vol:
    E96-D No:9
      Page(s):
    2156-2161

    This letter introduces an innovative voice activity detection (VAD) method based on horizontal spectral entropy with long-span of time (HSELT) feature sets to improve mobile ASR performance in low signal-to-noise ratio (SNR) conditions. Since the characteristics of nonstationary noise change with time, long-term information on the noisy speech signal is needed to define a more robust decision rule that yields high accuracy. We find that HSELT measures can horizontally enhance the transition between speech and non-speech segments. Based on this finding, we use the HSELT measures to accurately detect speech in various stationary and nonstationary noises.
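Entropy-based VAD treats the normalized power spectrum as a probability distribution: a flat (noise-like) spectrum has high entropy, a peaked (voiced) spectrum low entropy. A minimal illustrative sketch of spectral entropy and a long-span average over frames (the letter's exact HSELT construction, which works horizontally across time per frequency bin, may differ):

```python
import math

def spectral_entropy(power_spectrum):
    """Entropy of the spectrum viewed as a probability distribution."""
    total = sum(power_spectrum)
    probs = [p / total for p in power_spectrum if p > 0]
    return -sum(p * math.log(p) for p in probs)

def long_span_entropy(frames):
    """Average entropy over a long-term window of spectral frames."""
    return sum(spectral_entropy(f) for f in frames) / len(frames)
```

A flat spectrum of N bins yields log(N); a single-peak spectrum yields 0, which is what a threshold-based speech/non-speech decision exploits.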

  • Neighborhood Level Error Control Codes for Multi-Level Cell Flash Memories

    Shohei KOTAKI  Masato KITAKAMI  

     
    PAPER

      Vol:
    E96-D No:9
      Page(s):
    1926-1932

    NAND Flash memories are widely used for data storage today. The memories are not intrinsically error-free because they are affected by several physical disturbances. Technology scaling and the introduction of multi-level cells (MLC) have improved data density, but they have also made errors more significant. Error control codes (ECC) are therefore essential to improve the reliability of NAND Flash memories. The efficiency of a code depends on the error characteristics of the system, so codes should be designed to reflect those characteristics. In MLC Flash memories, errors tend to shift values toward neighboring levels; these errors are a class of M-ary asymmetric symbol errors. Several codes reflecting this asymmetric property have been proposed. They are designed to correct only single-level shift errors because almost all errors in the memories are of this kind. However, technology scaling, increased program/erase (P/E) cycles, and MLCs storing a large number of bits can cause multiple-level shifts. This paper proposes single error control codes which can correct an error of more than one level shift. Because the number of levels to be corrected is selectable, it can be matched to the noise magnitude. Furthermore, it is possible to add an error-detecting function for errors of larger shifts. For a certain parameter, the proposed codes are equivalent to conventional integer codes that correct single-level shifts, and can thus be regarded as a generalization of conventional integer codes. Evaluation results show that the information lengths for the respective check symbol lengths are larger than those of nonbinary Hamming codes and other M-ary asymmetric symbol error correcting codes.
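The conventional integer code that the paper generalizes can be sketched concretely: with symbol weights 1..n and modulus m = 2n+1, the syndrome of a received word uniquely identifies the position and sign of a single ±1 level shift. This is an illustrative sketch of that single-level-shift baseline only, not the proposed multi-level generalization:

```python
def syndrome(word, m):
    """Weighted checksum: sum of (position weight) * (cell level), mod m."""
    return sum((i + 1) * v for i, v in enumerate(word)) % m

def correct_single_shift(received, check, n):
    """Correct one +/-1 level shift in an n-symbol word, given the
    check symbol computed at write time as syndrome(word, 2n+1)."""
    m = 2 * n + 1
    s = (syndrome(received, m) - check) % m
    if s == 0:
        return list(received)          # no shift detected
    word = list(received)
    if 1 <= s <= n:
        word[s - 1] -= 1               # +1 shift occurred at position s-1
    else:
        word[(m - s) - 1] += 1         # -1 shift occurred at position m-s-1
    return word
```

Because a +1 shift at position i adds weight i+1 to the syndrome while a -1 shift subtracts it, all 2n possible single shifts map to distinct nonzero syndromes mod 2n+1.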

  • Optimal Trigger Time of Software Rejuvenation under Probabilistic Opportunities

    Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER

      Vol:
    E96-D No:9
      Page(s):
    1933-1940

    This paper presents an opportunity-based software rejuvenation policy and the problem of optimizing the software rejuvenation trigger time to maximize the system performance index. Our model is based on the basic semi-Markov software rejuvenation model by Dohi et al. (2000), under an environment where the possible times, called opportunities, to execute software rejuvenation are limited. We consider two stochastic point processes, a renewal process and a Markovian arrival process, to represent the opportunity process. In particular, we analytically derive the existence condition of the optimal trigger time under the two point processes. In numerical examples, we illustrate the optimal design of the rejuvenation trigger schedule based on empirical data.

  • Synchronization of Two Different Unified Chaotic Systems with Unknown Mismatched Parameters via Sum of Squares Method

    Cheol-Joong KIM  Dongkyoung CHWA  

     
    PAPER-Nonlinear Problems

      Vol:
    E96-A No:9
      Page(s):
    1840-1847

    This paper proposes a synchronization control method for two different unified chaotic systems with unknown mismatched parameters using the sum of squares method. Previous approaches used feedback-linearizing and stabilization terms in the controller for the synchronization problem, but employed only a constant matrix as the stabilization control gain, whose performance is valid only for a linear model. We therefore design a stabilization control input in polynomial form via the sum of squares method, together with an adaptive law for estimating the unknown mismatched parameter between the master and slave systems. Since the polynomial control input depends on the system states, the proposed method can achieve better performance than the previous methods. Numerical simulations for both uni-directional and bi-directional chaotic systems show the validity and advantage of the proposed method.

  • Novel DCF-Based Multi-User MAC Protocol for Centralized Radio Resource Management in OFDMA WLAN Systems

    Shinichi MIYAMOTO  Seiichi SAMPEI  Wenjie JIANG  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Vol:
    E96-B No:9
      Page(s):
    2301-2312

    To enhance throughput while satisfying the quality of service (QoS) requirements of wireless local area networks (WLANs), this paper proposes a distributed coordination function-based (DCF-based) medium access control (MAC) protocol that realizes centralized radio resource management (RRM) for a basic service set. In the proposed protocol, an access point (AP) acts as a master to organize the associated stations and attempts to reserve the radio resource in the conventional DCF manner. Once the radio resource is successfully reserved, the AP controls the access of each station by an orthogonal frequency division multiple access (OFDMA) scheme. Because the AP assigns radio resources to the stations through opportunistic two-dimensional scheduling based on the QoS requirements and channel condition of each station, transmission opportunities can be granted to the appropriate stations. To reduce the signaling overhead caused by centralized RRM, the proposed protocol introduces a station-grouping scheme that groups the associated stations into clusters. Moreover, this paper proposes a heuristic resource allocation algorithm designed for the DCF-based MAC protocol. Numerical results confirm that the proposed protocol enhances the throughput of WLANs while satisfying the QoS requirements with high probability.

  • Design Equations for Off-Nominal Operation of Class E Amplifier with Nonlinear Shunt Capacitance at D=0.5

    Tadashi SUETSUGU  Xiuqin WEI  Marian K. KAZIMIERCZUK  

     
    PAPER-Energy in Electronics Communications

      Vol:
    E96-B No:9
      Page(s):
    2198-2205

    Design equations satisfying off-nominal operating conditions of the class E amplifier with a nonlinear shunt capacitance are derived for a grading coefficient of 0.5 and a duty cycle D=0.5. By exploiting off-nominal class E operation, various amplifier parameters such as input voltage, operating frequency, output power, and load resistance can be set as design specifications. The analysis in this paper extends the usability of the class E amplifier as follows: as the dc supply voltage rises, the shunt capacitance that achieves off-nominal operation can be increased. This means that a transistor with a higher output capacitance can be used for ZVS operation, and that the maximum operating frequency achieving ZVS can be increased. An example design procedure for the class E amplifier is given, and the theoretical results were verified by experiment.

  • Honeyguide: A VM Migration-Aware Network Topology for Saving Energy Consumption in Data Center Networks

    Hiroki SHIRAYANAGI  Hiroshi YAMADA  Kenji KONO  

     
    PAPER-Software System

      Vol:
    E96-D No:9
      Page(s):
    2055-2064

    Current network elements consume 10-20% of the total power in data centers. Today's network elements are not energy-proportional and consume a constant amount of energy regardless of the amount of traffic. Thus, turning off unused network switches is the most efficient way of reducing the energy consumption of data center networks. This paper presents Honeyguide, an energy optimizer for data center networks that not only turns off inactive switches but also increases the number of inactive switches for better energy-efficiency. To this end, Honeyguide combines two techniques: 1) virtual machine (VM) and traffic consolidation, and 2) a slight extension to the existing tree-based topologies. Honeyguide has the following advantages. The VM consolidation, which is gracefully combined with traffic consolidation, can handle severe requirements on fault tolerance. It can be introduced into existing data centers without replacing the already-deployed tree-based topologies. Our simulation results demonstrate that Honeyguide can reduce the energy consumption of network elements better than the conventional VM migration schemes, and the savings are up to 7.8% in a fat tree with k=12.

  • High-Accuracy and Quick Matting Based on Sample-Pair Refinement and Local Optimization

    Bei HE  Guijin WANG  Chenbo SHI  Xuanwu YIN  Bo LIU  Xinggang LIN  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E96-D No:9
      Page(s):
    2096-2106

    Based on sample-pair refinement and local optimization, this paper proposes a high-accuracy and quick matting algorithm. First, in order to gather foreground/background samples effectively, we shoot rays in hybrid (gradient and uniform) directions; this strategy utilizes prior knowledge to adjust the directions for effective searching. Second, we refine the sample-pairs of pixels by taking neighbors' sample-pairs into account. Both high-confidence sample-pairs and usable foreground/background components are utilized, and thus more accurate and smoother matting results are achieved. Third, to reduce the computational cost of sample-pair selection in coarse matting, this paper proposes an adaptive sample clustering approach: most redundant samples are eliminated adaptively, so the computational cost decreases significantly. Finally, we cast fine matting as a de-noising problem, optimized by minimizing the observation and state errors iteratively and locally, which requires less space and time than global optimization. Experiments demonstrate that we outperform other state-of-the-art local matting methods in both accuracy and efficiency.

  • A Fuzzy Geometric Active Contour Method for Image Segmentation

    Danyi LI  Weifeng LI  Qingmin LIAO  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E96-D No:9
      Page(s):
    2107-2114

    In this paper, we propose a hybrid fuzzy geometric active contour method, which embeds the spatial fuzzy clustering into the evolution of geometric active contour. In every iteration, the evolving curve works as a spatial constraint on the fuzzy clustering, and the clustering result is utilized to construct the fuzzy region force. On one hand, the fuzzy region force provides a powerful capability to avoid the leakages at weak boundaries and enhances the robustness to various noises. On the other hand, the local information obtained from the gradient feature map contributes to locating the object boundaries accurately and improves the performance on the images with heterogeneous foreground or background. Experimental results on synthetic and real images have shown that our model can precisely extract object boundaries and perform better than the existing representative hybrid active contour approaches.

  • Sensor-Pattern-Noise Map Reconstruction in Source Camera Identification for Size-Reduced Images

    Joji WATANABE  Tadaaki HOSAKA  Takayuki HAMAMOTO  

     
    LETTER-Pattern Recognition

      Vol:
    E96-D No:8
      Page(s):
    1882-1885

    For source camera identification, we propose a method to reconstruct the sensor pattern noise map from a size-reduced query image by minimizing an objective function derived from the observation model. Our method can be applied to multiple queries, and can thus be further improved. Experiments demonstrate the superiority of the proposed method over conventional interpolation-based magnification algorithms.

  • Comparative Study on Required Bit Depth of Gamma Quantization for Digital Cinema Using Contrast and Color Difference Sensitivities

    Junji SUZUKI  Isao FURUKAWA  

     
    PAPER-Image

      Vol:
    E96-A No:8
      Page(s):
    1759-1767

    A specification for digital cinema systems, which handle movies digitally from production through delivery to projection on screens, is recommended by DCI (Digital Cinema Initiatives), and systems based on this specification have already been developed and installed in theaters. The system parameters that play an important role in determining image quality include image resolution, quantization bit depth, color space, gamma characteristics, and data compression method. This paper comparatively discusses the relation between the required bit depth and gamma quantization using a human visual system model for grayscale images and two color difference models for color images. The required bit depth obtained from a contrast sensitivity function for grayscale images decreases monotonically as the gamma value increases, while it has a minimum when the gamma is 2.9 to 3.0 for both the CIE 1976 L*a*b* and CIEDE2000 color difference models. It is also shown that the bit depth derived from the contrast sensitivity function is one bit greater than that derived from the color difference models at a gamma value of 2.6. Moreover, a comparison between the color differences computed with CIE 1976 L*a*b* and CIEDE2000 leads to the same result from the viewpoint of the required bit depth for digital cinema systems.

  • Bit-Plane Coding of Lattice Codevectors

    Wisarn PATCHOO  Thomas R. FISCHER  

     
    LETTER-Coding Theory

      Vol:
    E96-A No:8
      Page(s):
    1817-1820

    In a sign-magnitude representation of binary lattice codevectors, only a few least significant bit-planes are constrained due to the structure of the lattice, while there is no restriction on other more significant bit-planes. Hence, any convenient bit-plane coding method can be used to encode the lattice codevectors, with modification required only for the lattice-defining, least-significant bit-planes. Simple encoding methods for the lattice-defining bit-planes of the D4, RE8, and Barnes-Wall 16-dimensional lattices are described. Simulation results for the encoding of a uniform source show that standard bit-plane coding together with the proposed encoding provide about the same performance as integer lattice vector quantization when the bit-stream is truncated. When the entire bit-stream is fully decoded, the granular gain of the lattice is realized.
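The D4 case above can be illustrated directly: D4 is the checkerboard lattice of integer 4-vectors with even coordinate sum, so in a sign-magnitude representation only the least significant bit-plane is constrained (to even parity), leaving 8 valid patterns that need only 3 bits. A minimal sketch assuming the standard D4 definition (not the paper's encoder):

```python
from itertools import product

def in_d4(v):
    """D4 membership: an integer 4-vector whose coordinates sum to an even number."""
    return len(v) == 4 and sum(v) % 2 == 0

def lsb_plane(v):
    """The lattice-defining least significant bit-plane (of the magnitudes).
    Higher bit-planes are unconstrained and can use any bit-plane coder."""
    return [abs(x) & 1 for x in v]

# Only even-parity LSB patterns occur for D4 codevectors:
# 8 of the 16 length-4 binary patterns, so 3 bits suffice for this plane.
valid_lsb_patterns = [p for p in product((0, 1), repeat=4) if sum(p) % 2 == 0]
```

The same idea (enumerate the patterns the lattice permits on its low bit-planes) applies to the RE8 and 16-dimensional Barnes-Wall lattices treated in the letter, with different constrained planes.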

  • Synthesis of Configuration Change Procedure Using Model Finder

    Shinji KIKUCHI  Satoshi TSUCHIYA  Kunihiko HIRAISHI  

     
    PAPER-Software System

      Vol:
    E96-D No:8
      Page(s):
    1696-1706

    Managing the configurations of complex systems consisting of various components requires the combined efforts of multiple domain experts. These experts have extensive knowledge about different components in the system they manage but little understanding of issues outside their individual areas of expertise. As a result, the configuration constraints, changes, and procedures specified by those involved in managing a complex system are often interrelated without anyone noticing, and their integration into a coherent configuration procedure represents a major challenge. The method of synthesizing configuration procedures introduced in this paper addresses this challenge with a combination of formal specification and model finding techniques. System-management knowledge provided by domain experts is expressed as first-order logic formulas in the Alloy specification language and combined with system-configuration information into a single specification. We then employ the Alloy Analyzer to find a system model that satisfies all the formulas in this specification; the model obtained corresponds to a configuration procedure that satisfies all expert-specified constraints. To reduce the resources needed in procedure synthesis, we shorten the procedures to be synthesized by defining intermediate goal states that divide operation procedures into shorter steps. Finally, we evaluate our method through a case study on a procedure to consolidate virtual machines.

  • Throughput Enhancement with ACK/NACK Mechanism in Short-Range Millimeter-Wave Communication Systems

    Ryoko MATSUO  Tomoya TANDAI  Takeshi TOMIZAWA  Hideo KASAMI  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Vol:
    E96-B No:8
      Page(s):
    2162-2172

    The 60 GHz millimeter-wave (mmWave) wireless technology is attracting increasing attention, since its ability to deliver a PHY data rate above 1 Gbps makes it suitable for high-speed, short-range applications such as peer-to-peer synchronization and kiosk terminals. In short-range communication with a range of several tens of centimeters, only terminals present within this range are affected, and communication can be considered one-to-one. In one-to-one communication, a simpler and more efficient access mechanism is preferable. The ability of current CSMA/CA-based MACs, such as the MAC of IEEE 802.11 WLAN systems, to achieve high throughput is limited by low MAC efficiency caused by high signaling overhead, such as interframe spaces (IFS) and acknowledgements. This paper proposes an ACK/NACK mechanism that enhances throughput in short-range one-to-one communication. The mechanism uses Negative ACK (NACK) as the acknowledgement policy to reduce ACK overhead, and the transmitter switches the acknowledgement policy back to ACK based on a switchover threshold; this solves a problem inherent in NACK, namely that NACK alone provides no keep-alive mechanism. We evaluate the throughput of the ACK/NACK mechanism by both theoretical analysis and computer simulation. The proposed mechanism was implemented in a 65 nm CMOS baseband IC (BBIC); we connected this BBIC to a 60 GHz RFIC and exchanged frames wirelessly, verifying that the ACK/NACK mechanism enhances throughput.

  • Fuzzy Matching of Semantic Class in Chinese Spoken Language Understanding

    Yanling LI  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Natural Language Processing

      Vol:
    E96-D No:8
      Page(s):
    1845-1852

    Semantic concepts in an utterance are obtained by fuzzy matching to solve problems in spoken language understanding (SLU) such as word variations induced by automatic speech recognition (ASR) or key information fields omitted by users. A two-stage method is proposed: first, we adopt conditional random fields (CRF) to build probabilistic models that segment and label entity names in an input sentence; second, fuzzy matching based on a similarity function is conducted between the named entities labeled by the CRF model and the reference entries of a dictionary. The experiments compare performance in terms of accuracy and processing speed. Among the four similarity measures compared, Dice similarity and TF-based cosine similarity achieve the best accuracy, both reaching at least 93% in F1-measure; the latter improves on q-gram and improved edit-distance, two conventional string fuzzy matching methods, by 8.8% and 9%, respectively.
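The two best-performing measures reported above are standard set and vector similarities. A minimal sketch over characters as the matching units (the paper's exact tokenization and TF weighting for Chinese entity strings may differ):

```python
import math
from collections import Counter

def dice(a, b):
    """Dice coefficient over the sets of units in two strings."""
    sa, sb = set(a), set(b)
    return 2 * len(sa & sb) / (len(sa) + len(sb))

def cosine_tf(a, b):
    """Cosine similarity of term-frequency vectors of two strings."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)
```

Fuzzy matching then ranks dictionary entries by similarity to a CRF-labeled entity and accepts the best match above a threshold.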

  • High Throughput Parallelization of AES-CTR Algorithm

    Nhat-Phuong TRAN  Myungho LEE  Sugwon HONG  Seung-Jae LEE  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E96-D No:8
      Page(s):
    1685-1695

    Data encryption and decryption are common operations in network-based application programs that must offer security. In order to keep pace with the high data input rate of network-based applications such as multimedia data streaming, real-time processing of data encryption/decryption is crucial. In this paper, we propose a new parallelization approach to improve the throughput of the de-facto standard data encryption and decryption algorithm, AES-CTR (Counter mode of AES). The new approach extends the size of the block encrypted at one time across unit block boundaries, effectively encrypting multiple unit blocks at the same time. This reduces parallelization overheads such as the number of procedure calls, scheduling, and synchronization compared with previous approaches, and thus leads to significant throughput improvements on a computing platform with a general-purpose multi-core processor and a Graphics Processing Unit (GPU).
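What makes this coarse-grained parallelization possible is that CTR mode has no inter-block dependency: each keystream block is derived solely from the key and a counter, so spans of many blocks can be processed by independent tasks. A hedged structural sketch using threads and a stand-in keystream function (SHA-256 in place of AES, purely to illustrate the counter structure; not secure, and not the paper's multi-core/GPU implementation):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16  # AES block size in bytes

def keystream_block(key, counter):
    """Stand-in for AES-ECB(key, counter_block). Any keyed pseudorandom
    function of the counter illustrates CTR's structure (NOT secure AES)."""
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()[:BLOCK]

def ctr_xor_range(key, data, start_block):
    """Encrypt/decrypt a contiguous span of blocks starting at a counter offset.
    XOR is its own inverse, so the same routine does both directions."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = keystream_block(key, start_block + i // BLOCK)
        out.extend(b ^ k for b, k in zip(data[i:i + BLOCK], ks))
    return bytes(out)

def ctr_xor_parallel(key, data, blocks_per_task=4, workers=4):
    """Split the input into multi-block tasks, as in the paper's idea of
    extending the unit of work across block boundaries to cut per-block
    call/scheduling/synchronization overhead."""
    span = blocks_per_task * BLOCK
    tasks = [(data[i:i + span], i // BLOCK) for i in range(0, len(data), span)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda t: ctr_xor_range(key, t[0], t[1]), tasks))
    return b"".join(parts)
```

Because every task knows its starting counter, the parallel result is bit-identical to serial processing, which is the property a GPU implementation also relies on.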

  • Fast Single Image De-Hazing Using Characteristics of RGB Channel of Foggy Image

    Dubok PARK  David K. HAN  Changwon JEON  Hanseok KO  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E96-D No:8
      Page(s):
    1793-1799

    Images captured under foggy conditions often exhibit poor contrast and color. This is primarily due to the air-light, which degrades image quality exponentially with the fog depth between the scene and the camera. In this paper, we restore fog-degraded images by first estimating depth using a physical model characterizing the RGB channels of a single monocular image. The fog effects are then removed by subtracting the estimated irradiance, which is empirically related to the obtained scene depth, from the total irradiance received by the sensor. Effective restoration of the color and contrast of images taken under foggy conditions is demonstrated, and in the experiments we validate the effectiveness of our method against a conventional method.
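De-hazing of this kind is commonly formulated with the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed pixel, J the scene radiance, A the air-light, and t the transmission (decaying exponentially with depth); inverting it per pixel recovers J. A minimal sketch assuming this standard model, with a lower clamp on t that is a common heuristic (not necessarily the paper's exact procedure):

```python
def dehaze_pixel(observed, airlight, t, t_min=0.1):
    """Invert I = J*t + A*(1-t) per RGB pixel: J = (I - A)/t + A.
    t is clamped from below so noise is not amplified where fog is dense."""
    t = max(t, t_min)
    return tuple((c - a) / t + a for c, a in zip(observed, airlight))
```

With an estimated transmission map and air-light, applying this to every pixel restores contrast; the clamp trades residual haze at large depths for stability.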
