
Keyword Search Result

[Keyword] quantization (221 hits)

Results 101-120 of 221

  • Influence of ADC Nonlinearity on the Performance of an OFDM Receiver

    Manabu SAWADA  Hiraku OKADA  Takaya YAMAZATO  Masaaki KATAYAMA  

     
    PAPER

      Vol:
    E89-B No:12
      Page(s):
    3250-3256

    This paper discusses the influence of the nonlinearity of analog-to-digital converters (ADCs) on the performance of orthogonal frequency division multiplexing (OFDM) receivers. We evaluate signal constellations and bit error rate (BER) performance while considering quantization error and clipping. The optimum range for the ADC input amplitude is found as a result of the trade-off between quantization error and the effects of clipping. In addition, it is shown that the peak-to-average power ratio (PAPR) of the signal is not a good measure of BER performance, since the largest peaks occur only with very low probability. The relationship between the location of a subcarrier and its performance is also studied: the influence of the quantization error is identical for all subcarriers, while the effects of clipping depend on the subcarrier frequency. When clipping occurs, the BER performance of a subcarrier near the center frequency is worse than that of one near the edges.
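The quantization-versus-clipping trade-off described above can be illustrated numerically. The following sketch is not the authors' simulation; the 8-bit resolution, Gaussian signal model, and sweep values are illustrative assumptions. It quantizes Gaussian baseband samples with a clipped uniform quantizer and shows that the total error is minimized at an intermediate full-scale setting:

```python
import numpy as np

def adc_mse(signal, full_scale, bits=8):
    """Mean-squared error of a uniform quantizer that clips at +/-full_scale."""
    step = 2.0 * full_scale / (2 ** bits)
    clipped = np.clip(signal, -full_scale, full_scale - step)
    quantized = np.round(clipped / step) * step
    return float(np.mean((signal - quantized) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)   # OFDM-like Gaussian baseband samples

# Small full scale -> clipping noise dominates; large -> coarse steps dominate.
levels = [1.0, 2.0, 3.0, 4.0, 6.0, 8.0]
errors = [adc_mse(x, a) for a in levels]
```

The minimum of `errors` falls at an interior value of `levels`, reproducing the paper's observation that an optimum ADC input range exists.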

  • JPEG Quantization Table Design for Face Images and Its Application to Face Recognition

    Gu-Min JEONG  Chunghoon KIM  Hyun-Sik AHN  Bong-Ju AHN  

     
    LETTER

      Vol:
    E89-A No:11
      Page(s):
    2990-2993

    This paper proposes a new codec design method based on JPEG for face images and presents its application to face recognition. The quantization table is designed using R-D optimization for the Yale face database. For use in embedded systems, a fast codec design is also considered. The proposed codec achieves better compression rates than the JPEG codec for face images. In face recognition experiments using linear discriminant analysis (LDA), the proposed codec also outperforms the JPEG codec.

  • Fast K Nearest Neighbors Search Algorithm Based on Wavelet Transform

    Yu-Long QIAO  Zhe-Ming LU  Sheng-He SUN  

     
    LETTER-Vision

      Vol:
    E89-A No:8
      Page(s):
    2239-2243

    This letter proposes a fast k nearest neighbors search algorithm based on the wavelet transform. The technique exploits the information carried by the approximation coefficients of the transformed vector, from which two inequalities are derived that reject vectors that cannot be among the k nearest neighbors. The computational complexity of the k nearest neighbors search is thereby greatly reduced. Experimental results on texture classification verify the effectiveness of the algorithm.
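The rejection idea can be sketched as follows. Because the Haar transform is orthonormal, the distance between approximation coefficients lower-bounds the full Euclidean distance, so a candidate whose bound already exceeds the current k-th best distance can be skipped without computing the full distance. This is a minimal illustration, not the letter's exact pair of inequalities; the function names are hypothetical:

```python
import numpy as np

def haar_approx(v):
    """Level-1 Haar approximation coefficients of an even-length vector."""
    v = np.asarray(v, dtype=float)
    return (v[0::2] + v[1::2]) / np.sqrt(2.0)

def knn_search(query, database, k):
    """k-NN search with a lower-bound rejection test on approximation coefficients.

    Orthonormality gives ||a_x - a_y||^2 <= ||x - y||^2, so a candidate whose
    approximation-coefficient distance exceeds the current k-th best distance
    cannot be one of the k nearest neighbors.
    """
    qa = haar_approx(query)
    approx = [haar_approx(c) for c in database]   # precomputed offline in practice
    best = []                                     # sorted (distance^2, index), length <= k
    rejected = 0
    for i, cand in enumerate(database):
        lower = float(np.sum((qa - approx[i]) ** 2))
        if len(best) == k and lower >= best[-1][0]:
            rejected += 1                         # cheap test: skip full distance
            continue
        d = float(np.sum((np.asarray(query) - np.asarray(cand)) ** 2))
        best.append((d, i))
        best.sort()
        del best[k:]
    return [i for _, i in best], rejected
```

The search is exact: the returned neighbors match a brute-force scan, while a fraction of candidates is eliminated by the cheap bound alone.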

  • Suboptimal Decoding of Vector Quantization over a Frequency-Selective Rayleigh Fading CDMA Channel

    Son X. NGUYEN  Ha H. NGUYEN  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E89-B No:5
      Page(s):
    1688-1691

    The complexity of optimal decoding for vector quantization (VQ) in code-division multiple access (CDMA) communications prohibits its practical implementation. It was recently shown in [1] that a suboptimal scheme combining a soft-output multiuser detector with individual VQ decoders provides a flexible tradeoff between decoder complexity and performance. The work in [1], however, considers only an AWGN channel model. This paper extends the technique in [1] to a frequency-selective Rayleigh fading channel. Simulation results indicate that the suboptimal decoder also performs very well over this type of channel.

  • Per-User Automatic Gain Control for an Uplink CDMA Receiver

    Jungwoo LEE  

     
    LETTER-Spread Spectrum Technologies and Applications

      Vol:
    E89-A No:4
      Page(s):
    1154-1157

    A per-user automatic gain control (AGC) technique is proposed to combat the signal level variation of an individual user in a DS-CDMA receiver. A simple signal model for a Rake receiver is derived, and the potential cause of the signal variation in the Rake receiver output is discussed. The adaptive scheme is also compared with a conventional fixed quantization scheme in simulations.

  • Capacity of Fading Channels with Quantized Channel Side Information

    Xiaofeng LIU  Hongwen YANG  Wenbin GUO  Dacheng YANG  

     
    LETTER-Fundamental Theories for Communications

      Vol:
    E89-B No:2
      Page(s):
    590-593

    In this letter, we study the capacity of fading channels with perfect channel side information (CSI) at the receiver and quantized CSI at the transmitter. We present a general algorithm for the joint design of optimal quantization and power control for maximizing the forward link capacity over flat fading channels. Numerical results for Rayleigh fading are given.

  • An Anomaly Intrusion Detection System Based on Vector Quantization

    Jun ZHENG  Mingzeng HU  

     
    PAPER-Intrusion Detection

      Vol:
    E89-D No:1
      Page(s):
    201-210

    Machine learning and data mining algorithms are increasingly used in intrusion detection systems (IDS), but their performance often lags in practice, especially in network-based intrusion detection, where the heavy load of traffic monitoring demands more efficient algorithms. In this paper, we propose and design an anomaly intrusion detection (AID) system based on vector quantization (VQ), which is widely used for data compression and high-dimensional multimedia data indexing. The design optimizes detection performance by jointly accounting for accurate usage-profile modeling by the VQ codebook, which is the key to a high detection rate, and fast similarity measures between feature vectors, which are the foundation of efficient, real-time detection. Comparisons with related work show that the intrusion detection performance is greatly improved.
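The core mechanism, distortion with respect to a codebook of normal usage profiles as an anomaly score, can be sketched as follows. This is a simplified illustration with synthetic data, not the paper's system; the codebook size, threshold quantile, and feature dimension are assumptions:

```python
import numpy as np

def train_codebook(data, n_codes, iters=20, seed=0):
    """Plain k-means codebook of 'normal' profile vectors (LBG-style sketch)."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].copy()
    for _ in range(iters):
        # assign each vector to its nearest codeword, then recenter
        d = np.linalg.norm(data[:, None, :] - codes[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_codes):
            members = data[labels == j]
            if len(members):
                codes[j] = members.mean(axis=0)
    return codes

def anomaly_score(x, codes):
    """Distortion w.r.t. the codebook: distance to the nearest codeword."""
    return float(np.min(np.linalg.norm(codes - x, axis=1)))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))    # stand-in for usage-profile features
codes = train_codebook(normal, n_codes=16)
# flag anything whose distortion exceeds what normal traffic produces
threshold = np.quantile([anomaly_score(v, codes) for v in normal], 0.99)
attack = np.full(8, 6.0)                        # far from any normal profile
```

A vector far from every codeword scores well above the threshold, which is the AID decision rule in miniature.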

  • On Optimal Stepsize for Soft Decision Viterbi Decoding

    Eui-Cheol LIM  Hyung-Jin CHOI  

     
    LETTER-Fundamental Theories for Communications

      Vol:
    E88-B No:12
      Page(s):
    4651-4654

    This letter presents a method for finding the optimal quantization stepsize that minimizes quantization loss and maximizes coded BER performance. We define the Information Error Rate (IER) and derive an equation for the modified constraint length (Km) to obtain an upper bound on the coded BER performance of an l-bit quantized soft decision Viterbi decoder. Using the IER and Km, we determine the optimal quantization stepsize of 2-bit and 3-bit quantized soft decision decoding systems in an AWGN channel with respect to SNR, and verify our strategy by simulation.

  • Hybrid Image Compression Scheme Based on PVQ and DCTVQ

    Zhe-Ming LU  Hui PEI  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E88-D No:10
      Page(s):
    2422-2426

    An efficient hybrid image vector quantization (VQ) technique based on a classification in the DCT domain is presented in this letter. The algorithm combines two kinds of VQ, predictive VQ (PVQ) and discrete cosine transform domain VQ (DCTVQ), and adopts a simple classifier that employs only three DCT coefficients of the 8×8 block. For each image block, the classifier switches to the PVQ coder if the block is relatively complex, and otherwise switches to the DCTVQ coder. Experimental results show that the proposed algorithm can achieve higher PSNR values than ordinary VQ, PVQ, JPEG, and JPEG2000 at the same bit rate.
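The classifier idea can be sketched as follows. Which three DCT coefficients and what threshold the letter actually uses are not specified here, so the choices below (the three lowest-frequency AC coefficients and a threshold of 4.0) are illustrative assumptions:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (NumPy-only, no SciPy)."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def is_complex(block, thresh=4.0):
    """Route a block to PVQ (complex) or DCTVQ (smooth) from three AC coefficients."""
    c = dct2(block - block.mean())
    return abs(c[0, 1]) + abs(c[1, 0]) + abs(c[1, 1]) > thresh
```

A flat block yields near-zero AC energy and goes to the DCTVQ coder, while a block containing an edge triggers the PVQ path.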

  • Query Learning Method for Character Recognition Methods Using Genetic Algorithm

    Hitoshi SAKANO  

     
    LETTER

      Vol:
    E88-D No:10
      Page(s):
    2313-2316

    We propose a learning method combining query learning and a "genetic translator" we previously developed. Query learning is a useful technique for high-accuracy, high-speed learning and reduction of training sample size. However, it has not been applied to practical optical character readers (OCRs) because human beings cannot recognize queries as character images in the feature space used in practical OCR devices. We previously proposed a character image reconstruction method using a genetic algorithm. This method is applied as a "translator" from feature space for query learning of character recognition. The results of an experiment with hand-written numeral recognition show the possibility of training sample size reduction.

  • A Steganographic Method for Hiding Secret Data Using Side Match Vector Quantization

    Chin-Chen CHANG  Wen-Chuan WU  

     
    PAPER-Application Information Security

      Vol:
    E88-D No:9
      Page(s):
    2159-2167

    To increase the number of embedded secrets and to improve the quality of the stego-image in vector quantization (VQ)-based information hiding, this paper presents a novel information-hiding scheme that embeds secrets into the side match vector quantization (SMVQ) compressed code. First, a host image is partitioned into non-overlapping blocks. For the seed blocks of the image, VQ is adopted without hiding secrets. Then, for each of the residual blocks, SMVQ or VQ is employed according to the smoothness of the block, such that the proper codeword is chosen from the state codebook or the original codebook to compress it. Finally, these compressed codes represent not only the host image but also the secret data. Experimental results show that the proposed scheme outperforms other VQ-based information hiding schemes in terms of embedding capacity and image quality. Moreover, the compression rate of the proposed scheme is also better than that of the compared schemes.

  • Performance Comparison between Equal-Average Equal-Variance Equal-Norm Nearest Neighbor Search (EEENNS) Method and Improved Equal-Average Equal-Variance Nearest Neighbor Search (IEENNS) Method for Fast Encoding of Vector Quantization

    Zhibin PAN  Koji KOTANI  Tadahiro OHMI  

     
    LETTER-Image Processing and Video Processing

      Vol:
    E88-D No:9
      Page(s):
    2218-2222

    The encoding process of vector quantization (VQ) is a time bottleneck that prevents its practical application. To speed up VQ encoding, it is very effective to use lower-dimensional features of a vector to estimate how large the Euclidean distance between the input vector and a candidate codeword can be, so as to reject most unlikely codewords. Three popular statistical features, the mean, the variance, and the L2 norm of a vector, have been adopted individually in previous works. Recently, these three features were combined to derive the sequential EEENNS search method in [6], which is very efficient but still computationally redundant. This letter gives a further mathematical analysis of the EEENNS method and points out that the L2 norm feature is in fact unnecessary in fast VQ encoding if the mean and the variance are used simultaneously, as proposed in the IEENNS method; in other words, the L2 norm feature is redundant for the rejection test. Experimental results demonstrate an approximately 10-20% reduction of the total computational cost for various detailed images when the L2 norm feature is omitted, confirming the mathematical analysis.
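The mean/variance rejection test used by these methods follows from the bound ||x − c||² ≥ k(m_x − m_c)² + k(s_x − s_c)², where m and s are the mean and standard deviation of a k-dimensional vector (split x into its mean component and residual; the residual norms satisfy a reverse triangle inequality). A minimal sketch, with hypothetical function names not taken from the letter:

```python
import numpy as np

def mv_lower_bound(xm, xs, cm, cs, k):
    """Lower bound on ||x - c||^2 from means and standard deviations only:
    ||x - c||^2 >= k*(m_x - m_c)^2 + k*(s_x - s_c)^2."""
    return k * (xm - cm) ** 2 + k * (xs - cs) ** 2

def nearest_codeword(x, codebook):
    """Exact nearest-codeword search with mean/variance rejection."""
    k = len(x)
    xm, xs = float(x.mean()), float(x.std())
    stats = [(float(c.mean()), float(c.std())) for c in codebook]  # precomputed offline
    best_d, best_i, full_checks = np.inf, -1, 0
    for i, c in enumerate(codebook):
        cm, cs = stats[i]
        if mv_lower_bound(xm, xs, cm, cs, k) >= best_d:
            continue                     # rejected without a full distance computation
        full_checks += 1
        d = float(np.sum((x - c) ** 2))
        if d < best_d:
            best_d, best_i = d, i
    return best_i, full_checks
```

The search remains exact while most codewords are eliminated by the two scalar features alone, which is the point the letter proves: adding the L2 norm on top of this test buys nothing further.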

  • Quantization/DCT Conversion Scheme for DCT-Domain MPEG-2 to H.264/AVC Transcoding

    Joo-Kyong LEE  Ki-Dong CHUNG  

     
    PAPER

      Vol:
    E88-B No:7
      Page(s):
    2856-2863

    The latest video coding standard, H.264/AVC, adopts a 4×4 approximate transform instead of the 8×8 discrete cosine transform (DCT) to avoid the inverse-transform mismatch problem. This, however, is only one of the factors that make it difficult to transcode video contents pre-coded with previous standards to H.264/AVC in a common domain, rather than through cascaded pixel-domain transcoding. In this paper, to support existing DCT-domain transcoding schemes and to reduce computational complexity, we propose an efficient algorithm that converts a quantized 8×8 DCT block into four newly quantized 4×4 transformed blocks. The experimental results show that the proposed scheme reduces computational complexity by 5-11% and improves video quality by 0.1-0.5 dB compared with the cascaded pixel-domain transcoding scheme that performs inverse quantization (IQ), inverse DCT (IDCT), DCT, and re-quantization (re-Q).

  • Semi-Automatic Video Object Segmentation Using LVQ with Color and Spatial Features

    Hariadi MOCHAMAD  Hui Chien LOY  Takafumi AOKI  

     
    PAPER-Image Processing and Multimedia Systems

      Vol:
    E88-D No:7
      Page(s):
    1553-1560

    This paper presents a semi-automatic algorithm for video object segmentation. Our algorithm assumes the use of multiple key video frames in which a semantic object of interest is defined in advance with human assistance. For video frames between every two key frames, the specified video object is tracked and segmented automatically using Learning Vector Quantization (LVQ). Each pixel of a video frame is represented by a 5-dimensional feature vector integrating spatial and color information. We introduce a parameter K to adjust the balance of spatial and color information. Experimental results demonstrate that the algorithm can segment the video object consistently with less than 2% average error when the object is moving at a moderate speed.
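The 5-dimensional feature construction and an LVQ update step can be sketched as follows (a minimal LVQ1 illustration with hypothetical names, not the paper's full segmentation algorithm):

```python
import numpy as np

def make_feature(x, y, rgb, K):
    """5-D feature: spatial position (x, y) weighted by K against colour (r, g, b)."""
    return np.array([K * x, K * y, *rgb], dtype=float)

def lvq1_step(prototypes, labels, feat, true_label, lr=0.05):
    """One LVQ1 update: pull the winning prototype toward the sample if its
    class label matches, push it away otherwise. Returns the winner index."""
    i = int(np.argmin(np.linalg.norm(prototypes - feat, axis=1)))
    sign = 1.0 if labels[i] == true_label else -1.0
    prototypes[i] += sign * lr * (feat - prototypes[i])
    return i
```

In the paper's setting, prototypes would be initialized from object and background regions of the key frames, and K tunes how strongly spatial proximity competes with colour similarity when classifying pixels of the in-between frames.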

  • 2-Bit All-Optical Analog-to-Digital Conversion by Slicing Supercontinuum Spectrum and Switching with Nonlinear Optical Loop Mirror and Its Application to Quaternary ASK-to-OOK Modulation Format Converter

    Sho-ichiro ODA  Akihiro MARUTA  

     
    PAPER-Transmission Systems and Technologies

      Vol:
    E88-B No:5
      Page(s):
    1963-1969

    Recently, all-optical analog-to-digital conversion (ADC) has been actively researched as a way to overcome the inherently limited operating speed of electronic devices. In this paper, we describe a novel quantization scheme for all-optical ADC based on slicing the supercontinuum (SC) spectrum, and then propose a 2-bit all-optical ADC scheme consisting of quantization by slicing the SC spectrum and coding by switching pulses with a nonlinear optical loop mirror (NOLM). The feasibility of the proposed quantization scheme was confirmed by numerical simulation. We conducted proof-of-principle experiments of optical quantization by slicing the SC spectrum with an arrayed waveguide grating and of optical coding by switching pulses with the NOLM. We successfully demonstrated optical quantization and coding, confirming the feasibility of the proposed 2-bit ADC scheme.

  • Optimal Quantization Noise Allocation and Coding Gain in Transform Coding with Two-Dimensional Morphological Haar Wavelet

    Yasunari YOKOTA  Xiaoyong TAN  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E88-D No:3
      Page(s):
    636-645

    This paper analytically formulates both the optimal quantization noise allocation ratio and the coding gain of the two-dimensional morphological Haar wavelet transform. The two-dimensional morphological Haar wavelet transform has been proposed as a nonlinear wavelet transform and is a promising candidate for nonlinear transform coding. To apply a transformation to transform coding, both its optimal quantization noise allocation ratio and its coding gain should be derived beforehand, regardless of whether the transformation is linear or nonlinear. This derivation is crucial for progress in nonlinear transform image coding, because the two-dimensional morphological Haar wavelet is the most basic nonlinear wavelet. We derive both quantities by introducing appropriate approximations to handle the cumbersome nonlinear operator included in the transformation. Numerical experiments confirm the validity of the formulations.

  • Analysis and Evaluation of Required Precision for Color Images in Digital Cinema Application

    Junji SUZUKI  Isao FURUKAWA  Sadayasu ONO  

     
    PAPER-Image

      Vol:
    E87-A No:12
      Page(s):
    3409-3419

    Digital cinema will continue, for some time, to use image signals converted from the density values of film stock through some form of digitization. This paper investigates the required numbers of quantization bits for both intensity and density. Equations for the color differences created by quantization distortion are derived on the premise that the uniform color space L*a*b* can be used to evaluate color differences in digitized pictorial color images. The location of the quantized sample that yields the maximum color difference in the color gamut is theoretically analyzed under the condition that the color difference must be below the perceivable limit of the human visual system. The result shows that the maximum color difference is located on a ridge line or a surface of the color gamut, which reduces the computational burden of determining the required precision for color quantization. Design examples of quantization resolution are also given by applying the proposed evaluation method to three actual color spaces: NTSC, HDTV, and ROMM.
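The evaluation premise, measuring quantization distortion as a ΔE*ab color difference, can be sketched as follows. This is an illustrative Monte-Carlo check in sRGB using the standard sRGB-to-CIELAB formulas; the paper analyzes NTSC, HDTV, and ROMM analytically, so sRGB here is an assumption for the sketch:

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB in [0, 1] -> CIE L*a*b* (D65 white), standard conversion formulas."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    white = np.array([0.9505, 1.0000, 1.0890])
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    return np.array([L, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def max_delta_e(bits, samples=2000, seed=0):
    """Largest ΔE*ab between random sRGB colours and their n-bit quantized versions."""
    rng = np.random.default_rng(seed)
    levels = 2 ** bits - 1
    worst = 0.0
    for rgb in rng.uniform(0, 1, size=(samples, 3)):
        q = np.round(rgb * levels) / levels
        worst = max(worst, float(np.linalg.norm(srgb_to_lab(rgb) - srgb_to_lab(q))))
    return worst
```

Increasing the bit depth drives the worst-case ΔE*ab down, which is the quantity the paper requires to stay below the perceptual limit.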

  • Dependency of Distortion on Output Binary Pattern of the Hidden Layer for a Noisy LSP Quantization Neural Network

    Yoshinori MORITA  Tetsuo FUNADA  Hideyuki NOMURA  

     
    PAPER-Speech and Hearing

      Vol:
    E87-D No:10
      Page(s):
    2348-2355

    The bandwidth occupied by individual telecommunication devices in the field of mobile radio communication must be narrow in order to effectively exploit the limited frequency band. Therefore, it is necessary to implement low-bit-rate speech coding that is robust against background noise. We examine vector quantization using a neural network (NNVQ) as a robust LSP encoder. In this paper, we compare four types of binary patterns of a hidden layer, and clarify the dependency of quantization distortion on the bit pattern. By delayed decision (selection of low-distortion codes in decoding, i.e., EbD method) the spectral distortion (SD) can be decreased by 0.8 dB (20%). For noisy speech, the performance of the EbD method is better than that of the conventional VQ codebook mapping method. In addition, the SD can be decreased by 2.3 dB (40%) by using a method in which the neural networks for encoding and decoding are combined and re-trained. Finally, we examine the SD for speech having different signal-to-noise ratios (SNRs) from that used in training. The experimental results show that training using SNR between 30 and 40 dB is appropriate.

  • Numerical Evaluation of Incremental Vector Quantization Using Stochastic Relaxation

    Noritaka SHIGEI  Hiromi MIYAJIMA  Michiharu MAEDA  

     
    PAPER

      Vol:
    E87-A No:9
      Page(s):
    2364-2371

    Learning algorithms for vector quantization (VQ) are categorized into two types: batch learning and incremental learning. Incremental learning is more useful than batch learning because, unlike batch learning, it can be performed either on-line or off-line. In this paper, we develop effective incremental learning methods using Stochastic Relaxation (SR) techniques, which were originally developed for batch learning, where they provide good global optimization without greatly increasing the computational cost. We empirically investigate the effective implementation of SR for incremental learning. Specifically, we consider five types of SR methods: ISR1, ISR2, ISR3, WSR1 and WSR2. ISRs and WSRs add noise to the input and weight vectors, respectively; the methods differ in when the perturbed input or weight vectors are used in learning. These SR methods are applied to three types of incremental learning: K-means, Neural-Gas (NG) and Kohonen's Self-Organizing Map (SOM). We comprehensively evaluate these combinations in terms of accuracy and computation time. Our simulation results show that K-means with ISR3 is the most effective combination overall and is superior to the conventional NG method.
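Input-noise stochastic relaxation for incremental VQ can be sketched as follows (a minimal ISR-style illustration, not the paper's exact ISR1-3 variants; the linear noise schedule and learning rate are assumptions):

```python
import numpy as np

def incremental_kmeans_sr(data, n_codes, sigma0=1.0, lr=0.05, seed=0):
    """Incremental (on-line) K-means with stochastic relaxation on the input:
    each presented vector is perturbed by Gaussian noise whose level decays
    over training, which helps the codebook escape poor local minima early."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].copy()
    n = len(data)
    for t, idx in enumerate(rng.permutation(n)):
        sigma = sigma0 * (1.0 - t / n)           # noise schedule decaying toward 0
        x = data[idx] + rng.normal(0.0, sigma, size=data.shape[1])
        w = int(np.argmin(np.linalg.norm(codes - x, axis=1)))
        codes[w] += lr * (x - codes[w])          # move only the winner toward the sample
    return codes

def distortion(data, codes):
    """Mean squared distance of each vector to its nearest codeword."""
    d = np.linalg.norm(data[:, None, :] - codes[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))
```

On clustered data, the trained codebook yields far lower distortion than collapsing everything onto the data mean, the baseline any useful VQ must beat.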

  • Quantization Noise Reduction for DCT Coded Images

    Ching-Chih KUO  Wen-Thong CHANG  

     
    PAPER-Multimedia Systems

      Vol:
    E87-B No:8
      Page(s):
    2342-2351

    By modelling the quantization error as additive white noise in the transform domain, a Wiener filter is used to reduce quantization noise for DCT coded images in the DCT domain. Instead of deriving the spectrum of the transform coefficients, a DPCM loop is used to whiten the quantized DCT coefficients. The DPCM loop predicts the mean for each coefficient. By subtracting the mean, each quantized DCT coefficient is converted into the sum of a prediction error and quantization noise. After the DPCM loop, the prediction errors can be assumed uncorrelated, which simplifies the design of the subsequent Wiener filter. The Wiener filter removes the quantization noise to restore the prediction error, and the original coefficient is reconstructed by adding the DPCM-predicted mean to the restored prediction error. To increase the prediction accuracy, the decimated DCT coefficients in each subband are interpolated from the overlapped blocks.
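The scalar Wiener shrinkage underlying this approach can be sketched as follows. Assuming a whitened coefficient y is the clean value s plus quantization noise modelled as white with variance Δ²/12, the Wiener estimate scales y by σ_s²/(σ_s² + σ_n²). This is a simplified 1-D illustration, not the paper's full DPCM-loop design:

```python
import numpy as np

def wiener_shrink(y, sig_var, noise_var):
    """Scalar Wiener estimate of s from y = s + n (n modelled as white noise)."""
    return y * sig_var / (sig_var + noise_var)

rng = np.random.default_rng(4)
s = rng.laplace(0.0, 2.0, 50_000)          # stand-in for whitened DCT prediction errors
step = 4.0
y = np.round(s / step) * step              # uniform quantization in the transform domain
noise_var = step ** 2 / 12.0               # white-noise model of the quantization error
sig_var = max(y.var() - noise_var, 1e-12)  # signal variance estimated at the decoder
s_hat = wiener_shrink(y, sig_var, noise_var)
```

The shrunken coefficients have lower mean-squared error against the clean signal than the raw quantized values, which is the gain the paper's in-loop Wiener filter exploits.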
