
Keyword Search Result

[Keyword] quantization (221 hits)

Results 121-140 of 221

  • Robust VQ-Based Digital Watermarking for the Memoryless Binary Symmetric Channel

    Jeng-Shyang PAN  Min-Tsang SUNG  Hsiang-Cheh HUANG  Bin-Yih LIAO  

     
    LETTER-Image

    Vol: E87-A No:7  Page(s): 1839-1841

    A new watermarking scheme based on vector quantization (VQ) over a binary symmetric channel is proposed. The VQ indices are optimized with a genetic algorithm; simulation results demonstrate both effective transmission of the watermarked image and the robustness of the extracted watermark.
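
    As background for the channel model, here is a minimal sketch (not the authors' GA-optimized scheme) of sending 8-bit VQ indices through a memoryless binary symmetric channel; the crossover probability and index values are illustrative assumptions.

    ```python
    import numpy as np

    def bsc(bits: np.ndarray, p: float, rng=None) -> np.ndarray:
        """Flip each bit independently with crossover probability p (memoryless BSC)."""
        rng = np.random.default_rng() if rng is None else rng
        flips = (rng.random(bits.shape) < p).astype(bits.dtype)
        return np.bitwise_xor(bits, flips)

    # Hypothetical 8-bit VQ indices of watermarked image blocks
    indices = np.array([17, 203, 55, 128], dtype=np.uint8)
    sent_bits = np.unpackbits(indices)              # serialize indices to a bit stream
    received = np.packbits(bsc(sent_bits, 0.01))    # indices recovered after the noisy channel
    ```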

  • A Digital Watermarking Algorithm Using Correlation of the Tree Structure of DWT Coefficients

    Young-Ho SEO  Soon-Young CHOI  Sung-Ho PARK  Dong-Wook KIM  

     
    PAPER

    Vol: E87-A No:6  Page(s): 1347-1354

    This paper proposes a watermarking algorithm for images that assumes image compression based on the DWT (Discrete Wavelet Transform). To reduce the amount of computation, the algorithm selects the watermarking positions from a threshold table that is established statistically by computing the energy correlation of the corresponding wavelet coefficients. Because the watermarking process is designed to operate in parallel with the compression process, the algorithm can run in real time whenever the compression itself does. It also mitigates the loss of the watermark and the reduction in compression ratio caused by the quantization and Huffman coding steps, by taking into account the signs of the coefficients and the changes in their values during watermarking. A visually recognizable pattern such as a binary image is used as the watermark. The experimental results show that the proposed algorithm satisfies robustness and imperceptibility, the major requirements of watermarking.

  • Design of a Robust LSP Quantizer for a High-Quality 4-kbit/s CELP Speech Coder

    Yusuke HIWASAKI  Kazunori MANO  Kazutoshi YASUNAGA  Toshiyuki MORII  Hiroyuki EHARA  Takao KANEKO  

     
    PAPER-Speech and Hearing

    Vol: E87-D No:6  Page(s): 1496-1506

    This paper presents an efficient LSP quantizer implementation for low bit-rate coders. The major feature of the quantizer is that it uses a truncated cepstral distance criterion for the code selection procedure, an approach that has generally been considered too computationally costly. We combined the quantizer with a moving-average predictor, a two-stage split vector quantizer, and delayed decision. We investigated the optimal parameter settings for this configuration and incorporated the resulting quantizer into an ITU-T 4-kbit/s speech coding candidate algorithm with a bit budget of 21 bits. The objective performance is better than that obtained with a conventional weighted mean-square criterion, while the complexity is kept to a reasonable level. The paper also describes the codebook design and the techniques employed to achieve robustness under noisy channel conditions.
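
    The truncated cepstral distance used as the selection criterion can be computed from LPC coefficients via the standard LPC-to-cepstrum recursion. The sketch below is an illustration only: it assumes the LSP candidates have already been converted back to LPC coefficients (convention A(z) = 1 + sum a_k z^-k) and truncates at a hypothetical order.

    ```python
    import numpy as np

    def lpc_to_cepstrum(a: np.ndarray, n_cep: int) -> np.ndarray:
        """Standard recursion from LPC coefficients a[1..p] of A(z) = 1 + sum a_k z^-k to cepstrum."""
        p = len(a)
        c = np.zeros(n_cep)
        for n in range(1, n_cep + 1):
            acc = -a[n - 1] if n <= p else 0.0
            for k in range(1, n):
                if n - k <= p:
                    acc -= (k / n) * c[k - 1] * a[n - k - 1]
            c[n - 1] = acc
        return c

    def truncated_cepstral_distance(a1, a2, n_cep: int = 16) -> float:
        """Squared cepstral distance truncated to the first n_cep coefficients."""
        d = lpc_to_cepstrum(np.asarray(a1), n_cep) - lpc_to_cepstrum(np.asarray(a2), n_cep)
        return float(np.dot(d, d))
    ```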

  • Robust Speaker Identification System Based on Multilayer Eigen-Codebook Vector Quantization

    Ching-Tang HSIEH  Eugene LAI  Wan-Chen CHEN  

     
    PAPER

    Vol: E87-D No:5  Page(s): 1185-1193

    This paper presents some effective methods for improving the performance of a speaker identification system. Based on the multiresolution property of the wavelet transform, the input speech signal is decomposed into various frequency subbands in order not to spread noise distortions over the entire feature space. For capturing the characteristics of the vocal tract, the linear predictive cepstral coefficients (LPCC) of the lower frequency subband for each decomposition process are calculated. In addition, a hard threshold technique for the lower frequency subband in each decomposition process is also applied to eliminate the effect of noise interference. Furthermore, cepstral domain feature vector normalization is applied to all computed features in order to provide similar parameter statistics in all acoustic environments. In order to effectively utilize all these multiband speech features, we propose a modified vector quantization as the identifier. This model uses the multilayer concept to eliminate the interference among the multiband speech features and then uses the principal component analysis (PCA) method to evaluate the codebooks for capturing a more detailed distribution of the speaker's phoneme characteristics. The proposed method is evaluated using the KING speech database for text-independent speaker identification. Experimental results show that the recognition performance of the proposed method is better than those of the vector quantization (VQ) and the Gaussian mixture model (GMM) using full-band LPCC and mel-frequency cepstral coefficients (MFCC) features in both clean and noisy environments. Also, a satisfactory performance can be achieved in low SNR environments.

  • Sampling Low Significance Bits Image to Reduce Quantized Bit Rate

    Asif HAYAT  Tae-Sun CHOI  

     
    LETTER-Image Processing and Video Processing

    Vol: E87-D No:5  Page(s): 1276-1279

    The artifacts of low-bit-rate quantization in images cannot be removed satisfactorily by known methods. We propose decomposing images into HSI and LSI (higher- and lower-significance images), followed by subsampling and reconstruction of the LSI. Experiments show a significant improvement in image quality compared to other methods.
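
    A minimal sketch of the decomposition idea, assuming 8-bit pixels split into their upper and lower 4 bits and simple decimation of the LSI (the exact bit allocation and the reconstruction filter in the paper may differ):

    ```python
    import numpy as np

    def split_significance(img: np.ndarray, low_bits: int = 4):
        """Split an 8-bit (uint8) image into higher- and lower-significance images (HSI/LSI)."""
        mask = (1 << low_bits) - 1
        hsi = img & ~np.uint8(mask)   # upper bits: the coarsely quantized image
        lsi = img & np.uint8(mask)    # lower bits: handled separately
        return hsi, lsi

    def subsample(lsi: np.ndarray, factor: int = 2) -> np.ndarray:
        """Keep every `factor`-th sample of the LSI in both directions."""
        return lsi[::factor, ::factor]
    ```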

  • Dynamic Bit-Rate Reduction Based on Requantization and Frame-Skipping for MPEG-1 to MPEG-4 Transcoder

    Kwang-deok SEO  Seong-cheol HEO  Soon-kak KWON  Jae-kyoon KIM  

     
    PAPER-Image

    Vol: E87-A No:4  Page(s): 903-911

    In this paper, we propose a dynamic bit-rate reduction scheme for transcoding an MPEG-1 bitstream into an MPEG-4 simple profile bitstream with a typical bit-rate of 384 kbps. A significant reduction in bit-rate is achieved by combining requantization and frame-skipping. Conventional requantization methods for a homogeneous transcoder cannot be used directly for a heterogeneous transcoder because of the mismatch in the quantization parameters between the MPEG-1 and MPEG-4 syntax and the difference in compression efficiency between MPEG-1 and MPEG-4. To solve these problems, a new requantization method is proposed for an MPEG-1 to MPEG-4 transcoder, consisting of R-Q (rate-quantization) modeling with simple feedback and an adjustment of the quantization parameters to compensate for the different coding efficiency of MPEG-1 and MPEG-4. For bit-rate reduction by frame-skipping, an efficient method is proposed for estimating the relevant motion vectors from the skipped frames: the conventional FDVS (forward dominant vector selection) method is improved to reflect the effect of the macroblock types in the skipped frames. Simulation results demonstrate that the proposed method, combining requantization and frame-skipping, generates a transcoded MPEG-4 bitstream that is much closer to the desired low bit-rate than the conventional method, while also achieving superior objective quality.
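
    As an illustration of the R-Q modelling step only (the paper's exact model and the MPEG-1/MPEG-4 parameter adjustment are not reproduced), a common first-order rate-quantization model R = X/Q with a simple feedback update of the complexity term X could look like this:

    ```python
    class FirstOrderRQModel:
        """R-Q model R(Q) = X / Q with feedback refinement of the complexity term X."""

        def __init__(self, initial_complexity: float = 1.0e5):
            self.X = initial_complexity  # bits * quantizer step, refined from actual coding results

        def quantizer_for_target(self, target_bits: float) -> float:
            """Pick the quantization parameter expected to hit the target bit budget."""
            return self.X / max(target_bits, 1.0)

        def update(self, used_q: float, actual_bits: float, alpha: float = 0.5):
            """Feedback: blend the measured complexity into the model after coding a frame."""
            self.X = (1.0 - alpha) * self.X + alpha * (actual_bits * used_q)
    ```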

  • A Fast Search Method for Vector Quantization Using Enhanced Sum Pyramid Data Structure

    Zhibin PAN  Koji KOTANI  Tadahiro OHMI  

     
    LETTER-Image

    Vol: E87-A No:3  Page(s): 764-769

    Conventional vector quantization (VQ) encoding by full search (FS) is computationally very heavy, but it achieves the best PSNR. Many fast search methods have been developed to speed up the encoding process, and FS-equivalent fast search methods based on the concept of multi-resolution, using a mean-type pyramid data structure, have already been proposed. In this Letter, an enhanced sum pyramid data structure is suggested to further improve search efficiency. It benefits from (1) exact computation in integer form, (2) one additional 2-dimensional resolution, and (3) an optimal way of selecting pixel pairs for constructing the new resolution. Experimental results show that many codewords can be rejected efficiently by this added resolution, which has lower dimension and comes earlier in the difference-check order.
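
    The rejection principle behind sum/mean pyramids can be seen at the lowest resolution alone: for k-dimensional vectors, the squared Euclidean distance is bounded below by k times the squared difference of the means, so many codewords can be discarded without a full distance computation. A minimal sketch under that single-level simplification (the paper's multi-level pyramid and check ordering are not reproduced; codeword means would be precomputed in practice):

    ```python
    import numpy as np

    def nearest_codeword_mean_reject(x: np.ndarray, codebook: np.ndarray) -> int:
        """Full-search-equivalent nearest codeword search with mean-based rejection."""
        k = x.size
        mx = x.mean()
        best, best_dist = -1, np.inf
        for i, c in enumerate(codebook):
            lower_bound = k * (mx - c.mean()) ** 2   # ||x - c||^2 >= k * (mean difference)^2
            if lower_bound >= best_dist:
                continue                             # reject without the full distance
            dist = float(np.sum((x - c) ** 2))
            if dist < best_dist:
                best, best_dist = i, dist
        return best
    ```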

  • A Modified Midtread Frequency Quantization Scheme for Digital Phase-Locked Loops

    Heejin ROH  Kyungwhoon CHEUN  

     
    LETTER-Transmission Systems and Transmission Equipment

    Vol: E87-B No:3  Page(s): 752-755

    A novel modified midtread quantizer is proposed for number-controlled oscillator frequency quantization in digital phase-locked loops (DPLLs). We show that DPLLs employing the proposed quantizer provide significantly improved cycle slip performance compared to those employing conventional midtread or midrise quantizers, especially when the number of quantization bits is small and the magnitude of input signal frequency normalized by the quantization interval is less than 0.5.
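
    For reference, conventional midtread and midrise uniform quantizers with step size delta and b bits (the paper's modification is not reproduced here) can be written as:

    ```python
    import numpy as np

    def midtread(x, delta, bits):
        """Midtread quantizer: zero is a reconstruction level."""
        levels = 2 ** (bits - 1)
        q = np.clip(np.round(x / delta), -(levels - 1), levels - 1)
        return q * delta

    def midrise(x, delta, bits):
        """Midrise quantizer: zero lies on a decision boundary."""
        levels = 2 ** (bits - 1)
        q = np.clip(np.floor(x / delta), -levels, levels - 1)
        return (q + 0.5) * delta
    ```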

  • A Fast Codebook Design Algorithm for ECVQ Based on Angular Constraint and Hyperplane Decision Rule

    Ahmed SWILEM  Kousuke IMAMURA  Hideo HASHIMOTO  

     
    PAPER-Image

    Vol: E87-A No:3  Page(s): 732-739

    In this paper, we propose two fast codebook generation algorithms for entropy-constrained vector quantization. The first algorithm uses an angular constraint to reduce the search area and accelerate the search process in codebook design; it employs the projection angles of the vectors with respect to a reference line. The second algorithm uses a suitable hyperplane to partition the codebook and the image data. These algorithms significantly accelerate the codebook design process. Experimental results on image block data show that the new algorithms perform better than previously known methods.

  • A Digital Image Watermarking Method Based on Labeled Bisecting Clustering Algorithm

    Shu-Chuan CHU  John F. RODDICK  Zhe-Ming LU  Jeng-Shyang PAN  

     
    LETTER-Information Security

    Vol: E87-A No:1  Page(s): 282-285

    This paper presents a novel digital image watermarking algorithm based on the labeled bisecting clustering technique. Each cluster is labeled either '0' or '1' based on the labeling key. Each input image block is then assigned to the nearest codeword or cluster centre whose label is equal to the watermark bit. The watermark extraction can be performed blindly. The proposed method is robust to JPEG compression and some spatial-domain processing operations. Simulation results demonstrate the effectiveness of the proposed algorithm.
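
    Stripped of the bisecting clustering itself, the embedding rule described above is a constrained nearest-neighbour assignment; a minimal sketch with hypothetical variable names:

    ```python
    import numpy as np

    def embed_bit(block: np.ndarray, centres: np.ndarray, labels: np.ndarray, bit: int) -> np.ndarray:
        """Replace a flattened image block by the nearest cluster centre whose label equals the watermark bit."""
        # centres: (m, k) cluster centres; labels: (m,) of 0/1 containing both values
        candidates = np.flatnonzero(labels == bit)
        dists = np.sum((centres[candidates] - block) ** 2, axis=1)
        return centres[candidates[np.argmin(dists)]]
    ```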

  • Digital Image Watermarking Method Based on Vector Quantization with Labeled Codewords

    Zhe-Ming LU  Wen XING  Dian-Guo XU  Sheng-He SUN  

     
    LETTER-Applications of Information Security Techniques

    Vol: E86-D No:12  Page(s): 2786-2789

    This Letter presents a novel VQ-based digital image watermarking method. A codeword-labeled codebook is first generated by modifying the conventional GLA algorithm. Each input image block is then reconstructed by the nearest codeword whose label is equal to the watermark bit. The watermark extraction can be performed blindly. Simulation results show that the proposed method is robust to JPEG compression, vector quantization (VQ) compression, and some spatial-domain processing operations.
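
    Blind extraction with a labeled codebook is the mirror image of the embedding: each received block is matched against the whole codebook and the label of its nearest codeword is taken as the recovered bit. A minimal sketch:

    ```python
    import numpy as np

    def extract_bits(blocks: np.ndarray, codebook: np.ndarray, labels: np.ndarray) -> np.ndarray:
        """Recover one watermark bit per block: the label of the nearest codeword."""
        # blocks: (n, k) flattened image blocks; codebook: (m, k); labels: (m,) of 0/1
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return labels[np.argmin(d2, axis=1)]
    ```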

  • A Fast Encoding Method for Vector Quantization Using L1 and L2 Norms to Narrow Necessary Search Scope

    Zhibin PAN  Koji KOTANI  Tadahiro OHMI  

     
    LETTER-Image Processing, Image Pattern Recognition

    Vol: E86-D No:11  Page(s): 2483-2486

    A fast winner search method is proposed that completely separates the codewords of the original codebook into a promising group and an impossible group. Group separation is realized by sorting the L1 and L2 norms independently. As a result, the necessary search scope that guarantees full-search-equivalent PSNR can be limited to the intersection of the two individual promising groups. The high search efficiency is confirmed by experimental results.
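
    Both norm-based bounds underlying the grouping are standard: for k-dimensional vectors, ||x - y||^2 >= (||x||_2 - ||y||_2)^2 and ||x - y||^2 >= (||x||_1 - ||y||_1)^2 / k, so a codeword whose L1 or L2 norm lies too far from that of the input cannot be the winner. A sketch of the rejection test alone (the sorted-group bookkeeping is omitted, and the norms would be precomputed in practice):

    ```python
    import numpy as np

    def can_reject(x: np.ndarray, c: np.ndarray, best_dist: float) -> bool:
        """True if codeword c cannot beat the current best squared distance."""
        k = x.size
        l2_gap = (np.linalg.norm(x) - np.linalg.norm(c)) ** 2       # (||x||2 - ||c||2)^2
        l1_gap = (np.abs(x).sum() - np.abs(c).sum()) ** 2 / k       # (||x||1 - ||c||1)^2 / k
        return max(l2_gap, l1_gap) >= best_dist
    ```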

  • Upper Bounds for Quantization Errors in Digital Subtraction Angiography

    Ali REZA  

     
    PAPER-Medical Engineering

    Vol: E86-D No:11  Page(s): 2463-2471

    Digital Subtraction Angiography (DSA) is a technique used for the enhancement of small details in angiogram imaging systems. In this approach, X-ray images of a subject taken after contrast injection are subtracted from a reference X-ray image taken from the same subject before injection. Due to the exponential absorption property of X-rays, the effects of small details at different depths appear differently on X-ray images, so image subtraction cannot be applied to the original images without adjustment; the proper modification is to apply some form of logarithmic operation to the images before subtraction. In medical imaging systems, the designer can implement this logarithmic operation either in the analog domain, before digitization of the video signal, or in the digital domain, after analog-to-digital conversion (ADC) of the original video signal. In this paper, the difference between these two approaches is studied and upper bounds for the quantization error in both cases are calculated. Based on this study, the best approach for applying the logarithmic function is proposed. The overall effects of the two approaches on the inherent signal noise are also addressed.
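
    The two design choices can be mimicked numerically on a single pixel value: quantize the logarithm of the signal (log taken in the analog domain) versus take the logarithm of an already-quantized signal (log after the ADC). A toy sketch with illustrative values and a hypothetical quantization step:

    ```python
    import numpy as np

    def quantize(x, step):
        """Uniform quantization with step size `step` (a stand-in for the ADC)."""
        return np.round(x / step) * step

    # Normalized X-ray intensities of one pixel before and after injection (illustrative values)
    pre, post = 0.82, 0.47
    step = 1.0 / 256                                  # hypothetical 8-bit quantization step

    log_before_adc = quantize(np.log(post), step) - quantize(np.log(pre), step)
    log_after_adc = np.log(quantize(post, step)) - np.log(quantize(pre, step))
    ```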

  • Memory-Enhanced MMSE Decoding in Vector Quantization

    Heng-Iang HSU  Wen-Whei CHANG  Xiaobei LIU  Soo Ngee KOH  

     
    PAPER-Speech and Hearing

    Vol: E86-D No:10  Page(s): 2218-2222

    An approach to minimum mean-squared error (MMSE) decoding for vector quantization over channels with memory is presented. The decoder is based on the Gilbert channel model that allows the exploitation of both intra- and inter-block correlation of bit error sequences. We also develop a recursive algorithm for computing the a posteriori probability of a transmitted index sequence, and illustrate its performance in quantization of Gauss-Markov sources under noisy channel conditions.
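
    Given the a posteriori probabilities of each transmitted index, the MMSE estimate is the posterior-weighted mean of the codevectors rather than a single codevector. A minimal sketch, assuming the posteriors have already been produced by the recursive algorithm:

    ```python
    import numpy as np

    def mmse_decode(posteriors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        """MMSE reconstruction: x_hat = sum_i P(i | received) * c_i."""
        # posteriors: (m,) probabilities summing to 1 over the m codevectors; codebook: (m, k)
        return posteriors @ codebook
    ```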

  • Iterative Decoding of High Dimensionality Parity Code

    Toshio FUKUTA  Yuuichi HAMASUNA  Ichi TAKUMI  Masayasu HATA  Takahiro NAKANISHI  

     
    PAPER-Coding Theory

    Vol: E86-A No:10  Page(s): 2473-2482

    Given the importance of the traffic on modern communication networks, advanced error correction methods are needed to cope with the changes expected in channel quality. Conventional countermeasures that use high-dimensionality parity codes often fail to provide sufficient error correction capability. We propose a high-dimensionality parity code that is iteratively decoded and provides better error-correcting capability than conventional decoding methods. The proposal uses the steepest descent method to gradually increase code-bit reliability and the coherency between parities and code bits. Furthermore, the quantization of the decoding algorithm is discussed, and it is found that decoding with quantization maintains high error-correcting capability.

  • A Fast Encoding Method for Vector Quantization Based on 2-Pixel-Merging Sum Pyramid Data Structure

    Zhibin PAN  Koji KOTANI  Tadahiro OHMI  

     
    LETTER-Image

    Vol: E86-A No:9  Page(s): 2419-2423

    A fast winner search method for VQ based on a 2-pixel-merging sum pyramid is proposed in order to reject codewords at an earlier stage and reduce the computational burden. The necessary search scope of promising codewords is further narrowed by using sorted real sums. The high search efficiency is confirmed by experimental results.
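
    A 2-pixel-merging sum pyramid for a block can be built by repeatedly summing adjacent pairs of components, keeping exact integer sums at every level; a minimal sketch for a block whose dimension is a power of two (the paper's specific pairing order is not reproduced):

    ```python
    import numpy as np

    def sum_pyramid(vector: np.ndarray) -> list:
        """Levels from the full-resolution vector down to its total sum, merging 2 entries at a time."""
        levels = [vector.astype(np.int64)]       # exact integer sums at every level
        while levels[-1].size > 1:
            v = levels[-1]
            levels.append(v[0::2] + v[1::2])     # merge adjacent pairs
        return levels
    ```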

  • Encoding of Still Pictures by Wavelet Transform with Vector Quantization Using a Rough Fuzzy Neural Network

    Shao-Han LIU  Jzau-Sheng LIN  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E86-D No:9  Page(s): 1896-1902

    In this paper, a color image compression scheme is developed that uses a fuzzy Hopfield-model net based on rough-set reasoning to generate near-optimal codebooks for Vector Quantization (VQ) in the Discrete Wavelet Transform (DWT) domain. The main purpose is to embed a rough-set learning scheme into the fuzzy Hopfield network to construct a compression system named the Rough Fuzzy Hopfield Net (RFHN). First, a color image is decomposed into a 3-D pyramid structure with various frequency bands. Then the RFHN is used to create different codebooks for the various bands. The energy function of the RFHN is defined in terms of the upper- and lower-bound fuzzy membership grades between training samples and codevectors. Finally, near global-minimum codebooks in the frequency domain are obtained when the energy function converges to a stable state. Therefore, only 32/N pixels are selected as training samples when a 3N-dimensional color image is used. In the simulations, the proposed network not only reduces the computation time but also preserves the compression performance.

  • A Hybrid Learning Approach to Self-Organizing Neural Network for Vector Quantization

    Shinya FUKUMOTO  Noritaka SHIGEI  Michiharu MAEDA  Hiromi MIYAJIMA  

     
    PAPER-Neuro, Fuzzy, GA

    Vol: E86-A No:9  Page(s): 2280-2286

    Neural networks for Vector Quantization (VQ), such as K-means, the Neural-Gas (NG) network, and Kohonen's Self-Organizing Map (SOM), have been proposed. K-means, a "hard-max" approach, converges very fast; however, it devotes itself to local search and easily falls into local minima. On the other hand, the NG and SOM methods, which are "soft-max" approaches, have good global search ability. Although NG and SOM come closer to the optimum than K-means, they converge more slowly. In order to overcome the disadvantages that arise when K-means, NG, and SOM are used individually, this paper proposes hybrid methods named NG-K, SOM-K, and SOM-NG. NG-K performs NG adaptation during a short period early in the learning process and then performs K-means adaptation for the rest of the process; SOM-K and SOM-NG are defined analogously. Numerical simulations, including an image compression problem, show that NG-K and SOM-K perform better than the other methods.
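
    A condensed sketch of the NG-K idea under simplified settings (a fixed neighbourhood width, plain Lloyd iterations for the K-means phase; not the authors' exact schedule or parameters):

    ```python
    import numpy as np

    def ng_k(data: np.ndarray, m: int, ng_epochs: int = 5, km_iters: int = 20,
             eps: float = 0.1, lam: float = 2.0, seed: int = 0) -> np.ndarray:
        """Neural-gas adaptation for a short early phase, then K-means (Lloyd) until the end."""
        rng = np.random.default_rng(seed)
        w = data[rng.choice(len(data), m, replace=False)].astype(float)  # initial codebook

        # Phase 1: soft-max neural-gas updates (rank-weighted pull toward each sample)
        for _ in range(ng_epochs):
            for x in data[rng.permutation(len(data))]:
                ranks = np.argsort(np.argsort(((w - x) ** 2).sum(axis=1)))  # 0 = nearest codeword
                w += eps * np.exp(-ranks / lam)[:, None] * (x - w)

        # Phase 2: hard-max K-means refinement
        for _ in range(km_iters):
            assign = ((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
            for j in range(m):
                if np.any(assign == j):
                    w[j] = data[assign == j].mean(axis=0)
        return w
    ```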

  • An Efficient Quantization Watermarking on the Lowest Wavelet Subband

    Yong-Seok SEO  Sanghyun JOO  Ho-Youl JUNG  

     
    LETTER

    Vol: E86-A No:8  Page(s): 2053-2055

    A new method for blind watermarking based on quantization is proposed. The proposed scheme embeds the watermark in the lowest wavelet subband for robustness. Experimental results demonstrate the robustness of the algorithm against compression and other image processing attacks.
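
    The abstract does not spell out the quantization rule, but a common way to embed bits by quantization in the lowest subband is QIM-style: snap each LL coefficient to an even or odd multiple of a step delta depending on the bit. The sketch below is a generic illustration of that idea, not necessarily the authors' exact scheme; it assumes the LL coefficients have already been computed.

    ```python
    import numpy as np

    def qim_embed(coeffs: np.ndarray, bits: np.ndarray, delta: float) -> np.ndarray:
        """Quantize each LL coefficient to an even (bit 0) or odd (bit 1) multiple of delta."""
        # bits: array of 0/1 with the same shape as coeffs
        q = np.round(coeffs / delta - bits / 2.0)
        return (q + bits / 2.0) * delta

    def qim_extract(coeffs: np.ndarray, delta: float) -> np.ndarray:
        """Blind extraction: decide each bit from the parity of the nearest multiple of delta/2."""
        return (np.round(coeffs / (delta / 2.0)).astype(int) % 2)
    ```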

  • Fast Codeword Search Algorithm for Image Vector Quantization Based on Ordered Hadamard Transform

    Zhe-Ming LU  Dian-Guo XU  Sheng-He SUN  

     
    LETTER-Image Processing, Image Pattern Recognition

    Vol: E86-D No:7  Page(s): 1318-1320

    This Letter presents a fast codeword search algorithm based on the ordered Hadamard transform. Before encoding, the ordered Hadamard transform is performed offline on all codewords. During encoding, the ordered Hadamard transform is first applied to the input vector, and a new inequality based on characteristic values of the transformed vectors is then used to reject unlikely transformed codewords. Experimental results show that the algorithm outperforms many recently presented algorithms for high vector dimensionality, especially for high-detail images.

Results 121-140 of 221