
Keyword Search Result

[Keyword] OMP (3945 hits)

Showing results 2881-2900 of 3945

  • Issue Queue Energy Reduction through Dynamic Voltage Scaling

    Vasily G. MOSHNYAGA  

     
    PAPER-Low-Power Technologies

    Vol: E85-C No:2   Page(s): 272-278

    With increased size and issue width, the instruction issue queue has become one of the most energy-consuming units in today's superscalar microprocessors. This paper presents a novel architectural technique to reduce the energy dissipation of an adaptive issue queue, whose functionality is dynamically adjusted at run time to match the changing computational demands of the instruction stream. In contrast to existing schemes, the technique exploits a new degree of freedom in queue design, namely the voltage per access. Since the load capacitance switched in the adaptive queue varies over time, the clock-cycle budget is exploited inefficiently. We propose to trade the unused cycle time for supply voltage, lowering the voltage level when the queue functionality is reduced and raising it as resources in the queue are activated. Experiments show that the approach can save up to 39% of the issue queue energy without large performance or area overhead.
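
    As a rough illustration of why voltage-per-access scaling saves energy (a back-of-the-envelope sketch, not the paper's circuit-level scheme), the fragment below assumes dynamic energy per access scales as C·Vdd² and applies a crude two-level voltage policy to a made-up occupancy trace; all numbers are illustrative.

```python
# Back-of-the-envelope sketch (not the paper's circuit-level scheme):
# dynamic energy of an adaptive issue queue under voltage-per-access
# scaling, assuming E ~ C * Vdd^2. All numbers are illustrative.

def queue_energy(occupancy_trace, v_nom=1.8, v_low=1.2, c_per_entry=1.0):
    """occupancy_trace: per-cycle fraction of the queue that is active.
    Returns (fixed-voltage energy, voltage-scaled energy), arbitrary units."""
    e_fixed = e_scaled = 0.0
    for occ in occupancy_trace:
        cap = c_per_entry * occ                      # capacitance switched
        e_fixed += cap * v_nom ** 2
        vdd = v_low if occ < 0.5 else v_nom          # crude two-level policy
        e_scaled += cap * vdd ** 2
    return e_fixed, e_scaled

if __name__ == "__main__":
    trace = [0.3, 0.4, 0.9, 0.2, 0.8, 0.35, 0.25, 1.0]   # toy occupancy trace
    e_fix, e_dvs = queue_energy(trace)
    print(f"energy saving: {100 * (1 - e_dvs / e_fix):.1f}%")
```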

  • An Efficient Laplacian-Model Based Dequantization for Uniformly Quantized DCT Coefficients

    Kwang-Deok SEO  Kook-Yeol YOO  Jae-Kyoon KIM  

     
    LETTER-Image Processing, Image Pattern Recognition

    Vol: E85-D No:2   Page(s): 421-425

    Quantization is the essential step that leads to compression in the discrete cosine transform (DCT) domain. In this paper, we show how a statistically non-optimal uniform quantizer can be improved by employing an efficient reconstruction method. For this purpose, we estimate the probability density function (PDF) of the original DCT coefficients in the decoder. By applying the estimated PDF to the reconstruction process, the dequantization distortion can be reduced. The proposed method can be used in practically any application where uniform quantizers are employed. In particular, it can be used with the quantization schemes of the JPEG and MPEG coding standards.
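
    A minimal sketch of the general idea, not necessarily the authors' exact estimator: reconstruct each uniformly quantized DCT coefficient at the centroid of an assumed Laplacian PDF over its bin rather than at the bin midpoint. The rate parameter `lam` below is a stand-in that would in practice be estimated from the decoded coefficients.

```python
# Sketch of Laplacian-model dequantization (illustrative, not the paper's
# exact estimator): reconstruct a quantized DCT coefficient at the centroid
# of an assumed Laplacian PDF over its quantization bin, not the midpoint.
import numpy as np

def laplacian_centroid_dequant(level, q_step, lam):
    """level: integer quantizer output; q_step: uniform step size;
    lam: Laplacian rate parameter (assumed estimated at the decoder)."""
    if level == 0:
        return 0.0
    lo = (abs(level) - 0.5) * q_step          # bin boundaries
    hi = (abs(level) + 0.5) * q_step
    x = np.linspace(lo, hi, 2001)
    pdf = 0.5 * lam * np.exp(-lam * x)        # Laplacian tail over the bin
    centroid = np.sum(x * pdf) / np.sum(pdf)  # discrete bin centroid
    return np.sign(level) * centroid

# Midpoint reconstruction would give 2 * 8 = 16; the centroid is pulled
# toward zero because the assumed PDF decays across the bin.
print(laplacian_centroid_dequant(level=2, q_step=8.0, lam=0.1))
```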

  • Highly Stable and Low Phase-Noise Oven-Controlled Crystal Oscillators (OCXOS) Using Dual-Mode Excitation

    Yasuaki WATANABE  Kiyoharu OZAKI  Shigeyoshi GOKA  Takayuki SATO  Hitoshi SEKIMOTO  

     
    PAPER

    Vol: E85-A No:2   Page(s): 329-334

    A highly stable oven-controlled crystal oscillator (OCXO) with low phase-noise characteristics has been developed using a dual-mode SC-cut quartz crystal oscillator. The OCXO uses a conventional oven-control system for coarse compensation and a digital-correction system, which uses the B-mode signal of the SC-cut resonator as a temperature sensor, for fine compensation. Combining these two forms of compensation greatly improves the stability of the C-mode frequency without requiring a double-oven system. The experimental results indicated that the frequency stability of the proposed OCXO, including the frequency-temperature hysteresis, is ten times better than that of a conventional, free-running OCXO. The results also indicated that the proposed OCXO has good frequency retraceability and low phase-noise characteristics.

  • An Integrable Image Rejection System Using a Complex Analog Filter with Variable Bandwidth and Center Frequency Characteristics

    Cosy MUTO  Hiroshi HOSHIKAWA  

     
    PAPER

    Vol: E85-A No:2   Page(s): 309-315

    In this paper, we discuss an IF image-rejection system with variable bandwidth and center frequency. The system consists of a pair of frequency mixers driven by a complex sinusoid and a complex analog filter. By employing a complex leapfrog structure in an OTA-C configuration together with a frequency transformation from the normalized LPF, the proposed system provides variable bandwidth and center-frequency characteristics. SPICE simulations show that more than 43 dB of image rejection is achieved for 6 kHz and 12 kHz bandwidths at a 50 kHz IF.
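
    The tunability rests on the standard lowpass-to-complex-bandpass frequency transformation (a textbook relation stated here for context, not quoted from the paper): a normalized lowpass prototype H_LP(s) is shifted to the IF and scaled to the desired bandwidth, so retuning only changes the substitution parameters.

```latex
% Normalized LPF prototype -> one-sided (image-rejecting) bandpass of
% bandwidth B centered at the IF \omega_0 (textbook transformation):
H_{BP}(s) \;=\; H_{LP}\!\left(\frac{s - j\omega_0}{B/2}\right),
\qquad \omega_0 = 2\pi f_{\mathrm{IF}}, \quad B = 2\pi \times \mathrm{bandwidth}.
```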

  • Enhanced Mutual Exclusion Algorithm for Mobile Computing Environments

    Hyun Ho KIM  Sang Joon AHN  Tai Myoung CHUNG  Young Ik EOM  

     
    PAPER-Algorithms

    Vol: E85-D No:2   Page(s): 350-361

    A mobile computing system is a set of functions in a distributed environment organized to support mobile hosts. In this environment, mobile hosts should be able to move without any constraints and should remain connected to the network even while moving. They should also be able to obtain the necessary information regardless of their current location and the time. Distributed mutual exclusion methods for supporting distributed algorithms have hitherto been designed only for networks with static hosts. With the emergence of mobile computing environments, however, a new distributed mutual exclusion method needs to be developed that integrates mobile hosts with the underlying distributed systems. In this respect, many of the issues to be considered stem from three essential properties of mobile computing systems: wireless communication, portability, and mobility. Thus far, distributed mutual exclusion methods for mobile computing environments have been designed on the basis of a token-ring structure, which has the drawback of incurring high costs to locate mobile hosts. In this paper, we propose a distributed mutual exclusion method that reduces such costs by organizing the entire system as a tree-based logical structure, together with recovery schemes that can be applied when a node failure occurs. Finally, we evaluate the operation costs of the mutual exclusion scheme and the recovery scheme.
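
    The abstract does not spell out the protocol, so the following is only a generic illustration of token-based mutual exclusion on a logical tree (in the spirit of Raymond-style schemes, not the authors' algorithm): the token travels along the unique tree path from its current holder to the requester, so only the hosts on that path need to be located.

```python
# Generic illustration of token-based mutual exclusion on a logical tree
# (in the spirit of Raymond-style schemes; NOT the authors' protocol).
# The token moves along the unique tree path from its current holder to
# the requesting host, so only the hosts on that path are contacted.
from collections import deque

class TreeMutex:
    def __init__(self, edges, initial_holder):
        self.adj = {}
        for u, v in edges:
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)
        self.holder = initial_holder

    def _path(self, src, dst):
        """BFS path from src to dst along the logical tree."""
        prev = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nxt in self.adj[node]:
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = prev[node]
        return list(reversed(path))

    def request_cs(self, host):
        """Route the token to `host`; returns the hosts that forwarded it."""
        hops = self._path(self.holder, host)
        self.holder = host            # token handed over hop by hop
        return hops

mutex = TreeMutex(edges=[("A", "B"), ("B", "C"), ("B", "D")], initial_holder="A")
print(mutex.request_cs("D"))   # ['A', 'B', 'D'] -- only the path is involved
```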

  • A Gray Level Watermarking Algorithm Using Double Layer Hidden Approach

    Shih-Chang HSIA  I-Chang JOU  Shing-Ming HWANG  

     
    PAPER-Information Security

    Vol: E85-A No:2   Page(s): 463-471

    Watermarking techniques are widely used to protect secret documents. Most of the literature concentrates on binary watermark data and extracts the watermark by comparing the original image with the watermarked image. In this paper, an efficient watermarking algorithm with two-layer hiding is presented for gray-level image watermarking. In the first layer, the key information is derived based on a codebook concept. In the second layer, the secret key is further hidden in the watermarked image using an encryption based on spatial distribution. The simulations demonstrate that the watermarking information is perceptually invisible in the watermarked image. Moreover, the gray-level watermark can be extracted by referring to key parameters rather than to the original image, and the extraction quality is very good.

  • Sub-100 fs Higher Order Soliton Compression in Dispersion-Flattened Fibers

    Masahiro TSUCHIYA  Koji IGARASHI  Satoshi SAITO  Masato KISHI  

     
    INVITED PAPER-Optical Pulse Compression, Control and Monitoring

    Vol: E85-C No:1   Page(s): 141-149

    We review recent progress in our studies of fiber-optic soliton compression and related subjects, with special emphasis on dispersion-flattened fibers (DFFs). Regarding ultimately short pulse generation, compression of 5 ps laser-diode pulses down to 20 fs has been demonstrated with a single-stage, 15.1-m-long step-like dispersion-profiled fiber. The compression was brought about by a series of higher-order soliton processes in conjunction with a single, ordinary erbium-doped fiber preamplifier, and the DFF contained at its end played a major role. We have performed intensive investigations of the DFF compression mechanisms in the 100-20 fs range. A fairly reliable model was developed for higher-order soliton propagation along a DFF in the temporal range from 100 down to 30 fs by taking into account the higher-order nonlinear and dispersion effects as well as the dependence on the incident pulse shape. Through the simulations, parametric spectrum generation originating from the modulation-instability gain was identified at frequencies away from the pump frequency, which agrees with the experimental observations. A possible application of this effect is also discussed.
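
    For context, the strength of the higher-order soliton effects is usually characterized by the soliton order N, a standard fiber-optics relation rather than a result of this paper; a flattened, small |β₂| (as in a DFF) therefore supports large-N solitons at moderate peak power.

```latex
% Standard definition of the soliton order (textbook relation, given for
% context): peak power P_0, pulse width T_0, nonlinear coefficient \gamma,
% group-velocity dispersion \beta_2.
N^2 \;=\; \frac{\gamma P_0 T_0^{\,2}}{\lvert \beta_2 \rvert}
```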

  • On Finding Feasible Solutions for the Group Multicast Routing Problem

    Chor Ping LOW  Ning WANG  

     
    PAPER-Network

    Vol: E85-B No:1   Page(s): 268-277

    In this paper we address the problem of finding feasible solutions to the Group Multicast Routing Problem (GMRP). This problem is a generalization of the multicast routing problem in which every member of the group is allowed to multicast messages to the other members of the same group. The routing problem involves constructing a set of low-cost multicast trees that satisfy the bandwidth requirements of all the group members in the network. We first prove that the problem of finding feasible solutions to GMRP is NP-complete. We then propose a new heuristic algorithm for constructing feasible solutions to GMRP. Simulation results show that the proposed algorithm achieves good performance in terms of its ability to find a feasible solution whenever one exists.

  • Recent Studies on InGaAsP and TiO2/Si Planar Asymmetric Coupled Waveguides as Dispersion Compensators

    Yong LEE  

     
    PAPER

    Vol: E85-C No:1   Page(s): 190-194

    Two planar asymmetric coupled waveguides were fabricated by using different materials (InGaAsP and TiO2/Si) and tested as dispersion compensators (or pulse compressors). Compression of a more-than-10-ps chirped pulse is experimentally demonstrated by using an InGaAsP planar asymmetric coupled waveguide whose group velocity dispersion (GVD) is enhanced by structural optimization and is spectrally tuned to an input pulse as precisely as possible. A large polarization dependence of the pulse compression was also observed and indicates that the observed pulse compression results from dispersion compensation due to the GVD associated with supermodes. A new planar, asymmetric coupled waveguide with a large difference in refractive indices of the two waveguides was fabricated by using a combination of dielectric (TiO2) and semiconductor (Si) materials in order to obtain better GVD characteristics than semiconductor (for example, InGaAsP) asymmetric coupled waveguides. A preliminary experiment on pulse compression using the TiO2/Si planar asymmetric coupled waveguide was conducted. A 2.8-ps blue chirped pulse was compressed down to about 1 ps by a 1-mm-long waveguide (compression ratio: 0.375, which is better than those of the previous InGaAsP planar asymmetric coupled waveguides). This compression ratio agrees well with a theoretical result obtained by a numerical model based on a supermode's GVD.

  • Detection of Calcifications in Digitized Mammograms Using Modification of Wavelet Packet Transform Coefficients

    Werapon CHIRACHARIT  Kosin CHAMNONGTHAI  

     
    PAPER-Image Processing

    Vol: E85-D No:1   Page(s): 96-107

    This paper presents a method for detecting calcifications, an important early sign of breast cancer, in mammograms. Because calcifications lie in an inhomogeneous background corrupted by noise, they are hard to detect. The method uses the wavelet packet transform (WPT) to eliminate the background image associated with the low-frequency components. However, very high-frequency noise coexists with the calcifications and is hard to suppress. Since calcification locations appear as vertical, horizontal, and diagonal edges in the time-frequency domain, edges detected in the spatial domain can be utilized as a filter for noise suppression. The inverse-transformed image then contains only the required information. A free-response operating characteristic (FROC) curve is used to evaluate the performance of the proposed method on thirty images containing calcifications. The results show an 82.19 percent true-positive detection rate at a cost of 6.73 false positives per image.
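
    A simplified sketch of the two processing ideas in the abstract, using a plain 2-D DWT from PyWavelets in place of the full wavelet packet transform and a Sobel gradient as the spatial edge filter; the wavelet, decomposition level, and percentile threshold are illustrative assumptions.

```python
# Simplified sketch (not the authors' implementation): suppress the
# low-frequency background with a 2-D wavelet decomposition (plain DWT
# here, standing in for the wavelet packet transform) and keep only
# high-pass responses that coincide with spatial edges.
import numpy as np
import pywt
from scipy import ndimage

def enhance_calcifications(img, wavelet="db2", level=3, edge_pct=90):
    img = img.astype(float)
    # 1) Background elimination: zero the coarsest approximation band.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])
    highpass = pywt.waverec2(coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    # 2) Noise suppression: keep responses only where spatial edges exist.
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    edge_mask = grad > np.percentile(grad, edge_pct)
    return highpass * edge_mask

if __name__ == "__main__":
    roi = np.random.rand(256, 256)        # stand-in for a mammogram ROI
    print(enhance_calcifications(roi).shape)
```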

  • Efficient Analysis of Electromagnetic Coupling Problem via Aperture into Parallel Plate Waveguide and Its Application to Electromagnetic Pulse (EMP) Coupling

    Young-Soon LEE  Jong-Kyu KIM  Young-Ki CHO  

     
    PAPER-Electromagnetic Theory

    Vol: E85-C No:1   Page(s): 212-218

    A numerically efficient analysis method, combining closed-form Green's functions with the method of moments (MoM) of the mixed potential integral equation (MPIE) approach, is considered for the electromagnetic coupling problem through an aperture into a parallel plate waveguide (PPW), as a complementary problem to the microstrip patch structure problem, and then applied to the electromagnetic pulse (EMP) penetration problem. Some discussion on the advantages of the present method is also presented from the perspective of computational electromagnetics.

  • Complexity Scalability for ACELP and MP-MLQ Speech Coders

    Fu-Kun CHEN  Jar-Ferr YANG  Yu-Pin LIN  

     
    PAPER-Speech and Hearing

    Vol: E85-D No:1   Page(s): 255-263

    For multimedia communications, the computational scalability of a multimedia codec is required to match different working platforms and integrated services of media sources. In this paper, two condensed stochastic-codebook search approaches are proposed to progressively reduce the computation required by the algebraic code-excited linear predictive (ACELP) and multi-pulse maximum-likelihood quantization (MP-MLQ) coders. By reducing the number of codebook candidates before the search procedure, the proposed methods effectively diminish the computation required by the ITU-T G.723.1 dual-rate speech coder. Simulation results show that the proposed methods save over 50 percent of the stochastic-codebook search computation with perceptually negligible degradation in speech quality.
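
    A toy illustration of the candidate-pruning idea (not the G.723.1 search routine itself): rank the stochastic codevectors with a cheap correlation score, keep only the top-K, and apply the full energy-normalized matching criterion to that reduced set.

```python
# Toy illustration of condensed codebook search (not the G.723.1 routine):
# rank codevectors by a cheap correlation score, keep only the top-K
# candidates, and run the full energy-normalized match on those.
import numpy as np

def condensed_search(target, codebook, keep=16):
    """target: (N,) residual to match; codebook: (L, N) codevectors.
    Returns the index of the best codevector among a pruned candidate set."""
    corr = codebook @ target                       # cheap pre-selection score
    candidates = np.argsort(np.abs(corr))[-keep:]  # keep the top-K only
    energy = np.sum(codebook[candidates] ** 2, axis=1) + 1e-12
    score = corr[candidates] ** 2 / energy         # full matching criterion
    return candidates[int(np.argmax(score))]

rng = np.random.default_rng(0)
cb = rng.standard_normal((256, 40))                # 256 codevectors, 40 samples
tgt = rng.standard_normal(40)
full = int(np.argmax((cb @ tgt) ** 2 / np.sum(cb ** 2, axis=1)))
print(condensed_search(tgt, cb, keep=16), "vs full search:", full)
```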

  • Measuring the Degree of Reusability of the Components by Rough Set and Fuzzy Integral

    WanKyoo CHOI  IlYong CHUNG  SungJoo LEE  

     
    PAPER-Software Engineering

    Vol: E85-D No:1   Page(s): 214-220

    Previous research has measured the effort required to understand and adapt components on the basis of component complexity, a general criterion related to the intrinsic quality of the component to be adapted and understood. Such approaches, however, do not consider the significance of the measurement attributes, so the user must judge the reusability of similar components on his own. In this paper, we therefore propose a new method that measures the DOR (Degree Of Reusability) of components by taking the significance of the measurement attributes into account. We calculate the relative significance of the attributes by using rough set theory and integrate the significance with the measurement values by using Sugeno's fuzzy integral. Lastly, we apply our method to source-code components and show through statistical techniques that it can be used as an ordinal and ratio scale.
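
    A minimal sketch of how per-attribute measurements might be aggregated with Sugeno's fuzzy integral; the fuzzy measure used below (the normalized sum of rough-set significances of the selected attributes) is an assumption made for illustration, not the paper's measure.

```python
# Sketch of aggregating per-attribute measurements with Sugeno's fuzzy
# integral. The fuzzy measure g used here (normalized sum of rough-set
# significances) is an illustrative assumption, not the paper's measure.
def sugeno_integral(values, significances):
    """values[i]: normalized measurement of attribute i in [0, 1];
    significances[i]: its (rough-set-derived) significance weight."""
    total = sum(significances)
    # Sort attributes by measurement value, descending.
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    best = 0.0
    chosen = []
    for i in order:
        chosen.append(i)
        g = sum(significances[j] for j in chosen) / total   # measure of top set
        best = max(best, min(values[i], g))
    return best

# Three measurement attributes of a component, each scaled to [0, 1],
# with rough-set significances 0.5, 0.3, 0.2 (illustrative numbers).
print(sugeno_integral([0.9, 0.4, 0.7], [0.5, 0.3, 0.2]))   # -> 0.7
```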

  • Evaluation of the Response Function and Its Space Dependence in Chirp Pulse Microwave Computed Tomography (CP-MCT)

    Michio MIYAKAWA  Kentaroh ORIKASA  Mario BERTERO  

     
    PAPER-Measurement Technology

    Vol: E85-D No:1   Page(s): 52-59

    In Chirp-Pulse Microwave Computed Tomography (CP-MCT), the images are affected by a blur that is inherent to the measurement principle and is described by a space-variant point spread function (PSF). In this paper we investigate the PSF of CP-MCT, including its space dependence, both experimentally and computationally. The experimental evaluation is performed by measuring the projections of a target consisting of a thin low-loss dielectric rod surrounded by a saline solution and placed at various positions in the measuring region. The theoretical evaluation is obtained by computing the projections of the same target via a numerical solution of Maxwell's equations. Since CP-MCT uses a chirp signal, the numerical evaluation is carried out with an FD-TD method. The projections of the rod could be obtained by computing the field over the sweep time of the chirp signal for each position of the receiving antenna; since this procedure is extremely time consuming, we instead compute the impulse response of the system by exciting the transmitting antenna with a wide-band Gaussian pulse. The signal transmitted in CP-MCT is then obtained by convolving, in the time domain, the input chirp pulse with the impulse response of the system. We find good agreement between the measured and computed PSFs. The validity of the computed PSF is verified in three distinct ways, and its usefulness is shown by a remarkable improvement in the restoration of CP-MCT images. Knowledge of the space-variant PSF will be utilized for more accurate image deblurring in CP-MCT.
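
    A small sketch of the computational shortcut described above: compute the impulse response once (here a made-up two-path stand-in for the FD-TD result), then obtain the received CP-MCT signal by convolving the transmitted chirp with it instead of re-simulating the whole sweep.

```python
# Sketch of the shortcut described above: instead of simulating the full
# chirp sweep, convolve the transmitted chirp with the system's impulse
# response (here a made-up two-path stand-in for the FD-TD result).
import numpy as np
from scipy.signal import chirp

fs = 10e6                                  # sample rate (illustrative)
t = np.arange(0, 2e-3, 1 / fs)             # 2 ms sweep
tx = chirp(t, f0=1e3, t1=t[-1], f1=1e6)    # transmitted chirp (baseband stand-in)

# Toy impulse response: a direct path plus a delayed, attenuated echo.
h = np.zeros(400)
h[10] = 1.0
h[250] = 0.35

rx = np.convolve(tx, h)[:len(t)]           # received signal = tx * h
print(rx.shape, np.max(np.abs(rx)))
```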

  • Ultrahigh-Speed OTDM Transmission beyond 1 Tera Bit-Per-Second Using a Femtosecond Pulse Train

    Masataka NAKAZAWA  Takashi YAMAMOTO  Koichi Robert TAMURA  

     
    INVITED PAPER-OTDM Transmission System, Optical Regeneration and Coding

    Vol: E85-C No:1   Page(s): 117-125

    Progress on single-wavelength-channel terabit/s OTDM transmission is described. In particular, we focus on the 1.28 Tbit/s OTDM transmission over 70 km that we realized recently. A pre-chirping technique using a high-speed phase modulator, which simultaneously compensates for third- and fourth-order dispersion, is emphasized. The input pulse width was 380 fs, and the pulse broadening after the 70 km transmission was as small as 20 fs. All 128 channels time-division-demultiplexed to 10 Gbit/s had a bit error rate of less than 1 × 10^-9, for which we employed a number of new techniques for pulse generation, dispersion compensation, and demultiplexing. These techniques help pave the way for OTDM technology in the 21st century.

  • A Lossless Image Compression for Medical Images Based on Hierarchical Sorting Technique

    Atsushi MYOJOYAMA  Tsuyoshi YAMAMOTO  

     
    PAPER-Image Processing

    Vol: E85-D No:1   Page(s): 108-114

    We propose a new lossless compression method for medical images based on a hierarchical sorting technique. Hierarchical sorting achieves a high compression ratio by detecting regions where the image pattern varies abruptly and sorting the pixels by value to increase predictability. The sorting accuracy can be controlled along with the table size and complexity; as a result, the permutation tables can be kept small and reused for other image regions. A comparison using an experimental implementation of the method shows better performance on medical image sets acquired with X-ray CT and MRI instruments, in which similar sub-block patterns appear frequently. The technique applies a quad-tree division method to divide an image into blocks in order to support progressive decoding and fast preview of large images.
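
    A toy sketch of the sort-then-predict idea on a single block (not the paper's full hierarchical/quad-tree scheme): sorting the pixels makes the sequence monotone, so a simple difference predictor leaves small residuals, at the price of also storing the permutation table.

```python
# Toy sketch of the sort-then-predict idea on one block (not the paper's
# full hierarchical / quad-tree scheme): sorting makes the pixel sequence
# monotone so differences are small, at the cost of a permutation table.
import numpy as np

def sort_and_delta(block):
    flat = block.ravel()
    perm = np.argsort(flat, kind="stable")       # permutation table (overhead)
    sorted_vals = flat[perm]
    residuals = np.diff(sorted_vals, prepend=0)  # diffs of sorted values
    return residuals, perm

def restore(residuals, perm, shape):
    sorted_vals = np.cumsum(residuals)
    flat = np.empty_like(sorted_vals)
    flat[perm] = sorted_vals                     # undo the sort
    return flat.reshape(shape)

rng = np.random.default_rng(1)
block = rng.integers(0, 4096, size=(8, 8))       # 12-bit CT-like block
res, perm = sort_and_delta(block)
assert np.array_equal(restore(res, perm, block.shape), block)
print("largest pixel-to-pixel residual after sorting:", int(res[1:].max()))
```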

  • Proposal of a Nodule Density-Enhancing Filter for Plain Chest Radiographs on the Basis of the Thoracic Wall Outline Detected by Hough Transformation

    Tetsuo SHIMADA  Naoki KODAMA  Hideya SATOH  Kei HIWATASHI  Takuya ISHIDA  Yoshitaka NISHIMURA  Ichiroh FUKUMOTO  

     
    PAPER-Image Processing

    Vol: E85-D No:1   Page(s): 88-95

    In screening for primary lung cancer with plain chest radiography, computer-aided diagnosis systems are being developed to reduce chest radiologists' workload and the risk of missing positive cases. We evaluated a difference filter that enhances nodule densities in the preprocessing of chest X-ray images. Since ribs often interfere with the detection of pulmonary nodules, we designed an eye-shaped filter to fit the rib shape. Although this filter increased the nodule detection rate, it could not detect nodules near the thoracic wall. The thoracic wall was therefore outlined automatically using the Hough transform for line detection, and the direction of the eye-shaped filter was determined on the basis of this outline. With this technique, the filter was not affected by considerable changes in the shape of anatomical structures, such as the ribs and the thoracic wall, and could detect pulmonary nodules regardless of their location.
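
    A compact sketch of the line-detection step only, using OpenCV's standard Hough transform on a Canny edge map with illustrative thresholds; this is not the authors' full pipeline, but it shows how straight-edge candidates for the thoracic wall outline can be extracted.

```python
# Compact sketch of the line-detection step only (standard OpenCV calls
# with illustrative thresholds; not the authors' full pipeline).
import cv2
import numpy as np

def wall_line_candidates(chest_u8, n_lines=3):
    """Return up to n_lines (rho, theta) pairs for straight-edge candidates,
    e.g. for the thoracic wall outline, from an 8-bit chest image."""
    edges = cv2.Canny(chest_u8, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return []
    return [tuple(line[0]) for line in lines[:n_lines]]

if __name__ == "__main__":
    img = np.zeros((256, 256), dtype=np.uint8)
    cv2.line(img, (40, 0), (80, 255), 255, 2)    # synthetic "wall" edge
    print(wall_line_candidates(img))
```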

  • Visualization of Interval Changes of Pulmonary Nodules Using High-Resolution CT Images

    Yoshiki KAWATA  Noboru NIKI  Hironobu OHMATSU  Noriyuki MORIYAMA  

     
    PAPER-Image Processing

    Vol: E85-D No:1   Page(s): 77-87

    This paper presents a method for volumetrically analyzing the evolution of pulmonary nodules in order to discriminate between malignant and benign nodules. Our method consists of four steps: (1) 3-D rigid registration of two successive 3-D thoracic CT images, (2) 3-D affine registration of the two corresponding region-of-interest (ROI) images, (3) non-rigid registration between the local volumetric ROIs, and (4) analysis of the local displacement field between the successive temporal images. In a preliminary study, the method was applied to successive 3-D thoracic images of two pulmonary nodules, a metastatic malignant nodule and an inflammatory benign nodule, to quantify the evolution of the nodules and their surrounding structures. The time intervals between the successive 3-D thoracic images for the benign and malignant cases were 150 and 30 days, respectively. From the displayed displacement fields and the image contrasted by the Jacobian-based vector-field operator, it was observed that the benign nodule shrank in volume and drew the surrounding structure into itself, whereas the malignant nodule expanded in volume. These experimental results indicate that our method is a promising tool for quantifying how lesions evolve in volume and affect their surrounding structures.
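
    A small sketch of a Jacobian-based volume-change map for a displacement field (a generic formulation, not necessarily the authors' exact operator): det(I + ∇u) below 1 indicates local shrinkage, as observed for the benign nodule, and above 1 indicates growth.

```python
# Sketch of a Jacobian-based volume-change map for a 3-D displacement
# field u(x) (generic formulation; not necessarily the authors' exact
# operator): det(I + grad u) < 1 means local shrinkage, > 1 means growth.
import numpy as np

def jacobian_determinant(u):
    """u: displacement field with shape (3, Z, Y, X) in voxel units."""
    grads = np.stack([np.stack(np.gradient(u[c]), axis=0) for c in range(3)])
    # grads[c, d] = d u_c / d x_d ; build J = I + grad u at every voxel.
    jac = np.eye(3)[:, :, None, None, None] + grads
    # Move the 3x3 axes to the end so np.linalg.det works voxel-wise.
    jac = np.moveaxis(jac, (0, 1), (-2, -1))
    return np.linalg.det(jac)

# Toy field: uniform 1% contraction toward the volume center.
z, y, x = np.meshgrid(np.arange(32), np.arange(32), np.arange(32), indexing="ij")
center = 15.5
u = -0.01 * np.stack([z - center, y - center, x - center])
print(jacobian_determinant(u).mean())   # ~ 0.99**3 = 0.9703 (shrinkage)
```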

  • Image Enhancement with Attenuated Blocking Artifact in Transform Domain

    Sung Kon OH  Jeong Hyun YOON  Yong Man RO  

     
    LETTER-Image Processing, Image Pattern Recognition

    Vol: E85-D No:1   Page(s): 291-297

    Image processing in the transform domain has many advantages, but it can suffer from local effects such as blocking artifacts. In this paper, image processing is performed by weighting coefficients in the compressed domain; that is, the filtering coefficients are selected appropriately for the desired processing. Since the appropriate factors are found with respect to global image enhancement, blocking artifacts between blocks are reduced. Experimental results show that the proposed technique has the advantages of simple computation and easy implementation.
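
    A tiny sketch of transform-domain enhancement by coefficient weighting (a generic block-DCT sharpening weight, not the paper's selection rule): each 8×8 block is transformed, its higher-frequency coefficients are scaled up, and the block is inverse-transformed.

```python
# Tiny sketch of transform-domain enhancement by weighting block-DCT
# coefficients (a generic sharpening weight; not the paper's selection rule).
import numpy as np
from scipy.fft import dctn, idctn

def enhance_blocks(img, boost=0.3, block=8):
    h, w = img.shape                      # assumed divisible by `block`
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    # DC weight stays at 1; higher frequencies are boosted up to 1 + boost.
    weights = 1.0 + boost * (u + v) / (2 * (block - 1))
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(img[i:i+block, j:j+block], norm="ortho")
            out[i:i+block, j:j+block] = idctn(coeffs * weights, norm="ortho")
    return out

if __name__ == "__main__":
    print(enhance_blocks(np.random.rand(64, 64)).shape)
```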

  • A Random Walk through Eigenspace

    Matthew TURK  

     
    INVITED PAPER

    Vol: E84-D No:12   Page(s): 1586-1595

    It has been over a decade since the "Eigenfaces" approach to automatic face recognition, and other appearance-based methods, made an impression on the computer vision research community and helped spur interest in vision systems used to support biometrics and human-computer interfaces. In this paper I give a personal view of the original motivation for the work, some of the strengths and limitations of the approach, and progress in the years since. Appearance-based approaches to recognition complement feature- or shape-based approaches, and a practical face recognition system should have elements of both. Eigenfaces is not a general approach to recognition, but rather one tool out of many to be applied and evaluated in the appropriate context.
