
Keyword Search Result

[Keyword] (42,756 hits)

Showing 39861-39880 of 42,756 hits

  • A Practical Algorithm for Computing the Roundness

    Hiroyuki EBARA  Noriyuki FUKUYAMA  Hideo NAKANO  Yoshiro NAKANISHI  

    PAPER-Algorithm and Computational Complexity
    Vol: E75-D No:3, Page(s): 253-257

    Roundness is one of the most important geometric measures for circular objects in the process of mechanical assembly. It is the amount of variation in a circular shape that can be permitted. To compute roundness, the authors have previously proposed an exact polynomial-time algorithm with time complexity O(n²). In this paper, we show that this roundness algorithm can be made more efficient in practical applications by deleting unnecessary points. Computational experience with the revised algorithm is also presented.
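
    As a hedged illustration of the quantity involved (a brute-force sketch, not the authors' O(n²) algorithm), the following Python fragment evaluates roundness as the minimum annulus width, the difference between the largest and smallest distance from a common center, over candidate centers found by grid refinement; all parameters are illustrative.

        # Brute-force roundness: minimize (max - min) distance to a center.
        # This only illustrates the measure; it is NOT the paper's algorithm.
        import numpy as np

        def annulus_width(center, pts):
            d = np.linalg.norm(pts - center, axis=1)
            return d.max() - d.min()

        def roundness(pts, iters=6, grid=21):
            lo, hi = pts.min(axis=0), pts.max(axis=0)
            for _ in range(iters):
                xs = np.linspace(lo[0], hi[0], grid)
                ys = np.linspace(lo[1], hi[1], grid)
                cand = np.array([(x, y) for x in xs for y in ys])
                widths = np.array([annulus_width(c, pts) for c in cand])
                best = cand[widths.argmin()]
                span = (hi - lo) / grid      # shrink the window around best
                lo, hi = best - span, best + span
            return annulus_width(best, pts), best

        # noisy samples of a circle of radius 10
        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 200)
        pts = np.c_[10 * np.cos(t), 10 * np.sin(t)] + rng.normal(0, 0.05, (200, 2))
        print(roundness(pts)[0])             # close to the true out-of-roundness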

  • FOREWORD

    Shin-ichi MURAKAMI  

    FOREWORD
    Vol: E75-B No:5, Page(s): 307-308

  • A Model for the Development of the Spatial Structure of Retinotopic Maps and Orientation Columns

    Klaus OBERMAYER  Helge RITTER  Klaus J. SCHULTEN  

    INVITED PAPER
    Vol: E75-A No:5, Page(s): 537-545

    Topographic maps are beginning to be recognized as one of the major computational structures underlying neural computation in the brain. They provide dimension-reducing projections between feature spaces that seem to be established and maintained under the participation of self-organizing, adaptive processes. In this contribution, we investigate how well the structure of such maps can be replicated by simple adaptive processes of the kind proposed by Kohonen. We particularly address the important issue of how the dimensionality of the input space affects the spatial organization of the resulting map.
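
    For readers unfamiliar with the adaptive process in question, the sketch below trains a minimal Kohonen-style self-organizing map in Python; the map shape, learning rate, and neighborhood schedule are illustrative assumptions, not values from the paper.

        # Minimal Kohonen SOM: a 2-D sheet of units adapts its weight vectors
        # to inputs, forming a dimension-reducing topographic projection.
        import numpy as np

        def train_som(data, shape=(16, 16), epochs=2000, lr0=0.5, sigma0=6.0, seed=0):
            rng = np.random.default_rng(seed)
            gy, gx = np.mgrid[0:shape[0], 0:shape[1]]
            grid = np.c_[gy.ravel(), gx.ravel()].astype(float)  # unit positions
            W = rng.uniform(data.min(), data.max(), (len(grid), data.shape[1]))
            for t in range(epochs):
                frac = t / epochs
                lr = lr0 * (1 - frac)                     # decaying learning rate
                sigma = sigma0 * (0.05 / sigma0) ** frac  # shrinking neighborhood
                x = data[rng.integers(len(data))]
                winner = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
                h = np.exp(-((grid - grid[winner]) ** 2).sum(axis=1) / (2 * sigma**2))
                W += lr * h[:, None] * (x - W)            # Kohonen update
            return W

        # 3-D inputs: two map dimensions plus one low-variance feature dimension,
        # the setting relevant to the dimensionality question discussed above
        rng = np.random.default_rng(1)
        data = np.c_[rng.uniform(0, 1, (5000, 2)), 0.1 * rng.uniform(0, 1, 5000)]
        W = train_som(data)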

  • Perceptually Transparent Coding of Still Images

    V. Ralph ALGAZI  Todd R. REED  Gary E. FORD  Eric MAURINCOMME  Iftekhar HUSSAIN  Ravindra POTHARLANKA  

    PAPER
    Vol: E75-B No:5, Page(s): 340-348

    The encoding of high quality and super high definition images requires new approaches to the coding problem. The nature of such images and the applications in which they are used prohibit the introduction of perceptible degradation by the coding process. In this paper, we discuss techniques for the perceptually transparent coding of images. Although these techniques are technically lossy, images encoded and reconstructed with them appear identical to the original images. The reconstructed images can be postprocessed (e.g., enhanced via anisotropic filtering) because the structured errors commonly introduced by conventional lossy methods are absent. The compression ratios obtained are substantially higher than those achieved by lossless means.

  • Variable Rate Video Coding Scheme for Broadcast Quality Transmission and Its ATM Network Applications

    Kenichiro HOSODA  

    PAPER
    Vol: E75-B No:5, Page(s): 349-357

    This paper describes the configuration and performance of a stable, high-compression video coding scheme suitable for broadcast quality. The scheme was developed for high quality image packet transmission in Asynchronous Transfer Mode (ATM) networks. There are two problems in implementing image packet transmission in ATM networks: achieving a compression scheme with high coding efficiency, and achieving an effective compensation method for cell loss. We describe a scheme that resolves both problems. It divides a two-dimensional spectral image signal into several sub-bands. For the high frequency bands, block-matching interframe prediction and the Discrete Cosine Transform (DCT) are applied to achieve a high compression ratio, while intraframe DCT coding is applied to the baseband. The scheme, moreover, provides stable compensation for cell loss. It is shown that, with this system, an original image signal of 216 Mbit/s is compressed to about 1/10, and a high quality reconstructed image robust to cell loss is obtained.
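
    A minimal sketch of the coding structure (not the paper's actual filter bank or predictor) may help: a separable Haar split into four sub-bands, with a block DCT applied to the baseband.

        # One-level separable Haar split, then an 8x8 DCT on the baseband.
        # The Haar filters and block size are illustrative assumptions.
        import numpy as np

        def haar_split(img):
            a = (img[0::2, :] + img[1::2, :]) / 2   # vertical low band
            d = (img[0::2, :] - img[1::2, :]) / 2   # vertical high band
            ll = (a[:, 0::2] + a[:, 1::2]) / 2
            lh = (a[:, 0::2] - a[:, 1::2]) / 2
            hl = (d[:, 0::2] + d[:, 1::2]) / 2
            hh = (d[:, 0::2] - d[:, 1::2]) / 2
            return ll, lh, hl, hh

        def dct_matrix(n):                          # orthonormal DCT-II matrix
            k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
            C[0] /= np.sqrt(2)
            return C

        def block_dct(band, bs=8):
            C = dct_matrix(bs)
            out = band.copy()
            for y in range(0, band.shape[0] - bs + 1, bs):
                for x in range(0, band.shape[1] - bs + 1, bs):
                    out[y:y+bs, x:x+bs] = C @ band[y:y+bs, x:x+bs] @ C.T
            return out

        img = np.random.default_rng(0).random((64, 64))
        ll, lh, hl, hh = haar_split(img)    # baseband + three high bands
        coeff = block_dct(ll)               # intraframe DCT on the baseband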

  • Principal Component Analysis by Homogeneous Neural Networks, Part II: Analysis and Extensions of the Learning Algorithms

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

    PAPER-Bio-Cybernetics
    Vol: E75-D No:3, Page(s): 376-382

    Artificial neurons and neural networks have been shown to perform Principal Component Analysis (PCA) when gradient ascent learning rules are used that are related to the constrained maximization of statistical objective functions. Due to their parallelism and adaptivity to input data, such algorithms and their implementations in neural networks are potentially useful in feature extraction and data compression. In the companion paper (9), two such learning rules were derived from two criteria, the Subspace Criterion and the Weighted Subspace Criterion. It was shown that the only solutions to the latter problem are the dominant eigenvectors of the data covariance matrix, which are the basis vectors of PCA. A simulation suggested that the corresponding learning algorithm converges to these eigenvectors, and a homogeneous neural network implementation was proposed for the algorithm. The learning algorithm is analyzed here in detail, and it is shown that it can be approximated by a continuous-time differential equation obtained by averaging. It is shown that the asymptotically stable limits of this differential equation are the eigenvectors. The neural network learning algorithm is further extended to a case in which each neuron has a sigmoidal nonlinear feedback activity function. Then no parameters specific to each neuron are needed, and the learning rule is fully homogeneous.
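
    The Python sketch below runs a subspace-type learning rule of the kind analyzed, with a weighting vector theta as in the Weighted Subspace Criterion; the data, step size, and theta values are illustrative assumptions.

        # y = W^T x;  dW = lr * (x y^T - (W y)(theta*y)^T).
        # With distinct theta_i, columns of W align with dominant eigenvectors.
        import numpy as np

        rng = np.random.default_rng(0)
        scales = np.array([3.0, 2.0, 1.0, 0.5, 0.2])     # distinct eigenvalues
        Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))     # random rotation
        X = rng.normal(size=(20000, 5)) * scales @ Q.T

        m, lr = 2, 5e-4                   # extract a 2-D principal subspace
        theta = np.array([1.0, 0.5])      # weighting coefficients (theta1 > theta2)
        W = rng.normal(scale=0.1, size=(5, m))
        for _ in range(2):                # a couple of passes over the data
            for x in X:
                y = W.T @ x
                W += lr * (np.outer(x, y) - W @ np.outer(y, y) @ np.diag(theta))

        _, V = np.linalg.eigh(np.cov(X.T))       # eigenvectors, ascending order
        print(np.abs(W.T @ V[:, [-1, -2]]))      # ~diagonal: each column of W
                                                 # aligns with a dominant eigenvector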

  • Tag-Partitioned Join

    Jeong Uk KIM  Jae Moon LEE  Myunghwan KIM  

    PAPER-Databases
    Vol: E75-D No:3, Page(s): 291-297

    A tag-partitioned join algorithm is described. The algorithm partitions only one relation, whereas other partition-based algorithms partition both relations. The joinable tuples of one relation are rearranged, and some of them duplicated, according to the original sequence of the join attribute values of the other relation. To do this, the algorithm first finds the positions of all tuples of the other relation that are joinable with each tuple of the first relation, and then partitions the joinable tuples of the first relation into buckets using the positions found. The final join is performed on the partitioned relation and the other relation. We analyze and compare the performance of the algorithm with that of other partition-based join algorithms. The comparison shows that our method outperforms other partition-based methods for practical values of the analysis parameters.
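
    A rough Python illustration of the idea follows (a simplification, not a faithful reimplementation of the paper's algorithm): positions of joinable tuples in S are found for each tuple of R, only R is partitioned (with duplication) by those positions, and the join is completed bucket by bucket.

        # Tag-partitioned join sketch: the first field of each tuple is the
        # join attribute; bucket boundaries are illustrative.
        def tag_partitioned_join(R, S, n_buckets=4):
            pos = {}                                  # S-positions per key
            for i, s in enumerate(S):
                pos.setdefault(s[0], []).append(i)
            step = max(1, -(-len(S) // n_buckets))    # ceil(len(S)/n_buckets)
            buckets = [[] for _ in range(n_buckets)]
            for r in R:                               # rearrange/duplicate R
                for i in pos.get(r[0], []):
                    buckets[i // step].append((i, r)) # tag = position in S
            out = []
            for b in buckets:                         # each bucket touches one
                for i, r in sorted(b):                # contiguous range of S
                    out.append(r + S[i][1:])
            return out

        R = [(1, "a"), (2, "b"), (2, "c"), (5, "d")]
        S = [(2, "x"), (1, "y"), (2, "z"), (9, "w")]
        print(tag_partitioned_join(R, S))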

  • High-Fidelity Sub-Band Coding for Very High Resolution Images

    Takahiro SAITO  Hirofumi HIGUCHI  Takashi KOMATSU  

    PAPER
    Vol: E75-B No:5, Page(s): 327-339

    Very high resolution images with more than 2,000×2,000 pels will play a very important role in a wide variety of applications of future multimedia communications, ranging from electronic publishing to broadcasting. To make communication of very high resolution images practicable, we need to develop image coding techniques that can compress such images efficiently. Taking the channel capacity limitation of future communication into consideration, the requisite compression ratio is estimated to be at least 1/10 to 1/20 for color signals. Among existing image coding techniques, sub-band coding is one of the most suitable. In its application to high-fidelity compression of very high resolution images, one of the major problems is how to encode the high frequency sub-band signals. High frequency sub-band signals are well modeled as having an approximately memoryless probability distribution, and hence the best way to solve this problem is to improve their quantization. From this standpoint, the work herein first compares three different scalar quantization schemes and improved permutation codes, which the authors have previously developed by extending the concept of permutation codes, in terms of quantization performance for a memoryless probability distribution that well approximates the real statistical properties of high frequency sub-band signals. It demonstrates that at low coding rates improved permutation codes outperform the other scalar quantization schemes, and that this superiority decreases as the coding rate increases. Building on these results, the work then develops a rate-adaptive quantization technique in which the number of bits assigned to each subblock is determined according to the signal variance within the subblock, and the proper quantization scheme is chosen from among different types of quantization schemes according to the allocated number of bits. This technique is applied to the high-fidelity encoding of sub-band signals of very high resolution images to demonstrate its usefulness.
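
    As a hedged sketch of the rate-adaptive step (the allocation rule is the classical log-variance formula; the average rate, thresholds, and quantizer labels are assumptions), consider:

        # Variance-driven bit allocation per subblock, then quantizer selection
        # by allocated rate, as an illustration of the adaptive mechanism.
        import numpy as np

        def allocate_bits(variances, avg_bits=2.0, max_bits=6):
            v = np.asarray(variances, dtype=float)
            gm = np.exp(np.mean(np.log(v + 1e-12)))        # geometric mean
            b = avg_bits + 0.5 * np.log2((v + 1e-12) / gm) # allocation rule
            return np.clip(np.round(b), 0, max_bits).astype(int)

        def choose_quantizer(bits):
            # low rates favor a permutation-code-like block quantizer in the
            # paper; here we merely branch on the allocated rate
            return "permutation-type" if bits <= 2 else "scalar"

        rng = np.random.default_rng(0)
        subblocks = rng.laplace(0, 1, (16, 64)) * rng.uniform(0.1, 4, (16, 1))
        bits = allocate_bits(subblocks.var(axis=1))
        print([(int(b), choose_quantizer(b)) for b in bits])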

  • Visual Communications in the U.S.

    Charles N. JUDICE  

    INVITED PAPER
    Vol: E75-B No:5, Page(s): 309-312

    To describe the state of visual communications in the U.S., two words come to mind: digital and anticipation. Although compressed digital video has been used in teleconferencing systems for at least ten years, it is only recently that a broad consensus has developed among diverse industries anticipating business opportunities, value, or both in digital video. The drivers for this turning point are: advances in digital signal processing; continued improvement in the cost, complexity, and speed of VLSI; maturing international standards and their adoption by vendors and end users; and a seemingly insatiable consumer demand for greater diversity, accessibility, and control of communication systems.

  • A Mean-Separated and Normalized Vector Quantizer with Edge-Adaptive Feedback Estimation and Variable Bit Rates

    Xiping WANG  Shinji OZAWA  

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E75-D No:3, Page(s): 342-351

    This paper proposes a Mean-Separated and Normalized Vector Quantizer with edge-Adaptive Feedback estimation and variable bit rates (AFMSN-VQ). The basic idea of the AFMSN-VQ is to estimate the statistical parameters of each coding block from its previously coded blocks and then use the estimated parameters to normalize the coding block prior to vector quantization. The edge-adaptive feedback estimator exploits the interblock correlations of edge connectivity and gray level continuity to accurately estimate the mean and standard deviation of the coding block. The variable-rate VQ diminishes the distortion nonuniformity among image blocks of different activities and improves the reconstruction quality of edges and contours, to which human vision is sensitive. Simulation results show that an SNR gain of up to 2.7 dB for the AFMSN-VQ over the non-adaptive FMSN-VQ, and of up to 2.2 dB over the 16×16 ADCT, can be achieved at 0.2-1.0 bit/pixel. Furthermore, the AFMSN-VQ shows coding performance comparable to ADCT-VQ and A-PE-VQ.
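
    A minimal Python sketch of the mean-separated and normalized VQ core may clarify the mechanism; the edge-adaptive feedback estimation of (mean, standard deviation), which is the paper's contribution, is omitted here.

        # MSN-VQ core: normalize each block by its mean and standard deviation,
        # then do a nearest-codeword search; codebook and sizes are illustrative.
        import numpy as np

        def msn_vq_encode(blocks, codebook):
            means = blocks.mean(axis=1, keepdims=True)
            stds = blocks.std(axis=1, keepdims=True) + 1e-8
            z = (blocks - means) / stds           # mean-separated, normalized
            d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return means.ravel(), stds.ravel(), d.argmin(axis=1)

        def msn_vq_decode(means, stds, idx, codebook):
            return codebook[idx] * stds[:, None] + means[:, None]

        rng = np.random.default_rng(0)
        codebook = rng.normal(size=(64, 16))      # 64 codewords, 4x4 blocks
        blocks = rng.normal(2.0, 3.0, size=(100, 16))
        m, s, idx = msn_vq_encode(blocks, codebook)
        rec = msn_vq_decode(m, s, idx, codebook)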

  • Analysis of Time Transient EM Field Response from a Dielectric Spherical Cavity

    Hiroshi SHIRAI  Eiji NAKANO  Mikio YANO  

    PAPER-Electromagnetic Theory
    Vol: E75-C No:5, Page(s): 627-634

    Transient responses of a dielectric sphere have been analyzed here for a dipole source located at its center. The formulation is first constructed in the frequency domain and then transformed into the time domain to obtain the impulse response by two analytical methods, namely the Singularity Expansion Method and the Wavefront Expansion Method. While the former collects the contributions around the singularities in the complex frequency domain, the latter gives a result that is a summation of successive wavefront arrivals. A Gaussian pulse has been introduced to simulate the impulse response. The Gaussian pulse response is analytically formulated by convolving the Gaussian pulse with the corresponding impulse response. Numerical inversion results are also calculated with the Fast Fourier Transform algorithm. Numerical examples are shown to compare the results obtained by these three methods, and good agreement is obtained between them. Comments are also made in connection with the corresponding two-dimensional cylindrical case.
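
    The third, numerical route can be sketched in a few lines: multiply a frequency response by the spectrum of the Gaussian pulse and invert with the FFT. The single damped resonance below is a stand-in, not the cavity's actual response.

        # Gaussian-pulse response by FFT inversion of H(f) * G(f).
        import numpy as np

        n, dt = 4096, 1e-11                       # samples, time step [s]
        f = np.fft.rfftfreq(n, dt)
        f0, q = 3e9, 30.0                         # toy resonance (assumption)
        H = 1.0 / (1.0 - (f / f0) ** 2 + 1j * f / (q * f0))
        sigma_t, t0 = 5e-11, 1e-9                 # Gaussian width and delay
        G = np.exp(-0.5 * (2 * np.pi * f * sigma_t) ** 2) \
            * np.exp(-2j * np.pi * f * t0)
        y = np.fft.irfft(H * G, n)                # time-domain pulse response
        t = np.arange(n) * dt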

  • FOREWORD

    Tosio KOGA  Shun-ichi AMARI  

    FOREWORD
    Vol: E75-A No:5, Page(s): 529-530

  • The Computation of Nodal Points Generated by Period Doubling Bifurcation Points on a Locus of Turning Points

    Norio YAMAMOTO  

    PAPER-Nonlinear Systems
    Vol: E75-A No:5, Page(s): 616-621

    As the values of parameters in periodic systems vary, a nodal point appearing on a locus of period doubling bifurcation points crosses over a locus of turning points. We consider the nodal point lying exactly on the locus of turning points and its accurate location. To compute it, we consider an extended system consisting of the original equation and an additional equation. We present a result assuring that this extended system has an isolated solution containing the nodal point.
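
    The extended-system device can be illustrated on a scalar example (a turning point of f(x, lam) = x^2 - lam, not the paper's period-doubling setting): augmenting f with a defining equation makes the singular point an isolated regular solution that plain Newton iteration finds accurately.

        # Newton's method on the extended system [f, df/dx] = 0.
        import numpy as np

        def F(z):
            x, lam = z
            return np.array([x**2 - lam,      # f  = 0
                             2.0 * x])        # fx = 0 (defining equation)

        def J(z):                             # Jacobian of the extended system
            x, lam = z
            return np.array([[2.0 * x, -1.0],
                             [2.0,      0.0]])

        z = np.array([0.4, 0.3])              # crude starting guess
        for _ in range(20):
            z = z - np.linalg.solve(J(z), F(z))
        print(z)                              # converges to the turning point (0, 0)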

  • 2.5-V Bipolar/CMOS Circuits for 0.25-µm BiCMOS Technology

    Chih-Liang CHEN  

    PAPER
    Vol: E75-C No:4, Page(s): 383-389

    An ECL circuit with an active pull-down device, operated from a CMOS supply voltage, is described as a high-speed digital circuit for a 0.25-µm BiCMOS technology. A pair of ECL/CMOS level converters with built-in logic capability is presented for effective intermixing of ECL with CMOS circuits. Using a 2.5-V supply and a reduced-swing BiNMOS buffer, the ECL circuit has reduced power dissipation while still providing good speed. A design example shows the implementation of complex logic by emitter and collector dotting and the selective use of ECL circuits to achieve high performance.

  • FOREWORD

    Akihiko MORINO  Bruce A. WOOLEY  

    FOREWORD
    Vol: E75-C No:4, Page(s): 361-362

  • Fault-Tolerant Architecture in a Cache Memory Control LSI

    Yasushi OOI  Masahiko KASHIMURA  Hidenori TAKEUCHI  Eiji KAWAMURA  

    PAPER
    Vol: E75-C No:4, Page(s): 405-412

    This paper describes a real-time degradable four-way set-associative cache memory control (CMC) LSI. Three kinds of errors (address parity error, comparator error, and multihit error) can cause functional degradation by killing the associative unit corresponding to the fault location. A 20-b tag parity generator, a double comparator, and a multihit detector are the key circuits for fault detection. The parity generator and the double comparator add no delay to the timing-sensitive path because of their parallel configuration. The multihit detector accounts for about 16% of the propagation delay of the critical path, from the external address input to the hit/miss output.
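
    A behavioral Python model of the detection logic (illustrative only; the LSI of course implements this in hardware) is sketched below.

        # Four-way lookup with per-way parity check, duplicated comparison,
        # way-kill degradation, and multihit detection; widths are illustrative.
        def parity(bits: int) -> int:
            return bin(bits).count("1") & 1

        def lookup(tag: int, ways):           # ways: [(tag, parity, alive)]
            hits = []
            for i, (wtag, wpar, alive) in enumerate(ways):
                if not alive:
                    continue                  # degraded (killed) way
                if parity(wtag) != wpar:
                    continue                  # tag parity error detected
                cmp1 = (wtag == tag)
                cmp2 = (wtag == tag)          # double comparator (modeled)
                if cmp1 != cmp2:
                    continue                  # comparator error detected
                if cmp1:
                    hits.append(i)
            if len(hits) > 1:
                raise RuntimeError("multihit error in ways %s" % hits)
            return hits[0] if hits else None

        ways = [(0x12345, parity(0x12345), True), (0xABCDE, parity(0xABCDE), True),
                (0x12345, parity(0x12345), True), (0x00000, 0, False)]
        try:
            lookup(0x12345, ways)
        except RuntimeError as e:
            print(e)                          # demonstrates multihit detection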

  • FOREWORD

    Takao MATSUMOTO  Mikio OGAI  

    FOREWORD
    Vol: E75-B No:4, Page(s): 233-234

  • A Pulsed Sensing Scheme with a Limited Bit-Line Swing

    R. E. SCHEUERLEIN  Y. KATAYAMA  T. KIRIHATA  Y. SAKAUE  A. SATOH  T. SUNAGA  T. YOSHIKAWA  K. KITAMURA  S. H. DHONG  

    LETTER
    Vol: E75-C No:4, Page(s): 576-580

    This paper presents a pulsed sensing scheme with a limited bit-line swing, designed for 4-Mb CMOS high-speed DRAM's (HSDRAM's) and beyond. It uses a standard CMOS cross-coupled sense amplifier and limits the swing by means of a pulsed sense clock. The signal loss that would occur if the bit-line swing were not exactly limited to one threshold above the word-line's low level is avoided by using a small reference voltage generator and trench decoupling capacitors. The new sensing scheme was successfully implemented on an experimental HSDRAM fabricated using 0.7-µm Leff CMOS technology; a high-speed random access time of 15 ns and a low power dissipation of 144 mW were obtained for 512-kb array activation with a fast cycle time of 60 ns at 3.6 V.

  • A 100-MHz 2-D Discrete Cosine Transform Core Processor

    Shin-ichi URAMOTO  Yoshitsugu INOUE  Akihiko TAKABATAKE  Jun TAKEDA  Yukihiro YAMASHITA  Hideyuki TERANE  Masahiko YOSHIMOTO  

    PAPER
    Vol: E75-C No:4, Page(s): 390-397

    The discrete cosine transform (DCT) has been recognized as one of the standard techniques in image compression. A core processor that rapidly computes the DCT has therefore become a key component of image compression VLSI's. This paper describes a 100-MHz two-dimensional DCT core processor applicable to the real-time processing of HDTV signals. An architecture utilizing a fast DCT algorithm and multiplier-accumulators based on distributed arithmetic has contributed to reducing the amount of hardware and to enhancing the speed. A layout scheme with a column-interleaved memory and a new ROM circuit are introduced for the efficient implementation of memory-based signal processing circuits. Furthermore, the mean values of errors generated in the core were minimized to enhance the computational accuracy under the word-length constraints. Consequently, the core features the fastest operating speed and the smallest area, with accuracy sufficient to satisfy the specifications of CCITT Recommendation H.261. It integrates about 102K transistors and occupies 21 mm² using 0.8-µm double-metal CMOS technology.
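
    Distributed arithmetic, the multiplier-free technique named above, can be sketched as follows: an inner product over B-bit two's-complement inputs is accumulated from ROM lookups indexed by one bit-slice of all inputs at a time. The code is a generic illustration, not the core's actual design.

        # Multiplierless inner product sum(c[k]*x[k]) via a partial-sum ROM.
        import itertools, random

        def make_rom(coeffs):
            # ROM[address] = sum of coefficients selected by the address bits
            return [sum(c for c, b in zip(coeffs, bits) if b)
                    for bits in itertools.product((0, 1), repeat=len(coeffs))]

        def da_dot(coeffs, xs, bits=8):       # xs: B-bit two's-complement ints
            rom = make_rom(coeffs)
            acc = 0
            for j in range(bits):
                addr = 0
                for x in xs:                  # gather bit j of every input
                    addr = (addr << 1) | ((x >> j) & 1)
                term = rom[addr]
                acc += -term << j if j == bits - 1 else term << j  # MSB is signed
            return acc

        coeffs = [3, -5, 7, 2]
        xs = [random.randint(-128, 127) for _ in range(4)]
        xs_u = [x & 0xFF for x in xs]         # 8-bit two's-complement encoding
        assert da_dot(coeffs, xs_u) == sum(c * x for c, x in zip(coeffs, xs))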

  • Low-Power CMOS Digital Design

    Anantha P. CHANDRAKASAN  Samuel SHENG  Robert W. BRODERSEN  

    PAPER
    Vol: E75-C No:4, Page(s): 371-382

    Motivated by emerging battery-operated applications that demand intensive computation in portable environments, we investigate techniques that reduce power consumption in CMOS digital circuits while maintaining computational throughput. Techniques for low-power operation are shown that use the lowest possible supply voltage coupled with architectural, logic-style, circuit, and technology optimizations. An architecture-based scaling strategy is presented which indicates that the optimum voltage is much lower than that determined by other scaling considerations. This optimum is achieved by trading increased silicon area for reduced power consumption.
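
    The voltage-scaling trade can be made concrete with a first-order model; the delay model and device numbers below are assumptions for illustration, not the paper's measurements.

        # P = a*C*V^2*f; gate delay ~ V/(V - Vt)^2. Halving V slows each gate,
        # but parallel hardware restores throughput at quadratically lower power.
        def power(C, V, f, a=0.5):
            return a * C * V * V * f          # dynamic switching power

        def delay(V, Vt=0.7, k=1.0):
            return k * V / (V - Vt) ** 2      # first-order CMOS delay model

        V_ref, V_low = 5.0, 2.5
        slowdown = delay(V_low) / delay(V_ref)    # per-gate slowdown at low V
        n = slowdown                              # duplicated hardware units
        p_ratio = n * power(1.0, V_low, 1.0 / n) / power(1.0, V_ref, 1.0)
        print(f"slowdown {slowdown:.2f}x, power ratio {p_ratio:.2f}")
        # -> power ratio (V_low/V_ref)^2 = 0.25, ignoring parallelism overhead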
