
Keyword Search Result

[Keyword] Al(20498hit)

20321-20340hit(20498hit)

  • Design and Evaluation of Highly Parallel VLSI Processors for 2-D State-Space Digital Filters Using Hierarchical Behavioral Description Language and Synthesizer

    Masayuki KAWAMATA  Yasushi IWATA  Tatsuo HIGUCHI  

     
    PAPER-Design and Implementation of Multidimensional Digital Filters

      Vol:
    E75-A No:7
      Page(s):
    837-845

    This paper designs and evaluates highly parallel VLSI processors for real-time 2-D state-space digital filters using a hierarchical behavioral description language and synthesizer. The architecture of the 2-D state-space digital filtering system is a linear systolic array of homogeneous VLSI processors, each of which consists of eight processing elements (PEs) executing 1-D state-space digital filtering with multiple inputs and multiple outputs. A hierarchical behavioral description language and synthesizer are adopted to design and evaluate the PEs and the VLSI processors. One 16-bit fixed-point PE executing (4, 4)-th order 2-D state-space digital filtering is described, on the basis of distributed arithmetic, in about 1,200 steps of the description language and is composed of 15 K gates in terms of 2-input NAND gates. One VLSI processor, which is a cascade connection of eight PEs, is composed of 129 K gates and can be integrated into one 15×15 [mm2] VLSI chip using 1 µm CMOS standard cells. The 2-D state-space digital filtering system, composed of 128 VLSI processors at a 25 MHz clock, can process a 1,024×1,024 image in 1.47 [msec] and thus can be applied to real-time conventional video signal processing.
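
    The paper's distributed-arithmetic PE design and systolic-array mapping cannot be reconstructed from the abstract alone; the following is only a minimal software sketch of a Roesser-type 2-D state-space filter recursion (the usual state-space form for such filters), with small, arbitrarily chosen coefficient matrices for illustration.

    ```python
    import numpy as np

    def roesser_filter(u, A1, A2, A3, A4, B1, B2, C1, C2, D):
        """Direct software evaluation of a Roesser-type 2-D state-space filter:
            x_h(i+1, j) = A1 x_h(i, j) + A2 x_v(i, j) + B1 u(i, j)
            x_v(i, j+1) = A3 x_h(i, j) + A4 x_v(i, j) + B2 u(i, j)
            y(i, j)     = C1 x_h(i, j) + C2 x_v(i, j) + D  u(i, j)
        """
        M, N = u.shape
        nh, nv = B1.shape[0], B2.shape[0]
        x_h = np.zeros((M + 1, N, nh))   # horizontal state plane
        x_v = np.zeros((M, N + 1, nv))   # vertical state plane
        y = np.zeros((M, N))
        for i in range(M):
            for j in range(N):
                xh, xv = x_h[i, j], x_v[i, j]
                y[i, j] = C1 @ xh + C2 @ xv + D * u[i, j]
                x_h[i + 1, j] = A1 @ xh + A2 @ xv + B1 * u[i, j]
                x_v[i, j + 1] = A3 @ xh + A4 @ xv + B2 * u[i, j]
        return y

    # Toy first-order example with arbitrary (stable) coefficients, not the paper's filter.
    u = np.random.default_rng(0).standard_normal((64, 64))
    y = roesser_filter(u,
                       A1=np.array([[0.5]]), A2=np.array([[0.2]]),
                       A3=np.array([[0.1]]), A4=np.array([[0.4]]),
                       B1=np.array([1.0]), B2=np.array([1.0]),
                       C1=np.array([0.3]), C2=np.array([0.3]), D=0.5)
    print(y.shape)
    ```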

  • On Quality Improvement of Reconstructed Images in Diffraction Tomography

    Haruyuki HARADA  Mitsuru TANAKA  Takashi TAKENAKA  

     
    LETTER

      Vol:
    E75-A No:7
      Page(s):
    910-913

    This letter discusses the quality improvement of reconstructed images in diffraction tomography. An efficient iterative procedure based on the modified Newton-Kantorovich method and the Gerchberg-Papoulis algorithm is presented. Simulation results demonstrate high-quality reconstruction even for cases where the first-order Born approximation fails.
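
    The letter's reconstruction couples the modified Newton-Kantorovich update with Gerchberg-Papoulis extrapolation; only the latter's constraint-alternation idea is easy to illustrate in isolation. Below is a minimal 1-D sketch (band-limitation enforced in the frequency domain, measured samples re-imposed in the signal domain); the signal, sampling mask, and band are invented toy data, not the tomographic setting of the letter.

    ```python
    import numpy as np

    def gerchberg_papoulis(measured, sample_mask, band_mask, n_iter=200):
        """Alternate between (a) restricting the spectrum to the known band and
        (b) re-imposing the measured samples in the signal domain."""
        x = np.where(sample_mask, measured, 0.0).astype(complex)
        for _ in range(n_iter):
            X = np.fft.fft(x)
            X *= band_mask                          # (a) enforce band-limitation
            x = np.fft.ifft(X)
            x[sample_mask] = measured[sample_mask]  # (b) keep the measured samples
        return x.real

    # Toy example: recover a low-pass signal observed only on every other sample.
    n = 128
    t = np.arange(n)
    true = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
    sample_mask = np.zeros(n, dtype=bool)
    sample_mask[::2] = True
    band_mask = np.zeros(n)
    band_mask[:8] = 1.0
    band_mask[-7:] = 1.0                            # pass |k| <= 7
    rec = gerchberg_papoulis(true * sample_mask, sample_mask, band_mask)
    print(np.max(np.abs(rec - true)))               # small reconstruction error
    ```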

  • Learning Non-parametric Densities in terms of Finite-Dimensional Parametric Hypotheses

    Kenji YAMANISHI  

     
    PAPER

      Vol:
    E75-D No:4
      Page(s):
    459-469

    This paper proposes a model for learning non-parametric densities using finite-dimensional parametric densities by applying Yamanishi's stochastic analogue of Valiant's probably approximately correct learning model to density estimation. The goal of our learning model is to find, with high probability, a good parametric approximation of the non-parametric target density with sample size and computation time polynomial in parameters of interest. We use a learning algorithm based on the minimum description length (MDL) principle and derive a new general upper bound on the rate of convergence of the MDL estimator to a true non-parametric density. On the basis of this result, we demonstrate polynomial-sample-size learnability of classes of non-parametric densities (defined under some smoothness conditions) in terms of exponential families with polynomial bases, and we prove that under some appropriate conditions, the sample complexity of learning them is bounded as O((1/ε)^((2r+1)/2r) ln^((2r+1)/2r)(1/ε) + (1/ε) ln(1/δ)) for a smoothness parameter r (a positive integer), where ε and δ are respectively accuracy and confidence parameters. Further, we demonstrate polynomial-time learnability of classes of non-parametric densities (defined under some smoothness conditions) in terms of histogram densities with equal-length cells, and we prove that under some appropriate condition, the sample complexity of learning them is bounded as O((1/ε)^(3/2) ln^(3/2)(1/ε) + (1/ε) ln(1/δ)).
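
    The two sample-complexity bounds above are reconstructed from a garbled rendering in this listing; the exponent (2r+1)/2r is an editorial reading, chosen because it reduces to the stated 3/2 exponent of the histogram bound when the smoothness parameter is r = 1. In LaTeX form:

    ```latex
    % Reconstructed bounds (notation assumed by the editor, not copied verbatim from the paper).
    \[
      m_{\mathrm{exp}}(\varepsilon,\delta)
        = O\!\left(
            \left(\tfrac{1}{\varepsilon}\right)^{\frac{2r+1}{2r}}
            \ln^{\frac{2r+1}{2r}}\!\tfrac{1}{\varepsilon}
            + \tfrac{1}{\varepsilon}\ln\tfrac{1}{\delta}
          \right),
      \qquad
      m_{\mathrm{hist}}(\varepsilon,\delta)
        = O\!\left(
            \left(\tfrac{1}{\varepsilon}\right)^{3/2}
            \ln^{3/2}\!\tfrac{1}{\varepsilon}
            + \tfrac{1}{\varepsilon}\ln\tfrac{1}{\delta}
          \right).
    \]
    % Setting r = 1 in the first bound recovers the 3/2 exponents of the second,
    % which is the consistency check behind this reconstruction.
    ```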

  • Advanced Dimensioning Tool for Circuit-Switched Networks

    Masaaki SHINOHARA  

     
    PAPER

      Vol:
    E75-B No:7
      Page(s):
    594-600

    We have developed an advanced tool for dimensioning circuit-switched networks, called CNEP (Circuit-Switched Network Evaluation Program), for effective design of digital networks. CNEP features a high-reliability network structure (node dispersion, double homing, etc.), both-way circuit operation, and circuit modularity (or big module size), all of which are critical for digital networks. CNEP also solves other dimensioning problems such as the cost difference between existing and newly installed circuits, and handles multi-hour traffic conditions, dynamic routing, and multiple-switching-unit nodes. Operations Research techniques are applied to produce exact and heuristic algorithms for these problems. Algorithms with good time-performance trade-off characteristics are chosen for CNEP.

  • Optical Array Imaging System

    Osamu IKEDA  

     
    PAPER-Optical Signal Processing

      Vol:
    E75-A No:7
      Page(s):
    890-896

    An optical array imaging system is presented with basic experimental results. First, a remote object is illuminated with laser light at an angle and the reflected light is detected with an array sensor after interference with the reference light. This process is repeated by changing the illumination angle to collect a set of fringe patterns, which are A/D converted and stored on a hard disk in a computer. Then, the data are processed on the computer, first, to estimate the complex-amplitude object wave fields, second, to derive the eigenvector with the maximum eigenvalue for the correlation of the estimated object fields, and finally, to form an image of the object. The derivation of the eigenvector follows an iterative algorithm, which can be interpreted as the process of repeating backward wave propagation of the field between the two apertures illuminating and detecting laser light. The eigenvector field can be expected to backpropagate to focus at a point on the object with the maximum coefficient of reflection, so a beam-steering operation is applied to the eigenvector to form an image of the object. The method uses only the information of the array data and the lateral spacings of the receiving array (CCD) elements. Hence, the method can give good images of objects even if the reference light is uncollimated with an unknown distorted wavefront, and even if the illumination angles are imprecise in three dimensions. Basic experimental results clearly show the usefulness of the method.
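
    The abstract's iterative eigenvector derivation is interpreted physically as repeated backward propagation; numerically it amounts to finding the dominant eigenvector of the correlation of the estimated object fields. Below is a minimal power-iteration sketch on a sample correlation matrix; the random complex fields are placeholders, and the wave-propagation interpretation and beam-steering step are not modeled.

    ```python
    import numpy as np

    def dominant_eigenvector(fields, n_iter=200, tol=1e-12):
        """Power iteration for the maximum-eigenvalue eigenvector of the sample
        correlation matrix R = F^H F / K built from K estimated object fields."""
        K, N = fields.shape
        R = fields.conj().T @ fields / K          # N x N Hermitian correlation matrix
        v = np.ones(N, dtype=complex) / np.sqrt(N)
        for _ in range(n_iter):
            w = R @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                break
            v = w
        return v

    # Toy check against a direct eigendecomposition.
    rng = np.random.default_rng(1)
    F = rng.standard_normal((8, 32)) + 1j * rng.standard_normal((8, 32))
    v = dominant_eigenvector(F)
    evals, evecs = np.linalg.eigh(F.conj().T @ F / 8)
    print(abs(np.vdot(evecs[:, -1], v)))          # ~1: same direction up to phase
    ```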

  • Dynamic Path Assignment for Broadband Networks Based on Neural Computation

    Akira CHUGO  Ichiro IIDA  

     
    PAPER

      Vol:
    E75-B No:7
      Page(s):
    634-641

    This paper describes the application of a neural network to the optimal routing problem in broadband multimedia networks, where the objective is to maximize network utilization while considering the performance required for each call. In a multimedia environment, the performance required for each call is different, and an optimal path must be found whenever a call arrives. A neural network is appropriate for the computation of an optimal path, as it provides real-time solutions to difficult optimization problems. We formulated optimal routing based on the Hopfield neural network model, and evaluated the basic behavior of neural networks. This evaluation confirmed the validity of the neural network formulation, which has a small computation time even if there are many nodes. This characteristic is especially suitable for a large-scale system. In addition, we performed a computer simulation of the proposed routing scheme and compared it with conventional alternate routing schemes. The results show the benefit of neural networks for the routing problem, as our scheme always balances the network load and attains high network utilization.

  • Chemical Structures of Native Oxides Formed during Wet Chemical Treatments of Silicon Surfaces

    Hiroki OGAWA  Takeo HATTORI  

     
    PAPER

      Vol:
    E75-C No:7
      Page(s):
    774-780

    Chemical structures of native oxides formed during wet chemical treatments of silicon surfaces were investigated using X-ray Photoelectron Spectroscopy (XPS) and Fourier Transform Infrared Attenuated Total Reflection (FT-IR-ATR) spectroscopy. It was found that the amounts of Si-H bonds in the native oxide and at the native oxide/silicon interface are negligibly small in the case of native oxides formed in H2SO4-H2O2 solution. Based on this discovery, it was found that native oxides can be characterized by the amount of Si-H bonds in the native oxide, and that combining various wet chemical treatments with the treatment in NH4OH-H2O2-H2O solution results in a drastic decrease in the amount of Si-H bonds in the native oxides.

  • Property of Circular Convolution for Subband Image Coding

    Hitoshi KIYA  Kiyoshi NISHIKAWA  Masahiko SAGAWA  

     
    PAPER-Image Coding and Compression

      Vol:
    E75-A No:7
      Page(s):
    852-860

    One of the problems with subband image coding is the increase in image size caused by filtering. To solve this, it has been proposed to perform the filtering by transforming the input sequence into a periodic one; filtering is then implemented by circular convolution. Although this technique solves the problem, it imposes very strong restrictions, i.e., limitations on the filter type and on the filter bank structure. In this paper, a development of this technique is presented. Consequently, any type of linear-phase FIR filter and any structure of filter bank can be used.
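
    A short numpy sketch of the basic idea (not the paper's generalization to arbitrary linear-phase filters and filter bank structures): treating one row of the image as a period and filtering by circular convolution keeps the subband the same length as the input, whereas ordinary linear convolution lengthens it by the filter's tail.

    ```python
    import numpy as np

    def circular_filter(x, h):
        """Circular convolution of a length-N sequence with an FIR filter:
        the input is treated as one period, so the output length stays N."""
        N = len(x)
        H = np.fft.fft(h, N)                  # filter spectrum at the period length
        return np.real(np.fft.ifft(np.fft.fft(x) * H))

    x = np.arange(16, dtype=float)            # e.g. one image row
    h = np.array([0.25, 0.5, 0.25])           # a linear-phase FIR low-pass filter
    y_circular = circular_filter(x, h)
    y_linear = np.convolve(x, h)
    print(len(y_circular), len(y_linear))     # 16 vs. 18: no size increase with circular convolution
    ```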

  • Description and Realization of Separable-Denominator Two-Dimensional Transfer Matrix

    Naomi HARATANI  

     
    PAPER-Multidimensional Signals, Systems and Filters

      Vol:
    E75-A No:7
      Page(s):
    806-812

    In this paper, a new description of a separable-denominator (S-D) two-dimensional (2-D) transfer matrix is proposed, and its realization is considered. Parts of this problem have been considered for transfer matrices whose elements are two-variable rational functions. We propose a 2-D transfer matrix whose input-output relation is represented by a ratio of two-variable polynomial matrices, and present an algorithm to obtain a 2-D state-space model from it. Next, it is shown that the description proposed in this paper is always minimally realizable. We also present a method of obtaining the proposed description from an S-D 2-D rational transfer matrix.

  • Reaction of H-Terminated Si(100) Surfaces with Oxidizer in the Heating and Cooling Process

    Norikuni YABUMOTO  Yukio KOMINE  

     
    PAPER

      Vol:
    E75-C No:7
      Page(s):
    770-773

    Thermal desorption spectroscopy (TDS) is applied to analyze the oxidation reactions of hydrogen-terminated Si(100) surfaces in both the heating and cooling processes after hydrogen desorption. The oxidation reaction of oxygen and water with a silicon surface after hydrogen desorption shows hysteresis in the heating and cooling processes. In the cooling process, oxidation finishes when the silicon surface is adequately oxidized to a thickness of about 10 Å. Oxidation continues to occur at lower temperatures when the total volume of oxygen and water is too small to saturate the bare silicon surface. The reaction of water with silicon releases hydrogen at more than 500°C. Hydrogen does not adsorb on the silicon oxide surface. A trace amount of oxygen, less than 1×10-6 Torr, roughens the surface.

  • A 15 GFLOPS Parallel DSP System for Super High Definition Image Processing

    Tomoko SAWABE  Tetsurou FUJII  Hiroshi NAKADA  Naohisa OHTA  Sadayasu ONO  

     
    INVITED PAPER

      Vol:
    E75-A No:7
      Page(s):
    786-793

    This paper describes a super high definition (SHD) image processing system we have developed. The computing engine of this system is a parallel processing system with 128 processing elements called NOVI-HiPIPE. A new pipelined vector processor is introduced as a backend processor of each processing element in order to meet the great computing power required by SHD image processing. This pipelined vector processor can achieve 120 MFLOPS. The 128 pipelined vector processors installed in NOVI-HiPIPE yield a total system peak performance of 15 GFLOPS. The SHD image processing system consists of an SHD image scanner, an SHD image storage node, a full color printer, a film recorder, NOVI-HiPIPE, and a Super Frame Memory. The Super Frame Memory can display a full color moving image sequence at a rate of 60 fps on a CRT monitor at a resolution of 2048 by 2048 pixels. Workstations, interconnected through an Ethernet, are used to control these units, and SHD image data can be easily transferred among the units. NOVI-HiPIPE has a frame memory which can display SHD still images on a color monitor; therefore, one processed frame can be directly displayed. We are developing SHD image processing algorithms and parallel processing methodologies using this system.

  • Multidimensional Signal Processing for NTSC TV Signals

    Takahiko FUKINUKI  Norihiro SUZUKI  

     
    INVITED PAPER

      Vol:
    E75-A No:7
      Page(s):
    767-775

    Multidimensional signal processing has recently been attracting attention in various fields, and has been studied theoretically. TV receivers using 3-D (3-Dimensional: horizontal, vertical and temporal) processing, such as IDTV (ImproveD TV), are already available. In addition, television systems with high-quality video, mostly with a wide aspect ratio, are being studied worldwide. All the proposed systems adopt 3-D signal processing. 3-D processing can fully utilize the transmitted signal, and can take full advantage of the available bandwidth. This results in improved picture quality. This paper reviews the 3-D signal processing used in IDTV and EDTV (EnhanceD TV) in Japan. Video signals are analyzed in the 3-D frequency domain, and 3-D filter design is also studied.

  • A New Cleaning Solution for Metallic Impurities on the Silicon Wafer Surface

    Tsugio SHIMONO  Mikio TSUJI  

     
    PAPER

      Vol:
    E75-C No:7
      Page(s):
    812-815

    A new cleaning solution (FPM; HF-H2O2-H2O) was investigated in order to effectively remove metallic impurities on the silicon wafer surface. The removability of metallic impurities on the wafer surface and the concentrations of metallic impurities adsorbed on the wafer surface from each contaminated cleaning solution were compared between FPM and conventional cleaning solutions, such as HPM (HCl-H2O2-H2O), SPM (H2SO4-H2O2), DHF (HF-H2O) and APM (NH4OH-H2O2-H2O). This new cleaning solution had higher removability of metallic impurities than the conventional ones. Adsorption of some kinds of metallic impurities onto the wafer surface was a serious problem for conventional cleaning solutions. This problem was solved by the use of FPM. FPM was important not only as a cleaning solution for metallic impurities, but also as an etchant. Furthermore, this new cleaning solution made it possible to construct a simple cleaning system, because HF and H2O2 concentrations of less than 1% each are sufficient, and it can be used at room temperature.

  • Heuristic Subcube Allocation in Hypercube Systems

    O Han KANG  Soo Young YOON  Hyun Soo YOON  Jung Wan CHO  

     
    PAPER-Computer Systems

      Vol:
    E75-D No:4
      Page(s):
    517-526

    The main objective of this paper is to propose a new top-down subcube allocation scheme which has complete subcube recognition capability with quick response time. The proposed subcube allocation scheme, called the Heuristic Subcube Allocation (HSA) strategy, is based on a heuristic and an undirected graph, called the Subcube (SC)-graph, whose vertices represent the free subcubes and whose edges represent inter-relationships between free subcubes. It helps to reduce the response time and internal/external fragmentation. When a subcube is released, a higher-dimension subcube is generated by cycle detection in the SC-graph, and the heuristic is used to reduce the allocation time and to keep the dimensions of the free subcubes as high as possible. It is theoretically shown that the HSA strategy is not only statically optimal but also has complete subcube recognition capability in a dynamic environment. Extensive simulation results show that the HSA strategy improves performance and significantly reduces the response time compared to previously proposed schemes.
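
    The SC-graph construction and the cycle-detection heuristic themselves are not reproduced here; the sketch below only illustrates the underlying representation such schemes work on, with free subcubes written as address strings over {0, 1, *} and a test for when two of them combine into a subcube of one higher dimension.

    ```python
    def merge_subcubes(a, b):
        """Two free subcubes (strings over '0', '1', '*') combine into one of a
        higher dimension iff they agree everywhere except a single position
        where one has '0' and the other '1'. Returns the merged address or None."""
        if len(a) != len(b):
            return None
        diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
        if len(diff) == 1 and {a[diff[0]], b[diff[0]]} == {'0', '1'}:
            merged = list(a)
            merged[diff[0]] = '*'
            return ''.join(merged)
        return None

    # In a 4-cube, the free 2-subcubes 0*1* and 1*1* merge into the 3-subcube **1*.
    print(merge_subcubes('0*1*', '1*1*'))   # '**1*'
    print(merge_subcubes('0*1*', '1*0*'))   # None: they differ in two positions
    ```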

  • Orthogonal Discriminant Analysis for Interactive Pattern Analysis

    Yoshihiko HAMAMOTO  Taiho KANAOKA  Shingo TOMITA  

     
    LETTER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E75-D No:4
      Page(s):
    602-605

    In general, a two-dimensional display is defined by two orthogonal unit vectors. In developing such a display, discriminant analysis has the shortcoming that the extracted axes are not orthogonal in general. First, in order to overcome this shortcoming, we propose a discriminant analysis which provides an orthonormal system in the transformed space. The transformation preserves the discriminatory ability in terms of the Fisher criterion. Second, we present a necessary and sufficient condition for discriminant analysis in the original space to provide an orthonormal system. Finally, we investigate the relationship between orthogonal discriminant analysis and the Karhunen-Loeve expansion in the original space.
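
    The paper's construction, which keeps the Fisher criterion while producing orthonormal axes, is not given in the abstract; the sketch below only illustrates the starting point: standard multi-class discriminant axes (eigenvectors of Sw^-1 Sb, generally not orthogonal) and a generic QR orthonormalization of the 2-D display plane they span, on invented toy data.

    ```python
    import numpy as np

    def discriminant_axes(X, y, k):
        """Standard multi-class discriminant axes: top-k eigenvectors of Sw^-1 Sb.
        These are not orthogonal in general."""
        classes = np.unique(y)
        mean = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))
        Sb = np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter
            Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
        evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
        order = np.argsort(-evals.real)
        return evecs.real[:, order[:k]]

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(m, 1.0, size=(50, 4))
                   for m in ([0, 0, 0, 0], [3, 1, 0, 0], [0, 3, 1, 0])])
    y = np.repeat([0, 1, 2], 50)

    W = discriminant_axes(X, y, k=2)
    Q, _ = np.linalg.qr(W)        # orthonormal axes spanning the same display plane
    print(np.round(W.T @ W, 3))   # generally not the identity
    print(np.round(Q.T @ Q, 3))   # identity: usable as a two-dimensional display
    ```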

  • A Method of Generating Tests for Combinational Circuits with Multiple Faults

    Hiroshi TAKAHASHI  Nobukage IUCHI  Yuzo TAKAMATSU  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E75-D No:4
      Page(s):
    569-576

    The single fault model is invalid in many cases. However, it is very difficult to generate tests for all multiple faults, since an m-line circuit may have 3^m - 1 multiple faults. In this paper, we describe a method for generating tests for combinational circuits with multiple stuck-at faults. An input vector is a test for a fault on a target line if it finds the target line to be fault-free in the presence of undetected or undetectable faults. Such a test is called a robust test for the fault on the target line. It is shown that the sensitizing input-pair for a completely single sensitized path can be a robust test-pair. The method described here consists of two procedures, which we label the "SINGLE_SEN" procedure and the "DECISION" procedure. SINGLE_SEN generates a single sensitized path including the target line by using a PODEM-like method which uses a new seven-valued calculus. DECISION determines, by utilizing the method proposed by H. Cox and J. Rajski, whether the single sensitizing input-pair generated by SINGLE_SEN is a robust test-pair. By using these two procedures, the described method generates robust test-pairs for combinational circuits with multiple stuck-at faults. Finally, we demonstrate by experimental results on the ISCAS85 benchmark circuits that SINGLE_SEN is effective for algorithmic multiple-fault test generation for circuits that do not include many XOR gates.

  • Example-Based Transfer of Japanese Adnominal Particles into English

    Eiichiro SUMITA  Hitoshi IIDA  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E75-D No:4
      Page(s):
    585-594

    This paper deals with the problem of translating Japanese adnominal particles into English according to the idea of Example-Based Machine Translation (EBMT) proposed by Nagao. Japanese adnominal particles are important because: (1) they are frequent function words; (2) translating them into English is difficult because their translations are diversified; (3) EBMT's effectiveness for adnominal particles suggests that EBMT is effective for other function words, e.g., prepositions of European languages. In EBMT, (1) a database which consists of examples (pairs of a source language expression and its target language translation) is prepared as knowledge for translation; (2) an example whose source expression is similar to the input phrase or sentence is retrieved from the example database; (3) by replacement of corresponding words in the target expression of the retrieved example, the translation is obtained. The similarity in EBMT is computed as the summation of the distances between words multiplied by the weight of each word. The authors' method differs from preceding research in two important points: (1) the authors utilize a general thesaurus to compute the distance between words; (2) the authors propose a weight which changes for every input. The feasibility of our approach has been demonstrated through experiments on the success rate.
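
    The thesaurus-based word distance and the input-dependent weights of the paper are not specified in the abstract; the sketch below only illustrates the retrieval step of EBMT with a hypothetical distance table and fixed weights, scoring stored examples by a weighted sum of word distances.

    ```python
    # Hypothetical word classes standing in for a general thesaurus.
    WORD_CLASS = {'kyoto': 'place', 'tokyo': 'place', 'kaigi': 'event', 'ronbun': 'document'}

    def thesaurus_distance(a, b):
        """0 for the same word, 0.5 for words in the same class, 1 otherwise."""
        if a == b:
            return 0.0
        if WORD_CLASS.get(a) == WORD_CLASS.get(b):
            return 0.5
        return 1.0

    def dissimilarity(input_words, example_words, weights):
        """Weighted sum of per-slot word distances (smaller = more similar)."""
        return sum(w * thesaurus_distance(i_w, e_w)
                   for i_w, e_w, w in zip(input_words, example_words, weights))

    # Stored examples of the Japanese "X no Y" pattern and their English translations.
    examples = {
        ('kyoto', 'kaigi'): 'the conference in Kyoto',
        ('kaigi', 'ronbun'): 'the paper for the conference',
    }
    inp = ('tokyo', 'kaigi')
    weights = (1.0, 1.0)
    best = min(examples, key=lambda ex: dissimilarity(inp, ex, weights))
    # Retrieves the Kyoto example; replacing 'Kyoto' with 'Tokyo' gives the translation.
    print(best, '->', examples[best])
    ```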

  • Error Analysis of Circle Drawing Using Logarithmic Number Systems

    Tomio KUROKAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E75-D No:4
      Page(s):
    577-584

    Logarithmic number systems (LNS) provide a very fast computational method. Their exceptional speed has been demonstrated in signal processing and then in computer graphics. However, the precision problem of LNS in computer graphics has not been fully examined. In this paper, an analysis is made of the precision problem of LNS in picture generation, in particular for circle drawing. A theoretical error analysis is made for circle drawing; that is, expressions are developed for the relative error variances. These are then examined by simulation experiments. Comparisons are also made with floating-point arithmetic of equivalent word length and dynamic range. The results show that the theory and the experiments agree reasonably well and that logarithmic arithmetic is superior to, or at least comparable to, the corresponding floating-point arithmetic of equivalent word length and dynamic range. These results are also verified by visual inspection of actually drawn circles. The analysis also shows that the conversion error (from integer to LNS), which is inherent in computer graphics with LNS, does not have too much influence on the total computational error for circle drawing, but that square-rooting has a larger influence.
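
    The paper's error-variance expressions are not reproduced in the abstract; the sketch below only illustrates the LNS representation it builds on: a value is stored as a sign plus a fixed-point base-2 logarithm, so multiplication becomes addition of log parts and square-rooting becomes a halving. The 12-bit fractional word length and the subtraction done in the linear domain are editorial simplifications.

    ```python
    import math

    F_BITS = 12  # fractional bits of the base-2 logarithm (an assumed word length)

    def to_lns(v):
        """Encode a nonzero value as (sign, quantized log2|v|); zero would need a flag."""
        e = round(math.log2(abs(v)) * (1 << F_BITS))
        return (1 if v >= 0 else -1, e)

    def from_lns(se):
        s, e = se
        return s * 2.0 ** (e / (1 << F_BITS))

    def lns_mul(a, b):
        """Multiplication in LNS is an addition of the log parts."""
        return (a[0] * b[0], a[1] + b[1])

    def lns_sqrt(a):
        """Square root in LNS is a halving of the log part (operand assumed positive)."""
        return (1, a[1] // 2)

    # One circle point y = sqrt(r^2 - x^2); the subtraction is done in the linear
    # domain here, whereas a real LNS ALU would use an addition/subtraction table.
    r, x = 100.0, 60.0
    r2 = from_lns(lns_mul(to_lns(r), to_lns(r)))
    x2 = from_lns(lns_mul(to_lns(x), to_lns(x)))
    y = from_lns(lns_sqrt(to_lns(r2 - x2)))
    print(y, math.sqrt(r * r - x * x))   # LNS result vs. exact value (small error)
    ```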

  • Analogical Reasoning as a Form of Hypothetical Reasoning

    Ryohei ORIHARA  

     
    PAPER

      Vol:
    E75-D No:4
      Page(s):
    477-486

    The meaning of analogical reasoning in locally stratified logic programs is described by generalized stable model (GSM) semantics. Although studies on the theoretical aspects of analogical reasoning have recently been on the increase, there have been few attempts to give declarative semantics for analogical reasoning. This paper takes note of the fact that GSM semantics gives meaning such that negated predicates represent exceptional cases. We define predicates that denote unusual cases regarding analogical reasoning; for example, ab(x) ← p(x) ∧ ¬q(x), where p(s), q(s), p(t) are given. We also add rules with negated occurrences of such predicates to the original program. In this way, analogical models for original programs are given in the form of GSMs of extended programs. A proof procedure for this semantics is presented. The main objective of this paper is not to construct a practical analogical reasoning system, but rather to present a framework for analyzing characteristics of analogical reasoning.

  • On the Generative Capacity of Lexical-Functional Grammars

    Ryuichi NAKANISHI  Hiroyuki SEKI  Tadao KASAMI  

     
    PAPER-Automaton, Language and Theory of Computing

      Vol:
    E75-D No:4
      Page(s):
    509-516

    Lexical-Functional Grammars (LFG's) were introduced to define the syntax of natural languages. In LFG's, each node of a derivation tree has some attributes. An LFG G consists of a context-free grammar (cfg) G0 called the underlying cfg of G and a description Pfs of constraints between the values of the attributes. Pfs can specify (1) constraints between the value of an attribute of a node and those of its children, and (2) constraints between the value of an attribute of a node called a controller and that of a node called its controllee. RLFG's were introduced as a subclass of LFG's. In RLFG's, only constraints between the value of an attribute of a node and those of its children can be specified. It is shown in this paper that the class of languages generated by RLFG's is equal to the class of recursively enumerable languages. Some restrictions on LFG's were proposed for the purpose of efficient parsing. Among them are (1) the condition called a valid derivation, and (2) the condition that the underlying cfg is cycle-free. For an RLFG G, if the production rules of the underlying cfg of G are of the form A → aB or A → a for nonterminal symbols A, B and a terminal symbol a, then G is called an R-RLFG. Every R-RLFG satisfies the above restrictions (1) and (2). It is also shown in this paper that the class of languages generated by R-RLFG's contains an NP-hard language, which means that parsing of LFG's in deterministic polynomial time is impossible in general (unless P = NP) even if the above restrictions (1) and (2) are satisfied.
