
Keyword Search Result

[Keyword] PA (8249 hits)

Showing results 7881-7900 of 8249

  • Defect Detection of Passivation Layer by a Bias-Free Cu Decoration Method

    Tetsuaki WADA  Shinji NAKANO

    PAPER
    Vol: E77-C No:4  Page(s): 585-589

    A new method for detecting passivation-layer defects was studied: a Cu decoration method that requires no bias voltage (bias-free Cu decoration). Comparison with the conventional biased method showed that the bias-free Cu decoration method is effective, sensitive, and simple. With this method, differences in humidity resistance caused by poor passivation coverage can be evaluated.

  • Matching of DUT Interconnection Pattern with CAD Layout in CAD-Linked Electron Beam Test System

    Koji NAKAMAE  Ryo NAKAGAKI  Katsuyoshi MIURA  Hiromu FUJIOKA

    PAPER
    Vol: E77-C No:4  Page(s): 567-573

    Precise matching of the SEM (scanning electron microscope) image of the DUT (device under test) interconnection pattern with the CAD layout is required in a CAD-linked electron beam test system. We propose a point pattern matching method that utilizes a corner pattern in the CAD layout. In this method, a corner pattern consisting of a small number of pixels is derived by taking the design rules of VLSIs into account. Using the corner pattern as a template, the matching points of the template are sought in both the SEM image and the CAD layout. Then, the point image obtained from the SEM image of the DUT is matched with that from the CAD layout. Even if noise in the SEM image causes the number of points obtained from the DUT pattern to differ from that in the CAD layout, the point pattern matching method still succeeds. The method is applied to nonpassivated and passivated LSIs. Even for the passivated LSI, where the contrast in the SEM image is mainly determined by voltage contrast, matching is successful. The computing time of the proposed method is shorter than that of a conventional correlation-coefficient method by a factor of 4 to 10.
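
    The following sketch (ours, not the authors' implementation) illustrates the two stages in Python: corner points are detected by sliding a small corner template with normalized cross-correlation, and the resulting point sets are then paired greedily. The function names, the correlation threshold, and the distance tolerance are all hypothetical.

      import numpy as np

      def detect_corners(image, template, threshold=0.8):
          # Slide the corner template over the image; keep positions whose
          # normalized cross-correlation exceeds the threshold.
          th, tw = template.shape
          t = template - template.mean()
          tn = np.linalg.norm(t)
          points = []
          for y in range(image.shape[0] - th + 1):
              for x in range(image.shape[1] - tw + 1):
                  w = image[y:y + th, x:x + tw]
                  w = w - w.mean()
                  denom = np.linalg.norm(w) * tn
                  if denom > 0 and (w * t).sum() / denom >= threshold:
                      points.append((y, x))
          return np.array(points, dtype=float)

      def match_point_sets(p, q, tol=3.0):
          # Greedy nearest-neighbour pairing; tolerates unequal point
          # counts, as when noise adds or removes points in the SEM image.
          matches, used = [], set()
          for i, a in enumerate(p):
              d = np.linalg.norm(q - a, axis=1)
              j = int(np.argmin(d))
              if d[j] <= tol and j not in used:
                  matches.append((i, j))
                  used.add(j)
          return matches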

  • Ray-Optical Techniques in Dielectric Waveguides

    Masahiro HASHIMOTO  Hiroyuki HASHIMOTO

    PAPER-Electromagnetic Theory
    Vol: E77-C No:4  Page(s): 639-646

    We describe a geometrical optics approach for the analysis of dielectric tapered waveguides. The method is based on a ray-optical treatment of wave-normal rays, newly defined for waves of light in open structures. Geometrical optics fields are represented in terms of two kinds of wave-normal rays: leaky rays and guided rays. Since these rays behave differently in the two regions separated at critical incidence, the geometrical optics fields exhibit certain classes of discontinuity in the transition region between the leaky and guided regions. Guided wave solutions are given as a superposition of guided rays that zigzag along the guide, all of which are totally reflected at the interfaces. By including some leaky rays adjacent to the guided rays, we obtain more accurate guided wave solutions. Calculated results are in excellent agreement with wave optics solutions.
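
    For orientation only, the snippet below shows the standard ray-optics rule separating the two classes of rays at a core/cladding interface; it is textbook total-internal-reflection physics, not the paper's wave-normal-ray formalism, and the refractive indices are assumed values.

      import numpy as np

      def classify_ray(theta_rad, n_core=1.50, n_clad=1.45):
          # A ray incident on the interface at an angle (from the normal)
          # beyond the critical angle is totally reflected, hence guided;
          # otherwise part of its power radiates and the ray is leaky.
          theta_c = np.arcsin(n_clad / n_core)   # critical angle
          return "guided" if theta_rad > theta_c else "leaky"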

  • Evaluation of Robustness in a Learning Algorithm that Minimizes Output Variation for Handprinted Kanji Pattern Recognition

    Yoshimasa KIMURA

    PAPER-Learning
    Vol: E77-D No:4  Page(s): 393-401

    This paper uses both network analysis and experiments to confirm that the neural network learning algorithm that minimizes output variation (BPV) provides much more robustness than back-propagation (BP) or BP with noise-modified training samples (BPN). Network analysis clarifies the relationship between sample displacement and what and how the network learns. Sample displacement generates variation in the output of the units in the output layer. The output variation model introduces two types of deformation error, both of which modify the mean square error. We propose a new error measure that combines the two types of deformation error. Network analysis using this new error considers that BPV learns two types of training samples, modified either toward or away from the category mean, which is defined as the center of the sample distribution. The magnitude of modification depends on the position of the training sample in the sample distribution and the degree of learning completion. The conclusion is that BPV learns samples modified toward the category mean more strongly than those modified away from it; that is, it achieves nonuniform learning. Another conclusion is that BPN learns from uniformly modified samples. We conjecture that BPV is much more robust than the other two algorithms. Experiments evaluate robustness from two viewpoints: overall robustness and specific robustness. Benchmark studies using distorted handprinted Kanji character patterns examine overall robustness, and two kinds of specifically modified samples (noise-modified and directionally-modified samples) examine specific robustness. Both sets of studies confirm the superiority of BPV and the accuracy of the conjecture.
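
    As a loose illustration of the idea (one plausible reading, not the paper's exact BPV formulation), a variation-minimizing objective can add a penalty on how much the network output changes under a small sample displacement; net, displacement, and the weight lam below are hypothetical.

      import numpy as np

      def loss_with_output_variation(net, x, target, displacement, lam=0.1):
          # Ordinary mean-square error plus a penalty on output variation
          # under input displacement; minimizing the second term is the
          # spirit of "minimizing output variation".
          y = net(x)
          y_disp = net(x + displacement)
          mse = np.mean((y - target) ** 2)
          variation = np.mean((y_disp - y) ** 2)
          return mse + lam * variation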

  • A Neurocomputational Approach to the Correspondence Problem in Computer Vision

    Hiroshi SAKO  Hadar Itzhak AVI-ITZHAK

    PAPER-Image Processing
    Vol: E77-D No:4  Page(s): 507-515

    A problem that often arises in computer vision is matching corresponding points of images. In object recognition, for example, the computer compares new images to templates from a library of known objects. A common way to perform this comparison is to extract feature points from the images and compare them with the template points. Another common example is motion detection, where feature points of a video frame are compared to those of the previous frame. In both of these examples, the point correspondence is complicated by the fact that the point sets are not only randomly ordered but have also been distorted by an unknown transformation, so they have quite different coordinates. In object recognition, there is a transformation from the object being viewed to its projection onto the camera's imaging plane, while in motion detection this transformation represents the motion (translation and rotation) of the object. If the parameters of the transformation are completely unknown, then all n! permutations must be compared (n: number of feature points). For each permutation, the associated transformation is computed using the least-squares projection method. The exponentially large computation this requires is prohibitive. A neural computational method is proposed to solve these combinatorial problems; it obtains the best correspondence matching and also finds the associated transform parameters. The method was applied to two-dimensional point correspondence and three-to-two-dimensional correspondence. Finally, this connectionist approach extends readily to a Boltzmann machine implementation, which is desirable when the transformation is unknown, as it is less sensitive to local minima regardless of initial conditions.
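
    The exhaustive baseline described above is easy to state concretely. The sketch below (ours, with hypothetical names) scores one candidate ordering with the orthogonal Procrustes least-squares fit and tries all n! orderings; its factorial cost is exactly what the neural method is designed to avoid.

      import numpy as np
      from itertools import permutations

      def procrustes_residual(p, q):
          # Least-squares residual after the best rotation aligning the
          # centred set p onto the centred set q (orthogonal Procrustes).
          pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
          u, _, vt = np.linalg.svd(pc.T @ qc)
          r = u @ vt
          return np.linalg.norm(pc @ r - qc)

      def best_correspondence(p, q):
          # The exponential O(n!) baseline: try every ordering of q and
          # keep the one with the smallest least-squares fitting error.
          best_err, best_perm = np.inf, None
          for perm in permutations(range(len(q))):
              err = procrustes_residual(p, q[list(perm)])
              if err < best_err:
                  best_err, best_perm = err, perm
          return best_perm, best_err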

  • Experimental Design of a 32-bit Fully Asynchronous Microprocessor (FAM)

    Kyoung-Rok CHO  Kazuma OKURA  Kunihiro ASADA

    PAPER-Electronic Circuits
    Vol: E77-C No:4  Page(s): 615-623

    This paper describes a 32-bit fully asynchronous microprocessor with a 4-stage pipeline based on a RISC-like architecture. Issues relevant to the processor, such as the design of the self-timed datapath, the asynchronous controller, and the interconnection circuits, are discussed. Simulation results using parameters extracted from the layout show a processing speed of about 300 MIPS; the design uses 71,000 transistors in a 0.5 µm CMOS technology.

  • Minimizing the Data Transfer in Evaluating an Expression in a Distributed-Memory Parallel-Processing System

    Hiroshi OHTA  Kousuke SAKODA  Koichiro ISHIHARA

    PAPER-Computer Systems
    Vol: E77-D No:3  Page(s): 288-298

    In a distributed-memory parallel-processing system, the overhead of data transfer among the processors is so large that reducing data transfer is important. We consider the data transfer involved in evaluating an expression whose operands are distributed among the processors. We propose algorithms that assign the operators in the expression to the processors so as to minimize the number or the total cost of data transfers, on the condition that the data allocation to the processors is given. The basic algorithm is given first, followed by some variations.
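
    A minimal sketch of one such assignment problem, under assumptions of our own (binary operators, unit transfer cost, each leaf resident on a home processor), is the following tree dynamic program; the paper's algorithms and cost model are more general.

      from collections import namedtuple

      # Tiny expression-tree node: leaves carry a home processor; internal
      # nodes carry left/right children (the operator itself is irrelevant
      # to transfer counting).  This representation is ours, not the paper's.
      Node = namedtuple('Node', 'is_leaf home left right')
      leaf = lambda p: Node(True, p, None, None)
      op = lambda l, r: Node(False, None, l, r)

      def avail_cost(node, procs):
          # cost[p] = minimum number of transfers needed to make this
          # node's value available on processor p (unit cost per transfer).
          if node.is_leaf:
              return {p: 0 if p == node.home else 1 for p in procs}
          left = avail_cost(node.left, procs)
          right = avail_cost(node.right, procs)
          compute = {p: left[p] + right[p] for p in procs}  # evaluate on p
          cheapest = min(compute.values())
          # The result may be shipped elsewhere for one extra transfer.
          return {p: min(compute[p], cheapest + 1) for p in procs}

      # Example: (a + b) * c with a, b on processor 0 and c on processor 1.
      tree = op(op(leaf(0), leaf(0)), leaf(1))
      print(avail_cost(tree, [0, 1]))   # one transfer suffices either way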

  • Fast Algorithms for Minimum Covering Run Expression

    Supoj CHINVEERAPHAN  AbdelMalek B.C. ZIDOURI  Makoto SATO

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E77-D No:3  Page(s): 317-325

    The Minimum Covering Run (MCR) expression for representing binary images has been proposed [1]-[3]. The MCR expression is an adaptation of the horizontal and vertical run expression: some horizontal and some vertical runs are used together to represent a binary image, such that the total number of runs is minimized. It was shown that the sets of horizontal and vertical runs representing any binary image can be viewed as the partite sets of a bipartite graph, so the MCR expression of a binary image can be found by constructing a maximum matching, and hence a minimum covering, in the corresponding graph. The original algorithm combined the most efficient algorithm for these graph-theoretical problems, proposed by Hopcroft, with a Rectangular Segment Analysis (RSA). However, the original algorithm still suffers from a long processing time. In this paper, we propose two new efficient MCR algorithms that are beneficial for practical implementation. The new algorithms consist of two main procedures: Partial Segment Analysis (PSA) and construction of a maximum matching. We show that the first procedure, a direct improvement of the RSA, correctly identifies many representative runs of the MCR expression in regions of text and line drawings. Owing to the PSA, the new algorithms reduce the number of runs passed to the matching procedure, so that a satisfactory processing time is obtained. Experimental results compare the performance of the original and new algorithms in terms of processing time.
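
    The matching step itself can be stated compactly. The sketch below runs Kuhn's augmenting-path algorithm on the run graph; the paper uses the faster Hopcroft-Karp algorithm, so this O(V·E) version only illustrates the maximum-matching idea, and the adjacency encoding is assumed.

      def maximum_matching(adj, n_horizontal):
          # Bipartite graph: partite sets are horizontal and vertical runs;
          # adj[u] lists the vertical runs crossing horizontal run u.
          match_v = {}                     # vertical run -> horizontal run
          def augment(u, seen):
              for v in adj[u]:
                  if v in seen:
                      continue
                  seen.add(v)
                  if v not in match_v or augment(match_v[v], seen):
                      match_v[v] = u
                      return True
              return False
          size = sum(augment(u, set()) for u in range(n_horizontal))
          return match_v, size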

  • Performance Bounds for MLSE Equalization and Decoding with Repeat Request for Fading Dispersive Channels

    Hiroshi NOGAMI  Gordon L. STÜBER

    PAPER-Information Theory and Coding Theory
    Vol: E77-A No:3  Page(s): 553-562

    Upper bounds on the bit error probability and repeat request probability, and lower bounds on the throughput, are derived for a hybrid-ARQ scheme that employs trellis-coded modulation on a fading dispersive channel. The receiver employs a modified Viterbi algorithm to perform joint maximum likelihood sequence estimation (MLSE) equalization and decoding. Retransmissions are generated using the approach suggested by Yamamoto and Itoh. The analytical bounds are extended to trellis-coded modulation on fading dispersive channels with code combining. Comparison of the analytical bounds with simulation results shows that the bounds are quite loose when diversity reception is not employed. However, no other analytical bounds exist in the literature for the trellis-coded hybrid-ARQ system studied in this paper; the results presented here can therefore provide a basis for comparison with more sophisticated analytical bounds that may be derived in the future.
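
    To make the MLSE step concrete, the sketch below implements a bare Viterbi equalizer for a known FIR channel with a BPSK alphabet (assumed); the paper's modified receiver additionally decodes the trellis code and generates repeat requests, which are omitted here.

      from itertools import product

      def mlse_viterbi(received, channel, symbols=(-1.0, 1.0)):
          # Viterbi search over intersymbol-interference states, i.e. the
          # last L transmitted symbols for a channel with L+1 taps.
          L = len(channel) - 1
          states = list(product(symbols, repeat=L))
          cost = {s: 0.0 for s in states}
          path = {s: [] for s in states}
          for r in received:
              new_cost, new_path = {}, {}
              for s in states:
                  for a in symbols:
                      # Noiseless channel output for symbol a in state s.
                      y = channel[0] * a + sum(c * x for c, x
                                               in zip(channel[1:], s))
                      ns = (a,) + s[:-1]
                      m = cost[s] + (r - y) ** 2
                      if ns not in new_cost or m < new_cost[ns]:
                          new_cost[ns] = m
                          new_path[ns] = path[s] + [a]
              cost, path = new_cost, new_path
          return path[min(cost, key=cost.get)]

      # Example: two-tap channel 1 + 0.5 z^-1, three noisy observations.
      print(mlse_viterbi([0.9, -0.4, 1.4], [1.0, 0.5]))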

  • Identification of the Particle Source in LSI Manufacturing Process Equipment

    Yoshimasa TAKII  Nobuo AOI  Yuichi HIROFUJI

    PAPER-Process Technology
    Vol: E77-C No:3  Page(s): 486-491

    Today, the defect sources of LSI devices lie mainly in the process equipment. Particles generated in this equipment are deposited onto the wafer and form defects that result in functional failures of the LSI device. Reducing these particles is thus required for higher production yield and productivity, and it is important to identify the particle source in the equipment. In this study, we discuss two new methods for identifying this source in equipment used on a production line. The key requirements are to estimate particle generation quickly and accurately, and to avoid the long equipment shutdowns that disassembly requires. First, we describe the particle distribution analysis method, in which the particle distribution is expressed mathematically; we applied this method to our etching equipment and identified the particle source without stopping the equipment. Second, we describe an in-situ particle monitoring method and apply it to our AP-CVD equipment. This revealed the main particle source of the equipment and a procedure for reducing the particles, and allowed particle generation to be estimated in real time during processing without stopping the equipment. Both methods can therefore estimate particle generation and identify the particle source quickly and accurately, without long shutdowns of the process equipment or interruption of the production line, and we conclude that they are very useful and effective in LSI manufacturing.

  • Comparison of Classifiers in Small Training Sample Size Situations for Pattern Recognition

    Yoshihiko HAMAMOTO  Shunji UCHIMURA  Shingo TOMITA

    LETTER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E77-D No:3  Page(s): 355-357

    The main problem in statistical pattern recognition is to design a classifier. Many researchers point out that a finite number of training samples causes practical difficulties and constraints in designing a classifier. However, very little is known about the performance of classifiers in small training sample size situations. In this paper, we compare the classification performance of well-known classifiers (k-NN, Parzen, Fisher's linear, quadratic, modified quadratic, and Euclidean distance classifiers) when the number of training samples is small.
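
    A small experiment in this spirit is easy to set up with scikit-learn; the sketch below uses synthetic Gaussian data (not the paper's) and omits the Parzen and modified quadratic classifiers, which have no direct scikit-learn equivalent.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
      from sklearn.discriminant_analysis import (
          LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

      rng = np.random.default_rng(0)
      # Ten training samples per class in five dimensions: deliberately small.
      x_train = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(1, 1, (10, 5))])
      y_train = np.array([0] * 10 + [1] * 10)
      x_test = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
      y_test = np.array([0] * 200 + [1] * 200)

      classifiers = {
          'k-NN (k=3)': KNeighborsClassifier(3),
          "Fisher's linear": LinearDiscriminantAnalysis(),
          'Quadratic': QuadraticDiscriminantAnalysis(),
          'Euclidean distance': NearestCentroid(),
      }
      for name, clf in classifiers.items():
          print(name, clf.fit(x_train, y_train).score(x_test, y_test))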

  • Modified Numerical Technique for the Beam Propagation Method Based on Galerkin's Technique

    Guosheng PU  Tetsuya MIZUMOTO  Yoshiyuki NAITO

    PAPER-Opto-Electronics
    Vol: E77-C No:3  Page(s): 510-514

    A modified beam propagation method based on Galerkin's technique (FE-BPM) has been implemented and applied to the analysis of optical beam propagation in a tapered dielectric waveguide. It is based on a new calculation procedure that uses non-uniform sampling spacings along the transverse coordinate. Comparison with a conventional FE-BPM shows a definite improvement in computation time. Differences in the propagated field and in the mean-square power between the proposed and the conventional FE-BPM are also discussed.

  • Removal of Particles on Si Wafers in SC-1 Solution

    Hiroyuki KAWAHARA  Kenji YONEDA  Izumi MUROZONO  Yoshihiro TODOKORO

    PAPER-Process Technology
    Vol: E77-C No:3  Page(s): 492-497

    We have investigated the relationship between particle removal efficiency and etched depth in SC-1 solution (a mixture of ammonium hydroxide, hydrogen peroxide, and DI water) for Si wafers. The Si etching rate increases with increasing NH4OH (ammonium hydroxide) concentration. The particle removal efficiency depends on the etched Si depth and is independent of the NH4OH concentration. The minimum Si etching depth required to obtain over 95% particle removal efficiency is 4 nm. Particles on the Si wafers decrease exponentially with increasing etched Si depth. However, the particle removal efficiency is not affected by particle size in the range 0.2 to 0.5 µm. The particle removal mechanism on Si wafers in SC-1 solution is dominated by lift-off of particles due to Si undercutting, together with redeposition of the removed particles.
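
    If the decrease is modeled as first-order decay, N(d) = N0·exp(-d/λ) (our assumed form, not stated in the abstract), the reported 95% removal at 4 nm fixes the decay length, as the short sketch below shows.

      import numpy as np

      # exp(-4/lam) = 0.05  ->  lam = 4 / ln(20) ~ 1.34 nm of etched Si.
      lam = 4.0 / np.log(20.0)

      def removal_efficiency(d_nm):
          # Fraction of particles removed after etching d_nm of silicon.
          return 1.0 - np.exp(-d_nm / lam)

      print(removal_efficiency(4.0))    # ~0.95 by construction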

  • Stability of an Active Two-Port Network in Terms of S Parameters

    Yoshihiro MIWA

    PAPER-Electronic Circuits
    Vol: E77-C No:3  Page(s): 498-509

    The stability conditions and stability factors of terminated active two-port networks are investigated. They are expressed in terms of the S parameters of the active device and the radii and centers of the circles defined by the source and load terminations. The stability conditions are applied to specific cases. Some of the results correspond to the stability conditions expressed in Z, Y, H, or G parameters, and one of the other stability conditions of the terminated two-port network is similar to that for passive terminations expressed in S parameters. The results derived in this paper are very useful for checking the stability of amplifiers, because both the stability conditions and the stability factors are computed directly from the S parameters, without graphical methods or transformation of the S parameters to Z, Y, H, or G parameters. These stability conditions can also be used even when a negative input or output resistance appears, and even when the real part of the source or load immittance is negative.
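
    As a point of reference, the conventional S-parameter stability check for the unterminated two-port case (Rollett's criterion) is computed as below; the paper's conditions, which account for the actual source and load termination circles, are more general. The example values are made up.

      def rollett_stability(s11, s12, s21, s22):
          # Unconditional stability iff K > 1 and |Delta| < 1.
          delta = s11 * s22 - s12 * s21
          k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) \
              / (2 * abs(s12 * s21))
          return k, abs(delta), bool(k > 1 and abs(delta) < 1)

      # Hypothetical amplifier S parameters at one frequency:
      print(rollett_stability(0.6 - 0.2j, 0.05, 2.5 + 1.0j, 0.5 - 0.1j))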

  • Flexible Information Sharing and Handling System--Towards Knowledge Propagation--

    Yoshiaki SEKI  Toshihiko YAMAKAMI  Akihiro SHIMIZU

    PAPER
    Vol: E77-B No:3  Page(s): 404-410

    The use of computers with private networks has accelerated the electronic storage of business information in office systems. With rapid progress in processing capability and the miniaturization of computers, private networks are becoming more intelligent. The utilization of shared information is a key issue for modern organizations seeking to increase the productivity of white-collar workers. In the CSCW research field, it is recognized that informal and unstructured information is important in group work contexts but difficult to locate in a large organization, and many researchers emphasize the importance of support systems for such information, often called organizational memory or group memory. Our research focuses on knowledge propagation over private networks in an organization, that is, on the process by which organized information, or the ability to use information, is circulated throughout the organization. Knowledge propagation raises three issues: knowledge transmission, destination locating, and source locating. To cope with these issues we developed FISH, which stands for Flexible Information Sharing and Handling system. FISH was designed to provide cooperative information sharing in a group work context and to explore knowledge propagation. FISH stores fragmentary information as cards with multiple keywords and content. This paper discusses a three-layered model of computer-supported knowledge transmission and, based on this model, the three issues of knowledge propagation. FISH and its two-year experiment are described, and knowledge propagation is explored based on the results of this experiment.

  • Degradation Mechanisms of Thin Film SIMOX SOI-MOSFET Characteristics--Optical and Electrical Evaluation--

    Mitsuru YAMAJI  Kenji TANIGUSHI  Chihiro HAMAGUCHI  Kazuo SUKEGAWA  Seiichiro KAWAMURA

    PAPER-Device Technology
    Vol: E77-C No:3  Page(s): 373-378

    Optical and electrical measurements of thin-film n-channel SOI-MOSFETs reveal that the exponential tail in the photon emission spectra originates from electron-hole recombination; the bremsstrahlung radiation model as the physical mechanism of photon emission is experimentally ruled out. The negative threshold voltage shift at the initial stage of high-field stress is found to be caused by hole trapping in the buried oxide. The subsequent turnover characteristic is explained by a competing process between electron trapping in the front gate oxide and hole trapping in the buried oxide. The degradation of transconductance is complicated, involving both generated surface states and holes trapped in the buried oxide, which reduce the vertical electric field in the SOI film.

  • Design Rule Relaxation Approach for High-Density DRAMs

    Takanori SAEKI  Eiichiro KAKEHASHI  Hidemitu MORI  Hiroki KOGA  Kenji NODA  Mamoru FUJITA  Hiroshi SUGAWARA  Kyoichi NAGATA  Shozo NISHIMOTO  Tatsunori MUROTANI

    PAPER-Device Technology
    Vol: E77-C No:3  Page(s): 406-415

    A design rule relaxation approach is one of the most important requirements for high-density DRAMs. The approach relaxes the design rule of an element relative to the memory cell size and enables high-density DRAMs with minimal development of scaled-down MOS structures and fine-patterning lithography processes. This paper describes two design rule relaxation approaches: a close-packed folded (CPF) bit-line cell array layout and a Boosted Dual Word-Line scheme. The CPF cell array provides a 1.26 times wider active area pitch and up to 1.5 times wider isolation width. The Boosted Dual Word-Line scheme provides a 2n times wider 1st Al pitch on the memory cell array, double the word-line driver pitch, and a 1.5 times larger design rule for 1st Al and the contacts under it. In particular, the wide design rule of the Boosted Dual Word-Line scheme provides several times the depth of focus (DOF) for 1st Al wiring, which allows a several times higher storage node and larger capacitance for capacitor-over-bit-line (COB) stacked capacitor cells. These approaches are successfully implemented in a 4 Mb DRAM test chip with a 0.9 × 1.8 µm² memory cell.

  • Extraction of Glossiness of Curved Surfaces by the Use of Spatial Filter Simulating Retina Function

    Seiichi SERIKAWA  Teruo SHIMOMURA

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E77-D No:3  Page(s): 335-342

    Although gloss is perceived by human vision, several methods have been proposed for extracting the glossiness of curved surfaces by machine. The glossiness defined by these methods, however, does not correspond to the psychological glossiness perceived by the human eye over the wide range from relatively low gloss to high gloss. In addition, the glossiness obtained by these methods changes remarkably when the curvature radius of a high-gloss object becomes larger than 10 mm, whereas psychological glossiness does not change; furthermore, these methods apply only to spherical objects. A new method for extracting glossiness is proposed in this study. For the new definition of glossiness, a spatial filter that simulates the function of the human retina is utilized: the light intensity distribution of the curved object is convolved with the spatial filter, and the maximum value Hmax of the convolved distribution shows a high correlation with psychological glossiness Gph. From the relationship between Gph and Hmax, a new glossiness Gf is defined. The gloss-extraction equipment consists of a light source, a TV camera, an image processor, and a personal computer. Cylinders with curvature radii of 3-30 mm are used as specimens in addition to spherical balls. For all specimens, a strong correlation, with a correlation coefficient of more than 0.97, is observed between Gf and Gph over a wide range. The new glossiness Gf conforms to Gph even when the curvature radius is more than 10 mm. These findings show that the method is useful for extracting the glossiness of spherical and cylindrical objects over a wide range from relatively low gloss to high gloss.
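
    A common center-surround retina model is a difference-of-Gaussians filter; the sketch below uses it as a stand-in for the paper's spatial filter (the exact filter and the Gph-to-Gf mapping are not reproduced here) and returns the maximum convolved response, playing the role of Hmax.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def glossiness_feature(intensity, sigma_center=1.0, sigma_surround=3.0):
          # Difference-of-Gaussians convolution of the light intensity
          # image; the peak response stands in for Hmax.
          dog = (gaussian_filter(intensity, sigma_center)
                 - gaussian_filter(intensity, sigma_surround))
          return float(dog.max())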

  • A Circuit Partitioning Approach for Parallel Circuit Simulation

    Tetsuro KAGE  Fumiyo KAWAFUJI  Junichi NIITSUMA

    PAPER-Modeling and Simulation
    Vol: E77-A No:3  Page(s): 461-466

    We have studied a circuit partitioning approach from the viewpoint of parallel circuit simulation on a MIMD parallel computer. In parallel circuit simulation, a circuit is partitioned into equally sized subcircuits while minimizing the number of interconnection nodes; moreover, the circuit partitioning time should be short compared with the total simulation time. From a breakdown of circuit simulation time, we found that balancing the subcircuits is critical at low degrees of parallelism, whereas minimizing the number of interconnection nodes is critical at high degrees of parallelism. Our circuit partitioning approach consists of four steps: grouping transistors, initial partitioning of the transistor groups, minimizing the number of interconnection nodes, and balancing the subcircuits. It is an algorithmic approach and can directly control the tradeoff between balancing the subcircuits and minimizing the interconnection nodes by adjusting its parameters. We partitioned a test circuit with 3277 transistors into 4, 9, ... , 64 subcircuits and ran parallel simulations using PARACS, our parallel circuit simulator, on an AP1000 parallel computer. The circuit partitioning time was short enough, less than 3 percent of the total simulation time. The highest performance of the parallel analysis, using 49 processors, was 16 times that of a single processor; for the total simulation it was 9 times.
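
    The balancing step, in isolation, can be as simple as a largest-first greedy; the sketch below (our illustration, not the paper's algorithm) assigns transistor groups to the currently lightest subcircuit and ignores interconnection-node minimization.

      import numpy as np

      def balance_groups(sizes, n_subcircuits):
          # Place each transistor group, largest first, on the subcircuit
          # with the smallest current load.
          loads = np.zeros(n_subcircuits)
          assignment = {}
          for g in np.argsort(sizes)[::-1]:
              p = int(np.argmin(loads))
              assignment[int(g)] = p
              loads[p] += sizes[g]
          return assignment, loads

      # Example: seven groups of transistors spread over three subcircuits.
      print(balance_groups([120, 80, 75, 60, 40, 30, 10], 3))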

  • A Symbolic Analysis Method Using Signal Block Diagrams and Its Application to Bias Synthesis of Analog Circuits

    Hideyuki KAWAKITA  Seijiro MORIYAMA

    PAPER-Computer Aided Design (CAD)
    Vol: E77-A No:3  Page(s): 502-509

    In this paper, an efficient and robust circuit parameter determination method suitable for analog circuit synthesis is presented. The method uses a block diagram representation of the circuit as implicit design knowledge. Circuit parameters are determined by propagating known values along the signal flow in the block diagram. This propagation succeeds when the unknown circuit parameters can each be solved directly; however, when the block diagram involves implicit relations, the propagation stops before all unknown parameters are determined. To cope with this problem, we introduce a method that combines a symbolic analysis technique with a numerical method. When the propagation of known values stops, one of the unknown signals is selected, a unique symbol is assigned to it, and the signal propagation is restarted. This operation is repeated until there are no unknown signals. When the symbol propagation reaches a signal whose value is already set, a nonlinear equation for that signal is obtained by equating the two values; it can be solved by a numerical method such as Newton's method. A parameter determination method using a procedural description is superior to optimization-based methods because design know-how can be incorporated into the description straightforwardly, but developing a design procedure for each circuit is burdensome for designers. Because the block-diagram-based calculation can be invoked as subroutine calls during design procedure development, it simplifies the procedural description and lowers the burden on designers. The method was applied to the element value determination of bias circuits to demonstrate its effectiveness.
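
    The flavor of the method can be shown with SymPy on a hypothetical bias loop: forward propagation fixes most values, a symbol is introduced where propagation stalls, and the implicit equation is solved numerically. All component values and the diode model are assumptions for illustration.

      import sympy as sp

      vbe = sp.Symbol('vbe')            # symbol assigned where propagation stalls
      vcc, rb = 5.0, 4.7e5              # known values reached by propagation
      isat, vt = 1e-14, 0.025           # assumed diode model parameters
      # Propagation meets itself here: resistor current equals diode current,
      # giving one nonlinear equation for the selected signal.
      eq = sp.Eq((vcc - vbe) / rb, isat * sp.exp(vbe / vt))
      vbe_val = sp.nsolve(eq, vbe, 0.5) # Newton-type root from an initial guess
      ib_val = (vcc - vbe_val) / rb     # remaining values now propagate forward
      print(vbe_val, ib_val)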
