
Keyword Search Result


38221-38240 hit (42756 hit)

  • An O(mn) Algorithm for Embedding Graphs into a 3-Page Book

    Miki SHIMABARA MIYAUCHI  

     
    PAPER-Graphs, Networks and Matroids

      Vol:
    E77-A No:3
      Page(s):
    521-526

    This paper studies the problem of embedding a graph into a book with nodes on a line along the spine of the book and edges on the pages in such a way that no edge crosses another. Atneosen, and also Bernhart and Kainen, have shown that every graph can be embedded into a 3-page book when each edge is allowed to be embedded in more than one page. The time complexity of Bernhart and Kainen's method is Ω(ν(G)), where ν(G) is the crossing number of a graph G. A new O(mn) algorithm is derived in this paper for embedding a graph G=(V, E), where m=|E| and n=|V|. The number of points at which edges cross over the spine when embedding a complete graph into a 3-page book is also investigated.

  • Stochastic Gradient Algorithms with a Gradient-Adaptive and Limited Step-Size

    Akihiko SUGIYAMA  

     
    PAPER-Adaptive Signal Processing

      Vol:
    E77-A No:3
      Page(s):
    534-538

    This paper proposes new algorithms for adaptive FIR filters. The proposed algorithms provide both fast convergence and small final misadjustment with an adaptive step size, even when the error is contaminated by interference. The basic algorithm pays special attention to this interference: to enhance robustness, it imposes a special limit on the increment/decrement of the step size, and the limit itself is varied according to the step size. The basic algorithm is then extended for application to nonstationary signals. Simulation results with white signals show that the final misadjustment is reduced by up to 22 dB under severe observation noise at a negligible expense in convergence speed. An echo canceler simulation with a real speech signal demonstrates its potential for nonstationary signals.
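    The abstract only outlines the mechanism, so the following is a minimal, hypothetical sketch of the idea in Python: an LMS-type FIR adaptive filter whose step size is adapted from the error and whose increment is clipped, with the clip level tied to the current step size. The specific update rule, constants, and function name are assumptions, not the paper's algorithm.

    ```python
    # Illustrative sketch (not the paper's exact algorithm): an LMS-type FIR
    # adaptive filter with a gradient-adaptive, limited step size.
    import numpy as np

    def adaptive_lms(x, d, taps=32, mu0=0.01, rho=1e-4,
                     delta_max=1e-3, mu_min=1e-4, mu_max=0.1):
        """x: input signal, d: desired signal (possibly contaminated by interference)."""
        w = np.zeros(taps)              # FIR filter coefficients
        mu = mu0                        # step size, adapted over time
        e = np.zeros(len(x))
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]     # current input vector
            e[n] = d[n] - w @ u         # error signal
            # Step-size update: grow mu when successive errors are correlated,
            # shrink it otherwise; the increment is clipped, and the clip level
            # is tied to the current mu (hypothetical choice).
            delta = rho * e[n] * e[n - 1]
            limit = min(delta_max, 0.1 * mu)
            mu = float(np.clip(mu + np.clip(delta, -limit, limit), mu_min, mu_max))
            w += mu * e[n] * u          # LMS coefficient update
        return w, e
    ```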

  • Design Rule Relaxation Approach for High-Density DRAMs

    Takanori SAEKI  Eiichiro KAKEHASHI  Hidemitu MORI  Hiroki KOGA  Kenji NODA  Mamoru FUJITA  Hiroshi SUGAWARA  Kyoichi NAGATA  Shozo NISHIMOTO  Tatsunori MUROTANI  

     
    PAPER-Device Technology

      Vol:
    E77-C No:3
      Page(s):
    406-415

    A design rule relaxation approach is one of the most important requirements for high-density DRAMs. The approach relaxes the design rules of elements relative to the memory cell size, and provides high-density DRAMs with minimal development of scaled-down MOS structures and fine-patterning lithography processes. This paper describes two design rule relaxation approaches: a close-packed folded (CPF) bit-line cell array layout and a Boosted Dual Word-Line scheme. The CPF cell array provides a 1.26 times wider active area pitch and a maximum 1.5 times wider isolation width. The Boosted Dual Word-Line scheme provides a 2n times wider 1st Al pitch on the memory cell array, double the word-line driver pitch, and 1.5 times larger design rules for 1st Al and for contacts under 1st Al. In particular, the wide design rules of the Boosted Dual Word-Line scheme provide several times the depth of focus (DOF) for 1st Al wiring, which allows several times higher storage nodes and larger capacitance for capacitor-over-bit-line (COB) stacked capacitor cells. These approaches are successfully implemented in a 4 Mb DRAM test chip with a 0.9 × 1.8 µm² memory cell.

  • Stochastic Interpolation Model Scheme and Its Application to Statistical Circuit Analysis

    Jin-Qin LU  Kimihiro OGAWA  Masayuki TAKAHASHI  Takehiko ADACHI  

     
    PAPER-Modeling and Simulation

      Vol:
    E77-A No:3
      Page(s):
    447-453

    IC performance simulation for statistical purposes is usually very time-consuming, since the scale and complexity of ICs have increased greatly in recent years. A common approach to reducing the simulation cost is to use simple models in place of actual circuit performance simulations. In this paper, a stochastic interpolation model (SIM) scheme is proposed which overcomes the drawbacks of existing polynomial-based approximation schemes. First, the dependence of the R²press statistic on a parameter of SIM is taken into account; maximizing R²press enables SIM to achieve the best approximation accuracy at the given sample points without any assumption about the sample data. Next, a sequential sampling strategy based on variance analysis is described to construct SIM effectively during its update process. In each update step, a new sample point with a maximal value of variance is added to the former set of sample points. The update process is continued until the desired approximation accuracy is reached, which eventually leads to a SIM realized with quite a small number of sample points. Finally, the coefficient of variance is introduced as an additional criterion, besides the R²press statistic, for checking the approximation accuracy. The effectiveness of the presented implementation scheme is demonstrated by several numerical examples as well as a statistical circuit analysis example.
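    The paper's stochastic interpolation model is not reproduced in the abstract, so the sketch below uses a generic Gaussian RBF interpolator as a stand-in to illustrate the two ingredients described: a leave-one-out (PRESS-style) R² that can be maximized over a model parameter, and a sequential sampling step that adds the candidate point where the model is most uncertain. All names, formulas and the variance proxy here are illustrative assumptions.

    ```python
    import numpy as np

    def rbf_fit_predict(X, y, Xq, eps):
        """Gaussian RBF interpolation: X (n, d) sample points, y (n,) responses, Xq (m, d) queries."""
        K = np.exp(-eps * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
        Kq = np.exp(-eps * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return Kq @ w

    def r2_press(X, y, eps):
        """Leave-one-out (PRESS-style) R^2; tune eps by maximizing this value."""
        press = 0.0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            pred = rbf_fit_predict(X[mask], y[mask], X[i:i + 1], eps)[0]
            press += (y[i] - pred) ** 2
        return 1.0 - press / ((y - y.mean()) ** 2).sum()

    def next_sample(X, y, candidates, eps):
        """Sequential sampling: pick the candidate where leave-one-out predictions
        disagree most (a simple proxy for the predictive variance in the abstract)."""
        preds = np.array([rbf_fit_predict(np.delete(X, i, 0), np.delete(y, i), candidates, eps)
                          for i in range(len(X))])
        return candidates[np.argmax(preds.var(axis=0))]
    ```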

  • Extraction of Glossiness of Curved Surfaces by the Use of Spatial Filter Simulating Retina Function

    Seiichi SERIKAWA  Teruo SHIMOMURA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:3
      Page(s):
    335-342

    Although gloss is ultimately a matter of human visual perception, several methods have been proposed for extracting the glossiness of curved surfaces without relying on human judgment. The glossiness defined in these methods, however, does not correspond with the psychological glossiness perceived by the human eye over the wide range from relatively low gloss to high gloss. In addition, the glossiness obtained by these methods changes remarkably when the curvature radius of a high-gloss object becomes larger than 10 mm, whereas psychological glossiness does not change. Furthermore, these methods are applicable only to spherical objects. A new method for extracting glossiness is proposed in this study. For the new definition of glossiness, a spatial filter which simulates the function of the human retina is utilized. The light intensity distribution of the curved object is convolved with the spatial filter. The maximum value Hmax of the convolved distribution has a high correlation with psychological glossiness Gph. From the relationship between Gph and Hmax, a new glossiness Gf is defined. The gloss-extraction equipment consists of a light source, a TV camera, an image processor and a personal computer. Cylinders with curvature radii of 3-30 mm are used as specimens in addition to spherical balls. For all specimens, a strong correlation, with a correlation coefficient of more than 0.97, is observed between Gf and Gph over a wide range. The new glossiness Gf conforms to Gph even if the curvature radius is more than 10 mm. Based on these findings, the method is shown to be useful for extracting the glossiness of spherical and cylindrical objects over a wide range from relatively low gloss to high gloss.
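    The paper's retina-simulating filter and the fitted Gph-Hmax relation are not given in the abstract; the sketch below, assuming a center-surround difference-of-Gaussians filter as the retina model and a linear Gph-Hmax fit, only shows the general flow of convolving the intensity distribution and taking its maximum Hmax.

    ```python
    # Illustrative sketch: a difference-of-Gaussians (center-surround) filter is a
    # common stand-in for retinal processing; the paper's exact filter and fitted
    # coefficients are not reproduced here.
    from scipy.ndimage import gaussian_filter

    def glossiness_feature(intensity, sigma_center=1.0, sigma_surround=3.0):
        """intensity: 2-D array of the measured light intensity distribution."""
        dog = gaussian_filter(intensity, sigma_center) - gaussian_filter(intensity, sigma_surround)
        return dog.max()    # Hmax: peak of the filtered (convolved) distribution

    def glossiness(intensity, a=1.0, b=0.0):
        """Map Hmax to a glossiness value Gf; a and b stand in for coefficients
        that would be fitted against psychological glossiness Gph."""
        return a * glossiness_feature(intensity) + b
    ```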

  • Influences of Magnesium and Zinc Contaminations on Dielectric Breakdown Strength of MOS Capacitors

    Makoto TAKIYAMA  Susumu OHTSUKA  Tadashi SAKON  Masaharu TACHIMORI  

     
    PAPER-Process Technology

      Vol:
    E77-C No:3
      Page(s):
    464-472

    The dielectric breakdown strength of thermally grown silicon dioxide films was studied for MOS capacitors fabricated on silicon wafers that were intentionally contaminated with magnesium and zinc. Most of the magnesium was detected in the oxide film after oxidation. Zinc, some of which evaporated from the wafer surface, was likewise detected only in the oxide film. The dielectric degradation is dominated by the formation of metal silicates such as Mg2SiO4 (forsterite) and Zn2SiO4 (willemite). The formation of metal silicates has no influence on the generation lifetime of minority carriers; however, it produces a flat-band voltage shift of less than 0.3 V and, in the case of zinc contamination, increases the density of deep surface states.

  • Highly Reliable Ultra-Thin Tantalum Oxide Capacitors for ULSI DRAMs

    Satoshi KAMIYAMA  Hiroshi SUZUKI  Pierre-Yves LESAICHERRE  Akihiko ISHITANI  

     
    PAPER-Device Technology

      Vol:
    E77-C No:3
      Page(s):
    379-384

    This paper describes the formation of ultra-thin tantalum oxide capacitors using rapid thermal nitridation (RTN) of the storage-node polycrystalline-silicon surface prior to low-pressure chemical vapor deposition of tantalum oxide from a penta-ethoxy-tantalum [Ta(OC2H5)5] and oxygen gas mixture. The films are annealed at 600-900°C in a dry O2 atmosphere. Densification of the as-deposited film by annealing in dry O2 is indispensable for the formation of highly reliable ultra-thin tantalum oxide capacitors. The RTN treatment reduces the SiO2-equivalent thickness and leakage current of the tantalum oxide film, and improves its time-dependent dielectric breakdown characteristics.

  • A Linear Time Pattern Matching Algorithm between a String and a Tree

    Tatsuya AKUTSU  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:3
      Page(s):
    281-287

    This paper presents a linear time algorithm for testing whether or not there is a path <v1, ..., vm> of an undirected tree T (|V(T)| = n) that coincides with a string s = s1...sm (i.e., label(v1)...label(vm) = s1...sm). Since any path of the tree is allowed, linear time substring matching algorithms cannot be applied directly, and a new method is developed. In the algorithm, O(n/m) vertices are selected from V(T) such that any path of length more than m/2 must contain at least one of the selected vertices. A search is performed using the selected vertices as 'bases', and two tables of size O(m) are constructed for each of the selected vertices. A suffix tree, a well-known data structure in string matching, is used effectively in the algorithm: from each of the selected vertices, a search is performed by traversing the suffix tree associated with s. Although the size of the alphabet is assumed to be bounded by a constant in this paper, the algorithm can be applied to the case of unbounded alphabets by increasing the time complexity to O(n log m).
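    To make the problem statement concrete, here is a naive brute-force check (not the paper's linear-time algorithm with selected base vertices and a suffix tree): from every vertex it tries to extend a path whose labels spell s, exploiting the fact that in a tree any walk that never immediately backtracks is a simple path.

    ```python
    # Naive illustration of the problem only; the data representation is assumed.
    def tree_has_path(adj, label, s):
        """adj: {v: set of neighbours}, label: {v: char}; return True if some path
        v1..vm in the tree satisfies label(v1)...label(vm) == s."""
        m = len(s)

        def extend(v, i, prev):
            if label[v] != s[i]:
                return False
            if i == m - 1:
                return True
            return any(extend(u, i + 1, v) for u in adj[v] if u != prev)

        return any(extend(v, 0, None) for v in adj)

    # Example: the path 1-2-4 spells "abc".
    adj = {1: {2}, 2: {1, 3, 4}, 3: {2}, 4: {2}}
    label = {1: 'a', 2: 'b', 3: 'a', 4: 'c'}
    print(tree_has_path(adj, label, "abc"))   # True
    ```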

  • LAN Internetworking through Broadband ISDN

    Masayuki MURATA  Hideo MIYAHARA  

     
    INVITED PAPER

      Vol:
    E77-B No:3
      Page(s):
    294-305

    A local area network (LAN) can now provide high-speed data communications in a local environment to establish distributed processing among personal computers and workstations, and the need for interconnecting geographically distributed LANs is naturally arising. Asynchronous Transfer Mode (ATM) technology has been widely recognized as a promising way to provide high-speed wide area networks (WANs) for the Broadband Integrated Services Digital Network (B-ISDN), and commercial service offerings are expected in the near future. The ATM network appears capable of serving as a backbone for interconnecting LANs, and LAN interconnection is expected to be the first service in ATM networks. However, some technical challenges remain. One of the main difficulties in LAN interconnection is the support of connectionless traffic by the ATM network, which is basically a connection-oriented network; another is how to achieve very high-speed data transmission over the ATM network. In this paper, we first discuss a LAN internetworking methodology based on current technology. Then, recent deployments of LAN interconnection methods through B-ISDN are reviewed.

  • Broadband Communication Network Architecture for Distributed Computing Environments

    Akira CHUGO  Kazuo SAKAGAWA  Teruhisa NAKAMURA  Jun OGAWA  

     
    PAPER

      Vol:
    E77-B No:3
      Page(s):
    343-350

    It is important for distributed computing environments that communication networks be transparent to applications, so that applications can make the best use of computer resources. To realize network transparency, communication platforms which support distributed computing environments should have a system configuration resembling an extension of a workstation's internal bus. Such communication platforms require high-speed communication paths, the ability to handle different transmission speeds, high reliability, and scalability. This paper proposes a broadband distributed data network which satisfies these requirements and provides a distributed computing environment. Our system uses basic nodes called ATM-HUBs and ATM-Gateways (ATM-GWs) as its central components. The nodes consist of cell switch modules, which can be built up from building blocks, ATM interface modules, and other functional modules. The switch module is connected to the functional modules through a unified interface. The ATM-HUB in particular has conventional LAN interface modules. Using the conventional LAN interface and ATM interface modules in an ATM-HUB, a wide variety of terminals, including conventional LAN terminals and ATM terminals, can be accommodated, offering users flexibility in communication modes. Furthermore, the use of star wiring around the ATM-HUB and media access control (MAC) address routing gives a high transfer rate, comparable to the speed of the physical transmission line, for communication between ATM terminals or between conventional LAN terminals.

  • Frequency and Time Division Multiple Access with Demand-Assignment Using Multicarrier Modulation for Indoor Wireless Communications Systems

    Yoshiyuki KINUGAWA  Kazuya SATO  Minoru OKADA  Shinsuke HARA  Norihiko MORINAGA  

     
    PAPER

      Vol:
    E77-B No:3
      Page(s):
    396-403

    To construct a high-capacity, highly reliable indoor wireless communications system, it is essential to design modulation/demodulation, coding and access schemes with high and variable data rate transmission capabilities that meet the technical requirements inherent in wireless communications, i.e., high frequency utilization efficiency and robustness against fading. In this paper, we propose frequency and time division multiple access with demand assignment (FTDMA/DA) using multicarrier modulation as a frequency- and time-synchronous answer to these requirements, and analyze the performance of the FTDMA/DA system, taking account of the teletraffic characteristics of multimedia information sources.

  • A Circuit Partitioning Approach for Parallel Circuit Simulation

    Tetsuro KAGE  Fumiyo KAWAFUJI  Junichi NIITSUMA  

     
    PAPER-Modeling and Simulation

      Vol:
    E77-A No:3
      Page(s):
    461-466

    We have studied a circuit partitioning approach from the viewpoint of parallel circuit simulation on a MIMD parallel computer. In parallel circuit simulation, a circuit is partitioned into equally sized subcircuits while minimizing the number of interconnection nodes; in addition, the circuit partitioning time should be short compared with the total simulation time. From a breakdown of circuit simulation time, we found that balancing the subcircuits is critical at low degrees of parallelism, whereas minimizing the number of interconnection nodes is critical at high degrees of parallelism. Our circuit partitioning approach consists of four steps: grouping transistors, initially partitioning the transistor groups, minimizing the number of interconnection nodes, and balancing the subcircuits. It is based on an algorithmic approach and can directly control the trade-off between balancing the subcircuits and minimizing the interconnection nodes by adjusting its parameters. We partitioned a test circuit with 3277 transistors into 4, 9, ... , 64 subcircuits, and ran parallel simulations using PARACS, our parallel circuit simulator, on an AP1000 parallel computer. The circuit partitioning time was short enough, less than 3 percent of the total simulation time. The highest performance of parallel analysis using 49 processors was 16 times that of a single processor, and that of the total simulation was 9 times.
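    As an illustration of the trade-off the abstract describes, the hypothetical cost function below weights subcircuit-size imbalance against the number of interconnection nodes (circuit nodes touched by more than one subcircuit); the weight plays the role of the adjustable parameters mentioned, but the formula itself is an assumption, not the paper's.

    ```python
    # Illustrative partition cost: balance vs. interconnection nodes.
    def partition_cost(subcircuits, node_of, alpha=1.0):
        """subcircuits: list of lists of transistor ids;
           node_of: {transistor id: set of circuit nodes it touches};
           alpha: weight controlling the balance/interconnect trade-off."""
        sizes = [len(sc) for sc in subcircuits]
        imbalance = max(sizes) - min(sizes)
        node_users = {}
        for k, sc in enumerate(subcircuits):
            for t in sc:
                for nd in node_of[t]:
                    node_users.setdefault(nd, set()).add(k)
        interconnect = sum(1 for users in node_users.values() if len(users) > 1)
        return alpha * imbalance + interconnect
    ```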

  • Mixed Mode Circuit Simulation Using Dynamic Network Separation and Selective Trace

    Masakatsu NISHIGAKI  Nobuyuki TANAKA  Hideki ASAI  

     
    PAPER-Modeling and Simulation

      Vol:
    E77-A No:3
      Page(s):
    454-460

    For efficient circuit simulation, several direct/relaxation-based mixed mode simulation techniques have been studied. This paper proposes combining selective trace, which is well known in logic simulation, with dynamic network separation. In the selective trace method, the time points to be analyzed are selected for each subcircuit. Since the separation technique enables each subcircuit to be analyzed independently, it becomes possible to skip solving latent subcircuits in accordance with selective trace. Selecting the time points according to the activity of each subcircuit is analogous to the multirate numerical integration technique used in the waveform relaxation algorithm.
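    A minimal sketch of the selective-trace idea over separated subcircuits, under assumed interfaces (the solver and the latency test are placeholders, not the paper's method): only active subcircuits are solved at each time point, and a subcircuit whose boundary values change wakes up its neighbours, so latent subcircuits are skipped.

    ```python
    # Event-driven outer loop over separated subcircuits (illustrative only).
    def simulate(subcircuits, neighbours, solve, time_points, tol=1e-6):
        active = set(range(len(subcircuits)))          # everything active at the start
        for t in time_points:
            next_active = set()
            for k in sorted(active):
                change = solve(subcircuits[k], t)      # returns max boundary-value change
                if change > tol:                       # subcircuit is not latent
                    next_active.add(k)
                    next_active.update(neighbours[k])  # events propagate to neighbours
            active = next_active                       # latent subcircuits are skipped
        return subcircuits
    ```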

  • Datagram Delivery in an ATM-Internet

    Hiroshi ESAKI  Yoshiyuki TSUDA  Takeshi SAITO  Shigeyasu NATSUBORI  

     
    PAPER

      Vol:
    E77-B No:3
      Page(s):
    314-326

    This paper proposes a datagram delivery (class D service) architecture in an ATM-Internet, i.e., a network interconnecting ATM-LANs through Inter-Working Units (IWUs). A fast datagram delivery system is provided through the following techniques. Datagram delivery to the destination terminal is performed by a datagram delivery server, the so-called CLS, located in the ATM-LAN to which the destination terminal belongs. Each CLS manages only the addresses of the terminals belonging to its own ATM-LAN. The cells belonging to a given datagram are transferred through a single (seamless) ATM connection from the source terminal to the CLS in the ATM-LAN to which the destination terminal belongs. The source terminal resolves only the access point address of the ATM-LAN to which the destination terminal belongs when it submits the cells to the network to transfer the datagram. The proposed datagram delivery architecture can easily be applied to ATM-LAN systems based on a VPI routing architecture. The number of ATM connections required to provide datagram delivery with the proposed architecture is less than 1.0% of the ATM connections that the ATM-Internet can provide. Also, the address space required at the UNI to provide datagram delivery is less than 1.0% of the UNI address space available for use as ATM connection identifiers.

  • Identification of the Particle Source in LSI Manufacturing Process Equipment

    Yoshimasa TAKII  Nobuo AOI  Yuichi HIROFUJI  

     
    PAPER-Process Technology

      Vol:
    E77-C No:3
      Page(s):
    486-491

    Today, the sources of defects in LSI devices lie mainly in the process equipment. Particles generated in this equipment are introduced onto the wafer and form defects that result in functional failures of LSI devices. Reducing these particles is therefore required for increasing production yield and productivity, and it is important to identify the particle source in the equipment. In this study, we discuss two new methods for identifying this source in equipment used on the production line. The key points are to estimate particle generation quickly and accurately, and to minimize long equipment stoppages requiring disassembly. First, we describe the "particle distribution analysis method", giving a procedure for expressing the particle distribution mathematically. We applied this method to our etching equipment and could identify the particle source without stopping the equipment. Secondly, we describe an "in-situ particle monitoring method" and apply it to our AP-CVD equipment. As a result, the main particle source of this equipment and a procedure for decreasing these particles became clear. With this method, we could estimate particle generation in real time during processing without stopping the equipment. Thus, both methods presented in this study can estimate particle generation and identify the particle source quickly and accurately. Furthermore, they require neither long stoppages of the process equipment nor interruption of the production line. These methods are therefore concluded to be very useful and effective in LSI manufacturing processes.

  • An Optimal Time for Software Testing under the User's Requirement of Failure-Free Demonstration before Release

    Byung Chul CHO  Kyung Soo PARK  

     
    PAPER-Reliability, Availability and Vulnerability

      Vol:
    E77-A No:3
      Page(s):
    563-570

    A new approach to the problem of optimal software testing time is described. Most models implicitly assume that testing is terminated at the end of a prescribed period of time without the user's approval, which means that the release time and the in-service reliability are determined unilaterally by the developer. If the software developer also uses and maintains the software, this assumption is appropriate. However, it may be inappropriate if software requiring more stringent reliability is developed by a second party on a contract basis; in this case, the time of release is usually determined with the user's approval. To overcome the weaknesses of the assumption, a two-stage testing procedure with a failure-free release policy is proposed. The software, after being tested by the developer for some time (in-house testing), is transferred to acceptance testing performed jointly with the user. During the acceptance testing, it is released when an interval of τ units of time specified by the user is observed to be failure-free for the first time. The policy may be attractive to a user because the user can determine the time of release and extend the testing time by increasing τ. A software cost model for the policy is developed. For the software developer, an optimal in-house testing time minimizing the software cost, and various quantities of interest, such as the expected duration of acceptance testing, are derived based on the Jelinski-Moranda software reliability model. Finally, numerical examples are shown to illustrate the results.
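    For reference, the Jelinski-Moranda model on which the derivations build is commonly written as follows; this is the standard form of that model, not the paper's cost model, and the notation is assumed here.

    ```latex
    % Jelinski--Moranda model (standard form): N initial faults, each with identical
    % failure rate \phi; after i-1 faults have been found and removed, the time to
    % the next failure is exponential with hazard rate
    \lambda_i = \phi\,(N - i + 1), \qquad i = 1, 2, \dots, N .
    ```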

  • Recovered Bounds for the Solution to the Discrete Lyapunov Matrix Equation

    Takehiro MORI  

     
    LETTER-Control and Computing

      Vol:
    E77-A No:3
      Page(s):
    571-572

    For a discrete Lyapunov matrix equation, we present another such equation that shares its solution with the original one. This renders some existing lower bounds for measures of the size of the solution meaningful in cases where they would otherwise yield only trivial bounds. A generalization of this result is suggested.
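    For reference, the discrete Lyapunov matrix equation in question is commonly written as below; the notation and the positive-definite right-hand side are assumed here, since the letter's exact formulation is not shown in the abstract.

    ```latex
    % Discrete(-time) Lyapunov matrix equation (standard form, notation assumed):
    A^{\mathsf T} P A - P + Q = 0, \qquad Q = Q^{\mathsf T} \succ 0 ,
    % where A is the system matrix and P = P^{\mathsf T} is the solution whose
    % size the lower bounds estimate.
    ```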

  • Enhancement of Defocus Characteristics with Intermediate Phase Interference in Phase Shift Method

    Hiroshi OHTSUKA  Toshio ONODERA  Kazuyuki KUWAHARA  Takashi TAGUCHI  

     
    PAPER-Process Technology

      Vol:
    E77-C No:3
      Page(s):
    438-444

    A new phase shift lithography method has been developed that allows different integrated circuit features to be focused on different optical planes that conform to the wafer surface topography. In principle, each pattern in the circuit has its own unique focal plane. The direction and magnitude of each focus shift is determined by the design of the shifter patterns. This method is applicable for use with conventional opaque mask patterns and unattenuated phase shift patterns. The characteristics of this multiple-focus-plane technique have been evaluated experimentally and confirmed theoretically through mathematical modeling using TCC optical imaging theory. Experiments were conducted using i-line positive resist processes for different phase-shift patterns. This paper discusses the effects of changes in phase shift and recommends practical mask design approaches.

  • Hot Carrier Evaluation of TFT by Emission Microscopy

    Junko KOMORI  Jun-ichi MITSUHASHI  Shigenobu MAEDA  

     
    PAPER-Device Technology

      Vol:
    E77-C No:3
      Page(s):
    367-372

    A new technique for evaluating hot carrier degradation is proposed and applied to the practical evaluation of p-channel polycrystalline silicon thin film transistors (TFTs). The proposed technique introduces emission microscopy, which is particularly effective for evaluating TFT devices. We have developed an automatic measurement system in which measurement of the electrical characteristics and monitoring of the photo emission are performed simultaneously. Using this system, we have identified the dominant mechanism of hot carrier degradation in TFTs and evaluated the effect of plasma hydrogenation on hot carrier degradation.

  • Modified Numerical Technique for Beam Propagation Method Based on the Galerkin's Technique

    Guosheng PU  Tetsuya MIZUMOTO  Yoshiyuki NAITO  

     
    PAPER-Opto-Electronics

      Vol:
    E77-C No:3
      Page(s):
    510-514

    A modified beam propagation method based on the Galerkin technique (FE-BPM) has been implemented and applied to the analysis of optical beam propagation in a tapered dielectric waveguide. It is based on a new calculation procedure using non-uniform sampling spacings along the transverse coordinate. Comparison with a conventional FE-BPM shows a definite saving in computation time. The differences in the propagation field and the mean square power obtained with the proposed FE-BPM are discussed in comparison with the conventional FE-BPM.
