
Keyword Search Result

[Keyword] Y (22683 hits)

Results 22381-22400 of 22683

  • AC Resistivity and Power Loss of Mn-Zn Ferrites

    Seiichi YAMADA  Etsuo OTSUKI  Tsutomu OTSUKA  

     
    PAPER

    Vol: E75-B No:11  Page(s): 1192-1198

    AC resistivity and power loss values for Mn-Zn ferrite material have been investigated by electrical and magnetic measurements. The AC resistivity shows an inductive dependence on frequency for low DC-resistivity samples, and for highly DC-resistive ones at high temperature, while a capacitive dependence on frequency was observed for the highly resistive materials at room temperature. These phenomena are interpreted through the dependence of the AC resistivity on the DC resistivity, the complex permeability, and the complex permittivity. The dependence of the power loss on the DC resistivity, temperature, and frequency was also examined by analyzing the power loss terms. Dividing the power loss into hysteresis loss and eddy current loss, the frequency dependence of the eddy current loss was found to vary with the magnitude of the DC resistivity as follows: the eddy current loss of low-resistivity materials depends on the DC resistivity, whereas the eddy current loss of highly resistive materials is determined by the AC resistivity, to which dielectric loss contributes.
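    For orientation, the loss separation referred to above is conventionally written as follows; this is the generic textbook decomposition for ferrite cores, not necessarily the exact model fitted in the paper.

    ```latex
    \[
    P_{\mathrm{total}} = P_{\mathrm{h}} + P_{\mathrm{e}}, \qquad
    P_{\mathrm{h}} \propto f, \qquad
    P_{\mathrm{e}} \propto \frac{f^{2} B_{\mathrm{m}}^{2}}{\rho_{\mathrm{eff}}}
    \]
    ```

    Here f is the excitation frequency, B_m the flux-density amplitude, and rho_eff the effective resistivity limiting the eddy currents (the DC resistivity for low-resistivity materials, the AC resistivity for highly resistive ones, as the abstract states).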

  • A Newton Algorithm for Computing the Capacity of Discrete Memoryless Channels

    Kiyotaka YAMAMURA  

     
    PAPER-Numerical Analysis and Self-Validation

    Vol: E75-A No:11  Page(s): 1583-1589

    This paper presents an efficient algorithm for computing the capacity of discrete memoryless channels. The algorithm uses Newton's method, which is known to be quadratically convergent. First, a system of nonlinear equations termed the Kuhn-Tucker equations is formulated, which has the capacity as a solution. Then Newton's method is applied to the Kuhn-Tucker equations. Since Newton's method does not guarantee global convergence, a continuation method is also introduced. It is shown that the continuation method works well and that convergence of the Newton algorithm is guaranteed. The effectiveness of the algorithm is verified by numerical examples. Since the proposed algorithm converges locally at a quadratic rate, it is advantageous when a numerical solution of high accuracy is required.
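    For reference, the Kuhn-Tucker conditions mentioned in the abstract take the following standard form for a discrete memoryless channel with transition probabilities p(y|x) and input distribution q; the notation here is generic and not necessarily that of the paper.

    ```latex
    \[
    I(x; q) = \sum_{y} p(y \mid x)\,\log \frac{p(y \mid x)}{\sum_{x'} q(x')\, p(y \mid x')},
    \qquad
    \begin{cases}
    I(x; q^{*}) = C, & q^{*}(x) > 0, \\
    I(x; q^{*}) \le C, & q^{*}(x) = 0,
    \end{cases}
    \]
    ```

    where q* is the capacity-achieving input distribution and C the channel capacity; according to the abstract, the paper applies Newton's method to this system of equations.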

  • Array Structure Using Basic Wiring Channels for WSI Hypercube

    Hideo ITO   

     
    PAPER-Fault Tolerant Computing

    Vol: E75-D No:6  Page(s): 884-893

    A new design method is proposed for realizing a hypercube network (HC) structured multicomputer system on a wafer using wafer-scale integration (WSI). The probability that an HC can be constructed on a wafer, called the construction probability, is higher in this method than in the conventional method. We adopt the FUSS method for processor (PE) address allocation in our design because it has a high success probability in the allocation. Even if a design makes the address allocation success probability higher, it is of no use if it also increases either the maximum wiring length between PEs or the array size (wiring area). A new wiring channel structure capable of connecting PEs on a wafer is proposed in this paper, based on a channel called a basic channel. A one-dimensional-array sub-HC row network (RN) or column network (CN) can be constructed using the basic channel. A sub-HC construction method, which embeds wirings into the basic channel, is also proposed. It requires almost the same wiring width as the conventional method, but has the advantage that the maximum wiring length between PEs can be about half that of the conventional method. If PEs must be shifted because of PE defects, they can be shifted and connected to the basic channel using additional PE shifting channels, and an RN or CN can then be constructed. The maximum wiring length between PEs, the array size, and the construction probability are also derived, and it is shown that the proposed design is superior to the conventional one.

  • Eliminating Redundant Components While Building Solid Models by Surface Points Evaluation

    Chun YANG  Shan Jun ZHANG  Toshio KAWASHIMA  Yoshinao AOKI  

     
    PAPER-Computer Aided Design (CAD)

    Vol: E75-A No:11  Page(s): 1561-1569

    Existing solid models often contain redundant primitives and null blocks, which both slow down the rendering process and make it complex. There has been recent progress toward solving this problem, but existing modeling schemes cannot eliminate all of the redundancies, especially the null blocks, from solid models. This paper proposes a technique that can eliminate such redundancies. By dividing a primitive into a set of dispersed surface points, a new primitive representation is obtained. Sample segments of the primitive or the object are used to locate the composition position, which prevents null primitives from being generated. By extracting the set of geometric shape points corresponding to a common acting area, the volume boundary of a primitive or an object is evaluated using only Boolean set operations, and the null blocks can then be identified from the volume boundary. The resulting solid model has no redundancies and is suitable for fast rendering of the image.

  • Discrete Time Modeling and Digital Signal Processing for a Parameter Estimation of Room Acoustic Systems with Noisy Stochastic Input

    Mitsuo OHTA  Noboru NAKASAKO  Kazutatsu HATAKEYAMA  

     
    PAPER

    Vol: E75-A No:11  Page(s): 1460-1467

    This paper describes a new approach to dynamic parameter estimation for an actual room acoustic system, in the practical case where the input excitation, rather than the output observation as is usually assumed, is contaminated by background noise. The room acoustic system is first formulated as a discrete time model, taking into consideration the original standpoint defining the system parameter and the existence of the background noise contaminating the input excitation. Then, a recursive estimation algorithm for the reverberation time of a room is derived from a Bayesian viewpoint (based on the statistical information of the background noise and the instantaneously observed data); it is applicable to actual situations with non-Gaussian sound fluctuations, nonlinear observation, and background noise on the input. Finally, the theoretical result is confirmed experimentally by applying it to an actual reverberation time estimation problem.

  • Waveform Estimation of Sound Sources in a Reverberant Environment with Inverse Filters

    Kiyohito FUJII  Masato ABE  Toshio SONE  

     
    PAPER

    Vol: E75-A No:11  Page(s): 1484-1492

    This paper proposes a method to estimate the waveform of a specified sound source in a noisy and reverberant environment using a sensor array. Previously, we proposed an iterative method to estimate the waveform; however, in that method the effect of reflected sound is reduced only by a factor of 1/M, where M is the number of microphones. Therefore, to address the reverberation problem, we propose a new method using inverse filters of the transfer functions from the sound sources to each microphone. First, the transfer function from each sound source to each microphone is measured by the cross-spectrum technique, and each inverse filter is calculated by the QR method. The initial estimate of a source waveform is then the average of the inverse filter outputs. Since this waveform still contains the effects of the other sound sources, the iterative technique is adopted to estimate the waveform more precisely, reducing the effects of the other sources and of the reflected sound. Computer simulations and experiments were carried out, and the results show the effectiveness of the method.
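    The following Python sketch illustrates the inverse-filter-and-average step in the frequency domain; it substitutes a regularized spectral division for the paper's time-domain QR-based inverse filters, and the function and parameter names are illustrative only.

    ```python
    import numpy as np

    def estimate_source(mic_signals, impulse_responses, eps=1e-3):
        """Invert each measured source-to-microphone transfer function (with
        regularization) and average the M inverse-filtered microphone signals.
        A simplified stand-in for the paper's QR-based time-domain inverse
        filters; the iterative refinement step is not shown."""
        M, N = mic_signals.shape                    # M microphones, N samples
        X = np.fft.rfft(mic_signals, axis=1)        # microphone spectra
        H = np.fft.rfft(impulse_responses, n=N, axis=1)
        S_hat = np.mean(X * np.conj(H) / (np.abs(H) ** 2 + eps), axis=0)
        return np.fft.irfft(S_hat, n=N)             # initial waveform estimate
    ```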

  • A New Adaptive Algorithm Focused on the Convergence Characteristics by Colored Input Signal: Variable Tap Length LMS

    Tsuyoshi USAGAWA  Hideki MATSUO  Yuji MORITA  Masanao EBATA  

     
    PAPER

    Vol: E75-A No:11  Page(s): 1493-1499

    This paper proposes a new adaptive algorithm for FIR digital filters intended for acoustic echo cancellers and similar applications. Unlike an echo canceller for a transmission line, an acoustic echo canceller requires a large number of taps and must work properly while driven by a colored input signal. By controlling the filter tap length and updating the filter coefficients multiple times during a single sampling interval, the proposed algorithm improves the convergence characteristics of the adaptation even when the input signal is colored. The algorithm is named VT-LMS, for variable tap length LMS. Simulation results show the effectiveness of the proposed algorithm not only for white noise but also for colored input signals such as speech. The VT-LMS algorithm has better convergence characteristics with very little extra computational load compared to the conventional algorithm.
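    As a rough illustration of the variable-tap-length idea, here is a minimal normalized-LMS loop in which the active tap length grows during adaptation; the growth schedule and step size are toy choices, and the paper's actual VT-LMS control rule and its multiple coefficient updates per sampling interval are not reproduced here.

    ```python
    import numpy as np

    def lms_variable_taps(x, d, max_taps=256, start_taps=32, mu=0.5):
        """Normalized LMS with a growing active tap length (illustrative only)."""
        w = np.zeros(max_taps)
        n_taps = start_taps
        e = np.zeros(len(x))
        for n in range(max_taps, len(x)):
            u = x[n - n_taps:n][::-1]            # most recent n_taps inputs
            y = np.dot(w[:n_taps], u)            # filter output
            e[n] = d[n] - y                      # error signal
            norm = np.dot(u, u) + 1e-8
            w[:n_taps] += (mu / norm) * e[n] * u # NLMS update on active taps
            if n % 1000 == 0 and n_taps < max_taps:
                n_taps = min(max_taps, 2 * n_taps)   # toy tap-length schedule
        return w[:n_taps], e
    ```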

  • Designing Multi-Level Quorum Schemes for Highly Replicated Data

    Bernd FREISLEBEN  Hans-Henning KOCH  Oliver THEEL  

     
    PAPER

    Vol: E75-D No:6  Page(s): 763-770

    In this paper we present and analyze multi-level quorum schemes for maintaining the consistency of replicated data in the presence of concurrency and failures in a large distributed environment. The multi-level quorum method operates on a logical hierarchy of the nodes in the network and applies well known flat voting algorithms for replicated data concurrency control in a layered fashion. We show how the number of hierarchy levels, the number of logical entities per level and the voting algorithms used on each level affect the costs and the degree of availability associated with a wide range of multi-level quorum schemes. The results of the analysis are used to provide guidelines for designing the most suitable multi-level quorum strategy for a given application scenario. Comparative performance measurements in a simulated network are presented to illustrate the properties of multi-level approaches when some of the assumptions of the analytical investigation do not hold.
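    To make the layered voting concrete, here is a minimal sketch of a multi-level quorum check in which plain majority voting is applied recursively over a logical tree of nodes; the tree layout, the data structures, and the use of majority at every level are illustrative assumptions rather than the specific schemes analyzed in the paper.

    ```python
    def quorum_granted(node, votes):
        """A leaf (physical replica) grants if it voted yes; an inner logical
        entity grants if a majority of its children grant."""
        children = node.get("children")
        if not children:
            return votes.get(node["id"], False)
        return sum(quorum_granted(c, votes) for c in children) > len(children) // 2

    # usage: two logical groups of three replicas each
    tree = {"id": "root", "children": [
        {"id": f"g{g}", "children": [{"id": f"r{g}{i}"} for i in range(3)]}
        for g in range(2)]}
    votes = {"r00": True, "r01": True, "r10": True, "r11": True}
    print(quorum_granted(tree, votes))   # True: both groups reach a majority
    ```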

  • A Design Method of SFS and SCD Combinational Circuits

    Shin'ichi HATAKENAKA  Takashi NANYA  

     
    PAPER

    Vol: E75-D No:6  Page(s): 819-823

    Strongly Fault-Secure (SFS) circuits are known to achieve the TSC goal of producing a non-codeword as the first erroneous output due to a fault. Strongly Code-Disjoint (SCD) circuits always map non-codeword inputs to non-codeword outputs even in the presence of faults so long as the faults are undetectable. This paper presents a new generalized design method for the SFS and SCD realization of combinational circuits. The proposed design is simple, and always gives an SFS and SCD combinational circuit which implements any given logic function. The resulting SFS/SCD circuits can be connected in cascade with each other to construct a larger SFS/SCD circuit if each interface is fully exercised.

  • A Tool for Computing the Output Code Spaces and Verifying the Self-Checking Properties in Complex Self-checking Systems

    Makhtar BOUDJIT  Michael NICOLAIDIS  

     
    PAPER

    Vol: E75-D No:6  Page(s): 824-834

    In complex self-checking systems, several blocks (i.e., functional blocks and checkers) are embedded. In order to check the self-checking properties of such blocks, we need to know the set of vectors they receive from the blocks feeding their inputs (i.e., the code word output spaces of the source blocks). In a complex system, computing these output spaces by exhaustive simulation of the system is intractable. In this paper we present a tool which performs this computation with low CPU time. Some other tools for verifying the self-checking properties of embedded blocks (such as the strongly fault secure property of embedded PLAs and the self-testing property of embedded checkers) have also been developed and evaluated experimentally.

  • Comparison of Aliasing Probability for Multiple MISRs and M-Stage MISRs with m Inputs

    Kazuhiko IWASAKI  Shou-Ping FENG  Toru FUJIWARA  Tadao KASAMI  

     
    PAPER

    Vol: E75-D No:6  Page(s): 835-841

    MISRs are widely used as signature circuits for VLSI built-in self-test. To improve the aliasing probability of MISRs, multiple MISRs and M-stage MISRs with m inputs are available, where M is greater than m. The aliasing probability as a function of the test length is analyzed for these compaction circuits over a binary symmetric channel. It is observed that the peak aliasing probability of double MISRs is less than that of M-stage MISRs with m inputs. It is also shown that the final aliasing probability for a multiple MISR with d MISRs is 2^{-dm}, and that for an M-stage MISR with m inputs it is 2^{-M} if it is characterized by a primitive polynomial.
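    Restated as formulas, the asymptotic results quoted above are the following; the numerical example is added here only to give a sense of scale and is not from the paper.

    ```latex
    \[
    \lim_{L \to \infty} P_{\mathrm{al}}^{(d \times m)} = 2^{-dm},
    \qquad
    \lim_{L \to \infty} P_{\mathrm{al}}^{(M)} = 2^{-M}
    \quad \text{(primitive characteristic polynomial)}
    \]
    ```

    So, for instance, d = 2 registers of m = 16 stages give 2^{-32} ≈ 2.3 × 10^{-10}.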

  • Derivation of a Parallel Bottom-Up Parser from a Sequential Parser

    Kazuko TAKAHASHI  

     
    PAPER-Software Theory

    Vol: E75-D No:6  Page(s): 852-860

    This paper describes the derivation of a parallel program from a nondeterministic sequential program using a bottom-up parser as an example. The derivation procedure consists of two stages: exploitation of AND-parallelism and exploitation of OR-parallelism. An interpreter of the sequential parser BUP is first transformed so that processes for the nodes in a parsing tree can run in parallel. Then, the resultant program is transformed so that a nondeterministic search of a parsing tree can be done in parallel. The former stage is performed by hand-simulation, and the latter is accomplished by the compiler of ANDOR-, which is an AND/OR parallel logic programming language. The program finally derived, written in KL1 (Kernel Language of the FGCS Project), achieves an all-solution search without side effects. The program generated corresponds to an interpreter of PAX, a revised parallel version of BUP. This correspondence shows that the derivation method proposed in this paper is effective for creating efficient parallel programs.

  • A New Indexing Technique for Nested Queries on Composite Objects

    Yong-Moo KWON  Yong-Jin PARK  

     
    PAPER-Databases

    Vol: E75-D No:6  Page(s): 861-872

    A new indexing technique for rapid evaluation of nested queries on composite objects is proposed, reducing the overall cost of retrieval and update. An extended B+ tree is introduced in which the object identifiers (OIDs) to be searched and the path information used to update index records are stored in the leaf nodes and subleaf nodes, respectively. In this method, the retrieval operation is applied only to the OIDs in the leaf nodes. The index records of both leaf and subleaf nodes are updated in such a way that the path information in the subleaf nodes and the OIDs in the leaf nodes are reorganized by deleting and inserting OIDs. The technique presented offers advantages over existing related indexing techniques in data reorganization and index allocation. In the proposed index record, the OIDs to be reorganized are always stored consecutively, so only the record directory needs to be updated when an entire page is removed. In addition, the proposed index can be allocated to a path of length greater than 3 without splitting the path. Comparisons with current indexing techniques under a variety of conditions show improved performance in cost, i.e., the total number of pages accessed for retrieval and update.

  • Improvement of Reverse Recovery Characteristic in Synchronous Rectifiers Using a Bipolar Transistor Driven by a Current Transformer

    Eiji SAKAI  Koosuke HARADA  

     
    PAPER

    Vol: E75-B No:11  Page(s): 1179-1185

    It has been reported that the efficiency of a low-voltage power supply is improved by replacing the diodes in the output stage with synchronous rectifiers (SRs). An SR consists of a bipolar junction transistor with a low saturation voltage and a current transformer. Although the SR has a low offset voltage, its reverse recovery characteristic is usually poor. In this paper, an RCD circuit which improves the reverse recovery characteristic of the SR is proposed. The circuit is simple, composed of a diode, a capacitor, and a resistor. An analysis and experimental results for the SR with the proposed RCD circuit are presented, and the optimum design of the RCD circuit for improving the reverse recovery characteristic of the SR is discussed.

  • Fault Tolerance Assurance Methodology of the SXO Operating System for Continuous Operation

    Hiroshi YOSHIDA  Hiroyuki SUZUKI  Kotaro OKAZAKI  

     
    PAPER

    Vol: E75-D No:6  Page(s): 797-803

    In developing the SXO operating system for the SURE SYSTEM 2000 continuous operation system, we aimed to achieve an unprecedented level of software and hardware fault tolerance. We devised a fault-tolerant architecture and various methodologies to ensure fault tolerance, and implemented these techniques systematically throughout operating system development. In the design stage, we developed a design methodology called the recovery process chart to verify that recovery mechanisms were complete. In the manufacturing stage, we applied the concept of critical routes to recovery and other processes essential to high dependability, and developed a method of finding critical routes in a recovery process chart. In the test stage, we added an artificial software fault injection mechanism to the operating system; it generates various reproducible errors at appropriate times and reduces the number of personnel needed for testing, making system reliability evaluation easier.

  • Guaranteed Storing of Limit Cycles into a Discrete-Time Asynchronous Neural Network

    Kenji NOWARA  Toshimichi SAITO  

     
    PAPER-Neural Networks

    Vol: E75-A No:11  Page(s): 1579-1582

    This article discusses a synthesis procedure for a discrete-time asynchronous neural network whose stored information is a limit cycle. The synthesis procedure uses a novel connection matrix and can be reduced to a linear equation. If all elements of the desired limit cycles are independent at each transition step, the equation can be solved and all desired limit cycles can be stored. In experiments, our procedure exhibits much better storing performance than previous ones.
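    A generic way to see the "reduce storage to a linear equation" idea is the pseudo-inverse construction below, which finds a weight matrix mapping each cycle state to its successor; it is only an illustration, not the paper's connection matrix, and it uses a synchronous update for simplicity whereas the paper treats an asynchronous network.

    ```python
    import numpy as np

    def store_limit_cycle(states):
        """Solve W x_t = x_{t+1} in the least-squares sense for a cyclic list
        of bipolar states; exact whenever the states are linearly independent."""
        X = np.array(states, dtype=float).T      # columns x_0, ..., x_{T-1}
        Y = np.roll(X, -1, axis=1)               # cyclic successors x_1, ..., x_0
        return Y @ np.linalg.pinv(X)

    # usage: a 3-state cycle of 4-neuron bipolar patterns (linearly independent)
    cycle = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1]]
    W = store_limit_cycle(cycle)
    x = np.array(cycle[0], dtype=float)
    for _ in range(3):
        x = np.sign(W @ x)                       # retraces the stored cycle
    print(x)                                     # back to the first pattern
    ```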

  • Modeling and Simulation of the Sliding Window Algorithm for Fault-Tolerant Clock Synchronization

    Manfred J. PFLUEGL  Douglas M. BLOUGH  

     
    PAPER

    Vol: E75-D No:6  Page(s): 792-796

    Synchronous clocks are an essential requirement for a variety of distributed system applications, many of which are safety-critical and require fault tolerance. In this paper, a general probabilistic clock synchronization model is presented. This model is uniformly probabilistic, incorporating random message delays, random clock drifts, and random fault occurrences, and it allows faults in any system component and of any type. A new Sliding Window Clock Synchronization Algorithm (SWA) providing increased fault tolerance is also proposed. The probabilistic model is used for an evaluation of SWA, which shows that SWA is capable of tolerating significantly more faults than other algorithms and that its synchronization tightness is as good as or better than that of other algorithms.
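    As a rough sketch of the sliding-window idea, the fusion step below slides a fixed-width window over the sorted remote clock readings, keeps the densest window, and averages it, discarding the remaining readings as suspect; this is a simplified reading of the approach, not the exact SWA defined in the paper.

    ```python
    def sliding_window_estimate(readings, width):
        """Average the largest cluster of readings that fits in a window of
        the given width (illustrative clock-fusion step)."""
        r = sorted(readings)
        best = []
        for i, lo in enumerate(r):
            window = [v for v in r[i:] if v <= lo + width]
            if len(window) > len(best):
                best = window
        return sum(best) / len(best)

    # usage: five remote clock readings, one of them faulty
    print(sliding_window_estimate([10.01, 10.02, 10.03, 10.05, 13.70], width=0.1))
    ```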

  • ULSI Technology Trends toward 256M/1G DRAMs

    Masahiro KASHIWAGI  

     
    INVITED PAPER

    Vol: E75-C No:11  Page(s): 1304-1312

    Looking ahead to the "256M/1G era" from the present, namely the last stage of 64M DRAM development, the process technologies will progress in a variety of ways. Some of them will remain mere extensions of the present ones, but others will undergo fundamental change, including in their technological constitution. Optical lithography will survive into the "256M/1G era", mainly through innovations in mask technology. The etching technologies will remain basically the same as the present ones, but will be much more refined; studies on plasma/radical-related surface reactions, however, will bring a variety of surface treatment technologies with new functions. The interconnection technologies will encounter various difficulties both in materials and in processing, and mechanical processing will become one of the ULSI processing technologies. The shallow junction technology will merge with metallization and epitaxial growth technology. Thin dielectrics will approach a critical situation, which may accelerate the change to three-dimensional device structures; correspondingly, the need for "vertical processing" will grow. Bonding SOI technology might overcome these increasing difficulties. Contamination control, meanwhile, will be the basis of these technology innovations and improvements, opening up a new technology field in addition to the conventional process technology fields.

  • Recursive Copy Networks for Large Multicast ATM Switches

    Shigeru SHIMAMOTO  Wen De ZHONG  Yoshikuni ONOZATO  Jaidev KANIYIL  

     
    PAPER-Switching and Communication Processing

    Vol: E75-B No:11  Page(s): 1208-1219

    This paper presents a new copy network architecture which employs the principle of recursive generation of copy cells. The proposed architecture achieves high utilization of the links and buffers of the copy network and preserves the cell sequence. The architecture lends itself to modularity, so that large multicast ATM switches can be fabricated by employing the proposed copy network. Two modular structures for realizing large multicast ATM switches are configured: one reduces the latency of unicast cells and of the master cell from which copies are made, and the other reduces the hardware overhead. The hardware of the functional elements of the copy network is the same as that of the functional elements of a modular point-to-point switch proposed earlier, resulting in modularity of the functional elements as well. Simulation studies show that the proposed copy network achieves high throughput and low cell loss probability, and that the required buffer sizes are small. The delay of cells is found to be very small for traffic loads of up to 90%.

  • Rate-Based Congestion Control in High Speed Packet-Switching Networks

    Hiroshi INAI  Yuji KAMICHIKA  Masayuki MURATA  Hideo MIYAHARA  

     
    PAPER-Communication Networks and Service

    Vol: E75-B No:11  Page(s): 1199-1207

    Rate-based congestion/flow control is a promising way to achieve high throughput in high-speed packet-switching networks. We consider a rate-based congestion control scheme aimed at obtaining high throughput and fair sharing of the communication resources. In the scheme, each intermediate node informs the source node of its congestion status. Two kinds of control packets are used for this purpose: one (a choke packet) throttles the rate, and the other (a loosen packet) allows the rate to increase. The source node initiates transmission at a low rate and increases the rate slowly to avoid a rapid build-up of packet queueing at an intermediate node. When the source node receives a choke packet, it decreases the rate rapidly to relieve congestion as soon as possible. Upon receipt of a loosen packet, the source node again increases the rate slowly. We develop a queueing model and investigate, via simulation, parameter settings that provide good performance. The increase and decrease parameters of the rate control function are first investigated under various load conditions. We then examine the effect of the queue-length threshold used to indicate congestion at the intermediate node; the numerical results indicate that this threshold should be small to obtain good performance. We finally introduce a technique which recognizes congestion accurately and keeps the queueing of packets at intermediate nodes at an acceptable level.
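    The source-side rate control described above can be sketched as follows; the increase and decrease parameters are illustrative placeholders, not the values studied in the paper.

    ```python
    def update_rate(rate, packet, r_min=0.1, r_max=10.0, alpha=0.05, beta=0.5):
        """Slowly increase the sending rate on a loosen packet, cut it sharply
        on a choke packet, and keep it within [r_min, r_max]."""
        if packet == "choke":
            rate *= beta          # rapid decrease to relieve congestion
        elif packet == "loosen":
            rate += alpha         # slow (additive) increase
        return min(max(rate, r_min), r_max)

    # usage: start at a low rate and react to control packets from the network
    rate = 0.5
    for pkt in ["loosen", "loosen", "choke", "loosen"]:
        rate = update_rate(rate, pkt)
    ```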
