Sadao IDA Atsumi KURAMOCHI Hiroshi WATANABE Mitsuhiko KOYAMA Kazutoshi GOTO
This paper describes mixed gas systems of SO2 and NO2, which are the essential corrosive gases in the ordinary atmospheric environment of electronic parts. It describes the corrosion product compositions and the behavior of copper in the mixed and separate gases. The results of our tests show the following: (1) The weight of corrosion products with the SO2-NO2 mixed gas approximates the sum of those with the individual gases; however, the corrosion products of SO2 are affected by NO2. (2) Tests with the SO2-NO2 mixed gas closely simulate tests of electronic parts in the ordinary atmospheric environment.
Mitsugi SAITA Tatsuo YOSHIE Katsumi WATANABE Kiyoshi MURAMORI
In 1963, the authors began to develop a tuning circuit (hereafter referred to as the 'circuit') consisting of an inductor, fixed capacitors and a variable capacitor. The circuit required very high accuracy and stability: the aging influence on the resonant frequency needed to be kept within Δf/f0 = 0.12% over 20 years. When we started, no methodology was available for designing such a long-term stable circuit, so we reinvestigated our previous studies of aging characteristics and formed a design concept. We designed the circuit bearing in mind that an inductor is subject to natural and stress demagnetization (as indicated by disaccommodation), and assumed that a capacitor changes its characteristics linearly over a logarithmic scale of time. (This assumption was based on short-term test results derived from previous studies.) We measured the aging characteristics of the circuits at room temperature for 20 years, starting in 1966. The measurement results from the 20-year study revealed that the aging characteristics predicted by the design concept were reasonably accurate.
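The log-linear aging assumption above lends itself to a simple extrapolation: fit short-term drift measurements against log time and project to the 20-year horizon. The sketch below is an illustration of that idea only; the function name, sample times, and drift values are hypothetical, not the paper's data.

```python
import math

def extrapolate_aging(times_h, drifts_ppm, target_h):
    """Least-squares fit drift = a + b*log10(t), then extrapolate to target_h.

    Implements the abstract's assumption that a characteristic drifts
    linearly on a logarithmic time scale.  Inputs are hypothetical
    measurement times (hours) and drifts (ppm), not measured data.
    """
    xs = [math.log10(t) for t in times_h]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(drifts_ppm) / n
    # slope and intercept of the least-squares line in log-time
    b = sum((x - mx) * (y - my) for x, y in zip(xs, drifts_ppm)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a + b * math.log10(target_h)
```

For example, drift samples taken at 10, 100, and 1000 hours can be projected to roughly 175,000 hours (20 years) with a single call.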
This paper is concerned with the continuous relation between models of the plant and the predicted performances of systems designed on the basis of those models. To state the problem more precisely, let P be the transfer matrix of a plant model, and let A be the transfer matrix of interest of the designed system, which is regarded as a performance measure for evaluating the designed responses. A depends upon P and is written as A=A(P). From the practical point of view, it is necessary that the function A(P) be continuous with respect to P. In this paper we consider the linear quadratic optimal servosystem with integrators (LQI) scheme as the design methodology, and prove that A(P) depends continuously on the plant transfer matrix P if the topology of the family of plant models is the graph topology. A numerical example is given to illustrate the result.
Jun TAKEDA Shin-ichi URAMOTO Masahiko YOSHIMOTO
It is important for LSI system designers to estimate computational errors when designing LSI's for numeric computations. Both for predicting the errors at an early stage of design and for choosing a proper hardware configuration to achieve a target performance, it is desirable that the errors can be estimated from a minimum of parameters. This paper presents a theoretical error analysis of multiply-accumulation implemented by distributed arithmetic (DA) and proposes a new method for estimating the mean-squared error. DA is a method of implementing the multiply-accumulation that is defined as an inner product of an input vector and a fixed coefficient vector. Using a ROM which stores partial products, DA calculates the output by accumulating the partial products bit-serially. As DA uses no parallel multipliers, it needs a smaller chip area than methods using parallel multipliers. Thus DA is effectively utilized for the LSI implementation of a digital signal processing system which requires the multiply-accumulation. It has been known that, if the input data are uniformly distributed, the mean-squared error of the multiply-accumulation implemented by DA is a function of only the word lengths of the input, the output, and the ROM. The proposed method can calculate the mean-squared error by using the same parameters even when the input data are not uniformly distributed. The basic idea of the method is to regard the input data as a combination of uniformly distributed partial data with different word lengths. The mean-squared error can then be predicted as a weighted sum of the contributions of the partial data, where each weight is the ratio of the partial data to the total input data. Finally, the method is applied to a two-dimensional inverse discrete cosine transform (IDCT) and its practicability is confirmed by computer simulations of the IDCT implemented by DA.
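The bit-serial ROM-accumulation mechanism of DA described above can be sketched in a few lines. The following toy model (a hypothetical function, not the paper's error-analysis code) quantizes the inputs to two's-complement fixed point, builds a ROM of partial coefficient sums indexed by the cross-input bit pattern, and accumulates bit-serially with a negative weight for the sign bit.

```python
def da_inner_product(x, coeffs, bits):
    """Bit-serial distributed-arithmetic inner product (toy model).

    x: inputs in [-1, 1), quantized to `bits` two's-complement bits.
    coeffs: the fixed coefficient vector.
    The ROM has 2**len(x) entries, one partial sum per bit pattern.
    """
    n = len(x)
    # quantize inputs to two's-complement integers with `bits` bits
    q = [max(-2**(bits - 1), min(2**(bits - 1) - 1, round(v * 2**(bits - 1))))
         for v in x]
    # ROM: address = one bit taken from each of the n inputs
    rom = [sum(c for k, c in enumerate(coeffs) if (addr >> k) & 1)
           for addr in range(2 ** n)]
    acc = 0.0
    for b in range(bits):  # LSB first
        addr = 0
        for k in range(n):
            addr |= ((q[k] >> b) & 1) << k
        weight = 2.0 ** (b - bits + 1)
        if b == bits - 1:          # sign bit carries negative weight
            weight = -weight
        acc += rom[addr] * weight
    return acc
```

With exact ROM contents as above, the result matches the fixed-point inner product; the paper's analysis concerns the error introduced when the ROM and output word lengths are themselves limited.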
Eiji WATANABE Masato ITO Nobuo MURAKOSHI Akinori NISHIHARA
It is often desired to change the cutoff frequencies of digital filters in applications such as digital electronic instruments. This paper proposes a design of variable lowpass digital filters with wider ranges of cutoff frequencies than conventional designs. Wave digital filters are used as the prototypes of the variable filters. The proposed design is based on frequency scaling in the s-domain, while conventional designs are based on z-domain lowpass-to-lowpass transformations. A first-order approximation by Taylor series expansion is used so that the multiplier coefficients in a wave digital filter obtained from a frequency-scaled LC filter become linear functions of the scaling parameter, as in the conventional design. Furthermore, this paper discusses the reduction of the approximation error. The curvature is introduced as a figure of merit for the quality of the first-order approximation, and the use of a second-order approximation for large-curvature multiplier coefficients instead of the first-order one is proposed.
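The first- versus second-order approximation trade-off above can be illustrated numerically. The sketch below (hypothetical function names; `m` stands in for any smooth coefficient function, whereas the paper's coefficients come from a frequency-scaled LC filter) expands a coefficient around a nominal scaling parameter with central-difference derivatives.

```python
def taylor_coeff(m, a0, a, h=1e-4, order=1):
    """Approximate m(a) by a Taylor expansion of m around a0.

    order=1 gives the linear-in-the-scaling-parameter form used for
    most multiplier coefficients; order=2 adds the quadratic term,
    as proposed for large-curvature coefficients.  Derivatives are
    estimated by central differences with step h.
    """
    d1 = (m(a0 + h) - m(a0 - h)) / (2 * h)
    approx = m(a0) + d1 * (a - a0)
    if order >= 2:
        # second derivative is (twice) the curvature measure
        d2 = (m(a0 + h) - 2 * m(a0) + m(a0 - h)) / h ** 2
        approx += 0.5 * d2 * (a - a0) ** 2
    return approx
```

For a coefficient with large second derivative, the order=2 call tracks m(a) over a much wider range of the scaling parameter than the linear form.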
An efficient algorithm is presented for finding all solutions of piecewise-linear resistive circuits. In this algorithm, a simple sign test is performed to eliminate many linear regions that do not contain a solution. This makes the number of simultaneous linear equations to be solved much smaller. This test, in its original form, is applied to each linear region; but this is time-consuming because the number of linear regions is generally very large. In this paper, it is shown that the sign test can be applied to super-regions consisting of adjacent linear regions. Therefore, many linear regions are discarded at the same time, and the computational efficiency of the algorithm is substantially improved. The branch-and-bound method is used in applying the sign test to super-regions. Some numerical examples are given, and it is shown that all solutions are computed very rapidly. The proposed algorithm is simple, efficient, and can be easily programmed.
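The region-elimination idea above has a one-dimensional miniature: for a piecewise-linear f, a region whose endpoint values share a nonzero sign cannot contain a solution and is discarded without solving anything. The sketch below (hypothetical function; the paper's sign test operates on n-dimensional linear regions and super-regions) finds all roots of a 1-D piecewise-linear function this way.

```python
def pl_all_solutions(breakpoints, values):
    """All solutions of f(x) = 0 for a 1-D piecewise-linear f sampled
    at (breakpoint, value) pairs, using a sign test per linear region.
    """
    sols = []
    for (x0, x1), (f0, f1) in zip(zip(breakpoints, breakpoints[1:]),
                                  zip(values, values[1:])):
        if f0 * f1 > 0:
            continue               # sign test: region cannot contain a root
        if f0 == f1:               # flat segment; a root only if it is zero
            if f0 == 0.0:
                sols.append(x0)
            continue
        x = x0 + (x1 - x0) * (-f0) / (f1 - f0)   # solve the linear equation
        if x0 <= x <= x1 and (not sols or abs(x - sols[-1]) > 1e-12):
            sols.append(x)
    return sols
```

Only the regions that survive the sign test incur the cost of solving a linear equation, which is the source of the speedup claimed in the abstract.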
When the finite-difference time-domain (FD-TD) method is used to study time-domain electromagnetic fields in unbounded surroundings, a radiation boundary condition (RBC) based on one-way wave equations is frequently used. The reflection coefficient of the RBC itself is independent of frequency, but the reflection coefficient of its finite-difference approximation does depend on frequency. This study examines how the reflection characteristics are affected by frequency, presents coefficients for the RBC that give the expected reflection characteristics at a given frequency, and presents an application to the simulation of a matched termination of a rectangular waveguide.
Shigeru YAMADA Mitsuhiro KIMURA Hiroaki TANAKA Shunji OSAKI
In this paper, we propose a plausible software reliability growth model by applying the mathematical technique of stochastic differential equations. First, we extend a basic differential equation describing the average behavior of software fault-detection processes during the testing phase to a stochastic differential equation of Itô type, and derive a probability distribution of its solution processes. Second, we obtain several software reliability measures from the probability distribution. Finally, applying the method of maximum likelihood, we estimate the unknown parameters in our model by using data available from actual software testing procedures, and numerically show the stochastic behavior of the number of faults remaining in the software system. Further, the model is compared with existing software reliability growth models in terms of goodness-of-fit.
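One plausible Itô-type extension of the exponential fault-detection equation dN/dt = b(a − N) adds multiplicative noise, and can be simulated by the Euler-Maruyama method. The sketch below is an assumed formulation for illustration only; the paper's exact stochastic differential equation and parameters may differ.

```python
import math, random

def simulate_fault_detection(a=100.0, b=0.1, sigma=0.05,
                             t_end=50.0, steps=5000, seed=1):
    """Euler-Maruyama simulation of dN = b*(a-N)*dt + sigma*(a-N)*dW.

    a: total fault content, b: fault-detection rate, sigma: noise level.
    Returns one simulated sample of the number of detected faults at
    t_end; a - N(t) is then the number of faults remaining.
    """
    rng = random.Random(seed)
    dt = t_end / steps
    n = 0.0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        n += b * (a - n) * dt + sigma * (a - n) * dw
    return n
```

Averaging many such sample paths recovers the mean-value curve a(1 − e^(−bt)) of the underlying deterministic model.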
Masafumi SASAKI Naohiko YAMAGUCHI Tetsushi YUGE Shigeru YANAGI
Mean Time Between Failures (MTBF) is an important measure of practical repairable systems, but it has not been obtained for a repairable linear consecutive-k-out-of-n: F system. We first present a general formula for the (steady-state) availability of a repairable linear consecutive-k-out-of-n: F system with nonidentical components by employing the cut set approach or a topological availability method. Second, we present a general formula for the frequency of system failures of a repairable linear consecutive-k-out-of-n: F system with nonidentical components. Then the MTBF for the repairable linear consecutive-k-out-of-n: F system is obtained by using the frequency of system failures and the availability. Lastly, we derive some figures which show the relationship between the MTBF and the repair rate µ or ρ (=λ/µ) in the repairable linear consecutive-k-out-of-n: F system. The figures can be easily used and are useful for reliability design.
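For small systems, the steady-state availability defined above can be checked by direct state enumeration. The sketch below (a hypothetical brute-force check, exponential in n; the paper's cut-set formula avoids this cost) computes the availability of a linear consecutive-k-out-of-n:F system with nonidentical components.

```python
from itertools import product

def consecutive_kn_availability(avail, k):
    """Steady-state availability of a linear consecutive-k-out-of-n:F
    system by enumeration over all 2^n component states.

    avail: list of individual component availabilities.
    The system is failed iff some k consecutive components are down.
    """
    n = len(avail)
    total = 0.0
    for state in product([0, 1], repeat=n):   # 1 = component up
        p = 1.0
        for s, a in zip(state, avail):
            p *= a if s else (1.0 - a)
        # system is up iff no run of k consecutive down components
        run, failed = 0, False
        for s in state:
            run = run + 1 if s == 0 else 0
            if run >= k:
                failed = True
                break
        if not failed:
            total += p
    return total
```

The k=1 case reduces to a series system (product of availabilities) and k=n to a parallel system, which gives a quick sanity check on the enumeration.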
Leakage enhancement after an ESD event has been analyzed for output buffer LDD MOSFETs. The HBM ESD failure threshold for the LDD MOSFETs is only 200-300 V, and the failure is the leakage enhancement of the off-state MOSFETs called "soft breakdown" leakage. This leakage enhancement is presumed to be caused by electrons trapped in the gate oxide and/or the creation of interface states at the gate-overlapped drain region due to snap-back stress during the ESD event. The mechanism of the leakage can be explained by band-to-band and/or interface-state-to-band tunneling of electrons. Improvement of the HBM ESD threshold has also been evaluated by using two types of drain engineering: additional arsenic implantation for the output LDD MOSFETs and an "offset" gate MOSFET as a protection circuit for the output pins. By using these drain-engineering techniques, the threshold can be improved to more than 2000 V.
New focused ion beam (FIB) methods for microscopic cross-sectioning and observation, microscopic cross-sectioning and elemental analysis, and aluminum film microstructure observation are presented. The new methods are compared with the conventional methods and the conventional FIB methods from four viewpoints: ease of analysis, analysis time, spatial resolution, and pinpointing precision. The new FIB methods are shown to be the best overall when judged from these viewpoints.
Expressions for electromagnetic fields generated by vertical and horizontal electric dipoles located in the air or in a lossy half-space near its boundary with air are obtained from Hertz vectors by the method of operators under the condition |n|3
Ryutaro KAWAMURA Hisaya HADAMA Ken-ichi SATO Ikuo TOKIZAWA
This paper proposes a high-speed VP bandwidth control scheme for ATM networks that utilizes a distributed control mechanism. First, the characteristics of VPs are compared with those of digital paths in STM networks. A distributed control scheme is adopted for rapid control. The basic elements of the scheme, that is, the necessary distributed functions, the control algorithm, and the message transmission mechanism, are elucidated. The bandwidth alteration time with the proposed algorithm is estimated by considering network element processing and queuing delay. The proposed VP bandwidth control scheme can be applied to both public networks and leased line services. Finally, this paper focuses on its application to leased line services, and discusses the resource reduction effects of the proposed scheme.
Youichiro NIITSU Hiroyuki MIYAKAWA Osamu HIDAKA
Narrow emitter effects in self-aligned bipolar transistors are discussed. In addition to the increase of the non-ideal base current, a decrease of the ideal base current is newly observed, and the consequent fluctuation of the current gain becomes wider for smaller emitter geometries. Both phenomena are attributed to the reduction of the emitter impurity concentration.
Hideo ITOH Seiji MUKAI Hiroyoshi YAJIMA
Beam-steering devices are attractive for spatial optical interconnections. Such devices are essential not only for fixed-routed optical interconnections but also for flexibly routed optical interconnections, which are more powerful than the conventional fixed-routed ones. The structures and characteristics of two beam-steering devices, a beam-scanning laser diode and a fringe-shifting laser diode, are reported for these interconnections. Using these lasers, configurations of several optical interconnections, such as optical buses and optical data switching links as examples of fixed and flexibly routed optical interconnections, are discussed.
Hiroshi ESAKI Kazuaki IWAMURA Toshikazu KODAMA Takeo FUKUDA
Connection admission control is one of the preventive traffic controls in ATM networks. One objective of connection admission control is to keep the network load moderate so as to achieve a performance objective associated with quality of service (QOS). Because the cell loss rate is more sensitive to the offered load than the average queuing delay in ATM networks, the QOS requirement associated with the cell loss rate is considered. Connection admission control plays one of the major roles in traffic control: its job is to make an acceptance decision for each connection set-up request so as to control the network load. This paper proposes and evaluates a connection admission control method. The proposed method is suitable for real-time operation even with a large diversity of connection types, because the amount of calculation for connection admission control is reduced remarkably compared to conventional algorithms. Moreover, the amount of calculation does not increase even when the number of connection types increases. The proposed method uses a probability function for the number of cells transferred from the multiplexed connections and uses recursive equations to estimate the cell loss rate.
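The recursive-convolution style of cell-loss estimation can be sketched concretely. In the toy model below (hypothetical function and traffic model; the paper's exact recursive equations may differ), each connection emits a cell in a slot with some activity probability, the cell-count distribution is built up one connection at a time, and the loss rate is approximated as E[(X − C)⁺]/E[X] for link capacity C cells per slot.

```python
def cell_loss_rate(connections, capacity):
    """Estimate the cell loss rate of multiplexed on/off connections.

    connections: per-connection probability of emitting a cell in a slot.
    capacity: cells the link can carry per slot.
    The count distribution is convolved recursively, so adding a
    connection type costs one pass over the current distribution.
    """
    dist = [1.0]   # P[number of cells in a slot], initially 0 cells w.p. 1
    for p in connections:
        new = [0.0] * (len(dist) + 1)
        for cells, prob in enumerate(dist):
            new[cells] += prob * (1.0 - p)   # connection silent this slot
            new[cells + 1] += prob * p       # connection emits a cell
        dist = new
    offered = sum(c * pr for c, pr in enumerate(dist))
    excess = sum((c - capacity) * pr
                 for c, pr in enumerate(dist) if c > capacity)
    return excess / offered if offered > 0 else 0.0
```

An admission decision then reduces to tentatively convolving in the new connection and checking the estimated loss rate against the QOS target.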
This paper proposes a new combined fast algorithm for transversal adaptive filters. The fast transversal filter (FTF) algorithm and the normalized LMS (NLMS) algorithm are combined in the following way. In the initialization period, the FTF is used to obtain fast convergence. After convergence, the algorithm is switched to the NLMS algorithm, because the FTF cannot be used for a long time due to its numerical instability. A nonstationary environment, that is, a time-varying unknown system for instance, is classified into three categories: slowly time-varying, fast time-varying and suddenly time-varying systems. The NLMS algorithm is applied in the first situation. In the latter two cases, however, the NLMS algorithm cannot provide good performance, so the FTF algorithm is selected. Switching between the two algorithms is controlled automatically by using the difference of the MSE sequence: if the difference exceeds a threshold, the FTF is selected; otherwise, the NLMS is selected. Compared with the RLS algorithm, the proposed combined algorithm needs less computation while maintaining the same performance. Furthermore, compared with the FTF algorithm, it provides numerically stable operation.
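Of the two halves of the combined algorithm, the NLMS recursion is simple enough to sketch in full. The function below (a hypothetical illustration of the stable, steady-state half only; the FTF half and the MSE-based switching logic are not shown) adapts a transversal filter by the standard normalized LMS update.

```python
def nlms(x, d, taps, mu=0.5, eps=1e-8):
    """Normalized LMS adaptation of a transversal (FIR) filter.

    x: input samples, d: desired samples, taps: filter length,
    mu: step size (0 < mu <= 1 typical), eps: regularization.
    Returns the final weights and the a priori error sequence.
    """
    w = [0.0] * taps
    buf = [0.0] * taps
    errors = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                       # shift the regressor
        y = sum(wi * bi for wi, bi in zip(w, buf))  # filter output
        e = dn - y                                  # a priori error
        norm = sum(b * b for b in buf) + eps        # input energy
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
        errors.append(e)
    return w, errors
```

In the combined scheme, this recursion takes over once the FTF has converged, trading some tracking speed for guaranteed numerical stability.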
Hiroki SHIZUYA Kenji KOYAMA Toshiya ITOH
This paper presents a zero-knowledge interactive protocol that demonstrates that two factors a and b of a composite number n (=ab) are really known by the prover, without revealing the factors themselves. Here the factors a and b need not be primes. The security of the protocol is based on the difficulty of computing discrete logarithms modulo a large prime.
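The discrete-logarithm primitive on which such protocols rest can be illustrated with a Schnorr-style proof of knowledge. The sketch below shows only this underlying building block, not the paper's factorization protocol; the function and parameters are hypothetical, and both roles are simulated in one procedure for brevity.

```python
import random

def schnorr_round(p, g, secret, rng=random):
    """One honest round of a Schnorr-style proof of knowledge of the
    discrete logarithm `secret` of y = g^secret mod p (p prime).

    Prover: commitment t = g^r; Verifier: random challenge c;
    Prover: response s = r + c*secret mod (p-1);
    Verifier accepts iff g^s == t * y^c (mod p).
    """
    q = p - 1                      # exponent arithmetic modulo p-1
    y = pow(g, secret, p)          # public value
    r = rng.randrange(q)           # prover's commitment randomness
    t = pow(g, r, p)               # commitment
    c = rng.randrange(q)           # verifier's challenge
    s = (r + c * secret) % q       # prover's response
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Zero-knowledge follows because (t, c, s) can be simulated without the secret; repeating the round drives a cheating prover's success probability down.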
Takahide ISHIKAWA Kenji HOSOGI Masafumi KATSUMATA Hiroyuki MINAMI Yasuo MITSUI
This paper describes the reliability of recessed T-shaped gate HEMTs and their major failure mechanism, investigated by accelerated life tests and subsequent failure analysis. In this study, high-temperature storage tests under a DC bias condition were conducted on devices with three different recess depths of 100, 125, and 150 nm. The results clarified that the shallow-recess devices of under 125 nm depth show no degradation in minimum noise figure Fmin or gain Ga, indicating that standard HEMT devices, whose recess depth is chosen to be well under 125 nm, possess a sufficient reliability level. However, the devices with a deep recess of 150 nm showed degradation in both Fmin and Ga. Precise failure analyses, including SEM observation and von Mises stress simulation, revealed for the first time that the main failure mode in deeply recessed T-shaped gate HEMTs is an increase in the gate electrode's parasitic resistance Rg, caused by separation of the "head" and "stem" parts of the T-shaped gate electrode due to thermo-mechanical stress concentration.
Tsuyoshi KONISHI Jun TANIDA Yoshiki ICHIOKA
We propose an optical computing architecture called the pure optical parallel array logic system (P-OPALS) as an instance of a sophisticated optical computing system. On the P-OPALS, high-density images can be processed in parallel using an optical system with high resolving power. We point out problems in developing the P-OPALS and propose a logical foundation of the P-OPALS, called single-input optical array logic (S-OAL), as a solution to those problems. Based on the proposed architecture, an experimental P-OPALS system is constructed using three optical techniques: birefringent encoding, a selectable discrete correlator, and birefringent decoding. To show the processing capability of the P-OPALS, some basic parallel operations are demonstrated. The results indicate that an image consisting of 300×100 pixels can be processed in parallel on the experimental P-OPALS. Finally, we estimate the potential capability of the P-OPALS.