We describe a formal verification algorithm for pipelined processors. The algorithm proves the equivalence between a processor's design and its specification by rewriting recursive functions and applying a new type of mathematical induction: extended recursive induction. Once the user indicates only the selectors in the design, the algorithm can automatically prove processors having more than 10^1010 states. The algorithm is manually applied to benchmark processors with pipelined control, and we discuss how data width, memory size, and the numbers of pipeline stages and instructions influence the computation cost of proving the correctness of the processors. Furthermore, the algorithm can be used to generate a pipeline invariant.
Hiroto KAWAKAMI Yutaka MIYAMOTO Tomoyoshi KATAOKA Kazuo HAGIMOTO
This paper discusses an all-optical tank circuit that uses the comb-shaped gain spectrum generated by a Brillouin amplifier. The theory of timing clock extraction is presented for two cases: with two gains and with three gains. In both cases, the waveform of the extracted timing clock is simulated. According to the simulation, unlike an ordinary tank circuit, the amplitude of the extracted clock is not constant even when the quality factor (Q) is infinite. The extracted clock is clearly influenced by the pattern of the original data stream if the Brillouin gain is finite. The ratio of the maximum extracted clock amplitude to the minimum extracted amplitude is calculated as a function of the Brillouin gain. The detuning of the pump light frequency is also discussed: it induces not only changes in the Brillouin gain but also a phase shift in the amplified light. The relation between the frequency drift of the pump lights and the jitter of the extracted timing clock is shown for both cases, i.e., when two pump lights are used and when three are used. It is numerically shown that when all the pump lights have the same frequency drift, i.e., their frequency separation is constant, the phase of the extracted clock is not influenced by the frequency drift of the pump lights. The operation principle is demonstrated at 5 Gbit/s, 2.5 Gbit/s, and 2 Gbit/s using two pumping techniques. The quality factor and the suppression ratio in the baseband domain are measured and found to be 160 and 28 dB, respectively.
This paper proposes a novel updating technique, dynamic updating, for extending or modifying functions in a distributed system. The usual updating technique requires synchronous suspension of multiple processes to avoid the unspecified reception caused by conflicts between different versions of processes. This technique therefore incurs very high overhead, and the types of distributed systems to which it can be applied are restricted to the RPC (remote procedure call) or client-server type. With the proposed dynamic updating technique, updating management can be invoked asynchronously by each process while correct execution of the system is assured, i.e., the system can cope with the unspecified reception caused by a mixture of processes of different versions. Therefore, low-overhead updating can be achieved in partner-type distributed systems, a more general type that includes communications systems and computer networks. The dynamic updating technique is implemented by a novel distributed algorithm consisting of group communication, checkpoint setting, and rollback recovery. With the proposed algorithm, rollback recovery is achieved with the lowest overhead, i.e., the set of checkpoints determines the last global state for consistent rollback recovery, and the set of processes that need to roll back simultaneously is the smallest one. This paper also proves the correctness of the proposed algorithm.
Masahito KURIHARA Azuma OHUCHI
We extend Gramlich's theorem on the modular termination of term rewriting systems by relaxing the disjointness condition and introducing composability instead. More precisely, we prove that if R1 and R-1 are composable, terminating term rewriting systems such that their union is nonterminating, then for some a ∈ {1, -1}, Ra ∪ Cε is nonterminating and R-a is collapsing, where Cε is the system {c(x, y) → x, c(x, y) → y}.
A key element in CDMA transmission is DS spreading. Spreading sequences in a DS/SSMA system fall into two categories: synchronization and data. For synchronization sequences, good auto-correlation and cross-correlation properties are required in order to guarantee fast acquisition with a minimum false-alarm probability. On the other hand, the auto-correlation property may not be as important in data spreading, since synchronization is obtained by the synchronization spreading. In this paper we provide a set of synchronization sequences and a set of data sequences, each a set of binary N-tuples, that have the necessary correlation constraints.
We model a road network as a directed graph G(V, E) with a source s and a sink t, where each edge e has a positive length l(e) and each vertex v has a distribution function αv with respect to the traffic entering and leaving v. This paper proposes a polynomial-time algorithm for evaluating the importance of each edge e ∈ E, which is defined to be the traffic f(e) passing through e when the required traffic Fst (> 0) is assigned from s to t along only shortest s-t paths in accordance with the distribution function αv at each vertex v.
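The evaluation just described can be sketched as follows. This is a minimal illustration under our own naming, not the authors' algorithm: distances from s and to t identify the edges lying on shortest s-t paths, and flow is then propagated through the resulting DAG in order of increasing distance from s. An equal split among outgoing shortest-path edges stands in for the unspecified distribution function αv.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src, n):
    # standard Dijkstra over vertices 0..n-1
    dist = {v: float("inf") for v in range(n)}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def edge_traffic(n, edges, s, t, F):
    """Traffic f(e) on each edge when F units travel from s to t along
    shortest s-t paths only (equal split stands in for alpha_v)."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        radj[v].append((u, w))
    ds, dt = dijkstra(adj, s, n), dijkstra(radj, t, n)
    # an edge (u, v) lies on a shortest s-t path iff ds[u] + l(e) + dt[v] = ds[t]
    out_dag = defaultdict(list)
    for u, v, w in edges:
        if ds[u] + w + dt[v] == ds[t]:
            out_dag[u].append(v)
    flow_in = defaultdict(float)
    flow_in[s] = F
    f = {}
    # sweep vertices in order of increasing distance from s
    for u in sorted(range(n), key=lambda x: ds[x]):
        if flow_in[u] == 0 or not out_dag[u]:
            continue
        share = flow_in[u] / len(out_dag[u])
        for v in out_dag[u]:
            f[(u, v)] = f.get((u, v), 0.0) + share
            flow_in[v] += share
    return f
```

On a diamond network with two equal shortest paths, the required traffic splits evenly over both, so every edge carries half of F.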
This paper presents an improvement in the data error rate against burst noise obtained by using both chip interleaving and a hard limiter in direct-sequence spread-spectrum (DS/SS) communication systems. Chip interleaving, a method unique to DS/SS systems, is effective when the burst noise power is small; when the burst noise power is large, however, the data error rate is degraded. On the other hand, although a hard limiter suppresses burst noise power, it provides little benefit when the burst noise is long. When chip interleaving and the hard limiter are used together, they work complementarily, and a stable and considerable improvement in the data error rate is achieved.
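The two operations can be sketched as below; this is a generic illustration under our own naming, not the paper's system. A block chip interleaver writes chips row-wise and reads them column-wise, so a burst of consecutive corrupted chips is dispersed across many symbols, while the hard limiter clips the received amplitude so a strong burst cannot dominate the despreading correlation.

```python
import numpy as np

def interleave(chips, rows):
    # write chips row-wise into a (rows x cols) block, read column-wise;
    # a burst of `rows` consecutive chips then hits `rows` different symbols
    cols = len(chips) // rows
    return np.asarray(chips).reshape(rows, cols).T.ravel()

def deinterleave(chips, rows):
    # inverse operation at the receiver
    cols = len(chips) // rows
    return np.asarray(chips).reshape(cols, rows).T.ravel()

def hard_limit(x, clip=1.0):
    # clip the received amplitude so strong burst noise cannot
    # dominate the despreading correlation
    return np.clip(np.asarray(x, dtype=float), -clip, clip)
```

Deinterleaving after interleaving restores the chip stream exactly; the limiter leaves in-range samples untouched and saturates the rest.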
Takashi KANEKO Yutaka YAMAGATA Takaharu IDOGAKI Tadashi HATTORI Toshiro HIGUCHI
A 3-dimensional specific thickness profile was fabricated on a thin glass diaphragm lens to reduce aberration at short focal distances for greater dynamic focusing. The deformation of the diaphragm was calculated by stress analysis using the Finite Element Method (FEM); geometric nonlinearity is considered in the FEM analysis. The glass diaphragm is 10 mm in diameter with an average thickness of 11 µm. To obtain both a curved shape and an optical surface on the glass diaphragm, a 3-dimensional precision grinding technique was used. The processed shape matches the designed one with less than 0.3 µm deviation, and the average surface roughness is 0.005 µm. The optical characteristics of the dynamic focusing lens with the specific thickness profile were measured by Modulation Transfer Function (MTF) measurement equipment. At a focal distance of 250 mm, the resolution of the specific-thickness diaphragm lens is 10 cycles/mm, whereas that of the uniform-thickness diaphragm is 4 cycles/mm. At other focal distances as well, the specific-thickness diaphragm shows optical characteristics superior to those of the uniform-thickness diaphragm. The resolution of the 3-dimensional-profile diaphragm is 2.5 times finer at a focal distance of 250 mm, making it capable of displacement control for variable optic devices. This was achieved by employing semiconductor processing methods in conjunction with the precision grinding techniques necessary for fabricating microstructures.
Ryutaro MATSUMURA Osamu MIYAGISHI
Managed Objects (MOs) can be specified by the combination of a static information model and a dynamic process model. First, this paper presents a mapping of attributes from a process model diagram to an information model diagram. It then introduces a concept of topology into these two models and proposes a hypothesis about the relationship between the topologies of the two models. Stated explicitly, the hypothesis is that all attributes of incoming or outgoing data related to a process in the process model are mapped to an information model in which these attributes are interconnected by an explicit relationship with a specific meaning, such as physical containment or logical connection. Intuitively, if two attributes have a close topological relationship in the process model, the mapped attributes also have a close topological relationship in the information model. This hypothesis provides clues for determining whether an attribute is in error, either in the process model or in the information model: by examining how the attributes of incoming or outgoing data related to a process are mapped to the information model, we can detect whether there is an error with respect to that process. Error correction is performed with the assistance of probability analysis. This method of error detection and correction can be implemented in a computer-aided tool; error detection at the attribute level then becomes automatic, and error correction at the attribute level becomes interactive through the tool. Finally, the validity of the hypothesis is confirmed by analyzing ITU-T Recommendation M.3100: the specification of the fabric object class defined in M.3100 is transformed into the two models, and the hypothesis is validated by analyzing the mapping between them.
Makoto TSUJIGADO Teruo HIKITA Jun GINBAYASHI
In formal specification languages for parallel processes, such as CSP and LOTOS, algebraic laws for the basic operators are provided that can be used to transform process expressions; in particular, the composition of processes can be calculated using these laws. Process composition can be used to simplify and improve a specification, and also to prove properties of the specification such as deadlock absence. We test the practicality of process composition using CSP and suggest useful techniques, working through an example of nontrivial size and complexity. We emphasize that the size explosion of composed processes, caused by the interleaving of the events of component processes, is a serious problem. We then propose a technique, which we name the two-way pipe, that can be used to reduce the size of the composed process and can be regarded as a program optimization at the specification level.
Tetsushi KOIDE Yoshinori KATSURA Katsumi YAMATANI Shin'ichi WAKABAYASHI Noriyoshi YOSHIDA
This paper presents a heuristic floorplanning method that improves the method proposed by Vijayan and Tsay. It is based on the tentative insertion of constraints, which intentionally produces redundant constraints so that a wide range of the solution space can be searched. The proposed method can reduce the total area of the blocks by removing and inserting constraints on the critical path in both the horizontal and vertical constraint graphs. Experimental results on MCNC benchmarks showed that the quality of the solutions of the proposed method is better than that of [7],[8] by about 15% on average, and that even for large numbers of blocks the proposed method maintains high solution quality.
Jiro NAGANUMA Takeshi OGURA Tamio HOSHINO
This paper proposes a new environment for high-level VLSI design specification validation using "Algorithmic Debugging" and evaluates its benefits on three significant examples (a protocol processor, an 8-bit CPU, and a Prolog processor). A design is specified at a high level using the structured analysis (SA) method, which is useful for analyzing and understanding the functionality to be realized. The specification written in SA is transformed into a logic programming language and simulated in it. The errors (which cause the simulation to terminate with an incorrect output) included in the three large examples are efficiently located by answering just a few queries from the algorithmic debugger. The number of interactions between the designer and the debugger is reduced by a factor of ten to a hundred compared with conventional simulation-based validation methodologies. The correct SA specification can be automatically translated into a Register Transfer Level (RTL) specification suitable for logic synthesis. In this environment, a designer is freed from the tedious task of debugging an RTL specification and can concentrate on the design itself. This environment promises to be an important step towards efficient high-level VLSI design specification validation.
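The query process such a debugger performs can be pictured with a small sketch. This is a generic illustration of algorithmic debugging over a computation tree, with our own names and tree encoding, not the paper's tool: an incorrect node all of whose children computed correct results is the erroneous procedure.

```python
def locate_bug(node, oracle):
    """Algorithmic debugging over a computation tree.

    `node` is a tuple (name, result, children); `oracle(name, result)`
    plays the designer, answering whether the result is as intended.
    Assuming the root's result is already known to be wrong, returns
    the first node whose result is wrong while all of its children's
    results are right -- i.e., the procedure containing the error.
    """
    name, result, children = node
    for child in children:
        if not oracle(child[0], child[1]):
            # the error lies inside this incorrect subcomputation
            return locate_bug(child, oracle)
    return name
```

Each oracle call corresponds to one designer query, and the query count is bounded by the depth and branching of the tree rather than by the length of the simulation trace, which is why so few interactions suffice.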
Hiroyuki HAMAZUMI Yasuhiro ITO Hiroshi MIYAZAWA
This paper describes an adaptively weighted code division multiplexing (AW-CDM) system, in other words, a power-controlled spread-spectrum multiplexing system, and its application to the hierarchical digital broadcasting of television signals. AW-CDM, combined with a multi-resolution video encoder, can provide hierarchical transmission that allows both high-quality services for fixed receivers and reduced-quality services for mobile/portable receivers. The carrier and the clock are robustly regenerated by using a spread-spectrum-multiplexed pseudorandom noise (PN) sounder as a reference in the receiver. The PN reference is also used for Rake combining of signals arriving via different paths and for adaptive equalization (EQ). In a prototype AW-CDM modem, three layers of hierarchical video signals (highs: 5.91 Mbps, middles: 1.50 Mbps, and lows: 0.46 Mbps) are divided into a pair of 64 orthogonal spread-spectrum subchannels, each of which can be given a different priority and therefore a different threshold; in this case, three different thresholds are given. The modem's transmission rate is 9.7 Mbps in the 6 MHz band. Indoor transmission tests confirm that the lows (weighted-power layer I), middles (averaged-power layer II), and highs (lightened-power layer III) are retrievable under conditions in which the desired-to-undesired signal ratios (DURs) are respectively 0 dB, 8.5 dB, and 13.5 dB. If the undesired signals are multipaths, these performances are dramatically improved by Rake combining and EQ. The AW-CDM system can be used for 20-30 Mbps advanced television (ATV) transmission in the 6-MHz bandwidth simply by changing the binary inputs into quaternary or octonary inputs.
A stepwise refinement method for communications service specifications is proposed to generate communications software that can conform to any network architecture. This method uses a two-layered language: one layer is a service specification description language (STR), and the other is a supplementary specification description language for implementing STR descriptions on a communications system (STR/D). STR specifies terminal behaviors that can be recognized from a perspective outside the communications system. With STR, a communications service is defined by a set of rules that can be described without detailed knowledge of communications systems or communications network architectures; each STR rule describes a global state transition of terminals. Supplementary specifications, such as terminal control and network control, are needed to implement the communications services specified by STR rules; these supplementary specifications are described by STR/D rules. Communications services, such as UPT (Universal Personal Telecommunication), are standardized so that they can be provided on a given functional model consisting of functional entities. Specifications for each functional entity in a network are obtained from the two kinds of initially described specifications mentioned above. The obtained specifications are described by STR(L) and STR/D(L) rules, which specify the local specifications of a functional entity. These specifications for functional entities are then transformed into software specifications, and finally communications software is generated from these software specifications. This stepwise refinement method makes it possible to generate communications software that can conform to any functional model from service specifications.
Takahiro OIE Tadamitsu IRITANI Hiroshi KAWAKAMI
In this paper, we consider the case in which frequency-shift-keying (FSK) modulation and a phase-locked-loop (PLL) demodulator are used in a frequency-hopped spread-spectrum (FH-SS) communication system. Since the carrier frequencies of undesired transmitters may collide with the carrier frequency of the desired transmitter in such a system, we evaluate the response of a PLL to two sinusoidal inputs in order to estimate how the PLL demodulator is affected by the collision of carrier frequencies. First, we compute the synchronization diagrams of the PLL with two sinusoids. These indicate that the allowable amplitude ratio of the interfering transmitter's signal to the desired transmitter's signal decreases as the FSK modulation width of the desired transmitter increases. Next, we calculate the output of the PLL demodulator with two sinusoids. From this, it is shown that the allowable amplitude ratio is bounded by a constant value even if the FSK modulation width is sufficiently small.
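A first-order baseband model of the situation can be sketched numerically. This is our own illustrative simulation, not the paper's analysis: a multiplier-type phase detector sees the desired tone plus one interfering tone with amplitude ratio m and frequency offset omega_i, and the loop equation is integrated by forward-Euler steps (all parameter values below are arbitrary).

```python
import math

def pll_response(K=50.0, m=0.3, omega_i=200.0, phi1=0.5,
                 dt=1e-4, steps=20000):
    """Phase trajectory of a first-order PLL driven by two sinusoids.

    The low-pass-filtered phase-detector output for the desired tone
    (phase phi1) plus an interferer (amplitude ratio m, offset omega_i)
    drives the VCO phase theta; returns the trajectory of theta.
    """
    theta = 0.0
    out = []
    for n in range(steps):
        t = n * dt
        # desired-tone term plus interference term after low-pass filtering
        e = math.sin(phi1 - theta) + m * math.sin(omega_i * t - theta)
        theta += K * e * dt   # forward-Euler step of the loop equation
        out.append(theta)
    return out
```

With m = 0 the loop locks onto phi1 exactly; a nonzero m leaves a periodic ripple on theta, illustrating the kind of disturbance a carrier-frequency collision imposes on the demodulator output.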
Kouji OHUCHI Hiromasa HABUCHI Takaaki HASEGAWA
Synchronization has been one of the problems in M-ary spread-spectrum communication systems. In this letter, we propose a frame synchronization method that uses the Hadamard matrix together with a frame synchronization method of PCM communication systems. Moreover, we analyze the probability of keeping the synchronous state and the frame renewal rate, and we evaluate the relationship between these probabilities and the number of stages of the counters.
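As a minimal sketch of the sequence family involved, the Sylvester construction below builds a Hadamard matrix whose rows are mutually orthogonal ±1 sequences; the code and its NumPy formulation are ours, not the letter's.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1]])
    while H.shape[0] < n:
        # H_{2k} = [[H, H], [H, -H]]
        H = np.block([[H, H], [H, -H]])
    return H
```

The inner product of two distinct rows is 0, while that of a row with itself is n, so a correlator peaks only when both the code and the alignment match, which is the property frame synchronization exploits.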
Manabu SAWADA Masaaki KATAYAMA Takaya YAMAZATO Akira OGAWA
This paper discusses the characteristics of nonlinearly amplified spread-spectrum (SS) signals. We evaluate the symbol error-rate performance with the conventional receiver while varying the length of the spreading sequence. In addition, we propose a receiver with MLSE. The configuration of an MLSE for nonlinearly amplified signals is generally complicated; however, we show that the complexity of the MLSE receiver can be reduced, because the number of reference sequences required in the receiver for an SS signal is small. As a result, it is shown that the error-rate performance of the nonlinearly amplified SS signal can be improved by the proposed receiver and that the degradation caused by the nonlinear amplification can be made negligibly small with a sufficiently long spreading sequence.
Yasufumi SASAKI Masanobu KOMINAMI Hiroji KUSAKA
An efficient full-wave spectral-domain moment method is developed to compute the current distribution and the radiation associated with microstrip discontinuities. Two techniques are used to increase the efficiency of the method-of-moments algorithm so that a transmission line of moderate electrical size can be analyzed in reasonable time.
Kiyoshi YOSHIDA Atsuo TAKAHASHI
The authors have studied the mechanism of the transition from the metallic phase to the gaseous phase in contact break arcs. For further elucidation of the mechanism, we carried out spectroscopic measurements. A spectrum measurement system with high time resolution was constructed using two monochromators and a bifurcated image fiber with one input port and two output ports. The input port received the arc light, and the two monochromators received the arc light from the two output ports, respectively. The spectral sensitivity of the two monochromators was corrected with a standard lamp. We simultaneously measured two spectra of break arcs for Ag in laboratory air, under the conditions of source voltage E = 48 V, load inductance L = 2.3 mH, and closed-contact current I0 = 6 A. As a result, the time-varying tendency of the spectral intensity is similar for the same element, even if the wavelength is different. From a comparison of the time-averaged spectral intensities, it is clarified that the average intensity of the gas spectrum does not reach 10% of that of the metallic atomic spectrum (Ag I, 520.91 nm). In addition, the decrease point of the Ag II (ion) spectrum is found to coincide with the peak of the Ag I (atom) spectrum.
Mitsuru TAKEUCHI Takayoshi KUBONO
This paper describes a simple system for measuring the spatial distributions of the spectral intensities of the Ag I 421 nm and Ag I 546 nm lines among the many optical spectra emitted from an arc discharge between separating Ag contacts. To detect the intensities of the two spectra, the prototype equipment has two sets, each assembled from a CCD color linear image sensor, a lens, and optical filters, arranged at right angles to each other. The intensities of the two spectra can be recorded with 2 ms time resolution over a long arc duration on a digital memory. The recorded digital signals are processed by a personal computer to reconstruct the two spatial distributions of spectral intensity in a cross section of the arc column with the Algebraic Reconstruction Technique.
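The reconstruction step can be sketched with the classical Kaczmarz form of the Algebraic Reconstruction Technique; this is a generic illustration under our own naming, not the paper's implementation. Each projection equation A[i] @ x = b[i] (one ray sum through the cross section) is enforced in turn by projecting the current image estimate onto its hyperplane.

```python
import numpy as np

def art_reconstruct(A, b, sweeps=500, relax=1.0):
    """Kaczmarz-style ART: A maps pixel values to ray sums, b holds
    the measured sums; repeated sweeps converge on a consistent image."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom == 0.0:
                continue
            # move x onto the hyperplane ai @ x = b[i]
            x += relax * (b[i] - ai @ x) / denom * ai
    return x
```

For a toy 2 x 2 image observed through row-sum and column-sum "rays", the system is consistent and ART recovers the image, mirroring how the tomographic cross section of the arc column is rebuilt from the two sensors' line integrals.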