Paul G. SCROBOHACI Ting-wei TANG
Impact ionization in two n+-n--n+ device structures is investigated. Data obtained from self-consistent Monte-Carlo (SCMC) simulations of the devices is used to show that the average energy
Tsuyoshi KONISHI Jun TANIDA Yoshiki ICHIOKA
We propose an optical computing architecture called the pure optical parallel array logic system (P-OPALS) as an instance of a sophisticated optical computing system. On the P-OPALS, high-density images can be processed in parallel using an optical system with high resolving power. We point out problems in developing the P-OPALS and propose a logical foundation for it, called single-input optical array logic (S-OAL), as a solution to those problems. Based on the proposed architecture, an experimental P-OPALS is constructed using three optical techniques: birefringent encoding, a selectable discrete correlator, and birefringent decoding. To show the processing capability of the P-OPALS, some basic parallel operations are demonstrated. The results indicate that images consisting of 300 x 100 pixels can be processed in parallel on the experimental P-OPALS. Finally, we estimate the potential capability of the P-OPALS.
Masaharu KOMATSU Yukuo HAYASHIDA Kozo KINOSHITA
In this paper, we analyze the throughput of the Stop-and-Wait and Go-Back-N ARQ schemes over an unreliable channel modeled by a two-state Markov process, where in general the block error probabilities of the two states differ. From analytical results and numerical examples, we show that the throughput of the Stop-and-Wait ARQ scheme depends only on the overall average error probability, while that of the Go-Back-N ARQ scheme depends on the characteristics of the Markov process.
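The claimed contrast can be illustrated with a small Monte-Carlo sketch (our own illustration, not the paper's analysis): a two-state Gilbert channel with the same average block error rate but different burstiness, fed to simplified Stop-and-Wait and Go-Back-N models. The transition probabilities, error rates, and window size below are arbitrary assumptions.

```python
import random

def gilbert_ok(n, p_gb, p_bg, err_g, err_b, seed=0):
    """Per-slot success flags for a two-state (Gilbert) Markov channel.

    p_gb / p_bg are the good->bad / bad->good transition probabilities;
    err_g / err_b are the block error rates inside each state.
    """
    rng = random.Random(seed)
    good, ok = True, []
    for _ in range(n):
        ok.append(rng.random() >= (err_g if good else err_b))
        if rng.random() < (p_gb if good else p_bg):
            good = not good
    return ok

def sw_throughput(ok):
    """Stop-and-Wait: every slot is one (re)transmission attempt."""
    return sum(ok) / len(ok)

def gbn_throughput(ok, window):
    """Go-Back-N: a failed slot wastes itself plus window-1 discarded slots."""
    t = delivered = 0
    while t < len(ok):
        if ok[t]:
            delivered += 1
            t += 1
        else:
            t += window          # go back: the failed block is resent later
    return delivered / len(ok)

slots = 200_000
# Two channels with the same average error rate (10%) but different memory:
iid    = gilbert_ok(slots, 0.5, 0.5, 0.1, 0.1)      # effectively memoryless
bursty = gilbert_ok(slots, 0.01, 0.09, 0.01, 0.91)  # long good and bad runs
for name, ch in (("iid", iid), ("bursty", bursty)):
    print(name, round(sw_throughput(ch), 3), round(gbn_throughput(ch, 4), 3))
```

Stop-and-Wait yields nearly the same throughput on both channels, while Go-Back-N benefits noticeably from the bursty channel's long error-free runs, consistent with the abstract's conclusion.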
The server-aided secret computation (SASC) protocol, also called the verifiable implicit asking protocol, is a protocol in which a powerful but untrusted auxiliary device (server) helps a smart card (client) compute a secret function efficiently. In this paper, we extend the concept of the addition sequence to the secure addition sequence and develop an efficient algorithm to construct such a sequence. By incorporating the secure addition sequence into the SASC protocol, its performance can be further enhanced.
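As background, the underlying notion can be sketched as follows (a naive, insecure construction for illustration only, not the paper's secure addition sequence): an addition sequence is an addition chain that passes through every target exponent, so all the corresponding powers can be computed with one multiplication per chain element.

```python
def binary_chain(e):
    """Addition chain for one exponent via the binary square-and-multiply rule."""
    chain = [1]
    for bit in bin(e)[3:]:                 # bits after the leading 1
        chain.append(chain[-1] * 2)        # doubling step
        if bit == '1':
            chain.append(chain[-1] + 1)    # +1 step
    return chain

def addition_sequence(targets):
    """Naive addition sequence: the merged, sorted binary chains of all targets.

    Every element is a sum of two earlier elements (c + c or c + 1), so the
    merged set is itself an addition chain passing through every target.
    """
    return sorted(set(x for t in targets for x in binary_chain(t)))

def powers_from_sequence(base, seq, mod):
    """One modular multiplication per sequence element yields all powers."""
    val = {1: base % mod}
    for e in seq[1:]:
        for a in seq:                      # find a split e = a + (e - a)
            if a >= e:
                break
            if e - a in val:
                val[e] = val[a] * val[e - a] % mod
                break
    return val

seq = addition_sequence([23, 55])
vals = powers_from_sequence(3, seq, 1009)
print(vals[23] == pow(3, 23, 1009), vals[55] == pow(3, 55, 1009))  # True True
```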
In this paper, an identity-based non-interactive key sharing scheme (IDNIKS) is proposed in order to realize the original concept of the identity-based cryptosystem, for which no secure realization scheme has been proposed. First, the necessary conditions for a secure realization of IDNIKS are considered from two points of view: (i) the possibility of sharing a common key non-interactively and (ii) security against conspiracy among entities. Then a new non-interactive key sharing scheme is proposed, whose security depends on the difficulty of factoring. The most important contribution is to have succeeded in obtaining each entity's secret information as an exponent of that entity's identity information. The security of IDNIKS against conspiracy among entities is also considered in detail.
Ryuichi SAKAI Masakatu MORII Masao KASAHARA
For improving the RSA cryptosystem, desirable conditions on key structures have been studied intensively. Recently, M. J. Wiener presented a cryptanalytic attack on the use of small RSA secret exponents. To be secure against Wiener's attack, the size of the secret exponent d should be more than one-quarter of the size of the modulus n = pq (in bits). Moreover, in many cases it is desirable to make the public exponent e as small as possible. However, if a small d is chosen first, as in a digital signature system with a smart card, the size of e inevitably grows to that of n when the conventional key generation algorithm is used. This paper presents a new algorithm, Algorithm I, for generating RSA keys secure against Wiener's attack. With Algorithm I, it is possible to choose smaller RSA exponents under certain conditions on the key parameters; for example, we can construct RSA keys with a public exponent e of two-thirds and a secret exponent d of one-third of the size of the modulus n (in bits). Furthermore, we present a modified version of Algorithm I, Algorithm II, for generating strong RSA keys for which factoring n remains difficult. Finally, we analyze the performance of Algorithm I and Algorithm II.
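Wiener's attack itself, which motivates the one-quarter bound, can be sketched in a few lines (the standard textbook version, not the paper's Algorithm I): when d is below roughly n**0.25 / 3, the fraction k/d from e*d = 1 + k*phi appears among the convergents of the continued fraction of e/n. The toy primes and the small d below are illustrative assumptions.

```python
def contfrac(a, b):
    """Continued-fraction quotients of a/b."""
    while b:
        q = a // b
        yield q
        a, b = b, a - q * b

def convergents(a, b):
    """Successive convergents (numerator, denominator) of a/b."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    for q in contfrac(a, b):
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener_attack(e, n):
    """Try each convergent k/d of e/n as the k/d in e*d = 1 + k*phi."""
    for k, d in convergents(e, n):
        if k == 0:
            continue
        if pow(pow(2, e, n), d, n) == 2:   # does d invert e on a test message?
            return d
    return None

# Toy key with a deliberately small secret exponent (illustrative primes).
p, q = 104729, 104723
n, phi = p * q, (p - 1) * (q - 1)
d = 101                       # below n**0.25 / 3, so the attack must succeed
e = pow(d, -1, phi)           # e then grows to roughly the size of n
print(wiener_attack(e, n))    # recovers 101
```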
Kyoichi NAKASHIMA Hitoshi MATZNAGA
For systems in which the probability that an incorrect output is observed differs with input values, we adopt the redundant use of n copies of identical systems, which we call an n-redundant system. This paper presents a method to find the optimal redundancy for minimizing the probability of dangerous errors (D-errors). First, it is proved that a k-out-of-n redundancy, or a mixture of two kinds of k-out-of-n redundancies, minimizes the probability of D-errors under the condition that the probability of output errors, including both dangerous and safe errors, is below a specified value. Next, an algorithm is given to find the optimal series-parallel redundancy by using properties of the distance between two structure functions.
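The k-out-of-n tradeoff the abstract refers to can be illustrated numerically (a generic sketch under an independence assumption, not the authors' optimization algorithm): raising the voting threshold k monotonically lowers the probability that at least k copies fail dangerously. The per-copy error probability below is an assumed value.

```python
from math import comb

def at_least(k, n, p):
    """P[at least k of n independent copies err], per-copy error probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_d = 0.01            # assumed per-copy dangerous-error probability
n = 5
for k in range(1, n + 1):
    print(k, at_least(k, n, p_d))   # dangerous-error rate of k-out-of-n voting
```

A larger k suppresses dangerous errors but makes safe (fail-stop) errors more likely, which is why the paper minimizes D-errors subject to a bound on the total error probability.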
Koblitz and Miller proposed a method by which the group of points on an elliptic curve over a finite field can be used for public-key cryptosystems in place of the multiplicative group of a finite field. To realize signature or identification schemes on a smart card, we need to reduce both the amount of data stored in the card and the amount of computation it performs. In this paper, we show how to construct such elliptic curves while keeping security high.
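For readers unfamiliar with the setting, the elliptic-curve group operation that replaces modular exponentiation can be sketched as follows (a toy curve with assumed parameters, unrelated to the curves constructed in the paper):

```python
p, a, b = 97, 2, 3        # assumed toy curve: y^2 = x^3 + 2x + 3 over GF(97)

def add(P, Q):
    """Group law on the curve; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add; the cryptographic primitive."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (0, 10)               # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
print(mul(2, G))          # → (65, 32)
```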
When used for automotive applications, microcomputers have to meet two requirements more demanding than those for general use. One is responding to external events within a time scale of microseconds; the other is the high quality and high reliability necessary for the severe environmental operating conditions and the ambitious market requirements inherent to automotive applications. These needs, especially the latter, have been met by further elaboration of each basic technology involved in semiconductor manufacturing; at the same time, various logic parts have been built into the microcomputer. This paper deals with several design approaches to the high-quality and high-reliability objective. First, testability is improved by a logical separation method focusing on the logic simulation model for generating test vectors, which cuts the time required for test vector development in half. Next, noise suppression methods provide electromagnetic compatibility (EMC). Then, a simplified memory transistor analysis evaluates the V/I characteristics directly via external pins, without opening the mold seal, removing the passivation, or placing a probe needle on the chip. Finally, the reliability of the on-chip EPROM is increased by a special circuit that raises the threshold value by approximately 1 V compared to EPROMs without such a circuit.
Tomonori SEKIGUCHI Kazuhito FURUYA
The potential distribution around a linear array of donor atoms in a semiconductor crystal is calculated, approximating the linear array by a continuous line charge. Two methods are used for the analysis: a self-consistent solution of Poisson's equation and the effective-mass Schrödinger equation, and the Thomas-Fermi approximation. The results of both methods agree very well, and it is shown that a potential distribution as fine as the electron wavelength can be formed by appropriate arrangement of the impurity atoms. Arrays of impurity atoms can therefore act as building elements for future electron wave devices.
Hiroshi UEDA Masaya OHTA Akio OGIHARA Kunio FUKUNAGA
In this article, the autocorrelation associative neural network, one of the well-known applications of neural networks, is improved to extend its capacity and error-correcting ability. Our approach is based on the observation that negative self-feedback removes spurious states. We therefore propose a method that makes the self-feedback terms as small as possible within the range in which all stored patterns remain stable. Since the method may fall into oscillation, a state transition rule that enables escape from oscillation is also presented. The effectiveness of the method is confirmed by computer simulations.
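The effect described above can be illustrated with a toy autocorrelation (Hopfield-type) memory (our own minimal sketch; the self-feedback value is an assumed constant, not the value determined by the proposed method):

```python
import numpy as np

patterns = np.array([[1]*8 + [-1]*8,      # two orthogonal 16-bit patterns
                     [1, -1]*8])
n = patterns.shape[1]
W = (patterns.T @ patterns).astype(float) / n   # Hebbian autocorrelation matrix
np.fill_diagonal(W, -0.1)                 # negative self-feedback (assumed value)

def recall(s, steps=10):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    s = s.astype(float).copy()
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1.0               # break ties deterministically
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

noisy = patterns[0].copy()
noisy[[0, 8]] *= -1                       # flip two bits
print(np.array_equal(recall(noisy), patterns[0]))   # True: pattern restored
```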
Hiroaki KOBAYASHI Hideyuki KUBOTA Susumu HORIGUCHI Tadao NAKAMURA
The ray-tracing algorithm can synthesize very realistic images, but it is very time consuming. To address this problem, a load balancing strategy using temporal coherence between successive images of an animation is presented for balancing computational loads among the processing elements of a parallel processing system. Our parallel processing model is based on a space subdivision method for the ray-tracing algorithm: the subdivided object space is distributed among the processing elements of the parallel system. To clarify the effectiveness of the load balancing strategy, we examine the system performance by computer simulation.
Kenji SHIMA Koichi MUNAKATA Shoichi WASHINO Shinji KOMORI Yasuya KAJIWARA Setsuhiro SHIMOMURA
Automotive electronics technology has become extremely advanced in areas such as engine control and anti-skid brake control. These control systems require highly advanced control performance and high-speed microprocessors that can rapidly execute interrupt processing. Automotive engine control systems are now widely used in cars with high-speed, high-power engines. At present, it is generally acknowledged that high-performance engine control for a 10,000 rpm, 12-cylinder engine requires three or more conventional microprocessors. We fabricated an engine control system prototype incorporating the data-driven processor under development and installed it in an actual automobile. In this paper, the characteristics of the engine control program and simulation results are discussed first. Next, the structure of the engine control system prototype and the control performance obtained in the actual automobile are shown. Finally, from the results of software simulation and of installing the prototype with the data-driven processor, we conclude that a single-chip data-driven microprocessor can control a high-speed, high-power, 10,000 rpm, 12-cylinder automobile engine.
Masanori HARIYAMA Michitaka KAMEYAMA
Since carelessness in driving causes terrible traffic accidents, it is an important goal for a vehicle to avoid collisions autonomously. Real-time collision detection between a vehicle and obstacles will be a key target for next-generation car electronics systems. In collision detection, a large storage capacity is usually required to store the 3-D information on the obstacles located in a workspace. Moreover, high computational power is essential not only for coordinate transformation but also for the matching operation. In the proposed collision detection VLSI processor, the matching operation is drastically accelerated by a Content-Addressable Memory (CAM) that evaluates the magnitude relationships between an input word and all the stored words in parallel. A new obstacle representation based on a union of rectangular solids is also used to reduce the obstacle memory capacity, so that collision detection can be performed only by parallel magnitude comparison. A parallel architecture using several identical processor elements (PEs) performs the coordinate transformation at high speed based on the COordinate Rotation DIgital Computation (CORDIC) algorithms. The collision detection time is 5.2 ms using 20 PEs and five CAMs with a 42-kbit capacity.
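The CORDIC rotation that such PEs apply during coordinate transformation can be sketched as follows (a floating-point illustration of standard rotation-mode CORDIC, not the processor's fixed-point implementation): a vector is rotated using only shift-and-add style updates.

```python
import math

def cordic_rotate(x, y, angle, iterations=24):
    """Rotate (x, y) by `angle` radians using shift-and-add style updates."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    scale = 1.0
    for i in range(iterations):
        scale /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # CORDIC gain correction
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0               # steer the residual z to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * scale, y * scale

x, y = cordic_rotate(1.0, 0.0, math.pi / 6)
print(round(x, 6), round(y, 6))   # ≈ (cos 30°, sin 30°)
```

In hardware, the multiplications by 2.0**-i become barrel shifts and the angle table is a small ROM, which is what makes the transformation fast in each PE.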
Electronics and automobiles were bound together by the introduction of emission regulations in the 1970s. The rapid progress of control technology and of semiconductors, typified by the microcomputer, has brought the two still closer. Without electronics, it would be impossible to realize features such as the pursuit of comfort and the environmental and safety measures that must be added to the automobile's fundamental functions. Looking ahead, the role of electronics in achieving electric automobiles and the ultimate goal of "automatic driving" is ever-increasing. Automobiles have become indispensable in our lives, and the role of electronics will become increasingly important in evolving them further in harmony with society.
Masahiro TSUNOYAMA Masataka KAWANAKA Sachio NAITO
This paper proposes a reconfigurable parallel processor based on a two-dimensional linear cellular automaton model. The processor can be reconfigured quickly by exploiting the characteristics of the automaton used as its model. Moreover, it has a shorter data path between processing elements than a processor based on the one-dimensional linear cellular automaton model, which has been discussed previously. The processing elements are regarded as cells, and the operational states of the processor are treated as the states of the automaton. When faults are detected, the processor is reconfigured by changing its state under the state transition function determined by the weighting function of the automaton model. The reconfiguration completes within the clock period required for a single state transition. This processor is therefore highly effective for real-time data processing systems requiring high reliability.
The space-time tradeoff is a fundamental issue in designing a fault-tolerant real-time (responsive) system. Routing a message in a large computer network is most efficient when each node knows the full topology of the whole network; in hierarchical routing schemes, however, no node knows the full topology. In this paper, a tradeoff between the optimality of path length (message delay: time) and the amount of topology information per node (routing table size: space) is presented. The schemes analyzed are the K-scheme (by Kamoun and Kleinrock), the G-scheme (by Garcia and Shacham), and the I-scheme (by the authors). The analysis is performed by simulation experiments. The results show that, with respect to average path length, the I-scheme is superior to both the K-scheme and the G-scheme, and the K-scheme is better than the G-scheme; the average path length of the I-scheme is about 20% longer than the optimal path length. For routing table size, the three schemes rank in the reverse order. With respect to asymptotic routing table size, however, all three schemes have the same complexity, O(log n), where n is the number of nodes in the network.
The emerging discipline of responsive systems demands fault-tolerant and real-time performance in uniprocessor, parallel, and distributed computing environments. A new proposal for a responsiveness measure is presented, followed by an introduction of a model for responsive computing. The model, called CONCORDS (CONsensus/COmputation for Responsive Distributed Systems), is based on the integration of various forms of consensus and computation (progress or recovery). The consensus tasks include clock synchronization, diagnosis, checkpointing, scheduling, and resource allocation.
Typical processes controlled by hard real-time computer systems undergo several mutually exclusive modes of operation. By deterministically switching among a number of static schedules, a pre-run-time scheduled system is able to adapt to changing environmental situations. This paper presents concepts for the specification of mode changes, the construction of static schedules for modes and transitions, and the timely run-time execution of mode changes, in the context of pre-run-time scheduled hard real-time systems. While MARS is used to illustrate their application, the concepts are applicable to a variety of systems, and our methods adhere closely to those established for single modes. By decomposing the system into a set of disjoint modes, the design process and its comprehension are facilitated, testing effort is reduced significantly, and solutions become possible that do not exist if the activities of all modes are combined into a single schedule.
Atsushi SHIONOZAKI Mario TOKORO
A responsive network architecture is essential in future open distributed systems. In this paper, a framework that provides the foundations for a responsive network architecture for an internetworking environment is proposed. It is called the Virtually Separated Link (VSL) model. By incorporating this framework, communication of both data and control information can be completed in bounded time. Consequently, a protocol can initiate a recovery mechanism in bounded time, or allow an application to do the same. Its functionalities augment existing resource reservation protocols that support multimedia communication. An overview of a real-time network protocol that is based on this framework is also presented.