GUNGEN, a groupware system for new idea generation, has been developed. GUNGEN consists of a distributed and cooperative KJ method support system and an intelligent productive work card support system. The system was implemented on a network of personal computers. The distributed and cooperative KJ method is carried out on these computers: the ideas proposed by participants are classified into groups on the basis of similarity, and a conclusion is then derived. The intelligent productive work card support system can be used as a multimedia database for referring to previous data from the distributed and cooperative KJ method.
Koji NAKANO Toshimitsu MASUZAWA Nobuki TOKURA
A chordal ring network is a processor network in which n processors are arranged in a ring with additional chords. We study a distributed leader election algorithm on chordal ring networks and present trade-offs between the message complexity and the number of chords at each processor, and between the message complexity and the length of the chords, as follows. For every d (1 ≤ d ≤ log* n − 1), there exists a chordal ring network with d chords at each processor on which the message complexity of leader election is O(n(log^(d+1) n + log* n)), where log^(i) denotes the i-times-iterated logarithm. For every d (1 ≤ d ≤ log* n − 1), there exists a chordal ring network with log^(d+1) n + d + 1 chords at each processor on which the message complexity of leader election is O(dn). For every m (2 ≤ m ≤ n/2), there exists a chordal ring network whose chords have length at most m such that the message complexity of leader election is O((n/m) log n).
Yoshihide IGARASHI Kumiko KANAI Kinya MIURA Shingo OSAWA
We describe two information disseminating schemes, t-disseminate and t-R-disseminate, for a computer network with N processors in which each processor can send a message in t directions at each round. If no processors have failed, these schemes are time optimal. When at most t processors have failed, for t = 1 and t = 2 either scheme can broadcast information within any consecutive ⌈log_(t+1) N⌉ + 2 rounds, and for arbitrary t they can broadcast information within any consecutive ⌈log_(t+1) N⌉ + 3 rounds.
Yasuhiko TAKENAGA Shuzo YAJIMA
By adding functions to memories, highly parallel computation may be realized. We have proposed memory-based parallel computation models, which use a new functional memory as a SIMD-type parallel computation engine. In this paper, we consider models with communication between the words of the functional memory. A memory-based parallel computation model consists of a random access machine and a functional memory. On the functional memory, multiple words can be accessed in parallel according to partial matches with their memory addresses. The cube-FRAM model, which we propose in this paper, has a hypercube network on the functional memory. We prove that every problem in PSPACE can be solved in polynomial time on this model. We consider the operations on each word of the functional memory to be, in a sense, the essential ones for realizing the computational power of SIMD-type parallel computation.
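Partial-match addressing of the kind described above can be sketched in software (a hypothetical illustration only; the functional memory is a hardware design, and the names below are invented): all words whose addresses agree with a ternary pattern are selected together.

```python
def partial_match(addresses, pattern):
    # Select every address that agrees with a ternary pattern,
    # where '*' marks a don't-care bit position (illustrative only;
    # real functional memory does this selection in parallel).
    def matches(addr):
        return all(p in ('*', a) for p, a in zip(pattern, addr))
    return [a for a in addresses if matches(a)]

addrs = [format(i, "03b") for i in range(8)]  # '000' .. '111'
print(partial_match(addrs, "1*0"))  # ['100', '110']
```

A single pattern thus addresses a whole subcube of words at once, which is what makes SIMD-style operation over many words natural on such a memory.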
Xin-Min HUANG Yasumitsu MIYAZAKI
This paper presents a new connected associative memory neural network. In this network, a threshold function with two dynamical parameters is introduced. After analyzing the dynamical behavior and giving an upper bound on the memory capacity of the conventional connected associative memory neural network, we demonstrate that these parameters play an important role in the recall processes of the connected neural network. An approximate method for evaluating their optimum values is given. Furthermore, the optimum feedback stopping time of this network is discussed; in our network, the recall process therefore ends at the optimum feedback stopping time whether or not the state energy has reached a local minimum. Computer simulations show that the dynamical behavior of our network is greatly improved. Even when the number of learned patterns is as large as the number of neurons, the outputs of the recall processes statistically approach the patterns expected for the given initial inputs.
W. Thomas CATHEY Satoshi ISHIHARA Soo-Young LEE Jacek CHROSTOWSKI
We review the role of optics in interconnects, analog processing, neural networks, and digital computing. The properties of low interference, massively parallel interconnections, and very high data rates promise extremely high performance for optical information processing systems.