
Keyword Search Result

[Keyword] network (4507 hits)

Results 4501-4507 of 4507 hits

  • GUNGEN: Groupware for New Idea Generation System

    Jun MUNEMORI  Yoji NAGASAWA  

     
    PAPER
    Vol: E75-A No:2, Page(s): 171-178

    GUNGEN, a groupware system for new idea generation, has been developed. GUNGEN consists of a distributed and cooperative KJ method support system and an intelligent productive work card support system. The system was implemented on a network of personal computers. The distributed and cooperative KJ method is carried out on the computers: the ideas proposed by participants are classified into several groups on the basis of similarity, and a conclusion is then derived. The intelligent productive work card support system can be used as a multimedia database for referring to previous data from the distributed and cooperative KJ method.
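
    The abstract above describes classifying participants' ideas into groups by similarity. As a purely illustrative aid, the Python sketch below groups short idea cards by word overlap; the Jaccard measure, the 0.3 threshold, and the greedy grouping rule are assumptions for illustration, not taken from GUNGEN.

```python
# Hypothetical sketch of grouping idea cards by word overlap, in the spirit of
# the distributed and cooperative KJ method described above. The similarity
# measure (Jaccard) and the threshold are illustrative assumptions, not GUNGEN's.

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two idea cards."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def group_ideas(cards, threshold=0.3):
    """Greedily place each card into the first group containing a similar card."""
    groups = []  # each group is a list of cards
    for card in cards:
        for group in groups:
            if any(jaccard(card, member) >= threshold for member in group):
                group.append(card)
                break
        else:
            groups.append([card])
    return groups

print(group_ideas([
    "ideas shared over the network",
    "ideas shared over electronic mail",
    "multimedia database of work cards",
]))
```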

  • Distributed Leader Election on Chordal Ring Networks

    Koji NAKANO  Toshimitsu MASUZAWA  Nobuki TOKURA  

     
    PAPER
    Vol: E75-D No:1, Page(s): 58-63

    A chordal ring network is a processor network in which n processors are arranged in a ring with additional chords. We study distributed leader election on chordal ring networks and present trade-offs between the message complexity and the number of chords at each processor, and between the message complexity and the length of the chords, as follows. For every d (1 ≤ d ≤ log* n − 1) there exists a chordal ring network with d chords at each processor on which the message complexity for leader election is O(n(log^(d+1) n + log* n)). For every d (1 ≤ d ≤ log* n − 1) there exists a chordal ring network with log^(d+1) n + d + 1 chords at each processor on which the message complexity for leader election is O(dn). For every m (2 ≤ m ≤ n/2) there exists a chordal ring network whose chords have length at most m and on which the message complexity for leader election is O((n/m) log n).
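
    As a small reading aid, the sketch below builds the kind of topology the abstract refers to: n processors on a ring, each with additional chords. The chord length chosen here and the sample evaluation of the (n/m) log n message bound are arbitrary illustrative choices, not the constructions of the paper.

```python
# Illustrative chordal ring topology: a ring of n processors plus chords of the
# given lengths. The chord lengths below are arbitrary, not the paper's designs.
import math

def chordal_ring(n, chord_lengths):
    """Return an adjacency list: ring edges plus chords of the given lengths."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for step in [1, *chord_lengths]:
            adj[v].add((v + step) % n)
            adj[(v + step) % n].add(v)
    return adj

n, m = 16, 4
net = chordal_ring(n, chord_lengths=[m])
print(len(net[0]))                        # degree of a processor: 2 ring links + 2 chord links
print(math.ceil((n / m) * math.log2(n)))  # sample value of the (n/m) log n message bound
```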

  • Optimal Schemes for Disseminating Information and Their Fault Tolerance

    Yoshihide IGARASHI  Kumiko KANAI  Kinya MIURA  Shingo OSAWA  

     
    PAPER
    Vol: E75-D No:1, Page(s): 22-29

    We describe two information disseminating schemes, t-disseminate and t-Rdisseminate, in a computer network with N processors, where each processor can send a message in t directions at each round. If no processors have failed, these schemes are time optimal. When at most t processors have failed, for t = 1 and t = 2 either of these schemes can broadcast information within any consecutive log_{t+1} N + 2 rounds, and for an arbitrary t they can broadcast information within any consecutive log_{t+1} N + 3 rounds.
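
    To make the stated round counts concrete, the toy calculation below grows the set of informed processors by a factor of t+1 per fault-free round and compares the result with log_{t+1} N. It is only a back-of-the-envelope illustration of why such schemes are time optimal without failures; it does not reproduce t-disseminate, t-Rdisseminate, or their fault-tolerance analysis.

```python
# Fault-free broadcasting where every informed processor informs t new processors
# per round, so the informed set grows by a factor of t+1 each round. The choice
# of targets is left abstract; only the round count is of interest here.
import math

def rounds_to_broadcast(n_processors, t):
    informed, rounds = 1, 0
    while informed < n_processors:
        informed = min(n_processors, informed * (t + 1))
        rounds += 1
    return rounds

N = 1000
for t in (1, 2, 3):
    # simulated rounds vs. the ceil(log_{t+1} N) lower bound: the two agree
    print(t, rounds_to_broadcast(N, t), math.ceil(math.log(N, t + 1)))
```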

  • Computational Power of Memory-Based Parallel Computation Models with Communication

    Yasuhiko TAKENAGA  Shuzo YAJIMA  

     
    PAPER
    Vol: E75-D No:1, Page(s): 89-94

    By adding certain functions to memories, highly parallel computation may be realized. We have proposed memory-based parallel computation models, which use a new functional memory as a SIMD-type parallel computation engine. In this paper, we consider models with communication between the words of the functional memory. The memory-based parallel computation model consists of a random access machine and a functional memory. On the functional memory, multiple words can be accessed in parallel according to a partial match on their memory addresses. The cube-FRAM model, which we propose in this paper, has a hypercube network on the functional memory. We prove that problems in PSPACE can be solved in polynomial time on this model. We consider the operations on each word of the functional memory to be, in a sense, the operations essential for SIMD-type parallel computation to realize this computational power.
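
    The following is a rough, hypothetical sketch of the parallel-access idea described above: every word whose address partially matches a given pattern is updated in one logical step. The class name, the '0'/'1'/'*' pattern syntax, and the update operation are assumptions for illustration, not the paper's functional memory design, and the sequential loop only emulates what the hardware would do in parallel.

```python
# Emulation of partial-match parallel access: apply an operation to every word
# whose address matches a pattern containing don't-care positions ('*').
class FunctionalMemory:
    def __init__(self, address_bits):
        self.address_bits = address_bits
        self.words = [0] * (1 << address_bits)

    def _matches(self, address, pattern):
        bits = format(address, f"0{self.address_bits}b")
        return all(p in ("*", b) for p, b in zip(pattern, bits))

    def parallel_write(self, pattern, func):
        """Apply func to every word whose address partially matches the pattern."""
        for address in range(len(self.words)):
            if self._matches(address, pattern):
                self.words[address] = func(self.words[address])

mem = FunctionalMemory(address_bits=4)
mem.parallel_write("1***", lambda w: w + 1)  # touch the upper half of memory in one step
print(sum(mem.words))                        # 8 words were updated
```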

  • Connected Associative Memory Neural Network with Dynamical Threshold Function

    Xin-Min HUANG  Yasumitsu MIYAZAKI  

     
    PAPER-Bio-Cybernetics
    Vol: E75-D No:1, Page(s): 170-179

    This paper presents a new connected associative memory neural network. In this network, a threshold function with two dynamical parameters is introduced. After analyzing the dynamical behaviors and giving an upper bound on the memory capacity of the conventional connected associative memory neural network, it is demonstrated that these parameters play an important role in the recalling processes of the connected neural network. An approximate method of evaluating their optimum values is given. Further, the optimum feedback stopping time of this network is discussed. In our network, the recalling processes are therefore terminated at the optimum feedback stopping time, whether or not the state energy has reached a local minimum. Computer simulations show that the dynamical behaviors of our network are greatly improved: even when the number of learned patterns is as large as the number of neurons, the output sequences of the recalling processes statistically approach the patterns expected from their initial inputs.
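
    For orientation only, the sketch below runs a Hopfield-style recall loop with a time-varying threshold and a fixed stopping time, to illustrate the kind of dynamics the abstract discusses. The Hebbian weights, the geometric threshold schedule, and the stopping time are hypothetical choices, not the paper's network or formulas.

```python
# Hopfield-style associative recall with a decaying ("dynamical") threshold and a
# fixed feedback stopping time. All parameter choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))      # stored +/-1 patterns
W = (patterns.T @ patterns) / patterns.shape[1]   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def recall(probe, steps=10, theta0=0.5, decay=0.7):
    state, theta = probe.copy(), theta0
    for _ in range(steps):                        # stop at a fixed feedback time
        field = W @ state
        state = np.where(field > theta, 1, np.where(field < -theta, -1, state))
        theta *= decay                            # dynamical threshold parameter
    return state

noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=64)  # flip roughly a quarter of the bits
print(int(np.sum(recall(noisy) == patterns[0])), "of 64 bits recovered")
```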

  • Optical Information Processing Systems

    W. Thomas CATHEY  Satoshi ISHIHARA  Soo-Young LEE  Jacek CHROSTOWSKI  

     
    INVITED PAPER
    Vol: E75-C No:1, Page(s): 26-35

    We review the role of optics in interconnects, analog processing, neural networks, and digital computing. The properties of low interference, massively parallel interconnections, and very high data rates promise extremely high performance for optical information processing systems.

  • Optical Information Processing Systems

    W. Thomas CATHEY  Satoshi ISHIHARA  Soo-Young LEE  Jacek CHROSTOWSKI  

     
    INVITED PAPER
    Vol: E75-A No:1, Page(s): 28-37

    We review the role of optics in interconnects, analog processing, neural networks, and digital computing. The properties of low interference, massively parallel interconnections, and very high data rates promise extremely high performance for optical information processing systems.
