In this paper, we clarify fundamental properties of conventional back-propagation neural networks in learning chaotic dynamical systems through numerical experiments. We train three-layer networks with the back-propagation algorithm on data from two examples of two-dimensional discrete dynamical systems. We qualitatively evaluate the trained networks with two methods: analysis of the geometrical mapping structure, and reconstruction of an attractor by the recurrent feedback of the networks. We also quantitatively evaluate the trained networks by calculating the Lyapunov exponents, which indicate whether the dynamics of the recurrent networks are chaotic or periodic. In many cases, the trained networks show a high ability to extract the mapping structures of the original two-dimensional dynamical systems. We confirm that the Lyapunov exponents of the trained networks correspond to whether the attractors reconstructed by the recurrent networks are chaotic or periodic.
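As a rough illustration of the quantitative evaluation described above, the largest Lyapunov exponent of a two-dimensional discrete system can be estimated numerically. The sketch below uses the Hénon map as a stand-in example; the paper's own example systems and network details are not reproduced here.

```python
import numpy as np

# Benettin-style estimate of the largest Lyapunov exponent of a
# two-dimensional discrete map. The Henon map is an illustrative
# stand-in for the example systems used in the paper.
def henon(state, a=1.4, b=0.3):
    x, y = state
    return np.array([1.0 - a * x * x + y, b * x])

def henon_jacobian(state, a=1.4, b=0.3):
    x, _ = state
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def largest_lyapunov(n_iter=20000, n_transient=1000):
    state = np.array([0.1, 0.1])
    for _ in range(n_transient):          # discard the transient
        state = henon(state)
    v = np.array([1.0, 0.0])              # tangent vector
    log_sum = 0.0
    for _ in range(n_iter):
        v = henon_jacobian(state) @ v     # evolve the tangent vector
        norm = np.linalg.norm(v)
        log_sum += np.log(norm)           # accumulate local stretching
        v /= norm                         # renormalize
        state = henon(state)
    return log_sum / n_iter

lam = largest_lyapunov()
print(f"largest Lyapunov exponent ~ {lam:.3f}")
```

A positive exponent indicates chaotic dynamics, a non-positive one periodic dynamics; the same calculation can be applied to the map realized by a trained recurrent network.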
Nobuo KANOU Yoshihiko HORIO Kazuyuki AIHARA Shogo NAKAMURA
This paper presents an improved current-mode circuit for the implementation of a chaotic neuron model. The proposed circuit uses a switched-current integrator and a nonlinear output function circuit based on an operational transconductance amplifier as building blocks. It is shown by SPICE simulations and experiments using discrete elements that the proposed circuit well replicates the behavior of the chaotic neuron model.
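For reference, a minimal sketch of the single chaotic neuron map (Aihara's model) that such circuits are designed to replicate; the parameter values below are illustrative, not taken from the paper.

```python
import math

# Aihara-style chaotic neuron map (parameters are illustrative):
#   y(t+1) = k*y(t) - alpha*f(y(t)) + a,   output x(t) = f(y(t))
def f(y, eps=0.02):
    # steep sigmoid output function
    return 1.0 / (1.0 + math.exp(-y / eps))

def iterate(n, k=0.7, alpha=1.0, a=0.35, y0=0.0):
    y, xs = y0, []
    for _ in range(n):
        y = k * y - alpha * f(y) + a   # internal-state update
        xs.append(f(y))                # neuron output
    return xs

outputs = iterate(200)
print(f"output range: [{min(outputs):.3f}, {max(outputs):.3f}]")
```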
Makoto HIRAYAMA Eric Vatikiotis-BATESON Mitsuo KAWATO
This paper focuses on two areas in our effort to synthesize speech from neuromotor input using neural network models that effect transforms between cognitive intentions to speak, their physiological effects on vocal tract structures, and the subsequent realization as acoustic signals. The first area concerns the biomechanical transform between motor commands to muscles and the ensuing articulator behavior. Using physiological data of muscle EMG (electromyography) and articulator movements during natural English speech utterances, three articulator-specific neural networks learn the forward dynamics relating motor commands to the muscles and the motion of the tongue, jaw, and lips. Compared to a fully connected network mapping muscle EMG and motion for all three sets of articulators at once, this modular approach has improved performance by reducing network complexity and has eliminated some of the confounding influence of functional coupling among the articulators. Network independence has also allowed us to identify and assess the effects of technical and empirical limitations on an articulator-by-articulator basis. This is particularly important for modeling the tongue, whose complex structure is very difficult to examine empirically. The second area of progress concerns the transform between articulator motion and the speech acoustics. From the articulatory movement trajectories, a second neural network generates PARCOR (partial correlation) coefficients, which are then used to synthesize the speech acoustics. In the current implementation, articulator velocities have been added as inputs to the network. As a result, the model now follows the fast changes of the coefficients for consonants generated by relatively slow articulatory movements during natural English utterances. Although much work still needs to be done, progress in these areas brings us closer to our goal of emulating speech production processes computationally.
Joarder KAMRUZZAMAN Yukio KUMAGAI Hiromitsu HIKITA
We present an extension of the previously proposed 3-layer feedforward network called a cascaded network. Cascaded networks are trained to realize category classification employing binary input vectors and locally represented binary target output vectors. To realize a nonlinearly separable task, the extended cascaded network presented here is constructed by introducing high-order cross-product inputs at the input layer. In the construction of the cascaded network, two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be understood as a hidden layer which is trained to attain preassigned saturated outputs in response to the training set. In a cascaded network trained to categorize binary image patterns, saturation of the hidden outputs reduces the effect of corrupting disturbances present in the input. We demonstrate that the extended cascaded network is able to realize a nonlinearly separable task and yields better generalization ability than the backpropagation network.
High-speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. By using these systems, the asynchronous parallelism of neural networks can be naturally specified by functional data-driven programs and maximally exploited by pipelined and scalable data-driven processors. We demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back-propagation networks. The implementation is based on the exploitation of the network and training-set parallelisms inherent in these networks, and is evaluated using an image data compression network.
Teruyuki MIYAJIMA Takaaki HASEGAWA Misao HANEISHI
In this paper we consider multiuser detection using a neural network in a synchronous code-division multiple-access channel. In a code-division multiple-access channel, a matched filter is widely used as a receiver. However, when the relative powers of the interfering signals are large, i.e., under the near-far problem, the performance of the matched-filter receiver degrades. Although the optimum receiver for multiuser detection is superior to the matched-filter receiver in such situations, the optimum receiver is too complex to be implemented, so a simple technique to implement optimum multiuser detection is required. Recurrent neural networks, which consist of a number of simple processing units, can rapidly provide a collectively computed solution by seeking out a minimum of an energy function. On the other hand, optimum multiuser detection in a synchronous channel is carried out by maximizing a likelihood function. In this paper, it is shown that the energy function of the neural network is identical to the likelihood function of optimum multiuser detection, so the neural network can be used to implement the optimum multiuser detector. Performance comparisons among the optimum receiver, the matched-filter receiver, and the neural-network receiver are carried out by computer simulations. It is shown that the neural-network receiver can achieve near-optimum performance in several situations and that the local-minimum problem is rarely serious.
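The equivalence stated above can be sketched numerically: the maximum-likelihood metric of synchronous-CDMA detection is a quadratic form in the bipolar bit vector, so asynchronous Hopfield-style updates can only increase it. The spreading codes, powers, and noise level below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synchronous-CDMA setup (not the paper's parameters).
K, N = 3, 7                                            # users, code length
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # spreading codes
R = S.T @ S                                            # cross-correlations
b_true = rng.choice([-1.0, 1.0], size=K)
r = S @ b_true + 0.1 * rng.standard_normal(N)          # received signal
y = S.T @ r                                            # matched-filter outputs

def likelihood(b):
    # ML metric Omega(b) = 2 b^T y - b^T R b; the ML detector maximizes it
    return 2.0 * (b @ y) - b @ R @ b

b0 = np.sign(y)                 # conventional matched-filter decisions
b = b0.copy()
changed = True
while changed:                  # asynchronous updates; Omega never decreases
    changed = False
    for k in range(K):
        h = y[k] - (R[k] @ b - R[k, k] * b[k])  # local field on bit k
        new = 1.0 if h >= 0 else -1.0
        if new != b[k]:
            b[k] = new
            changed = True

print("matched filter:", b0, " neural network:", b)
```

Starting from the matched-filter decisions, each accepted flip increases the likelihood metric, so the final decision is never worse than the matched filter's.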
Joarder KAMRUZZAMAN Yukio KUMAGAI Hiromitsu HIKITA
It has been reported that the generalization performance of multilayer feedforward networks strongly depends on the attainment of saturated hidden outputs in response to the training set. A standard backpropagation (BP) network usually uses intermediate values of the hidden units as the internal representation of the training patterns. In this letter, we propose the construction of a 3-layer cascaded network in which two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be viewed as a hidden layer which is trained to attain preassigned saturated outputs in response to the training set. This network is particularly easy to construct for a linearly separable training set, and can also be constructed for nonlinearly separable tasks by using higher-order inputs at the input layer or by assigning proper codes at the intermediate layer, which can be obtained from a trained Fahlman and Lebiere network. Simulation results show that, at least when the training set is linearly separable, use of the proposed cascaded network significantly enhances generalization performance compared to the BP network, and that it also maintains high generalization ability for nonlinearly separable training sets. The dependence of the cascaded network's performance on the codes preassigned at the intermediate layer is discussed, and a suggestion about the preassigned coding is presented.
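A minimal sketch of the cascading idea on a linearly separable OR-style task, with illustrative saturated codes (0.9/0.1) preassigned at the intermediate layer; the delta rule here is plain gradient descent on a single sigmoid layer, and the task and codes are assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_delta(X, T, epochs=3000, lr=0.5):
    # a 2-layer (single weight layer) net trained by the delta rule
    W = rng.normal(scale=0.1, size=(X.shape[1] + 1, T.shape[1]))
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias column
    for _ in range(epochs):
        O = sigmoid(Xb @ W)
        W += lr * Xb.T @ ((T - O) * O * (1 - O))
    return W

def forward(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(Xb @ W)

# linearly separable task with locally represented targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
codes = np.array([[0.9, 0.1], [0.1, 0.9], [0.1, 0.9], [0.1, 0.9]])  # preassigned, saturated
targets = np.array([[1, 0], [0, 1], [0, 1], [0, 1]], float)         # one-hot classes

W1 = train_delta(X, codes)            # net 1: input -> intermediate code
W2 = train_delta(codes, targets)      # net 2: code -> class output
out = forward(W2, forward(W1, X))     # cascaded 3-layer network
print("predicted classes:", out.argmax(axis=1))
```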
Hiroshi MIYAO Masafumi KOGA Takao MATSUMOTO
High-speed learning of neural networks using the multifrequency oscillation method is demonstrated for the first time. During the learning of an analog neural network integrated circuit implementing exclusive-OR logic, the weight and threshold values converge to steady states within 2 ms at a learning speed of 2 mega-patterns per second.
A learning procedure for a three-layer neural network with a limited structure, called a multi-valued neural network, is proposed. The three-layer net has a single linear neuron in its output layer, and all hidden neurons have identical input weights. The network takes k+1 distinct stable values, where k is the number of hidden neurons. The proposed learning procedure consists of two parts, Phase I and Phase II. The former handles the learning of the weights between the hidden and output layers, and the latter that of the weights between the input and hidden layers. The network is applied to the classification of numerals, which shows the effectiveness of the proposed learning procedure.
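The structure described above can be sketched directly: k step-nonlinearity hidden neurons share one input weight vector but have distinct thresholds, so the single linear output neuron realizes a staircase function taking k+1 levels. The weights below are illustrative, not learned.

```python
import numpy as np

# Illustrative multi-valued network: k = 3 step units with a shared
# input weight vector w, distinct thresholds, and a linear output unit.
w = np.array([1.0, 2.0])                 # shared input weights
thresholds = np.array([0.5, 1.5, 2.5])   # one per hidden neuron (k = 3)
v = np.ones(len(thresholds))             # output-layer weights

def net(x):
    s = w @ x
    hidden = (s > thresholds).astype(float)  # k step units fire in order
    return v @ hidden                        # linear output neuron

xs = [np.array([a, b]) for a in np.linspace(-1, 2, 7)
                       for b in np.linspace(-1, 2, 7)]
values = sorted({float(net(x)) for x in xs})
print(values)  # at most k+1 = 4 distinct stable values
```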
Abrupt variations of attractors caused by argumental discreteness in non-Hermitian complex-valued neural networks are reported. When complex-valued associative memories are applied to dynamical processing, the weight matrices are in general constructed as non-Hermitian, so that they exert a driving force on the signal vectors. It is observed that competition between this argumental rotation force and the noise-suppression ability of the associative memories leads to trajectory distortions and abrupt variations of the attractors.
Shinichi SATO Takuro SATO Atsushi FUKASAWA
A method of estimating multiple sound-source locations based on a neural network algorithm, and its performance, are described in this paper. An evaluation function is first defined to reflect both the spherical-wavefront propagation of sound and the uniqueness of the solution. A neural network is then composed to satisfy the conditions imposed by this evaluation function; the locations of the multiple sources are given by the excited neurons. The proposed method is evaluated and compared with a deterministic method based on the Hyperbolic Method for the case of 8 sources on a square plane of 200 m × 200 m. It is found that the solutions are obtained correctly without any spurious or dropped solutions. The proposed method is also applied to another case in which 54 sound sources are composed of 9 sound groups, each of which contains 6 sound sources. The proposed method is found to be effective and sufficient for practical application.
Paolo ARENA Luigi FORTUNA Antonio GALLO Salvatore GRAZIANI Giovanni MUSCATO
Asynchronous machines are a topic of great interest in the research area of actuators. Due to the complexity of these systems and to the required performance, the modelling and control of asynchronous machines are difficult tasks. Problems arise when the control goals require accurate descriptions of the electric machine or when some electrical parameters must be identified; in the models employed it becomes very hard to take into account all the phenomena involved and therefore to make the error amplitude adequately small. Moreover, it is well known that, though an efficient control strategy requires knowledge of the flux vector, direct measurement of this quantity using ad hoc transducers is not a suitable approach, because it results in expensive machines. It is therefore necessary to estimate this vector using adequate dynamic non-linear models. Several different strategies have been proposed in the literature to address these issues in a suitable manner. In this paper the authors propose a neural approach both to derive NARMAX models for asynchronous machines and to design non-linear observers; the need to use complex models that may be inefficient for control aims is therefore avoided. The results obtained with the proposed strategy are compared with simulations based on a classical fifth-order non-linear model.
In this article a neural network learning scheme is described which is a generalization of VQ (Vector Quantization) and ART2a (a simplified version of Adaptive Resonance Theory 2). The basic differences between VQ and ART2a are exhibited, and it is shown how these differences are covered by the generalized scheme, which enables a rich set of variations on VQ and ART2a. One such variation uses a matching expression that combines the norms ||I|| and ||z_j|| of the input and weight vectors with the angle between them.
Nobuo KANOU Yoshihiko HORIO Kazuyuki AIHARA Shogo NAKAMURA
A model of a single neuron with chaotic dynamics is implemented with current-mode circuit design techniques. The existence of chaotic dynamics in the circuit is demonstrated by simulation with SPICE3. The proposed circuit is suitable for implementing a chaotic neural network composed of such neuron models on a VLSI chip.
Ryuzo TAKIYAMA Kimitoshi FUKUDOME
The three-layer neural network (TLNN) is treated, where the nonlinearity of each neuron is the signum function. First we propose an expression for the discriminant function of the TLNN, called the linear-homogeneous expression, which allows differentiation despite the signum nonlinearity of the neurons. Subsequently, a learning algorithm is proposed based on the linear-homogeneous form. The algorithm is an error-correction procedure, which gives a mathematical foundation to the heuristic error-correction learning procedures described in the literature.
Fumio UENO Takahiro INOUE Kenichi SUGITANI Badur-ul-Haque BALOCH Takayoshi YAMAMOTO
In this work, we introduce fuzzy inference into the conventional backpropagation learning algorithm for networks of neuron-like units. The procedure repeatedly adjusts the learning parameters and leads the system to converge at the earliest possible time. This technique is appropriate in the sense that near-optimum learning parameters are applied automatically in every learning cycle, whereas conventional backpropagation does not provide any well-defined rule for properly determining the values of the learning parameters.
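A hedged sketch of the general idea, not the paper's actual rule base: the learning rate is re-tuned every cycle from the observed error trend, with two illustrative rules (error falling: raise the rate slightly; error rising: cut it sharply). The task and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def fuzzy_eta(eta, err, prev_err):
    # illustrative two-rule adaptation of the learning rate (not the
    # paper's fuzzy inference system)
    if prev_err is None:
        return eta
    if err < prev_err:
        return min(eta * 1.05, 2.0)   # reward steady progress
    return max(eta * 0.5, 1e-3)       # punish overshoot

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [1]], float)        # simple separable task
Xb = np.hstack([X, np.ones((4, 1))])
W = rng.normal(scale=0.1, size=(3, 1))

eta, prev_err, errs = 0.5, None, []
for _ in range(500):
    O = sigmoid(Xb @ W)
    err = float(np.mean((T - O) ** 2))
    eta = fuzzy_eta(eta, err, prev_err)          # adapt before the step
    W += eta * Xb.T @ ((T - O) * O * (1 - O))    # delta-rule update
    prev_err = err
    errs.append(err)

print(f"error: {errs[0]:.3f} -> {errs[-1]:.3f}")
```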
This paper describes a text-independent speaker recognition method using predictive neural networks. For text-independent speaker recognition, an ergodic model which allows transitions to any other state, including self-transitions, is adopted as the speaker model, and one predictive neural network is assigned to each state. The proposed method was compared with quantization-distortion-based methods, HMM-based methods, and a discriminative-neural-network-based method through text-independent speaker identification experiments on 24 female speakers. The proposed method gave the highest identification rate, 100.0%, and the effectiveness of predictive neural networks for representing speaker individuality was clarified.
An adaptive algorithm for the fuzzy clustering of data is presented. The partitioning is fuzzified by adding an entropy term to the objective function. The proposed method produces more convex membership functions than those given by the fuzzy c-means algorithm.
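A minimal sketch of entropy-regularized fuzzy clustering under the usual formulation (the paper's exact objective is not reproduced): adding an entropy term to the within-cluster distortion turns the membership update into a softmax of negative squared distances. The data, initial centers, and the weight lambda below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated toy blobs and deterministic illustrative centers.
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)),
               rng.normal(3.0, 0.2, (20, 2))])
C = np.array([[0.5, 0.5], [2.5, 2.5]])   # initial cluster centers
lam = 0.5                                # entropy weight

for _ in range(30):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
    U = np.exp(-d2 / lam)
    U /= U.sum(axis=1, keepdims=True)    # softmax memberships
    C = (U.T @ X) / U.sum(axis=0)[:, None]  # membership-weighted centers

labels = U.argmax(axis=1)
print("centers:\n", C.round(2))
```

Unlike fuzzy c-means, where the fuzzifier exponent shapes the memberships, here the entropy weight lambda directly controls how soft the partition is.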
Masaya OHTA Yoichiro ANZAI Shojiro YONEDA Akio OGIHARA
This article analyzes properties of fully interconnected neural networks as a general method of solving combinatorial optimization problems. In particular, in order to escape local minima in this model, we theoretically analyze the relation between the diagonal elements of the connection matrix and the stability of the network. It is shown that the position of the global minimum of the energy function on the hypersphere in n-dimensional space is given by the eigenvector corresponding to the maximum eigenvalue of the connection matrix. It is then shown that the diagonal elements of the connection matrix can be improved without loss of generality. The equilibrium points of the improved network are classified according to their properties, and their stability is investigated. To show that the change of the diagonal elements improves the potential for global-minimum search, computer simulations are carried out using the theoretical values. According to the simulation results on 10 neurons, the success rate of obtaining the optimum solution is 97.5%. This result shows that improving the diagonal elements enhances the capability for global-minimum search.
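The eigenvector property stated above is easy to check numerically: on the unit hypersphere, the quadratic form x^T W x (the negative of the network energy, up to a factor) is maximized by the eigenvector of the largest eigenvalue of the symmetric connection matrix. The matrix below is random, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random symmetric connection matrix standing in for the paper's W.
n = 10
A = rng.standard_normal((n, n))
W = (A + A.T) / 2.0                    # symmetrize

eigvals, eigvecs = np.linalg.eigh(W)   # eigenvalues in ascending order
v_max = eigvecs[:, -1]                 # eigenvector of the largest one

quotients = []
for _ in range(1000):                  # random points on the unit sphere
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    quotients.append(x @ W @ x)

# no random unit vector beats the top eigenvector's Rayleigh quotient
print(max(quotients) <= v_max @ W @ v_max)
```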
Takeshi SHIMA Stephanie RINNERT
This paper discusses a multiple-valued memory circuit using floating-gate devices. The object of the paper is to provide a new and improved analog memory device which permits the storage of an amount of charge that accurately corresponds to the analog information to be stored.