Jiahai WANG Zheng TANG Qiping CAO
In this paper, we introduce stochastic hill-climbing dynamics into the optimal competitive Hopfield network model (OCHOM) and propose a new algorithm that permits temporary increases in energy, which helps the OCHOM escape from local minima. In graph theory, a clique is a completely connected subgraph, and the maximum clique problem (MCP) is to find a clique of maximum size in a given graph. The MCP is a classic optimization problem in computer science and graph theory with many real-world applications, and it is known to be NP-complete. Recently, Galan-Marin et al. proposed the OCHOM for the MCP. It guarantees convergence to a global or local minimum of the energy function and performs better than other competitive neural approaches; however, the OCHOM has no mechanism for escaping from local minima. The proposed algorithm, which adds stochastic hill-climbing dynamics to the OCHOM, is applied to the MCP, and a number of instances have been simulated to verify it.
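As a rough illustration of stochastic hill-climbing for the MCP (a minimal sketch of the general idea, not the authors' OCHOM-based algorithm; the function name and parameters are hypothetical), the following Python fragment accepts clique-shrinking moves with a temperature-controlled probability so that the search can climb out of local minima:

```python
import math
import random


def stochastic_hill_climb_clique(adj, n_iters=10000, temperature=1.0):
    """Toy stochastic hill-climbing search for a large clique.

    adj is a symmetric 0/1 adjacency matrix.  The "energy" of a state is
    minus the clique size, so shrinking the clique is an energy increase
    and is accepted only with a Boltzmann-like probability.
    """
    n = len(adj)
    current = set()   # current clique (the empty set is trivially a clique)
    best = set()

    for _ in range(n_iters):
        v = random.randrange(n)
        if v in current:
            candidate = current - {v}             # removing a vertex keeps a clique
        else:
            if not all(adj[v][u] for u in current):
                continue                          # adding v would break the clique
            candidate = current | {v}

        delta = len(current) - len(candidate)     # energy change (size decrease)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate                   # accept, possibly uphill in energy
            if len(current) > len(best):
                best = set(current)
    return best
```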
Masahiro AGU Kazuo YAMANAKA Hiroki TAKAHASHI
"Stable phase-locked states" are found among the equilibria of the phasor model, known as a generalized Hopfield model, which has complex-valued local states on the unit circle centered at the origin. An asynchronous updating rule is assumed, and the energy-decreasing property is used to investigate the equilibrium states. Some of the equilibria are shown to be "fragile" in the sense that the energy is not locally convex at them. It is also shown that local convexity of the energy is assured by a sort of consistency between the equilibrium and the connection weights.
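For context, a commonly used energy for phasor networks with unit-modulus states $x_i$ and Hermitian weights $w_{ij}=\bar{w}_{ji}$ takes the form below (a standard form from the phasor-network literature, which may differ from the authors' exact definition):

\begin{equation}
  E = -\frac{1}{2}\sum_{i,j} \mathrm{Re}\!\left(\bar{x}_i\, w_{ij}\, x_j\right), \qquad |x_i| = 1 .
\end{equation}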
It is well known that the Hopfield Model (HM) for neural networks to solve the TSP suffers from three major drawbacks: (D1) it can converge to non-optimal local minimum solutions; (D2) it can also converge to non-feasible solutions; (D3) the results are very sensitive to the careful tuning of its parameters. A number of methods have been proposed that overcome (D1) well. In contrast, work on (D2) and (D3) has not been sufficient; the techniques have not been generalized to larger classes of constrained optimization problems, including the TSP. We first construct Extended HMs (E-HMs) that overcome both (D2) and (D3). The extension of the E-HM lies in the addition of a synapse dynamical system that cooperates with the current HM unit dynamical system. It is this synapse dynamical system that makes the TSP constraint hold at any final state, for any choice of the HM parameters and the initial state. We then generalize the E-HM further into a network that can solve a larger class of continuous optimization problems with a constraint equation, where both the objective function and the constraint function are non-negative and continuously differentiable.
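In the notation we adopt here (the symbols $f$, $g$, and $x$ are ours, not the paper's), the class of problems the generalized network addresses can be written as

\begin{equation}
  \min_{x}\; f(x) \quad \text{subject to} \quad g(x) = 0,
\end{equation}

with $f \ge 0$ and $g \ge 0$ continuously differentiable; the TSP is one instance, with $g$ typically encoding the requirement that each city appear in exactly one tour position and each position hold exactly one city.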
Zheng TANG Yuichi SHIRATA Okihiko ISHIZUKA Koichi TANNO
A calibrating analog-to-digital (A/D) converter employing a T-Model neural network is described. The T-Model neural-based A/D converter architecture is presented, with particular emphasis on the elimination of the local minima of the Hopfield neural network. Furthermore, a teacher-forcing algorithm is presented and used to synthesize the A/D converter and correct errors of the converter due to offset and device mismatch. An experimental A/D converter using standard 5-µm CMOS discrete IC circuits demonstrates high-performance analog-to-digital conversion and calibration.
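For background, the classic Hopfield-style neural A/D converter (Tank and Hopfield's formulation; the T-Model variant and its calibration terms are not reproduced here) chooses the bits $V_i \in \{0,1\}$ of an $N$-bit code for an analog input $x$ by minimizing an energy of roughly this form:

\begin{equation}
  E = \frac{1}{2}\Bigl(x - \sum_{i=0}^{N-1} 2^{i} V_i\Bigr)^{2}
      \;-\; \sum_{i=0}^{N-1} 2^{\,2i-1}\, V_i\,(V_i - 1),
\end{equation}

where the second term penalizes non-binary outputs and cancels the self-connection terms arising from the quadratic expansion.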
Zheng TANG Koichi TASHIMA Hirofumi HEBISHIMA Okihiko ISHIZUKA Koichi TANNO
A direct gradient descent learning algorithm on the energy function of Hopfield neural networks is proposed. The gradient descent learning is performed not on the usual error functions but directly on the Hopfield energy function. We demonstrate the algorithm by testing it on an analog-to-digital conversion problem and an associative memory problem.
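As a minimal sketch of what descending a Hopfield energy directly looks like (this only illustrates gradient descent on a generic quadratic energy with respect to the neuron states; the authors' learning rule, which also adapts network parameters along the energy gradient, is not reproduced, and all names are hypothetical):

```python
import numpy as np


def hopfield_energy(v, W, b):
    """Quadratic Hopfield energy E(v) = -1/2 v^T W v - b^T v."""
    return -0.5 * v @ W @ v - b @ v


def energy_gradient_descent(W, b, v0, lr=0.01, n_steps=500):
    """Descend the Hopfield energy directly via its gradient in the state v.

    For symmetric W, dE/dv = -(W v + b), so the update moves v along
    W v + b.  States are clipped to [0, 1] to mimic bounded neuron outputs.
    """
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(n_steps):
        grad = -(W @ v + b)                  # gradient of E with respect to v
        v = np.clip(v - lr * grad, 0.0, 1.0)
    return v, hopfield_energy(v, W, b)
```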
Zheng TANG Hirofumi HEBISHIMA Okihiko ISHIZUKA Koichi TANNO
This paper describes an MOS charge-mode version of a T-Model neural-based PCM encoder. The neural-based PCM encoding networks are designed, simulated, and implemented using MOS charge-mode circuits. Simulation results are given for both the T-Model and the Hopfield model CMOS charge-mode PCM encoders, and demonstrate that the T-Model neural-based encoder performs the PCM encoding perfectly, while the Hopfield one fails to do so.
In this letter, we demonstrate an experimental CMOS neural circuit towards an understanding of how particular computations can be performed by a T-Model neural network. The architecture and a digital hardware implementation of the learning T-Model network are presented. Our experimental results show that the T-Model allows immense collective network computations and powerful learning.
Zheng TANG Okihiko ISHIZUKA Masakazu SAKAI
We report on an experimentally observed hysteresis in the Hopfield networks and examine its effect on some important characteristics of these networks. A detailed mathematical description of the hysteresis phenomenon in the Hopfield networks is given. It suggests that the hysteresis results from the fully-connected interconnection of the Hopfield networks and that the hysteresis tends to make it difficult for the Hopfield networks to reach the global minimum. This paper presents a T-Model network approach to overcoming the hysteresis phenomenon by employing a half-connected interconnection; as a result, no hysteresis phenomenon is found in the T-Model networks. A theoretical analysis of the T-Model networks is also given. The hysteresis phenomenon in the Hopfield and T-Model networks is illustrated through experiments and simulations, which agree with the theoretical analysis very well.
Zheng TANG Okihiko ISHIZUKA Masakazu SAKAI
A technique for pulse code modulation (PCM) encoding using a T-Model neural network is described. The performance of both the T-Model and the Hopfield model neural-based PCM encoders is evaluated with PSpice simulations. The simulations show that the T-Model neural-based PCM encoder converges to a global minimum much more effectively and quickly than the Hopfield one.
Okihiko ISHIZUKA Zheng TANG Akihiro TAKEI Hiroki MATSUMOTO
This paper extends an earlier study of the T-Model neural network to its collective computational properties. We present arguments that it is necessary to use the half-interconnected T-Model networks rather than the fully-interconnected Hopfield model networks. The T-Model was developed in response to a number of observed weaknesses in the Hopfield model; this paper identifies these problems and shows how the T-Model overcomes them. The T-Model network is essentially a feedforward network that does not produce local minima in its computations. A concept for understanding the dynamics of the T-Model neural circuit is presented, and its performance is compared with that of the Hopfield model. The T-Model neural circuit is implemented and tested in standard CMOS technology. Simulations and experiments show that the T-Model allows immense collective network computations and does not produce local minima. High densities comparable to those of Hopfield model implementations have also been achieved.
Shiao-Lin LIN Jiann-Ming WU Cheng-Yuan LIOU
By close analogy with the annealing of solids, we devise a new algorithm, called APS, for the time evolution of both the state and the synapses of the Hopfield neural network. Through constrained random perturbation of the synapses of the network, the evolution of the state ignores the tremendous number of small minima and reaches a good minimum. The synapses resemble the microstructure of a network, and the new algorithm anneals this microstructure through a thermally controlled process, allowing us to obtain a good minimum of the Hopfield model efficiently. We show the potential of this approach for optimization problems by applying it to the well-known traveling salesman problem. The performance of the new algorithm has been supported by many computer simulations.
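A minimal sketch of the general idea described above (not the authors' APS algorithm; the function name, perturbation scheme, and cooling schedule are assumptions for illustration): the weight matrix is perturbed by symmetric random noise whose amplitude is gradually reduced while the binary state is updated asynchronously, which helps the state skip shallow minima.

```python
import numpy as np


def anneal_synapse_hopfield(W, b, v0, n_sweeps=200, noise0=0.5, decay=0.97, rng=None):
    """Evolve a binary Hopfield state while annealing random synapse noise.

    At each sweep the weights are perturbed by a symmetric, zero-diagonal
    Gaussian perturbation whose amplitude shrinks geometrically; the state
    is then updated asynchronously against the perturbed weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(v0)
    v = np.array(v0, dtype=float)
    noise = noise0
    for _ in range(n_sweeps):
        P = rng.normal(scale=noise, size=(n, n))
        P = (P + P.T) / 2                      # keep the perturbed weights symmetric
        np.fill_diagonal(P, 0.0)
        Wp = W + P
        for i in rng.permutation(n):           # asynchronous state update
            v[i] = 1.0 if Wp[i] @ v + b[i] > 0 else 0.0
        noise *= decay                         # cool the perturbation amplitude
    return v
```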