Tetsuo NISHI Hajime HARA Norikazu TAKAHASHI
We give necessary and sufficient conditions for a 1-D DBCNN (one-dimensional discrete-time binary cellular neural network) with an external input to be stable, expressed in terms of the connection coefficients. The results are a generalization of our previous ones [18],[19], in which the input was assumed to be zero.
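For concreteness, the sketch below simulates the kind of dynamics considered here: each cell updates its binary state to the sign of a weighted sum of neighboring states (A-template), external inputs (B-template), and a bias. The template values, boundary handling, and neighborhood radius are illustrative placeholders, not the conditions derived in the paper.

```python
import numpy as np

def dbcnn_step(x, u, a, b, z):
    """One synchronous update of a 1-D discrete-time binary CNN.

    x : current binary states (+1/-1), shape (n,)
    u : external inputs, shape (n,)
    a : A-template (feedback) coefficients, e.g. [a_-1, a_0, a_1]
    b : B-template (input) coefficients, same radius as a
    z : bias
    Values below are illustrative, not the stability conditions of the paper.
    """
    n, r = len(x), len(a) // 2
    x_pad = np.concatenate(([x[0]], x, [x[-1]]))   # boundary handling is an assumption
    u_pad = np.concatenate(([u[0]], u, [u[-1]]))
    nxt = np.empty_like(x)
    for i in range(n):
        s = np.dot(a, x_pad[i:i + 2 * r + 1]) + np.dot(b, u_pad[i:i + 2 * r + 1]) + z
        nxt[i] = 1 if s >= 0 else -1
    return nxt

# Iterate until the state stops changing; the network is stable if every
# trajectory settles into such a fixed point.
x = np.array([1, -1, 1, -1, 1])
u = np.array([1, 1, -1, -1, 1])
a, b, z = [0.2, 1.0, 0.2], [0.1, 0.5, 0.1], 0.0
for _ in range(20):
    x_new = dbcnn_step(x, u, a, b, z)
    if np.array_equal(x_new, x):
        break
    x = x_new
```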
Tetsuo NISHI Norikazu TAKAHASHI
The number of solutions of the nonlinear equation x = sgn(Wx) is discussed. The equation arises in determining the equilibrium points of a class of Hopfield neural networks. We impose conditions on W corresponding to the case where a Hopfield neural network has n neurons arranged on a ring, each neuron has connections only from the k preceding neurons, and the magnitudes of these k connections decrease as the distance between two neurons increases. We show that the maximum number of solutions in this case is extremely small and is independent of the number of neurons n if k is less than or equal to 4. We also show, by considering the case k = n-1, that the number of solutions in general grows exponentially with n.
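The setting can be made concrete with a brute-force enumeration. This is only a sketch: the ring-structured W below, the value of k, and the decreasing weights are illustrative, and sgn(0) is taken as +1 here.

```python
import itertools
import numpy as np

def count_solutions(W):
    """Count binary vectors x in {+1,-1}^n satisfying x = sgn(Wx) by enumeration.

    sgn(0) is taken as +1; the convention matters only in degenerate cases.
    """
    n = W.shape[0]
    count = 0
    for x in itertools.product([-1, 1], repeat=n):
        x = np.array(x)
        if np.array_equal(x, np.where(W @ x >= 0, 1, -1)):
            count += 1
    return count

# Illustrative ring-structured W: neuron i receives connections only from the
# k = 3 preceding neurons, with magnitudes decreasing with distance.
n, k = 8, 3
weights = [0.5, 0.25, 0.125]          # assumed values, w_1 > w_2 > w_3
W = np.zeros((n, n))
for i in range(n):
    for d in range(1, k + 1):
        W[i, (i - d) % n] = weights[d - 1]

print(count_solutions(W))
```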
Jun GUO Norikazu TAKAHASHI Tetsuo NISHI
A novel method for simplifying the decision functions of support vector machines (SVMs) is proposed in this paper. In our method, a decision function is first determined in the usual way using all training samples. Next, those support vectors that contribute little to the decision function are excluded from the training samples. Finally, a new decision function is obtained from the remaining samples. Experimental results show that the proposed method can effectively simplify the decision functions of SVMs without degrading the generalization capability.
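A minimal sketch of this train/prune/retrain procedure using scikit-learn follows. The contribution measure (the magnitude of the dual coefficient) and the fraction of support vectors removed are illustrative choices, not necessarily the criterion used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Step 1: train an SVM on all samples in the usual way.
svm = SVC(kernel="rbf", C=1.0).fit(X, y)

# Step 2: rank support vectors by a simple contribution measure
# (the magnitude of the dual coefficient) and mark the weakest 20% for removal.
# Both the measure and the fraction are assumptions made for this sketch.
alpha = np.abs(svm.dual_coef_).ravel()
weak = svm.support_[np.argsort(alpha)[: int(0.2 * len(alpha))]]

# Step 3: retrain on the remaining samples to obtain a simpler decision function.
mask = np.ones(len(X), dtype=bool)
mask[weak] = False
svm_simplified = SVC(kernel="rbf", C=1.0).fit(X[mask], y[mask])

print(len(svm.support_), "->", len(svm_simplified.support_), "support vectors")
```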
Takumi KIMURA Norikazu TAKAHASHI
Nonnegative Matrix Factorization (NMF) with sparseness and smoothness constraints has attracted increasing attention. When these properties are considered, NMF is usually formulated as an optimization problem in which a linear combination of an approximation error term and some regularization terms must be minimized under the constraint that the factor matrices are nonnegative. In this paper, we focus our attention on the error measure based on the Euclidean distance and propose a new iterative method for solving those optimization problems. The proposed method is based on the Hierarchical Alternating Least Squares (HALS) algorithm developed by Cichocki et al. We first present an example to show that the original HALS algorithm can increase the objective value. We then propose a new algorithm called the Gauss-Seidel HALS algorithm that decreases the objective value monotonically. We also prove that it has the global convergence property in the sense of Zangwill. We finally verify the effectiveness of the proposed algorithm through numerical experiments using synthetic and real data.
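For reference, a plain column-wise HALS update for minimizing ||V - WH||_F^2 under nonnegativity is sketched below. The regularization terms and the Gauss-Seidel modification that guarantees a monotone decrease are omitted; the constant eps is an implementation detail assumed here to avoid division by zero.

```python
import numpy as np

def hals_nmf(V, rank, n_iter=100, eps=1e-12):
    """Plain HALS updates for V ~ W H with W, H >= 0 (Euclidean error).

    This only illustrates the column-wise update scheme; the sparseness and
    smoothness terms and the modification proposed in the paper are omitted.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for j in range(rank):
            # Residual with component j excluded.
            R = V - W @ H + np.outer(W[:, j], H[j, :])
            # Update column j of W, then row j of H, keeping the others fixed.
            W[:, j] = np.maximum(R @ H[j, :], 0) / max(H[j, :] @ H[j, :], eps)
            H[j, :] = np.maximum(W[:, j] @ R, 0) / max(W[:, j] @ W[:, j], eps)
    return W, H

W, H = hals_nmf(np.abs(np.random.default_rng(1).random((30, 20))), rank=5)
```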
Norikazu TAKAHASHI Tetsuo NISHI
This paper gives a new sufficient condition for cellular neural networks with delay (DCNNs) to be completely stable. The result is a generalization of two existing stability conditions for DCNNs, and it also contains a complete stability condition for standard CNNs as a special case. Our new sufficient condition does not require the uniqueness of the equilibrium point of the DCNN and is independent of the length of the delay.
Hajime HARA Tetsuo NISHI Norikazu TAKAHASHI
In this paper we give necessary and sufficient conditions for two-dimensional discrete-time systems described by the signum function to be stable.
Hidenori SATO Tetsuo NISHI Norikazu TAKAHASHI
This paper investigates the behavior of one-dimensional discrete-time binary cellular neural networks with both A- and B-templates and gives necessary and sufficient conditions for such a network to be stable under unspecified fixed boundaries.
Tetsuo NISHI Norikazu TAKAHASHI Hajime HARA
We give necessary and sufficient conditions for a one-dimensional discrete-time autonomous binary cellular neural network to be stable in the case of a fixed boundary. The results are a complete generalization of our previous one [16], in which symmetric connections were assumed. The conditions are compared with several previously known stability conditions.
Jun GUO Tetsuo NISHI Norikazu TAKAHASHI
Analog Hopfield neural networks (HNNs) have been used to solve many kinds of optimization problems, in particular combinatorial problems such as the TSP, which can be described by an objective function and a set of equality constraints. When a minimization problem with equality constraints is solved using HNNs, however, the constraints are satisfied only approximately. In this paper we propose a circuit that rigorously realizes the equality constraints and whose energy function corresponds to the prescribed objective function. We use SPICE to solve the circuit equations of the proposed circuits. The proposed method is applied to several kinds of optimization problems, and the results are very satisfactory.
Daiki HIRATA Norikazu TAKAHASHI
Convolutional Neural Networks (CNNs) have shown remarkable performance in image recognition tasks. In this letter, we propose a new CNN model called the EnsNet, which is composed of one base CNN and multiple Fully Connected SubNetworks (FCSNs). In this model, the set of feature maps generated by the last convolutional layer of the base CNN is divided along the channel dimension into disjoint subsets, and these subsets are assigned to the FCSNs. Each FCSN is trained independently of the others so that it can predict the class label from the subset of feature maps assigned to it. The output of the overall model is determined by a majority vote of the base CNN and the FCSNs. Experimental results on the MNIST, Fashion-MNIST and CIFAR-10 datasets show that the proposed approach further improves the performance of CNNs. In particular, an EnsNet achieves a state-of-the-art error rate of 0.16% on MNIST.
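A rough PyTorch-style sketch of the scheme described above is given below. The layer sizes, number of FCSNs, and channel counts are placeholders; only the channel split and the majority vote follow the description.

```python
import torch
import torch.nn as nn

class EnsNetSketch(nn.Module):
    """Channel-split ensemble: one base CNN plus several fully connected
    subnetworks (FCSNs), each classifying a disjoint subset of the final
    feature maps. Sizes are placeholders for 28x28 grayscale input."""

    def __init__(self, n_classes=10, n_subnets=5, channels=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )  # -> (channels, 7, 7) feature maps
        self.base_head = nn.Linear(channels * 7 * 7, n_classes)
        self.per_subnet = channels // n_subnets
        self.fcsns = nn.ModuleList(
            nn.Sequential(nn.Flatten(), nn.Linear(self.per_subnet * 7 * 7, 100),
                          nn.ReLU(), nn.Linear(100, n_classes))
            for _ in range(n_subnets)
        )

    def forward(self, x):
        f = self.features(x)
        logits = [self.base_head(f.flatten(1))]
        # Split the feature maps along the channel dimension into disjoint subsets,
        # one subset per FCSN.
        for i, sub in enumerate(self.fcsns):
            chunk = f[:, i * self.per_subnet:(i + 1) * self.per_subnet]
            logits.append(sub(chunk))
        return logits  # one logit tensor per voter

    def predict(self, x):
        # Majority vote of the base CNN and the FCSNs.
        votes = torch.stack([l.argmax(dim=1) for l in self.forward(x)])
        return votes.mode(dim=0).values
```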