Mamoru TANAKA Kenneth R. CROUNSE Tamás ROSKA
This paper describes highly parallel analog image coding and decoding by cellular neural networks (CNNs). The communication system in which the coder (C-) and decoder (D-) CNNs are embedded consists of a differential transmitter with an internal receiver model in the feedback loop. The C-CNN encodes the image through two cascaded techniques: structural compression and halftoning. The D-CNN decodes the received data through a reconstruction process, which includes a dynamic current distribution, so that the original input to the C-CNN can be recognized. The halftoning serves as a dynamic quantization that converts each pixel to a binary value depending on the neighboring values. We approach halftoning through the minimization of the error energy between the original gray image and the reconstructed halftone image, and structural compression from the viewpoints of topological and regularization theories. All dynamics are described by CNN state equations. Both the proposed coding and decoding algorithms use only local image information in a space-invariant manner; therefore, errors are distributed evenly and do not introduce the blocking effects found in DCT-based coding methods. In the future, the use of parallel inputs from on-chip photodetectors would allow direct dynamic quantization and compression of image sequences without the use of multiple-bit analog-to-digital converters. To validate our theory, a simulation was performed using the relaxation method on a 150-frame image sequence. Each input image was 256 × 256 pixels with 8 bits per pixel. The simulated fixed compression rate, not including the Huffman coding, was about 1/16 with a PSNR of 31 dB to 35 dB.
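As a rough illustration of halftoning viewed as minimizing an error energy between the gray image and a locally reconstructed halftone, the following minimal sketch uses a greedy pixel-flip search and a uniform local-average reconstruction filter; the function name, filter, and search strategy are assumptions and do not reproduce the CNN dynamics described in the paper.

```python
import numpy as np

def halftone_by_local_error(gray, iters=10, radius=1):
    """Greedy halftoning sketch: gray is assumed to be in [0, 1]."""
    h, w = gray.shape
    k = 2 * radius + 1
    b = (gray > 0.5).astype(float)            # initial binary image

    def error(img):
        # reconstruction = local average of the halftone over a k x k window
        pad = np.pad(img, radius, mode='edge')
        rec = sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)
        return np.sum((rec - gray) ** 2)      # error energy

    e = error(b)
    for _ in range(iters):
        changed = False
        for y in range(h):
            for x in range(w):
                b[y, x] = 1.0 - b[y, x]       # tentative flip
                e_try = error(b)
                if e_try < e:
                    e, changed = e_try, True
                else:
                    b[y, x] = 1.0 - b[y, x]   # revert if the flip does not help
        if not changed:
            break
    return b
```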
Naohiko SHIMIZU Gui-Xin CHENG Munemitsu IKEGAMI Yoshinori NAKAMURA Mamoru TANAKA
This paper describes a pipelining universal system of discrete-time cellular neural networks (DTCNNs). A new relaxation-based algorithm, called the Pipelining Gauss-Seidel (PGS) method, is used to solve the CNN state equations in a pipelined fashion. In the systolic system of N processor elements {PEi}, each PEi performs the convolutional computation (CC) of all cells while the preceding PEi-1 performs the CC of all cells taking precedence over it by the precedence interval number p. The expected maximum number of PEs for the speedup is given by n/p, where n is the number of cells. As an application, the encoding and decoding process of moving images is simulated.
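A rough timing sketch of the precedence constraint behind the PGS pipeline is given below; it is illustrative only, the function and variable names are assumptions, and the actual method operates on the CNN state equations rather than on an abstract schedule.

```python
import numpy as np

def pgs_timing(n_cells, n_sweeps, p):
    # t[k, j]: step at which the k-th Gauss-Seidel sweep finishes cell j.
    # Sweep k may update cell j only after it has finished cell j-1 and
    # after sweep k-1 has advanced p cells beyond j, so roughly n/p
    # processors can be kept busy on different sweeps at the same time.
    t = np.zeros((n_sweeps, n_cells), dtype=int)
    for k in range(n_sweeps):
        for j in range(n_cells):
            same_sweep = t[k, j - 1] if j > 0 else 0
            prev_sweep = t[k - 1, min(j + p, n_cells - 1)] if k > 0 else 0
            t[k, j] = max(same_sweep, prev_sweep) + 1
    return t   # t[-1, -1] is roughly n_cells + (n_sweeps - 1) * p for p << n_cells
```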
Masaji KATAGIRI Masakazu NAGURA
We apply neural networks to implement a line shape recognition/classification system. The purpose of employing neural networks is to eliminate target-specific algorithms from the system and to simplify it. The system needs only to be trained on samples. Shapes are captured by the following operations. Lines to be processed are segmented at inflection points. Each segment is extended from both ends by a certain percentage. The shape of each extended segment is captured as an approximate curvature. The curvature sequence is normalized by size in order to obtain a scale-invariant measure. Feeding this normalized curvature data to a neural network leads to position-, rotation-, and scale-invariant line shape recognition. According to our experiments, almost 100% recognition rates are achieved against 5% random modification and 50%-200% scaling. The experimental results show that our method is effective. In addition, since this method captures shape locally, partial lines (caused by overlapping, etc.) can also be recognized.
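A possible sketch of such a scale-invariant curvature descriptor is shown below; the fixed resampling count and the use of chord turning angles are assumptions, not the authors' exact procedure.

```python
import numpy as np

def curvature_descriptor(points, n_samples=32):
    # Resample the segment to n_samples points at equal arc-length spacing,
    # then take the turning angle between consecutive chords as an
    # approximate curvature.  With a fixed number of samples, turning
    # angles are unchanged under uniform scaling, giving a scale-invariant
    # sequence that can be fed to the neural network.
    pts = np.asarray(points, dtype=float)
    d = np.hypot(*np.diff(pts, axis=0).T)            # chord lengths
    s = np.concatenate(([0.0], np.cumsum(d)))        # cumulative arc length
    u = np.linspace(0.0, s[-1], n_samples)
    x = np.interp(u, s, pts[:, 0])
    y = np.interp(u, s, pts[:, 1])
    ang = np.arctan2(np.diff(y), np.diff(x))         # chord directions
    return np.diff(np.unwrap(ang))                   # turning angles (approx. curvature)
```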
Kazuharu TOYOKAWA Kozo KITAMURA Shin KATOH Hiroshi KANEKO Nobuyasu ITOH Masayuki FUJITA
An integrated pen interface system was developed to allow effective Japanese text entry. It consists of sub-systems for handwriting recognition, contextual post-processing, and enhanced Kana-to-Kanji conversion. The recognition sub-system uses a hybrid algorithm consisting of a pattern matcher and a neural network discriminator. Special care was taken to improve the recognition of non-Kanji and simple Kanji characters frequently used in fast data entry. The post-processor predicts consecutive characters on the basis of bigrams modified by the addition of parts of speech and the substitution of macro characters for Kanji characters. A Kana-to-Kanji conversion method designed for ease of use with a pen interface has also been integrated into the system. In an experiment in which 2,900 samples of Kanji and non-Kanji characters were obtained from 20 subjects, it was observed that the original recognition accuracy of 83.7% (the result obtained by using the pattern matching recognizer) was improved to 90.7% by adding the neural network discriminator, and that it was further improved to 94.4% by adding the post-processor. The improvement in recognition accuracy for non-Kanji characters was particularly marked.
Masakatsu MARUYAMA Hiroyuki NAKAHIRA Shiro SAKIYAMA Toshiyuki KOHDA Susumu MARUNO Yasuharu SHIMEKI
This paper discusses a digital neuroprocessor named the Quantizer Neuron Chip (QNC), which employs the quantizer neuron model and two newly developed schemes: "concurrent processing of quantizer neurons" and "removal of ineffective calculations." QNC simulates a neural network called the Multi-Functional Layered Network (MFLN) with 64 output neurons, 4672 quantizer neurons, and two million synaptic weights, and can be used for character or image recognition and learning. The processing speed of the chip reaches 1.6 µs per output neuron for recognition and 20 million connections updated per second (MCUPS) for learning. In addition, QNC supports multichip operation for increasing the size of the networks. We applied QNC to handwritten numeral recognition and realized high-speed recognition and learning. QNC is implemented in a 1.2 µm double-metal CMOS sea-of-gates technology and contains 27,000 gates on a 10.99 × 10.93 mm² chip.
Luigi RAFFO Silvio P. SABATINI Giacomo INDIVERI Giovanni NATERI Giacomo M. BISIO
The paper describes the architecture and the simulated performance of a memory-based chip that emulates human cortical processing in early visual tasks, such as texture segregation. The featural elements present in an image are extracted by a convolution block and subsequently processed by the cortical chip, whose neurons, organized into three layers, gain relational descriptions (intelligent processing) through recurrent inhibitory/excitatory interactions between both inter- and intra-layer parallel pathways. The digital implementation of this architecture directly maps the set of equations determining the status of the cortical network, to achieve an optimal exploitation of VLSI technology in neural computation. Neurons are mapped into a memory matrix whose elements are updated through a programmable computational unit that implements the synaptic interconnections. By using 0.5 µm CMOS technology, full cortical image processing can be attained on a single chip (20 × 20 mm² die) at a rate higher than 70 frames/second for images of 256 × 256 pixels.
Hiroshi UEDA Masaya OHTA Akio OGIHARA Kunio FUKUNAGA
The pseudoinverse rule, one of the major rules for determining the weight matrix of an associative memory, has a large capacity compared with other rules. However, it is well known that the rule yields small domains of attraction around the memory vectors on account of many spurious states. In this paper, we address this problem by subtracting a constant from all diagonal elements of the weight matrix. With this method, many spurious states disappear, and eigenvectors with negative eigenvalues are introduced on the orthogonal complement of the subspace spanned by the memory vectors. The method can be applied to two types of networks: binary networks and analog networks. Computer simulations were performed for both models. The results show that our improvement effectively extends the error-correcting ability of both networks.
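A minimal sketch of the idea follows, assuming ±1 memory vectors and an illustrative value of the subtracted constant c; the choice of c and the analog-network version follow the paper and are not reproduced here.

```python
import numpy as np

def pinv_weight_matrix(patterns, c=0.3):
    # W = X X^+ projects onto the subspace spanned by the stored patterns
    # (columns of X).  Subtracting a constant c from every diagonal element
    # gives negative eigenvalues on the orthogonal complement, so many
    # spurious states become unstable while the stored patterns remain
    # fixed points for small c.
    X = np.asarray(patterns, dtype=float).T        # columns are memory vectors
    W = X @ np.linalg.pinv(X)                      # projection onto span(X)
    return W - c * np.eye(W.shape[0])

def recall_binary(W, x, steps=50):
    # Synchronous recall in the binary (+1/-1) network.
    v = np.sign(x).astype(float)
    for _ in range(steps):
        v_new = np.sign(W @ v)
        v_new[v_new == 0] = 1.0
        if np.array_equal(v_new, v):
            break
        v = v_new
    return v
```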
Shigeki AISAWA Kazuhiro NOGUCHI Masafumi KOGA Takao MATSUMOTO Yoshihito AMEMIYA
A very-high-speed ten-neuron analog neural network LSI chip is fabricated for the first time using super self-aligned Si bipolar process technology. The LSI consists of ten neurons and 100 electrically modifiable synaptic weights. The nonlinear mapping function of the neural network, used to solve the four-bit parity problem, is successfully demonstrated at 150 mega-patterns/sec. The operation speed of this neural network is, to the best of the authors' knowledge, the fastest yet reported.
Pitch frequency is a basic characteristic of the human voice, and pitch extraction is one of the most important topics in speech recognition. This paper describes a simple but effective technique for obtaining the correct pitch frequency from candidates (pitch candidates) extracted by the short-range autocorrelation function. The correction is performed by a neural network that takes temporal continuity into account by referring to the pitch candidates of previous frames. Since the neural network is trained by the back-propagation algorithm with training data, it adapts to any speaker and achieves good correction without sensitive adjustment and tuning. Pitch extraction was performed for 3 male and 3 female announcers, and the proposed method improves the percentage of correct pitch from 58.65% to 89.19%.
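For reference, a hedged sketch of how pitch candidates might be taken from the short-range autocorrelation of one frame before the neural-network correction is given below; the function name, search range, and candidate count are assumptions rather than the paper's settings.

```python
import numpy as np

def pitch_candidates(frame, fs, fmin=60.0, fmax=400.0, n_cand=3):
    # Autocorrelation of a zero-mean frame, normalized by the lag-0 value;
    # the strongest lags inside the expected pitch range give the candidates.
    x = frame - frame.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags >= 0
    r = r / (r[0] + 1e-12)
    lo, hi = int(fs / fmax), int(fs / fmin)             # lag range for the pitch period
    lags = lo + np.argsort(r[lo:hi])[::-1][:n_cand]     # strongest lags (sketch, not true peak picking)
    return fs / lags                                     # candidate pitch frequencies in Hz
```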
Massimo CONTI Simone ORCIONI Claudio TURCHETTI
Artificial Neural Networks (ANNs) that are able to learn exhibit many interesting features making them suitable for application in several fields such as pattern recognition, computer vision, and so forth. Learning a given input-output mapping can be regarded as a problem of approximating a multivariate function. In this paper we report a theoretical framework for approximation based on the well-known sequences of functions named approximate identities. In particular, it is proven that such sequences are able to approximate a continuous function to any degree of accuracy. On the basis of these theoretical results, it is shown that the proposed approximation scheme maps into a class of networks which can efficiently be implemented with analog MOS VLSI or BJT integrated circuits. To demonstrate the validity of the proposed approach, a series of results is reported.
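For orientation, the standard notion of an approximate identity invoked here can be stated as follows; this is an illustrative textbook formulation, not necessarily the paper's exact one.

```latex
% Approximate identity: a sequence of kernels concentrating its mass at the origin
\[
  \varphi_n \ge 0, \qquad \int_{\mathbb{R}^d} \varphi_n(x)\,dx = 1, \qquad
  \lim_{n\to\infty}\int_{\|x\|>\delta} \varphi_n(x)\,dx = 0 \quad \text{for every } \delta>0 .
\]
% Convolution with such kernels recovers the original function in the limit
\[
  (f * \varphi_n)(x) = \int_{\mathbb{R}^d} f(x-y)\,\varphi_n(y)\,dy
  \;\longrightarrow\; f(x) \quad (n\to\infty),
\]
uniformly on compact sets for bounded continuous $f$.
```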
Kazuhiko SHIMADA Keisuke NAKANO Masakazu SENGOKU Takeo ABE
An alternative approach to the Dynamic Channel Assignment problem in cellular mobile systems is presented. It adaptively assigns channels by taking the cochannel interference level into account. The Dynamic Channel Assignment problem is modeled on a cellular system different from the conventional one. In this paper, we formulate the rearrangement problem in Dynamic Channel Assignment and propose a novel strategy for it. The proposed algorithm is based on an artificial neural network as a specific dynamical system, and it is successfully applied to the cellular system models. The computer simulation results show that the algorithm used for the rearrangement is an effective strategy for improving the traffic characteristics.
This paper describes a novel technique for realizing high-performance digital sequential circuits by using Hopfield neural networks. As examples of applications of neural networks to digital circuits, novel gate, full-adder, and latch circuits using neural networks, which have the global convergence property, are proposed. Here, global convergence means that the energy function is monotonically decreasing and each circuit always operates correctly independently of its initial values. Finally, several digital sequential circuits, such as a shift register and an asynchronous binary counter, are designed.
This paper presents the structure of an adaptive equalizer equipped with a neural network and a Viterbi decoder, and evaluates its performance in a fading environment by means of computer simulation.
The data-driven model of computation is well suited for flexible and highly parallel simulation of neural networks. First, the operational semantics of data-driven languages preserve the locality and functionality of neural networks and naturally describe their inherent parallelism. Second, asynchronous data-driven execution facilitates the implementation of large and scalable multiprocessor systems, which are necessary to obtain considerable simulation speedups. In this paper, we present a dynamic data-driven multiprocessor system and demonstrate its suitability for the parallel simulation of back-propagation neural networks. Two parallel implementations are described and evaluated using an image data compression network. The system is scalable, and as a result, performance improved proportionally with the increase in the number of processors.
Kitaek KWON Hisao ISHIBUCHI Hideo TANAKA
This paper proposes an approach for approximately realizing nonlinear mappings of interval vectors by interval neural networks. Interval neural networks in this paper are characterized by interval weights and interval biases; that is, the weights and biases are given by intervals instead of real numbers. First, an architecture of interval neural networks is proposed for dealing with interval input vectors. Interval neural networks with the proposed architecture map interval input vectors to interval output vectors by interval arithmetic. Some characteristic features of the nonlinear mappings realized by the interval neural networks are described. Next, a learning algorithm is derived. In the derived learning algorithm, the training data are pairs of interval input vectors and interval target vectors. Finally, the proposed approach is illustrated with a numerical example and compared with other approaches based on standard back-propagation neural networks with real-number weights.
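A small sketch of how one layer of such a network could propagate interval inputs through interval weights and biases by interval arithmetic is shown below; the shapes, the corner-product construction for the interval multiplication, and the sigmoid activation are assumptions consistent with the description above, not the paper's exact formulation.

```python
import numpy as np

def interval_dense(x_lo, x_hi, w_lo, w_hi, b_lo, b_hi):
    # x_lo, x_hi: interval input vector of length n; w_lo, w_hi: m x n interval
    # weights; b_lo, b_hi: interval biases of length m.  Each product of two
    # intervals is bounded by the min/max of its four corner products.
    cand = np.stack([w_lo * x_lo, w_lo * x_hi, w_hi * x_lo, w_hi * x_hi])
    z_lo = cand.min(axis=0).sum(axis=1) + b_lo
    z_hi = cand.max(axis=0).sum(axis=1) + b_hi
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # The sigmoid is monotone, so interval bounds map directly to output bounds.
    return sigmoid(z_lo), sigmoid(z_hi)
```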
Hideki SANO Atsuhiro NADA Yuji IWAHORI Naohiro ISHII
This paper proposes a new method for extracting feature-attentive regions in a trained multi-layer neural network. We define a function that calculates the degree of dependence of an output unit on an input unit. The value of this function can be used to investigate whether a trained network detects the feature regions in the training patterns. Three computer simulations are presented: (1) investigation of the basic characteristics of this function; (2) application of our method to a simple pattern classification task; (3) application of our method to a large-scale pattern classification task.
Masaya OHTA Akio OGIHARA Kunio FUKUNAGA
This article deals with the binary neural network with negative self-feedback connections as a method for solving combinatorial optimization problems. Although the binary neural network has a high convergence speed, it rarely finds the optimum solution, because the neuron to be updated is selected randomly at each state update. In this article, an improvement using negative self-feedback is proposed. First, it is shown that negative self-feedback can make some local minima unstable. Second, a selection rule is proposed and its properties are analyzed in detail. In the binary neural network with negative self-feedback, this selection rule is effective for escaping local minima. In order to confirm the effectiveness of this selection rule, computer simulations are carried out for the N-Queens problem. For N=256, the network is not caught in any local minimum and provides the optimum solution within 2654 steps (about 10 minutes).
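A minimal sketch of an asynchronous update step in a binary (0/1) network whose diagonal carries negative self-feedback is given below; the `order` argument merely stands in for the paper's selection rule, which is not reproduced here.

```python
import numpy as np

def binary_update_with_self_feedback(W, theta, v, order):
    # W: weight matrix whose diagonal entries W[i, i] < 0 are the negative
    # self-feedback; theta: thresholds; v: current 0/1 state vector;
    # order: sequence of neuron indices standing in for the selection rule.
    for i in order:
        u = W[i] @ v - theta[i]         # includes W[i, i] * v[i], the self-feedback term
        v[i] = 1.0 if u > 0 else 0.0    # the self-feedback can destabilize some local minima
    return v
```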
Yasumasa IKUNO Hiroaki HAWABATA Yoshiaki SHIRAO Masaya HIRATA Toshikuni NAGAHARA Yashio INAGAKI
Recently, the back-propagation method, one of the algorithms for training neural networks, has been widely applied to various fields because of its excellent characteristics. However, it has drawbacks, for example, slow learning speed, the possibility of falling into a local minimum, and the need to adjust a learning constant for every application. In this article we propose an algorithm that overcomes some of the drawbacks of back propagation by using an improved genetic algorithm.
In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. This algorithm makes full use of the properties of the cellular neural network, so that the associative memory has some advantages over a memory designed by the outer product method. It guarantees that each prototype is stored at an equilibrium point. In a practical implementation, it is easy to build the circuit because the weight matrix representing the connections between cells is not symmetric. The synchronous updating rule makes its association speed very fast compared with the Hopfield associative memory.
Iwao SEKITA Takio KURITA David K. Y. CHIU Hideki ASOH
The number of nodes in the hidden layer of a feed-forward layered network reflects an optimality condition of the network in coding a function. It also affects the computation time and the ability of the network to generalize. When an arbitrary number of hidden nodes is used in designing the network, redundant hidden nodes can often be observed. In this paper, a method for reducing hidden nodes is proposed under the condition that the reduced network maintains the performance of the original network within an accepted level of tolerance. This method can also be applied to estimate the performance of a network with fewer hidden nodes; the estimated performance indicates a lower bound on the actual performance of the network. Experiments were performed using Fisher's IRIS data, a set of SONAR data, and the XOR data for classification. The results suggest that a sufficient number of hidden nodes, fewer than the original number, can be estimated by the proposed method.