
Keyword Search Result

[Keyword] neural net (879 hits)

Showing 821-840 of 879 hits

  • On the Multiuser Detection Using a Neural Network in Code-Division Multiple-Access Communications

    Teruyuki MIYAJIMA  Takaaki HASEGAWA  Misao HANEISHI  

     
    PAPER
    Vol: E76-B No:8  Page(s): 961-968

    In this paper we consider multiuser detection using a neural network in a synchronous code-division multiple-access channel. In a code-division multiple-access channel, a matched filter is widely used as a receiver. However, when the relative powers of the interfering signals are large, i.e. under the near-far problem, the performance of the matched-filter receiver degrades. Although the optimum receiver for multiuser detection is superior to the matched-filter receiver in such situations, the optimum receiver is too complex to be implemented, so a simple technique for implementing optimum multiuser detection is required. Recurrent neural networks, which consist of a number of simple processing units, can rapidly provide a collectively computed solution by seeking out a minimum of their energy function. Optimum multiuser detection in a synchronous channel, in turn, is carried out by maximizing a likelihood function. In this paper, it is shown that the energy function of the neural network is identical to the likelihood function of optimum multiuser detection, so the neural network can be used to implement the optimum detector. Performance comparisons among the optimum receiver, the matched-filter receiver, and the neural-network receiver are carried out by computer simulations. They show that the neural-network receiver can achieve near-optimum performance in several situations and that local-minimum problems are rarely serious.
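
    The claimed equivalence is easy to state concretely. The sketch below uses our own symbols, not the paper's: y for the matched-filter output vector, R for the code cross-correlation matrix, and b for the vector of +/-1 data bits. The optimum detector maximizes the likelihood metric L(b) = 2*b'y - b'Rb, which is, up to sign and a constant, the energy of a Hopfield network with weights -R (zero diagonal) and bias y, so minimizing the energy performs maximum-likelihood detection.

        import numpy as np

        def matched_filter_detect(y):
            # Conventional receiver: sign of the matched-filter outputs.
            return np.sign(y)

        def ml_detect(y, R):
            # Optimum (maximum-likelihood) detector by exhaustive search;
            # feasible only for small K, which is why a Hopfield-style
            # network is attractive as a fast approximate minimizer.
            K = len(y)
            best_b, best_val = None, -np.inf
            for idx in range(2 ** K):
                b = 2 * np.array([(idx >> k) & 1 for k in range(K)]) - 1
                val = 2 * (b @ y) - b @ R @ b      # likelihood metric L(b)
                if val > best_val:
                    best_b, best_val = b, val
            return best_b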

  • Neural Network Approach to Characterization of Cirrhotic Parenchymal Echo Patterns

    Shin-ya YOSHINO  Akira KOBAYASHI  Takashi YAHAGI  Hiroyuki FUKUDA  Masaaki EBARA  Masao OHTO  

     
    PAPER-Biomedical Signal Processing
    Vol: E76-A No:8  Page(s): 1316-1322

    We have classified parenchymal echo patterns of cirrhotic liver into four types according to the size of hypoechoic nodular lesions. A neural network technique has been applied to the characterization of hepatic parenchymal diseases in ultrasonic B-scan texture. We employed a multi-layer feedforward neural network trained with the back-propagation algorithm. We carried out four kinds of pre-processing for the liver parenchymal patterns in the images and examined the performance obtained with each. We show four results using (1) only FFT magnitudes, (2) both FFT magnitudes and phase angles, (3) data normalized by the maximum value in the dataset, and (4) data normalized by the variance of the dataset. Among the four pre-processing treatments studied, the one combining FFT magnitudes and phase angles is found to be the most effective.
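
    As a hedged illustration of the winning pre-processing (the patch size and the use of log magnitudes are our assumptions), the input vector fed to the network could be built as follows:

        import numpy as np

        def fft_mag_phase_features(patch):
            # patch: 2-D array holding one B-scan texture region.
            spec = np.fft.fft2(patch)
            mags = np.log1p(np.abs(spec)).ravel()   # FFT magnitudes
            phases = np.angle(spec).ravel()         # FFT phase angles
            return np.concatenate([mags, phases])   # network input vector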

  • Breast Tumor Classification by Neural Networks Fed with Sequential-Dependence Factors to the Input Layer

    Du-Yih TSAI  Hiroshi FUJITA  Katsuhei HORITA  Tokiko ENDO  Choichiro KIDO  Sadayuki SAKUMA  

     
    PAPER-Medical Electronics and Medical Information
    Vol: E76-D No:8  Page(s): 956-962

    We applied an artificial neural network approach to classify possible tumors in mammograms as benign or malignant. A sequential-dependence technique, which calculates the degree of redundancy or patterning in a sequence, was employed to extract image features from the mammographic images. The extracted vectors were then used as input to the network. Our preliminary results show that the neural network can correctly classify benign and malignant tumors at an average rate of 85%. This accuracy indicates that the neural network approach with the proposed feature-extraction technique has potential utility in the computer-aided diagnosis of breast cancer.
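
    The paper's exact sequential-dependence factor is not reproduced here; as a generic stand-in for a "degree of redundancy or patterning in a sequence", a first-order redundancy score over a quantized pixel sequence could look like this:

        import numpy as np

        def redundancy(seq, levels):
            # Count first-order transitions between quantized gray levels.
            counts = np.zeros((levels, levels))
            for a, b in zip(seq, seq[1:]):
                counts[a, b] += 1
            p = counts.ravel() / counts.sum()
            nz = p[p > 0]
            h = -(nz * np.log2(nz)).sum()           # entropy of adjacent pairs
            return 1.0 - h / (2 * np.log2(levels))  # 0 = random, 1 = patterned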

  • A Real-Time Scheduler Using Neural Networks for Scheduling Independent and Nonpreemptable Tasks with Deadlines and Resource Requirements

    Ruck THAWONMAS  Norio SHIRATORI  Shoichi NOGUCHI  

     
    PAPER-Bio-Cybernetics
    Vol: E76-D No:8  Page(s): 947-955

    This paper describes a neural network scheduler for scheduling independent and nonpreemptable tasks with deadlines and resource requirements in critical real-time applications, in which a schedule must be obtained within a short time span. The proposed neural network scheduler is an integrated model of two Hopfield-Tank neural network models. To cope with deadlines, a heuristic policy modified from the earliest-deadline policy is embedded in the proposed model. Computer simulations show that the proposed neural network scheduler performs promisingly, with regard to the probability of generating a feasible schedule, compared with a scheduler executing a conventional algorithm that applies the earliest-deadline policy.
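
    For context, the conventional baseline referred to in the comparison can be sketched as a plain earliest-deadline-first feasibility check (a single resource and simultaneous release are assumed; the paper's Hopfield-Tank formulation itself is not reproduced):

        def edf_schedule(tasks):
            # tasks: list of (exec_time, deadline); returns an order or None.
            t, order = 0, []
            for c, d in sorted(tasks, key=lambda task: task[1]):
                t += c                  # nonpreemptable: run to completion
                if t > d:
                    return None         # a deadline is missed
                order.append((c, d))
            return order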

  • Hybrid Neural Networks as a Tool for the Compressor Diagnosis

    Manabu KOTANI  Haruya MATSUMOTO  Toshihide KANAGAWA  

     
    PAPER-Speech Processing
    Vol: E76-D No:8  Page(s): 882-889

    An attempt to apply neural networks to acoustic diagnosis of a reciprocating compressor is described. The proposed network, the Hybrid Neural Network (HNN), is composed of two multi-layered neural networks: an Acoustic Feature Extraction Network (AFEN) and a Fault Discrimination Network (FDN). The AFEN has multiple layers, and the middle hidden layer has fewer units than the others. The input patterns of the AFEN are logarithmic power spectra. The AFEN is trained by the error back-propagation method with target patterns for the output layer equal to the input patterns; after learning, the hidden layer therefore holds a compressed representation of the input. The architecture of the AFEN appropriate for acoustic diagnosis is examined, including the form of the activation function in the output layer, the number of hidden layers, and the numbers of units in the hidden layers. The FDN is composed of three layers and uses the same learning algorithm as the AFEN; the appropriate number of units in its hidden layer is also examined. The input patterns of the FDN are taken from the output of the hidden layer of the trained AFEN. The task of the HNN is to discriminate the types of faults in two of the compressor's elements, the valve plate and the valve spring. The performance of the FDN is compared across different inputs: the output of the AFEN's hidden layer, conventional cepstral coefficients, and filterbank outputs. Furthermore, the FDN itself is compared with a conventional pattern recognition technique based on feature-vector distance (the Euclidean distance measure), with input taken from the AFEN. The results show that the discrimination accuracy of the HNN is better than that of any other combination of discrimination method and input. An output criterion of the network for practical use is also discussed; the discrimination accuracy with this criterion is 85.4%, and no fault condition is ever mistaken for the normal condition. These results suggest that the proposed network is effective for acoustic diagnosis.
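
    In modern terms the AFEN is an autoencoder: the targets equal the inputs, so the narrow middle layer learns a compressed code of the log power spectrum, which then feeds the FDN. A structural sketch (layer sizes are assumptions; the back-propagation training loop is omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hidden = 128, 16                       # assumed sizes
        W1 = rng.normal(0, 0.1, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.1, (n_in, n_hidden)); b2 = np.zeros(n_in)

        def encode(x):
            # Compressed acoustic features taken from the hidden layer.
            return np.tanh(W1 @ x + b1)

        def reconstruct(x):
            # Training drives this output toward x itself (target = input).
            return W2 @ encode(x) + b2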

  • Hardware Architecture for Kohonen Network

    Hidetoshi ONODERA  Kiyoshi TAKESHITA  Keikichi TAMARU  

     
    PAPER-Neural Networks and Chips
    Vol: E76-C No:7  Page(s): 1159-1166

    We propose a fully digital architecture for the Kohonen network suitable for VLSI implementation. The proposed architecture adopts a functional memory type parallel processor (FMPP) architecture, which has a structure similar to a content-addressable memory (CAM). One word of the CAM is regarded as a processing element, and a group of elements forms a neuron. All processing elements execute the same operation bit-serially but processor-parallel, so the number of instructions needed to realize the network algorithm is independent of the number of neurons in the network. With reference to a previously reported CAM, we estimate that a 96-neuron network for speech recognition could be integrated on three chips using a 1.2 µm process and would operate 50 times faster than sequential hardware. Owing to its highly regular memory structure, the proposed hardware architecture is well suited to current VLSI technology.
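
    The processor-parallel, bit-serial principle can be modeled in a few lines (the word width and operation set are assumptions): one instruction sequence, whose length depends only on the bit width, operates on every CAM word at once, so the instruction count does not grow with the number of neurons.

        import numpy as np

        def bitserial_add(acc_bits, operand_bits):
            # acc_bits, operand_bits: (num_PEs, width) 0/1 arrays, LSB first.
            carry = np.zeros(acc_bits.shape[0], dtype=int)
            for b in range(acc_bits.shape[1]):   # one "instruction" per bit
                total = acc_bits[:, b] + operand_bits[:, b] + carry
                acc_bits[:, b] = total % 2       # sum bit for every PE at once
                carry = total // 2               # ripple carry, still parallel
            return acc_bits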

  • A Copy-Learning Model for Recognizing Patterns Rotated at Various Angles

    Kenichi SUZAKI  Shinji ARAYA  Ryozo NAKAMURA  

     
    LETTER
    Vol: E76-A No:7  Page(s): 1207-1211

    In this paper we discuss a neural network model that can recognize patterns rotated at various angles. The model employs copy learning, a learning method entirely different from those used in conventional models. Copy learning attains the desired objective in a short time by copying the result of basic learning according to certain rules. A model using this method can recognize patterns rotated at various angles without requiring mathematical preprocessing. It involves two processes: first, it learns only the standard patterns using part of the network; then it copies the result of that learning to the unused part of the network and recognizes unknown input patterns using the whole network. The model's merits over conventional models are that it substantially reduces the time required for learning and recognition and that it can also recognize the rotation angle of the input pattern.

  • Development and Fabrication of Digital Neural Network WSIs

    Minoru FUJITA  Yasushi KOBAYASHI  Kenji SHIOZAWA  Takahiko TAKAHASHI  Fumio MIZUNO  Hajime HAYAKAWA  Makoto KATO  Shigeki MORI  Tetsuro KASE  Minoru YAMADA  

     
    PAPER-Neural Networks and Chips
    Vol: E76-C No:7  Page(s): 1182-1190

    Digital neural networks are suitable for WSI implementation because their noise immunity is high, they have a fault-tolerant structure, and the use of a bus architecture can reduce the number of interconnections between neurons. To investigate the feasibility of WSIs, we integrated either 576 conventional neurons or 288 self-learning neurons on a 5-inch wafer, using 0.8-µm CMOS technology and three metal layers. We also developed a new electron-beam direct-writing technology that simplifies the fabrication of VLSI chips and wafer-level interconnections. The fabricated 288-neuron self-learning WSIs had as many as 230 functional neurons.

  • Forced Formation of a Geometrical Feature Space by a Neural Network Model with Supervised Learning

    Toshiaki TAKEDA  Hiroki MIZOE  Koichiro KISHI  Takahide MATSUOKA  

     
    LETTER
    Vol: E76-A No:7  Page(s): 1129-1132

    Simulating neural network models to investigate the necessary conditions for object recognition is one way to gain insight into the neuronal representation of objects in the brain. In the present study, we trained a three-layered neural network with the back-propagation algorithm to form a geometrical feature representation in its output layer. After training on 73 learning examples, the network recognized 65 test patterns made from various combinations of the geometrical features with 95.3% appropriate responses. On the basis of their effects on the output layer, the hidden-layer units could be classified into four types.

  • Non von Neumann Chip Architecture--Present and Future--

    Tadashi AE  Reiji AIBARA  

     
    INVITED PAPER
    Vol: E76-C No:7  Page(s): 1034-1044

    Recent non von Neumann chip architectures fall mainly into two classes: AI architectures and neural architectures. We focus on these two categories and introduce representatives of each with a brief history. As long as it remains language-oriented, the AI chip architecture can hardly escape the von Neumann architecture. The neural architecture, however, may yield an essentially new computer architecture once new device technologies support it; in particular, optoelectronics and quantum electronics promise a number of powerful technologies.

  • Deterministic Boltzmann Machine Learning Improved for Analog LSI Implementation

    Takashi MORIE  Yoshihito AMEMIYA  

     
    PAPER-Neural Networks and Chips
    Vol: E76-C No:7  Page(s): 1167-1173

    This paper describes the learning performance of the deterministic Boltzmann machine (DBM), a promising neural network model suitable for analog LSI implementation. (i) A new learning procedure suitable for LSI implementation is proposed: fully on-line learning in which different sample patterns are presented in consecutive clamped and free phases and the weights are modified in each phase. This procedure needs no extra memory for the learning operation and reduces the chip area and power consumption for learning by 50 percent. (ii) Learning in a layer-type DBM with one output unit has characteristic local minima that reduce the effective number of available hidden units; effective methods to avoid these local minima are proposed. (iii) Although DBM learning is not suitable for mapping problems with analog target values, it is useful for analog data discrimination problems.
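
    A sketch of the fully on-line rule as described (the mean-field activations s and the learning rate are assumptions): the Hebbian term is applied during a clamped phase and the anti-Hebbian term during the following free phase, so no extra memory of clamped-phase statistics is needed.

        import numpy as np

        def dbm_update(W, s, phase, eta=0.01):
            # s: mean-field unit activations observed in the current phase.
            sign = 1.0 if phase == "clamped" else -1.0
            W += sign * eta * np.outer(s, s)   # Hebbian / anti-Hebbian term
            np.fill_diagonal(W, 0.0)           # no self-connections
            return W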

  • A Programmable Parallel Digital Neurocomputer

    Yoshiyuki SHIMOKAWA  Yutaka FUWA  Naruhiko ARAMAKI  

     
    PAPER-Neural Networks and Chips
    Vol: E76-C No:7  Page(s): 1197-1205

    We developed a programmable, high-performance, high-speed neurocomputer for large neural networks using ASIC neurocomputing chips fabricated in CMOS VLSI technology. The neurocomputer consists of one master node and multiple slave nodes connected by two data paths, a broadcast bus and a ring bus. The nodes are implemented with ASIC chips, each chip containing several nodes. Each node has four types of computation hardware that can be cascaded in series to form a pipeline, and the processing speed is proportional to the number of nodes. The neurocomputer is built on one printed circuit board carrying 65 VLSI chips and offers 1.5 billion connections/sec. It uses a SIMD organization for easy programming and simple hardware, and the pipeline lets it execute complicated computations, memory access, memory address control, and data-path control in a single instruction and a single time step. By programming, the neurocomputer processes forward and backward calculations of multilayer perceptron type neural networks, LVQ, feedback-type neural networks such as the Hopfield model, and other types. To carry out neural computation effectively and simply on a SIMD neurocomputer, new parallel processing methods such as delayed instruction execution and reconfiguration are proposed.
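
    The broadcast-bus step can be modeled simply (an interpretation, not the machine's microcode): the master broadcasts one activation per cycle while every slave node multiply-accumulates its own row in parallel, so time grows with the number of inputs rather than the number of nodes.

        import numpy as np

        def broadcast_matvec(W, x):
            # W: (num_nodes, num_inputs) weights, one row per slave node.
            acc = np.zeros(W.shape[0])
            for j, xj in enumerate(x):     # one broadcast-bus cycle per input
                acc += W[:, j] * xj        # all nodes accumulate in parallel
            return acc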

  • The Advantages of a DRAM-Based Digital Architecture for Low-Power, Large-Scale Neuro-Chips

    Takao WATANABE  Masakazu AOKI  Katsutaka KIMURA  Takeshi SAKATA  Kiyoo ITOH  

     
    PAPER-Neural Networks and Chips
    Vol: E76-C No:7  Page(s): 1206-1214

    The advantages of a neuro-chip architecture based on a DRAM are demonstrated through a discussion of the general issues of memory-based neuro-chip architectures and a comparison with a chip based on an SRAM. The performance of both chips is compared assuming digital operation, a 1.5-V supply voltage, a 10⁶-synapse neural network capability, and a 0.5-µm CMOS design rule. The use of a one-transistor DRAM cell array for storing synapse weights results in a chip 55% smaller than an SRAM-based chip with the same 8-Mbit memory capacity and the same number of processing elements. No additional refresh operations for the DRAM cell array are necessary during the processing of the neural networks, because all the synapse weights in the array are transferred to the processing elements during processing, and the DRAM cells are automatically refreshed when they are selected. The precharge operation of the DRAM cell array degrades the processing speed; however, a processing speed of 1.37 GCPS is expected for the DRAM-based chip, comparable to the 1.71 GCPS of the SRAM-based chip with the same 256 parallel processing elements. A DRAM cell array has the additional advantage of lower power dissipation in this specific neuro-chip usage: its dynamic operation results in 10% lower operating power dissipation than a chip using an SRAM cell array at the same processing speed of 1.37 GCPS. That lower operating power dissipation lets a DRAM-based chip run longer on a 1.5-V dry cell under intermittent daily use, even though the SRAM cell array dissipates little power in data-holding mode.

  • A Digital Neural Network Coprocessor with a Dynamically Reconfigurable Pipeline Architecture

    Takayuki MORISHITA  Youichi TAMURA  Takami SATONAKA  Atsuo INOUE  Shin-ichi KATSU  Tatsuo OTSUKI  

     
    PAPER-Neural Networks and Chips
    Vol: E76-C No:7  Page(s): 1191-1196

    We have developed a digital coprocessor with a dynamically reconfigurable pipeline architecture designed for layered neural networks with on-chip learning. The coprocessor attains a learning speed of 18 MCUPS, approximately twenty times that of a conventional DSP. It scales to larger multi-layer networks by means of network decomposition and a distributed processing approach.

  • Three Dimensional Optical Interconnection Technology for Massively-Parallel Computing Systems

    Kazuo KYUMA  Shuichi TAI  

     
    INVITED PAPER
    Vol: E76-C No:7  Page(s): 1070-1079

    Three-dimensional (3-D) optics offers potential advantages over electronics for massively parallel systems from the viewpoint of information transfer. The purpose of this paper is to survey some aspects of 3-D optical interconnection technology for future massively parallel computing systems. First, the state of the art of current optoelectronic array devices for building interconnection networks is described, with emphasis on those based on semiconductor technology. Next, the principles, basic architectures, and several examples of 3-D optical interconnection systems in neural networks and multiprocessor systems are described. Finally, the issues that must be solved to put such technology into practical use are summarized.

  • Robust Performance Using Cascaded Artificial Neural Network Architecture

    Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Digital Signal Processing
    Vol: E76-A No:6  Page(s): 1023-1030

    It has been reported that the generalization performance of multilayer feedforward networks strongly depends on attaining saturated hidden outputs in response to the training set. A standard backpropagation (BP) network mostly uses intermediate values of the hidden units as the internal representation of the training patterns. In this letter, we propose constructing a 3-layer cascaded network in which two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be viewed as a hidden layer trained to attain preassigned saturated outputs in response to the training set. This network is particularly easy to construct for a linearly separable training set, and can also be constructed for nonlinearly separable tasks by using higher-order inputs at the input layer or by assigning proper codes at the intermediate layer, which can be obtained from a trained Fahlman and Lebiere network. Simulation results show that, at least when the training set is linearly separable, the proposed cascaded network significantly enhances generalization performance compared with a BP network, and it also maintains high generalization ability for nonlinearly separable training sets. The dependence of the cascaded network's performance on the codes preassigned at the intermediate layer is discussed, and a suggestion about the preassigned coding is presented.
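
    A hedged sketch of the building block (sigmoid units and these shapes are our assumptions; the letter's exact delta-rule variant may differ): each 2-layer network is a single trainable weight layer, and cascading stacks the two trained layers so the intermediate codes become saturated hidden outputs.

        import numpy as np

        def train_delta(X, T, epochs=100, eta=0.1):
            # X: (n_samples, n_in) inputs; T: (n_samples, n_out) targets.
            W = np.zeros((T.shape[1], X.shape[1]))
            for _ in range(epochs):
                for x, t in zip(X, T):
                    y = 1.0 / (1.0 + np.exp(-(W @ x)))        # sigmoid output
                    W += eta * np.outer((t - y) * y * (1 - y), x)
            return W

        # Usage: W_a = train_delta(X, codes); W_b = train_delta(codes, T);
        # the cascade sigmoid(W_b @ sigmoid(W_a @ x)) is the 3-layer network.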

  • Structural Evolution of Neural Networks Having Arbitrary Connections by a Genetic Method

    Tomoharu NAGAO  Takeshi AGUI  Hiroshi NAGAHASHI  

     
    PAPER-Bio-Cybernetics
    Vol: E76-D No:6  Page(s): 689-697

    A genetic method is proposed for generating a neural network whose structure and connection weights are both adequate for a given task. A neural network having arbitrary connections is regarded as a virtual living thing whose genes represent the connections among its neural units. The effectiveness of a network is estimated from its time-sequential input and output signals, and excellent individuals, namely appropriate neural networks, emerge over successive generations. The basic principle of the method and its applications are described. As an example of evolution from randomly generated networks to feedforward networks, an XOR problem is dealt with, and an action-control problem is used to produce networks containing feedback and mutual connections. The proposed method is useful for designing a neural network whose adequate structure is unknown.
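
    The genetic operators can be sketched generically (the mutation rate, scale, and uniform crossover are assumptions; the paper's encoding of connection genes may differ):

        import numpy as np

        rng = np.random.default_rng(0)

        def mutate(W, rate=0.05, scale=0.3):
            # Perturb a random subset of connection weights.
            mask = rng.random(W.shape) < rate
            return W + mask * rng.normal(0.0, scale, W.shape)

        def crossover(Wa, Wb):
            # Uniform crossover: each connection gene from either parent.
            mask = rng.random(Wa.shape) < 0.5
            return np.where(mask, Wa, Wb)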

  • Learning of a Multi-Valued Neural Network and Its Application

    Ryuzo TAKIYAMA  Koichiro KUBO  

     
    PAPER-Nonlinear Circuits and Neural Nets
    Vol: E76-A No:6  Page(s): 873-877

    A learning procedure is proposed for a three-layer neural network with a restricted structure, called a multi-valued neural network. The three-layer net has a single linear neuron in its output layer, and the input weights of its hidden neurons are all identical. The network takes k+1 distinct stable values, where k is the number of hidden neurons. The proposed learning procedure consists of two parts, Phase I and Phase II: the former learns the weights between the hidden and output layers, and the latter learns those between the input and hidden layers. The network is applied to the classification of numerals, which shows the effectiveness of the proposed learning procedure.
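
    One reading of this structure (distinct hidden thresholds are our assumption) is sketched below: because all hidden units share one input weight vector w, their on/off patterns are nested as the projection w.x crosses the sorted thresholds, so the linear output unit can take at most k+1 distinct values.

        import numpy as np

        def multi_valued_output(x, w, thetas, v):
            # w: shared input weight vector; thetas: k hidden thresholds;
            # v: weights from the k hidden units to the linear output unit.
            h = (w @ x - np.asarray(thetas) > 0).astype(float)
            return v @ h    # only k+1 activation patterns can occur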

  • Boltzmann Machine Processor Using Single-Bit Operation

    Mamoru SASAKI  Shuichi KANEDA  Fumio UENO  Takahiro INOUE  Yoshiki KITAMURA  

     
    PAPER-Nonlinear Circuits and Neural Nets
    Vol: E76-A No:6  Page(s): 878-885

    This paper describes a single-bit parallel processor specialized for the Boltzmann machine. The processor has a SIMD (Single Instruction stream, Multiple Data stream) parallel architecture, and every processing element (PE) has a single-bit ALU and a local memory storing the connection weights between neurons. The features of the processor are large-scale parallel processing by many simple single-bit PEs and efficient expansion by connecting multiple chips with simple bus lines. Moreover, the processing speed can be made independent of the number of neurons. We designed the PE using 1.2 µm CMOS standard cells and confirmed its high performance by CAD simulations.
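
    For reference, the neuron update that such a processor parallelizes is the standard stochastic Boltzmann rule (a textbook sketch, not the chip's bit-serial microcode):

        import numpy as np

        def bm_step(s, W, T, rng):
            # s: 0/1 state vector; W: symmetric weights; T: temperature.
            for i in range(len(s)):
                net = W[i] @ s
                p = 1.0 / (1.0 + np.exp(-net / T))   # firing probability
                s[i] = 1 if rng.random() < p else 0
            return s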

  • L* Learning: A Fast Self-Organizing Feature Map Learning Algorithm Based on Incremental Ordering

    Young Pyo JUN  Hyunsoo YOON  Jung Wan CHO  

     
    PAPER-Bio-Cybernetics
    Vol: E76-D No:6  Page(s): 698-706

    The self-organizing feature map is one of the most widely used neural network paradigms based on unsupervised competitive learning. However, the learning algorithm introduced by Kohonen is very slow when the size of the map is large; the slowness is caused by searching the large map at each training step. In this paper, a fast learning algorithm based on incremental ordering is proposed. The new learning starts with only a few units evenly distributed over a large topological feature map and gradually increases the number of units until the entire map is covered. In the middle phases of learning, some units are well ordered and others are not, whereas all units are only weakly ordered in Kohonen learning. During learning, the ordered units accelerate the search and speed the movement of the remaining unordered units toward their topological locations. Theoretical as well as experimental analysis shows that the proposed algorithm reduces the training time from O(M²) to O(log M) for an M-by-M map without any additional working space, while preserving the ordering properties of the Kohonen learning algorithm.
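
    The incremental-ordering idea can be sketched in one dimension (the doubling schedule is an assumption): new units are initialized midway between already-ordered neighbours, so they start near their topological positions and little further search is needed.

        def refine(units):
            # units: ordered list of 1-D weight values; returns roughly
            # twice as many units, each new one seeded between neighbours.
            out = []
            for a, b in zip(units, units[1:]):
                out += [a, (a + b) / 2.0]
            out.append(units[-1])
            return out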
