
Keyword Search Result

[Keyword] SI (16314 hits)

16221-16240 hits (of 16314)

  • Principal Component Analysis by Homogeneous Neural Networks, Part II: Analysis and Extensions of the Learning Algorithms

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics
    Vol: E75-D No:3  Page(s): 376-382

    Artificial neurons and neural networks have been shown to perform Principal Component Analysis (PCA) when gradient ascent learning rules are used that are related to the constrained maximization of statistical objective functions. Due to their parallelism and adaptivity to input data, such algorithms and their implementations in neural networks are potentially useful in feature extraction and data compression. In the companion paper (9), two such learning rules were derived from two criteria, the Subspace Criterion and the Weighted Subspace Criterion. It was shown that the only solutions to the latter problem are the dominant eigenvectors of the data covariance matrix, which are the basis vectors of PCA. A simulation suggested that the corresponding learning algorithm converges to these eigenvectors, and a homogeneous neural network implementation was proposed for the algorithm. The learning algorithm is analyzed here in detail, and it is shown that it can be approximated by a continuous-time differential equation obtained by averaging. It is shown that the asymptotically stable limits of this differential equation are the eigenvectors. The neural network learning algorithm is further extended to a case in which each neuron has a sigmoidal nonlinear feedback activation function. Then no parameters specific to each neuron are needed, and the learning rule is fully homogeneous.
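    As an illustration of the kind of learning rule analyzed above, here is a minimal NumPy sketch of a subspace-type gradient ascent rule (Oja's Subspace Rule); the data, dimensions, and learning rate are illustrative, not taken from the paper:

```python
import numpy as np

# Subspace-type PCA learning rule: y = W x, dW = eta * (y x^T - y y^T W).
# The rows of W converge to a basis of the dominant eigenvector subspace
# of the data covariance matrix (any basis, not specific eigenvectors).
rng = np.random.default_rng(0)
d, k, eta = 5, 2, 0.01

# Synthetic data with a dominant 2-D principal subspace.
C = np.diag([5.0, 3.0, 0.5, 0.2, 0.1])
X = rng.multivariate_normal(np.zeros(d), C, size=20000)

W = rng.standard_normal((k, d)) * 0.1
for x in X:
    y = W @ x
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Compare the spanned subspace with the true top-k eigenvectors.
eigvecs = np.linalg.eigh(np.cov(X.T))[1][:, ::-1][:, :k]
proj = np.linalg.qr(W.T)[0]
print("subspace overlap (should be near 1):",
      np.linalg.norm(eigvecs.T @ proj) / np.sqrt(k))
```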

  • Presto: A Bus-Connected Multiprocessor for a Rete-Based Production System

    Hideo KIKUCHI  Takashi YUKAWA  Kazumitsu MATSUZAWA  Tsutomu ISHIKAWA  

     
    PAPER-Computer Systems
    Vol: E75-D No:3  Page(s): 265-273

    This paper discusses the design, implementation, and performance of a bus-connected multiprocessor, called Presto, for a Rete-based production system. To perform the match, which is the major phase of a production system, the Presto match scheme exploits the subnetworks separated by the top two-input nodes and the token flow control at these nodes. Since the parallelism of a production system can increase speed only about 10-fold, the aim is to achieve that speedup efficiently on a low-cost, compact bus-connected multiprocessor system without shared memory or cache memory. The Presto hardware consists of up to 10 processing elements (PEs), each comprising a commercial microprocessor, 4 Mbytes of local memory, and two kinds of newly developed ASIC chips for memory control and bus control. Hierarchical system software is provided for developing interpreter programs. Measurement with 10 PEs shows that sample programs run 5-7 times faster.
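    To picture the match phase that Presto parallelizes, the following toy sketch shows a single Rete two-input (join) node; the subnetworks below such nodes are what Presto distributes across PEs. This is a single-process illustration with made-up tokens, not Presto's implementation:

```python
# Toy Rete two-input (join) node: tokens arriving on the left and right
# inputs are joined on a shared attribute. In Presto, the subnetworks below
# the top join nodes are partitioned across PEs; this sketch is sequential.
left_memory, right_memory, matches = [], [], []

def join_test(l, r):
    return l["color"] == r["color"]   # illustrative join condition

def left_activate(token):
    left_memory.append(token)
    matches.extend((token, r) for r in right_memory if join_test(token, r))

def right_activate(token):
    right_memory.append(token)
    matches.extend((l, token) for l in left_memory if join_test(l, token))

left_activate({"id": 1, "color": "red"})
right_activate({"id": 2, "color": "red"})   # produces one match
print(matches)
```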

  • Infinite Dimensional Homotopy Method of Calculating Solutions for Fredholm Operator with Index 1 and A-Proper Operator Equations

    Mitsunori MAKINO  Shin'ichi OISHI  Masahide KASHIWAGI  Kazuo HORIUCHI  

     
    LETTER-Nonlinear Systems
    Vol: E75-A No:5  Page(s): 613-615

    A type of infinite dimensional homotopy method is considered for numerically calculating a solution curve of a nonlinear functional equation whose operator is a Fredholm operator with index 1 and an A-proper operator. In this method, a property of the so-called A-proper homotopy plays an important role.
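    A finite-dimensional analogue may clarify the homotopy idea: deform an easy equation g(x) = 0 into the target f(x) = 0 via H(x,t) = (1-t)g(x) + t f(x) and track the solution as t goes from 0 to 1. A sketch in the scalar case with illustrative functions (the paper's setting is infinite dimensional):

```python
import numpy as np

# Homotopy continuation: track the zero of H(x, t) = (1-t) g(x) + t f(x)
# from the known root of g at t = 0 to a root of f at t = 1, correcting
# with Newton steps at each value of t.
def f(x):  # target equation: x^3 - 2x - 5 = 0 (root near 2.0946)
    return x**3 - 2*x - 5

def g(x):  # trivial start system with known root x = 1
    return x - 1.0

def H(x, t):
    return (1 - t) * g(x) + t * f(x)

def dH_dx(x, t, h=1e-7):  # numerical derivative in x
    return (H(x + h, t) - H(x - h, t)) / (2 * h)

x = 1.0
for t in np.linspace(0.0, 1.0, 101):
    for _ in range(10):              # Newton corrector at each t
        x -= H(x, t) / dH_dx(x, t)
print(x, f(x))                       # ~2.0946, residual ~0
```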

  • Analysis of Time Transient EM Field Response from a Dielectric Spherical Cavity

    Hiroshi SHIRAI  Eiji NAKANO  Mikio YANO  

     
    PAPER-Electromagnetic Theory
    Vol: E75-C No:5  Page(s): 627-634

    Transient responses of a dielectric sphere are analyzed here for a dipole source located at its center. The formulation is constructed first in the frequency domain, then transformed into the time domain to obtain the impulse response by two analytical methods, namely the Singularity Expansion Method and the Wavefront Expansion Method. While the former method collects the contributions around the singularities in the complex frequency domain, the latter gives a result that is a summation of successive wavefront arrivals. A Gaussian pulse has been introduced to simulate an impulse response result; the Gaussian pulse response is analytically formulated by convolving the Gaussian pulse with the corresponding impulse response. Numerical inversion results are also calculated by the Fast Fourier Transform algorithm. Numerical examples are shown to compare the results obtained by these three methods, and good agreement is obtained between them. Comments are also made in connection with the corresponding two-dimensional cylindrical case.
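    The Gaussian-pulse computation described above, convolving a Gaussian excitation with an impulse response and evaluating via FFT, can be sketched as follows; the damped-sinusoid impulse response here is a stand-in, not the cavity's actual response:

```python
import numpy as np

# Gaussian-pulse response as the convolution of a Gaussian excitation with
# an impulse response, computed through zero-padded FFTs.
fs, T = 1e10, 2e-7                       # sample rate [Hz], time window [s]
t = np.arange(0, T, 1/fs)
h = np.exp(-t / 2e-8) * np.sin(2*np.pi*2e8*t)    # toy impulse response
g = np.exp(-((t - 1e-8)**2) / (2 * (1e-9)**2))   # Gaussian pulse

n = 2 * len(t)                           # zero-pad for linear convolution
response = np.fft.irfft(np.fft.rfft(h, n) * np.fft.rfft(g, n), n)[:len(t)] / fs
print("peak of Gaussian pulse response:", response.max())
```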

  • An Approximate Algorithm for Decision Tree Design

    Satoru OHTA  

     
    PAPER-Optimization Techniques
    Vol: E75-A No:5  Page(s): 622-630

    Efficient probabilistic decision trees are required in various application areas such as character recognition. This paper presents a polynomial-time approximate algorithm for designing a probabilistic decision tree. The obtained tree is near-optimal for the cost, defined as the weighted sum of the expected test execution time and the expected loss. The algorithm is advantageous over other reported heuristics in that the goodness of the solution is theoretically guaranteed: the relative deviation of the obtained tree cost from the exact optimum is not more than a positive constant ε, which can be set arbitrarily small. When the given loss function is the Hamming metric, the time efficiency is further improved by using the information-theoretic lower bound on the tree cost. The time efficiency of the algorithm and the accuracy of the solutions were evaluated through computational experiments. The results show that the computing time increases very slowly with problem size and that the relative error of the obtained solution is much less than the upper bound ε for most problems.
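    The cost being optimized, a weighted sum of expected test execution time and expected loss, can be computed for any given tree by a simple recursion. A sketch with an illustrative two-test tree (probabilities, times, and losses are made up, and this evaluates a tree rather than designing one):

```python
# Expected cost of a probabilistic decision tree: a weighted sum of the
# expected test execution time and the expected loss at the leaves.
alpha = 0.5  # weight balancing loss against test time

tree = {
    "test": "t1", "time": 2.0,
    "branches": {
        0: {"leaf": "class_a", "loss": 0.0, "prob": 0.6},
        1: {"test": "t2", "time": 1.0, "prob": 0.4,
            "branches": {
                0: {"leaf": "class_b", "loss": 0.1, "prob": 0.5},
                1: {"leaf": "class_c", "loss": 0.3, "prob": 0.5},
            }},
    },
}

def expected_cost(node):
    if "leaf" in node:
        return alpha * node["loss"]
    branch_cost = sum(b["prob"] * expected_cost(b)
                      for b in node["branches"].values())
    return (1 - alpha) * node["time"] + branch_cost

print(expected_cost(tree))
```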

  • Fractal Dimension of Neural Networks

    Ikuo MATSUBA  

     
    PAPER-Bio-Cybernetics
    Vol: E75-D No:3  Page(s): 363-365

    A theoretical conjecture on fractal dimensions of a dendrite distribution in neural networks is presented on the basis of the dendrite tree model. It is shown that the fractal dimensions obtained by the model are consistent with the recent experimental data.
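    A common way to estimate such fractal dimensions numerically is box counting: count the occupied boxes N(s) over a range of grid scales s and fit log N(s) against the log scale. A sketch using a random walk as a stand-in for a dendrite field (the paper's result is analytical, not this estimator):

```python
import numpy as np

# Box-counting dimension estimate: overlay grids of increasing resolution,
# count occupied cells, and fit the slope of log N versus log(1/s).
rng = np.random.default_rng(1)
pts = np.cumsum(rng.standard_normal((20000, 2)), axis=0)
pts = (pts - pts.min(0)) / (pts.max(0) - pts.min(0))  # map into unit square

scales = [2**k for k in range(2, 8)]   # grid cells per side
counts = []
for s in scales:
    boxes = {(int(x * s), int(y * s)) for x, y in pts}
    counts.append(len(boxes))

D = -np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)[0]
print("estimated box-counting dimension:", D)
```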

  • 45Mbps Multi-Channel Composite TV Coding System

    Shuichi MATSUMOTO  Takahiro HAMADA  Masahiro SAITO  Hitomi MURAKAMI  

     
    PAPER
    Vol: E75-B No:5  Page(s): 358-367

    In recent years, the digitalization of transmission links, such as optical fibre cables, satellite links, and terrestrial microwave links, has progressed rapidly in many countries. In addition, many types of digital studio equipment have been developed, and TV programs can be produced or edited without any picture-quality degradation by using such equipment, for example, digital VTRs. In this situation, a high-efficiency bit-reduction coding system is the most promising and effective means of reducing the cost of digital transmission of TV programs with high picture quality. Against this background, a new digital coding system has been developed which makes it possible to transmit up to 4 NTSC TV programs simultaneously over a single DS3 45Mbps link, including two high-quality sound channels and one 64kbps ancillary data channel for each TV program. The principal bit-reduction technique employed is two-dimensional intraframe WHT (Walsh Hadamard Transform) coding, which gives higher coding performance for composite TV signals than DCT (Discrete Cosine Transform) coding. In order to attain high picture quality at around 8Mbps for 4-channel transmission, a three-dimensional adaptive quantization cube which sufficiently reflects human visual perception is employed in the intraframe WHT coding scheme. The hardware has been made as compact as a home-use VTR. In this paper, first the algorithm of the coding scheme developed for the system is presented, and then the system configuration and its basic coding performance are described.
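    The 2-D WHT at the heart of such a coder transforms an N×N block B as H B Hᵀ with a Hadamard matrix H, which needs only additions and subtractions in hardware. A minimal sketch of the transform pair (the adaptive quantization cube and NTSC specifics are omitted):

```python
import numpy as np

# 2-D Walsh-Hadamard transform of an N x N block: C = H B H / N, where H is
# the naturally ordered (Sylvester) Hadamard matrix with H H = N I.
def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
H = hadamard(N)
block = np.arange(N * N, dtype=float).reshape(N, N)  # toy image block

coeffs = H @ block @ H.T / N            # forward 2-D WHT
restored = H.T @ coeffs @ H / N         # inverse transform
print(np.allclose(restored, block))     # True: the pair is lossless
```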

  • The Self-Validating Numerical Method--A New Tool for Computer Assisted Proofs of Nonlinear Problems--

    Shin'ichi OISHI  

     
    INVITED SURVEY PAPER-Nonlinear Systems
    Vol: E75-A No:5  Page(s): 595-612

    The purpose of the present paper is to review the state of the art of nonlinear analysis with the self-validating numerical method. The self-validating numerics based method provides a tool for performing computer assisted proofs of nonlinear problems by rigorously taking into account the effect of rounding errors in numerical computations. First, Kantorovich's approach of a posteriori error estimation is surveyed, which is based on his convergence theorem for Newton's method. Then, Urabe's approach for computer assisted existence proofs is likewise discussed. Based on his convergence theorem for the simplified Newton method, he treated practical nonlinear differential equations such as the Van der Pol equation and the Duffing equation, and proved the existence of their periodic and quasi-periodic solutions by the self-validating numerics. An approach of the author that generalizes and abstracts Urabe's method to more general functional equations is also described. Furthermore, methods for rigorous estimation of rounding errors are surveyed, and interval analytic methods are discussed. Then an approach of the author which uses rational arithmetic is reviewed. Finally, approaches for computer assisted proofs of nonlinear problems based on the self-validating numerics are surveyed.
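    The flavor of self-validating computation can be shown with a few lines of interval arithmetic: evaluating f over an interval yields a rigorous enclosure of its range. The sketch below uses plain floats for brevity; a real implementation would also bound rounding errors with directed rounding:

```python
# Minimal interval arithmetic: every operation returns an interval that is
# guaranteed (up to rounding, ignored here) to contain all pointwise results.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def f(x):                       # f(x) = x^2 - 2
    return x * x - Interval(2.0, 2.0)

X = Interval(1.25, 1.5)
print(f(X))  # enclosure of the range of f over X; it contains 0, and
             # indeed the zero sqrt(2) of f lies inside X
```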

  • A Testable Design of Sequential Circuits under Highly Observable Condition

    WEN Xiaoqing  Kozo KINOSHITA  

     
    PAPER-Fault Tolerant Computing
    Vol: E75-D No:3  Page(s): 334-341

    Under the highly observable condition, which is mainly based on the use of E-beam testers, the outputs of all gates in a circuit are assumed to be observable. When using an E-beam tester, it is desirable that the test set for a circuit be small and that the test vectors in the test set can be applied in a successive and repetitive manner. For a combinational circuit, these requirements can be satisfied by modifying the circuit into a k-UCP circuit, which needs only a small number of tests for diagnosis. For a sequential circuit, however, even if the combinational portion has been modified into a k-UCP circuit, the test vectors for the combinational portion cannot always be applied in a successive and repetitive manner because of the existence of feedback loops. To solve this problem, the concept of k-UCP scan circuits is proposed in this paper. It is shown that the test vectors for the combinational portion of a k-UCP scan circuit can be applied in a successive and repetitive manner through a specially constructed scan path. An efficient method of modifying a sequential circuit into a k-UCP scan circuit is also presented.

  • Model-Based/Waveform Hybrid Coding for Low-Rate Transmission of Facial Images

    Yuichiro NAKAYA  Hiroshi HARASHIMA  

     
    PAPER
    Vol: E75-B No:5  Page(s): 377-384

    Despite its potential to realize image communication at extremely low rates, model-based coding (analysis-synthesis coding) still has problems to be solved for any practical use. The main problems are the difficulty in modeling unknown objects and the presence of analysis errors. To cope with these difficulties, we incorporate waveform coding into model-based coding (model-based/waveform hybrid coding). The incorporated waveform coder can code unmodeled objects and cancel the artifacts caused by the analysis errors. From a different point of view, the performance of the practically used waveform coder can be improved by the incorporation of model-based coding. Since the model-based coder codes the modeled part of the image at extremely low rates, more bits can be allocated for the coding of the unmodeled region. In this paper, we present the basic concept of model-based/waveform hybrid coding. We develop a model-based/MC-DCT hybrid coding system designed to improve the performance of the practically used MC-DCT coder. Simulation results of the system show that this coding method is effective at very low transmission rates such as 16kb/s. Image transmission at such low rates is quite difficult for an MC-DCT coder without the contribution of the model-based coder.
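    The division of labor in such a hybrid coder can be sketched roughly: a model-based synthesis approximates the frame, and the residual is waveform-coded, here by a crude keep-largest-coefficients 2-D DCT standing in for the paper's MC-DCT coder. The frame and "model output" below are synthetic:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hybrid coding sketch: transmit a cheap model-based synthesis, then spend
# the remaining bits waveform-coding the residual with a truncated 2-D DCT.
rng = np.random.default_rng(0)
frame = rng.random((16, 16))
synthesis = frame + 0.05 * rng.standard_normal((16, 16))   # imperfect model

residual = frame - synthesis
coeffs = dctn(residual, norm="ortho")
mask = np.abs(coeffs) >= np.quantile(np.abs(coeffs), 0.75)  # keep top 25%
decoded = synthesis + idctn(coeffs * mask, norm="ortho")
print("MSE after hybrid decoding:", ((decoded - frame) ** 2).mean())
```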

  • An Intercomparison between MSR and SI Retrieved Rain Rates

    Yuji OHSAKI  Masaharu FUJITA  

     
    LETTER-Satellite Communication
    Vol: E75-B No:5  Page(s): 422-426

    Rain rates are estimated from brightness temperatures measured with the Microwave Scanning Radiometer (MSR) carried on board the Marine Observation Satellite 1 (MOS-1). Estimates are made using a rain-rate retrieval algorithm based on a radiative-transfer model that assumes rain distributed uniformly over the ocean. These values are compared with the Satellite-derived Index of precipitation intensity (SI), which estimates the rain rate from visible and infrared images of a Geostationary Meteorological Satellite in conjunction with rain observation by a radar network of the Japan Meteorological Agency. Good correlation between the MSR- and SI-derived rain rates validates the rain-rate retrieval algorithm.

  • Passivity and Learnability for Mechanical Systems--A Learning Control Theory for Skill Refinement--

    Suguru ARIMOTO  

     
    INVITED PAPER
    Vol: E75-A No:5  Page(s): 552-560

    This paper attempts to account for the intelligibility of practice-based learning (so-called 'learning control') for skill refinement from the viewpoint of Newtonian mechanics. It is shown from an axiomatic approach that an extended notion of passivity for the residual error dynamics of robots plays a crucial role in their ability to learn. More precisely, it is shown that exponentially weighted passivity with respect to the residual velocity vector and torque vector leads the robot system to convergence of trajectory tracking errors to zero with repeated practice. For a class of tasks in which the endpoint is constrained geometrically on a surface, the problem of convergence of residual tracking errors and residual contact-force errors is also discussed on the basis of passivity analysis.
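    The learning-control scheme analyzed here updates the input between trials from the previous trial's tracking error. A minimal discrete-time sketch of such an Arimoto-type update on a toy first-order plant (the gains and plant are illustrative; no claim is made about the paper's exact convergence conditions):

```python
import numpy as np

# Iterative learning control sketch: between trials, update the input from
# the previous trial's error, u_{k+1}(t) = u_k(t) + gamma * e_k(t+1),
# for the toy plant y(t+1) = 0.9 y(t) + u(t).
T, gamma = 50, 0.5
t = np.arange(T + 1)
yd = np.sin(2 * np.pi * t / T)            # desired trajectory
u = np.zeros(T)

for trial in range(30):
    y = np.zeros(T + 1)
    for i in range(T):                    # run one trial of the plant
        y[i + 1] = 0.9 * y[i] + u[i]
    e = yd - y                            # tracking error of this trial
    u += gamma * e[1:]                    # learning update between trials

print("max tracking error after 30 trials:", np.abs(e).max())
```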

  • An Adaptive Antenna System for High-Speed Digital Mobile Communications

    Yasutaka OGAWA  Yasuyuki NAGASHIMA  Kiyohiko ITOH  

     
    PAPER-Antennas and Propagation
    Vol: E75-B No:5  Page(s): 413-421

    High-speed digital land mobile communications suffer from frequency-selective fading due to long delay differences. Several techniques have been proposed to overcome the multipath propagation problem. Among them, an adaptive array antenna is suitable for very high-speed transmission because it can significantly suppress multipath signals with long delay differences. This paper describes an LMS adaptive array antenna for frequency-selective fading reduction and a new diversity technique. First, we propose a method to generate a reference signal in the LMS adaptive array. At the beginning of communication, we use training codes, known at the receiver, for the reference signal. After the training period, we use detected codes for the reference signal. The reference signal is generated by modulating a carrier at the receiver with those codes. The carrier is oscillated independently of the incident signal, so the carrier frequency of the reference signal in general differs from that of the incident signal. However, the LMS adaptive array works in such a way that the carrier frequency of the array output coincides with that of the reference signal; the frequency difference therefore does not affect the performance of the LMS adaptive array. Computer simulations show the proper behavior of the LMS adaptive array with the above reference signal generator. Moreover, we present a new multipath diversity technique using the LMS adaptive array. The LMS adaptive array reduces frequency-selective fading by suppressing the multipath components, which means that the transmitted power is not used fully. We therefore propose a multiple-beam antenna with the LMS adaptive array: each antenna pattern receives one of the multipath components, and we combine them after adjusting the timing, thereby realizing multipath diversity. In addition to multipath fading reduction, the diversity technique improves the signal-to-noise ratio.
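    The core LMS adaptation described above, with the weights driven by the error between the array output and a locally generated reference, can be sketched as follows; the array geometry, signal model, and step size are illustrative:

```python
import numpy as np

# LMS adaptive array sketch: an M-element array adapts its weights so the
# output w^H x tracks a reference signal, nulling an interfering path.
rng = np.random.default_rng(0)
M, N, mu = 4, 5000, 0.01

def steering(theta):  # half-wavelength-spaced linear array
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

s = np.exp(1j * 2 * np.pi * rng.random(N))   # desired unit-modulus signal
i = np.exp(1j * 2 * np.pi * rng.random(N))   # delayed-path interference
X = (np.outer(steering(0.3), s) + np.outer(steering(-0.9), i)
     + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))))

w = np.zeros(M, dtype=complex)
for n in range(N):
    y = np.vdot(w, X[:, n])          # array output w^H x
    e = s[n] - y                     # reference minus output
    w += mu * np.conj(e) * X[:, n]   # complex LMS update

out = w.conj() @ X[:, -500:]
print("residual error power:", np.mean(np.abs(s[-500:] - out) ** 2))
```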

  • A Mean-Separated and Normalized Vector Quantizer with Edge-Adaptive Feedback Estimation and Variable Bit Rates

    Xiping WANG  Shinji OZAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E75-D No:3  Page(s): 342-351

    This paper proposes a Mean-Separated and Normalized Vector Quantizer with edge-Adaptive Feedback estimation and variable bit rates (AFMSN-VQ). The basic idea of the AFMSN-VQ is to estimate the statistical parameters of each coding block from its previously coded blocks and then use the estimated parameters to normalize the coding block prior to vector quantization. The edge-adaptive feedback estimator utilizes the interblock correlations of edge connectivity and gray-level continuity to accurately estimate the mean and standard deviation of the coding block. The variable-rate VQ diminishes distortion nonuniformity among image blocks of different activities and improves the reconstruction quality of edges and contours, to which human vision is sensitive. Simulation results show that up to 2.7dB SNR gain of the AFMSN-VQ over the non-adaptive FMSN-VQ and up to 2.2dB over the 16×16 ADCT can be achieved at 0.2-1.0 bit/pixel. Furthermore, the AFMSN-VQ shows coding performance comparable to ADCT-VQ and A-PE-VQ.
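    The mean-separated-and-normalized step can be sketched in a few lines: each block is reduced to (mean, standard deviation, shape), and only the normalized shape is vector-quantized. The edge-adaptive feedback estimation of the parameters from previously coded blocks is omitted here; the codebook and block size are illustrative:

```python
import numpy as np

# Mean-separated and normalized VQ sketch: encode a block as (mean, std,
# codebook index of the normalized shape); decode by denormalizing.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 16))              # toy shape codebook
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def encode_block(block):                              # block: 4x4 patch
    v = block.flatten()
    mean, std = v.mean(), v.std() + 1e-8
    shape = (v - mean) / std                          # normalized shape
    idx = np.argmin(((codebook - shape) ** 2).sum(axis=1))
    return mean, std, idx

def decode_block(mean, std, idx):
    return (mean + std * codebook[idx]).reshape(4, 4)

block = rng.random((4, 4))
print(decode_block(*encode_block(block)))
```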

  • A Cache-Coherent, Distributed Memory Multiprocessor System and Its Performance Analysis

    Douglas E. MARQUARDT  Hasan S. ALKHATIB  

     
    PAPER-Computer Systems
    Vol: E75-D No:3  Page(s): 274-290

    The problems of cache coherency in multiprocessor systems are directly related to their architectural structures. Small-scale multiprocessor systems have focused on the use of bus-based memory interconnection networks with centrally shared memory and a sequential consistency model for coherency. This has limited scalability to only a few tens of processors, because the limited bus bandwidth is used for both coherency updates and memory traffic. Recently, large-scale multiprocessor systems have been proposed that use general interconnection networks and distributed shared memory. These architectures have been proposed using weak consistency models and various directory map schemes to hide the overhead of coherency maintenance within the memory hierarchy, the interconnection network, or process context switch latencies. The coherency and memory traffic are still carried over the same interconnection network. In this paper, we present the architecture of a new general-purpose medium-scale multiprocessor system. This Cache Coherent Multiprocessor System (C2MP) supports distributed shared memory using a general memory interconnection network for memory traffic and a separate bus-based coherency interconnection network for coherency maintenance. Through the use of a special directory-based coherency protocol and cache-oriented distributed coherency controllers, direct cache-to-cache coherency maintenance is performed over the dedicated coherency bus. This minimizes coherency updates to only those processor nodes needing coherency maintenance. An aggressive sequential coherency model is used, which reduces the hardware penalty of supporting an ideal sequential-consistency programmer's model. The system can scale up to 256-512 processors, depending on the degree of data sharing, and is expected to have higher per-processor utilization in this range than currently proposed medium- and large-scale multiprocessor systems. The C2MP system is analyzed using a Generalized Timed Petri-Net model of a processor node, together with a stochastic model of internode interactions over the general memory interconnection network and the coherency bus. The model of the proposed architecture is analyzed under steady-state conditions for varying system workload parameters.
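    The benefit of a directory-based protocol, sending coherency traffic only to the nodes that actually hold a line, can be pictured with a toy directory; this simplifies away C2MP's cache-to-cache transfers and its dedicated coherency bus:

```python
# Toy directory-based coherency bookkeeping: a per-line directory records
# sharers, so invalidations go only to caches that actually hold the line
# rather than being broadcast to every node.
class Directory:
    def __init__(self):
        self.sharers = {}                     # cache line -> set of node ids

    def read(self, node, line):
        self.sharers.setdefault(line, set()).add(node)

    def write(self, node, line):
        for t in self.sharers.get(line, set()) - {node}:
            print(f"invalidate line {line:#x} at node {t}")
        self.sharers[line] = {node}           # writer becomes sole holder

d = Directory()
d.read(0, 0x100)
d.read(2, 0x100)
d.write(1, 0x100)    # invalidates only nodes 0 and 2
```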

  • Principal Component Analysis by Homogeneous Neural Networks, Part I: The Weighted Subspace Criterion

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics
    Vol: E75-D No:3  Page(s): 366-375

    Principal Component Analysis (PCA) is a useful technique in feature extraction and data compression. It can be formulated as a statistical constrained maximization problem, whose solution is given by unit eigenvectors of the data covariance matrix. In a practical application like image compression, the problem can be solved numerically by a corresponding gradient ascent maximization algorithm. Such on-line algorithms can be good alternatives due to their parallelism and adaptivity to input data, and they can be implemented in a local and homogeneous way in learning neural networks. One example is the Subspace Network, a regular layer of parallel artificial neurons with a learning rule that is completely homogeneous with respect to the neurons. However, due to this complete homogeneity, the learning rule does not converge to the unique basis given by the dominant eigenvectors; any basis of this eigenvector subspace is possible. In many applications like data compression, the subspace is not sufficient and the actual eigenvectors or PCA coefficient vectors are needed. A new criterion, called the Weighted Subspace Criterion, is proposed, which makes a small symmetry-breaking change to the Subspace Criterion so that only the true eigenvectors are solutions. Making the corresponding change to the learning rule of the Subspace Network gives a modified learning rule which can still be implemented on a homogeneous network architecture. In learning, the weight vectors tend to the true eigenvectors.
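    The symmetry-breaking idea can be sketched by attaching distinct positive weights to the neurons of the subspace rule, so that each row converges toward a single eigenvector rather than an arbitrary subspace basis. All constants below are illustrative, and the exact form of the paper's rule may differ in details:

```python
import numpy as np

# Weighted subspace-type rule: y = W x,
#   dW = eta * (y x^T - diag(theta) y y^T W),
# with distinct positive weights theta_i breaking the rotational symmetry.
rng = np.random.default_rng(0)
d, k, eta = 5, 2, 0.005
theta = np.array([1.0, 2.0])          # distinct weights break the symmetry

C = np.diag([5.0, 3.0, 0.5, 0.2, 0.1])
X = rng.multivariate_normal(np.zeros(d), C, size=40000)

W = rng.standard_normal((k, d)) * 0.1
for x in X:
    y = W @ x
    W += eta * (np.outer(y, x) - (theta[:, None] * np.outer(y, y)) @ W)

# Each row should align with a single eigenvector (up to sign and scale).
eigvecs = np.linalg.eigh(np.cov(X.T))[1]
Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
print(np.abs(Wn @ eigvecs).max(axis=1))   # values near 1 indicate alignment
```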

  • Analysis of Economics of Computer Backup Service

    Marshall FREIMER  Ushio SUMITA  Hsing K. CHENG  

     
    PAPER-Switching and Communication Processing
    Vol: E75-B No:5  Page(s): 385-400

    An organization may suffer large losses if its computer service is interrupted. For protection, it can purchase computer backup service from the outside market, which temporarily provides service replacement from a central facility. A dynamic probabilistic model is developed which describes such a computer backup service system. The parties involved have conflicting motivations: the supplier is interested in optimizing his expected profits subject to a given set of parameters, while the subscriber evaluates the service contract in his own best interest. This paper analyzes how the economic interests of the supplier and subscribers interact, based on a dynamic reliability analysis of their respective computer systems. Assuming all physical parameters are fixed, the supplier's optimal values of the economic parameters are determined, and an algorithmic procedure is developed for computing them. Some numerical examples are presented in order to gain insight into the system.

  • A Distributed Mutual Exclusion Algorithm Based on Weak Copy Consistency

    Seoung Sup LEE  Ha Ryoung OH  June Hyoung KIM  Won Ho CHUNG  Myunghwan KIM  

     
    PAPER-Computer Networks
    Vol: E75-D No:3  Page(s): 298-306

    This paper presents a distributed algorithm that uses weak copy consistency to achieve mutual exclusion in a distributed computer system. Weak copy consistency is deduced from the uncertainty of state that arises from the finite and unpredictable communication delays in a distributed environment. The method also relates outdated state information to the current state. The average number of messages required to enter the critical section is between n/2 and n, where n is the number of sites. We show that the algorithm achieves mutual exclusion, and the fairness and liveness of the algorithm are proven. We study the performance of the algorithm by simulation.

  • Image Compression and Regeneration by Nonlinear Associative Silicon Retina

    Mamoru TANAKA  Yoshinori NAKAMURA  Munemitsu IKEGAMI  Kikufumi KANDA  Taizou HATTORI  Yasutami CHIGUSA  Hikaru MIZUTANI  

     
    PAPER-Neural Systems
    Vol: E75-A No:5  Page(s): 586-594

    There are two types of nonlinear associative silicon retinas. One is a sparse Hopfield-type neural network, called an H-type retina, and the other is its dual network, called a DH-type retina. The input information sequences of H-type and DH-type retinas are given by nodes and links as voltages and currents, respectively. The error-correcting capacity (minimum basin of attraction) of H-type and DH-type retinas is determined by the minimum number of links of a cutset and of a loop, respectively. The operating principle of the regeneration is based on the voltage or current distribution of the neural field. The most important nonlinear operation in the retinas is a dynamic quantization that decides the binary value of each neuron output from its neighborhood values. Also, edges are emphasized by a line process. The compression rates of the H-type and DH-type retinas used in the simulation are 1/8 and (2/3)×(1/8), respectively, where 2/3 and 1/8 are the rates of structural and binarizational compression. The simulation results were interesting and significant enough to justify fabricating a chip.

  • Closed-Form Error Probability Formula for Narrowband DQPSK in Slow Rayleigh Fading and Gaussian Noise

    Chun Sum NG  Francois P.S. CHIN  Tjeng Thiang TJUNG  Kin Mun LYE  

     
    PAPER-Radio Communication
    Vol: E75-B No:5  Page(s): 401-412

    A new error-rate formula for a narrowband Differential Quaternary Phase Shift Keyed (DQPSK) system in a Rayleigh fading channel is obtained in closed form. The formula predicts a non-zero error probability for noiseless reception. As predicted, the computed error rates approach constant floor values as the signal-to-noise ratio is increased beyond a certain limit. In the presence of various Doppler frequency shifts, an IF filter bandwidth of about one times the symbol rate is found to give a minimum error probability prior to the appearance of the error-rate floor.
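    The error floor predicted by the formula can be reproduced qualitatively by Monte Carlo: when the fading decorrelates slightly between symbol intervals, differential detection keeps making errors even as noise vanishes. A sketch with a Gauss-Markov fading model (all parameters illustrative; this is not the paper's closed-form expression):

```python
import numpy as np

# DQPSK over time-varying Rayleigh fading: the symbol error rate flattens
# into a floor at high SNR because the channel rotates between symbols.
rng = np.random.default_rng(0)
N, rho = 200000, 0.995            # symbols, fading correlation per symbol

bits = rng.integers(0, 4, N)
s = np.exp(1j * np.cumsum(bits * np.pi / 2))   # differentially encoded QPSK

# Correlated Rayleigh fading: h[k] = rho h[k-1] + sqrt(1 - rho^2) w[k]
w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h = np.empty(N, dtype=complex)
h[0] = w[0]
for k in range(1, N):
    h[k] = rho * h[k - 1] + np.sqrt(1 - rho**2) * w[k]

for snr_db in (20, 40, 60):       # the floor appears at high SNR
    sigma = np.sqrt(10 ** (-snr_db / 10) / 2)
    r = h * s + sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    det = np.angle(r[1:] * np.conj(r[:-1]))     # differential detection
    decided = np.round(det / (np.pi / 2)) % 4
    print(snr_db, "dB  SER:", np.mean(decided != bits[1:]))
```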

16221-16240 hits (of 16314)