Keyword Search Result

[Keyword] Al (20498 hits)

Results 20381-20400 of 20498 hits

  • Analysis of Time Transient EM Field Response from a Dielectric Spherical Cavity

    Hiroshi SHIRAI  Eiji NAKANO  Mikio YANO  

     
    PAPER-Electromagnetic Theory

    Vol: E75-C No:5, Page(s): 627-634

    Transient responses of a dielectric sphere are analyzed here for a dipole source located at its center. The formulation is first constructed in the frequency domain, then transformed into the time domain to obtain the impulse response by two analytical methods, namely the Singularity Expansion Method and the Wavefront Expansion Method. While the former collects the contributions around the singularities in the complex frequency domain, the latter gives the result as a summation of successive wavefront arrivals. A Gaussian pulse has been introduced to simulate the impulse response; the Gaussian pulse response is formulated analytically by convolving the Gaussian pulse with the corresponding impulse response. Numerical inversion results are also calculated by a Fast Fourier Transform algorithm. Numerical examples are shown to compare the results obtained by these three methods, and good agreement is found between them. Comments are also made in connection with the corresponding two-dimensional cylindrical case.
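    A hedged sketch of the third method above (numerical inversion by FFT): multiply a frequency-domain response by the spectrum of a Gaussian excitation and invert numerically. The transfer function H below is a hypothetical one-pole stand-in, not the paper's dielectric-sphere solution.

    ```python
    import numpy as np

    # Generic FFT-based inversion: the Gaussian pulse response is the inverse
    # transform of (pulse spectrum) x (frequency response).  H(f) is a
    # hypothetical placeholder, NOT the dielectric-sphere solution.
    fs = 1e10                        # sampling rate [Hz]
    n = 4096
    t = np.arange(n) / fs
    f = np.fft.rfftfreq(n, d=1/fs)

    sigma = 2e-10                    # Gaussian pulse width [s]
    g = np.exp(-((t - 20 * sigma) ** 2) / (2 * sigma ** 2))  # excitation

    H = 1.0 / (1.0 + 1j * f / 1e9)   # placeholder frequency response
    y = np.fft.irfft(np.fft.rfft(g) * H, n)   # Gaussian pulse response
    print(y[:5])
    ```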

  • Analysis of Economics of Computer Backup Service

    Marshall FREIMER  Ushio SUMITA  Hsing K. CHENG  

     
    PAPER-Switching and Communication Processing

    Vol: E75-B No:5, Page(s): 385-400

    An organization may suffer large losses if its computer service is interrupted. For protection, it can purchase computer backup service from the outside market, which temporarily provides replacement service from a central facility. A dynamic probabilistic model is developed to describe such a computer backup service system. The parties involved have conflicting motivations: the supplier is interested in optimizing his expected profit subject to a given set of parameters, while each subscriber evaluates the service contract in his own best interest. This paper analyzes how the economic interests of the supplier and subscribers interact, based on a dynamic reliability analysis of their respective computer systems. With all physical parameters fixed, the supplier's optimal choice of the economic parameters is determined, and an algorithmic procedure is developed for computing it. Some numerical examples are presented to give insight into the system.

  • Presto: A Bus-Connected Multiprocessor for a Rete-Based Production System

    Hideo KIKUCHI  Takashi YUKAWA  Kazumitsu MATSUZAWA  Tsutomu ISHIKAWA  

     
    PAPER-Computer Systems

    Vol: E75-D No:3, Page(s): 265-273

    This paper discusses the design, implementation, and performance of a bus-connected multiprocessor, called Presto, for a Rete-based production system. To perform the match, the major phase of a production system, the Presto match scheme exploits the subnetworks separated by the top two-input nodes and controls the token flow at these nodes. Since parallelism can speed up a production system by only about 10-fold, the aim is to achieve that speedup efficiently on a low-cost, compact bus-connected multiprocessor system without shared memory or cache memory. The Presto hardware consists of up to 10 processing elements (PEs), each comprising a commercial microprocessor, 4 Mbytes of local memory, and two kinds of newly developed ASIC chips for memory control and bus control. Hierarchical system software is provided for developing interpreter programs. Measurements with 10 PEs show that sample programs run 5-7 times faster.

  • Principal Component Analysis by Homogeneous Neural Networks, Part II: Analysis and Extensions of the Learning Algorithms

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics

    Vol: E75-D No:3, Page(s): 376-382

    Artificial neurons and neural networks have been shown to perform Principal Component Analysis (PCA) when gradient ascent learning rules are used that are related to the constrained maximization of statistical objective functions. Due to their parallelism and adaptivity to input data, such algorithms and their implementations in neural networks are potentially useful in feature extraction and data compression. In the companion paper (9), two such learning rules were derived from two criteria, the Subspace Criterion and the Weighted Subspace Criterion. It was shown that the only solutions to the latter problem are the dominant eigenvectors of the data covariance matrix, which are the basis vectors of PCA, and a simulation suggested that the corresponding learning algorithm converges to these eigenvectors. A homogeneous neural network implementation was proposed for the algorithm. The learning algorithm is analyzed here in detail, and it is shown that it can be approximated by a continuous-time differential equation obtained by averaging. It is shown that the asymptotically stable limits of this differential equation are the eigenvectors. The neural network learning algorithm is further extended to the case in which each neuron has a sigmoidal nonlinear feedback activity function. Then no parameters specific to each neuron are needed, and the learning rule is fully homogeneous.
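    A minimal sketch of the averaging argument, assuming the plain Subspace Network rule (the paper's weighted variant adds distinct per-neuron coefficients, omitted here): the stochastic update is run next to an Euler integration of the averaged ODE dW/dt = CW - W(W^T C W), and both end up spanning the dominant eigenvector subspace of C.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, m, eta, steps = 4, 2, 0.01, 20000
    A = rng.standard_normal((d, d))
    C = A @ A.T / d                        # data covariance matrix
    W = rng.standard_normal((d, m)) * 0.1  # stochastic-rule weights
    V = W.copy()                           # averaged-ODE weights

    for _ in range(steps):
        x = A @ rng.standard_normal(d) / np.sqrt(d)  # sample with cov(x) ~ C
        y = W.T @ x
        W += eta * (np.outer(x, y) - W @ np.outer(y, y))  # stochastic rule
        V += eta * (C @ V - V @ (V.T @ C @ V))            # averaged ODE (Euler)

    # Residuals after projecting onto the dominant subspace: both are small.
    E = np.linalg.eigh(C)[1][:, -m:]
    P = E @ E.T
    print(np.linalg.norm(W - P @ W), np.linalg.norm(V - P @ V))
    ```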

  • A Cache-Coherent, Distributed Memory Multiprocessor System and Its Performance Analysis

    Douglas E. MARQUARDT  Hasan S. ALKHATIB  

     
    PAPER-Computer Systems

    Vol: E75-D No:3, Page(s): 274-290

    The problems of cache coherency in multiprocessor systems are directly related to their architectural structures. Small-scale multiprocessor systems have focused on bus-based memory interconnection networks with centrally shared memory and a sequential consistency model for coherency. This has limited scalability to but a few tens of processors, since the limited bus bandwidth carries both coherency updates and memory traffic. Recently, large-scale multiprocessor systems have been proposed that use general interconnection networks and distributed shared memory. These architectures have been proposed using weak consistency models and various directory map schemes to hide the overhead of coherency maintenance within the memory hierarchy, the interconnection network, or process context switch latencies. The coherency and memory traffic are still carried over the same interconnection network. In this paper, we present the architecture of a new general-purpose medium-scale multiprocessor system. This Cache Coherent Multiprocessor System (C2MP) supports distributed shared memory using a general memory interconnection network for memory traffic and a separate bus-based coherency interconnection network for coherency maintenance. Through a special directory-based coherency protocol and cache-oriented distributed coherency controllers, direct cache-to-cache coherency maintenance is performed over the dedicated coherency bus. This limits coherency updates to only those processor nodes needing coherency maintenance. An aggressive sequential coherency model is used, which reduces the hardware penalty of supporting an ideal sequential-consistency programmer's model. The system can scale up to 256-512 processors, depending on the degree of data sharing, and is expected to have higher per-processor utilization in this range than currently proposed medium- and large-scale multiprocessor systems. The C2MP system is analyzed using a Generalized Timed Petri-Net model of a processor node, together with a stochastic model of internode interactions over the general memory interconnection network and the coherency bus. The model of the proposed architecture is analyzed under steady-state conditions for varying system workload parameters.
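    A toy sketch of directory-based coherency traffic, assuming a generic MSI-style invalidation protocol (not the C2MP protocol itself): the directory tracks the sharers and owner of a block, so updates go only to the nodes that actually need them, which is the property the dedicated coherency bus exploits.

    ```python
    # Generic directory for one memory block; illustrative only.
    class Directory:
        def __init__(self):
            self.sharers = set()   # nodes holding the block Shared
            self.owner = None      # node holding the block Modified

        def read(self, node):
            msgs = 0
            if self.owner is not None and self.owner != node:
                msgs += 1          # cache-to-cache transfer of dirty copy
                self.sharers.add(self.owner)
                self.owner = None
            self.sharers.add(node)
            return msgs            # coherency messages generated

        def write(self, node):
            msgs = len(self.sharers - {node})  # invalidations to sharers only
            if self.owner not in (None, node):
                msgs += 1          # retrieve dirty copy from previous owner
            self.sharers, self.owner = {node}, node
            return msgs

    d = Directory()
    print(d.read(0), d.read(1), d.write(2))   # -> 0 0 2
    ```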

  • A Self-Consistent Linear Theory of Gyrotrons

    Kenichi HAYASHI  Tohru SUGAWARA  

     
    PAPER-Microwave and Millimeter Wave Technology

    Vol: E75-C No:5, Page(s): 610-616

    A new set of self-consistent linear equations is presented for analyzing the startup characteristics of gyrotron oscillators with an open cavity consisting of weakly irregular waveguides. Numerical results on frequency detuning and oscillation starting current for a whispering-gallery-mode gyrotron, obtained with these equations, are described. Experiments checking the effectiveness of the derived equations showed that they express the operation of gyrotrons well, in comparison with the linear theory that uses an empty-cavity field as the wave field.

  • Visual Communications in the U.S.

    Charles N. JUDICE  

     
    INVITED PAPER

    Vol: E75-B No:5, Page(s): 309-312

    To describe the state of visual communications in the U.S., two words come to mind: digital and anticipation. Although compressed digital video has been used in teleconferencing systems for at least ten years, it is only recently that a broad consensus has developed among diverse industries anticipating business opportunities, value, or both in digital video. The drivers for this turning point are: advances in digital signal processing; continued improvement in the cost, complexity, and speed of VLSI; maturing international standards and their adoption by vendors and end users; and a seemingly insatiable consumer demand for greater diversity, accessibility, and control of communication systems.

  • Closed-Form Error Probability Formula for Narrowband DQPSK in Slow Rayleigh Fading and Gaussian Noise

    Chun Sum NG  Francois P.S. CHIN  Tjeng Thiang TJUNG  Kin Mun LYE  

     
    PAPER-Radio Communication

    Vol: E75-B No:5, Page(s): 401-412

    A new error-rate formula for a narrowband Differential Quaternary Phase Shift Keyed (DQPSK) system in a Rayleigh fading channel is obtained in closed form. The formula predicts a non-zero error probability for noiseless reception. As predicted, the computed error rates approach constant, or floor, values as the signal-to-noise ratio is increased beyond a certain limit. In the presence of various Doppler frequency shifts, an IF filter bandwidth of about one times the symbol rate is found to minimize the error probability prior to the appearance of the error-rate floor.
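    To illustrate the error-floor behaviour with simpler, hedged mathematics: the sketch below evaluates the classical closed form for binary DPSK in slow Rayleigh fading with Doppler (an analogue, not the paper's DQPSK formula), Pb = (1 + g(1 - rho)) / (2(1 + g)) with rho = J0(2*pi*fD*T); the rate flattens toward (1 - rho)/2 as the mean SNR g grows.

    ```python
    import numpy as np
    from scipy.special import j0

    fD_T = 0.02                    # Doppler shift times symbol period
    rho = j0(2 * np.pi * fD_T)     # fading autocorrelation over one symbol
    for snr_db in (10, 20, 30, 40, 50):
        g = 10 ** (snr_db / 10)    # mean signal-to-noise ratio
        pb = (1 + g * (1 - rho)) / (2 * (1 + g))
        print(snr_db, f"{pb:.2e}") # flattens: errors remain without noise
    print("floor:", (1 - rho) / 2)
    ```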

  • A Study on Modeling of the Motion Compensation Prediction Error Signal

    Yoshiaki SHISHIKUI  

     
    PAPER

    Vol: E75-B No:5, Page(s): 368-376

    An investigation into the spatial properties of the motion compensation prediction error signal has been carried out to provide a better understanding of the signal and to model its spatial power spectrum. To construct a theoretical model, the motion compensation prediction process, including the interpolation used for motion compensation with decimal (sub-integer) precision, is analyzed in the horizontal and vertical directions separately, thereby deriving its statistical power gain function. Properties of the input processing system are also examined. Based on these analyses, this paper proposes a theoretical model of the error signal, clarifies its spatial properties that are distinctive of the interlace-scanned picture signal, and collates the model with real pictures, thereby verifying its validity. The model is especially useful for the evaluation, selection, and detailed design of coding techniques for the error signal.
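    A small sketch of the empirical side of such a study, under illustrative assumptions: estimating the horizontal power spectrum of a prediction error field row by row, to be collated against a theoretical model. The synthetic error array is a stand-in; real use would subtract the MC prediction from the input picture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rows, cols = 64, 256
    error = rng.standard_normal((rows, cols))   # hypothetical MC residual

    window = np.hanning(cols)                   # reduce spectral leakage
    psd = np.zeros(cols // 2 + 1)
    for r in range(rows):
        psd += np.abs(np.fft.rfft(error[r] * window)) ** 2  # row periodogram
    psd /= rows                                 # averaged horizontal spectrum
    print(psd[:5])
    ```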

  • Model-Based/Waveform Hybrid Coding for Low-Rate Transmission of Facial Images

    Yuichiro NAKAYA  Hiroshi HARASHIMA  

     
    PAPER

    Vol: E75-B No:5, Page(s): 377-384

    Despite its potential to realize image communication at extremely low rates, model-based coding (analysis-synthesis coding) still has problems to be solved before practical use. The main problems are the difficulty of modeling unknown objects and the presence of analysis errors. To cope with these difficulties, we incorporate waveform coding into model-based coding (model-based/waveform hybrid coding). The incorporated waveform coder can code unmodeled objects and cancel the artifacts caused by analysis errors. From a different point of view, the performance of a practical waveform coder can be improved by incorporating model-based coding: since the model-based coder codes the modeled part of the image at extremely low rates, more bits can be allocated to coding the unmodeled region. In this paper, we present the basic concept of model-based/waveform hybrid coding and develop a model-based/MC-DCT hybrid coding system designed to improve the performance of the practical MC-DCT coder. Simulation results show that this coding method is effective at very low transmission rates such as 16 kb/s; image transmission at such low rates is quite difficult for an MC-DCT coder without the contribution of the model-based coder.

  • On Translating a Set of C-Oriented Faces in Three Dimensions

    Xue-Hou TAN  Tomio HIRATA  Yasuyoshi INAGAKI  

     
    PAPER-Algorithm and Computational Complexity

    Vol: E75-D No:3, Page(s): 258-264

    Recently much attention has been devoted to the problem of translating a set of geometrical objects in a given direction, one at a time, without allowing collisions between the objects. This paper studies the translation problem in three dimensions for a set of "c-oriented faces", that is, faces whose bounding edges have a constant number c of orientations. We solve the problem in O(N log² N + K) time and O(N log N) space, where N is the total number of edges of the faces and K is the number of edge intersections in the projection plane. As an intermediate step, we also solve a problem related to ray shooting. The algorithm for translating c-oriented faces finds use in computer graphics systems.
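    A minimal sketch of the ordering idea (the paper's contribution, computing the constraints efficiently for c-oriented faces, is not reproduced here): whenever two projected faces overlap, the one in front must be translated first, and any topological order of this "in-front-of" relation is a valid schedule.

    ```python
    from collections import defaultdict, deque

    def translation_order(n_faces, in_front_of):
        """in_front_of: pairs (a, b) meaning face a must move before face b."""
        succ, indeg = defaultdict(list), [0] * n_faces
        for a, b in in_front_of:
            succ[a].append(b)
            indeg[b] += 1
        q = deque(i for i in range(n_faces) if indeg[i] == 0)
        order = []
        while q:
            v = q.popleft()
            order.append(v)
            for w in succ[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    q.append(w)
        return order if len(order) == n_faces else None  # None: cyclic, stuck

    print(translation_order(4, [(0, 1), (1, 2), (0, 3)]))  # [0, 1, 3, 2]
    ```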

  • Overview of Visual Telecommunication Activities in Japan

    Takahiko KAMAE  

     
    INVITED PAPER

    Vol: E75-B No:5, Page(s): 313-318

    The state of the art of visual communication in Japan is described. First, the status of the networks that form the basis for offering visual communication services is outlined. Visual communication services being developed on the basis of ISDN are then described; the future services can be represented by NTT's service vision, VI&P. Finally, the visual communication technologies and services under study are surveyed.

  • An Approximate Algorithm for Decision Tree Design

    Satoru OHTA  

     
    PAPER-Optimization Techniques

    Vol: E75-A No:5, Page(s): 622-630

    Efficient probabilistic decision trees are required in various application areas such as character recognition. This paper presents a polynomial-time approximate algorithm for designing a probabilistic decision tree. The obtained tree is near-optimal for the cost, defined as the weighted sum of the expected test execution time and the expected loss. The algorithm is advantageous over other reported heuristics in that the goodness of the solution is theoretically guaranteed: the relative deviation of the obtained tree cost from the exact optimum is not more than a positive constant ε, which can be set arbitrarily small. When the given loss function is the Hamming metric, the time efficiency is further improved by using the information-theoretic lower bound on the tree cost. The time efficiency of the algorithm and the accuracy of its solutions were evaluated through computational experiments. The results show that the computing time increases very slowly with problem size and that the relative error of the obtained solution is much less than the upper bound ε for most problems.
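    A brief sketch of the cost being minimized, with illustrative tree encoding, probabilities, and weights (assumptions, not the paper's notation): the weighted sum of expected test execution time and expected loss, accumulated over root-to-leaf paths.

    ```python
    def tree_cost(node, p=1.0, alpha=1.0, beta=1.0):
        """node: ('leaf', loss) or ('test', time, [(branch_prob, child), ...])."""
        if node[0] == 'leaf':
            return beta * p * node[1]          # expected-loss contribution
        _, t, branches = node
        cost = alpha * p * t                   # expected test-time contribution
        for q, child in branches:
            cost += tree_cost(child, p * q, alpha, beta)
        return cost

    tree = ('test', 2.0, [(0.7, ('leaf', 0.1)),
                          (0.3, ('test', 1.0, [(0.5, ('leaf', 0.0)),
                                               (0.5, ('leaf', 1.0))]))])
    print(tree_cost(tree))   # 2.0 + 0.07 + 0.3 + 0.0 + 0.15 = 2.52
    ```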

  • Applying Adaptive Credit Assignment Algorithm for the Learning Classifier System Based upon the Genetic Algorithm

    Shozo TOKINAGA  Andrew B. WHINSTON  

     
    PAPER-Neural Systems

    Vol: E75-A No:5, Page(s): 568-577

    This paper deals with an adaptive credit assignment algorithm for selecting strategies with higher capabilities in a learning classifier system (LCS) based upon the genetic algorithm (GA). We emulate the kind of prizes and incentives employed in economies with imperfect information. The compensation scheme adjusts automatically in response to changes in the environment and provides a convenient way to incorporate constraints. The learning process in the GA-based LCS combines a pair of the most capable strategies (called classifiers), represented as production rules, to replace a less capable strategy, in a manner similar to genetic operations on chromosomes in organisms. In the conventional learning classifier system, the capability s(k, t) (called strength) of a strategy k at time t measures only how well the strategy senses and recognizes the environment. Here, we also define and utilize the prizes and incentives obtained by employing the strategy: s(k, t) is increased if classifier k provides good rules, and an amount is subtracted if classifier k violates the constraints. The new algorithm is applied to portfolio management. The simulation results show that the net return of the portfolio management system surpasses the average return obtained in the American securities market. The result of the illustrative example is compared with the same system built from neural networks, and related problems are discussed.
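    A hedged sketch of the strength update described above, with illustrative constants and encoding (the paper's exact scheme is not reproduced): prizes raise s(k, t) when a classifier's rule pays off, penalties lower it when constraints are violated, and a decay factor implements gradual forgetting.

    ```python
    def update_strength(strength, matched, prize, penalty, decay=0.95):
        """strength: dict classifier -> s(k, t); matched: classifiers active at t."""
        new = {k: s * decay for k, s in strength.items()}  # gradual forgetting
        for k in matched:
            new[k] += prize.get(k, 0.0) - penalty.get(k, 0.0)
        return new

    s = {'rule_a': 1.0, 'rule_b': 1.0}
    s = update_strength(s, ['rule_a', 'rule_b'],
                        prize={'rule_a': 0.4},     # profitable action
                        penalty={'rule_b': 0.3})   # constraint violation
    print(s)   # rule_a gains strength, rule_b loses it
    ```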

  • The Self-Validating Numerical Method--A New Tool for Computer Assisted Proofs of Nonlinear Problems--

    Shin'ichi OISHI  

     
    INVITED SURVEY PAPER-Nonlinear Systems

    Vol: E75-A No:5, Page(s): 595-612

    The purpose of the present paper is to review the state of the art of nonlinear analysis with the self-validating numerical method. The self-validating numerics based method provides a tool for performing computer-assisted proofs of nonlinear problems by rigorously taking into account the effect of rounding errors in numerical computations. First, Kantorovich's approach to a posteriori error estimation is surveyed, which is based on his convergence theorem for Newton's method. Then, Urabe's approach to computer-assisted existence proofs is discussed. Based on his convergence theorem for the simplified Newton method, he treated practical nonlinear differential equations such as the Van der Pol equation and the Duffing equation, and proved the existence of their periodic and quasi-periodic solutions by self-validating numerics. An approach of the author that generalizes and abstracts Urabe's method to more general functional equations is also described. Furthermore, methods for rigorous estimation of rounding errors are surveyed: interval analytic methods are discussed, and an approach of the author which uses rational arithmetic is reviewed. Finally, approaches to computer-assisted proofs of nonlinear problems based on self-validating numerics are surveyed.
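    A minimal sketch of the self-validating idea in one dimension, using an interval Newton step: if N(X) = m - f(m)/f'(X) lands inside X, then X is proved to contain a zero of f. Directed (outward) rounding, essential in a genuine computer-assisted proof, is omitted here for brevity.

    ```python
    def interval_newton_step(a, b, f, df_interval):
        m = (a + b) / 2
        da, db = df_interval(a, b)        # enclosure of f' over [a, b]
        assert da > 0 or db < 0           # derivative interval excludes 0
        q = (f(m) / da, f(m) / db)
        na, nb = m - max(q), m - min(q)   # N(X) = m - f(m) / f'([a, b])
        return max(a, na), min(b, nb)     # intersect N(X) with [a, b]

    f = lambda x: x * x - 2.0
    dfi = lambda a, b: (2 * a, 2 * b)     # f'([a, b]) = [2a, 2b] for a >= 0

    x = (1.0, 2.0)
    for _ in range(6):
        x = interval_newton_step(*x, f, dfi)
    print(x)   # tight enclosure of sqrt(2); inclusion proves existence
    ```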

  • Fractal Dimension of Neural Networks

    Ikuo MATSUBA  

     
    PAPER-Bio-Cybernetics

    Vol: E75-D No:3, Page(s): 363-365

    A theoretical conjecture on the fractal dimension of the dendrite distribution in neural networks is presented on the basis of a dendrite tree model. It is shown that the fractal dimensions obtained by the model are consistent with recent experimental data.
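    For readers unfamiliar with the quantity involved, a sketch of how such a fractal dimension is estimated in practice by box counting; the random-walk pattern is an illustrative stand-in for a dendrite image, not the paper's model.

    ```python
    import numpy as np

    def box_count_dimension(img, sizes=(1, 2, 4, 8, 16)):
        counts = []
        for s in sizes:
            h, w = img.shape[0] // s * s, img.shape[1] // s * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append((blocks.sum(axis=(1, 3)) > 0).sum())  # occupied boxes
        # dimension = -slope of log(count) versus log(box size)
        return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]

    rng = np.random.default_rng(0)
    img = np.zeros((128, 128), dtype=bool)
    x = np.clip(np.cumsum(rng.integers(-1, 2, size=2000)) + 64, 0, 127)
    y = np.clip(np.cumsum(rng.integers(-1, 2, size=2000)) + 64, 0, 127)
    img[x, y] = True                 # random-walk test pattern
    print(round(box_count_dimension(img), 2))
    ```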

  • Infinite Dimensional Homotopy Method of Calculating Solutions for Fredholm Operator with Index 1 and A-Proper Operator Equations

    Mitsunori MAKINO  Shin'ichi OISHI  Masahide KASHIWAGI  Kazuo HORIUCHI  

     
    LETTER-Nonlinear Systems

    Vol: E75-A No:5, Page(s): 613-615

    A type of infinite-dimensional homotopy method is considered for numerically calculating a solution curve of a nonlinear functional equation whose operator is a Fredholm operator with index 1 and A-proper. In this method, a property of the so-called A-proper homotopy plays an important role.
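    A finite-dimensional sketch of the homotopy idea (the letter's setting is infinite-dimensional and A-proper, which this does not capture): trace the solution curve of H(x, t) = t f(x) + (1 - t)(x - x0) = 0 from t = 0 to t = 1, correcting with a few Newton iterations at each step.

    ```python
    def f(x):                        # toy target equation f(x) = 0
        return x ** 3 - 2 * x - 5

    def df(x):
        return 3 * x ** 2 - 2

    x0 = 0.0
    x = x0                           # at t = 0, H(x, 0) = x - x0 = 0 holds
    steps = 100
    for i in range(1, steps + 1):
        t = i / steps
        for _ in range(5):           # Newton correction on H(., t)
            H = t * f(x) + (1 - t) * (x - x0)
            dH = t * df(x) + (1 - t)
            x -= H / dH
    print(x, f(x))                   # root near 2.0946, residual ~ 0
    ```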

  • Perceptually Transparent Coding of Still Images

    V. Ralph ALGAZI  Todd R. REED  Gary E. FORD  Eric MAURINCOMME  Iftekhar HUSSAIN  Ravindra POTHARLANKA  

     
    PAPER

    Vol: E75-B No:5, Page(s): 340-348

    The encoding of high-quality and super-high-definition images requires new approaches to the coding problem. The nature of such images and the applications in which they are used prohibit the introduction of perceptible degradation by the coding process. In this paper, we discuss techniques for the perceptually transparent coding of images. Although technically lossy, these methods yield encoded and reconstructed images that appear identical to the originals. The reconstructed images can be postprocessed (e.g., enhanced via anisotropic filtering), owing to the absence of the structured errors commonly introduced by conventional lossy methods. The compression ratios obtained are substantially higher than those achieved by lossless means.

  • Principal Component Analysis by Homogeneous Neural Networks, Part I: The Weighted Subspace Criterion

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics

    Vol: E75-D No:3, Page(s): 366-375

    Principal Component Analysis (PCA) is a useful technique in feature extraction and data compression. It can be formulated as a statistical constrained maximization problem, whose solution is given by unit eigenvectors of the data covariance matrix. In a practical application like image compression, the problem can be solved numerically by a corresponding gradient ascent maximization algorithm. Such on-line algorithms can be good alternatives due to their parallelism and adaptivity to input data, and they can be implemented in a local and homogeneous way in learning neural networks. One example is the Subspace Network, a regular layer of parallel artificial neurons with a learning rule that is completely homogeneous with respect to the neurons. However, due to this complete homogeneity, the learning rule does not converge to the unique basis given by the dominant eigenvectors; any basis of this eigenvector subspace is possible. In many applications like data compression, the subspace is not sufficient and the actual eigenvectors or PCA coefficient vectors are needed. A new criterion, called the Weighted Subspace Criterion, is proposed, which makes a small symmetry-breaking change to the Subspace Criterion so that only the true eigenvectors are solutions. Making the corresponding change to the learning rule of the Subspace Network gives a modified learning rule which can still be implemented on a homogeneous network architecture. In learning, the weight vectors then tend to the true eigenvectors.
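    A small sketch of why the symmetry breaking matters, assuming the plain subspace rule (the weighted update itself follows the paper and is not reproduced): the fully homogeneous rule converges to some orthonormal basis of the dominant subspace, generally a rotation of the true eigenvectors.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d, m, eta = 5, 2, 0.005
    A = rng.standard_normal((d, d))
    C = A @ A.T / d                        # data covariance matrix
    W = rng.standard_normal((d, m)) * 0.1

    for _ in range(40000):
        x = A @ rng.standard_normal(d) / np.sqrt(d)
        y = W.T @ x
        W += eta * (np.outer(x, y) - W @ np.outer(y, y))  # subspace rule

    E = np.linalg.eigh(C)[1][:, -m:]   # true dominant eigenvectors
    R = E.T @ W                        # coordinates of W in that basis
    print(np.round(R.T @ R, 2))        # ~ identity: orthonormal basis
    print(np.round(R, 2))              # generally NOT diagonal: rotated
    ```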

  • Information Geometry of Neural Networks

    Shun-ichi AMARI  

     
    INVITED PAPER

    Vol: E75-A No:5, Page(s): 531-536

    Information geometry is a new and powerful method of the information sciences. Here it is applied to manifolds of neural networks of various architectures: a new theoretical approach is proposed to the manifold of feedforward neural networks, the manifold of Boltzmann machines, and the manifold of neural networks with recurrent connections. This opens a new direction of study on families of neural networks, rather than on the behavior of single neural networks.
