
Keyword Search Result

[Keyword] (42756hit)

39841-39860hit(42756hit)

  • Applying Adaptive Credit Assignment Algorithm for the Learning Classifier System Based upon the Genetic Algorithm

    Shozo TOKINAGA  Andrew B. WHINSTON  

     
    PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    568-577

    This paper deals with an adaptive credit assignment algorithm for selecting strategies with higher capabilities in the learning classifier system (LCS) based upon the genetic algorithm (GA). We emulate a kind of prizes and incentives employed in economies with imperfect information. The compensation scheme provides automatic adjustment in response to changes in the environment and a convenient guideline for incorporating the constraints. The learning process in the GA-based LCS is realized by combining a pair of the most capable strategies (called classifiers), represented as production rules, to replace another less capable strategy, in a manner similar to the genetic operations on chromosomes in organisms. In the conventional learning classifier system, the capability s(k, t) (called strength) of a strategy k at time t is measured only by its suitability for sensing and recognizing the environment. Here, we also define and utilize the prizes and incentives obtained by employing the strategy, so that s(k, t) increases if classifier k provides good rules, and some amount is subtracted if classifier k violates the constraints. The new algorithm is applied to portfolio management. As the simulation results show, the net return of the portfolio management system surpasses the average return obtained in the American securities market. The result of the illustrative example is compared with the same system composed of neural networks, and related problems are discussed.
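    The strength update sketched in this abstract can be illustrated as follows. The function name, the prize/penalty constants, and the exact update form are assumptions for illustration, not the authors' algorithm:

    ```python
    # Hypothetical sketch of the credit-assignment update: a classifier's
    # strength s(k, t) rises by a "prize" when it provides a good rule and
    # is docked by a penalty when it violates a constraint. The constants
    # prize=1.0 and penalty=0.5 are illustrative assumptions.
    def update_strength(strength, provided_good_rule, violated_constraint,
                        prize=1.0, penalty=0.5):
        """Return s(k, t+1) given s(k, t) and the classifier's behavior at t."""
        s = strength
        if provided_good_rule:
            s += prize       # reward for a rule that suits the environment
        if violated_constraint:
            s -= penalty     # subtract some amount for violating a constraint
        return s

    s = 10.0
    s = update_strength(s, provided_good_rule=True, violated_constraint=False)
    s = update_strength(s, provided_good_rule=False, violated_constraint=True)
    print(s)  # 10.5: one prize gained, one penalty paid
    ```

    Classifiers whose strength decays under this bookkeeping are the candidates replaced by genetic recombination of stronger ones.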

  • Analysis of Fault Tolerance of Reconfigurable Arrays Using Spare Processors

    Kazuo SUGIHARA  Tohru KIKUNO  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E75-D No:3
      Page(s):
    315-324

    This paper addresses the fault tolerance of a processor array that is reconfigurable by replacing faulty processors with spare processors. The fault tolerance of such a reconfigurable array depends not only on the algorithm for spare processor assignment but also on the following factors of the organization of spare processors in the reconfigurable array: the number of spare processors; the number of processors that can be replaced by each spare processor; and how spare processors are connected with processors. We discuss the relationship between the fault tolerance of reconfigurable arrays and their organizations of spare processors in terms of the smallest size of fatal sets and the reliability function. The smallest size of fatal sets is the smallest number of faulty processors for which the reconfigurable array cannot be failure-free as a processor array system no matter what reconfiguration is used. The reliability function is a function of time t whose value is the probability that the reconfigurable array is failure-free as a processor array system by time t when the best possible reconfiguration is used. First, we show that the larger the smallest size of fatal sets a reconfigurable array has, the larger its reliability function is up to some time. This suggests that maximizing the smallest size of fatal sets is important for improving the reliability function as well. Second, we present the best possible smallest size of fatal sets for n×n reconfigurable arrays using 2n spare processors, each of which is connected with n processors. Third, we show that the n×n reconfigurable array previously presented in the literature achieves the best smallest size of fatal sets; that is, it is optimum with respect to the smallest size of fatal sets. Fourth, we present an upper bound on the reliability function of the optimum n×n reconfigurable array using 2n spare processors.
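    The "smallest size of fatal sets" can be made concrete with a brute-force sketch: a set of faulty processors is fatal when the faulty processors cannot all be replaced by distinct spares that cover them. The tiny array and the spare wiring below are illustrative assumptions, not the paper's n×n organization:

    ```python
    # Brute-force computation of the smallest fatal set for a toy spare
    # organization. spare_covers maps each spare to the set of processors
    # it is wired to (an assumption for illustration).
    from itertools import combinations, permutations

    def has_replacement(faulty, spare_covers):
        """True if each faulty processor can be assigned a distinct spare."""
        spares = list(spare_covers)
        if len(faulty) > len(spares):
            return False
        for assignment in permutations(spares, len(faulty)):
            if all(p in spare_covers[s] for p, s in zip(faulty, assignment)):
                return True
        return False

    def smallest_fatal_set_size(processors, spare_covers):
        """Smallest number of faults no reconfiguration can survive."""
        processors = list(processors)
        for k in range(1, len(processors) + 1):
            for faulty in combinations(processors, k):
                if not has_replacement(faulty, spare_covers):
                    return k
        return len(processors) + 1  # every fault pattern is survivable

    # two spares, each wired to two of the four processors
    covers = {"s1": {0, 1}, "s2": {2, 3}}
    print(smallest_fatal_set_size(range(4), covers))  # 2: {0, 1} overwhelms s1
    ```

    Wiring each spare to more processors raises the smallest fatal set (here up to 3, limited by having only two spares), which is the organizational trade-off the paper quantifies.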

  • Model-Based/Waveform Hybrid Coding for Low-Rate Transmission of Facial Images

    Yuichiro NAKAYA  Hiroshi HARASHIMA  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    377-384

    Despite its potential to realize image communication at extremely low rates, model-based coding (analysis-synthesis coding) still has problems to be solved before practical use. The main problems are the difficulty of modeling unknown objects and the presence of analysis errors. To cope with these difficulties, we incorporate waveform coding into model-based coding (model-based/waveform hybrid coding). The incorporated waveform coder can code unmodeled objects and cancel the artifacts caused by analysis errors. From a different point of view, the performance of the practically used waveform coder can be improved by incorporating model-based coding. Since the model-based coder codes the modeled part of the image at extremely low rates, more bits can be allocated to the coding of the unmodeled region. In this paper, we present the basic concept of model-based/waveform hybrid coding. We develop a model-based/MC-DCT hybrid coding system designed to improve the performance of the practically used MC-DCT coder. Simulation results show that this coding method is effective at very low transmission rates such as 16 kb/s. Image transmission at such low rates is quite difficult for an MC-DCT coder without the contribution of the model-based coder.

  • Equivalent Edge Currents by the Modified Edge Representation: Physical Optics Components

    Tsutomu MURASAKI  Makoto ANDO  

     
    PAPER-Electromagnetic Theory

      Vol:
    E75-C No:5
      Page(s):
    617-626

    The method of equivalent edge currents (MEC) involves some ambiguity in the definition of edge currents at general edge points other than diffraction points. The modified edge representation is introduced to overcome this ambiguity. The modified edge is a fictitious edge defined so as to satisfy the diffraction law for given directions of incidence and observation. The equivalent edge currents for physical optics (PO) components at general edge points are obtained by utilizing these fictitious edges and the classical Keller diffraction coefficients. The high potential of these currents is numerically demonstrated for diffraction from a disk, a square plate, and a parabolic reflector.

  • Improvement of Contactless Evaluation for Surface Contamination Using Two Lasers of Different Wavelengths to Exclude the Effect of Impedance Mismatching

    Akira USAMI  Hideki FUJIWARA  Noboru YAMADA  Kazunori MATSUKI  Tsutomu TAKEUCHI  Takao WADA  

     
    PAPER-Semiconductor Materials and Devices

      Vol:
    E75-C No:5
      Page(s):
    595-603

    This paper describes a new evaluation technique for Si surfaces. A laser/microwave method using two lasers of different wavelengths for carrier injection is proposed to evaluate Si surfaces. With this evaluation system, the effect of impedance mismatching between the microwave probe and the Si wafer can be eliminated. The lasers used in this experiment are a He-Ne laser (wavelength: 633 nm, penetration depth: 3 µm) and a YAG laser (wavelength: 1060 nm, penetration depth: 500 µm). Using a microwave probe, the amount of injected excess carriers can be detected. These carrier concentrations mainly depend on the condition of the surface when carriers are excited by the He-Ne laser, and on the condition of the bulk region when carriers are excited by the YAG laser. We refer to the microwave intensities detected under He-Ne and YAG excitation as the surface-recombination-velocity-related microwave intensity (SRMI) and the bulk-related microwave intensity (BRMI), respectively. We refer to the difference between SRMI and BRMI as the relative SRMI (R-SRMI), which is closely related to the surface condition. A theoretical analysis is performed and several experiments are conducted to evaluate Si surfaces. It is found that the R-SRMI method is better suited to surface evaluation than conventional lifetime measurements, and that the reliability and reproducibility of measurements are improved.

  • Neural Networks Applied to Speech Recognition

    Hiroaki SAKOE  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    546-551

    Applications of neural networks are prevalent in speech recognition research. In this paper, the suitable role of neural networks (mainly back-propagation-based multi-layer types) in speech recognition is first discussed. Considering that speech is a long, variable-length, structured pattern, a direction in which neural networks are used in cooperation with existing structural analysis frameworks is recommended. Activities are surveyed, including those intended to cooperatively merge neural networks into the dynamic-programming-based structural analysis framework. It is observed that considerable effort has been devoted to suppressing the high nonlinearity of network outputs. As far as surveyed, no experiments in real fields have been reported.

  • Passivity and Learnability for Mechanical Systems--A Learning Control Theory for Skill Refinement--

    Suguru ARIMOTO  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    552-560

    This paper attempts to account for the intelligibility of practice-based learning (so-called 'learning control') for skill refinement from the viewpoint of Newtonian mechanics. It is shown through an axiomatic approach that an extended notion of passivity for the residual error dynamics of robots plays a crucial role in their ability to learn. More precisely, it is shown that exponentially weighted passivity with respect to the residual velocity vector and torque vector leads the robot system to convergence of trajectory tracking errors to zero with repeated practice. For a class of tasks in which the endpoint is constrained geometrically on a surface, the problem of convergence of residual tracking errors and residual contact-force errors is also discussed on the basis of passivity analysis.
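    The exponentially weighted passivity invoked above can be sketched in symbols. The notation and the constants γ, β are our assumptions, not necessarily the paper's exact formulation: for residual torque Δτ and residual joint velocity Δq̇,

    ```latex
    \int_0^T e^{-\gamma t}\,\Delta\tau(t)^{\mathsf{T}}\,\Delta\dot{q}(t)\,dt \;\ge\; -\beta ,
    \qquad \text{for all } T > 0 \text{ and some constants } \gamma,\ \beta \ge 0 .
    ```

    Repeating the practice while an inequality of this form holds is what drives the trajectory-tracking error to zero, which is the convergence result the abstract states.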

  • Separating Capabilities of Three Layer Neural Networks

    Ryuzo TAKIYAMA  

     
    SURVEY PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    561-567

    This paper reviews the capability of the three-layer neural network (TLNN) with one output neuron. The input set is restricted to a finite subset S of E^n, and the TLNN implements a function F : S → I = {1, -1}, i.e., F is a dichotomy of S. How many functions (dichotomies) can it compute by appropriately adjusting the parameters in the TLNN? A brief historical review, some theorems on the subject obtained so far, and related topics are presented. Several open problems are also included.
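    The setting above can be made concrete with a minimal sketch: a three-layer network of threshold units with one output neuron implements a dichotomy F : S → {1, -1} on a finite input set S. The particular weights below, which realize the XOR dichotomy, are an illustrative choice, not taken from the survey:

    ```python
    # A three-layer threshold network (2 inputs, 2 hidden units, 1 output)
    # computing a dichotomy of the finite set S. Weights are hand-picked
    # for XOR as an illustration.
    def step(x):
        return 1 if x >= 0 else -1

    def tlnn(x, W1, b1, w2, b2):
        # hidden layer of threshold units, then a single output neuron
        hidden = [step(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(W1, b1)]
        return step(sum(w * h for w, h in zip(w2, hidden)) + b2)

    # hidden units realize "x1 OR x2" and "NOT (x1 AND x2)"; output ANDs them
    W1 = [[1.0, 1.0], [-1.0, -1.0]]
    b1 = [-0.5, 1.5]
    w2 = [1.0, 1.0]
    b2 = -1.5

    S = [(0, 0), (0, 1), (1, 0), (1, 1)]
    print([tlnn(x, W1, b1, w2, b2) for x in S])  # [-1, 1, 1, -1], the XOR dichotomy
    ```

    Counting how many of the 2^|S| possible dichotomies are reachable by adjusting such weights is exactly the question the survey reviews.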

  • An Approximate Algorithm for Decision Tree Design

    Satoru OHTA  

     
    PAPER-Optimization Techniques

      Vol:
    E75-A No:5
      Page(s):
    622-630

    Efficient probabilistic decision trees are required in various application areas such as character recognition. This paper presents a polynomial-time approximate algorithm for designing a probabilistic decision tree. The obtained tree is near-optimal with respect to the cost, defined as the weighted sum of the expected test execution time and the expected loss. The algorithm is advantageous over other reported heuristics in that the goodness of the solution is theoretically guaranteed: the relative deviation of the obtained tree cost from the exact optimum is not more than a positive constant ε, which can be set arbitrarily small. When the given loss function is the Hamming metric, the time efficiency is further improved by using the information-theoretic lower bound on the tree cost. The time efficiency of the algorithm and the accuracy of the solutions were evaluated through computational experiments. The results show that the computing time increases very slowly with problem size and that the relative error of the obtained solution is much less than the upper bound ε for most problems.
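    The cost criterion above (weighted sum of expected test time and expected loss) can be evaluated for a given tree as in the sketch below. The tree encoding, the 0/1 (Hamming) loss, and the example numbers are assumptions for illustration, not the paper's algorithm:

    ```python
    # Expected cost of a probabilistic decision tree:
    #   cost = w_time * E[test execution time] + w_loss * E[loss].
    # A node is ("leaf", decision) or ("test", t_exec, test_fn, {outcome: child}).
    def expected_cost(tree, cases, w_time=1.0, w_loss=1.0):
        """cases: list of (object, probability, true_class) triples."""
        exp_time = exp_loss = 0.0
        for obj, prob, true_class in cases:
            node = tree
            while node[0] == "test":
                _, t_exec, test_fn, children = node
                exp_time += prob * t_exec          # this object pays the test time
                node = children[test_fn(obj)]
            exp_loss += prob * (0.0 if node[1] == true_class else 1.0)  # 0/1 loss
        return w_time * exp_time + w_loss * exp_loss

    cases = [({"x": 0}, 0.5, "a"), ({"x": 1}, 0.5, "b")]
    tree = ("test", 2.0, lambda o: o["x"], {0: ("leaf", "a"), 1: ("leaf", "b")})
    print(expected_cost(tree, cases))  # 2.0: both objects pay the test, none misclassified
    ```

    The design problem the paper solves is the converse: choosing the tree that (approximately) minimizes this quantity, with the relative error bounded by ε.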

  • Overview of Visual Telecommunication Activities in Japan

    Takahiko KAMAE  

     
    INVITED PAPER

      Vol:
    E75-B No:5
      Page(s):
    313-318

    The state of the art in visual communication in Japan is described. First, the status of networks, which are the basis for offering visual communication services, is outlined. Visual communication services being developed on the basis of ISDN are described. Future services can be represented by NTT's service vision VI&P. Visual communication technologies and services being studied are surveyed.

  • FOREWORD

    Tosio KOGA  Shun-ichi AMARI  

     
    FOREWORD

      Vol:
    E75-A No:5
      Page(s):
    529-530
  • The Computation of Nodal Points Generated by Period Doubling Bifurcation Points on a Locus of Turning Points

    Norio YAMAMOTO  

     
    PAPER-Nonlinear Systems

      Vol:
    E75-A No:5
      Page(s):
    616-621

    As the values of parameters in periodic systems vary, a nodal point appearing on a locus of period doubling bifurcation points crosses over a locus of turning points. We consider the nodal point lying exactly on the locus of turning points and its accurate location. To compute it, we consider an extended system consisting of the original equation and an additional equation. We present a result assuring that this extended system has an isolated solution containing the nodal point.

  • Visual Communications in the U.S.

    Charles N. JUDICE  

     
    INVITED PAPER

      Vol:
    E75-B No:5
      Page(s):
    309-312

    To describe the state of visual communications in the U.S., two words come to mind: digital and anticipation. Although compressed digital video has been used in teleconferencing systems for at least ten years, it is only recently that a broad consensus has developed among diverse industries anticipating business opportunities, value, or both in digital video. The drivers for this turning point are: advances in digital signal processing; continued improvement in the cost, complexity, and speed of VLSI; maturing international standards and their adoption by vendors and end users; and a seemingly insatiable consumer demand for greater diversity, accessibility, and control of communication systems.

  • High-Fidelity Sub-Band Coding for Very High Resolution Images

    Takahiro SAITO  Hirofumi HIGUCHI  Takashi KOMATSU  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    327-339

    Very high resolution images with more than 2,000×2,000 pels will play a very important role in a wide variety of applications of future multimedia communications, ranging from electronic publishing to broadcasting. To make communication of very high resolution images practicable, we need to develop image coding techniques that can compress very high resolution images efficiently. Taking the channel capacity limitations of future communication into consideration, the requisite compression ratio is estimated to be at least 1/10 to 1/20 for color signals. Among existing image coding techniques, the sub-band coding technique is one of the most suitable. In its application to high-fidelity compression of very high resolution images, one of the major problems is how to encode high-frequency sub-band signals. High-frequency sub-band signals are well modeled as having an approximately memoryless probability distribution, and hence the best way to solve this problem is to improve their quantization. From this standpoint, the work herein first compares three different scalar quantization schemes and improved permutation codes, which the authors have previously developed by extending the concept of permutation codes, in terms of quantization performance for a memoryless probability distribution that well approximates the real statistical properties of high-frequency sub-band signals. It thus demonstrates that at low coding rates improved permutation codes outperform the other scalar quantization schemes, and that their superiority decreases as the coding rate increases.
    Moreover, building on these results, the work develops a rate-adaptive quantization technique in which the number of bits assigned to each subblock is determined according to the signal variance within the subblock, and the proper quantization scheme is chosen from among different types of quantization schemes according to the allocated number of bits. This technique is applied to the high-fidelity encoding of sub-band signals of very high resolution images to demonstrate its usefulness.
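    The variance-driven bit assignment described above can be sketched as a greedy allocation. The allocation rule (give each successive bit to the subblock with the largest remaining distortion, modeled as variance/4^bits, i.e. roughly 6 dB per bit) is an illustrative assumption, not the authors' scheme:

    ```python
    # Greedy variance-based bit allocation across subblocks: each bit goes to
    # the subblock whose modeled quantization distortion is currently largest.
    def allocate_bits(variances, total_bits):
        """Return a per-subblock bit assignment summing to total_bits."""
        bits = [0] * len(variances)
        for _ in range(total_bits):
            # distortion model: variance / 4**bits (halving error per extra bit)
            i = max(range(len(variances)),
                    key=lambda j: variances[j] / 4 ** bits[j])
            bits[i] += 1
        return bits

    # a high-variance subblock receives most of the rate budget
    print(allocate_bits([16.0, 4.0, 1.0, 1.0], 6))
    ```

    Once each subblock's rate is fixed this way, a quantizer type (e.g. an improved permutation code at low rates, a scalar quantizer at higher rates) would be selected per subblock, mirroring the adaptive scheme the abstract describes.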

  • FOREWORD

    Shin-ichi MURAKAMI  

     
    FOREWORD

      Vol:
    E75-B No:5
      Page(s):
    307-308
  • Cold Cathode with SIS Tunnel Junction

    Tetsuya TAKAMI  Kazuyoshi KOJIMA  Takashi NOGUCHI  Koichi HAMANAKA  

     
    PAPER-Superconductive Electronics

      Vol:
    E75-C No:5
      Page(s):
    604-609

    The energy distribution and emission efficiency of electrons emitted from a superconductor-insulator-superconductor (SIS) junction have been investigated by numerical calculation adopting the free-electron model. The emission efficiency of an SIS junction cold cathode was found to be about 0.3% of the tunneling current flowing through the SIS junction when the energy gap voltage of the superconductor was 20 meV, the work function of the counter electrode 1 eV, the bias voltage 0.96 V, the thickness of the counter electrode 100 Å, the electric field strength between the plate and the counter electrode 10^6 V/m, and the relaxation time 0.01 ps. It is clear that the SIS junction cold cathode can emit electrons with a sharper energy distribution at much the same efficiency as a metal-insulator-metal (MIM) junction cold cathode.

  • Information Geometry of Neural Networks

    Shun-ichi AMARI  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    531-536

    Information geometry is a powerful new method in the information sciences. Here it is applied to manifolds of neural networks of various architectures. A new theoretical approach is proposed to the manifold consisting of feedforward neural networks, the manifold of Boltzmann machines, and the manifold of neural networks with recurrent connections. This opens a new direction for studies of families of neural networks, rather than the behavior of single neural networks.

  • On Translating a Set of C-Oriented Faces in Three Dimensions

    Xue-Hou TAN  Tomio HIRATA  Yasuyoshi INAGAKI  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E75-D No:3
      Page(s):
    258-264

    Recently much attention has been devoted to the problem of translating a set of geometrical objects in a given direction, one at a time, without allowing collisions between the objects. This paper studies the translation problem in three dimensions on a set of "c-oriented faces", that is, faces whose bounding edges have a constant number c of orientations. We solve the problem in O(N log^2 N + K) time and O(N log N) space, where N is the total number of edges of the faces and K is the number of edge intersections in the projection plane. As an intermediate step, we also solve a problem related to ray shooting. The algorithm for translating c-oriented faces finds use in computer graphics systems.
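    The core ordering idea can be sketched simply: if face A lies in front of face B along the translation direction, A must be moved before B, and a valid one-at-a-time schedule is any topological order of this "blocks" relation (the paper's contribution is computing that relation efficiently for c-oriented faces). The three-face dependency graph below is an assumption for illustration:

    ```python
    # A valid one-at-a-time translation order is a topological order of the
    # precedence relation "must be translated before". graphlib maps each
    # node to its predecessors: here B can move only after A, C only after B.
    from graphlib import TopologicalSorter

    deps = {"B": {"A"}, "C": {"B"}}  # illustrative blocking relation
    order = list(TopologicalSorter(deps).static_order())
    print(order)  # a collision-free translation schedule
    ```

    If the relation contains a cycle (faces mutually blocking each other), `static_order` raises `CycleError`, i.e. no one-at-a-time translation schedule exists.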

  • An Intercomparison between MSR and SI Retrieved Rain Rates

    Yuji OHSAKI  Masaharu FUJITA  

     
    LETTER-Satellite Communication

      Vol:
    E75-B No:5
      Page(s):
    422-426

    Rain rates are estimated from brightness temperatures measured with the Microwave Scanning Radiometer (MSR) carried on board the Marine Observation Satellite 1 (MOS-1). Estimates are made using a rain-rate retrieval algorithm based on a radiative-transfer model assuming rain distributed uniformly over the ocean. These values are compared with the Satellite-Derived Index of Precipitation Intensity (SI), which estimates the rain rate from visible and infrared images of a Geostationary Meteorological Satellite in conjunction with rain observations by a radar network of the Japan Meteorological Agency. The good correlation between MSR- and SI-derived rain rates validates the rain-rate retrieval algorithm.

  • Analysis of Economics of Computer Backup Service

    Marshall FREIMER  Ushio SUMITA  Hsing K. CHENG  

     
    PAPER-Switching and Communication Processing

      Vol:
    E75-B No:5
      Page(s):
    385-400

    An organization may suffer large losses if its computer service is interrupted. For protection, it can purchase computer backup service from the outside market, which temporarily provides a service replacement from a central facility. A dynamic probabilistic model is developed to describe such a computer backup service system. The parties involved have conflicting motivations: the supplier is interested in optimizing his expected profit subject to a given set of parameters, while each subscriber evaluates the service contract in his own best interest. This paper analyzes how the economic interests of the supplier and subscribers interact, based on a dynamic reliability analysis of their respective computer systems. Assuming all physical parameters are fixed, the supplier's optimal value in terms of the economic parameters is determined. An algorithmic procedure is developed for computing such values. Some numerical examples are presented to give insight into the system.
