
Keyword Search Result

[Keyword] IN(26286hit)

26161-26180hit(26286hit)

  • Improvement of Contactless Evaluation for Surface Contamination Using Two Lasers of Different Wavelengths to Exclude the Effect of Impedance Mismatching

    Akira USAMI  Hideki FUJIWARA  Noboru YAMADA  Kazunori MATSUKI  Tsutomu TAKEUCHI  Takao WADA  

     
    PAPER-Semiconductor Materials and Devices

      Vol:
    E75-C No:5
      Page(s):
    595-603

    This paper describes a new evaluation technique for Si surfaces. A laser/microwave method using two lasers of different wavelengths for carrier injection is proposed to evaluate Si surfaces. With this evaluation system, the effect of impedance mismatching between the microwave probe and the Si wafer can be eliminated. The lasers used in this experiment are a He-Ne laser (wavelength 633 nm, penetration depth 3 µm) and a YAG laser (wavelength 1060 nm, penetration depth 500 µm). Using a microwave probe, the amount of injected excess carriers can be detected. These carrier concentrations depend mainly on the condition of the surface when carriers are excited by the He-Ne laser, and on the condition of the bulk region when carriers are excited by the YAG laser. We refer to the microwave intensities detected under He-Ne and YAG excitation as the surface-recombination-velocity-related microwave intensity (SRMI) and the bulk-related microwave intensity (BRMI), respectively. We refer to the difference between SRMI and BRMI as the relative SRMI (R-SRMI), which is closely related to the surface condition. A theoretical analysis is performed and several experiments are conducted to evaluate Si surfaces. It is found that the R-SRMI method is better suited to surface evaluation than conventional lifetime measurements, and that the reliability and reproducibility of measurements are improved.
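
    Since R-SRMI is defined as a simple difference of the two detected intensities, the bookkeeping can be written down directly; the function and the numerical readings below are purely illustrative assumptions, not values from the paper:

      def relative_srmi(srmi: float, brmi: float) -> float:
          """R-SRMI = SRMI - BRMI (surface-sensitive minus bulk-sensitive)."""
          return srmi - brmi

      # Hypothetical normalized readings: a contaminated surface raises surface
      # recombination, which mainly depresses the He-Ne (SRMI) signal.
      print(relative_srmi(srmi=0.82, brmi=0.90))   # clean region:  -0.08
      print(relative_srmi(srmi=0.55, brmi=0.89))   # contaminated:  -0.34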

  • An Adaptive Antenna System for High-Speed Digital Mobile Communications

    Yasutaka OGAWA  Yasuyuki NAGASHIMA  Kiyohiko ITOH  

     
    PAPER-Antennas and Propagation

      Vol:
    E75-B No:5
      Page(s):
    413-421

    High-speed digital land mobile communications suffer from frequency-selective fading due to long delay differences. Several techniques have been proposed to overcome the multipath propagation problem. Among them, an adaptive array antenna is suitable for very high-speed transmission because it can significantly suppress multipath signals with long delay differences. This paper describes an LMS adaptive array antenna for frequency-selective fading reduction and a new diversity technique. First, we propose a method to generate a reference signal in the LMS adaptive array. At the beginning of communication, we use training codes, which are known at the receiver, as the reference signal. After the training period, we use detected codes for the reference signal. The reference signal is generated by modulating a carrier at the receiver with those codes. Since this carrier is oscillated independently of the incident signal, the carrier frequency of the reference signal generally differs from that of the incident signal. However, the LMS adaptive array works in such a way that the carrier frequency of the array output coincides with that of the reference signal; hence, the frequency difference does not affect the performance of the LMS adaptive array. Computer simulations show the proper behavior of the LMS adaptive array with the above reference signal generator. Moreover, we present a new multipath diversity technique using the LMS adaptive array. The LMS adaptive array reduces frequency-selective fading by suppressing the multipath components, which means that the transmitted power is not used fully. We therefore propose a multiple-beam antenna with the LMS adaptive array. Each antenna pattern receives one of the multipath components, and we combine them after adjusting the timing, thereby realizing multipath diversity. In addition to reducing multipath fading, the diversity technique improves the signal-to-noise ratio.
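
    As a rough illustration of the reference-signal scheme (known training codes first, then detected codes), here is a minimal complex LMS beamformer; the BPSK signaling, step size, and interfaces are illustrative assumptions, not the paper's configuration:

      import numpy as np

      def lms_array(X, train, mu=0.01):
          """X: (snapshots, elements) complex array inputs; train: +/-1 training codes.
          After the training period the reference is rebuilt from hard decisions."""
          w = np.zeros(X.shape[1], dtype=complex)
          out = np.empty(len(X), dtype=complex)
          for k in range(len(X)):
              y = np.vdot(w, X[k])                      # array output w^H x
              d = train[k] if k < len(train) else (1.0 if y.real >= 0 else -1.0)
              e = d - y                                  # error against the reference
              w += mu * np.conj(e) * X[k]                # LMS weight update
              out[k] = y
          return w, out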

  • Analysis of Time Transient EM Field Response from a Dielectric Spherical Cavity

    Hiroshi SHIRAI  Eiji NAKANO  Mikio YANO  

     
    PAPER-Electromagnetic Theory

      Vol:
    E75-C No:5
      Page(s):
    627-634

    Transient responses of a dielectric sphere are analyzed here for a dipole source located at its center. The formulation is first constructed in the frequency domain and then transformed into the time domain to obtain the impulse response by two analytical methods, namely the Singularity Expansion Method and the Wavefront Expansion Method. While the former method collects the contributions around the singularities in the complex frequency domain, the latter gives a result that is a summation of successive wavefront arrivals. A Gaussian pulse is introduced to simulate an impulse response result. The Gaussian pulse response is analytically formulated by convolving a Gaussian pulse with the corresponding impulse response. Numerical inversion results are also calculated by the Fast Fourier Transform algorithm. Numerical examples are shown to compare the results obtained by these three methods, and good agreement is obtained between them. Comments are also made in connection with the corresponding two-dimensional cylindrical case.
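
    The Gaussian pulse response described here is a plain convolution of the impulse response with the excitation, which can also be checked numerically through the FFT; the impulse response below is a made-up placeholder, since the abstract does not reproduce the paper's modal expressions:

      import numpy as np

      dt = 1e-3
      t = np.arange(0, 4, dt)
      h = np.exp(-t) * np.sin(40 * t)                   # placeholder impulse response
      g = np.exp(-((t - 0.5) ** 2) / (2 * 0.02 ** 2))   # Gaussian excitation pulse

      # Time-domain route: direct convolution
      y_time = np.convolve(g, h)[:len(t)] * dt

      # Frequency-domain route: multiply spectra, invert with the FFT
      n = 2 * len(t)                                     # zero-pad against wrap-around
      y_fft = np.fft.irfft(np.fft.rfft(g, n) * np.fft.rfft(h, n), n)[:len(t)] * dt

      print(np.max(np.abs(y_time - y_fft)))              # the two routes agree closely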

  • On Translating a Set of C-Oriented Faces in Three Dimensions

    Xue-Hou TAN  Tomio HIRATA  Yasuyoshi INAGAKI  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E75-D No:3
      Page(s):
    258-264

    Recently much attention has been devoted to the problem of translating a set of geometrical objects in a given direction, one at a time, without allowing collisions between the objects. This paper studies the translation problem in three dimensions for a set of "c-oriented faces", that is, faces whose bounding edges have a constant number c of orientations. We solve the problem in O(N log² N + K) time and O(N log N) space, where N is the total number of edges of the faces and K is the number of edge intersections in the projection plane. As an intermediate step, we also solve a problem related to ray-shooting. The algorithm for translating c-oriented faces finds use in computer graphics systems.
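
    The underlying ordering question, which object may be translated first, can be shown with a deliberately brute-force toy: object B must move before A if B lies ahead of A along the translation direction where their projections overlap, and a topological sort of these constraints yields a valid order. The paper's point is computing such an order efficiently with a sweep and a ray-shooting structure; the quadratic sketch below (axis-aligned boxes, translation along +z) only illustrates the idea:

      from graphlib import TopologicalSorter

      # Boxes as (xmin, xmax, ymin, ymax, zmin, zmax), assumed pairwise disjoint.
      def translation_order(boxes):
          before = {i: set() for i in range(len(boxes))}   # i waits for all in before[i]
          for i, a in enumerate(boxes):
              for j, b in enumerate(boxes):
                  overlap_xy = (i != j and a[0] < b[1] and b[0] < a[1]
                                and a[2] < b[3] and b[2] < a[3])
                  if overlap_xy and b[4] >= a[5]:          # b sits above a,
                      before[i].add(j)                     # so b must be moved first
          return list(TopologicalSorter(before).static_order())

      print(translation_order([(0, 2, 0, 2, 0, 1), (0, 2, 0, 2, 2, 3), (5, 6, 0, 1, 0, 9)]))
      # prints a valid order, e.g. [1, 2, 0]: the upper box leaves before the one beneath it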

  • Information Geometry of Neural Networks

    Shun-ichi AMARI  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    531-536

    Information geometry is a powerful new method in the information sciences. Here it is applied to manifolds of neural networks of various architectures. A new theoretical approach is proposed to the manifold of feedforward neural networks, the manifold of Boltzmann machines, and the manifold of neural networks with recurrent connections. This opens a new direction of study on families of neural networks, rather than on the behavior of single neural networks.

  • Understanding Conversational Sentences Using Multi-Paradigm World Knowledge

    Teruhiko UKITA  Satoshi KINOSHITA  Kazuo SUMITA  Hiroshi SANO  Shin'ya AMANO  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E75-D No:3
      Page(s):
    352-362

    Resolving ambiguities in interpreting the user's utterances is one of the most fundamental problems in the development of a question-answering system. The disambiguation process requires knowledge of, and inference functions on, the objective task field. This paper describes a framework for understanding conversational language using a multi-paradigm knowledge representation ("frames" and "rules") which represents concept hierarchies and causal relationships in an objective field. Knowledge of the objective field is used to interpret input sentences as a model of the objective world. In interpreting sentences, a procedure judges preferences among interpretation candidates by identifying causal relationships with messages in the preceding context, where the causal relationship is used to supplement missing information and to give either an affirmative or a negative explanation of the interpretation. The procedure has been implemented in an experimental question-answering system whose current task is consultation on operating an electronic device. Experimental results are shown for a concrete problem involving the resolution of anaphoric references, and characteristics of the knowledge processing system are discussed.
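
    As a toy picture of the "frames plus rules" combination (entirely illustrative; the abstract does not give the paper's representation language): frames carry the concept hierarchy, rules carry causal links, and an interpretation candidate is preferred when a rule connects it to the preceding context:

      # Hypothetical miniature knowledge base for a device-consultation dialogue.
      frames = {
          "copier": {"is_a": "device", "parts": ["tray", "toner"]},
          "tray":   {"is_a": "part"},
          "toner":  {"is_a": "part"},
      }
      rules = [("toner_empty", "print_too_light"),   # (cause, effect) pairs
               ("tray_empty", "no_output")]

      def prefer(candidates, context):
          """Prefer the candidate reading that some causal rule explains."""
          def support(c):
              return sum(1 for cause, effect in rules
                         if effect == c and cause in context)
          return max(candidates, key=support)

      # "It is too light" is ambiguous; the toner warning in context resolves it.
      print(prefer(["print_too_light", "no_output"], {"toner_empty"}))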

  • Analysis of Fault Tolerance of Reconfigurable Arrays Using Spare Processors

    Kazuo SUGIHARA  Tohru KIKUNO  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E75-D No:3
      Page(s):
    315-324

    This paper addresses the fault tolerance of a processor array that is reconfigurable by replacing faulty processors with spare processors. The fault tolerance of such a reconfigurable array depends not only on the algorithm for spare processor assignment but also on the following factors of the spare-processor organization: the number of spare processors; the number of processors that can be replaced by each spare processor; and how spare processors are connected with processors. We discuss the relationship between the fault tolerance of reconfigurable arrays and their organizations of spare processors in terms of the smallest size of fatal sets and the reliability function. The smallest size of fatal sets is the smallest number of faulty processors for which the reconfigurable array cannot be failure-free as a processor array system no matter what reconfiguration is used. The reliability function is a function of time t whose value is the probability that the reconfigurable array is failure-free as a processor array system by time t when the best possible reconfiguration is used. First, we show that the larger the smallest size of fatal sets of a reconfigurable array, the larger its reliability function up to some time. This suggests that maximizing the smallest size of fatal sets is important for improving the reliability function as well. Second, we present the best possible smallest size of fatal sets for n×n reconfigurable arrays using 2n spare processors, each of which is connected with n processors. Third, we show that the n×n reconfigurable array previously presented in the literature achieves this best smallest size of fatal sets; that is, it is optimum with respect to the smallest size of fatal sets. Fourth, we present an upper bound on the reliability function of the optimum n×n reconfigurable array using 2n spare processors.
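
    The two measures can be explored numerically for a concrete spare organization. The sketch below assumes a hypothetical layout, one spare per row and one per column of an n×n array (2n spares, each connected to the n processors of its line); the array survives a fault set exactly when every faulty processor can be assigned a distinct spare it is connected to, a bipartite matching test. This illustrates the definitions, not the paper's optimal organization:

      import random

      def repairable(faults, n):
          """faults: set of (row, col). Spares: one per row (id r) and per
          column (id n+c). Kuhn's augmenting-path bipartite matching."""
          match = {}                                  # spare id -> fault it replaces
          def augment(f, seen):
              for s in (f[0], n + f[1]):              # spares connected to fault f
                  if s not in seen:
                      seen.add(s)
                      if s not in match or augment(match[s], seen):
                          match[s] = f
                          return True
              return False
          return all(augment(f, set()) for f in faults)

      def reliability(n, p_fail, trials=20000):
          """Estimate P(failure-free) with i.i.d. processor failures."""
          ok = 0
          for _ in range(trials):
              faults = {(r, c) for r in range(n) for c in range(n)
                        if random.random() < p_fail}
              ok += repairable(faults, n)
          return ok / trials

      print(reliability(n=4, p_fail=0.05))   # p_fail = 1 - exp(-lambda*t) at time t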

  • A Testable Design of Sequential Circuits under Highly Observable Condition

    WEN Xiaoqing  Kozo KINOSHITA  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E75-D No:3
      Page(s):
    334-341

    Under the highly observable condition, which is mainly based on the use of E-beam testers, the outputs of all gates in a circuit are assumed to be observable. When using an E-beam tester, it is desirable that the test set for a circuit be small and that the test vectors in the test set can be applied in a successive and repetitive manner. For a combinational circuit, these requirements can be satisfied by modifying the circuit into a k-UCP circuit, which needs only a small number of tests for diagnosis. For a sequential circuit, however, even if the combinational portion has been modified into a k-UCP circuit, the test vectors for the combinational portion cannot always be applied in a successive and repetitive manner because of the existence of feedback loops. To solve this problem, the concept of k-UCP scan circuits is proposed in this paper. It is shown that the test vectors for the combinational portion of a k-UCP scan circuit can be applied in a successive and repetitive manner through a specially constructed scan path. An efficient method of modifying a sequential circuit into a k-UCP scan circuit is also presented.

  • New Classes of Majority-Logic Decodable Double Error Correcting Codes for Computer Memories

    Toshio HORIGUCHI  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E75-D No:3
      Page(s):
    325-333

    A new class of (m²+3m+1, m²) 1-step majority-logic decodable double error correcting codes (1-step DEC codes) is described, where m is an odd integer. Combining this code with properly constructed (m+1+k1, k1) and (m, k2) 1-step DEC codes, an (m²+3(m+k1)+1, m²+3k1) 1-step DEC code and an (m²+3(m+k2)+1, m²) 2-step majority-logic decodable DEC code (2-step DEC code) are obtained, respectively. Considering computer memory applications, some practical 1- and 2-step DEC codes with data-bit lengths of 24, 32, 64 and 72 are obtained by shortening the new codes, and are compared to existing majority-logic decodable DEC codes. It is shown that, for given data-bit lengths, the new 2-step DEC codes have much better code rates than self-orthogonal DEC codes but slightly worse code rates than existing 2-step majority-logic decodable cyclic DEC codes (2-step cyclic DEC codes). However, parallel decoders of the new 2-step DEC codes are much simpler than those of existing 2-step cyclic DEC codes, and are nearly as simple as those of 1-step DEC codes.
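
    One-step majority-logic decoding itself is easy to demonstrate. The toy below decodes the (5,1) repetition code, for which the four check sums r0+rj (j = 1..4) are orthogonal on the error digit e0; a majority vote over J = 4 such sums corrects any two errors. The paper's codes apply the same principle at far better code rates:

      def majority_decode_bit0(r):
          """r: five received bits of the (5,1) repetition code.
          Each check r[0]^r[j] contains e0, and no other error digit
          appears in more than one check; they are orthogonal on e0."""
          checks = [r[0] ^ r[j] for j in range(1, 5)]
          e0 = 1 if sum(checks) > 2 else 0     # majority vote over J = 4 checks
          return r[0] ^ e0

      received = [0, 1, 0, 1, 1]               # all-ones codeword with two errors
      print(majority_decode_bit0(received))    # -> 1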

  • High-Fidelity Sub-Band Coding for Very High Resolution Images

    Takahiro SAITO  Hirofumi HIGUCHI  Takashi KOMATSU  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    327-339

    Very high resolution images with more than 2,000×2,000 pels will play a very important role in a wide variety of applications of future multimedia communications, ranging from electronic publishing to broadcasting. To make communication of very high resolution images practicable, we need image coding techniques that can compress very high resolution images efficiently. Taking the channel capacity limitation of future communication into consideration, the requisite compression ratio is estimated to be at least 1/10 to 1/20 for color signals. Among existing image coding techniques, sub-band coding is one of the most suitable. In applying it to high-fidelity compression of very high resolution images, one of the major problems is how to encode the high-frequency sub-band signals. High-frequency sub-band signals are well modeled as having an approximately memoryless probability distribution, and hence the best way to solve this problem is to improve their quantization. From this standpoint, the work herein first compares three different scalar quantization schemes and improved permutation codes, which the authors have previously developed by extending the concept of permutation codes, in terms of quantization performance for a memoryless probability distribution that well approximates the real statistical properties of high-frequency sub-band signals. It demonstrates that at low coding rates improved permutation codes outperform the other scalar quantization schemes, and that their superiority decreases as the coding rate increases. Building on these results, the work then develops a rate-adaptive quantization technique in which the number of bits assigned to each subblock is determined according to the signal variance within the subblock, and the proper quantization scheme is chosen from among different types of quantization schemes according to the allocated number of bits. This technique is applied to the high-fidelity encoding of sub-band signals of very high resolution images to demonstrate its usefulness.
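
    The rate-adaptive part can be sketched with the classical log-variance bit-allocation rule; the rule and the numbers below are standard textbook choices, offered as a guess at the flavor of the scheme rather than the paper's exact procedure:

      import numpy as np

      def allocate_bits(variances, mean_rate):
          """b_i = mean_rate + 0.5*log2(var_i / geometric mean of variances)."""
          v = np.asarray(variances, dtype=float)
          gm = np.exp(np.mean(np.log(v)))
          bits = mean_rate + 0.5 * np.log2(v / gm)
          return np.clip(np.round(bits), 0, None).astype(int)

      # Subblock variances within one high-frequency sub-band (illustrative)
      var = [0.02, 0.4, 3.1, 0.09, 1.2, 0.02, 0.7, 5.5]
      print(allocate_bits(var, mean_rate=2))
      # busier subblocks get more bits; a quantizer type is then chosen per b_i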

  • Principal Component Analysis by Homogeneous Neural Networks, Part I: The Weighted Subspace Criterion

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics

      Vol:
    E75-D No:3
      Page(s):
    366-375

    Principal Component Analysis (PCA) is a useful technique in feature extraction and data compression. It can be formulated as a constrained statistical maximization problem, whose solution is given by unit eigenvectors of the data covariance matrix. In a practical application like image compression, the problem can be solved numerically by a corresponding gradient ascent maximization algorithm. Such on-line algorithms can be good alternatives due to their parallelism and adaptivity to input data. The algorithms can be implemented in a local and homogeneous way in learning neural networks. One example is the Subspace Network. It is a regular layer of parallel artificial neurons with a learning rule that is completely homogeneous with respect to the neurons. However, due to the complete homogeneity, the learning rule does not converge to the unique basis given by the dominant eigenvectors; any basis of this eigenvector subspace is possible. In many applications like data compression, the subspace is not sufficient and the actual eigenvectors or PCA coefficient vectors are needed. A new criterion, called the Weighted Subspace Criterion, is proposed, which makes a small symmetry-breaking change to the Subspace Criterion so that only the true eigenvectors are solutions. Making the corresponding change to the learning rule of the Subspace Network gives a modified learning rule, which can still be implemented on a homogeneous network architecture. In learning, the weight vectors then tend to the true eigenvectors.
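
    A small numerical sketch of the symmetry-breaking idea. The update below uses a weighted form of the subspace rule, Δw_i = η·y_i·(x − θ_i·Σ_j y_j·w_j), with distinct positive weights θ_i; this particular form and its constants are our reading of the criterion, so treat the details as assumptions rather than the paper's exact rule:

      import numpy as np

      rng = np.random.default_rng(0)
      C = np.diag([5.0, 2.0, 0.5])               # data covariance; eigenvectors = axes
      X = rng.multivariate_normal(np.zeros(3), C, size=20000)

      eta = 0.002
      theta = np.array([1.0, 2.0])                # distinct weights break the symmetry
      W = 0.1 * rng.normal(size=(2, 3))           # rows are neuron weight vectors

      for x in X:
          y = W @ x                                # outputs y_i = w_i . x
          fb = y @ W                               # shared feedback sum_j y_j w_j
          W += eta * y[:, None] * (x - theta[:, None] * fb)

      print(np.round(W, 2))   # rows align with coordinate axes (true eigenvectors),
                              # not with an arbitrary basis of their span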

  • The Self-Validating Numerical Method--A New Tool for Computer Assisted Proofs of Nonlinear Problems--

    Shin'ichi OISHI  

     
    INVITED SURVEY PAPER-Nonlinear Systems

      Vol:
    E75-A No:5
      Page(s):
    595-612

    The purpose of the present paper is to review the state of the art of nonlinear analysis with the self-validating numerical method. Self-validating numerics provide a tool for performing computer-assisted proofs of nonlinear problems by rigorously taking into account the effect of rounding errors in numerical computations. First, Kantorovich's approach to a posteriori error estimation is surveyed, which is based on his convergence theorem for Newton's method. Then, Urabe's approach to computer-assisted existence proofs is discussed. Based on his convergence theorem for the simplified Newton method, he treated practical nonlinear differential equations such as the Van der Pol equation and the Duffing equation, and proved the existence of their periodic and quasi-periodic solutions by self-validating numerics. The author's approach, which generalizes and abstracts Urabe's method to more general functional equations, is also described. Furthermore, methods for the rigorous estimation of rounding errors are surveyed, and interval analytic methods are discussed. Then an approach of the author which uses rational arithmetic is reviewed. Finally, approaches to computer-assisted proofs of nonlinear problems based on self-validating numerics are surveyed.
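
    The flavor of a self-validating computation can be conveyed by a tiny interval-Newton step: if N(X) = m − f(m)/f'(X) maps the interval X strictly into itself, a unique zero of f is proved to lie in X. The sketch hard-codes f(x) = x² − 2 with naive interval arithmetic; a real implementation would also direct the rounding, which plain floats do not:

      def isub(a, b):
          return (a[0] - b[1], a[1] - b[0])

      def idiv(a, b):
          assert b[0] > 0 or b[1] < 0, "divisor interval must exclude 0"
          q = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
          return (min(q), max(q))

      def newton_step(X):
          """One interval-Newton step for f(x) = x^2 - 2 on X in the positive axis."""
          m = 0.5 * (X[0] + X[1])
          fm = (m * m - 2.0, m * m - 2.0)      # point interval for f(m)
          dfX = (2.0 * X[0], 2.0 * X[1])       # interval enclosure of f'(X) = 2X
          return isub((m, m), idiv(fm, dfX))

      X = (1.0, 2.0)
      N = newton_step(X)                        # -> (1.375, 1.4375)
      print(N, X[0] < N[0] and N[1] < X[1])     # strict inclusion: sqrt(2) proved in X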

  • Applying Adaptive Credit Assignment Algorithm for the Learning Classifier System Based upon the Genetic Algorithm

    Shozo TOKINAGA  Andrew B. WHINSTON  

     
    PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    568-577

    This paper deals with an adaptive credit assignment algorithm for selecting strategies with higher capabilities in the learning classifier system (LCS) based upon the genetic algorithm (GA). We emulate the kind of prizes and incentives employed in economies with imperfect information. The compensation scheme provides automatic adjustment in response to changes in the environment, and a convenient guideline for incorporating the constraints. The learning process in the LCS based on the GA is realized by combining a pair of the most capable strategies (called classifiers), represented as production rules, to replace another, less capable strategy, in a manner similar to the genetic operation on chromosomes in organisms. In the conventional learning classifier system, the capability s(k, t) (called strength) of a strategy k at time t is measured only by its suitability for sensing and recognizing the environment. We additionally define and utilize the prizes and incentives obtained by employing the strategy, so that s(k, t) is increased if classifier k provides good rules, and some amount is subtracted if classifier k violates the constraints. The new algorithm is applied to portfolio management. As the simulation results show, the net return of the portfolio management system surpasses the average return obtained in the American securities market. The result of the illustrative example is compared to the same system composed of neural networks, and related problems are discussed.
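
    A minimal sketch of the strength bookkeeping described here: the usual bid-and-payoff cycle of a classifier system, extended with a prize term and a constraint penalty. All names and coefficients are illustrative assumptions:

      def update_strength(s, payoff, prize=0.0, penalty=0.0, bid_rate=0.1):
          """One credit-assignment step for the strength s(k, t) of classifier k.
          payoff  : conventional reward for matching the environment,
          prize   : incentive earned by employing the strategy (added term),
          penalty : amount subtracted when a constraint is violated."""
          bid = bid_rate * s                  # an active classifier pays its bid
          return s - bid + payoff + prize - penalty

      s = 100.0
      for t in range(3):
          s = update_strength(s, payoff=12.0, prize=3.0, penalty=0.0)
          print(round(s, 2))                  # 105.0, 109.5, 113.55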

  • A Model for the Development of the Spatial Structure of Retinotopic Maps and Orientation Columns

    Klaus OBERMAYER  Helge RITTER  Klaus J. SCHULTEN  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    537-545

    Topographic maps are beginning to be recognized as one of the major computational structures underlying neural computation in the brain. They provide dimension-reducing projections between feature spaces that seem to be established and maintained under the participation of self-organizing, adaptive processes. In this contribution, we investigate how well the structure of such maps can be replicated by simple adaptive processes of the kind proposed by Kohonen. We particularly address the important issue of how the dimensionality of the input space affects the spatial organization of the resulting map.
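
    A compact version of the Kohonen-type adaptive process referred to here: a two-dimensional grid of units is fitted to a feature space by repeatedly moving the best-matching unit and its grid neighbors toward a random stimulus. Grid size, rates, and the flat two-dimensional feature space are illustrative choices:

      import numpy as np

      rng = np.random.default_rng(1)
      grid = 16                                    # 16x16 "cortical" sheet
      W = rng.uniform(0, 1, size=(grid, grid, 2))  # each unit maps to a retinal (x, y)
      gy, gx = np.mgrid[0:grid, 0:grid]

      for t in range(5000):
          x = rng.uniform(0, 1, size=2)            # random retinal stimulus
          d = np.sum((W - x) ** 2, axis=2)
          by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
          sigma = 4.0 * np.exp(-t / 2000)           # shrinking neighborhood radius
          h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
          W += 0.1 * h[:, :, None] * (x - W)        # move neighborhood toward stimulus

      # W now approximates a smooth retinotopic map of the unit square; repeating
      # this with higher-dimensional stimuli (e.g. position plus orientation) is
      # the dimension-reduction setting studied in the paper.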

  • Image Compression and Regeneration by Nonlinear Associative Silicon Retina

    Mamoru TANAKA  Yoshinori NAKAMURA  Munemitsu IKEGAMI  Kikufumi KANDA  Taizou HATTORI  Yasutami CHIGUSA  Hikaru MIZUTANI  

     
    PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    586-594

    There are two types of nonlinear associative silicon retinas. One is a sparse Hopfield-type neural network, called an H-type retina, and the other is its dual network, called a DH-type retina. The input information sequences of H-type and DH-type retinas are given to nodes and links as voltages and currents, respectively. The error-correcting capacity (minimum basin of attraction) of H-type and DH-type retinas is determined by the minimum number of links in a cutset and in a loop, respectively. The operating principle of the regeneration is based on the voltage or current distribution of the neural field. The most important nonlinear operation in the retinas is a dynamic quantization that decides the binary value of each neuron output from neighboring values. Also, edges are emphasized by a line process. The compression rates of the H-type and DH-type retinas used in the simulation are 1/8 and (2/3)·(1/8) respectively, where 2/3 and 1/8 are the rates of structural and binarization compression, respectively. The simulation results were significant enough to justify implementing the network as a chip.

  • 45Mbps Multi-Channel Composite TV Coding System

    Shuichi MATSUMOTO  Takahiro HAMADA  Masahiro SAITO  Hitomi MURAKAMI  

     
    PAPER

      Vol:
    E75-B No:5
      Page(s):
    358-367

    In recent years, the digitalization of transmission links, such as optical fibre cables, satellite links, and terrestrial microwave links, has progressed rapidly in many countries. In addition, many types of digital studio equipment have been developed, and TV programs can be produced or edited without any picture quality degradation by using such equipment, for example, digital VTRs. A high-efficiency bit-reduction coding system is the most promising and effective means in this situation for reducing the cost of digital transmission of TV programs with high picture quality. Against this background, a new digital coding system has been developed which makes it possible to transmit up to 4 NTSC TV programs simultaneously over a single DS3 45Mbps link, including two high-quality sound channels and one 64kbps ancillary data channel for each TV program. The principal bit-reduction technique employed is two-dimensional intraframe WHT (Walsh Hadamard Transform) coding, which gives higher coding performance for composite TV signals than DCT (Discrete Cosine Transform) coding. In order to attain high picture quality at around 8Mbps for 4-channel transmission, a three-dimensional adaptive quantization cube that sufficiently reflects human visual perception is employed in the intraframe WHT coding scheme. The hardware has been made as compact as a home-use VTR. In this paper, the algorithm of the coding scheme developed for the system is presented first, and then the system configuration and its basic coding performance are described.
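
    The core transform is compact enough to state directly: the two-dimensional WHT of an N×N block is Y = (1/N)·H·X·H^T, with H the order-N Hadamard matrix. The block size 8 below is an assumption for illustration; the abstract does not state the system's block size:

      import numpy as np

      def hadamard(n):
          """Sylvester construction; n must be a power of two."""
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      def wht2d(block):
          """Two-dimensional Walsh-Hadamard transform, orthonormal scaling."""
          H = hadamard(block.shape[0])
          return H @ block @ H.T / block.shape[0]

      x = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 image block
      y = wht2d(x)                                    # coefficients to be quantized
      print(np.allclose(wht2d(y), x))                 # this scaling is its own inverse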

  • A Batcher-Double-Omega Network with Combining

    Kalidou GAYE  Hideharu AMANO  

     
    PAPER-Computer Networks

      Vol:
    E75-D No:3
      Page(s):
    307-314

    The Batcher banyan network is well known as a non-blocking switching fabric. However, it is conflict-free only when no two packets are destined for the same output. To cope with arbitrary combinations of packets, an additional network or a special control sequence is required, which increases hardware or degrades performance. The Batcher Double Omega network with Combining (BDOC) is an elegant solution to this problem. It consists of a Batcher sorter and two double-sized Omega networks. As in the Batcher banyan network, packets are sorted by destination label in the Batcher sorter. In the first Omega network, called the distributor, a packet is routed by a tag corresponding to the sum of its label at the output of the Batcher sorter and its destination label. In the second (inverse) Omega network, called the concentrator, the original destination label is used as the routing tag, and packets are routed without any conflict. The BDOC is useful as an interconnection network to connect processors and memory modules in a multiprocessor. Unlike conventional multistage interconnection networks for multiprocessors, packets are transferred in a serial and synchronized manner. The simple structure of the switching element enables high-speed operation, which reduces the latency caused by the serial communication. Using pipelined circuit switching, the address and data packets share the same control signal, and the structure of the switching element is much simplified. Moreover, packet combining, which avoids hot-spot contention, is easily realized in the concentrator.
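
    The addressing arithmetic described here is easy to mimic. Below, packets are sorted by destination, each gets a distributor tag equal to its rank at the sorter output plus its destination label, and packets that share a destination are counted for combining; this toy shows the tag scheme only, not the switching hardware:

      def bdoc_tags(dests):
          """dests: destination labels of the incoming packets (None = idle)."""
          ranked = sorted(d for d in dests if d is not None)
          # distributor: tag = sorter-output rank + destination label;
          # for sorted inputs these tags are strictly increasing, hence conflict-free
          dist_tags = [rank + d for rank, d in enumerate(ranked)]
          # concentrator: route by original destination; packets meeting with
          # the same label are combined there, relieving hot-spot contention
          combined = {}
          for d in ranked:
              combined[d] = combined.get(d, 0) + 1
          return dist_tags, combined

      print(bdoc_tags([5, 2, 5, None, 0, 2]))
      # -> ([0, 3, 4, 8, 9], {0: 1, 2: 2, 5: 2})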

  • An Approximate Algorithm for Decision Tree Design

    Satoru OHTA  

     
    PAPER-Optimization Techniques

      Vol:
    E75-A No:5
      Page(s):
    622-630

    Efficient probabilistic decision trees are required in various application areas such as character recognition. This paper presents a polynomial-time approximate algorithm for designing a probabilistic decision tree. The obtained tree is near-optimal for the cost, defined as the weighted sum of the expected test execution time and the expected loss. The algorithm has an advantage over other reported heuristics in that the goodness of the solution is theoretically guaranteed: the relative deviation of the obtained tree cost from the exact optimum is not more than a positive constant ε, which can be set arbitrarily small. When the given loss function is the Hamming metric, the time efficiency is further improved by using the information-theoretic lower bound on the tree cost. The time efficiency of the algorithm and the accuracy of the solutions were evaluated through computational experiments. The results show that the computing time increases very slowly with problem size and that the relative error of the obtained solution is much less than the upper bound ε for most problems.
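
    The cost being optimized, a weighted sum of expected test time and expected loss, is easy to evaluate for a given tree. The recursive toy below (deterministic test outcomes per state, 0/1 loss, made-up numbers) shows the objective only, not the design algorithm:

      def tree_cost(node, p, outcome_of, alpha=1.0, beta=1.0):
          """node: ('leaf', decision) or ('test', time, {outcome: subtree}).
          p: {state: probability mass reaching this node}.
          Cost = alpha * E[test time] + beta * E[0/1 loss]."""
          if node[0] == "leaf":
              d = node[1]
              return beta * sum(q for s, q in p.items() if s != d)
          _, t_time, children = node
          cost = alpha * t_time * sum(p.values())      # everyone here pays the test
          for outcome, child in children.items():
              sub = {s: q for s, q in p.items() if outcome_of(s) == outcome}
              cost += tree_cost(child, sub, outcome_of, alpha, beta)
          return cost

      parity = lambda s: s % 2                  # hypothetical binary-outcome test
      tree = ("test", 2.0, {0: ("leaf", 0), 1: ("leaf", 1)})
      print(tree_cost(tree, {0: 0.5, 1: 0.3, 3: 0.2}, parity))   # -> 2.2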

  • Overview of Visual Telecommunication Activities in Japan

    Takahiko KAMAE  

     
    INVITED PAPER

      Vol:
    E75-B No:5
      Page(s):
    313-318

    The state of the art of visual communication in Japan is described. First, the status of the networks, which are the basis for offering visual communication services, is outlined. Visual communication services being developed on the basis of ISDN are then described; the future of these services is represented by NTT's service vision, VI&P. Finally, visual communication technologies and services under study are surveyed.

  • A Self-Consistent Linear Theory of Gyrotrons

    Kenichi HAYASHI  Tohru SUGAWARA  

     
    PAPER-Microwave and Millimeter Wave Technology

      Vol:
    E75-C No:5
      Page(s):
    610-616

    A new set of self-consistent linear equations is presented for analyzing the startup characteristics of gyrotron oscillators with an open cavity consisting of weakly irregular waveguides. Numerical results on frequency detuning and oscillation starting current for a whispering-gallery-mode gyrotron, obtained with these equations, are described. Experiments to check the effectiveness of the derived equations showed that they express the operation of gyrotrons well, in comparison with the linear theory that uses an empty-cavity field as the wave field.
