
Keyword Search Result

[Keyword] Y (22,683 hits)

Results 21641-21660 of 22,683 hits

  • Convergence of the Simple Genetic Algorithm to the Two-bit Problems

    Yoshikane TAKAHASHI  

     
    PAPER-Algorithms, Data Structures and Computational Complexity
    Vol: E77-A No:5  Page(s): 868-880

    We develop a convergence theory of the simple genetic algorithm (SGA) for two-bit problems (Type I TBP and Type II TBP). SGA consists of two operations, reproduction and crossover, which imitate selection and recombination in biological systems. TBP is the simplest optimization problem devised with the intention of deceiving SGA into deviating from the maximum point. It has been believed, on empirical grounds, that SGA can deviate from the maximum point for Type II, while it always converges to the maximum point for Type I. Our convergence theory is the first mathematical result confirming this belief. Specifically, we demonstrate the following. (a) SGA always converges to the maximum point for Type I, starting from any initial point. (b) SGA converges either to the maximum or the second maximum point for Type II, depending on its initial point. For Type II, we furthermore elucidate a typical sufficient initial condition under which SGA converges either to the maximum or the second maximum point. Consequently, our convergence theory establishes a solid foundation for a more general GA convergence theory, which is still in an early stage of research. Moreover, it can bring powerful analytical techniques back to the study of the original biological systems.
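    As a rough illustration of the two SGA operations named above (not the paper's mathematical formulation), the following Python sketch runs fitness-proportional reproduction and one-point crossover on a population of two-bit strings; the deceptive fitness values in FITNESS are hypothetical placeholders, not the paper's Type I/II problems.

      import random

      # Hypothetical deceptive two-bit fitness table: (1, 1) is the global maximum,
      # but the low-order schemata favour the competing point (0, 0).
      FITNESS = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

      def reproduce(pop):
          """Fitness-proportional (roulette-wheel) selection."""
          weights = [FITNESS[ind] for ind in pop]
          return random.choices(pop, weights=weights, k=len(pop))

      def crossover(pop, rate=0.8):
          """One-point crossover between random pairs (trivial for 2-bit strings)."""
          random.shuffle(pop)
          out = []
          for a, b in zip(pop[0::2], pop[1::2]):
              if random.random() < rate:
                  a, b = (a[0], b[1]), (b[0], a[1])
              out.extend([a, b])
          return out

      def sga(pop_size=100, generations=50):
          """Run the SGA and return the point the population has (mostly) converged to."""
          pop = [tuple(random.randint(0, 1) for _ in range(2)) for _ in range(pop_size)]
          for _ in range(generations):
              pop = crossover(reproduce(pop))
          return max(set(pop), key=pop.count)

      print(sga())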

  • Adaptive Array Antenna Based on Spatial Spectral Estimation Using Maximum Entropy Method

    Minami NAGATSUKA  Naoto ISHII  Ryuji KOHNO  Hideki IMAI  

     
    PAPER
    Vol: E77-B No:5  Page(s): 624-633

    An adaptive array antenna is a useful tool for combating fading in mobile communications. The optimal weight coefficients can be obtained directly, without temporal updating, if the arrival angles and signal-to-noise ratios (SNRs) of the desired and undesired signals can be accurately estimated. The Maximum Entropy Method (MEM) can estimate the arrival angles and SNRs from signals spatially sampled by an array antenna more precisely than the Discrete Fourier Transform (DFT). This paper therefore proposes and investigates an adaptive array antenna based on spatial spectral estimation using MEM, which we call the MEM array. To reduce implementation complexity, we also propose a modified algorithm that uses temporal updating as well. Furthermore, we propose a method that both improves estimation accuracy and reduces the number of antenna elements; in this method, the arrival angles are approximately estimated using temporal sampling instead of spatial sampling. Computer simulations compare the MEM array with the DFT array and the LMS array, and show the improvement achieved by the modified algorithm and the performance of the improved method.
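    The sketch below shows one common way to compute an MEM (autoregressive) spatial spectrum from array snapshots via the Yule-Walker equations; it is only an assumed, generic form of the estimator, not the authors' algorithm or weight computation, and it presumes half-wavelength element spacing.

      import numpy as np

      def mem_spatial_spectrum(snapshots, order, angles_deg):
          """Maximum-entropy (autoregressive) spatial spectrum from array snapshots.

          snapshots : (num_elements, num_snapshots) complex array, half-wavelength spacing.
          Sketch only -- the paper's estimator and weight computation are more involved.
          """
          n_elem, _ = snapshots.shape
          # Spatial autocorrelation lags r(0) .. r(order), averaged over snapshots.
          r = np.array([
              np.mean(np.sum(snapshots[k:, :] * np.conj(snapshots[:n_elem - k, :]), axis=0))
              for k in range(order + 1)
          ])
          # Yule-Walker equations: R a = -r[1:], with R the Hermitian Toeplitz lag matrix.
          R = np.array([[r[abs(i - j)] if i >= j else np.conj(r[abs(i - j)])
                         for j in range(order)] for i in range(order)])
          a = np.linalg.solve(R, -r[1:order + 1])
          sigma2 = np.real(r[0] + np.dot(a, np.conj(r[1:order + 1])))
          # Evaluate the AR spectrum over candidate arrival angles.
          theta = np.deg2rad(angles_deg)
          psi = np.pi * np.sin(theta)                      # spatial frequency per element
          steering = np.exp(-1j * np.outer(np.arange(1, order + 1), psi))
          denom = np.abs(1.0 + a @ steering) ** 2
          return sigma2 / denom

    Peaks of the returned spectrum over angles_deg indicate estimated arrival directions; their heights relate to signal power, which is how angle and SNR estimates could feed the weight computation.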

  • Defect Detection of Passivation Layer by a Bias-Free Cu Decoration Method

    Tetsuaki WADA  Shinji NAKANO  

     
    PAPER
    Vol: E77-C No:4  Page(s): 585-589

    A new method for detecting passivation-layer defects was studied: Cu decoration without an applied bias (bias-free Cu decoration). Comparison with the conventional method showed that the bias-free Cu decoration method is effective, sensitive, and simple. With this method, differences in humidity resistance caused by poor passivation coverage could be evaluated.

  • An Improved Reflection Wave Method for Measurement of Complex Permittivity at 100 MHz-1GHz

    Akira NAKAYAMA  Kazuya SHIMIZU  

     
    PAPER-Microwave and Millimeter Wave Technology
    Vol: E77-C No:4  Page(s): 633-638

    An improved reflection wave method is described for measuring the complex permittivity of low-loss materials over the 100 MHz-1 GHz range. The residual impedance Zr and stray admittance Ys surrounding the test sample, which terminates the transmission line, were evaluated using sapphire as a reference material. Correction with the obtained Zr and Ys gave accurate values of the complex permittivity of alumina and mullite ceramics at 100 MHz-1 GHz.

  • Matching of DUT Interconnection Pattern with CAD Layout in CAD-Linked Electron Beam Test System

    Koji NAKAMAE  Ryo NAKAGAKI  Katsuyoshi MIURA  Hiromu FUJIOKA  

     
    PAPER
    Vol: E77-C No:4  Page(s): 567-573

    Precise matching of the SEM (scanning electron microscope) image of the DUT (device under test) interconnection pattern with the CAD layout is required in a CAD-linked electron beam test system. We propose a point pattern matching method that utilizes a corner pattern in the CAD layout. In this method, a corner pattern consisting of a small number of pixels is derived by taking the VLSI design rules into account. Using the corner pattern as a template, the matching points of the template are sought in both the SEM image and the CAD layout. Then, the point image obtained from the SEM image of the DUT is matched with that from the CAD layout. Even if the number of points obtained from the DUT pattern differs from that in the CAD layout because of noise in the SEM image, the point matching method remains successful. The method is applied to nonpassivated and passivated LSIs. Even for the passivated LSI, where the contrast in the SEM image is mainly determined by voltage contrast, matching is successful. The computing time of the proposed method is found to be shorter by a factor of 4 to 10 than that of a conventional correlation coefficient method.
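    A minimal sketch of the flavour of point matching described above, assuming the corner points have already been extracted from both the SEM image and the CAD layout; it scores candidate translations only (no rotation or scaling), so it is far simpler than the paper's method, and all coordinates below are made up.

      from itertools import product

      def best_translation(sem_points, cad_points, tol=1):
          """Find the translation of the SEM point set that matches the most CAD points.

          The point sets may have different sizes (noise can add or remove corners);
          the score counts SEM points landing within `tol` of some CAD point.
          """
          def score(dx, dy):
              return sum(
                  any(abs(x + dx - cx) <= tol and abs(y + dy - cy) <= tol
                      for cx, cy in cad_points)
                  for x, y in sem_points)

          # Candidate translations: every pairing of one SEM corner with one CAD corner.
          candidates = {(cx - x, cy - y)
                        for (x, y), (cx, cy) in product(sem_points, cad_points)}
          return max(candidates, key=lambda d: score(*d))

      cad = [(0, 0), (0, 8), (12, 0), (12, 8)]
      sem = [(3, 2), (3, 10), (15, 2), (9, 5)]      # one spurious corner from noise
      print(best_translation(sem, cad))              # expected near (-3, -2)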

  • A Multiple Sidelobe Canceller Switching over Auxiliary Antennas Arranged in Triangular Order

    Tetsuo KIRIMOTO  Yasuhiro HARASAWA  Atsushi SHIMADA  

     
    PAPER-Electronic and Radio Applications
    Vol: E77-B No:4  Page(s): 519-525

    Many previous works state that a multiple sidelobe canceller (MSLC) with two auxiliary antennas successfully suppresses two interference signals received simultaneously through the sidelobes of a main antenna. In this paper, we show that the MSLC does not always guarantee this capability in three-dimensional applications, where the incident direction of an interference signal is defined by two angles (elevation and azimuth). By analyzing the characteristics of MSLCs in three-dimensional applications from the viewpoint of the eigenvalue problem, we show that singularity of the autocorrelation matrix of the auxiliary channel signals causes this degradation. To overcome the singularity, we propose a novel MSLC that controls the placement of the auxiliary antennas by switching over three antennas arranged in a triangle. Simulations are conducted to show the effectiveness of the proposed MSLC.
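    The sketch below shows the standard sidelobe-canceller weight computation w = R^-1 p together with an eigenvalue-based singularity check of the kind that could trigger the antenna switching described above; it is a generic textbook formulation, not the authors' control algorithm, and the threshold eps is an arbitrary assumption.

      import numpy as np

      def slc_weights(aux, main, eps=1e-6):
          """Sidelobe-canceller weights w = R^-1 p for auxiliary snapshots `aux`
          (channels x samples) and main-channel samples `main`.

          Returns None when the auxiliary autocorrelation matrix is (near-)singular,
          the situation in which a different pair of the three triangularly
          arranged auxiliary antennas would be selected.
          """
          R = aux @ aux.conj().T / aux.shape[1]          # autocorrelation matrix
          p = aux @ main.conj() / aux.shape[1]           # cross-correlation with main channel
          eigvals = np.linalg.eigvalsh(R)
          if eigvals.min() < eps * eigvals.max():        # singularity test
              return None
          return np.linalg.solve(R, p)

      def cancel(aux, main, w):
          """Cancellation output: main channel minus the weighted auxiliary channels."""
          return main - w.conj() @ aux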

  • Ray-Optical Techniques in Dielectric Waveguides

    Masahiro HASHIMOTO  Hiroyuki HASHIMOTO  

     
    PAPER-Electromagnetic Theory
    Vol: E77-C No:4  Page(s): 639-646

    We describe a geometrical optics approach for the analysis of dielectric tapered waveguides. The method is based on the ray-optical treatment of wave-normal rays, newly defined for light waves in open structures. Geometrical optics fields are represented in terms of two kinds of wave-normal rays: leaky rays and guided rays. Since the behavior of these rays differs in the two regions separated at critical incidence, the geometrical optics fields have certain classes of discontinuity in the transition region between the leaky and guided regions. Guided wave solutions are given as a superposition of guided rays that zigzag along the guide, all of which are totally reflected at the interfaces. By including some leaky rays adjacent to the guided rays, we obtain more accurate guided wave solutions. Calculated results are in excellent agreement with wave optics solutions.

  • A Neurocomputational Approach to the Correspondence Problem in Computer Vision

    Hiroshi SAKO  Hadar Itzhak AVI-ITZHAK  

     
    PAPER-Image Processing
    Vol: E77-D No:4  Page(s): 507-515

    A problem that often arises in computer vision is matching corresponding points of images. In object recognition, for example, the computer compares new images to templates from a library of known objects. A common way to perform this comparison is to extract feature points from the images and compare these points with the template points. Another common example is motion detection, where feature points of a video image are compared to those of the previous frame. In both of these examples, the point correspondence is complicated by the fact that the point sets are not only randomly ordered but have also been distorted by an unknown transformation and therefore have quite different coordinates. In object recognition, there exists a transformation from the object being viewed to its projection onto the camera's imaging plane, while in motion detection this transformation represents the motion (translation and rotation) of the object. If the parameters of the transformation are completely unknown, then all n! permutations must be compared (n: number of feature points), and for each permutation the ensuing transformation is computed using the least-squares projection method. The exponentially large computation required for this is prohibitive. A neural computational method is proposed to solve these combinatorial problems. The method obtains the best correspondence matching and also finds the associated transform parameters. It was applied to two-dimensional point correspondence and three-to-two-dimensional correspondence. Finally, this connectionist approach extends readily to a Boltzmann machine implementation, which is desirable when the transformation is unknown, as it is less sensitive to local minima regardless of initial conditions.
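    To make the n! baseline concrete, here is a small Python sketch that enumerates every permutation and fits a least-squares rigid transform (2-D Procrustes) for each; this is the brute-force comparison the abstract calls prohibitive, not the proposed neural method, and it assumes `template` and `observed` are (n, 2) NumPy arrays.

      import numpy as np
      from itertools import permutations

      def fit_rigid(src, dst):
          """Least-squares rotation R and translation t mapping src -> dst (2-D Procrustes)."""
          sc, dc = src.mean(axis=0), dst.mean(axis=0)
          U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
          R = (U @ Vt).T
          if np.linalg.det(R) < 0:                 # keep a proper rotation (no reflection)
              Vt[-1] *= -1
              R = (U @ Vt).T
          return R, dc - R @ sc

      def brute_force_match(template, observed):
          """Try all n! orderings of `observed`; return the best permutation and residual.

          This is the exponential enumeration the neural / Boltzmann-machine approach
          is meant to avoid; it is only feasible for very small n.
          """
          best = (None, np.inf)
          for perm in permutations(range(len(observed))):
              dst = observed[list(perm)]
              R, t = fit_rigid(template, dst)
              residual = np.sum((template @ R.T + t - dst) ** 2)
              if residual < best[1]:
                  best = (perm, residual)
          return best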

  • Failure Analysis in Si Device Chips

    Kiyoshi NIKAWA  

     
    INVITED PAPER
    Vol: E77-C No:4  Page(s): 528-534

    Recent developments and case studies in VLSI device chip failure analysis are reviewed. The key failure analysis techniques covered include EMMS (emission microscopy), OBIC (optical beam induced current), LCM (liquid crystal method), EBP (electron beam probing), and FIB (focused ion beam method). Further, future possibilities in failure analysis and some promising new tools are introduced.

  • Evaluation of Robustness in a Learning Algorithm that Minimizes Output Variation for Handprinted Kanji Pattern Recognition

    Yoshimasa KIMURA  

     
    PAPER-Learning
    Vol: E77-D No:4  Page(s): 393-401

    This paper uses both network analysis and experiments to confirm that the neural network learning algorithm that minimizes output variation (BPV) provides much more robustness than back-propagation (BP) or BP with noise-modified training samples (BPN). The network analysis clarifies the relationship between sample displacement and what, and how, the network learns. Sample displacement generates variation in the output of the units in the output layer. The output variation model introduces two types of deformation error, both of which modify the mean square error. We propose a new error that combines the two types of deformation error. The network analysis using this new error shows that BPV learns two types of training samples, modified either toward or away from the category mean, which is defined as the center of the sample distribution. The magnitude of the modification depends on the position of the training sample in the sample distribution and on the degree of learning completion. The conclusion is that BPV learns samples modified toward the category mean more strongly than those modified away from it; that is, it achieves nonuniform learning. Another conclusion is that BPN learns from uniformly modified samples. We conjecture that BPV is much more robust than the other two algorithms. Experiments that evaluate robustness are performed from two viewpoints: overall robustness and specific robustness. Benchmark studies using distorted handprinted Kanji character patterns examine overall robustness, and two kinds of specifically modified samples (noise-modified samples and directionally-modified samples) examine specific robustness. Both sets of studies confirm the superiority of BPV and the accuracy of the conjecture.

  • On Secure and Fast Elliptic Curve Cryptosystems over Fp

    Atsuko MIYAJI  

     
    PAPER
    Vol: E77-A No:4  Page(s): 630-635

    From a practical point of view, a cryptosystem should require a small key size and little running time. For this purpose, we often select the definition field so that the arithmetic can be implemented fast, but this often invites attacks that depend on the definition field. In this paper, we investigate definition fields Fp over which elliptic curve cryptosystems can be implemented fast while maintaining security. The expected running time of a general construction of many elliptic curves with a given number of rational points is also discussed.
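    As background on curves over Fp (with no relation to the paper's specific constructions), here is a minimal Python sketch of affine point addition and double-and-add scalar multiplication on a short Weierstrass curve y^2 = x^3 + ax + b over Fp; the toy curve parameters are arbitrary and insecure.

      def ec_add(P, Q, a, p):
          """Add points on y^2 = x^3 + a*x + b over F_p (affine; None is the point at infinity)."""
          if P is None:
              return Q
          if Q is None:
              return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % p == 0:
              return None                                  # P + (-P) = infinity
          if P == Q:
              lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
          x3 = (lam * lam - x1 - x2) % p
          return (x3, (lam * (x1 - x3) - y1) % p)

      def ec_mul(k, P, a, p):
          """Double-and-add scalar multiplication k*P."""
          R = None
          while k:
              if k & 1:
                  R = ec_add(R, P, a, p)
              P = ec_add(P, P, a, p)
              k >>= 1
          return R

      # Toy example (not a secure curve): y^2 = x^3 + 2x + 3 over F_97, base point (3, 6).
      print(ec_mul(20, (3, 6), 2, 97))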

  • Performance Formulation and Evaluation of Associative Memory Extended to Higher Order

    Yukio KUMAGAI  Joarder KAMRUZZAMAN  Hiromitsu HIKITA  

     
    LETTER-Neural Networks
    Vol: E77-A No:4  Page(s): 736-741

    In this letter, we present an alternative cross-talk formulation for associative memory based on the outer product algorithm extended to higher order, and a performance evaluation in terms of the probability of exact data recall using this formulation. The significant feature of these formulations is that both the cross talk and the probability are explicitly represented as functions of the Hamming distance between the memorized keys and the applied input key, and of the degree of higher-order correlation. Simulation results show that the exact data retrieval ability of the associative memory, using randomly generated data and keys, is in good agreement with our theoretical estimation.
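    A minimal sketch of a second-order (outer-product) associative memory, assuming bipolar keys and data; the storage and recall rules below are the generic higher-order correlation form, not necessarily the exact formulation evaluated in the letter.

      import numpy as np

      def store(keys, data):
          """Second-order outer-product weights W[i, j, k] = sum_mu data[mu, i] * keys[mu, j] * keys[mu, k].

          `keys` and `data` are bipolar (+1/-1) arrays of shape (num_patterns, n) and (num_patterns, m).
          """
          return np.einsum('mi,mj,mk->ijk', data, keys, keys)

      def recall(W, key):
          """Recall by thresholding the second-order correlation with the input key."""
          return np.sign(np.einsum('ijk,j,k->i', W, key, key))

      rng = np.random.default_rng(0)
      keys = rng.choice([-1, 1], size=(10, 16))     # 10 random bipolar keys of length 16
      data = rng.choice([-1, 1], size=(10, 8))      # associated data words of length 8
      W = store(keys, data)
      print(np.all(recall(W, keys[0]) == data[0]))  # exact recall for a stored key (usually True)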

  • Non-integer Exponents in Electronic Circuits II: Memory Effects in the Fractal Immittance

    Michio SUGI  Kazuhiro SAITO  

     
    PAPER-Analog Circuits and Signal Processing
    Vol: E77-A No:4  Page(s): 688-697

    The transient behavior of the fractal admittance acting as a non-integer-order differential/integral operator, Y(s) ∝ s^a with -1 < a < 1 and a ≠ 0, is examined from the point of view of memory effects by employing the distributed-relaxation-time model. The internal state of the diode is found to be represented by the current spectrum i(λ, t) with respect to the carrier relaxation rate λ, leading to a general formulation of the long-time-tail memory behavior characteristic of the operator. A one-to-one correspondence is found among the past input voltage v(-t), the short-circuit current i_sc(t), and the initial current spectrum i(λ, 0) within the framework of the Laplace-type integral transformation and its inverse, assuring that each response retains in principle the entire information on the corresponding input, such as its functional form, magnitude, onset time, and so forth. The current and voltage responses are exemplified for various single-pulse voltage inputs. The responses to pulse-train inputs corresponding to different ASCII codes are found to be properly discriminated from one another, showing the potential of the present memory effects.
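    As an illustration of how a non-integer-order operator retains its whole input history, the sketch below applies the Grünwald-Letnikov approximation of d^a/dt^a to a sampled input; the unit admittance constant, the step size and the step input are arbitrary assumptions, and the paper's distributed-relaxation-time model is not reproduced.

      def fractional_operator(v, a, h):
          """Grunwald-Letnikov approximation of the order-`a` differintegral (-1 < a < 1, a != 0)
          applied to the sampled signal v with step h.  For an ideal fractal admittance
          Y(s) = s^a, the output at each instant depends on the entire past input --
          the memory effect discussed in the abstract.
          """
          coeff = [1.0]
          for k in range(1, len(v)):
              coeff.append(coeff[-1] * (k - 1 - a) / k)   # (-1)^k * binomial(a, k), built recursively
          return [sum(coeff[k] * v[n - k] for k in range(n + 1)) / h ** a
                  for n in range(len(v))]

      # Example: response of a half-order element (a = 0.5) to a unit step input.
      v = [1.0] * 100
      i = fractional_operator(v, 0.5, h=0.01)
      print(i[1], i[50])   # decays slowly (long-time-tail memory) rather than dropping to zero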

  • Partial Construction of an Arrangement of Lines and Its Application to Optimal Partitioning of Bichromatic Point Set

    Tetsuo ASANO  Takeshi TOKUYAMA  

     
    PAPER
    Vol: E77-A No:4  Page(s): 595-600

    This paper presents an efficient algorithm for constructing the at-most-k levels of an arrangement of n lines in the plane in O(nk + n log n) time, which is optimal since Ω(nk) line segments are included there. The algorithm can sweep the at-most-k levels of the arrangement using O(n) space. Although Everett et al. recently and independently gave an algorithm for constructing the at-most-k levels with the same time complexity, our algorithm is superior with respect to space complexity as a sweep algorithm. We then apply the algorithm to a bipartitioning problem on a bichromatic point set: for r red points and b blue points in the plane and a directed line L, the figure of demerit fd(L) associated with L is defined as the sum of the number of blue points below L and the number of red points above L. The problem we consider is to find an optimal partitioning line that minimizes the figure of demerit. Given a number k, our algorithm first determines whether there is a line whose figure of demerit is at most k, and then finds an optimal bipartitioning line if one exists. It runs in O(kn + n log n) time (n = r + b), which is subquadratic if k is sublinear.
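    For comparison with the O(kn + n log n) result, here is a brute-force Python sketch that evaluates the figure of demerit for both orientations of the line through every pair of points; the sign convention for "above"/"below" a directed line is an assumption, and the small point sets are made up.

      from itertools import combinations

      def figure_of_demerit(line, red, blue):
          """fd(L) = (# blue points strictly below L) + (# red points strictly above L),
          for the directed line through p in direction d (left of d counts as 'above')."""
          (px, py), (dx, dy) = line
          def side(q):                      # >0: left/above, <0: right/below
              return dx * (q[1] - py) - dy * (q[0] - px)
          return sum(side(b) < 0 for b in blue) + sum(side(r) > 0 for r in red)

      def best_line_brute_force(red, blue):
          """O(n^3) baseline: try both orientations of the line through every point pair.
          The paper's sweep of the at-most-k levels finds the optimum in O(kn + n log n)."""
          pts = red + blue
          best, best_fd = None, float('inf')
          for p, q in combinations(pts, 2):
              for d in ((q[0] - p[0], q[1] - p[1]), (p[0] - q[0], p[1] - q[1])):
                  fd = figure_of_demerit((p, d), red, blue)
                  if fd < best_fd:
                      best, best_fd = (p, d), fd
          return best, best_fd

      red = [(0, 3), (1, 4), (2, 5)]
      blue = [(0, 0), (1, 1), (2, 0)]
      print(best_line_brute_force(red, blue))   # prints a line through two points with fd = 0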

  • Practical Efficiencies of Planar Point Location Algorithms

    Satoshi KAGAMI  Masato EDAHIRO  Takao ASANO  

     
    PAPER
    Vol: E77-A No:4  Page(s): 608-614

    The planar point location problem is one of the most fundamental problems in computational geometry and is stated as follows: given a straight-line planar graph (subdivision) with n vertices and an arbitrary query point Q, determine the region containing Q. Many algorithms have been proposed, and some of them are known to be theoretically optimal (O(log n) search time, O(n) space and O(n log n) preprocessing time). In this paper, we implement several representative algorithms in C and investigate their practical efficiencies through computational experiments on Voronoi diagrams with 2^10 to 2^17 vertices.
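    As a baseline for the optimal O(log n)-query structures being benchmarked, the sketch below answers a point location query by naive ray casting over every region; the tiny two-region subdivision is made up, and none of the paper's implemented algorithms are reproduced here.

      def locate(query, regions):
          """Naive O(n) point location: test the query against every region's polygon.

          `regions` maps a name to a list of (x, y) vertices in order.  The optimal
          structures compared in the paper answer the same query in O(log n) time
          after O(n log n) preprocessing.
          """
          qx, qy = query
          for name, poly in regions.items():
              inside = False
              for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                  # Ray casting: count edges crossed by a ray going in +x from the query.
                  if (y1 > qy) != (y2 > qy) and qx < x1 + (qy - y1) * (x2 - x1) / (y2 - y1):
                      inside = not inside
              if inside:
                  return name
          return None

      regions = {'A': [(0, 0), (4, 0), (4, 4), (0, 4)],
                 'B': [(4, 0), (8, 0), (8, 4), (4, 4)]}
      print(locate((6, 2), regions))    # -> 'B'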

  • Hierarchical Properties of Realtime One-Way Alternating Multi-Stack-Counter Automata

    Tsunehiro YOSHINAGA  Katsushi INOUE  Itsuo TAKANAMI  

     
    PAPER
    Vol: E77-A No:4  Page(s): 621-629

    This paper investigates the accepting powers of one-way alternating multi-stack-counter automata (1amsca's) and one-way alternating multi-counter automata (1amca's) that operate in realtime. For each k ≥ 1, let 1ASCA(k, real) (1ACA(k, real)) denote the class of sets accepted by realtime one-way alternating k-stack-counter (k-counter) automata, and let 1USCA(k, real) (1UCA(k, real)) denote the class of sets accepted by realtime one-way alternating k-stack-counter (k-counter) automata with only universal states. We first investigate the relationship between the accepting powers of realtime 1amsca's (1amca's) with only universal states, with only existential states, and with full alternation. We then investigate hierarchical properties based on the numbers of counters and stack-counters, and show, for example, that for each k ≥ 1, 1USCA(k+1, real) - 1ASCA(k, real) ≠ φ and 1UCA(k+1, real) - 1ACA(k, real) ≠ φ. We finally investigate the relationship between the accepting powers of realtime 1amsca's and 1amca's, and show, for example, that there are no i and j such that 1UCA(i, real) = 1USCA(j, real), and that 1USCA(k, real) - 1ACA(k, real) ≠ φ for each k ≥ 1.

  • Shared Pseudo-Random Secret Generation Protocols

    Manuel CERECEDO  Tsutomu MATSUMOTO  Hideki IMAI  

     
    PAPER
    Vol: E77-A No:4  Page(s): 636-645

    An extension of the notion of a cryptographically strong pseudo-random generator to a distributed setting is proposed in this paper. Instead of a deterministic function that generates a pseudo-random bit string from a truly random shorter string, we have a deterministic secure protocol by which a group of separate entities computes a secretly shared pseudo-random string from a secretly shared and truly random shorter string. We propose a precise definition of this notion in terms of Yao's computational entropy and describe a concrete construction using Shamir's pseudo-random number generator. Several practical applications are also discussed.
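    The sketch below shows only the first ingredient implied above: Shamir (k, n) sharing of a truly random seed over a prime field. The prime, threshold and reconstruction step are generic textbook choices, and the paper's actual protocol expands the shared seed into a pseudo-random string without ever reconstructing it.

      import secrets

      P = 2**127 - 1   # a Mersenne prime, large enough for a 126-bit shared seed

      def share(secret, k, n, p=P):
          """Shamir (k, n) sharing: evaluate a random degree-(k-1) polynomial with
          constant term `secret` at x = 1..n."""
          coeffs = [secret] + [secrets.randbelow(p) for _ in range(k - 1)]
          return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
                  for x in range(1, n + 1)]

      def reconstruct(shares, p=P):
          """Lagrange interpolation at x = 0 from any k shares."""
          total = 0
          for xi, yi in shares:
              num = den = 1
              for xj, _ in shares:
                  if xj != xi:
                      num = num * (-xj) % p
                      den = den * (xi - xj) % p
              total = (total + yi * num * pow(den, -1, p)) % p
          return total

      seed = secrets.randbelow(P)
      shares = share(seed, k=3, n=5)
      assert reconstruct(shares[:3]) == seed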

  • On the Complexity of Protocol Validation Problems for Protocols with Bounded Capacity Channels

    Yoshiaki KAKUDA  Yoshihiro TAKADA  Tohru KIKUNO  

     
    PAPER
    Vol: E77-A No:4  Page(s): 658-667

    In this paper, the following three decision problems on the validation of protocols with bounded-capacity channels are proven NP-complete. (1) Given a protocol with channel capacity 1, determine whether or not there exist deadlocks in the protocol. (2) Given a protocol with channel capacity 1, determine whether or not there exist unspecified receptions in the protocol. (3) Given a protocol with channel capacity 2, determine whether or not there exist overflows such that the channel capacity is not bounded by 1 in the protocol. These results suggest that, even when all channels in a protocol are bounded by 1 or 2, protocol validation is in general intractable. They also clarify the boundary of the computational complexity of protocol validation problems, because channel capacity 0 does not allow protocols to transmit and receive messages.
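    Although the paper is about hardness, a small sketch of what "validation" means operationally may help: a breadth-first search over the global states of two communicating finite-state machines with capacity-1 FIFO channels, reporting states in which no transition is executable. The state encoding and transition format below are assumptions for illustration, not taken from the paper.

      from collections import deque

      def blocked_states(machines, capacity=1):
          """Explore global states of two communicating FSMs with bounded FIFO channels.

          `machines[i]` maps a local state to a list of (action, message, next_state),
          with action '!' (send to the peer) or '?' (receive from own channel).
          Returns reachable states with no executable transition: deadlocks when the
          channels are empty, otherwise a blocked (unspecified-reception) situation.
          """
          start = ((0, 0), ((), ()))                      # local states, channel contents
          seen, queue, blocked = {start}, deque([start]), []
          while queue:
              s, chans = queue.popleft()
              successors = []
              for i in (0, 1):
                  for act, msg, nxt in machines[i].get(s[i], []):
                      if act == '!' and len(chans[1 - i]) < capacity:   # send into peer's channel
                          new_chans = list(chans)
                          new_chans[1 - i] = chans[1 - i] + (msg,)
                      elif act == '?' and chans[i][:1] == (msg,):       # receive own channel head
                          new_chans = list(chans)
                          new_chans[i] = chans[i][1:]
                      else:
                          continue
                      new_s = (nxt, s[1]) if i == 0 else (s[0], nxt)
                      successors.append((new_s, tuple(new_chans)))
              if not successors:
                  blocked.append((s, chans))
              for g in successors:
                  if g not in seen:
                      seen.add(g)
                      queue.append(g)
          return blocked

      # Example: machine 0 sends 'a' then stops; machine 1 only expects 'b' -> blocking state.
      m0 = {0: [('!', 'a', 1)], 1: []}
      m1 = {0: [('?', 'b', 1)], 1: []}
      print(blocked_states((m0, m1)))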

  • A Fast Convergence Algorithm for Adaptive FIR Filters with Sparse Taps

    Akihiko SUGIYAMA  Shigeji IKEDA  

     
    PAPER-Adaptive Signal Processing
    Vol: E77-A No:4  Page(s): 681-687

    This paper proposes a fast convergence algorithm for adaptive FIR filters with sparse taps, in which coefficient values and tap positions are controlled simultaneously. The proposed algorithm consists of two stages: flat-delay estimation and tap-position control with a constraint. The flat delay is estimated by locating the significant dispersive region of the impulse response. The constrained tap-position control is achieved by imposing a limit on the search for new tap positions. Simulation results show that the proposed algorithm reduces the convergence time by up to 85% compared with conventional algorithms for a white input signal. For a colored signal, it converges in 40% of the convergence time of the conventional algorithms. The proposed algorithm is applicable to adaptive FIR filters that must model a path with a long flat delay, such as echo cancelers for satellite-link communications.
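    In the spirit of the two stages described above (but not the authors' algorithm), here is a Python sketch that first estimates the flat delay by cross-correlation and then runs NLMS on a small block of taps placed at that delay; the step size, probe length and tap layout are arbitrary assumptions.

      import numpy as np

      def sparse_tap_nlms(x, d, span, n_active, mu=0.5, eps=1e-6, probe_len=2000):
          """Two-stage sketch: (1) estimate the flat delay from the lag of maximum
          cross-correlation between input x and desired signal d, then (2) adapt only
          `n_active` NLMS taps placed at the start of the dispersive region.

          x, d : 1-D NumPy arrays, assumed longer than span + probe_len.
          """
          # Stage 1: flat-delay estimate.
          lags = np.arange(span - n_active + 1)
          xcorr = [np.dot(x[:probe_len], d[lag:lag + probe_len]) for lag in lags]
          delay = int(lags[np.argmax(np.abs(xcorr))])

          # Stage 2: NLMS restricted to taps delay .. delay + n_active - 1.
          positions = delay + np.arange(n_active)
          w = np.zeros(n_active)
          e = np.zeros(len(d))
          for n in range(span, len(d)):
              u = x[n - positions]                 # regressor at the active positions only
              e[n] = d[n] - w @ u
              w += mu * e[n] * u / (u @ u + eps)   # normalized LMS update
          return positions, w, e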

  • An Analysis of the Economics of the VLSI Development Including Test Cost

    Koji NAKAMAE  Homare SAKAMOTO  Hiromu FUJIOKA  

     
    PAPER-Computer Aided Design (CAD)
    Vol: E77-A No:4  Page(s): 698-705

    In order to evaluate the effect of testing technologies such as electron beam (EB) testing and focused ion beam (FIB) reconstruction on the VLSI development cycle, the VLSI development period and cost are analyzed using detailed fault models that make it possible to take the effect of EB and FIB techniques into consideration. First, the specifications of fabricated VLSIs and the VLSI development cycle are modeled. Next, the faults that can be diagnosed by such testing techniques are modeled. Using the parametric model of the development cycle, the development period and cost are analyzed. In the fault diagnosis stage, the use of an EB tester, or the combined use of an EB tester and FIB equipment, in place of a traditional mechanical prober is considered. The development period and cost are found to be reduced by a factor of about 3 when EB and FIB diagnosis equipment is used. The effect of the scan path method is also evaluated with the same simulation method. Results show that the scan path design is effective in reducing both the period and the cost of the development cycle.
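    As a purely hypothetical illustration of a parametric development-cycle model (the paper's model is far more detailed, and none of the numbers below come from it), a toy comparison might look like this:

      def development_time(num_iterations, design_t, fab_t, test_t, diagnosis_t):
          """Toy parametric model: each debug iteration consists of (re)design,
          fabrication, test, and fault diagnosis.  All parameters are hypothetical."""
          return num_iterations * (design_t + fab_t + test_t + diagnosis_t)

      # Hypothetical comparison: mechanical-prober diagnosis vs. EB + FIB diagnosis,
      # where faster localization cuts both the diagnosis time and the iteration count.
      prober = development_time(num_iterations=4, design_t=8, fab_t=6, test_t=1, diagnosis_t=4)
      eb_fib = development_time(num_iterations=2, design_t=8, fab_t=6, test_t=1, diagnosis_t=1)
      print(prober / eb_fib)   # ratio of development periods (time units are arbitrary)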
