
Keyword Search Result

[Keyword] ALG (2355 hits)

Results 2221-2240 of 2355

  • Polygon Interval Arithmetic and Interval Evaluation of Value Sets of Transfer Functions

    Yuzo OHTA  Lei GONG  Hiromasa HANEDA  

     
    PAPER-Algorithms, Data Structures and Computational Complexity

    Vol: E77-A No:6  Page(s): 1033-1042

    Data on the system parameters of real systems have some uncertainty, and such data should be given as sets (or intervals) rather than fixed values. To analyze and design systems containing such uncertain parameters, it is necessary to represent and manipulate the uncertainty in parameter data and to compute value sets of characteristic polynomials and transfer functions. Interval arithmetic is one of the most powerful tools for such tasks. In this paper, Polygon Interval Arithmetic (PIA) on the set of polygons in the complex plane is considered, and a data structure and algorithms to execute PIA efficiently are proposed. Moreover, practical examples are shown to demonstrate how useful PIA is for evaluating value sets.
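
    As a minimal, hypothetical illustration of one PIA operation (not the paper's data structure), the sketch below adds two convex polygon value sets in the complex plane via a Minkowski sum; the square regions and the use of scipy's ConvexHull are assumptions for the example.

    ```python
    # Sketch: addition of two convex polygon intervals as a Minkowski sum.
    # The paper's PIA covers more operations and an efficient data structure;
    # this only illustrates arithmetic on value sets in the complex plane.
    import numpy as np
    from scipy.spatial import ConvexHull

    def poly_add(P, Q):
        """Minkowski sum of two convex polygons given as complex vertex arrays."""
        sums = np.array([p + q for p in P for q in Q])
        pts = np.column_stack([sums.real, sums.imag])
        return sums[ConvexHull(pts).vertices]   # hull vertices of pairwise sums

    # Example: value set of s1 + s2, where each si ranges over a unit square.
    sq = np.array([0+0j, 1+0j, 1+1j, 0+1j])
    print(poly_add(sq, sq + 2j))
    ```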

  • Improving the Convergence of Spherical Algorithms for Tracing Solution Curves

    Kiyotaka YAMAMURA  

     
    LETTER-Numerical Analysis and Self-Validation

    Vol: E77-A No:6  Page(s): 1085-1088

    A simple technique is proposed for improving the convergence of Newton's method in the spherical algorithms, which are methods for tracing solution curves. A numerical example is given to show the effectiveness of the proposed technique.
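
    The letter's spherical algorithms are not reproduced here; as a hedged sketch of the general setting only, the code below traces a solution curve with a tangent predictor and a plain Newton (Gauss-Newton) corrector on the circle x^2 + y^2 = 1. The step size and test function are illustrative assumptions.

    ```python
    # Sketch: predictor-corrector tracing of F(x, y) = 0 with a Newton corrector.
    import numpy as np

    def trace_curve(F, J, x0, step=0.1, n_steps=60, tol=1e-10):
        pts = [np.asarray(x0, dtype=float)]
        for _ in range(n_steps):
            g = J(pts[-1])
            t = np.array([-g[1], g[0]]) / np.linalg.norm(g)  # unit tangent
            x = pts[-1] + step * t                           # predictor step
            for _ in range(20):                              # Newton corrector
                r = F(x)
                if abs(r) < tol:
                    break
                g = J(x)
                x = x - r * g / np.dot(g, g)   # minimal-norm Newton step
            pts.append(x)
        return np.array(pts)

    F = lambda x: x[0]**2 + x[1]**2 - 1.0        # the circle
    J = lambda x: np.array([2*x[0], 2*x[1]])     # its Jacobian
    print(trace_curve(F, J, [1.0, 0.0])[:5])     # first traced points
    ```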

  • Computational Complexity of Manipulating Binary Decision Diagrams

    Yasuhiko TAKENAGA  Shuzo YAJIMA  

     
    PAPER-Algorithm and Computational Complexity

    Vol: E77-D No:6  Page(s): 642-647

    An Ordered Binary Decision Diagram (BDD) is a graph representation of a Boolean function. Owing to its good properties, BDD's are widely used in various applications. In this paper, we investigate the computational complexity of basic operations on BDD's. We consider two important operations: reduction of a BDD and binary Boolean operations based on BDD's. This paper shows that both the reduction of a BDD and the binary Boolean operations based on BDD's are NC1-reducible to REACHABILITY. That is, both of the problems belong to NC2. In order to extend the results to BDD's with output inverters, we also consider the transformations between BDD's and BDD's with output inverters. We show that both of these transformations are also NC1-reducible to REACHABILITY.
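
    For readers unfamiliar with the reduction operation whose parallel complexity the paper analyzes, a minimal sequential sketch follows; the NC aspect is the paper's subject and is not reflected in this code.

    ```python
    # Sketch: classical BDD reduction -- drop redundant tests (both children
    # equal) and share isomorphic subgraphs via a unique table.
    def reduce_bdd(node, table=None):
        """node is a terminal 0/1 or a tuple (var, low, high)."""
        if table is None:
            table = {}
        if node in (0, 1):
            return node
        var, low, high = node
        low, high = reduce_bdd(low, table), reduce_bdd(high, table)
        if low == high:                       # redundant test: skip the node
            return low
        key = (var, low, high)
        return table.setdefault(key, key)     # merge isomorphic nodes

    f = ('x1', ('x2', 0, 1), ('x2', 0, 1))    # tests x1 needlessly
    print(reduce_bdd(f))                      # -> ('x2', 0, 1)
    ```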

  • Convergence of the Simple Genetic Algorithm to the Two-bit Problems

    Yoshikane TAKAHASHI  

     
    PAPER-Algorithms, Data Structures and Computational Complexity

    Vol: E77-A No:5  Page(s): 868-880

    We develop a convergence theory of the simple genetic algorithm (SGA) for two-bit problems (Type I TBP and Type II TBP). SGA consists of two operations, reproduction and crossover, which imitate selection and recombination in biological systems. TBP is the simplest optimization problem devised with the intention of deceiving SGA into deviating from the maximum point. It has been believed, on empirical grounds, that SGA can deviate from the maximum point for Type II while it always converges to the maximum point for Type I. Our convergence theory is the first mathematical result ensuring that this belief is true. Specifically, we demonstrate the following. (a) SGA always converges to the maximum point for Type I, starting from any initial point. (b) SGA converges either to the maximum or the second maximum point for Type II, depending on its initial point. Regarding Type II, we furthermore elucidate a typical sufficient initial condition under which SGA converges either to the maximum or the second maximum point. Consequently, our convergence theory establishes a solid foundation for a more general GA convergence theory, which is in its initial stage of research. Moreover, it can bring powerful analytical techniques back to the study of the original biological systems.
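
    A toy sketch of the SGA under study follows: fitness-proportional reproduction plus crossover on two-bit strings. The fitness table is a made-up deceptive (Type II-like) assignment, not the paper's definition.

    ```python
    # Sketch: one SGA generation on a two-bit problem.
    import random

    FITNESS = {'00': 4.0, '01': 1.0, '10': 1.0, '11': 3.5}   # assumed values

    def sga_step(pop):
        weights = [FITNESS[s] for s in pop]
        parents = random.choices(pop, weights=weights, k=len(pop))  # reproduction
        children = []
        for a, b in zip(parents[::2], parents[1::2]):               # crossover
            children += [a[0] + b[1], b[0] + a[1]]
        return children

    random.seed(0)
    pop = ['01', '10'] * 50                   # an initial population
    for _ in range(100):
        pop = sga_step(pop)
    print(max(set(pop), key=pop.count))       # dominant string after 100 steps
    ```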

  • A Fast Tracking Adaptive MLSE for TDMA Digital Cellular Systems

    Kazuhiro OKANOUE  Akihisa USHIROKAWA  Hideho TOMITA  Yukitsuna FURUYA  

     
    PAPER

    Vol: E77-B No:5  Page(s): 557-565

    This paper presents an adaptive MLSE (Maximum Likelihood Sequence Estimator) suitable for TDMA cellular systems. The proposed MLSE has two special features: it handles wide-dynamic-range signals without analogue gain control, and it tracks channels quickly. In order to handle wide-dynamic-range signals without conventional AGCs (Automatic Gain Controllers), the proposed MLSE uses envelope components of received signals obtained from a non-linear log-amplifier module with wide log-linear gain characteristics. Using digital signal processing techniques, the log-converted envelope components are normalized and converted to linear values that conventional adaptive MLSEs can handle. As the channel tracking algorithm of its channel estimator, the proposed MLSE adopts the QT-LMS (Quick-Tracking Least Mean Square) algorithm, obtained by modifying the LMS algorithm for faster tracking. The algorithm has a fast tracking capability with low complexity and is suitable for implementation on a fixed-point digital signal processor. The performance of the MLSE has been evaluated through experiments in TDMA cellular environments with π/4-shifted QPSK at 24.3 ksymbol/s. It is shown that, under conditions of 65 dB amplitude variation and 80 Hz Doppler frequency, the MLSE achieves a BER of less than 3%, as required for digital cellular systems.
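
    The QT-LMS recursion itself is not reproduced here; as a baseline, the hedged sketch below shows the plain LMS channel-estimator update that QT-LMS modifies for faster tracking. The channel taps, step size, and symbol alphabet are assumptions.

    ```python
    # Sketch: LMS tap tracking inside an adaptive MLSE's channel estimator.
    import numpy as np

    def lms_update(h, x, r, mu=0.05):
        """h: tap estimates, x: recent symbols (newest first), r: received sample."""
        e = r - np.dot(h, x)                    # prediction error
        return h + mu * e * np.conj(x), e       # LMS correction

    rng = np.random.default_rng(0)
    h_true = np.array([1.0 + 0.0j, 0.4 + 0.2j])              # assumed 2-tap channel
    h_est = np.zeros(2, dtype=complex)
    syms = rng.choice(np.array([1, -1, 1j, -1j]), size=500)  # QPSK-like symbols
    for n in range(1, len(syms)):
        x = np.array([syms[n], syms[n-1]])
        r = np.dot(h_true, x) + 0.05 * rng.standard_normal()
        h_est, _ = lms_update(h_est, x, r)
    print(np.round(h_est, 2))                   # approaches h_true
    ```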

  • A Parallel Quicksort in Ada and Its Performance Profile

    Zensho NAKAO  

     
    PAPER-Software Theory

    Vol: E77-D No:5  Page(s): 589-596

    A parallel quicksort algorithm in Ada is proposed and analyzed, its computational complexities are derived, and its performance profile is determined by simulation.
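
    The paper's code is in Ada; purely as a hypothetical structural analogue, the sketch below expresses the same divide-and-conquer parallelism with a Python thread pool. The depth bound keeps blocked parent tasks from starving the pool.

    ```python
    # Sketch: task-parallel quicksort (structure only; not the Ada original).
    from concurrent.futures import ThreadPoolExecutor

    def pquicksort(a, pool, depth=2):
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        hi = [x for x in a if x > pivot]
        if depth > 0:                       # spawn subtasks near the top only
            fl = pool.submit(pquicksort, lo, pool, depth - 1)
            fh = pool.submit(pquicksort, hi, pool, depth - 1)
            return fl.result() + eq + fh.result()
        return pquicksort(lo, pool, 0) + eq + pquicksort(hi, pool, 0)

    with ThreadPoolExecutor(max_workers=8) as pool:
        print(pquicksort([5, 3, 8, 1, 9, 2, 7], pool))
    ```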

  • Adaptive Signal Processing for Optimal Transmission in Mobile Radio Communications

    Hiroshi SUZUKI  

     
    INVITED PAPER

    Vol: E77-B No:5  Page(s): 535-544

    This paper reviews recent progress in adaptive signal processing techniques for digital mobile radio communications. In Radio Signal Processing (RSP), digital signal processing is becoming more important because it makes it relatively easy to develop sophisticated adaptive processing techniques. Adaptive signal processing is especially important for carrier signal processing in RSP; its main objective is to realize optimal or near-optimal radio signal transmission. Application environments of adaptive signal processing in mobile radio are clarified. Adaptive equalization is discussed in detail, with the focus on adaptive MLSE based on the blind algorithm. Demodulation performance examples obtained by simulations and experiments are introduced, which demonstrate the recent advances in this field. Next, new trends in adaptive array processing, interference cancelling, and orthogonalization processing are reviewed. Finally, three automatic calibration techniques based on adaptive signal processing are described for realizing high-precision transmission devices.

  • A Task Mapping Algorithm for Linear Array Processors

    Tsuyoshi KAWAGUCHI  Yoshinori TAMURA  Kouichi UTSUMIYA  

     
    PAPER-Algorithm and Computational Complexity

    Vol: E77-D No:5  Page(s): 546-554

    The linear array processor architecture is an important class of interconnection structures suitable for VLSI. In this paper we study the problem of mapping a task tree onto a linear array so as to minimize the total execution time. First, an optimization algorithm is presented for a message scheduling problem that occurs in the task tree mapping problem. Next, we give a heuristic algorithm for the task tree mapping problem. The algorithm partitions the node set of a task tree into clusters and maps these clusters onto processors. Simulation experiments showed that the proposed algorithm is much more efficient than a conventional algorithm.

  • Blind Interference Cancelling Equalizer for Mobile Radio Communications

    Kazuhiko FUKAWA  Hiroshi SUZUKI  

     
    PAPER

    Vol: E77-B No:5  Page(s): 580-588

    This paper proposes a new adaptive Interference Cancelling Equalizer (ICE) with a blind algorithm. From a received signal, ICE not only eliminates inter-symbol interference, but also cancels co-channel interference. Blind ICE can operate well even if training signals for the interference are unknown. First, training signal conditions for applying blind ICE are considered. Next, a theoretical derivation of blind ICE is developed in detail by applying maximum likelihood estimation theory. It is shown that RLS-MLSE with diversity, which was derived for mobile radio equalizers, is also effective for blind ICE. Computer simulations demonstrate the 40 kb/s QDPSK transmission performance of blind ICE as a blind canceller with two-branch diversity reception under Rayleigh fading in a single-interference environment. The simulations assume synchronous training; the canceller is trained for the desired signal but not for the interference signals. Blind ICE operates successfully at CIR values above -10 dB when the average Eb/N0 is 15 dB and the maximum Doppler frequency is 40 Hz.

  • A Robot Navigation Strategy in Unknown Environment and Its Efficiency

    Aohan MEI  Yoshihide IGARASHI  

     
    PAPER

    Vol: E77-A No:4  Page(s): 646-651

    We consider a class of unknown scenes Sk(n) with rectangular obstacles aligned with the axes, such that the Euclidean distance between the start point and the target is n and any side length of each obstacle is at most k. We propose a strategy called the adaptive-bias heuristic for navigating a robot in such a scene, and analyze its efficiency. We show that the ratio of the total distance walked by a robot using the strategy to the shortest-path distance between the start point and the target is at most 1+(3/5)k, if k=o(n) and the start point and the target are at the same horizontal level. This ratio is better than that obtained by any previously known strategy for the class of scenes Sk(n) with k=o(n).

  • Practical Efficiencies of Planar Point Location Algorithms

    Satoshi KAGAMI  Masato EDAHIRO  Takao ASANO  

     
    PAPER

    Vol: E77-A No:4  Page(s): 608-614

    The planar point location problem is one of the most fundamental problems in computational geometry and is stated as follows: given a straight-line planar graph (subdivision) with n vertices and an arbitrary query point Q, determine the region containing Q. Many algorithms have been proposed, and some of them are known to be theoretically optimal (O(log n) search time, O(n) space and O(n log n) preprocessing time). In this paper, we implement several representative algorithms in C, and investigate their practical efficiencies by computational experiments on Voronoi diagrams with 2^10 to 2^17 vertices.
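
    As a point of reference for what these search structures accelerate, here is a hedged brute-force locator: test each region with a ray-crossing point-in-polygon test in O(n) time per query. The regions are invented for the example.

    ```python
    # Sketch: O(n) point location by ray crossing, the baseline that
    # O(log n) structures improve upon.
    def in_polygon(q, poly):
        x, y = q
        inside = False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
        return inside

    def locate(q, regions):
        for name, poly in regions.items():
            if in_polygon(q, poly):
                return name
        return None                        # q lies in the unbounded face

    regions = {'triangle': [(0, 0), (4, 0), (2, 3)],
               'square': [(5, 0), (7, 0), (7, 2), (5, 2)]}
    print(locate((2, 1), regions))         # -> 'triangle'
    ```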

  • Experimental Appraisal of Linear and Quadratic Objective Functions Effect on Force Directed Method for Analog Placement

    Imbaby I. MAHMOUD  Koji ASAKURA  Takashi NISHIBU  Tatsuo OHTSUKI  

     
    LETTER-Computer Aided Design (CAD)

    Vol: E77-A No:4  Page(s): 719-725

    This paper advocates the use of a linear objective function in analytic analog placement. The roles of linear and quadratic objective functions in the behavior and results of an analog placement algorithm based on the force directed method are discussed. Experimental results for an MCNC benchmark circuit and another circuit from textbooks are shown to demonstrate the effect of a linear and a quadratic objective function on analog constraint satisfaction and CPU time. By introducing a linear objective function into the algorithm, we obtain better placements in terms of analog constraint satisfaction and computation cost than with the conventional quadratic objective function.
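
    A minimal sketch of the contrast being appraised: under a quadratic wirelength objective the attractive force between connected cells grows with distance, while under a linear objective it has unit magnitude. The two-pin nets and positions below are invented.

    ```python
    # Sketch: force on each cell under quadratic vs. linear objectives.
    import numpy as np

    def forces(pos, nets, objective='quadratic'):
        f = np.zeros_like(pos)
        for i, j in nets:                              # two-pin nets
            d = pos[j] - pos[i]
            if objective == 'quadratic':
                pull = d                               # gradient of |d|^2 / 2
            else:
                pull = d / (np.linalg.norm(d) + 1e-12) # gradient of |d|
            f[i] += pull
            f[j] -= pull
        return f

    pos = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
    nets = [(0, 1), (1, 2)]
    print(forces(pos, nets, 'linear'))
    ```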

  • Quick Learning for Bidirectional Associative Memory

    Motonobu HATTORI  Masafumi HAGIWARA  Masao NAKAGAWA  

     
    PAPER-Learning

    Vol: E77-D No:4  Page(s): 385-392

    Recently, much research on associative memories has been carried out and many neural network models have been proposed. Bidirectional Associative Memory (BAM) is one of them. The BAM uses Hebbian learning. However, unless the training vectors are orthogonal, Hebbian learning does not guarantee the recall of all training pairs; that is, a BAM trained by Hebbian learning suffers from low memory capacity. To improve the storage capacity of the BAM, the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB) has been proposed. However, PRLAB needs long learning epochs because of its random initial weights. In this paper, we propose Quick Learning for BAM, which greatly reduces the learning epochs and guarantees the recall of all training pairs. In the proposed algorithm, the BAM is first trained by Hebbian learning and then trained by PRLAB. Owing to the use of Hebbian learning in the first stage, the weights are much closer to the solution space than randomly chosen initial weights. As a result, the proposed algorithm reduces the learning epochs. The features of the proposed algorithm are: 1) it requires far fewer learning epochs; 2) it guarantees the recall of all training pairs; 3) it is robust to noisy inputs; 4) its memory capacity is much larger than that of the conventional BAM. In addition, we clarify several important characteristics of the conventional and proposed algorithms, such as noise reduction characteristics and storage capacity, and find an index related to noise reduction.
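
    A hedged two-stage sketch of the idea follows: a Hebbian (outer-product) start, then iterative repair of violated recall constraints. The relaxation update below is a stand-in with assumed margin xi and rate lam; it is not the exact PRLAB rule.

    ```python
    # Sketch: Hebbian initialization + relaxation-style repair for a BAM.
    import numpy as np

    def quick_learn_bam(X, Y, lam=1.0, xi=0.5, epochs=100):
        """X, Y: bipolar (+1/-1) pattern pairs, one pair per row."""
        W = (Y.T @ X).astype(float)               # stage 1: Hebbian start
        for _ in range(epochs):                   # stage 2: repair violations
            done = True
            for x, y in zip(X, Y):
                s = W @ x
                for i in np.where(y * s <= xi)[0]:
                    W[i] += lam * (xi - y[i] * s[i]) / (x @ x) * y[i] * x
                    done = False
            if done:
                break
        return W

    X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
    Y = np.array([[1, -1], [-1, 1]])
    W = quick_learn_bam(X, Y)
    print(np.sign(W @ X.T).T)                     # recalls Y row by row
    ```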

  • Iterative Middle Mapping Learning Algorithm for Cellular Neural Networks

    Chen HE  Akio USHIDA  

     
    PAPER-Neural Networks

    Vol: E77-A No:4  Page(s): 706-715

    In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. This algorithm makes full use of the properties of the cellular neural network, so that the associative memory has some advantages over a memory designed by the outer product method. It guarantees that each prototype is stored at an equilibrium point. In a practical implementation, the circuit is easy to build because the weight matrix representing the connections between cells is not required to be symmetric. The synchronous updating rule makes its association very fast compared to the Hopfield associative memory.

  • Neural Networks with Interval Weights for Nonlinear Mappings of Interval Vectors

    Kitaek KWON  Hisao ISHIBUCHI  Hideo TANAKA  

     
    PAPER-Mapping

    Vol: E77-D No:4  Page(s): 409-417

    This paper proposes an approach for approximately realizing nonlinear mappings of interval vectors by interval neural networks. Interval neural networks in this paper are characterized by interval weights and interval biases; that is, the weights and biases are given by intervals instead of real numbers. First, an architecture of interval neural networks is proposed for dealing with interval input vectors. Interval neural networks with the proposed architecture map interval input vectors to interval output vectors by interval arithmetic. Some characteristic features of the nonlinear mappings realized by interval neural networks are described. Next, a learning algorithm is derived, in which the training data are pairs of interval input vectors and interval target vectors. Finally, using a numerical example, the proposed approach is illustrated and compared with approaches based on standard back-propagation neural networks with real-number weights.
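
    The core mechanism is ordinary interval arithmetic pushed through a neuron; a minimal sketch, assuming a sigmoid activation (which, being monotone, maps an interval by its endpoints):

    ```python
    # Sketch: one neuron with interval weights, biases, and inputs.
    import numpy as np

    def imul(a, b):
        """Product of intervals a = (lo, hi) and b = (lo, hi)."""
        p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
        return (min(p), max(p))

    def iadd(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def interval_neuron(w_ints, x_ints, bias):
        acc = bias
        for w, x in zip(w_ints, x_ints):
            acc = iadd(acc, imul(w, x))           # interval weighted sum
        sig = lambda t: 1.0 / (1.0 + np.exp(-t))
        return (sig(acc[0]), sig(acc[1]))         # monotone activation

    print(interval_neuron([(-1.0, 1.0), (0.5, 0.7)],
                          [(0.2, 0.4), (1.0, 1.0)], (0.0, 0.0)))
    ```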

  • A Regularization Method for Neural Network Learning that Minimizes Estimation Error

    Miki YAMADA  

     
    PAPER-Regularization

    Vol: E77-D No:4  Page(s): 418-424

    A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of squared errors and a stabilizer which is a function of integrated squared derivatives. Each of the regularization parameters that gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
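
    The paper's exact stabilizer and parameter schedule are not reproduced here; the assumed general shape of such a cost, for a scalar function f fitted to samples (x_i, y_i), is:

    ```latex
    \[
      E[f] \;=\; \sum_{i=1}^{N} \bigl(y_i - f(x_i)\bigr)^{2}
      \;+\; \sum_{k} \lambda_k \int \Bigl(\frac{d^{k} f}{dx^{k}}\Bigr)^{2} dx,
    \]
    % where the regularization parameters \lambda_k are obtained
    % non-empirically and are updated during learning.
    ```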

  • Stochastic Relaxation for Continuous Values--Standard Regularization Based on Gaussian MRF--

    Sadayuki HONGO  Isamu YOROIZAWA  

     
    PAPER-Regularization

    Vol: E77-D No:4  Page(s): 425-432

    We propose a fast computation method of stochastic relaxation for continuous-valued Markov random fields (MRFs) whose energy function is represented in quadratic form. In the case of regularization in visual information processing, the probability density function of a state transition can be transformed into a Gaussian function; therefore, the probabilistic state transition can be realized with Gaussian random numbers whose mean and variance are calculated from the input data and the neighborhood. Early visual information processing can be represented by a coupled MRF model consisting of continuity and discontinuity processes. The continuity process represents a visual property such as an intensity pattern, while the discontinuity process represents discontinuities of the continuity process. Since most energy functions for early visual information processing can be represented in quadratic form in the continuity process, the probability density of local computation variables in the continuity process is a Gaussian function. Using this characteristic, the discrimination function computation does not need to sum the probabilities over all possible states; therefore, the computation load for the state transition is drastically decreased. Furthermore, if a continuous-valued discontinuity process is introduced, the MRF model can directly represent the strength of discontinuity. Moreover, the discrimination function of this energy function in the discontinuity process, which is linear, can also be calculated without probability summation. In this paper, a fast method for calculating the state transition probability for continuous-valued MRFs in visual information processing is explained theoretically. Then, initial-condition dependency, computation time, and dependency on the statistical estimation of the condition are investigated in comparison with conventional methods, using as examples the restoration of a corrupted square wave and of a corrupted one-dimensional slice of a natural image.
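
    The computational point can be seen in a short sketch: for a quadratic energy, each site's conditional density is Gaussian, so a relaxation sweep draws directly from Gaussian random numbers with locally computed mean and variance. The 1-D restoration energy below (data term plus a smoothness term with weight beta, at temperature T) is an assumed example, not the paper's model.

    ```python
    # Sketch: Gibbs-style stochastic relaxation for a quadratic 1-D MRF,
    # E(u) = sum_i (u_i - d_i)^2 + beta * sum_i (u_{i+1} - u_i)^2.
    import numpy as np

    def sweep(u, d, rng, beta=4.0, T=0.1):
        for i in range(1, len(u) - 1):
            a = 1.0 + 2.0 * beta                     # quadratic coefficient
            b = d[i] + beta * (u[i-1] + u[i+1])      # linear coefficient
            u[i] = rng.normal(b / a, np.sqrt(T / (2.0 * a)))  # exact conditional
        return u

    rng = np.random.default_rng(1)
    truth = np.repeat([0.0, 1.0, 0.0], 30)           # square wave
    data = truth + 0.3 * rng.standard_normal(truth.size)
    u = data.copy()
    for t in range(50):
        u = sweep(u, data, rng, T=0.1 * 0.95**t)     # slow cooling
    print(np.round(u[25:35], 2))                     # restored edge region
    ```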

  • An Efficient Algorithm for Summing up Binary Values on a Reconfigurable Mesh

    Koji NAKANO  

     
    PAPER

    Vol: E77-A No:4  Page(s): 652-657

    This paper presents an algorithm which sums up n binary values on an n × m reconfigurable mesh in O(log n/(m log m)^(1/2)) time. This algorithm also yields a corollary stating that n binary values can be summed up on an n × (log^2 n/log log n) reconfigurable mesh in constant time.

  • Performance Formulation and Evaluation of Associative Memory Extended to Higher Order

    Yukio KUMAGAI  Joarder KAMRUZZAMAN  Hiromitsu HIKITA  

     
    LETTER-Neural Networks

    Vol: E77-A No:4  Page(s): 736-741

    In this letter, we present a distinct alternative formulation of the cross talk in associative memories based on the outer product algorithm extended to higher order, together with a performance evaluation in terms of the probability of exact data recall derived from this formulation. The significant feature of these formulations is that both the cross talk and the recall probability are explicitly represented as functions of the Hamming distance between the memorized keys and the applied input key, and of the degree of higher-order correlation. Simulation results show that the exact data retrieval ability of the associative memory, using randomly generated data and keys, is in good agreement with our theoretical estimation.
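
    A minimal second-order version of such a memory can be sketched as follows; the pattern sizes and the number of stored pairs are arbitrary assumptions. Recall correlates the squared inner product between keys, which is exactly what ties the cross talk to Hamming distance.

    ```python
    # Sketch: second-order outer-product associative memory.
    import numpy as np

    def store(keys, data):
        """keys: (m, n) bipolar; data: (m, p) bipolar -> tensor W[i, j, k]."""
        return np.einsum('mi,mj,mk->ijk', data, keys, keys)

    def recall(W, x):
        return np.sign(np.einsum('ijk,j,k->i', W, x, x))

    rng = np.random.default_rng(0)
    keys = rng.choice([-1, 1], size=(5, 16))
    data = rng.choice([-1, 1], size=(5, 8))
    W = store(keys, data)
    noisy = keys[0].copy()
    noisy[:2] *= -1                     # key at Hamming distance 2
    print(np.array_equal(recall(W, noisy), data[0]))
    ```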

  • AVHRR Image Segmentation Using Modified Backpropagation Algorithm

    Tao CHEN  Mikio TAKAGI  

     
    PAPER-Image Processing

    Vol: E77-D No:4  Page(s): 490-497

    Analysis of satellite images requires classification of image objects. Since different categories may have almost the same brightness or features in high-dimensional remote sensing data, many object categories overlap with each other. How to segment the object categories accurately is still an open question. It is widely recognized that the assumptions required by many classification methods (maximum likelihood estimation, etc.) are suspect for textural features based on image pixel brightness. We propose an image-feature-based neural network approach for the segmentation of AVHRR images. The learning algorithm is a modified backpropagation with gain and weight decay, since feedforward networks using the backpropagation algorithm have been generally successful and enjoy wide popularity. Destructive algorithms that adapt the neural architecture during training have also been developed. A classification accuracy of 100% is reached for a validation data set. The classification result is compared with those of Kohonen's LVQ and the basic backpropagation algorithm based pixel-by-pixel method. Visual investigation of the result images shows that our method not only distinguishes categories with similar signatures very well, but is also robust to noise.
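
    As a hedged illustration of the two named modifications only (not the paper's network or data), the sketch below grafts a gain g inside the sigmoid and a weight-decay term onto gradient descent for a single logistic unit.

    ```python
    # Sketch: backpropagation-style update with sigmoid gain and weight decay.
    import numpy as np

    def train(X, t, g=2.0, eta=0.5, decay=1e-3, epochs=2000):
        rng = np.random.default_rng(0)
        w = 0.1 * rng.standard_normal(X.shape[1])
        for _ in range(epochs):
            y = 1.0 / (1.0 + np.exp(-g * (X @ w)))        # sigmoid with gain g
            grad = X.T @ ((y - t) * g * y * (1.0 - y))    # squared-error gradient
            w -= eta * (grad / len(X) + decay * w)        # decay shrinks weights
        return w

    X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
    t = np.array([0., 0., 0., 1.])                        # AND of first two inputs
    w = train(X, t)
    print(np.round(1.0 / (1.0 + np.exp(-2.0 * (X @ w))))) # ~[0, 0, 0, 1]
    ```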
