Yuzo OHTA Lei GONG Hiromasa HANEDA
Parameter data of real systems carry uncertainty, and they should therefore be given as sets (or intervals) rather than fixed values. To analyze and design systems containing such uncertain parameters, it is required to represent and manipulate the uncertainty in parameter data, and to compute value sets of characteristic polynomials and transfer functions. Interval arithmetic is one of the most powerful tools for performing such tasks. In this paper, Polygon Interval Arithmetic (PIA) on the set of polygons in the complex plane is considered, and a data structure and algorithms to execute PIA efficiently are proposed. Moreover, practical examples are shown to demonstrate how useful PIA is for evaluating value sets.
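As a rough illustration of complex-plane interval arithmetic of this kind, the sketch below uses axis-aligned rectangles as a crude stand-in for the polygons of PIA; the class and method names are our own, not the paper's.

```python
class CRect:
    """Rectangular complex interval [re_lo, re_hi] x [im_lo, im_hi]."""

    def __init__(self, re_lo, re_hi, im_lo, im_hi):
        self.re = (re_lo, re_hi)
        self.im = (im_lo, im_hi)

    def __add__(self, other):
        # The Minkowski sum of two rectangles is again a rectangle.
        return CRect(self.re[0] + other.re[0], self.re[1] + other.re[1],
                     self.im[0] + other.im[0], self.im[1] + other.im[1])

    def __mul__(self, other):
        # Real and imaginary parts of a product are multilinear in the
        # endpoints, so the rectangular hull is attained at corner products.
        # (The true value set of a product is generally not a rectangle,
        # so this is an over-approximation of its shape.)
        prods = [complex(a, b) * complex(c, d)
                 for a in self.re for b in self.im
                 for c in other.re for d in other.im]
        re_parts = [p.real for p in prods]
        im_parts = [p.imag for p in prods]
        return CRect(min(re_parts), max(re_parts),
                     min(im_parts), max(im_parts))
```

A polygon-based version would instead compute Minkowski sums of vertex lists for addition, which is the extra machinery PIA's data structures are designed to handle efficiently.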
A simple technique is proposed for improving the convergence of Newton's method in spherical algorithms, which are methods for tracing solution curves. A numerical example is given to show the effectiveness of the proposed technique.
Yasuhiko TAKENAGA Shuzo YAJIMA
An Ordered Binary Decision Diagram (BDD) is a graph representation of a Boolean function. Owing to their useful properties, BDD's are widely used in various applications. In this paper, we investigate the computational complexity of basic operations on BDD's. We consider two important operations: reduction of a BDD and binary Boolean operations based on BDD's. This paper shows that both the reduction of a BDD and the binary Boolean operations based on BDD's are NC1-reducible to REACHABILITY; that is, both problems belong to NC2. In order to extend the results to BDD's with output inverters, we also consider the transformations between BDD's and BDD's with output inverters. We show that both transformations are also NC1-reducible to REACHABILITY.
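As background, a minimal sketch of the (sequential) reduction operation whose parallel complexity is analyzed here might look as follows; the tuple encoding (var, low, high) and the terminal ids 0/1 are our own illustrative conventions.

```python
def reduce_bdd(nodes, root):
    """Reduce a BDD given as nodes: dict id -> (var, low_id, high_id).

    Terminals are the ids 0 and 1. Returns (canonical root id,
    dict of surviving canonical nodes).
    """
    unique = {}           # (var, low, high) -> canonical node id
    canon = {0: 0, 1: 1}  # node id -> canonical id

    def walk(u):
        if u in canon:
            return canon[u]
        var, lo, hi = nodes[u]
        lo, hi = walk(lo), walk(hi)
        if lo == hi:              # redundant test: both branches agree
            canon[u] = lo
            return lo
        key = (var, lo, hi)
        if key not in unique:     # share structurally identical nodes
            unique[key] = u
        canon[u] = unique[key]
        nodes[u] = (var, lo, hi)  # rewrite children to canonical ids
        return canon[u]

    new_root = walk(root)
    kept = {i: nodes[i] for i in nodes if canon.get(i) == i}
    return new_root, kept
```

The paper's point is that this bottom-up merging, which looks inherently sequential, can be recast as a reachability computation and hence parallelized within NC2.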
We develop a convergence theory of the simple genetic algorithm (SGA) for two-bit problems (Type I TBP and Type II TBP). SGA consists of two operations, reproduction and crossover, which are imitations of selection and recombination in biological systems. TBP is the simplest optimization problem devised with the intention of deceiving SGA into deviating from the maximum point. It has been believed, on empirical grounds, that SGA can deviate from the maximum point for Type II while it always converges to the maximum point for Type I. Our convergence theory is the first mathematical result ensuring that this belief is true. Specifically, we demonstrate the following. (a) SGA always converges to the maximum point for Type I, starting from any initial point. (b) SGA converges either to the maximum or the second maximum point for Type II, depending upon its initial point. Regarding Type II, we furthermore elucidate a typical sufficient initial condition under which SGA converges either to the maximum or the second maximum point. Consequently, our convergence theory establishes a solid foundation for a more general GA convergence theory, which is in its initial stage of research. Moreover, it can bring powerful analytical techniques back to the study of the original biological systems.
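For readers unfamiliar with the two operations, a toy SGA with reproduction (roulette-wheel selection) and one-point crossover on 2-bit strings can be sketched as follows; the deceptive fitness table is purely illustrative and is not the exact Type I/II problem of the paper.

```python
import random

# Illustrative deceptive fitness: (1, 1) is the optimum, but the low
# fitness of the mixed strings (0, 1) and (1, 0) can pull the
# population toward the second maximum (0, 0).
FITNESS = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}


def step(pop, rng):
    # Reproduction: fitness-proportional (roulette-wheel) selection.
    weights = [FITNESS[ind] for ind in pop]
    mating = rng.choices(pop, weights=weights, k=len(pop))
    # Crossover: pair up parents and exchange second bits (the only
    # possible one-point crossover on 2-bit strings), probability 1.
    nxt = []
    for a, b in zip(mating[::2], mating[1::2]):
        nxt.append((a[0], b[1]))
        nxt.append((b[0], a[1]))
    return nxt


rng = random.Random(1)
pop = [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(100)]
for _ in range(200):
    pop = step(pop, rng)
```

Which fixed point such a population reaches depends on the initial proportions, which is exactly the dependence on initial points that the convergence theory makes precise.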
Kazuhiro OKANOUE Akihisa USHIROKAWA Hideho TOMITA Yukitsuna FURUYA
This paper presents an adaptive MLSE (Maximum Likelihood Sequence Estimator) suitable for TDMA cellular systems. The proposed MLSE has two special features: it handles wide-dynamic-range signals without analogue gain control, and it provides fast channel tracking. In order to handle wide-dynamic-range signals without conventional AGCs (Automatic Gain Controllers), the proposed MLSE uses envelope components of received signals obtained from a non-linear log-amplifier module with wide log-linear gain characteristics. Using digital signal processing techniques, the log-converted envelope components are normalized and converted to linear values that conventional adaptive MLSEs can handle. As the channel tracking algorithm of the channel estimator, the proposed MLSE adopts the QT-LMS (Quick-Tracking Least Mean Square) algorithm, which is obtained by modifying the LMS algorithm for faster tracking. The algorithm combines fast tracking with low complexity and is suitable for implementation on a fixed-point digital signal processor. The performance of the MLSE has been evaluated through experiments in TDMA cellular environments with π/4-shifted QPSK at 24.3 ksymbol/s. It is shown that, under conditions of 65 dB amplitude variation and 80 Hz Doppler frequency, the MLSE achieves less than 3% BER, as required for digital cellular systems.
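The QT-LMS modification itself is not reproduced here, but the baseline LMS channel-tracking update it builds on can be sketched as below; the tap count, step size, and symbol pattern are illustrative.

```python
def lms_update(h, symbols, received, mu=0.1):
    """One LMS step for a complex channel estimate.

    h: list of complex channel taps; symbols: recent transmitted
    symbols (newest first); received: the observed sample.
    """
    replica = sum(hk * s for hk, s in zip(h, symbols))
    err = received - replica
    # Steepest-descent correction toward the observed sample.
    return [hk + mu * err * s.conjugate() for hk, s in zip(h, symbols)]
```

Run on a noiseless two-tap channel with a varying QPSK-like symbol stream, the estimate converges to the true taps; QT-LMS modifies this update so that convergence under fast fading is quicker at comparable complexity.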
A parallel quicksort algorithm in Ada is proposed and analyzed, its computational complexities are derived, and its performance profile is determined by simulation.
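The divide-and-conquer structure of such a parallel quicksort can be sketched as follows, in Python with thread-pool futures standing in for Ada tasks; the spawn-depth cutoff is an illustrative choice, not the paper's.

```python
from concurrent.futures import ThreadPoolExecutor


def pquicksort(xs, executor, depth=2):
    """Quicksort that sorts the two partitions concurrently near the root."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if depth > 0:
        # Sort both partitions concurrently, as two Ada tasks would.
        f_lo = executor.submit(pquicksort, lo, executor, depth - 1)
        f_hi = executor.submit(pquicksort, hi, executor, depth - 1)
        return f_lo.result() + eq + f_hi.result()
    # Below the cutoff, fall back to sequential recursion.
    return pquicksort(lo, executor, 0) + eq + pquicksort(hi, executor, 0)


with ThreadPoolExecutor(max_workers=4) as ex:
    out = pquicksort([5, 3, 8, 1, 9, 2, 7], ex)
```

Bounding the spawn depth keeps the number of concurrent tasks manageable, which mirrors the trade-off any complexity analysis of the parallel version must account for.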
This paper reviews recent progress in adaptive signal processing techniques for digital mobile radio communications. In Radio Signal Processing (RSP), digital signal processing is becoming more important because it makes it relatively easy to develop sophisticated adaptive processing techniques. Adaptive signal processing is especially important for carrier signal processing in RSP; its main objective is to realize optimal or near-optimal radio signal transmission. Application environments of adaptive signal processing in mobile radio are clarified. Adaptive equalization is discussed in detail, with a focus on adaptive MLSE based on the blind algorithm. Demodulation performance examples obtained by simulations and experiments are introduced, which demonstrate the recent advances in this field. Next, new trends in adaptive array processing, interference cancelling, and orthogonalization processing are reviewed. Finally, three automatic calibration techniques based on adaptive signal processing are described for realizing high-precision transmission devices.
Tsuyoshi KAWAGUCHI Yoshinori TAMURA Kouichi UTSUMIYA
The linear array processor architecture is an important class of interconnection structures suitable for VLSI. In this paper, we study the problem of mapping a task tree onto a linear array so as to minimize the total execution time. First, an optimization algorithm is presented for a message scheduling problem that occurs within the task tree mapping problem. Next, we give a heuristic algorithm for the task tree mapping problem. The algorithm partitions the node set of a task tree into clusters and maps these clusters onto processors. Simulation experiments showed that the proposed algorithm is much more efficient than a conventional algorithm.
Kazuhiko FUKAWA Hiroshi SUZUKI
This paper proposes a new adaptive Interference Cancelling Equalizer (ICE) with a blind algorithm. From a received signal, ICE not only eliminates inter-symbol interference but also cancels co-channel interference. Blind ICE can operate well even when training signals for the interference are unknown. First, training signal conditions for applying blind ICE are considered. Next, a theoretical derivation of blind ICE is developed in detail by applying maximum likelihood estimation theory. It is shown that RLS-MLSE with diversity, originally derived for mobile radio equalizers, is also effective for blind ICE. Computer simulations demonstrate the 40 kb/s QDPSK transmission performance of blind ICE as a blind canceller with two-branch diversity reception under Rayleigh fading in a single-interference environment. The simulations assume synchronous training: the canceller is trained for the desired signal but not for the interference signals. Blind cancelling is successfully achieved at CIR values above -10 dB when the average Eb/N0 is 15 dB and the maximum Doppler frequency is 40 Hz.
We consider a class of unknown scenes Sk(n) with rectangular obstacles aligned with the axes, such that the Euclidean distance between the start point and the target is n and any side length of each obstacle is at most k. We propose a strategy called the adaptive-bias heuristic for navigating a robot in such a scene, and analyze its efficiency. We show that the ratio of the total distance walked by a robot using the strategy to the shortest-path distance between the start point and the target is at most 1+(3/5)k, if k=o(n) and the start point and the target are at the same horizontal level. This ratio is better than that obtained by any previously known strategy for the class of scenes Sk(n) with k=o(n).
Satoshi KAGAMI Masato EDAHIRO Takao ASANO
The planar point location problem is one of the most fundamental problems in computational geometry and is stated as follows: given a straight-line planar graph (subdivision) with n vertices and an arbitrary query point Q, determine the region containing Q. Many algorithms have been proposed, and some of them are known to be theoretically optimal (O(log n) search time, O(n) space and O(n log n) preprocessing time). In this paper, we implement several representative algorithms in C, and investigate their practical efficiencies by computational experiments on Voronoi diagrams with 2^10 - 2^17 vertices.
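The algorithms compared in the paper build search structures over the whole subdivision; as background, the bare query primitive (deciding whether Q lies inside one polygonal region, with no preprocessing) can be sketched with an even-odd crossing test:

```python
def contains(polygon, q):
    """Even-odd test: is point q inside the polygon?

    polygon: list of (x, y) vertices in order; q: (x, y).
    """
    qx, qy = q
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge cross the horizontal ray going right from q?
        if (y1 > qy) != (y2 > qy):
            x_cross = x1 + (qy - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > qx:
                inside = not inside
    return inside
```

Testing every region this way costs O(n) per query; the point of the surveyed structures (and of the experiments on large Voronoi diagrams) is to replace this linear scan with O(log n) search.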
Imbaby I. MAHMOUD Koji ASAKURA Takashi NISHIBU Tatsuo OHTSUKI
This paper advocates the use of a linear objective function in analytic analog placement. The roles of linear and quadratic objective functions in the behavior and results of an analog placement algorithm based on the force-directed method are discussed. Experimental results for an MCNC benchmark circuit and another from textbooks are shown to demonstrate the effect of linear and quadratic objective functions on analog constraint satisfaction and CPU time. By introducing a linear objective function into the algorithm, we obtain better placements in terms of analog constraint satisfaction and computation cost than with the conventional quadratic objective function.
Motonobu HATTORI Masafumi HAGIWARA Masao NAKAGAWA
Recently, much research on associative memories has been carried out, and many neural network models have been proposed. The Bidirectional Associative Memory (BAM) is one of them. The BAM uses Hebbian learning. However, unless the training vectors are orthogonal, Hebbian learning does not guarantee the recall of all training pairs; that is, a BAM trained by Hebbian learning suffers from low memory capacity. To improve the storage capacity of the BAM, the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB) has been proposed. However, PRLAB needs many learning epochs because of its random initial weights. In this paper, we propose Quick Learning for BAM, which greatly reduces the number of learning epochs and guarantees the recall of all training pairs. In the proposed algorithm, the BAM is first trained by Hebbian learning and then trained by PRLAB. Owing to the use of Hebbian learning in the first stage, the weights are much closer to the solution space than randomly chosen initial weights, so the proposed algorithm needs fewer learning epochs. The features of the proposed algorithm are: 1) it requires far fewer learning epochs; 2) it guarantees the recall of all training pairs; 3) it is robust to noisy inputs; 4) its memory capacity is much larger than that of the conventional BAM. In addition, we clarify several important characteristics of the conventional and proposed algorithms, such as noise reduction characteristics and storage capacity, and present an index related to noise reduction.
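A minimal sketch of the two-stage idea, Hebbian initialization followed by relaxation-style corrections, might look like the following; the correction rule shown is a simplified perceptron-like relaxation, not the exact PRLAB rule, and the data are illustrative bipolar pairs.

```python
import numpy as np


def quick_learn(X, Y, epochs=100, lam=1.1):
    """X: rows are bipolar input vectors; Y: rows are their paired outputs."""
    # Stage 1: Hebbian learning, W = sum of outer products y x^T.
    W = Y.T @ X
    # Stage 2: relax W until every pair recalls correctly (or epochs run out).
    for _ in range(epochs):
        ok = True
        for x, y in zip(X, Y):
            for i in range(len(y)):
                s = W[i] @ x
                if s * y[i] <= 0:  # pair (x, y) violated at output unit i
                    # Move W[i] so that the new response overshoots y[i]
                    # slightly (lam > 1), restoring the correct sign.
                    W[i] += lam * (y[i] - s) / (x @ x) * x
                    ok = False
        if ok:
            break
    return W


# Bipolar (+1/-1) training pairs.
X = np.array([[1, -1, 1], [-1, 1, 1]], dtype=float)
Y = np.array([[1, 1], [-1, 1]], dtype=float)
W = quick_learn(X, Y)
```

Because stage 1 already places the weights near a solution, stage 2 typically needs only a few corrective sweeps, which is the source of the claimed reduction in epochs.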
In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. This algorithm makes full use of the properties of the cellular neural network, so that the associative memory has some advantages over a memory designed by the outer product method. It guarantees that each prototype is stored at an equilibrium point. In a practical implementation, the circuit is easy to build because the weight matrix representing the connections between cells is not symmetric. The synchronous updating rule makes its association speed very fast compared to the Hopfield associative memory.
Kitaek KWON Hisao ISHIBUCHI Hideo TANAKA
This paper proposes an approach for approximately realizing nonlinear mappings of interval vectors by interval neural networks. Interval neural networks in this paper are characterized by interval weights and interval biases; that is, the weights and biases are given by intervals instead of real numbers. First, an architecture of interval neural networks is proposed for dealing with interval input vectors. Interval neural networks with the proposed architecture map interval input vectors to interval output vectors by interval arithmetic. Some characteristic features of the nonlinear mappings realized by interval neural networks are described. Next, a learning algorithm is derived, in which the training data are pairs of interval input vectors and interval target vectors. Finally, using a numerical example, the proposed approach is illustrated and compared with other approaches based on standard back-propagation neural networks with real-number weights.
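A sketch of such an interval forward pass for a single neuron is given below, assuming interval weights and biases and a monotone sigmoid activation; the function names and shapes are our own, not the paper's.

```python
import math


def imul(a, b):
    # Interval product: extrema occur at endpoint products.
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))


def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])


def isigmoid(a):
    s = lambda t: 1.0 / (1.0 + math.exp(-t))
    # The sigmoid is increasing, so it maps interval endpoints to endpoints.
    return (s(a[0]), s(a[1]))


def interval_neuron(x, w, b):
    """x, w: lists of intervals (lo, hi); b: interval. Returns output interval."""
    acc = b
    for xi, wi in zip(x, w):
        acc = iadd(acc, imul(xi, wi))
    return isigmoid(acc)


y = interval_neuron(x=[(0.0, 1.0), (0.5, 0.5)],
                    w=[(1.0, 2.0), (-1.0, -0.5)],
                    b=(0.0, 0.1))
```

Stacking such neurons gives a network whose output is an interval for each interval input, which is the mapping the paper's learning algorithm fits to interval target vectors.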
A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of squared errors and a stabilizer which is a function of integrated squared derivatives. Each of the regularization parameters that gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
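The general shape of such a cost, squared errors plus a derivative-based stabilizer, can be sketched as below; the stabilizer is approximated on a grid with finite differences, and the fixed weight `lam` is a placeholder, whereas the paper's regularization parameters are derived non-empirically and vary during learning.

```python
import numpy as np


def regularized_cost(y_pred, y_true, f_grid, dx, lam):
    """Sum of squared errors plus an integrated-squared-derivative stabilizer.

    f_grid: values of the fitted function on an evenly spaced grid of step dx.
    """
    sse = np.sum((y_pred - y_true) ** 2)
    # Integral of (f')^2, approximated by finite differences:
    # sum((df/dx)^2) * dx = sum(df^2) / dx.
    stabilizer = np.sum(np.diff(f_grid) ** 2) / dx
    return sse + lam * stabilizer
```

Minimizing such a cost trades data fit against smoothness of the learned function, with `lam` controlling the trade-off.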
Sadayuki HONGO Isamu YOROIZAWA
We propose a fast computation method of stochastic relaxation for continuous-valued Markov random fields (MRFs) whose energy function is in quadratic form. In the case of regularization in visual information processing, the probability density function of a state transition can be transformed into a Gaussian function; therefore, the probabilistic state transition can be realized with Gaussian random numbers whose mean and variance are calculated from the input data and the neighborhood. Early visual information processing can be represented by a coupled MRF model consisting of a continuity process and a discontinuity process. The continuity process represents a visual property, such as an intensity pattern, and the discontinuity process represents discontinuities of the continuity process. Since most energy functions for early visual information processing are quadratic in the continuity process, the probability density of local computation variables in the continuity process is a Gaussian function. Using this characteristic, the computation of the discrimination function no longer requires summing the probabilities of all possible states, so the computation load for the state transition is drastically decreased. Furthermore, if a continuous-valued discontinuity process is introduced, the MRF model can directly represent the strength of discontinuity. Moreover, the discrimination function of this energy function in the discontinuity process, which is linear, can also be calculated without probability summation. In this paper, a fast method for calculating the state transition probability for continuous-valued MRFs in visual information processing is explained theoretically.
Next, the initial-condition dependency, the computation time, and the dependency on the statistical estimation of the condition are investigated in comparison with conventional methods, using examples of data restoration for a corrupted square wave and a corrupted one-dimensional slice of a natural image.
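The key computational point, that a quadratic site energy yields a Gaussian transition density which can be sampled directly, can be sketched as follows; the symbols a, b, and T are our own notation for the local quadratic coefficients and the temperature.

```python
import math
import random


def heat_bath_update(a, b, T, rng):
    """Sample u ~ exp(-E(u)/T) with E(u) = 0.5*a*u**2 - b*u, a > 0.

    Completing the square gives a Gaussian with mean b/a and variance T/a,
    so no summation over discretized states is needed.
    """
    mean = b / a                # minimizer of the quadratic energy
    std = math.sqrt(T / a)      # fluctuation set by the temperature
    return rng.gauss(mean, std)


rng = random.Random(0)
samples = [heat_bath_update(a=2.0, b=4.0, T=0.01, rng=rng) for _ in range(1000)]
```

In a full relaxation, a and b at each site would be assembled from the input data and the neighboring sites at every sweep; the saving is that each transition costs one Gaussian draw instead of a probability summation over all states.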
This paper presents an algorithm which sums up n binary values on an n × m reconfigurable mesh in O(log n/(m log m)^(1/2)) time. This algorithm also yields a corollary which states that n binary values can be summed up on an n × (log^2 n/log log n) reconfigurable mesh in constant time.
Yukio KUMAGAI Joarder KAMRUZZAMAN Hiromitsu HIKITA
In this letter, we present a distinct alternative formulation of the cross talk in associative memories based on the outer product algorithm extended to higher order, together with a performance evaluation, in terms of the probability of exact data recall, derived from this formulation. The significant feature of these formulations is that both the cross talk and the probability are explicitly represented as functions of the Hamming distance between the memorized keys and the applied input key, and of the degree of higher-order correlation. Simulation results show that the exact data retrieval ability of the associative memory, using randomly generated data and keys, is in good agreement with our theoretical estimation.
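A second-order outer-product memory of the kind formulated here can be sketched as follows; the data are toy bipolar patterns, and the cross-talk analysis itself is not reproduced.

```python
import numpy as np


def store(keys, data):
    """Second-order outer-product storage.

    T[i, j, k] = sum over patterns p of data[p, i] * keys[p, j] * keys[p, k].
    """
    return np.einsum('pi,pj,pk->ijk', data, keys, keys)


def recall(T, key):
    # Second-order recall: the overlap of the applied key with each stored
    # key enters squared, which sharpens discrimination between keys.
    return np.sign(np.einsum('ijk,j,k->i', T, key, key))


keys = np.array([[1, -1, 1, -1], [1, 1, -1, -1]], dtype=float)
data = np.array([[1, 1, -1], [-1, 1, 1]], dtype=float)
T = store(keys, data)
```

For an applied key at Hamming distance d from a stored key, the signal term scales with the squared overlap while the remaining terms form the cross talk, which is why both quantities in the letter appear as functions of d and of the correlation order.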
Analysis of satellite images requires classification of image objects. Since different categories may have almost the same brightness or features in high-dimensional remote sensing data, many object categories overlap with each other. How to segment the object categories accurately is still an open question. It is widely recognized that the assumptions required by many classification methods (maximum likelihood estimation, etc.) are suspect for textural features based on image pixel brightness. We propose an image-feature-based neural network approach for the segmentation of AVHRR images. The learning algorithm is a modified backpropagation with gain and weight decay, since feedforward networks using the backpropagation algorithm have been generally successful and enjoy wide popularity. Destructive algorithms that adapt the neural architecture during training have also been developed. A classification accuracy of 100% is reached for a validation data set. The classification result is compared with those of Kohonen's LVQ and a pixel-by-pixel method based on the basic backpropagation algorithm. Visual investigation of the resulting images shows that our method not only distinguishes the categories with similar signatures very well, but is also robust to noise.
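One gradient step with the weight-decay regularization named above can be sketched as follows; the gain modification and the destructive architecture adaptation are not reproduced, and the layer size and constants are illustrative.

```python
import numpy as np


def backprop_step(W, x, t, eta=0.5, decay=1e-3):
    """One squared-error gradient step for a one-layer sigmoid network.

    y = sigmoid(W @ x); the decay term shrinks weights toward zero,
    which is the weight-decay regularization.
    """
    y = 1.0 / (1.0 + np.exp(-W @ x))
    delta = (y - t) * y * (1.0 - y)           # dE/dnet for squared error
    grad = np.outer(delta, x)
    return W - eta * grad - eta * decay * W   # gradient step + weight decay
```

Weight decay keeps the effective network small, complementing the destructive algorithms that prune the architecture outright during training.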