Kensaku FUJII Yoshinori TANAKA
Adaptive system design with 16-bit fixed-point processing makes it possible to employ an inexpensive digital signal processor (DSP). The narrow dynamic range of 16 bits, however, does not guarantee the performance confirmed beforehand by computer simulations. One cause of the degraded performance is the operation that halves the word length doubled by multiplication: this operation rounds small signals staying in the lower half of the doubled word length off to zero. The problem can be solved by limiting the multiplier to only its sign (±1), as in the signed regressor algorithm, named the 'bi-quantized-x' algorithm in this paper for the convenience mentioned below. This paper first derives the equation describing the convergence property provided by a type of signed regressor algorithm, the bi-quantized-x normalized least mean square (NLMS) algorithm, and then formulates its convergence condition and the step size maximizing the convergence rate. Second, this paper presents a technique to improve the convergence property. The bi-quantized-x NLMS algorithm quantizes the reference signal to ±1 according to its sign, whereas the technique moreover assigns zero to reference-signal samples whose amplitude is less than a predetermined level. This paper explains the principle by which the 'tri-quantized-x' NLMS algorithm employing this technique improves the convergence property, and confirms the improvement by computer simulations.
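As a hedged illustration of the sign-limited update described above, the following sketch identifies a short FIR system with an NLMS-style recursion whose regressor is replaced by its sign. The filter length, step size, normalization by sign(x)·x (the L1 norm of the tap line), and the small regularization constant are illustrative assumptions, not the paper's exact formulation.

```python
import random

def sign(v):
    return 1.0 if v >= 0 else -1.0

def identify(h_true, n_iter=5000, mu=0.02, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(h_true)          # adaptive filter coefficients
    x = [0.0] * len(h_true)          # reference-signal tap line
    for _ in range(n_iter):
        x = [rng.gauss(0.0, 1.0)] + x[:-1]
        d = sum(hi * xi for hi, xi in zip(h_true, x))   # unknown-system output
        y = sum(wi * xi for wi, xi in zip(w, x))        # adaptive-filter output
        e = d - y
        s = [sign(xi) for xi in x]                      # regressor reduced to +/-1
        norm = 1e-3 + sum(si * xi for si, xi in zip(s, x))  # ~ sum of |x_i|, regularized
        w = [wi + mu * e * si / norm for wi, si in zip(w, s)]
    return w

h = [0.5, -0.3, 0.2, 0.1]            # toy unknown system
w = identify(h)
```

Because the coefficient update multiplies the error only by ±1, the doubled-word-length product never needs to be halved, which is the fixed-point motivation given above.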
Large-scale effects of locally interacting agents are called emergent properties of the system. Emergent properties are often surprising because it can be hard to anticipate the full consequences of even simple forms of interaction. In this paper we address the following questions: how do heterogeneous agents generate emergent coordination, and how do they manage and self-organize macroscopic order from the bottom up without any central authority? The answers depend crucially on how the agents interact and adapt their behavior. Agents myopically evolve their behavior based on threshold rules, which are obtained as functions of the collective behavior and their idiosyncratic utilities. We obtain micro-macro dynamics that relate the aggregate behavior to the underlying individual behavior. We show that agents' rational behavior, combined with the behavior of others, produces stable macro behavior, and sometimes unanticipated cyclic behavior. We also consider the roles of conformists and nonconformists in managing emergent macro behavior. As a specific example, we address an emergent and evolutionary approach to designing efficient network routings.
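A toy Granovetter-style threshold dynamic, offered only as an illustration of the kind of rule described above: each agent becomes active when the current fraction of active agents reaches its idiosyncratic threshold, and iterating this micro response to the macro state produces an emergent cascade. The specific thresholds are an assumption for the example.

```python
thresholds = [i / 10 for i in range(10)]       # heterogeneous thresholds 0.0 .. 0.9
active = [False] * len(thresholds)
for _ in range(len(thresholds) + 1):           # iterate the micro-macro loop to a fixed point
    frac = sum(active) / len(active)           # macroscopic (aggregate) state
    active = [t <= frac for t in thresholds]   # each agent's myopic threshold response
```

Here the zero-threshold agent activates first, which pushes the fraction over the next threshold, and so on until the whole population tips: a macroscopic order that no single rule mentions explicitly.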
This letter focuses on the design of a unified estimator for scheduled control in nonlinear systems with an unknown parameter. An estimation law with a finite convergence time is formulated to compute the unknown scheduling parameter that drives a scheduled controller. The estimator can also be extended to the types of scheduled controllers addressed in the literature.
Kouki HOJO Boris Ya. RYABKO Joe SUZUKI
Currently, the most popular model in data compression theory is that of stationary ergodic sources. However, there exist sequences that are not emitted from any stationary ergodic source yet can be compressed sufficiently by a certain algorithm. We estimate the size of the set of such sequences in terms of Hausdorff dimension.
Seung-Pyo CHAE Jeong-Woo LEE Woo-Young JANG Byung-Seop SONG Myoung-Nam KIM Si-Yeol KIM Jin-Ho CHO
An electroretinogram (ERG) represents the global response of the retina to a visual stimulus and shows the accumulated responses of each layer of the retina relative to the signal processing mechanisms occurring within the retina. Thus, investigating the reaction types of each ERG wave provides information required for diagnosis and for identifying the signal processing mechanisms in the retina. In this study, an ERG signal is generated by simulating the volume conductor field response for each retinal layer; these responses are then summed algebraically. The retina model used for the simulation is Shah's Computer Retina model, which is the most reliable model developed so far. When the generated ERG is compared with a typical clinical ERG, it exhibits a close similarity. By changing the parameters of the ERG model, a diagnostic investigation of the resulting variation in the ERG waveform is performed.
Kensaku FUJII Mitsuji MUNEYASU Takao HINAMOTO Yoshinori TANAKA
The normalized least mean square (NLMS) algorithm has the drawback that the convergence speed of the adaptive filter coefficients decreases when the reference signal has high auto-correlation. A technique to improve the convergence speed is to apply the decorrelated reference signal to the calculation of the gradient defined in the NLMS algorithm. So far, only the effect of the improvement has been examined experimentally; the convergence property of the adaptive algorithm to which the technique is applied has not yet been analyzed sufficiently. This paper first defines a cost function properly representing the criterion used to estimate the coefficients of the adaptive filter. The name given in this paper to the adaptive algorithm exploiting the decorrelated reference signal, the "normalized least mean EE'" algorithm, exactly expresses this criterion: the algorithm estimates the coefficients so as to minimize the product of E and E', which are the differences between the responses of the unknown system and the adaptive filter to the original and the decorrelated reference signals, respectively. By using the cost function, this paper second specifies the convergence condition of the normalized least mean EE' algorithm and finally presents computer simulations, calculated using a real speech signal, to demonstrate the validity of the convergence condition.
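The decorrelation step assumed above can be sketched with a first-order prediction-error filter: it whitens a highly auto-correlated AR(1) reference signal before the gradient calculation. The AR coefficient 0.9 and the matched prediction filter are illustrative choices for the example, not values from the paper.

```python
import random

rng = random.Random(1)
a = 0.9                                   # AR(1) coefficient: strong auto-correlation
x = [0.0]
for _ in range(20000):
    x.append(a * x[-1] + rng.gauss(0.0, 1.0))
x_dec = [x[n] - a * x[n - 1] for n in range(1, len(x))]   # decorrelated reference

def lag1_corr(s):
    m = sum(s) / len(s)
    num = sum((s[i] - m) * (s[i + 1] - m) for i in range(len(s) - 1))
    den = sum((v - m) ** 2 for v in s)
    return num / den

r_orig = lag1_corr(x[1:])   # close to 0.9 for the raw reference
r_dec = lag1_corr(x_dec)    # close to 0 after decorrelation
```

It is this whitened signal, used in place of the raw reference in the gradient, that restores the convergence speed the NLMS algorithm loses on correlated inputs.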
Naoki ABE Jun-ichi TAKEUCHI Manfred K. WARMUTH
We consider the problem of efficient learning of probabilistic concepts (p-concepts) and, more generally, stochastic rules in the sense defined by Kearns and Schapire and by Yamanishi. Their models extend the PAC-learning model of Valiant to the learning scenario in which the target concept or function is stochastic rather than deterministic as in Valiant's original model. In this paper, we consider the learnability of stochastic rules with respect to the classic 'Kullback-Leibler divergence' (KL divergence) as well as the quadratic distance as the distance measure between rules. First, we show that the notion of polynomial-time learnability of p-concepts and stochastic rules with fixed range size using the KL divergence is in fact equivalent to the same notion using the quadratic distance, and hence any of the distances considered in [6] and [18]: the quadratic, variation, and Hellinger distances. As a corollary, it follows that a wide range of classes of p-concepts which were shown to be polynomially learnable with respect to the quadratic distance in [6] are also learnable with respect to the KL divergence. The sample and time complexity of algorithms that would be obtained by the above general equivalence, however, are far from optimal. We present a polynomial-time learning algorithm with reasonable sample and time complexity for the important class of convex linear combinations of stochastic rules. We also develop a simple and versatile technique for obtaining sample complexity bounds for learning classes of stochastic rules with respect to the KL divergence and quadratic distance, and apply it to produce bounds for the classes of probabilistic finite state acceptors (automata), probabilistic decision lists, and convex linear combinations.
Noboru YABUKI Yoshitaka MATSUDA Makoto OTA Yasuaki SUMI Yutaka FUKUI Shigehiko MIKI
Processes in image recognition include target detection and shape extraction. Active Net has been proposed as one method for such processing; it treats target detection in an image as an energy optimization problem. In this paper, a problem with the conventional Active Net is presented and a new Active Net is proposed. The new net improves the ability to detect a target. Finally, the validity of the proposed net is confirmed by experimental results.
It is well known that, among equalisers based on the structure of a transversal filter, the RLS equaliser provides the fastest convergence in stationary environments. This paper addresses an adaptive transversal equaliser which has the potential to provide faster convergence than the RLS equaliser. A comparison is made with respect to the computational complexity required for each update of the equaliser coefficients, and computer simulations are presented to show the superiority of the proposed equaliser.
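For reference, the baseline mentioned above can be sketched as the standard RLS recursion (gain vector, a-priori error, inverse-correlation update) identifying a toy two-tap channel. The forgetting factor, initialization, and noiseless setup are illustrative assumptions; the paper's proposed equaliser itself is not reproduced here.

```python
import random

rng = random.Random(5)
lam = 0.99                         # forgetting factor (illustrative value)
h = [1.0, -0.5]                    # toy unknown channel
w = [0.0, 0.0]                     # transversal-filter coefficients
P = [[100.0, 0.0], [0.0, 100.0]]   # inverse correlation estimate, (1/delta) * I
x = [0.0, 0.0]
for _ in range(300):
    x = [rng.gauss(0.0, 1.0), x[0]]
    e = sum((hi - wi) * xi for hi, wi, xi in zip(h, w, x))   # a-priori error
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    k = [v / (lam + x[0] * Px[0] + x[1] * Px[1]) for v in Px]  # gain vector
    w = [wi + ki * e for wi, ki in zip(w, k)]
    xTP = [x[0] * P[0][0] + x[1] * P[1][0],
           x[0] * P[0][1] + x[1] * P[1][1]]
    P = [[(P[i][j] - k[i] * xTP[j]) / lam for j in range(2)] for i in range(2)]
```

The O(N^2) cost per update visible in the P recursion is what the per-update complexity comparison in the paper is measured against.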
SungEun JO Sang Woo KIM Jin Soo LEE
This paper provides a normalized Iterative Feedback Tuning (IFT) method that assures the boundedness of the gradient vector estimate (with respect to the controller parameter ρ) and the Hessian matrix estimate without the assumption that the internal signals are bounded. The proposed method uses the unbiased Gauss-Newton direction by the addition of a fourth experiment. We also present blended control criteria and a PID-like controller as new design choices. In examples, the normalized IFT method achieves good convergence even when the internal signals or the measurement noise variance are large.
This paper proposes an algorithm that adaptively estimates the time-varying noise variance used in Kalman filtering for real-time speech enhancement. In a speech signal contaminated by white noise, the spectral components in the high-frequency band, excluding the dominant ones, are expected to reflect the noise energy. Our approach first finds the dominant energy bands over the speech spectrum using LPC. We then calculate the average of the actual spectral components over the high-frequency region, excluding the dominant energy bands, and use it as the noise variance. The resulting noise-variance estimate is then applied to Kalman filtering to suppress the background noise. Experimental results indicate that the proposed approach achieves a significant improvement in speech enhancement over conventional Kalman filtering, which uses only the average noise power over silence intervals. As a refinement, we employ multiple Kalman filters with multiple noise models and improve the intelligibility.
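A hedged sketch of the variance estimate described above: average the periodogram of a noisy frame over a high-frequency region assumed free of dominant signal energy. The paper locates the dominant bands with LPC; here a fixed band, a single low-frequency tone standing in for speech, and the frame sizes are all assumptions for the example.

```python
import cmath
import math
import random

def periodogram_bin(frame, k):
    # |X[k]|^2 / N; its expectation equals the white-noise variance in noise-only bins
    N = len(frame)
    X = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return abs(X) ** 2 / N

rng = random.Random(7)
N, sigma = 256, 0.5
est, frames = 0.0, 4
for _ in range(frames):
    frame = [math.sin(2 * math.pi * 5 * n / N) + rng.gauss(0.0, sigma)
             for n in range(N)]                       # "speech" tone + white noise
    hi = [periodogram_bin(frame, k) for k in range(64, 128)]  # bins away from the tone
    est += sum(hi) / len(hi)
est /= frames   # estimated noise variance; true value is sigma**2 = 0.25
```

This per-frame estimate is the quantity that would be fed to the Kalman filter as the observation-noise variance, updated as the noise level drifts.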
The maximum likelihood estimate of a mixture model is usually found using the EM algorithm. However, the EM algorithm suffers from the local-optima problem, and we therefore cannot obtain the potential performance of mixture models in practice. In the case of mixture models, local maxima often place too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations we propose a new variant of the EM algorithm in which simultaneous split and merge operations are repeatedly performed, using a new criterion for efficiently selecting the split and merge candidates. We apply the proposed algorithm to the training of Gaussian mixtures and to dimensionality reduction based on a mixture of factor analyzers, using synthetic and real data, and show that the proposed algorithm can markedly improve the ML estimates.
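One way to select merge candidates, sketched below as an assumption about the kind of criterion meant above, is to score component pairs by the inner product of their posterior (responsibility) vectors over the data: two components that respond to the same points are redundant and are good candidates to merge, freeing a component for an under-fitted region.

```python
def merge_score(resp, i, j):
    # inner product of the responsibility vectors of components i and j
    return sum(r[i] * r[j] for r in resp)

# toy responsibilities for 3 components over 4 data points: components 0 and 1
# overlap on the same points, so they should be the top merge candidates
resp = [[0.5, 0.5, 0.0],
        [0.6, 0.4, 0.0],
        [0.0, 0.0, 1.0],
        [0.5, 0.5, 0.0]]
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
best = max(pairs, key=lambda p: merge_score(resp, *p))
```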
Kensaku FUJII Yoshinori TANAKA
The signed regressor algorithm, a variation of the least mean square (LMS) algorithm, is characterized by its use of the clipped reference signal, namely, its sign (±1), in the estimation. This clipping, equivalent to quantizing the reference signal to ±1, increases the estimation error by only about 2 dB. This paper proposes increasing the number of quantization steps to three, namely, ±1 and 0, and shows that the 'tri-quantized-x' normalized least mean square (NLMS) algorithm with three quantization steps improves the convergence property.
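The three-level quantizer described above can be sketched as follows; the threshold T below is an assumed design parameter, since the abstract only states that it is predetermined.

```python
def tri_quantize(x, T):
    # three quantization steps: +1, -1, and 0 for small-amplitude samples
    if x >= T:
        return 1
    if x <= -T:
        return -1
    return 0       # zeroed rather than clipped to +/-1

levels = [tri_quantize(v, 0.1) for v in (-0.5, -0.05, 0.0, 0.05, 0.5)]
```

Zeroing the small-amplitude samples excludes them from the coefficient update, whereas the two-level quantizer would amplify them to full scale.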
First-order line-search optimization techniques have gained essential practical importance over second-order optimization techniques due to their computational simplicity and low memory requirements. The computational cost of second-order methods becomes unbearable for large optimization tasks; the only applicable optimization techniques in such cases are variations of first-order approaches. This article presents one such variation of a first-order line-search optimization technique. The presented algorithm substantially simplifies the line-search subproblem to a single-step calculation of an appropriate step length. This remarkably simplifies the implementation and the computational complexity of the line-search subproblem and yet does not harm the stability of the method. The algorithm is theoretically proven convergent, with superlinear convergence rates, and is exactly classified within the formerly proposed classification framework for first-order optimization. The performance of the proposed algorithm is evaluated on five data sets and compared to a relevant standard first-order optimization technique. The results indicate superior performance of the presented algorithm over the standard first-order method.
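To illustrate the idea of replacing an iterative line search with a single-step step-length formula, the sketch below minimizes a quadratic f(x) = 0.5 x·Ax by steepest descent, where the exact minimizing step along -g is alpha = (g·g)/(g·Ag) and needs one extra matrix-vector product. The quadratic example is illustrative only; it is not the paper's algorithm.

```python
def grad(A, x):
    # gradient of 0.5 * x^T A x for symmetric A is A x
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[3.0, 1.0], [1.0, 2.0]]       # symmetric positive definite
x = [4.0, -3.0]
for _ in range(50):
    g = grad(A, x)
    if dot(g, g) < 1e-20:
        break
    alpha = dot(g, g) / dot(g, grad(A, g))   # single-step step length, no search loop
    x = [xi - alpha * gi for xi, gi in zip(x, g)]
```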
Koji INOUE Koji KAI Kazuaki MURAKAMI
This paper proposes an on-chip memory-path architecture employing the dynamically variable line-size (D-VLS) cache for high performance and low energy consumption. The D-VLS cache exploits the high on-chip memory bandwidth attainable on merged DRAM/logic LSIs by replacing a whole large cache line in one cycle. At the same time, it attempts to avoid frequent evictions by decreasing the cache-line size when programs have poor spatial locality. Activating only the on-chip DRAM subarrays corresponding to the replaced cache-line size produces a significant energy reduction. In our simulations, the proposed on-chip memory-path architecture, which employs a direct-mapped D-VLS cache, improves the ED (Energy-Delay) product by more than 75% over a conventional memory-path model.
Numerous scientific and engineering fields extensively utilize optimization techniques to find appropriate parameter values of models, and various optimization methods are available for practical use. Optimization algorithms are classified primarily according to their rates of convergence. Unfortunately, it is often the case in practice that a particular optimization method with specified convergence rates performs substantially differently on different optimization tasks. The theoretical classification by convergence rates then loses its relevance in the context of practical optimization. It is therefore desirable to formulate a novel classification framework relevant both to the theoretical concept of convergence rates and to practical optimization. This article introduces such a classification framework. The proposed framework enables the specification of optimization techniques and optimization tasks, and it reveals their inherent relationship to convergence rates. The novel classification framework is applied to categorizing the tasks of optimizing polynomials and the problem of training multilayer perceptron neural networks.
Markus H. KLEIN Rob J. M. M. SNIJKERS Gerjan J. M. HAGELAAR
Low luminous efficacy is one of the major drawbacks of PDPs, with the discharge being the predominant limiting factor. Numerical simulations granting deeper insight into the core processes of the discharge are presented, and the key parameters influencing the plasma efficiency are examined.
Jacir L. BORDIM JiangTao CUI Tatsuya HAYASHI Koji NAKANO Stephan OLARIU
The main contribution of this work is to propose energy-efficient randomized initialization protocols for ad-hoc radio networks (ARN, for short). First, we show that if the number n of stations is known beforehand, the single-channel ARN can be initialized by a protocol that terminates, with high probability, in O(n) time slots with no station being awake for more than O(log n) time slots. We then address the case where the number n of stations in the ARN is not known beforehand. We begin by discussing an elegant protocol that provides a tight approximation of n. Interestingly, this protocol terminates, with high probability, in O((log n)^2) time slots, and no station has to be awake for more than O(log n) time slots. We use this protocol to design an energy-efficient initialization protocol that terminates, with high probability, in O(n) time slots with no station being awake for more than O(log n) time slots. Finally, we design an energy-efficient initialization protocol for the k-channel ARN that terminates, with high probability, in O(n/k + log n) time slots, with no station being awake for more than O(log n) time slots.
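The building block behind such randomized protocols can be illustrated with a toy single-channel contention slot: each of n stations transmits with probability 1/n, and the slot succeeds when exactly one station transmits, which happens with probability (1 - 1/n)^(n-1), close to 1/e for large n. This Monte Carlo sketch shows only that building block, not any of the paper's protocols.

```python
import random

rng = random.Random(11)
n, trials = 32, 20000
successes = sum(
    1 for _ in range(trials)
    if sum(1 for _ in range(n) if rng.random() < 1.0 / n) == 1   # exactly one transmitter
)
rate = successes / trials   # near (1 - 1/n)**(n - 1), about 0.38 for n = 32
```

A constant per-slot success probability is what lets n stations be assigned identifiers in O(n) expected slots, while a station can sleep through slots in which it does not transmit, keeping its awake time low.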
In this paper, a new set of difference equations is derived for the transient analysis of the convergence of adaptive FIR filters using the Sign-Sign Algorithm with a Gaussian reference input and additive Gaussian noise. The analysis is based on the assumption that the tap weights are jointly Gaussian distributed. The residual mean squared error after convergence and simpler approximate difference equations are further developed. Experimental results exhibit good agreement between the theoretically calculated convergence and that of simulation for a wide range of adaptive-filter parameter values.
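For concreteness, the update whose transients are analyzed above is the sign-sign recursion w ← w + μ sgn(e) sgn(x), sketched here on a noiseless two-tap system identification; the system, step size, and iteration count are illustrative assumptions.

```python
import random

def sgn(v):
    return 1.0 if v >= 0 else -1.0

rng = random.Random(3)
h = [0.8, -0.4]                  # toy unknown system
w = [0.0, 0.0]
x = [0.0, 0.0]
mu = 0.002
for _ in range(10000):
    x = [rng.gauss(0.0, 1.0), x[0]]
    e = sum((hi - wi) * xi for hi, wi, xi in zip(h, w, x))   # noiseless error
    # both the error and the regressor enter only through their signs
    w = [wi + mu * sgn(e) * sgn(xi) for wi, xi in zip(w, x)]
```

Because every coefficient moves by exactly ±μ per iteration, the tap weights converge slowly toward h and then jitter in an O(μ) neighborhood, which is the residual mean squared error the analysis quantifies.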
The individually normalized least mean square (INLMS) algorithm has been proposed as an adaptive algorithm suitable for fixed-point processing. The convergence property of the INLMS algorithm, however, has not yet been analyzed sufficiently. This paper first derives an equation describing the convergence property by expressing the INLMS algorithm as a first-order infinite impulse response (IIR) filter. According to the equation derived in this way, the decreasing process of the estimation error is represented as the response of another IIR filter expression. Using this representation, this paper second derives the convergence condition of the INLMS algorithm as the range of the step size that makes the latter IIR filter a low-pass filter. This paper also derives the step size maximizing the convergence speed as the maximum coefficient of the latter IIR filter, and finally clarifies the range of the step size recommended in practical system design.
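The first-order IIR view used above can be illustrated in the simplest scalar case, which is an assumption standing in for the paper's full derivation: a normalized coefficient update of the form w ← (1 − μ)w + μh is a first-order IIR filter with pole (1 − μ), stable and low-pass for 0 < μ < 1, so the estimation error shrinks geometrically as (1 − μ)^n.

```python
mu, h = 0.25, 1.0       # illustrative step size and target coefficient
w = 0.0
errs = []
for _ in range(20):
    w = (1.0 - mu) * w + mu * h   # first-order IIR recursion with pole (1 - mu)
    errs.append(h - w)
# each step multiplies the estimation error by exactly (1 - mu)
ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
```

The pole location makes the convergence condition transparent: the recursion diverges for μ outside (0, 2), and the error decays fastest when the pole magnitude |1 − μ| is smallest.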