The expected lengths of the parsed segments obtained by applying the Lempel-Ziv incremental parsing algorithm to an i.i.d. source satisfy simple recurrence relations. By extracting the combinatorial essence of a previous proof, we obtain a simpler derivation.
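For concreteness, the incremental parsing in question can be sketched as follows; this is an illustrative LZ78-style parser, not the authors' analysis. Each new phrase is the shortest prefix of the remaining input that has not appeared as a phrase before.

```python
def lz_incremental_parse(seq):
    """LZ78 incremental parsing: each new phrase extends a
    previously seen phrase by exactly one symbol."""
    dictionary = {(): True}   # phrases seen so far, as tuples
    phrases = []
    current = ()
    for sym in seq:
        candidate = current + (sym,)
        if candidate in dictionary:
            current = candidate            # keep extending the match
        else:
            dictionary[candidate] = True   # register the new phrase
            phrases.append(candidate)
            current = ()
    if current:
        phrases.append(current)            # possibly incomplete last phrase
    return phrases
```

For example, the string "aababc" parses into the phrases a, ab, abc; the expected lengths of such phrases are the quantities whose recurrences the paper studies.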
Tsutomu KAWABATA Frans M. J. WILLEMS
We propose a variation of the Context Tree Weighting algorithm for tree sources, modified so that the growth of the context resembles Lempel-Ziv parsing. We analyze this algorithm, give a concise upper bound on the individual redundancy for any tree source, and prove the asymptotic optimality of the data compression rate for any stationary and ergodic source.
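For reference, the standard binary Context Tree Weighting recursion that the proposed variation modifies computes, at each context node $s$ of a tree of maximum depth $D$,

\[
P_w^{s} \;=\;
\begin{cases}
\tfrac12\,P_e^{s} \;+\; \tfrac12\,P_w^{0s}\,P_w^{1s}, & \text{if the depth of } s \text{ is less than } D,\\[2pt]
P_e^{s}, & \text{if } s \text{ is at depth } D,
\end{cases}
\]

where $P_e^{s}$ is the Krichevsky-Trofimov estimate of the counts observed in context $s$. The abstract's variant changes how the set of contexts grows, not this weighting form.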
We are interested in the error exponent for source coding with a fidelity criterion. For each fixed distortion level Δ, the maximum attainable error exponent at rate R, as a function of R, is called the reliability function. The minimum rate achieving a given error exponent is called the minimum achievable rate. For memoryless sources with a finite alphabet, Marton (1974) gave an expression for the reliability function. The aim of this paper is to derive formulas for the reliability function and the minimum achievable rate for memoryless Gaussian sources.
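In one common formulation, for a discrete memoryless source with distribution $P$, Marton's expression for the reliability function reads

\[
E(R,\Delta) \;=\; \min_{Q \,:\, R(Q,\Delta) \,\ge\, R} D(Q \,\|\, P),
\]

where $R(Q,\Delta)$ is the rate-distortion function of the auxiliary source $Q$ and $D(Q\|P)$ is the relative entropy. The paper extends this type of formula to the memoryless Gaussian case.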
Xiaolei GUO Tony T. LEE Hung-Hsiang Jonathan CHAO
A flow control algorithm in high-speed networks is a resource-sharing policy implemented in a distributed manner. This paper introduces the novel concept of backlog balancing and demonstrates its application to network flow control and congestion control by presenting a rate-based flow control algorithm for ATM networks. The aim of flow control is to maximize network utilization, achieving high throughput with tolerable delay for each virtual circuit (VC). In a resource-sharing environment, this objective may also cause network congestion when a cluster of aggressive VCs contend for the same resource at a particular node. The basic idea of our algorithm is to adjust the service rate of each node along a VC according to the backlog discrepancies between neighboring nodes, i.e., so as to reduce the backlog discrepancy. The handshaking procedure between any two consecutive nodes is carried out by a link-by-link binary feedback protocol. Each node updates its service rate periodically based on a linear projection model of the flow dynamics. The updated per-VC service rate at a node indicates its explicit demand for bandwidth, so a service policy implementing dynamic bandwidth allocation is introduced to enforce such demands. A simulation study validates the concept and its significance in achieving the goal of flow control while preventing network congestion at the same time.
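The backlog-balancing idea can be illustrated with a toy update rule; this is a sketch only, with an assumed gain `alpha` and rate bounds, whereas the paper's actual update uses a linear projection model of the flow dynamics driven by binary feedback.

```python
def update_service_rates(backlogs, rates, alpha=0.5, r_min=0.0, r_max=10.0):
    """Toy backlog-balancing step along one VC: each node nudges its
    service rate in proportion to the backlog discrepancy with its
    downstream neighbour, so that discrepancies shrink over time.
    backlogs[i], rates[i]: queue length and service rate at node i."""
    new_rates = list(rates)
    for i in range(len(backlogs) - 1):
        discrepancy = backlogs[i] - backlogs[i + 1]
        # a node holding more backlog than its downstream neighbour
        # speeds up; one holding less slows down
        new_rates[i] = min(r_max, max(r_min, rates[i] + alpha * discrepancy))
    return new_rates
```

For instance, with backlogs [4, 2, 2] and unit rates everywhere, only the first node (2 cells ahead of its neighbour) raises its rate.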
Jianting CAO Noboru MURATA Shun-ichi AMARI Andrzej CICHOCKI Tsunehiro TAKEDA Hiroshi ENDO Nobuyoshi HARADA
Magnetoencephalography (MEG) is a powerful and non-invasive technique for measuring human brain activity with high temporal resolution. The motivation for studying MEG data analysis is to extract the essential features from the measured data and relate them to human brain functions. In this paper, a novel MEG data analysis method based on an independent component analysis (ICA) approach with pre-processing and post-processing multistage procedures is proposed. Moreover, several kinds of ICA algorithms are investigated for analyzing MEG single-trial data recorded in a phantom experiment. The analyzed results are presented to illustrate the effectiveness and high performance both of source decomposition by the ICA approaches and of source localization by an equivalent current dipole fitting method.
This paper proposes a radio resource control scheme for ABR service that executes flow control on the transmission rate and assignable bandwidth according to the congestion conditions in both wireless and wired networks. The proposed scheme is useful in improving frequency utilization and meeting QoS requirements. There are two methods to realize the proposed scheme: explicit rate control (ERC) and binary control (BC). We evaluate the performance of the proposed scheme by simulation, in comparison with a scheme without flow control in a wireless network, under the conditions of a finite buffer, wired-network congestion, and RM-cell errors. Consequently, we confirm that the proposed scheme is more effective than the scheme without flow control under all service conditions. In addition, we clarify that both ERC and BC are effective under the conditions of a finite buffer, wired-network congestion, and RM-cell errors.
Efficient radio resource utilization and fairness are important goals that must be achieved, since wireless ATM systems support various services with different traffic characteristics, such as CBR and UBR. This paper proposes a novel MAC protocol for broadband wireless ATM based on both queuing delay and queued data size. The proposed MAC protocol relies on a new resource scheduling algorithm that decides the priority of channel assignment based on both the queuing delay and the queued data size in the transmission buffer. Simulation results confirm that the proposed MAC protocol is able to provide throughput fairness and to achieve excellent throughput performance for ATM services that experience dynamic traffic fluctuations.
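A priority metric of the flavor the protocol schedules on might be sketched as below; the combining weights and names are illustrative assumptions, not values from the paper.

```python
def channel_priority(head_delay, queue_bytes, w_delay=1.0, w_size=0.001):
    """Hypothetical priority score combining the two quantities the
    scheduler considers: the age of the head-of-line cell (ms) and the
    amount of queued data (bytes). A higher score is served earlier."""
    return w_delay * head_delay + w_size * queue_bytes

# terminals as (name, head-of-line delay, queued bytes),
# sorted by descending priority before slot assignment
terminals = [("A", 5.0, 2000), ("B", 20.0, 500), ("C", 1.0, 8000)]
order = sorted(terminals, key=lambda t: channel_priority(t[1], t[2]),
               reverse=True)
```

With these weights, terminal B (long-delayed) outranks C (large queue), which outranks A, showing how both starvation and buffer pressure feed the priority.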
We consider the problem of placing resources in a distributed computing system so that certain performance requirements may be met while minimizing the number of resource copies needed. Resources include special I/O processors, expensive peripheral devices, and such software modules as compilers, library routines, and data files. Due to the delay in accessing each of these resources, system performance degrades as the distance between each processor and its nearest resource copy increases. Thus, every processor must be within a given distance k of at least one resource copy; this is called the k-bounded placement problem. The structure of a distributed computing system is represented by a graph. The k-bounded placement problem is first transformed into the problem of finding smallest k-dominating sets in a graph. Searching for smallest k-dominating sets is formulated as a state-space search problem. We derive heuristic information to speed up the search, which is then used to solve the problem with the well-known A* algorithm. An illustrative example and some experimental results are presented to demonstrate the effectiveness of the heuristic search.
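The k-domination condition can be illustrated with a simple greedy sketch; the paper itself searches for *smallest* k-dominating sets with A* and derived heuristics, so this greedy pass only shows the covering formulation, not the authors' algorithm.

```python
from collections import deque

def within_k(graph, src, k):
    """All vertices reachable from src within k hops (BFS)."""
    seen = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        if seen[v] == k:
            continue                      # do not expand past depth k
        for w in graph[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                queue.append(w)
    return set(seen)

def greedy_k_dominating_set(graph, k):
    """Greedy cover: repeatedly pick the vertex whose k-neighbourhood
    covers the most still-uncovered vertices."""
    cover = {v: within_k(graph, v, k) for v in graph}
    uncovered = set(graph)
    chosen = []
    while uncovered:
        best = max(graph, key=lambda v: len(cover[v] & uncovered))
        chosen.append(best)
        uncovered -= cover[best]
    return chosen

# a 5-processor path: 0 - 1 - 2 - 3 - 4
network = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

On the 5-node path with k = 2, a single copy at the middle vertex suffices, since every processor is within 2 hops of vertex 2.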
Channel-state-dependent (CSD) radio-resource scheduling algorithms for wireless message transport using a framed ALOHA-reservation access protocol are presented. In future wireless systems that provide Mbps-class high-speed wireless links using high frequencies, burst packet errors, which persist over a number of consecutive packets, would cause serious performance degradation. CSD resource scheduling algorithms utilize channel-state information to increase overall throughput. These algorithms were comparatively evaluated in terms of average allocation-plus-transfer delay, average throughput, variance in throughput, and utilization of resources. Computer simulation results showed that the CSD mechanism is effective, especially for equal sharing (ES)-based algorithms, and that CSD-ES provides low allocation-plus-transfer delay, high average throughput, low variance in throughput, and efficient utilization of radio resources.
In-Ho LIN Bih-Hwang LEE Chwan-Chia WU
This paper presents an object-oriented model to handle the temporal relationships among all of the multimedia objects at the presentation platform. Synchronization of the composite media objects is achieved by ensuring that all objects presented in the upcoming "manageable" period are ready for execution. To this end, the nature of overlays is first investigated for various types of objects. Critical overlaps, which are crucial for synchronization, are also defined. The objective of synchronization is to ensure that the media objects can be initiated precisely at the critical point of the corresponding critical overlap. The concept of a manageable presentation interval is introduced, and the irreducible media group is defined. The resource scheduling of each presentation group, in terms of media-object pre-fetch time versus buffer occupancy, is also examined. Accordingly, a new model called the group cascade object composition Petri-net (GCOCPN) is proposed, and an algorithm to implement this temporal synchronization scheme is presented.
Yasuhiro ISHIDA Kazuo MURAKAWA Kouji YAMASHITA Masamitsu TOKUDA
For a radiated-emission source-finding method based on the CISPR emission measurement system, which uses only amplitude data without phase data, we studied the applicability to horizontally polarized sources. We verified the method experimentally using two spherical dipole antennas as ideal emission sources in the frequency range from 300 MHz to 1 GHz. As a result, the position estimation deviation Δd was less than 0.09 m and the amplitude estimation deviation Δj was less than 1.5 dB; the position estimation accuracy was much improved compared with that for vertically polarized sources, and additionally the angle of the horizontal current direction could be estimated. Furthermore, it was revealed that this method can also be applied even when several sources exist; consequently, the applicability of this method has been greatly expanded.
Although considerable efforts have been exerted in recent years on the congestion control problems of ABR services in ATM networks, the focus has so far been mostly on unicast applications. The inclusion of the emerging multicast services in the design of congestion control schemes is still in its infancy. The generic rate-based closed-loop congestion control scheme proposed by the ATM Forum for ABR services suffers from a large delay-bandwidth product; VS/VD behavior is therefore proposed by the Forum as a supplement. In this paper, two VS/VD congestion control schemes for multicast ABR services in ATM networks are examined: forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN). Their performances are analyzed and compared. We further observe that both VS/VD schemes alleviate the consolidation noise and consolidation delay of the RM cells returning from the downstream nodes, which is a major concern of much current research. Simulation results are also given to support the validity of our analysis and claims.
Eitake IBARAGI Akira HYOGO Keitaro SEKINE
A tail current source is often employed in many analog building blocks. It can limit the increase of excess power, and it can also improve the CMRR and PSRR. In this paper, we propose a very-high-output-impedance tail current source for low-voltage applications. The proposed tail current source has, in theory, almost the same output impedance as the conventional cascode-type tail current source. Simulation results show that the output impedance of the proposed circuit reaches 1.28 GΩ at low frequencies. When the proposed circuit is applied to a differential amplifier, the CMRR is enhanced by 66.7 dB compared to the conventional differential amplifier. Moreover, the proposed circuit has another excellent merit: the output stage of the proposed tail current source can operate at VDS(sat) in theory and, in simulation, at a quarter of the VDS(sat) of the simple current source. For example, in the simulation, when the reference current IREF is set to 100 µA, the minimum voltage of the simple current source is approximately 0.4 V, whereas that of the proposed current source is approximately 0.1 V. Thus, the dynamic range can be enlarged by 0.3 V in this case, which is a large enough margin for low-voltage applications. Hence, the proposed tail current source is well suited to low-voltage applications.
Kai YANG Hiroyuki KUDO Tsuneo SAITO
We introduce a new wavelet image coding framework using context-based zerotree quantization, in which a unique and efficient method for optimizing zerotree quantization is proposed. Because of the localization properties of wavelets, when a wavelet coefficient is to be quantized, the best quantizer is one designed to match the statistics of the wavelet coefficients in its neighborhood; that is, the quantizer should be adaptive in both the space and frequency domains. Previous image coders tended to design quantizers at the band or class level, which limited their performance because the localization properties of wavelets were difficult to exploit. In contrast with previous coders, we propose to track these localization properties by combining tree-structured wavelet representations with adaptive models that are spatially varying according to the local statistics. In this paper, we describe the proposed coding algorithm, in which the spatially varying models are estimated from the quantized causal neighborhoods and the zerotree pruning is based on a Lagrangian cost that can be evaluated from the statistics near the tree. In this way, the optimization of zerotree quantization is no longer a joint optimization problem as in SFQ. Simulation results demonstrate that the coding performance is competitive with, and sometimes superior to, the best zerotree-based coding results reported for SFQ.
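The pruning criterion referred to is of the generic rate-distortion Lagrangian form: for the subtree $T_s$ rooted at a node $s$,

\[
J(T_s) \;=\; D(T_s) \;+\; \lambda\, R(T_s),
\]

and $T_s$ is pruned to a zerotree whenever the Lagrangian cost of zeroing its coefficients (distortion only, negligible rate) does not exceed the cost of coding them. The paper's contribution is that $D$ and $R$ here are evaluated from local statistics near the tree rather than by a joint search as in SFQ.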
Yoshitsugu TSUCHIYA Sakae CHIKARA Fumito SATO Hiroshi ISHII
This paper proposes an implementation of a Telecommunications Information Networking Architecture (TINA) connection management system, based on our involvement in the TINA Trial (TTT). The system is used for managing ATM networks, which consist of network elements with SNMP interfaces. It provides setup, configuration, and release of ATM connections, with a GUI-based network design tool that generates the network resource data used for deploying TINA software components. This paper reports on a method of implementing TINA components over a Distributed Processing Environment (DPE) and an effective way to manage computational objects with multiple interfaces by using the Trading Service.
We consider the optimal average cost of a variable-length source code, averaged with a given probability distribution over source messages. The problem was discussed in Csiszár and Körner's book. In the special case of a binary alphabet, we find an upper bound on the optimal cost minus an ideal cost, where the ideal cost is the entropy of the source divided by the unique scalar that turns the negated costs into logarithms of probabilities. Our bound is better than the one given in the book.
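In one standard formulation of this unequal-cost setting, the scalar in question is the unique $\lambda_0 > 0$ satisfying

\[
\sum_{a \in \mathcal{A}} 2^{-\lambda_0\, c(a)} \;=\; 1,
\]

so that $q(a) = 2^{-\lambda_0 c(a)}$ is a probability distribution and $\lambda_0\, c(a) = -\log_2 q(a)$, i.e., the scaled costs are negative logarithms of probabilities; the ideal cost is then $H(P)/\lambda_0$.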
This paper clarifies two variable-to-fixed-length codes that achieve the optimum large-deviations performance of the empirical compression ratio. One is the Lempel-Ziv code with a fixed number of phrases, and the other is an arithmetic code with a fixed codeword length. It is shown that the Lempel-Ziv code is asymptotically optimum in the above sense for the class of finite-alphabet, finite-state sources, and that the arithmetic code is asymptotically optimum for the class of finite-alphabet unifilar sources.
Recently there have been several attempts to construct a Markov information source based on the chaotic dynamics of PLM (piecewise-linear-monotonic) onto maps. A closer study, however, soon shows that Kalman's 1956 embedding of a Markov chain deserves to be highly appreciated. In this paper, Kalman's procedure for embedding a prescribed Markov chain into the chaotic dynamics of a PLM onto map is revisited and improved by using a PLM onto map with the minimum number of subintervals.
Ruck THAWONMAS Andrzej CICHOCKI
In this paper, we discuss a neural network approach to blind signal extraction of temporally correlated sources. Assuming autoregressive models of the source signals, we propose a very simple neural network model and an efficient on-line adaptive algorithm that extract, from linear mixtures, a temporally correlated source with an arbitrary distribution, including a colored Gaussian source and a source with an extremely low (or even zero) value of kurtosis. We then combine these extraction processing units with deflation processing units to extract such sources sequentially in a cascade fashion. Theory and simulations show that the proposed neural network successfully extracts all arbitrarily distributed but temporally correlated source signals from linear mixtures.
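The deflation step in the cascade can be illustrated in a few lines; this is only a least-squares sketch of removing an already-extracted source from the mixtures, while the extraction units themselves are the paper's adaptive AR-model-based algorithm.

```python
import numpy as np

def deflate(X, s):
    """Remove an extracted source s from the mixtures X by least
    squares, so that the next extraction unit sees mixtures without it.
    X: array of shape (n_mixtures, n_samples); s: shape (n_samples,)."""
    a = X @ s / (s @ s)           # projection of each mixture onto s
    return X - np.outer(a, s)     # residual mixtures

# toy check: if every mixture is a multiple of s, deflation removes it all
mixtures = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
source = np.array([1.0, 2.0, 3.0])
residual = deflate(mixtures, source)
```

In practice one extracted signal per stage is subtracted this way, and the cascade of (extraction, deflation) pairs recovers the sources one by one.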
In order to accommodate periodic and bursty sources into ATM networks effectively, we propose phase assignment control (PAC), which actively controls the phase of a new connection at connection setup. To realize PAC, we develop an algorithm that finds a good phase for the new connection in a short time. Simulation results show that PAC can improve system performance.