Jie CHEN Shuichi ITOH Takeshi HASHIMOTO
A complete analysis of the quantization noise and the reconstruction noise of the wavelet pyramid coding system is given. It is shown that in the (orthonormal) wavelet image coding system there exists a simple and exact formula to compute the reconstruction mean-square error (MSE) for any kind of quantization error. Based on the noise analysis, an optimal bit allocation scheme which minimizes the system reconstruction distortion at a given rate is developed. The reconstruction distortion of a wavelet pyramid system is proved to be directly proportional to 2^{-2R}, where R is the given bit rate. It is shown that, when the optimal bit allocation scheme is adopted, the reconstruction noise can be approximated by white noise. In particular, it is shown that from only one known quantization MSE of a wavelet decomposition at any layer of the wavelet pyramid, all of the reconstruction MSEs and quantization MSEs of the coding system can be easily calculated. When uniform quantizers are used, it is shown that at two successive layers of the wavelet pyramid the optimal quantization step size is half that of its predecessor, which coincides with the dyadic resolution structure of the wavelet pyramid decomposition. A comparison between wavelet-based image coding and some well-known traditional image coding methods is made by simulations, and the reasons why wavelet-based image coding is superior to traditional image coding are explained.
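For an orthonormal wavelet transform, the exact-MSE claim above follows from Parseval's relation: the image-domain reconstruction MSE equals the size-weighted average of the per-subband quantization MSEs. A minimal sketch of that computation (our own illustration with hypothetical subband sizes and MSE values, not the paper's code):

```python
import numpy as np

def reconstruction_mse(subband_sizes, subband_mses):
    """Exact reconstruction MSE from per-subband quantization MSEs
    (valid for an orthonormal transform, by Parseval's relation)."""
    sizes = np.asarray(subband_sizes, dtype=float)
    mses = np.asarray(subband_mses, dtype=float)
    return float(np.sum(sizes * mses) / np.sum(sizes))

# Example: a 3-layer pyramid of a 512x512 image (all values illustrative).
sizes = [256*256, 256*256, 256*256,   # layer-1 detail subbands
         128*128, 128*128, 128*128,   # layer-2 detail subbands
         64*64, 64*64, 64*64, 64*64]  # layer-3 details + approximation
mses = [0.8, 0.8, 0.9, 1.6, 1.6, 1.8, 3.0, 3.0, 3.4, 0.2]
print(reconstruction_mse(sizes, mses))
```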
Wei LIU Toshihiko KATO Seiji UENO Shuichi ITOH
With the spread of the Mobile Internet, mobile communication with QoS guarantees will be required in order to realize mobile video interactions. There have been studies focusing on QoS Mobile IP communication, but they require backbone routers to maintain per-flow QoS information for all individual Mobile Nodes, so these approaches suffer from a lack of scalability. In contrast, we are developing an approach in which the per-flow QoS information is maintained only by Mobile IP agents such as the Home Agent and the Foreign Agent. We have adopted a hierarchical method with MPLS, in which MPLS paths with large bandwidth are introduced between the Mobile IP related nodes, and a per-flow path with small bandwidth, called a Pathlet, is established for each individual communication between a Mobile Node and a Fixed Host. The maintenance of Pathlets is performed only by the Home Agent, the Foreign Agent and the Fixed Host, while the backbone MPLS routers take care only of the MPLS paths with large bandwidth. In the simulation, we compare our scheme with a conventional scheme by observing the total number of entries managed by routers and the bandwidth prepared at individual links.
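To make the scalability argument concrete, here is a rough back-of-the-envelope sketch (our own illustration, not from the paper) comparing per-router state in a flat per-flow scheme against the hierarchical Pathlet scheme, where backbone routers hold only aggregate MPLS path entries; all parameter values are hypothetical:

```python
def flat_scheme_entries(num_flows, num_backbone_routers):
    # Every backbone router on the path keeps one entry per flow.
    return num_flows * num_backbone_routers

def pathlet_scheme_entries(num_flows, num_backbone_routers, num_aggregate_lsps):
    # Backbone routers keep only aggregate LSP entries; per-flow Pathlet
    # state lives at the Home Agent, Foreign Agent and Fixed Host (3 ends).
    return num_aggregate_lsps * num_backbone_routers + 3 * num_flows

flows, routers, lsps = 10_000, 20, 50
print(flat_scheme_entries(flows, routers))           # 200000 entries
print(pathlet_scheme_entries(flows, routers, lsps))  # 31000 entries
```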
Hideaki TSUCHIYA Shuichi ITOH Takeshi HASHIMOTO
An algorithm for designing a pattern classifier, which uses the MDL criterion and a binary data structure, is proposed. The algorithm gives a partitioning of the range of the multi-dimensional attribute and an estimated probability model for this partitioning. The volume of the bins in this partitioning is upper bounded by O((log N/N)^{K/(K+2)}) almost surely, where N is the length of the training sequence and K is the dimension of the attribute. The convergence rates of the code length and of the divergence of the estimated model are asymptotically upper bounded by O((log N/N)^{2/(K+2)}). The classification error is asymptotically upper bounded by O((log N/N)^{1/(K+2)}). Simulation results for 1-dimensional and 2-dimensional attributes show that the algorithm is practically efficient.
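As a rough illustration of the MDL-driven partitioning idea (a simplified 1-dimensional sketch of our own, not the paper's exact criterion or data structure), a cell can be split at its midpoint only when the two-part code length, data bits under the histogram density model plus model description bits, decreases:

```python
import numpy as np

def two_part_bits(counts, widths, n_total):
    # Data cost: code length of samples under the histogram density
    # estimate n_i/(N*w_i); model cost: 0.5*log2(N) bits per cell
    # parameter plus one bit per tree node (a common MDL convention).
    counts = np.asarray(counts, dtype=float)
    widths = np.asarray(widths, dtype=float)
    nz = counts > 0
    data = float(np.sum(counts[nz] *
                        np.log2(n_total * widths[nz] / counts[nz])))
    model = 0.5 * len(counts) * np.log2(n_total) + len(counts)
    return data + model

def mdl_partition(x, lo, hi, n_total, depth=0, max_depth=12):
    mid = 0.5 * (lo + hi)
    left, right = x[x < mid], x[x >= mid]
    w = hi - lo
    keep_cost = two_part_bits([len(x)], [w], n_total)
    split_cost = two_part_bits([len(left), len(right)], [w / 2, w / 2], n_total)
    if depth >= max_depth or split_cost >= keep_cost:
        return [(lo, hi)]
    return (mdl_partition(left, lo, mid, n_total, depth + 1, max_depth) +
            mdl_partition(right, mid, hi, n_total, depth + 1, max_depth))

x = np.random.default_rng(0).beta(2, 5, size=1000)  # toy training sample
print(mdl_partition(x, 0.0, 1.0, len(x)))           # adaptive bin boundaries
```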
The Cuong DINH Takeshi HASHIMOTO Shuichi ITOH
For L ≥ 2, M ≥ 8, and transmission rate R = log2(M) - 1 bit/sym, a method for constructing GU trellis codes with L×MPSK constellations is proposed, and it is shown that the maximally achievable free distance is twice as large as previously reported for GU codes. Besides being geometrically uniform, these codes perform as well as Pietrobon's non-GU trellis codes with multidimensional MPSK constellations.
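For reference, the squared Euclidean distances that enter free-distance computations for MPSK trellis codes are easy to tabulate; a small helper of our own (an illustration, not the paper's construction):

```python
import numpy as np

def mpsk_sq_dist(M, k):
    """Squared distance between M-PSK points k steps apart on the unit
    circle: |exp(2*pi*j*k/M) - 1|^2 = 4 sin^2(pi*k/M)."""
    return 4.0 * np.sin(np.pi * k / M) ** 2

M = 8
print([round(mpsk_sq_dist(M, k), 4) for k in range(1, M // 2 + 1)])
# [0.5858, 2.0, 3.4142, 4.0]: per-symbol distance increments; the free
# distance of an L x MPSK code accumulates such terms over L dimensions.
```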
Wen CHEN Shuichi ITOH Junji SHIKI
In the more general framework of shift-invariant subspaces, the paper obtains, by using frame theory, an estimate for sampling in function subspaces that differs from our former work. The derived formula is easy to calculate, and the estimate is relaxed in some shift-invariant subspaces. The former work now becomes a special case of the present one.
When the alphabet of a source is very large, existing universal data compression algorithms fail to show their expected performance. The purpose of this paper is to provide an efficient coding algorithm even for sources with such a large alphabet. We assume that the source emits independent and identically distributed symbols drawn from a smooth but unknown distribution, and that the source symbols have a linear order. The algorithm is described in an adaptive enumerative coding flavor. Let Sn denote the set of the observed past n samples, and suppose we are to encode a new sample x noiselessly. We look into Sn and determine the positional index of x among the ordered past n samples; this can be done by Cover's enumerative formula. Sn is divided into k groups of equal size according to that order. Since the probability that x falls into any particular group is then almost equal to 1/k, the group number of x is encoded in fixed length, log k bits. The source alphabet is also partitioned into k sub-alphabets in accordance with the groups in Sn. Then the positional index of x in the selected sub-alphabet is encoded in variable length. The computational time complexity of the algorithm is linear in the input data length. The experimental analysis and discussion show the superiority of our algorithm over existing ones for the sources in question. A modification of the base algorithm is also proposed for implementation purposes.
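A minimal sketch of one adaptive encoding step (our own reading of the scheme, not the authors' implementation): the past samples are kept sorted, the k-1 quantile boundaries split both the past samples and the ordered source alphabet into k groups, the group number of the new symbol x is sent in ceil(log2 k) fixed bits, and the position of x inside its sub-alphabet is what a variable-length code would then encode:

```python
import bisect, math

def encode_step(past_sorted, alphabet, x, k):
    n = len(past_sorted)
    boundaries = [past_sorted[(i * n) // k] for i in range(1, k)]
    group = bisect.bisect_left(boundaries, x)          # group index 0..k-1
    group_bits = math.ceil(math.log2(k))               # fixed-length part

    lo = boundaries[group - 1] if group > 0 else None  # exclusive lower end
    hi = boundaries[group] if group < k - 1 else None  # inclusive upper end
    sub = [a for a in alphabet
           if (lo is None or a > lo) and (hi is None or a <= hi)]
    pos = sub.index(x)                                 # variable-length part
    bisect.insort(past_sorted, x)                      # adaptive update
    return group, group_bits, pos, len(sub)

past = sorted([3, 8, 8, 15, 21, 27, 34, 40])           # toy history, n = 8
print(encode_step(past, list(range(64)), 22, k=4))     # (2, 2, 0, 13)
```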
Jie CHEN Shuichi ITOH Takeshi HASHIMOTO
A new method by which images are coded with predictable and controllable subjective picture quality at the minimum cost in bit rate is developed. By using the wavelet transform, the original image is decomposed into a set of subimages with different frequency channels and resolutions. By utilizing human contrast sensitivity, each decomposed subimage is treated according to its contribution to the total visual quality and to the bit rate. A relationship between the physical errors (mainly quantization errors) incurred in the orthonormal wavelet image coding system and the subjective picture quality, quantified as the mean opinion score (MOS), is established. Instead of using the traditional optimum bit allocation scheme, which minimizes a distortion cost function under the constraint of a given bit rate, we develop an "optimum visually weighted noise power allocation" (OVNA) scheme which emphasizes satisfying a desired subjective picture quality at the minimum cost in bit rate. The proposed method enables us to predict and control the picture quality before reconstruction and to compress images to the desired subjective picture quality at the minimum bit rate.
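As a toy model of visually weighted noise-power allocation (our own simplification, not the paper's OVNA formulas): with the classical rate model R_i = 0.5 log2(sigma_i^2/D_i), minimizing the total rate under a target weighted noise power sum_i w_i*D_i = D_target makes the product w_i*D_i constant across subbands, i.e. each subband tolerates noise inversely proportional to its visual weight:

```python
import numpy as np

def allocate_noise(weights, d_target, variances):
    """Spread a target weighted noise power so that w_i * D_i is constant,
    then evaluate the resulting per-subband rates."""
    w = np.asarray(weights, dtype=float)
    d = (d_target / len(w)) / w                 # w_i * D_i = D_target / n
    rates = np.maximum(0.0, 0.5 * np.log2(np.asarray(variances) / d))
    return d, rates

weights = [8.0, 4.0, 2.0, 1.0]      # hypothetical contrast-sensitivity weights
variances = [400.0, 120.0, 40.0, 10.0]
d, r = allocate_noise(weights, d_target=4.0, variances=variances)
print(d)   # more noise is tolerated in visually less sensitive subbands
print(r)
```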
The paper provides an algorithm to estimate the deviation bound that admits the recovery of irregularly sampled signals in wavelet subspaces. The algorithm does not need the symmetric sampling constraint of the Paley-Wiener setting and relaxes the deviation bounds in some wavelet subspaces. Meanwhile, the method does not need the continuity and decay constraints imposed on scaling functions by Liu-Walter and Chen-Itoh-Shiki.
It is demonstrated that bounded interval band orthonormal scaling functions have the oversampling property. The truncation error is estimated when a scaling function with the oversampling property is used to recover signals from their discrete samples.
Wen CHEN Jie CHEN Shuichi ITOH
Following our former works on regular sampling in wavelet subspaces, the paper provides two algorithms to estimate the truncation error and the aliasing error, respectively, when the sampling theorem is applied to concrete signals. Furthermore, the shift sampling case is also discussed. Finally, some important examples are calculated to illustrate the algorithms.
Jie CHEN Shuichi ITOH Takeshi HASHIMOTO
A new method for the compression of electrocardiographic (ECG) data is presented. The method is based on the orthonormal wavelet analysis recently developed in applied mathematics. By using the wavelet transform, the original signal is decomposed into a set of sub-signals with different frequency channels corresponding to different physical features of the signal. By utilizing the optimum bit allocation scheme, each decomposed sub-signal is treated according to its contribution to the total reconstruction distortion and to the bit rate. In our experiments, compression ratios (CR) from 13.5:1 to 22.9:1 with corresponding percent rms difference (PRD) between 5.5% and 13.3% have been obtained at a clinically acceptable signal quality. Experimental results show that the proposed method seems suitable for the compression of ECG data in terms of high compression ratio and high speed.
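The percent rms difference (PRD) quoted above is the standard ECG distortion measure; a minimal reference implementation (our own, with a synthetic signal standing in for real ECG data):

```python
import numpy as np

def prd(original, reconstructed):
    """PRD = 100 * sqrt(sum((x - y)^2) / sum(x^2))."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

t = np.linspace(0.0, 1.0, 360)        # one second at 360 Hz (toy example)
x = np.sin(2 * np.pi * 5 * t)         # synthetic stand-in for an ECG trace
y = x + 0.05 * np.random.default_rng(1).standard_normal(x.shape)
print(f"PRD = {prd(x, y):.2f}%")
```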
An oversampling theorem for regular sampling in wavelet subspaces is established, and a necessary and sufficient condition for it to hold is found. Meanwhile, the truncation error and the aliasing error are estimated, respectively, when the theorem is applied to reconstruct discretely sampled signals. Finally, an algorithm is formulated and an example is calculated to illustrate the algorithm.
The paper obtains an algorithm to estimate the deviation bound for irregular sampling in wavelet subspaces. Compared with our former work on the problem, the new estimate is relaxed for some wavelet subspaces.