Development of new sliding contacts usable under severe conditions such as high temperature, extremely low temperature, or high vacuum has recently become an urgent necessity. This research mainly examined the contact resistance and coefficient of friction of three kinds of self-lubricating composite materials that combine electrical conductivity with mechanical stiffness. The results showed that the composite material (CMML-1) containing the smallest quantity of solid lubricants [WS2, Gr. (graphite)] among them exhibited both low contact resistance and a low coefficient of friction, with little fluctuation in either. EPMA analysis suggested that Sn contributes to the electrical conductivity.
Ching-Chih KUO Wen-Thong CHANG
By modelling the quantization error as additive white noise in the transform domain, a Wiener filter is used to reduce quantization noise for DCT-coded images in the DCT domain. Instead of deriving the spectrum of the transform coefficients, a DPCM loop is used to whiten the quantized DCT coefficients. The DPCM loop predicts the mean for each coefficient. By subtracting the mean, the quantized DCT coefficient is converted into the sum of a prediction error and quantization noise. After the DPCM loop, the prediction error can be assumed uncorrelated, which simplifies the design of the subsequent Wiener filter. The Wiener filter is applied to remove the quantization noise and restore the prediction error. The original coefficient is reconstructed by adding the DPCM-predicted mean to the restored prediction error. To increase the prediction accuracy, the decimated DCT coefficients in each subband are interpolated from the overlapped blocks.
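As a rough illustration of the per-coefficient restoration step (not the authors' implementation), the sketch below assumes a uniform quantizer with step size Δ, so the quantization noise variance is Δ²/12, and assumes the prediction-error variance is known; the function name and parameters are hypothetical.

```python
def wiener_restore(quantized, predicted_mean, signal_var, step):
    # Quantization noise modeled as additive white noise with variance step^2 / 12
    noise_var = step ** 2 / 12.0
    # Subtract the DPCM-predicted mean: residual = prediction error + noise
    residual = quantized - predicted_mean
    # Scalar Wiener gain for an uncorrelated residual
    gain = signal_var / (signal_var + noise_var)
    # Restore the prediction error, then add the predicted mean back
    return predicted_mean + gain * residual
```

With zero step size the gain is 1 and the coefficient passes through unchanged; as the noise variance grows, the restored coefficient shrinks toward the predicted mean.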
Shinji TANAKA Tetsuyasu YAMADA Satoshi SHIRAISHI
The sizes of recent Java-based server-side applications, like J2EE containers, have been increasing continuously. Past techniques for improving the performance of Java applications have targeted relatively small applications. Moreover, when the methods of these smaller applications are invoked, their code is not usually distributed over the entire memory space. As a result, these techniques cannot be applied efficiently to improve the performance of current large applications. We propose a dynamic code repositioning approach to improve the hit rates of instruction caches and translation look-aside buffers. Profiles of method invocations are collected while the application runs under its heaviest processor load, and the code is repositioned based on these profiles. We also discuss a method-splitting technique to significantly reduce the sizes of methods. Our evaluation of a prototype implementing these techniques indicated a 5% improvement in the throughput of the application.
Hong Kook KIM Mi Suk LEE Chul Hong KWON
A new excitation enhancement technique based on a harmonic model is proposed in this paper to improve the speech quality of low-bit-rate speech coders. The technique is employed only in the decoding process and enhances the high-frequency components of the excitation. We develop procedures for harmonic model parameter estimation and harmonic generation, and apply the technique to a current state-of-the-art low-bit-rate speech coder. Experiments on spectrum reading and spectral distortion measurement show that the proposed excitation enhancement technique improves speech quality.
In this paper we apply a parallel adaptive solution algorithm to simulate nanoscale double-gate metal-oxide-semiconductor field effect transistors (MOSFETs) on a personal computer (PC)-based Linux cluster with the message passing interface (MPI) libraries. Based on a posteriori error estimation, triangular mesh generation, the adaptive finite volume method, the monotone iterative method, and a parallel domain decomposition algorithm, a set of two-dimensional quantum correction hydrodynamic (HD) equations is solved numerically on our cluster system. This parallel adaptive simulation methodology with 1-irregular mesh was successfully developed and applied to deep-submicron semiconductor device simulation in our recent work. A 10 nm n-type double-gate MOSFET is simulated with the developed parallel adaptive simulator. In terms of physical quantities and the refined adaptive mesh, simulation results demonstrate very good accuracy and computational efficiency. Benchmark results for load balancing, speedup, and parallel efficiency exhibit excellent parallel performance. On a 16-node PC-based Linux cluster, the maximum load difference among CPUs is less than 6%. A 12.8-times speedup and 80% parallel efficiency are simultaneously attained across the different simulation cases.
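The reported figures are related by the standard definitions of speedup and parallel efficiency; the tiny helper below (hypothetical names, not from the paper) checks that a 12.8-times speedup on 16 nodes corresponds to 80% efficiency.

```python
def parallel_metrics(serial_time, parallel_time, n_procs):
    # speedup = T_serial / T_parallel; efficiency = speedup / number of processors
    speedup = serial_time / parallel_time
    return speedup, speedup / n_procs

# 12.8x speedup on 16 nodes -> 12.8 / 16 = 0.8, i.e., 80% parallel efficiency
speedup, efficiency = parallel_metrics(12.8, 1.0, 16)
```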
Yoshiya MIYAGAKI Mitsuru OHKURA Nobuo TAKAHASHI
A very general form of the probability density distribution of the fading envelope was presented by M. Nakagami, and it includes the Nakagami-Rice and Nakagami-Hoyt distributions as special cases. This paper gives a series expansion of this distribution in terms of the m-distribution, with every term positive. The feasibility of such an expansion was predicted previously, but no explicit description has appeared to date. The well-known properties of the m-distribution and the positive sign of each term make the series practical for numerical calculation, approximation, and analysis.
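For reference, the m-distribution used as the expansion basis is the standard Nakagami-m density of the envelope $r$, with $\Omega = E[R^2]$:

```latex
f_R(r) = \frac{2\,m^{m}}{\Gamma(m)\,\Omega^{m}}\; r^{2m-1}
         \exp\!\left(-\frac{m}{\Omega}\, r^{2}\right),
\qquad r \ge 0,\quad m \ge \tfrac{1}{2}.
```

Setting $m = 1$ recovers the Rayleigh distribution, and $m = \tfrac{1}{2}$ gives the one-sided Gaussian distribution; every density value is non-negative, which is what makes an all-positive-term expansion numerically convenient.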
Hiroyuki EHARA Kazutoshi YASUNAGA Koji YOSHIDA Yusuke HIWASAKI Kazunori MANO Takao KANEKO
This paper presents a newly developed noise post-processing (NPP) algorithm and the results of several tests demonstrating its subjective performance. The NPP algorithm is designed to improve the subjective performance of low-bit-rate code excited linear prediction (CELP) decoding under background noise conditions. It is based on a stationary noise generator and improves the subjective quality of noisy input signals. A backward adaptive detector identifies noisy input signal frames from decoded line spectral frequency (LSF), energy, and pitch parameters. The noise generator estimates and produces stationary noise signals using past LSF and energy parameters. The stationary noise generator has a frame erasure concealment (FEC) scheme designed for stationary noise signals and therefore improves the speech decoder's robustness against frame erasures under background noise conditions. The algorithm has been applied to the following CELP decoders: 1) a candidate algorithm for the ITU-T 4-kbit/s speech coding standard and 2) the existing ITU-T standards G.729 and G.723.1. In both cases, NPP improved the subjective performance of the baseline decoders. Improvements of approximately 0.25 CMOS (CCR MOS: comparison category rating mean opinion score) and around 0.2-0.8 DMOS (DCR MOS: degradation category rating mean opinion score) were demonstrated in our subjective tests for the 4-kbit/s decoder and the G.729/G.723.1 decoders, respectively. Further test results show that NPP improves the subjective performance of a G.729 decoder by around 0.45 DMOS under both error-free and frame-erasure conditions, and that a further improvement of around 0.2 DMOS is achieved by the FEC scheme in the noise generator.
Masashi YAMADA Rahmat BUDIARTO Mamoru ENDO Shinya MIYAZAKI
This paper presents a system for reading comics on cellular phones. Since it is difficult to display high-resolution images in a low-resolution cellular phone environment, comic images must be divided into frames and their contents, such as speech text, displayed at a comfortable reading size. We have developed a scheme for decomposing comic images into their constituent elements: frames, speech text, and drawings. We implemented an Internet-based system for a cellular phone company in our country that provides downloadable comic data and a reader program.
Byeong-Seob KO Ryouichi NISHIMURA Yoiti SUZUKI
A robust watermarking scheme based on the time-spread echo method is proposed in this letter. Embedding is achieved by subband decomposition of the host signal and by controlling the amount of distortion, i.e., the watermark power, in each subband according to the signal-to-mask ratio (SMR) calculated from the MPEG psychoacoustic model. The decoding performance and robustness of the proposed method were evaluated.
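To illustrate the core idea of time-spread echo embedding (a minimal sketch, without the paper's subband decomposition and SMR-based power control; names and toy values are assumptions), the single echo of classic echo hiding is spread over time by a pseudo-noise sequence that serves as the secret key:

```python
def embed_time_spread_echo(signal, pn, delay, alpha):
    # Time-spread echo: y[n] = x[n] + alpha * sum_k pn[k] * x[n - delay - k]
    out = list(signal)
    for n in range(len(signal)):
        acc = 0.0
        for k, p in enumerate(pn):
            idx = n - delay - k
            if idx >= 0:
                acc += p * signal[idx]
        out[n] += alpha * acc
    return out

# Bipolar pseudo-noise sequence acting as the spreading key (toy values)
pn = [1, -1, 1, 1, -1]
watermarked = embed_time_spread_echo([1.0] + [0.0] * 7, pn, 2, 0.1)
```

Detection typically correlates the cepstrum of the received signal with the same PN sequence, so the spread echo collapses back into a detectable peak at the echo delay.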
We consider a new post-filtering algorithm for residual acoustic echo cancellation in hands-free applications. The new post-filtering algorithm is composed of AR analysis, pitch prediction, and a noise reduction algorithm. The residual acoustic echo is whitened via AR analysis and pitch prediction during periods with no near-end talker and is then cancelled by the noise reduction algorithm. By removing the speech characteristics of the residual acoustic echo, the noise reduction algorithm reduces the power of the residual acoustic echo as well as the ambient noise. In a hands-free application in a moving car, the proposed system attenuated the interference by more than 15 dB at a constant speed of 80 km/h.
Environment measurement is an important issue for various applications, including household robots. Pulse radars are promising candidates for the near future. Estimating target shapes from waveform data obtained by scanning an omni-directional antenna is known to be an ill-posed inverse problem. Parametric methods such as model fitting have problems with computation time and stability. We propose a non-parametric algorithm for high-resolution estimation of target shapes that resolves these problems of parametric algorithms.
The paper presents a novel stroke decomposition approach based on a directional filtering technique for recognizing Chinese characters. The proposed technique uses a set of second-order Gaussian derivative (SOGD) filters to decompose a character into a number of stroke segments. Moreover, a new Gaussian function is proposed to overcome the general limitation of extracting stroke segments only along fixed, given orientations. The Gaussian function models the relationship between the orientation of a stroke segment and its power response in the filter output; an optimal orientation for each stroke segment can then be estimated by finding its maximal power response. Finally, the effects of the decomposition process are analyzed using simple structural and statistical features extracted from the stroke segments. Experimental results indicate that the proposed SOGD filtering approach is very effective at decomposing noisy and degraded character images into stroke segments along arbitrary orientations. Furthermore, applying the decomposition process improves recognition performance by about 17.31% on the test character set.
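One common form of an oriented SOGD filter differentiates an isotropic Gaussian twice along the direction u = x cos θ + y sin θ, giving the kernel ((u²/σ⁴) − 1/σ²)·exp(−(x²+y²)/2σ²). The sketch below generates such a kernel (this is a generic construction under that assumption; the paper's exact filter design may differ):

```python
import math

def sogd_kernel(size, sigma, theta):
    # Second-order Gaussian derivative (SOGD) kernel oriented at angle theta:
    # second derivative of the isotropic Gaussian along u = x*cos(t) + y*sin(t)
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            u = x * math.cos(theta) + y * math.sin(theta)
            g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append((u * u / sigma ** 4 - 1 / sigma ** 2) * g)
        kernel.append(row)
    return kernel

# A 7x7 kernel tuned to strokes oriented at 45 degrees
k = sogd_kernel(7, 1.5, math.pi / 4)
```

Convolving a character image with a bank of such kernels at several θ values gives, for each pixel, a power response whose maximum over θ estimates the local stroke orientation.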
A new speaker feature extracted by multi-wavelet decomposition for speaker recognition is described. The multi-wavelet decomposition is a multi-scale representation of the covariance matrix. We combine the wavelet transform with a multi-resolution singular value algorithm to decompose the eigenvectors for speaker feature extraction, rather than operating on the square matrix itself. Our results show that this multi-wavelet feature gives better performance than the cepstrum and Δ-cepstrum in terms of recognition rate.
Taoi HSU Wen-Liang HWANG Jiann-Ling KUO Der-Kuo TUNG
In this paper, a novel Wold decomposition algorithm is proposed to address the extraction of the deterministic component of texture images. The algorithm exploits wavelet-based singularity detection theory to process both harmonic and evanescent features in the frequency domain, based on the 2-D Lebesgue decomposition theory. When a multiresolution analysis technique is applied to the power spectral density (PSD) of a regular homogeneous random field, the indeterministic component is effectively smoothed while the deterministic component remains dominant at coarse scales. By propagating these positions to the finest scale, the deterministic component can be properly extracted. Experiments show that the proposed algorithm is robust and efficient.
Shigemasa TAKAI Toshimitsu USHIO
In this paper, we study supervisory control of a class of discrete event systems with simultaneous event occurrences, which we call concurrent discrete event systems. The behavior of the system is described by a language over the simultaneous event set. We introduce a notion of concurrent well-posedness of languages. We then prove that Lm(G)-closure, controllability, and concurrent well-posedness of a specification language are necessary and sufficient conditions for the existence of a nonblocking supervisor. We also address the computational complexity of verifying these existence conditions.
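For readers unfamiliar with the controllability condition, the classical Ramadge-Wonham definition can be checked directly for finite prefix-closed languages, as in the toy sketch below (the paper's concurrent setting with simultaneous events additionally requires concurrent well-posedness, which is omitted here; the event names are made up):

```python
def is_controllable(K, L, uncontrollable):
    # K is controllable w.r.t. plant language L and uncontrollable events:
    # for every s in K and every uncontrollable sigma,
    # if s+sigma is in L then s+sigma must also be in K
    for s in K:
        for sigma in uncontrollable:
            if s + sigma in L and s + sigma not in K:
                return False
    return True

# Plant with controllable events a, b and uncontrollable event u
L = {"", "a", "b", "au"}
K_good = {"", "a", "au"}   # disables only the controllable event b
K_bad = {"", "a"}          # tries to disable u after "a": not enforceable
```

K_bad fails because after the plant executes "a", the uncontrollable event "u" may occur and the supervisor cannot prevent it, so "au" must be included in any controllable specification containing "a".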
The comparison of intelligent and random testing for data input is still under discussion. Little is also known about testing whole software systems, or about empirical testing methodology, when random testing is used. This study examines not only data-input testing but also the operation of software (called transitions) in order to test whole GUI applications by intelligent and random testing. Our methodology is to compare the efficiency of random and intelligent testing by means of the Chinese postman problem. In general, random testing is considered straightforward but inefficient, whereas Chinese-postman-problem testing is complicated but efficient. The comparison between random and intelligent testing yields further recommendations for software testing methodology.
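Chinese-postman-style GUI testing seeks a shortest tour that exercises every transition at least once. When the transition graph already has balanced in- and out-degrees, the optimal postman tour is simply an Eulerian circuit, which Hierholzer's algorithm finds in linear time; the general Chinese postman problem first duplicates a minimum-cost set of edges to achieve this balance. A sketch on a hypothetical three-screen GUI model:

```python
def eulerian_circuit(adj, start):
    # Hierholzer's algorithm: traverse every directed edge exactly once.
    # adj maps node -> list of successor nodes; edges are consumed as used.
    graph = {v: list(ws) for v, ws in adj.items()}
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if graph.get(v):
            stack.append(graph[v].pop())   # follow an unused edge
        else:
            circuit.append(stack.pop())    # dead end: emit node
    return circuit[::-1]

# Toy GUI model: screens as nodes, operations (transitions) as directed edges
gui = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
tour = eulerian_circuit(gui, "A")
```

The resulting tour starts and ends at the initial screen and covers all four transitions exactly once, which is the coverage criterion this style of intelligent testing optimizes.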
We propose Optimal Temporal Decomposition (OTD) of speech for voice morphing that preserves the Δ cepstrum. OTD is an optimal modification of the original Temporal Decomposition (TD) by B. Atal. It is shown theoretically that OTD achieves minimal spectral distortion for the TD-based approximation of time-varying LPC parameters. Moreover, by applying OTD to Δ-cepstrum preservation, it is also shown theoretically that the Δ cepstrum of a target speaker can be reflected in that of a source speaker. For frequency-domain interpolation, the Laplacian Spectral Distortion (LSD) measure is introduced to improve the Inverse Function of Integrated Spectrum (IFIS) based non-uniform frequency warping. Experimental results indicate that the Δ cepstrum of the OTD-based morphing spectra of a source speaker is nearly equal to that of a target speaker except for a piecewise constant factor, and subjective listening tests show that the speech intelligibility of the proposed morphing method is superior to that of the conventional method.
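The Δ cepstrum is conventionally the regression (delta) coefficient of each cepstral coefficient's trajectory over frames, Δc_t = Σ_k k(c_{t+k} − c_{t−k}) / (2 Σ_k k²). A minimal sketch under that standard definition, for a single coefficient's trajectory with edge frames clamped (not the paper's code):

```python
def delta(frames, K=2):
    # Regression-based delta coefficients over a per-frame value sequence:
    # delta[t] = sum_k k*(c[t+k] - c[t-k]) / (2 * sum_k k^2), edges clamped
    T = len(frames)
    denom = 2 * sum(k * k for k in range(1, K + 1))
    out = []
    for t in range(T):
        num = 0.0
        for k in range(1, K + 1):
            num += k * (frames[min(t + k, T - 1)] - frames[max(t - k, 0)])
        out.append(num / denom)
    return out
```

A linearly increasing trajectory yields a delta of 1.0 per frame in the interior, and a constant trajectory yields zeros, matching the interpretation of Δ cepstrum as the local slope of the spectral trajectory.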
Naihua YUAN Anh DINH Ha H. NGUYEN
A time-domain equalization (TEQ) algorithm is presented to shorten the effective channel impulse response and thereby increase the transmission efficiency of the 54 Mbps IEEE 802.11a orthogonal frequency division multiplexing (OFDM) system. In solving the linear equation Aw = B for the optimum TEQ coefficients, A is shown to be Hermitian and positive definite. The LDLT and LU decompositions are used to factorize A and reduce the computational complexity. Simulation results show high performance gains at a data rate of 54 Mbps with moderate orders of the TEQ finite impulse response (FIR) filter. The design and implementation of the algorithm on a field programmable gate array (FPGA) are also presented. The regularities among the elements of A are exploited to reduce hardware complexity. The LDLT and LU decompositions are combined in the hardware design to find the TEQ coefficients in less than 4 µs. To compensate for the effective channel impulse response, a radix-4 pipelined fast Fourier transform (FFT) is implemented to perform zero-forcing equalization. Hardware implementation details are provided, and simulation results are compared with mathematical values to verify the functionality of the chips running at 54 Mbps.
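As a small illustration of the factorization step, the LDL^T recurrence for a symmetric positive-definite matrix is sketched below on a real 2x2 example (the paper's A is complex Hermitian, for which the same recurrence applies with complex conjugates; this is a generic textbook construction, not the paper's FPGA design):

```python
def ldlt(A):
    # LDL^T factorization of a symmetric positive-definite matrix:
    # A = L * D * L^T with unit lower-triangular L and diagonal D
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    return L, D

A = [[4.0, 2.0], [2.0, 3.0]]
L, D = ldlt(A)
```

Because no square roots are needed (unlike Cholesky), LDL^T is attractive for fixed-point hardware; solving Aw = B then reduces to two triangular solves and a diagonal scaling.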
Takeshi MASUYAMA Hiroshi NAKAGAWA
Although many researchers have verified the superiority of the Support Vector Machine (SVM) on text categorization tasks, some recent papers have reported much lower performance of SVM-based text categorization methods when all parts of speech (POS) are used as input words and large numbers of training documents are handled. This is caused by overfitting: SVM sometimes selects unsuitable support vectors for each category in the training set. To avoid the overfitting problem, we propose a two-step text categorization method with variable cascaded feature selection (VCFS) using SVM. VCFS selects a pair consisting of the best number of words and the best POS combination for each category at each step of the cascade, making use of the differences among the words with the highest mutual information for each category on each POS combination. Through experiments, we confirmed the validity of VCFS compared with other SVM-based text categorization methods: the macro-averaged F1 measure (64.8%) of VCFS was significantly better than any reported F1 measure, though its micro-averaged F1 measure (85.4%) was similar to them.
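One standard way to score a word against a category, as used in feature selection of this kind, computes the mutual information between word presence and category membership from a 2x2 contingency table of document counts (a generic sketch; the paper's exact MI variant may differ):

```python
import math

def mutual_information(n11, n10, n01, n00):
    # MI between word presence (rows) and category membership (columns):
    # n11 = docs in the category containing the word, n10 = docs outside the
    # category containing the word, n01/n00 = the same for docs without the word
    N = n11 + n10 + n01 + n00
    mi = 0.0
    for nxy, nx, ny in [
        (n11, n11 + n10, n11 + n01),
        (n10, n11 + n10, n10 + n00),
        (n01, n01 + n00, n11 + n01),
        (n00, n01 + n00, n10 + n00),
    ]:
        if nxy > 0:
            mi += (nxy / N) * math.log((nxy * N) / (nx * ny))
    return mi
```

A word distributed independently of the category scores zero, while a word that perfectly indicates the category scores log 2; ranking words by this score per category is what drives the per-category word selection.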
Takeshi KOIDE Shuichi SHINMORI Hiroaki ISHII
Marginal reliability importance (MRI) of a component in a system is defined as the rate at which the system reliability changes with respect to changes in the component reliability. MRI helps network designers construct a reliable network layout. We consider the problem of computing the MRI of all components in a network system under all-terminal reliability, in order to rank the components by MRI. The problem is time-consuming since computing network reliability is #P-complete. This paper improves on the traditional approach and proposes an efficient algorithm. The algorithm applies several network transformations: three network reductions and one network decomposition. We prove lemmas on the relationship between these transformations and MRI, which allow the MRI of the original network to be computed from the MRI and reliability of the transformed networks. Additionally, we derive a reformulated expression for computing MRI that further reduces the computational cost. Numerical experiments revealed that the proposed algorithm reduces computation time considerably compared to the traditional approach.
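For a component with reliability p_i, MRI coincides with the Birnbaum importance dR/dp_i = R(p_i = 1) − R(p_i = 0). The brute-force sketch below computes this for a small network by enumerating all edge states (exponential in the number of edges, which is exactly the cost the paper's reductions and decomposition avoid; the function names are made up for illustration):

```python
from itertools import product

def all_terminal_reliability(nodes, edges, probs):
    # Sum the probability of every edge-state in which the surviving
    # edges connect all nodes (all-terminal reliability)
    total = 0.0
    for state in product([0, 1], repeat=len(edges)):
        p = 1.0
        for s, q in zip(state, probs):
            p *= q if s else (1 - q)
        up = [e for e, s in zip(edges, state) if s]
        # connectivity check by graph search from the first node
        seen, frontier = {nodes[0]}, [nodes[0]]
        while frontier:
            v = frontier.pop()
            for a, b in up:
                w = b if a == v else a if b == v else None
                if w is not None and w not in seen:
                    seen.add(w)
                    frontier.append(w)
        if len(seen) == len(nodes):
            total += p
    return total

def mri(nodes, edges, probs, i):
    # Birnbaum / marginal reliability importance: dR/dp_i = R(p_i=1) - R(p_i=0)
    hi = probs[:i] + [1.0] + probs[i + 1:]
    lo = probs[:i] + [0.0] + probs[i + 1:]
    return (all_terminal_reliability(nodes, edges, hi)
            - all_terminal_reliability(nodes, edges, lo))

# Triangle network, each edge reliable with probability 0.9
nodes, edges, probs = [0, 1, 2], [(0, 1), (1, 2), (0, 2)], [0.9, 0.9, 0.9]
```

For the triangle, R = p³ + 3p²(1 − p) = 0.972 at p = 0.9, and each edge has MRI 0.99 − 0.81 = 0.18; ranking components by this quantity is the goal of the paper's algorithm.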