Jiahai WANG Zheng TANG Qiping CAO
In this paper, we propose a new algorithm that introduces stochastic hill-climbing dynamics into the optimal competitive Hopfield network model (OCHOM). By permitting temporary increases in energy, the hill-climbing dynamics help the OCHOM escape from local minima. In graph theory, a clique is a completely connected subgraph, and the maximum clique problem (MCP) is to find a clique of maximum size in a graph. The MCP is a classic optimization problem in computer science and graph theory with many real-world applications, and it is also known to be NP-complete. Recently, Galan-Marin et al. proposed the OCHOM for the MCP. It guarantees convergence to a global or local minimum of the energy function and performs better than other competitive neural approaches; however, it has no mechanism for escaping from local minima. We apply the proposed algorithm to the MCP and simulate a number of instances to verify its effectiveness.
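The escape mechanism described above can be illustrated with a generic stochastic local search for the MCP: grow a clique greedily, and when no vertex can be added (a local optimum), randomly drop a member so the search can continue. This is only a sketch of hill-climbing on cliques; the paper's actual OCHOM dynamics operate on a Hopfield energy function and are not reproduced here. The function name and example graph are hypothetical.

```python
import random

def stochastic_clique_search(adj, n_iters=2000, seed=0):
    """Stochastic hill-climbing search for a large clique.

    adj maps each node to its set of neighbours (undirected graph).
    The search grows a clique greedily; at a local optimum it randomly
    drops a member (a temporary "energy increase") so the search can
    continue. Illustrative only -- not the paper's OCHOM dynamics.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    clique, best = set(), set()
    for _ in range(n_iters):
        # candidates adjacent to every current clique member
        cands = [v for v in nodes if v not in clique and clique <= adj[v]]
        if cands:
            clique.add(rng.choice(cands))                  # greedy growth
        elif clique:
            clique.remove(rng.choice(sorted(clique)))      # escape move
        if len(clique) > len(best):
            best = set(clique)
    return best

# A 5-node graph whose maximum clique is {0, 1, 2}.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
best = stochastic_clique_search(adj)
```

Without the random-drop move the greedy growth would halt at the first maximal clique it happens to build, which is exactly the local-minimum trap the hill-climbing dynamics address.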
Beom-Joon CHO Bong-Kee SIN Jin H. KIM
Traditional HMM methods, although highly successful in 1-D time series analysis, have not yet been extended to 2-D image analysis in a way that fully exploits the hierarchical design and extension of HMM networks for complex structured signals. Departing from traditional off-line training with the Baum-Welch algorithm, we propose a new method for creating word or composite-character HMMs for 2-D word/character patterns in real time. Unlike Latin words, in which letters run left to right, the composition of word/character components in Korean Hangul and Chinese characters need not be linear. The key idea lies in character composition at the image level and an image-to-model conversion followed by redundancy reduction. Although the resulting model is not optimal, the proposed method has significant advantages in terms of memory usage and training difficulty. In a series of character/word spotting experiments on document images, the system recorded hit ratios of 80% and 67% in Hangul character and word spotting, respectively, without language models.
Both the Hopfield network and the genetic algorithm are powerful tools for object recognition tasks such as subgraph matching problems. Unfortunately, both have serious drawbacks. The Hopfield network is very sensitive to its initial state and stops at a local minimum if the initial state is not properly chosen. The genetic algorithm, on the other hand, usually finds only a near-global solution and is time-consuming for large-scale problems. In this paper, we propose a scheme that integrates these two methods, eliminating their drawbacks while keeping their advantages, to solve object recognition problems under affine transformations. Several arrangements and programming strategies are required. First, we use specialized 2-D genetic algorithm operators to accelerate convergence. Second, we extract the "seeds" of the genetic algorithm's solution to serve as the initial state of the Hopfield network, which further improves the efficiency of the system. In addition, we include several pertinent post-matching algorithms to refine accuracy and robustness. In the examples, the proposed scheme is used to solve subgraph matching problems with occlusions under affine transformations. As the results show, this integrated scheme outperforms many counterpart algorithms in accuracy, efficiency, and stability.
Visual defects, called mura in the field, sometimes occur during the manufacture of flat-panel liquid crystal displays. In this paper we propose an automatic inspection method that reliably detects and quantifies TFT-LCD region-mura defects. The method consists of two phases. In the first phase, we segment candidate region-muras from TFT-LCD panel images using modified regression diagnostics and Niblack's thresholding. In the second phase, based on the human eye's sensitivity to mura, we quantify the mura level of each candidate, which is used to identify real muras by grading them as pass or fail. The performance of the proposed method is evaluated on real TFT-LCD panel samples.
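Niblack's thresholding, the segmentation building block named above, computes a threshold per pixel from local statistics: T = local mean + k · local standard deviation. The sketch below shows that step in isolation; the window size and k are illustrative defaults rather than the paper's tuned values, and the regression-diagnostics step is omitted.

```python
import numpy as np

def niblack_threshold(img, window=15, k=-0.2):
    """Niblack's local threshold: T = local_mean + k * local_std.

    With k < 0, pixels darker than their local threshold are flagged as
    candidate defect pixels. A generic sketch of the thresholding step,
    not the paper's full segmentation pipeline.
    """
    img = np.asarray(img, dtype=float)
    pad = window // 2
    padded = np.pad(img, pad, mode='reflect')
    mask = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            t = patch.mean() + k * patch.std()
            mask[i, j] = img[i, j] < t
    return mask
```

Because the threshold adapts to each neighbourhood, a faint dark region stands out against its local background even when a single global threshold would miss it, which is why local thresholding suits low-contrast mura candidates.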
Yoshinori MORITA Tetsuo FUNADA Hideyuki NOMURA
The bandwidth occupied by individual devices in mobile radio communication must be narrow in order to exploit the limited frequency band effectively. It is therefore necessary to implement low-bit-rate speech coding that is robust against background noise. We examine vector quantization using a neural network (NNVQ) as a robust LSP encoder. In this paper, we compare four types of binary patterns in a hidden layer and clarify how quantization distortion depends on the bit pattern. By delayed decision (selection of low-distortion codes in decoding, i.e., the EbD method), the spectral distortion (SD) can be decreased by 0.8 dB (20%). For noisy speech, the performance of the EbD method is better than that of the conventional VQ codebook mapping method. In addition, the SD can be decreased by 2.3 dB (40%) by combining and re-training the encoding and decoding neural networks. Finally, we examine the SD for speech with signal-to-noise ratios (SNRs) different from that used in training. The experimental results show that training at SNRs between 30 and 40 dB is appropriate.
We present a built-in self-reconfiguring system for a mesh-connected processor array in which faulty processing elements are compensated for by spare processing elements located in one row and one column. Its advantages are that the number of spare processing elements is small and that the additional control circuits and networks for changing the interconnections of processing elements are so simple that the hardware overhead for reconfiguration is also small. First, to motivate the proposed reconfiguration scheme, we briefly describe other schemes with the same number of spares, in which faulty processing elements are replaced using straight shifts toward spares, and compare their reconfiguration probabilities. We then show that a variant of the proposed scheme has the highest probability. Next, we present a built-in self-reconfiguring system for the scheme and formally prove that it works correctly. The system automatically replaces faulty processing elements with spares upon detecting faults.
Jaeyong SHIM Minkyu LEE Dongsoo HAN
A workflow definition containing errors can cause serious problems for an enterprise, especially when it involves mission-critical business processes or inter-organizational interaction. Workflows should therefore be defined in a strict and rigorous way. In this paper, we develop a workflow definition language, together with analysis methods for the language, to support strict and rigorous workflow definitions. Faults or mistakes causing communication deadlock, access conflicts, and improper exception specifications in workflow definitions can be detected and reported automatically using these methods. The proposed workflow definition language borrows the structured constructs of conventional programming languages, because many good features of such languages can also be used effectively in expressing workflow processes. With slight modifications and scope restrictions, the analysis techniques developed in this paper can be applied to any workflow definition language, helping workflow designers define workflow processes in a much safer and more reliable manner.
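One of the checks mentioned above, communication deadlock detection, can be pictured as cycle detection in a wait-for graph: a deadlock exists when activities wait on each other in a cycle. The sketch below is a generic static check of that kind; the paper's actual analysis operates on its own workflow definition language, and the function and graph here are hypothetical.

```python
def has_deadlock(waits_for):
    """Detect a communication deadlock as a cycle in a wait-for graph.

    waits_for maps each activity to the activities it waits on.
    Standard three-colour depth-first search: a back edge to a GREY
    (in-progress) node reveals a cycle, i.e., a potential deadlock.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in waits_for}

    def visit(v):
        colour[v] = GREY
        for w in waits_for.get(v, ()):
            c = colour.get(w, WHITE)
            if c == GREY:
                return True                   # back edge -> cycle
            if c == WHITE and visit(w):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and visit(v) for v in waits_for)
```

For example, a workflow in which activity A waits for a message from B while B waits for a message from A forms the two-node cycle this check reports.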
Gen-ichiro OHTA Mitsuru UESUGI Takuro SATO Hideyoshi TOMINAGA
This paper proposes a new SSB-QPSK modulation/demodulation method. The method multiplexes the USB (Upper Side Band) and LSB (Lower Side Band) of a QPSK-modulated SSB (Single Side Band) signal on the same SSB complex frequency band, and thus achieves 2 bit/s/Hz. Because the multiplexed signals are orthogonal to each other, it is an orthogonal SSB-QPSK method. The demodulator consists of two SSB demodulators. Simulation results under AWGN conditions show that the proposed method has better BER (Bit Error Rate) performance than 16 QAM. The degradation in BER compared with QPSK is less than 0.2 dB in Eb/No (bit-energy-to-noise-power ratio). In a fading/Doppler environment, the BER performance of the orthogonal SSB-QPSK is the same as that of QPSK.
Kwee-Bo SIM Ji-Yoon KIM Dong-Wook LEE
When Multiobjective Optimization Problems (MOPs) are solved with evolutionary algorithms, the Pareto Genetic Algorithm (Pareto GA) introduced by Goldberg in 1989 has become a de facto standard. Since its introduction, this approach has been further developed and has led to many applications. All of these approaches are based on Pareto ranking and use a fitness sharing function to maintain diversity. In the early 1950s, another scheme was presented by Nash. It introduced the notion of Nash Equilibrium, originating from game theory and economics, and aimed at solving optimization problems with multiple objective functions. Since the concept of Nash Equilibrium was introduced as a solution to such problems, game theorists have attempted to formalize aspects of the equilibrium solution. The Nash Genetic Algorithm (Nash GA), introduced by Sefrioui, brings together genetic algorithms and the Nash strategy; its aim is to find the Nash Equilibrium of MOPs through the genetic process. Another central achievement of evolutionary game theory is a method by which agents can play optimal strategies in the absence of rationality: through the process of Darwinian selection rather than rationality, a population of agents can evolve to an Evolutionarily Stable Strategy (ESS), introduced by Maynard Smith in 1982. In this paper, we propose a Game-theory-based Co-Evolutionary Algorithm (GCEA) and try to find the ESS as a solution of MOPs. By applying the newly designed co-evolutionary algorithm to several MOPs, we first confirm that an evolutionary game can be embodied by a co-evolutionary algorithm and that this algorithm can find ESSs as solutions of MOPs. Second, we evaluate the optimization performance of the GCEA by applying it to several test MOPs and comparing its solutions with those of previously introduced evolutionary optimization algorithms.
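The ESS idea invoked above, that a population converges to a stable strategy mix through selection alone, is classically demonstrated with replicator dynamics on the Hawk-Dove game. The sketch below is a textbook illustration of that convergence, not the GCEA itself; the payoff values (V = 2, C = 4, shifted by +2 so all fitnesses stay positive for the discrete update) are assumptions chosen for the demo.

```python
def replicator(payoff, p, steps=500):
    """Discrete replicator dynamics for a 2-strategy symmetric game.

    payoff[i][j] is the payoff to strategy i against strategy j (must be
    positive for this update). p is the initial share of strategy 0; the
    final share is returned. The update multiplies each strategy's share
    by its fitness relative to the population average -- selection
    without any rationality assumption.
    """
    for _ in range(steps):
        f0 = p * payoff[0][0] + (1 - p) * payoff[0][1]   # fitness of strategy 0
        f1 = p * payoff[1][0] + (1 - p) * payoff[1][1]   # fitness of strategy 1
        avg = p * f0 + (1 - p) * f1
        p = p * f0 / avg
    return p

# Hawk-Dove with V=2, C=4, all payoffs shifted by +2 to stay positive;
# the shift does not move the equilibrium, and the ESS is V/C = 0.5 hawks.
hawk_dove = [[1, 4], [2, 3]]
```

Starting from any interior hawk share, the population share of hawks converges to the ESS value 0.5, mirroring how the paper's co-evolutionary populations are meant to settle on ESSs of MOPs.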
Hideki YAGI Manabu KOBAYASHI Toshiyasu MATSUSHIMA Shigeichi HIRASAWA
Reliability-based maximum likelihood decoding (MLD) algorithms for linear block codes have been widely studied. These algorithms efficiently search for the most likely codeword using the generator matrix whose k (the dimension of the code) most reliable and linearly independent columns form the identity matrix. In this paper, conditions for omitting unnecessary metric computations for candidate codewords are derived for reliability-based MLD algorithms. The proposed conditions utilize an order relation on binary vectors. A simple method for testing whether the conditions are satisfied is devised; it requires no real-number operations, and consequently the MLD algorithm employing it performs fewer real-number operations than known reliability-based MLD algorithms.
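The flavour of such an order-based pruning rule can be sketched as follows: with nonnegative per-bit reliabilities, a candidate error pattern that covers another componentwise can only have a larger soft-decision cost, so its metric never needs to be computed once the smaller pattern is known, and the covering test itself uses only bit comparisons, no real-number arithmetic. This is a generic illustration of the idea; the paper's exact conditions and the names below are assumptions.

```python
def covers(u, v):
    """Componentwise order on binary vectors: every 1-position of v is
    also a 1-position of u (v <= u bitwise). Uses only comparisons."""
    return all(a >= b for a, b in zip(u, v))

def discrepancy(pattern, reliabilities):
    """Soft-decision cost of flipping the 1-positions of `pattern`,
    given nonnegative per-bit reliabilities. Monotone under `covers`:
    if covers(u, v) then discrepancy(u) >= discrepancy(v)."""
    return sum(r for p, r in zip(pattern, reliabilities) if p)
```

Thus a decoder that keeps the best metric found so far can discard, by the comparison-only test alone, any candidate whose error pattern covers an already-evaluated one.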
Parallel concatenated convolutional codes, i.e., turbo codes, are a very attractive scheme from the viewpoint of error probability performance. The bit error rate (BER) of turbo codes is evaluated by a uniform-interleaver bound calculation and/or by computer simulation. The former is calculated under the assumption of a uniform interleaver and is only effective for BER evaluation with a pseudo-random interleaver. The latter has no interleaver restrictions; however, for very low BER evaluation it takes enormous simulation time. In this paper, a new error probability evaluation method for turbo codes is proposed. It is based on the error event simulation method. Importance sampling (IS), one of the fast simulation methods, is applied to the evaluation of each predetermined error sequence. Numerical examples are shown to prove the effectiveness of the proposed method, which approximates the BER well in the error-floor region. Under the same accuracy, the IS estimation time at BER = 10^-7 is reduced to 1/6358 of the ordinary Monte Carlo simulation time.
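The speed-up importance sampling provides for rare events can be seen in a minimal textbook example: estimating a Gaussian tail probability by sampling from a density shifted into the rare region and reweighting by the likelihood ratio. This is a generic IS illustration under assumed parameters, not the paper's turbo-code error-event simulator.

```python
import math
import random

def is_tail_prob(threshold, n=20000, shift=None, seed=1):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1).

    Samples from the biased density N(shift, 1), so threshold crossings
    are common, and reweights each hit by the likelihood ratio
    phi(x) / phi(x - shift) = exp(-shift*x + shift^2/2).
    """
    if shift is None:
        shift = threshold      # centre the biased density on the rare region
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n
```

With plain Monte Carlo, estimating a probability near 3e-5 to a few percent would need hundreds of millions of samples; the tilted estimator above reaches comparable accuracy with 20,000, which is the kind of gain the abstract reports for turbo-code error floors.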
Ryoji IKEGAYA Kenta KASAI Tomoharu SHIBUYA Kohichi SAKANIWA
In this paper, we provide explicit representations of the average weight and stopping set distributions, and asymptotic expressions for their exponents, for detailedly represented irregular LDPC code ensembles. We further present numerical examples that compare a detailedly represented irregular LDPC code ensemble with a conventional one with respect to both weight and stopping set distributions.
In this study, a fourth-order-cumulant-based iterative algorithm for blind channel equalization is introduced; it is robust to heavy Gaussian noise in the channel and does not require the channel to be minimum phase. The transmitted signals are over-sampled at the receiver to ensure that the channel is described by a full column rank matrix. This changes a single-input/single-output (SISO) finite-impulse-response (FIR) channel into a single-input/multi-output (SIMO) channel. Based on the properties of the fourth-order cumulants of the over-sampled channel inputs, the iterative algorithm estimates the deconvolution matrix that makes the overall transfer matrix transparent, i.e., reducible to the identity matrix by simple reordering and scaling. In simulation studies, both a closed-form and a stochastic version of the proposed algorithm are tested on three-ray multi-path channels, and their performance is compared with methods based on conventional second-order statistics as well as higher-order statistics (HOS). Relatively good results with fast convergence are achieved, even when the transmitted symbols are significantly corrupted by Gaussian noise.
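The robustness to Gaussian noise claimed above rests on a basic property: the fourth-order cumulant of Gaussian data vanishes, while it is nonzero for typical communication symbols. The sketch below checks that property empirically for a zero-mean real signal; it illustrates the statistic the equalizer is built on, not the paper's algorithm itself.

```python
import random

def fourth_cumulant(xs):
    """Fourth-order cumulant of a zero-mean real signal:
    cum4 = E[x^4] - 3 * (E[x^2])^2.
    Zero for Gaussian data, which is why fourth-order statistics are
    insensitive to additive Gaussian noise; -2 for unit BPSK symbols.
    """
    n = len(xs)
    m2 = sum(x * x for x in xs) / n
    m4 = sum(x ** 4 for x in xs) / n
    return m4 - 3.0 * m2 * m2

rng = random.Random(0)
bpsk = [rng.choice((-1.0, 1.0)) for _ in range(50000)]   # sub-Gaussian symbols
noise = [rng.gauss(0.0, 1.0) for _ in range(50000)]      # Gaussian samples
```

Because the cumulant of the Gaussian component is (asymptotically) zero, cumulant-based estimates of the channel are driven by the symbol statistics rather than the noise, unlike second-order methods.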
A novel signal enhancement scheme using the rotation of signal subspace (RSS) and Toeplitz matrix approximation (TMA) methods is proposed to enhance the performance of an adaptive antenna array in multirate DS/CDMA systems. The basis of RSS is to find a transformation matrix that recovers the desired complex array covariance matrix from a sampled complex array covariance matrix contaminated by an interference-plus-noise component (the total noise). The objective of TMA is then to transform the output matrix of RSS into a matrix with the theoretical properties of a total-noise-free signal. Consequently, the proposed scheme can greatly improve the performance of an adaptive antenna array by reducing the undesired total-noise effect from the sampled complex array covariance matrix of the pre-correlation received signal vector, which is used to calculate the weight vector of the adaptive antenna array. Various simulation results show that the system performance with the proposed signal enhancement scheme is much superior to that of the conventional method.
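A standard way to impose a Toeplitz structure on an estimated covariance matrix, in the spirit of the TMA step above, is to average along each diagonal, which is the Frobenius-norm projection onto the subspace of Toeplitz matrices. The real-valued sketch below shows that operation; the paper applies its TMA to complex array covariance matrices, so this is only an illustration of the structural idea.

```python
import numpy as np

def toeplitz_approx(m):
    """Nearest Toeplitz matrix in the Frobenius norm: replace every
    diagonal of a square matrix by its average. Toeplitz matrices form
    a linear subspace, so diagonal averaging is the orthogonal
    projection onto it."""
    m = np.asarray(m, dtype=float)
    n = m.shape[0]
    out = np.empty_like(m)
    for k in range(-n + 1, n):
        avg = np.diagonal(m, k).mean()
        idx = np.arange(n - abs(k))
        if k >= 0:
            out[idx, idx + k] = avg       # super-diagonals
        else:
            out[idx - k, idx] = avg       # sub-diagonals
    return out
```

A theoretical stationary covariance is exactly Toeplitz; forcing the sample estimate back onto that structure suppresses the part of the estimation noise that violates it.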
An antenna with a wide bandwidth is required for future ultra-wideband (UWB) systems. Several types of wideband antenna that cover the whole frequency range have been proposed. Since the UWB system would cover 3.1 to 10.6 GHz, it is necessary to suppress interference from other systems that use parts of this frequency band. This paper presents two types of novel planar monopole antenna: one consists of two connected rectangular plates, and the other is an orthogonal type. The return loss characteristics, radiation patterns, and current distributions of these antennas were simulated using the FDTD method. The proposed antennas exhibited dual-frequency and broad-bandwidth characteristics at both resonant frequencies, while the return loss level at the eliminated frequency between the resonant frequencies was almost 0 dB. The current distribution at each resonant frequency was similar to that of a planar rectangular monopole, and the radiation patterns over the whole frequency range were almost omni-directional in the horizontal plane. These results show that wide bandwidth characteristics can be achieved with such antennas.
A deniable authentication protocol enables a receiver to identify the source of a received message while preventing the receiver from proving that source to a third party. Recently, Fan et al. proposed a deniable authentication protocol based on the Diffie-Hellman algorithm. In this paper, we show that Fan et al.'s protocol does not possess the deniability property as claimed: a cheating receiver can prove the source of the received message to a third party. We also present a modification of Fan et al.'s protocol that overcomes this security flaw.
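The intuition behind Diffie-Hellman-based deniability can be sketched with a generic construction (not Fan et al.'s protocol): authenticate messages with a symmetric MAC keyed by the DH shared secret. Because the receiver holds the very same key, any tag the receiver exhibits could have been forged by the receiver itself, so it convinces no third party. The parameters below are deliberately toy-sized and insecure; a real protocol would use a standardised large prime-order group.

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman parameters (a Mersenne prime); for illustration
# only -- insecure for real use.
P = 2 ** 127 - 1
G = 3

def dh_keypair():
    """Generate a private exponent and the matching public value G^x mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_mac_key(own_priv, other_pub):
    """Both parties derive the same MAC key from the DH shared secret."""
    shared = pow(other_pub, own_priv, P)
    return hashlib.sha256(shared.to_bytes(16, 'big')).digest()

def tag(key, message):
    """Symmetric authentication tag: either key holder can produce it."""
    return hmac.new(key, message, hashlib.sha256).digest()
```

The attack the paper describes amounts to breaking this symmetry: if the receiver can obtain evidence that only the sender could have produced, the tag becomes third-party-verifiable and deniability is lost.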
Nail AKAR İbrahim HOKELEK Ezhan KARASAN
In this paper, we propose a novel traffic engineering architecture for IP networks with MPLS backbones. In this architecture, two link-disjoint label switched paths, the primary and secondary paths, are established between every pair of IP routers located at the edges of an MPLS backbone network. As the main building block of this architecture, we propose that primary paths be given higher priority than secondary paths in the MPLS data plane to cope with the so-called knock-on effect. Inspired by the ABR flow control mechanism in ATM networks, we propose splitting the traffic of a source-destination pair between the primary and secondary paths using explicit rate feedback from the network. Taking into account the performance-degrading impact of packet reordering in packet-based load balancing schemes, we propose a traffic splitting mechanism that operates on a per-flow basis (i.e., flow-based multipath routing). We show via an extensive simulation study that flow-based multipath traffic engineering with explicit rate feedback not only provides consistently better throughput than a single path but also avoids out-of-order packet delivery.
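Why flow-based splitting avoids reordering can be shown with a minimal hash-based path selector: each flow is hashed once, so all of its packets deterministically take the same LSP, while the split ratio across many flows still tracks a target share. This is a generic sketch; in the paper's architecture the primary share would be driven by the explicit-rate feedback, and the flow-identifier format below is hypothetical.

```python
import hashlib

def pick_path(flow_id, primary_share):
    """Flow-based traffic splitting between two LSPs.

    Hashes the flow identifier (e.g. the 5-tuple as a string) to a
    uniform value in [0, 1) and maps it to the primary or secondary
    path. Deterministic per flow, so packets of one flow never split
    across paths and cannot be reordered by the load balancer.
    """
    h = int.from_bytes(hashlib.sha256(flow_id.encode()).digest()[:8], 'big')
    u = h / 2 ** 64
    return 'primary' if u < primary_share else 'secondary'
```

A packet-based splitter hitting the same target ratio would spray consecutive packets of one flow over both paths, which is exactly the reordering cost the per-flow design sidesteps.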
Estimating the parameters of a statistical distribution from measured sample values forms an essential part of many signal processing tasks. The K-distribution has been shown to be an appropriate model for characterising the amplitude of sea clutter. In this paper, a new method for estimating the parameters of the K-distribution is proposed. Compared with existing non-maximum-likelihood methods, it greatly lowers both the computational requirements and the variance of the parameter estimates.
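For context, a standard non-ML baseline of the kind the paper improves on is the method of moments: for K-distributed amplitudes the moment ratio satisfies E[z^4]/E[z^2]^2 = 2(nu + 1)/nu, giving nu = 2 / (m4/m2^2 - 2). The sketch below implements that classical estimator together with a compound-model sampler; it illustrates the baseline, not the paper's new method.

```python
import math
import random

def estimate_k_shape(amplitudes):
    """Method-of-moments estimate of the K-distribution shape nu,
    from the ratio m4 / m2^2 = 2 (nu + 1) / nu of sample moments."""
    n = len(amplitudes)
    m2 = sum(z * z for z in amplitudes) / n
    m4 = sum(z ** 4 for z in amplitudes) / n
    return 2.0 / (m4 / (m2 * m2) - 2.0)

def sample_k(nu, n, seed=0):
    """K-distributed amplitudes via the compound model: a mean-1
    gamma-distributed texture modulating a unit-mean exponential
    speckle intensity; the amplitude is the square root."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        texture = rng.gammavariate(nu, 1.0 / nu)
        speckle = rng.expovariate(1.0)
        out.append(math.sqrt(texture * speckle))
    return out
```

Moment-based estimators like this are cheap but high-variance, especially for small nu; that variance is precisely the figure of merit on which the paper's method claims its improvement.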
Takahiro SHINOZAKI Sadaoki FURUI
One of the most important issues in spontaneous speech recognition is how to cope with the degradation of recognition accuracy caused by speaking rate fluctuation within an utterance. This paper proposes an acoustic model that adjusts the mixture weights and transition probabilities of the HMM for each frame according to the local speaking rate. The proposed model, along with variants and conventional models, is implemented using the Bayesian network framework. The proposed model has a hidden variable representing variation in the "mode" of the speaking rate, whose value controls the parameters of the underlying HMM. Model training and maximum-probability assignment of the variables are conducted using the EM/GEM and inference algorithms for Bayesian networks. Utterances from meetings and lectures are used for evaluation, where the Bayesian-network-based acoustic models rescore the likelihoods of N-best lists. In the experiments, the proposed model showed consistently higher performance than conventional HMMs and regression HMMs using the same speaking rate information.
This paper presents a new template matching method based on marker-controlled watershed segmentation (TMCWS). It is applied to recognizing numbers on special metal plates in production lines, where traditional image recognition methods do not work well. TMCWS is a shape-based matching method that uses different pattern images and their corresponding marker images as probes to explore the gradient space of an unknown image and determine which pattern best matches the target object in it. Unlike other matching algorithms, TMCWS first creates a marker image for each pattern, then takes both the pattern image and its corresponding marker image as a template window and shifts this window across the gradient space pixel by pixel to perform the search. At each position, the marker image is used to extract the contour of the target object with the help of marker-controlled watershed segmentation, and the pattern image is employed to evaluate the extracted shape. After all of the pattern images and their corresponding marker images have been tried, the pattern that best matches the target object is the recognition result. Because TMCWS contains shape extraction procedures, it is a high-level template matching method. Experiments on nearly 400 images of metal plates show its effectiveness in recognizing numbers in noisy images.
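The shift-and-score skeleton that TMCWS builds on is ordinary exhaustive template matching: slide a window over the image and keep the position with the best score. The sketch below uses a plain sum-of-squared-differences score on intensity images; the paper's watershed-based shape extraction and its scoring in gradient space are not reproduced here.

```python
def match_template(image, template):
    """Exhaustive template matching on 2-D lists of pixel values.

    Slides the template over every valid position and returns the
    (row, col) of the smallest sum of squared differences (SSD).
    The pixel-by-pixel window shift is the search loop that TMCWS
    also performs, with a much richer per-position evaluation.
    """
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

TMCWS replaces the raw SSD at each position with a segmentation-then-evaluate step, which is what makes it robust to the noise that defeats plain intensity matching on metal plates.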