Toshiyuki UTO Masaaki IKEHARA Kenji OHUE
This paper describes a design method for cosine-modulated filter banks (CMFB's) for efficient coding of images. Although the CMFB has the advantages of low design and implementation cost, its subband filters do not have the linear-phase property. This prevents the use of a symmetric extension in the transformation process and degrades image compression performance. However, a recently proposed smooth extension alleviates this problem, so well-designed CMFB's can be expected to be good candidates for the transform block in image compression applications. In this paper, we present a novel design approach for regular CMFB's. After introducing a regularity constraint on the lattice parameters of the prototype filter in paraunitary (PU) CMFB's, we also derive a regularity condition for perfect-reconstruction (PR) CMFB's. Finally, we design regular 8-channel PUCMFB's and PRCMFB's by an unconstrained optimization of the residual lattice parameters, and compare simulation results on several test images with those of various transforms to evaluate the proposed image coder based on CMFB's with one degree of regularity. In addition, we report the computational complexity of the designed CMFB's.
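As a rough illustration of the cosine-modulation step that generates the subband filters from a single prototype (this is the standard CMFB modulation formula, not the paper's lattice-based regular design; the rectangular prototype below is an arbitrary stand-in for a designed filter):

```python
import numpy as np

def cmfb_analysis_filters(p, M):
    """Modulate a length-N prototype p into M analysis filters:
    h_k(n) = 2 p(n) cos((pi/M)(k+1/2)(n-(N-1)/2) + (-1)^k pi/4)."""
    N = len(p)
    n = np.arange(N)
    H = np.empty((M, N))
    for k in range(M):
        H[k] = 2.0 * p * np.cos(np.pi / M * (k + 0.5) * (n - (N - 1) / 2.0)
                                + (-1) ** k * np.pi / 4.0)
    return H

# With a rectangular length-2M prototype, the modulation alone already
# makes the M filters mutually orthogonal (H @ H.T is a scaled identity).
M = 8
H = cmfb_analysis_filters(np.ones(2 * M), M)
```

Designing the prototype so that the whole bank is paraunitary or perfect-reconstruction (the subject of the paper) is the hard part that this sketch omits.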
Karthik MURALIDHAR Kwok Hung LI Sapna GEORGE
To attain good performance in an acoustic echo cancellation system, it is important to have a variable step size (VSS) algorithm as part of the adaptive filter. In this paper, we are concerned with the development of a VSS algorithm for a recently proposed subband affine projection (SAP) adaptive filter. Two popular VSS algorithms in the literature are the methods of delayed coefficients (DC) and variable regularization (VR); however, their merits and demerits are complementary. We propose a VSS algorithm that is a hybrid of both methods and combines their advantages. An extensive study of the new algorithm in different scenarios, such as the presence of double-talk (DT) during the transient phase of the adaptive filter, DT during steady state, and varying DT power, is conducted, and reasoning is given to support the observed behavior. The importance of the method of VR as part of a VSS algorithm is emphasized.
Makoto MISUMI Shin-ichi NAKAGAWA Ken-ichi CHINEN Yoichi SHINODA Katsunori YAMAOKA
When an IP Multicast network is constructed on a switch-based network, many IP Multicast packet broadcasts are generated, and these broadcasts disturb all other kinds of communication. To solve this problem, implementing IGMP Snooping on various switches has been proposed. However, some switches have insufficient IP Multicast packet-handling capability, a problem also mentioned in RFC 4541. In this paper, we propose the IGMP Snooping Activator (ISA) mechanism as a way to solve the IP Multicast packet-handling problem. The ISA transmits dummy IGMP Queries to maintain the IP Multicast network, and it joins the flooded IP Multicast group to activate IGMP Snooping in switches that are unable to handle IP Multicast packets. The experimental evaluation shows the effectiveness of our proposed method: the network load decreases because the method effectively restrains IP Multicast packet flooding.
Considering the inaccuracy of image registration, we propose a new regularized restoration algorithm to solve the ill-posed super-resolution (SR) problem. The registration error is used to obtain the cross-channel error information caused by inaccurate image registration, and is treated as a noise mean added to the within-channel observation noise, which is modeled as additive white Gaussian noise (AWGN). Based on this consideration, two constraints are imposed pixel by pixel within the framework of Miller's regularization, and regularization parameters connect the two constraints to construct a cost function. The regularization parameters are estimated adaptively at each pixel in terms of the registration error and in each observation channel in terms of the AWGN. In the iterative implementation of the proposed algorithm, the sub-sampling operation and the sampling aliasing in the detector model are treated separately so that the restored high-resolution (HR) image further approaches the original one. The transpose of the sub-sampling operation is implemented by nearest-neighbor interpolation. Simulations show that the proposed regularization algorithm can restore HR images with much sharper edges and greater SNR improvement.
Syed Khairuzzaman TANBEER Chowdhury Farhan AHMED Byeong-Soo JEONG Young-Koo LEE
The frequency of a pattern may not be a sufficient criterion for identifying meaningful patterns in a database. The temporal regularity of a pattern can be another key criterion for assessing its importance in several applications. A pattern is said to be regular if it appears at a regular, user-defined interval in the database. Even though there have been some efforts to discover periodic patterns in time-series and sequential data, none of the existing studies provides an appropriate method for discovering the patterns that occur regularly in a transactional database. Therefore, in this paper, we introduce the novel concept of mining regular patterns from transactional databases. We also devise an efficient tree-based data structure, called a Regular Pattern tree (RP-tree for short), that captures the database contents in a highly compact manner and enables a pattern-growth-based mining technique to generate the complete set of regular patterns in a database for a user-defined regularity threshold. Our performance study shows that mining regular patterns with an RP-tree is time- and memory-efficient, as well as highly scalable.
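A brute-force sketch of the regularity criterion (illustrative only; the RP-tree exists precisely to avoid this exponential enumeration): a pattern's period is the largest gap between consecutive transactions containing it, counting the gaps at the start and end of the database, and the pattern is regular when that period does not exceed the user-defined threshold.

```python
from itertools import combinations

def max_period(transactions, pattern):
    """Largest gap between consecutive transactions containing `pattern`,
    including the gap before its first and after its last occurrence."""
    pattern = set(pattern)
    last, worst = 0, 0
    for tid, items in enumerate(transactions, start=1):
        if pattern <= set(items):
            worst = max(worst, tid - last)
            last = tid
    return max(worst, len(transactions) - last)

def regular_patterns(transactions, max_reg):
    """Brute-force enumeration of all itemsets whose period <= max_reg."""
    items = sorted({i for t in transactions for i in t})
    found = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            period = max_period(transactions, cand)
            if period <= max_reg:
                found[cand] = period
    return found
```

For example, in the database [['a','b'], ['a'], ['a','b'], ['c'], ['a','b']] the itemset ('a','b') occurs in transactions 1, 3, and 5, so its period is 2 and it is regular for a threshold of 2, while ('c',) is not.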
Takayuki NOZAKI Kenta KASAI Tomoharu SHIBUYA Kohichi SAKANIWA
Luby et al. derived the evolution of degree distributions in residual graphs for irregular LDPC code ensembles. This evolution is an important characteristic used in the finite-length analysis of the expected block and bit error probabilities over the binary erasure channel. In this paper, we derive the detailed evolution of degree distributions in residual graphs for irregular LDPC code ensembles specified by joint degree distributions.
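For background, the standard (coarse, non-residual-graph) density evolution recursion over the BEC, x_{l+1} = ε·λ(1 − ρ(1 − x_l)), can be sketched as below; the bisection search for the threshold and all names are illustrative, not the paper's finite-length analysis.

```python
def de_threshold(lmbda, rho, iters=5000, tol=1e-7):
    """BEC threshold of an LDPC ensemble via the density evolution
    recursion x <- eps * lambda(1 - rho(1 - x)).  lmbda[i] (rho[i]) is the
    fraction of edges attached to degree-(i+1) variable (check) nodes."""
    lam = lambda x: sum(c * x ** i for i, c in enumerate(lmbda))
    rh = lambda x: sum(c * x ** i for i, c in enumerate(rho))

    def erases(eps):              # True if the erasure prob. does not die out
        x = eps
        for _ in range(iters):
            x = eps * lam(1.0 - rh(1.0 - x))
            if x < tol:
                return False
        return True

    lo, hi = 0.0, 1.0             # bisect on the channel erasure probability
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if erases(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the (3,6)-regular ensemble, λ(x) = x² and ρ(x) = x⁵, this reproduces the well-known threshold ε* ≈ 0.4294.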
Shun GOKITA Masashi SUGIYAMA Keisuke SAKURAI
In order to obtain better learning results in supervised learning, it is important to choose model parameters appropriately. Model selection is usually carried out by preparing a finite set of model candidates, estimating a generalization error for each candidate, and choosing the best one from the candidates. If the number of candidates is increased in this procedure, the optimization quality may be improved. However, this in turn increases the computational cost. In this paper, we focus on a generalization error estimator called the regularized subspace information criterion and derive an analytic form of the optimal model parameter over a set of infinitely many model candidates. This allows us to maximize the optimization quality while the computational cost is kept moderate.
Yasushi HIDAKA Masashi SUGIYAMA
In order to obtain better generalization performance in supervised learning, model parameters should be determined appropriately, i.e., they should be determined so that the generalization error is minimized. However, since the generalization error is inaccessible in practice, the model parameters are usually determined so that an estimator of the generalization error is minimized. The regularized subspace information criterion (RSIC) is such a generalization error estimator for model selection. RSIC includes an additional regularization parameter and it should be determined appropriately for better model selection. A meta-criterion for determining the regularization parameter has also been proposed and shown to be useful in practice. In this paper, we show that there are several drawbacks in the existing meta-criterion and give an alternative meta-criterion that can solve the problems. Through simulations, we show that the use of the new meta-criterion further improves the model selection performance.
Shinsaku KIYOMOTO Kazuhide FUKUSHIMA Toshiaki TANAKA Kouichi SAKURAI
In this paper, we examine the effectiveness of clock control in protecting stream ciphers from a distinguishing attack, and show that this form of control is effective against such attacks. We model two typical clock-controlled stream ciphers and analyze the increase in computational complexity for these attacks due to clock control. We then analyze parameters for the design of clock-controlled stream ciphers, such as the length of the LFSR used for clock control. By adopting the design criteria described in this paper, a designer can find the optimal length of the clock-control sequence LFSR.
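As a toy illustration of the clock-control idea only (the width-4 primitive polynomial and the step-1/step-2 rule below are arbitrary choices, not the ciphers modeled in the paper): a control LFSR decides whether the data LFSR is clocked once or twice per output bit, which is what destroys the linearity a distinguishing attack relies on.

```python
def lfsr(seed, width=4):
    """Fibonacci LFSR for the primitive polynomial x^4 + x^3 + 1
    (recurrence a_n = a_{n-1} XOR a_{n-4}); yields one bit per clock."""
    state = seed
    while True:
        out = state & 1                               # oldest bit
        fb = ((state >> (width - 1)) ^ state) & 1     # newest XOR oldest
        state = (state >> 1) | (fb << (width - 1))
        yield out

def clocked_keystream(ctrl_seed, data_seed):
    """Step-1/step-2 clock control: when the control LFSR outputs 1,
    the data LFSR is clocked an extra time before its bit is output."""
    ctrl, data = lfsr(ctrl_seed), lfsr(data_seed)
    while True:
        if next(ctrl):
            next(data)            # discard one bit: irregular clocking
        yield next(data)
```

The plain LFSR alone produces a maximal-length sequence of period 2^4 − 1 = 15; the clock-controlled combination decimates it irregularly.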
Takayuki ITSUI Kenta KASAI Ryoji IKEGAYA Tomoharu SHIBUYA Kohichi SAKANIWA
The average bit erasure probability of a binary linear code ensemble under maximum a posteriori probability (MAP) decoding over the binary erasure channel (BEC) can be calculated from the average support weight distribution of the ensemble via the EXIT function and the shortened information function. In this paper, we formulate the relationship between the average bit erasure probability under MAP decoding over the BEC and the average support weight distribution for a binary linear code ensemble. We then derive the average support weight distribution and the average bit erasure probability under MAP decoding over the BEC for regular LDPC code ensembles.
Ryoji IKEGAYA Kenta KASAI Tomoharu SHIBUYA Kohichi SAKANIWA
In this paper, we derive an upper bound on the average block error probability of a standard irregular low-density parity-check (LDPC) code ensemble under maximum-likelihood (ML) decoding. Moreover, we show that the upper bound asymptotically decreases polynomially with the code length. Furthermore, considering several regular LDPC code ensembles as special cases of standard irregular ones over an additive white Gaussian noise channel, we numerically show that the signal-to-noise ratio (SNR) thresholds at which the proposed bound converges to zero as the code length tends to infinity are smaller than those for a bound provided by Miller et al. We also give an example of a standard irregular LDPC code ensemble which has a lower SNR threshold than a given regular LDPC code ensemble.
Rong SUN Arika FUKUDA Kaiji MUKUMOTO Xinmei WANG
Based on the channel properties of meteor burst communication (MBC), a class of semi-irregular LDPC codes suitable for MBC is presented. Simulation results show that applying these semi-irregular LDPC codes in MBC yields better performance than regular ones. Some theoretical analysis is also given.
Satoshi GOUNAI Tomoaki OHTSUKI Toshinobu KANEKO
Irregular LDPC codes can achieve better error rate performance than regular LDPC codes. However, irregular LDPC codes have higher error floors than regular LDPC codes. The Ordered Statistic Decoding (OSD) algorithm achieves approximate Maximum Likelihood (ML) decoding, and ML decoding is effective for lowering error floors. However, OSD estimates satisfy the parity check equation of the LDPC code even when they are wrong. A hybrid decoder combining the LLR-BP decoding algorithm and the OSD algorithm therefore also cannot lower error floors, because wrong estimates still satisfy the LDPC parity check equation. We previously proposed a concatenated code constructed with an inner irregular LDPC code and an outer Cyclic Redundancy Check (CRC) code. Owing to the CRC, we can detect wrong codewords among the OSD estimates, and our CRC-LDPC code with the hybrid decoder can lower error floors in an AWGN channel. In wireless communications, we cannot neglect the effects of the channel. The OSD algorithm needs to order the bits by reliability, and the Channel State Information (CSI) is used to decide the reliability of each bit. In this paper, we evaluate the Block Error Rate (BLER) of the CRC-LDPC code with the hybrid decoder in a fast fading channel with perfect and imperfect CSI, where 'imperfect CSI' means that the distribution of the channel and the statistical average of the fading amplitudes are known at the receiver. By computer simulation, we show that the CRC-LDPC code with the hybrid decoder achieves lower error floors than the conventional LDPC code with the hybrid decoder in the fast fading channel with both perfect and imperfect CSI. We also show that combining error detection with the OSD algorithm is effective not only for lowering the error floor but also for reducing the computational complexity of the OSD algorithm.
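As a minimal, self-contained illustration of why the outer CRC helps (the generator polynomial below is a toy choice, not the CRC used in the paper): a wrong OSD estimate may satisfy the LDPC parity checks, but it is rejected whenever its CRC remainder is nonzero.

```python
def poly_mod(bits, poly):
    """Remainder of GF(2) polynomial division of `bits` by `poly`
    (both given MSB first); returns the last deg(poly) bits."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

def crc_encode(data, poly):
    """Append the CRC remainder so the codeword is divisible by poly."""
    pad = [0] * (len(poly) - 1)
    return list(data) + poly_mod(list(data) + pad, poly)

def crc_ok(word, poly):
    """A received word passes iff its remainder modulo poly is zero."""
    return not any(poly_mod(word, poly))
```

Because the generator polynomial has more than one term, any single-bit error in a codeword makes the remainder nonzero, so the check detects it.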
Hiroaki YAMAMOTO Takashi MIYAZAKI Masayuki OKAMOTO
The aim of this paper is to design efficient bit-parallel algorithms for translating regular expressions into nondeterministic finite automata (NFAs). Let r be a regular expression over an alphabet Σ, let n be the length of r, and let m be the number of symbols of Σ occurring in r. We present bit-parallel algorithms for translating regular expressions into Glushkov automata (position automata) and follow automata via Thompson automata. For Glushkov automata, we give an algorithm which runs in O(n+m⌈m/W⌉) time and space. For follow automata, we give a randomized algorithm which runs in O(n+m⌈m/W⌉) expected time and O(n+m⌈m/W⌉) space. We assume a W-bit uniform RAM when estimating the complexities of the algorithms. Since the best known algorithms for these automata run in O(n+m²) time and space, our algorithms achieve an almost W-fold speed-up.
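The core bit-parallel idea, packing automaton states into machine words so that one word operation advances many states at once, can be seen in the classic Shift-And string matcher, a much simpler special case than the paper's Glushkov and follow constructions:

```python
def shift_and_match(pattern, text):
    """Bit-parallel Shift-And: state bit i is set iff pattern[:i+1] is a
    suffix of the text read so far; one word operation per text symbol."""
    m = len(pattern)
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    accept = 1 << (m - 1)
    state = 0
    hits = []
    for pos, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(pos - m + 1)   # start index of the occurrence
    return hits
```

All m automaton states advance in the single word expression `((state << 1) | 1) & mask`, which is the source of the W-fold speed-up when m fits in a machine word.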
A semi-supervised classification method is presented. A robust unsupervised spectral mapping method is extended to the semi-supervised setting. Our proposed algorithm is derived by linearizing this nonlinear semi-supervised mapping method. Experiments on some public benchmark data reveal that our method outperforms a supervised algorithm based on linear discriminant analysis on the iris and wine data, and is also more accurate than the semi-supervised logistic GRF algorithm on the ionosphere dataset.
Kenta KASAI Shinya MIYAMOTO Tomoharu SHIBUYA Kohichi SAKANIWA
Irregular Repeat-Accumulate (IRA) codes, introduced by Jin et al., have a linear-time encoding algorithm, and their decoding performance is comparable to that of irregular low-density parity-check (LDPC) codes. Meanwhile, the authors have introduced irregular LDPC code ensembles represented in detail by joint degree distributions between variable nodes and check nodes. In this paper, using the density evolution method [7],[8], we optimize IRA codes specified by joint degree distributions. The resulting codes have higher thresholds than Jin's IRA codes.
Masashi SUGIYAMA Keisuke SAKURAI
For obtaining a higher level of generalization capability in supervised learning, model parameters should be optimized, i.e., they should be determined in such a way that the generalization error is minimized. However, since the generalization error is inaccessible in practice, model parameters are usually determined in such a way that an estimate of the generalization error is minimized. A standard procedure for model parameter optimization is to first prepare a finite set of candidates of model parameter values, estimate the generalization error for each candidate, and then choose the best one from the candidates. If the number of candidates is increased in this procedure, the optimization quality may be improved. However, this in turn increases the computational cost. In this paper, we give methods for analytically finding the optimal model parameter value from a set of infinitely many candidates. This maximally enhances the optimization quality while the computational cost is kept reasonable.
This paper presents the performance and thresholds of irregular Tree-LDPC codes. We obtain optimal irregular degree distributions and thresholds by the density evolution technique. It is shown that irregular Tree-LDPC codes achieve a performance gain at low SNR.
In this paper, we introduce a low-complexity post-processing algorithm that simultaneously suppresses the blocking and ringing artifacts of compressed video sequences. A new regularization function that imposes smoothness on neighboring pixels is defined, composed of four sub-functions that combine pixel-based data-fidelity and smoothing terms. The solution can therefore be obtained without inverse-matrix or vector-matrix computations, making a low-complexity implementation possible. In addition, the regularization parameter controlling the relative importance of the data fidelity and the degree of smoothness is estimated from overhead information available in the decoder, such as the macroblock type and the quantization step size. The experimental results show the capability and efficiency of the proposed algorithm.
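A minimal sketch of the pixel-based idea: a quadratic data-fidelity term plus a 4-neighbour smoothness term has the per-pixel stationary condition f_i = (g_i + λ·Σ_j f_j)/(1 + λ·n_i), which a Jacobi-style fixed-point iteration solves with no matrix inversion. This is a generic illustration only; `lam` here is a hand-set constant rather than being estimated from the macroblock type and quantization step size, and the paper's four sub-functions are not reproduced.

```python
import numpy as np

def smooth_regularize(g, lam=0.5, iters=100):
    """Minimize sum_i (f_i - g_i)^2 + lam * sum_{4-neighbours} (f_i - f_j)^2
    by Jacobi iteration on the per-pixel stationary condition."""
    f = g.astype(float).copy()
    for _ in range(iters):
        nb = np.zeros_like(f)          # sum of neighbouring values
        cnt = np.zeros_like(f)         # number of neighbours per pixel
        nb[1:, :] += f[:-1, :]; cnt[1:, :] += 1
        nb[:-1, :] += f[1:, :]; cnt[:-1, :] += 1
        nb[:, 1:] += f[:, :-1]; cnt[:, 1:] += 1
        nb[:, :-1] += f[:, 1:]; cnt[:, :-1] += 1
        f = (g + lam * nb) / (1.0 + lam * cnt)
    return f
```

Each pixel is updated from its own observation and its current neighbours, so the cost per iteration is linear in the number of pixels.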
Muhammad HUSSAIN Yoshihiro OKADA Koichi NIIJIMA
Displaced subdivision surface representation [13] is a new form of representing a polygonal surface model, in which a detailed surface model is defined as a scalar-valued displacement map over a smooth domain surface; it offers a number of attractive features for editing, geometry compression, animation, scalability, and adaptive rendering of polygonal models. The construction of the smooth domain surface is a challenging task in the process of converting a detailed polygonal surface model into this representation. In this paper, we propose a new efficient method for defining the smooth domain surface based on the √3-subdivision scheme. The proposed algorithm not only performs better in terms of the quality of the generated surfaces, but is also computationally more efficient and occupies less memory than the original algorithm [13]; moreover, due to the specific nature of √3-subdivision, it generates surfaces with more levels of detail when a prescribed target complexity of the generated mesh must not be exceeded. To corroborate the efficiency and the quality of the new technique, conversion results for several public-domain models are presented.