Recently, a high-dimensional classification framework has been proposed for brain image analysis that introduces spatial and anatomical priors into the classical single-kernel support vector machine optimization scheme, in which the sequential minimal optimization (SMO) training algorithm is adopted. However, to satisfy the optimization conditions required in the single-kernel case, the spatial regularization parameter is unreasonably assumed to be equal to the anatomical one. In this letter, this approach is improved by combining the SMO algorithm with multiple kernel learning, which avoids that assumption and optimally estimates the two parameters. The improvement is demonstrated by comparative experimental results on the classification of Alzheimer's patients and elderly controls.
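As a rough sketch of the multiple kernel learning step (the notation below is assumed for illustration and is not taken from the letter), the spatial and anatomical priors can be encoded as separate kernels whose weights are learned jointly with the SVM, so the two regularization parameters need not be tied to each other:

```latex
% Hedged sketch: K_s and K_a denote assumed spatial and anatomical kernels;
% beta_s, beta_a are the kernel weights learned by MKL instead of being fixed equal.
K_{\beta} = \beta_s K_s + \beta_a K_a, \qquad
\min_{\substack{\beta_s,\beta_a \ge 0 \\ \beta_s+\beta_a=1}} \;
\min_{f \in \mathcal{H}_{K_\beta}} \;
\tfrac{1}{2}\,\|f\|_{\mathcal{H}_{K_\beta}}^{2}
+ C \sum_{i=1}^{N} \max\bigl(0,\, 1 - y_i f(x_i)\bigr)
```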
Nobutaka SUZUKI Kosetsu IKEDA Yeondae KWON
In this paper, we consider efficiently solving the all-pairs regular path problem on large graphs. Let G be a graph and r be a regular path query, and consider finding the answers of r on G. If G is small enough to fit in main memory, it suffices to load the entire graph into main memory and traverse G to find paths matching r. However, if G is too large to fit in main memory, we need another approach. In this paper, we propose a novel approach based on an external-memory algorithm. Our algorithm finds the answers matching r by scanning the node list of G sequentially. We conducted a small experiment, whose results suggest that our algorithm can solve the problem efficiently.
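For illustration only, the sketch below shows the usual in-memory way of answering a regular path query by searching the product of the graph and an automaton for r; the paper's contribution is an external-memory variant of this idea that scans the node list of G sequentially rather than assuming G fits in memory. All identifiers are assumed for the example.

```python
# Hedged illustration: in-memory product-automaton search for a regular path query.
from collections import deque

def regular_path_pairs(edges, nodes, nfa_delta, nfa_start, nfa_accept):
    """Return all pairs (u, v) such that some path from u to v spells a word of the query.
    edges: dict node -> list of (label, node); nfa_delta: dict (state, label) -> set of states."""
    pairs = set()
    for u in nodes:
        queue = deque([(u, nfa_start)])
        visited = {(u, nfa_start)}
        while queue:
            v, state = queue.popleft()
            if state in nfa_accept:
                pairs.add((u, v))
            for label, w in edges.get(v, []):
                for nstate in nfa_delta.get((state, label), ()):
                    if (w, nstate) not in visited:
                        visited.add((w, nstate))
                        queue.append((w, nstate))
    return pairs
```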
Zhong ZHANG Shuang LIU Zhiwei ZHANG
Sparsity-based methods have recently been applied to abnormal event detection and have achieved impressive results. However, most such methods suffer from the curse of dimensionality; furthermore, they take no account of the relationship among coefficient vectors. In this paper, we propose a novel method called consistent sparse representation (CSR) to overcome these drawbacks. We first reconstruct each feature in the space spanned by the clustering centers of the training features, so as to reduce the dimensionality of the features and preserve the neighboring structure. Then, a consistent regularization term is added to the sparse representation model, which explicitly considers the relationship among coefficient vectors. Our method is verified on two challenging databases (the UCSD Ped1 database and the Subway database), and the experimental results demonstrate that it obtains better results than previous methods in abnormal event detection.
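A minimal sketch of the kind of objective involved (symbols assumed for illustration, not drawn from the paper): a consistent regularization term couples the coefficient vectors of related features on top of a standard sparse reconstruction term over the clustering centers.

```latex
% Hedged sketch: D holds the clustering centers of training features, alpha_i is the
% coefficient vector of feature x_i, and W_ij encodes the relationship between features
% i and j; the last term is the assumed form of the consistent regularization.
\min_{\{\alpha_i\}} \; \sum_i \Bigl( \|x_i - D\alpha_i\|_2^2 + \lambda \|\alpha_i\|_1 \Bigr)
+ \gamma \sum_{i,j} W_{ij}\, \|\alpha_i - \alpha_j\|_2^2
```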
Yilong ZHANG Yuehua LI Guanhua HE Sheng ZHANG
Aperture synthesis technology is an effective approach for millimeter-wave radiometers to achieve high-resolution observations. However, the application of the synthetic aperture imaging radiometer (SAIR) is limited by its large number of antennas, receivers and correlators, which may increase noise and cause image distortion. To solve these problems, this letter proposes a compressive regularization imaging algorithm, called CRIA, to reconstruct images accurately by combining the sparsity and the energy functional of the target space. With randomly selected visibility samples, CRIA employs the l1 norm to reconstruct the target brightness temperature and the l2 norm to estimate its energy functional simultaneously. Comparisons with other algorithms show that CRIA provides higher-quality target brightness temperature images at a lower data level.
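The reconstruction can be pictured roughly as the following optimization (notation assumed for illustration; the exact formulation is given in the letter):

```latex
% Hedged sketch: y are the randomly selected visibility samples, A the corresponding
% sampling operator, and T the target brightness temperature; the l1 term enforces
% sparsity while the l2 term controls the energy functional of the target space.
\hat{T} = \arg\min_{T} \; \tfrac{1}{2}\,\|y - A T\|_2^2
        + \lambda_1 \|T\|_1 + \lambda_2 \|T\|_2^2
```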
Qi ZHAO Hongwei DENG Hongbo ZHAO
The Earth's ionosphere can hinder radio propagation with two serious problems: group delay and phase advance. Ionospheric irregularities are particularly troublesome because they make the amplitude and phase of radio signals fluctuate rapidly, a phenomenon known as ionospheric scintillation. Severe ionospheric scintillation can cause loss of phase lock, which degrades positioning accuracy and affects the performance of navigation systems. Based on the phase screen model, this paper presents a novel power spectrum model of phase scintillation and a model of amplitude scintillation. Preliminary results show that, as the scintillation intensity increases, the random phase and amplitude fluctuations become stronger, which coincides with observations. Simulations of the scintillation effects on the acquisition of Beidou signals are used to predict the acquisition probability. In addition, the acquisition probabilities of GPS and Beidou signals under different scintillation intensities are presented; at the same SNR, the acquisition probability decreases as the scintillation intensity increases. The simulation results show that scintillation degrades the acquisition performance of the Beidou navigation system. A comparison of the Beidou and GPS simulations shows that the code length and code rate of the satellite signals affect the acquisition performance of the navigation system.
This letter studies the problem of cooperative spectrum sensing in wideband cognitive radio networks. Based on the basis expansion model (BEM), the estimation of the power spectral density (PSD) is transformed into the estimation of the BEM coefficients. Sparsity in both the frequency domain and the spatial domain is exploited to construct a sparse estimation structure, and the theory of L1/2 regularization is used to solve the resulting compressed sensing problem. Simulation results demonstrate the effectiveness of the proposed method.
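A minimal sketch of the resulting estimation problem, with assumed notation (the letter's own measurement model may differ):

```latex
% Hedged sketch: the PSD is modeled as a weighted sum of known basis functions,
% so estimating the PSD reduces to estimating the sparse coefficient vector w
% from the measurements y through the (assumed) sensing matrix Phi.
\hat{w} = \arg\min_{w} \; \|y - \Phi w\|_2^2 + \lambda \|w\|_{1/2}^{1/2},
\qquad
\|w\|_{1/2}^{1/2} = \sum_k |w_k|^{1/2}
```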
Headway irregularity not only increases the average passenger waiting time but also causes additional energy consumption and delay. A real-time headway control model is proposed to maintain headway regularity in railway networks by adjusting the travel time on each segment for each train. The adjustment of the travel time is based on a consensus algorithm, in which the control law is obtained by solving a Riccati equation. The minimum running time on each segment is also taken into account. The computation time of the proposed method is analyzed, and the analysis shows that it satisfies the requirement of real-time operation. The proposed model is tested in simulation, where the consensus trend of the headways can be observed. The simulation results also show that the average passenger waiting time decreases from 52 to 50 seconds per passenger. Additionally, the delay time is reduced by at least 6.5% and the energy consumption by up to 0.1% with the proposed method.
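As a generic, assumed illustration of a Riccati-based control law (not the paper's exact network model), a continuous-time algebraic Riccati equation can be solved with SciPy and the resulting state-feedback gain applied to the headway deviations:

```python
# Hedged, generic illustration: solve an algebraic Riccati equation and form a
# state-feedback gain; the "state" may be read as headway deviations and the
# "input" as travel-time adjustments per segment (toy matrices, assumed values).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed toy system dynamics
B = np.array([[0.0], [1.0]])              # assumed input matrix
Q = np.eye(2)                             # weight on headway deviation
R = np.array([[1.0]])                     # weight on control effort

P = solve_continuous_are(A, B, Q, R)      # Riccati solution
K = np.linalg.inv(R) @ B.T @ P            # feedback gain: u = -K x
print(K)
```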
In this paper, an image prior based on soft-morphological filters and its application to image recovery are presented. In morphological image processing, a gray-scale image is represented as a subset of a three-dimensional space spanned by the spatial and intensity axes. Morphological opening and closing, the basic operations of morphological image processing, approximate the image subset and its complement, respectively, as unions of structuring elements translated in this three-dimensional space. In this study, the opening and closing filters are used to define an image prior for the regularization problem of image recovery. When the proposed prior is applied, the recovered image contains no noise components of the kind eliminated by opening and closing. However, the closing and opening filters are less able to eliminate Gaussian noise. To improve the robustness against Gaussian noise, the closing and opening filters are approximated by soft-closing and soft-opening, respectively, using relaxed max and min functions. In image recovery experiments, image denoising and deblurring using the proposed prior are demonstrated, along with comparisons of the proposed prior with existing priors that impose a penalty on the intensity gradient.
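One common way to relax the max and min functions, shown here only as an assumed illustration of the idea behind soft morphology (the paper's exact relaxation may differ), is the log-sum-exp soft maximum:

```python
# Hedged sketch: log-sum-exp relaxations of max and min; soft erosion/dilation built
# from these yield soft-opening and soft-closing that respond less abruptly to
# Gaussian noise than the hard max/min operations.
import numpy as np

def soft_max(values, beta=10.0):
    # Smooth approximation of max; approaches the true maximum as beta grows.
    return np.log(np.sum(np.exp(beta * np.asarray(values)))) / beta

def soft_min(values, beta=10.0):
    return -soft_max(-np.asarray(values), beta)
```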
Jorge TREVINO Takuma OKAMOTO Yukio IWAYA Yôiti SUZUKI
Sound field reproduction systems seek to realistically convey 3D spatial audio by re-creating the sound pressure inside a region enclosing the listener. High-order Ambisonics (HOA), a sound field reproduction technology, is notable for defining a scalable encoding format that characterizes the sound field in a system-independent way. Sound fields sampled with a particular microphone array and encoded into the HOA format can be reproduced using any sound presentation device, typically a loudspeaker array, by applying an HOA decoder. The HOA encoding format is based on the spherical harmonic decomposition; this makes it easy to design decoders for large arrays of loudspeakers uniformly distributed over all directions. In practice, however, it is seldom possible to cover all directions with loudspeakers placed at regular angular intervals. An irregular array, one in which the angular separation between adjacent loudspeakers is not constant, does not perform as well as a regular one when reproducing HOA, owing to the uneven sampling of the spherical harmonics. This paper briefly introduces the techniques used in HOA and advances a new approach to designing HOA decoders for irregular loudspeaker arrays. The main difference between conventional methods and our proposal is the use of a new error metric: the radial derivative of the reconstruction error. Minimizing this metric leads to smooth reproduction that is accurate over a larger region than that achieved by conventional HOA decoders. We evaluate our proposal using computer simulations of two 115-channel loudspeaker arrays, one regular and one irregular. We find that our proposal yields a larger listening region when used to decode HOA for reproduction with the irregular array, while matching the high-quality reproduction that can be attained with the regular array and conventional HOA decoders.
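A sketch of the error metric (notation assumed for illustration; the paper's precise definition may differ):

```latex
% Hedged sketch: p_des and p_rep are the desired and reproduced sound pressure fields;
% in addition to the usual reconstruction error, the decoder is chosen to keep the
% error's derivative along the radial direction r small, yielding smoother reproduction.
e(r,\theta,\phi) = p_{\mathrm{rep}}(r,\theta,\phi) - p_{\mathrm{des}}(r,\theta,\phi),
\qquad
\text{minimize } \; \|e\|^{2} + \mu \left\| \frac{\partial e}{\partial r} \right\|^{2}
```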
We consider the problem of optimizing the quantizer design for distributed estimation systems in which nodes located at different sites collect measurements and transmit quantized data to a fusion node, which then produces an estimate of the parameter of interest. The goal is to minimize the amount of information that the nodes have to transmit in order to attain a given application accuracy. We propose an iterative quantizer design algorithm that seeks a non-regular mapping between quantization partitions and their codewords so as to minimize a global distortion measure such as the estimation error. We apply the proposed algorithm to a system in which an acoustic amplitude sensor model is employed at each node for source localization. Our experiments demonstrate that a significant performance gain can be achieved by our technique compared with standard designs and even with recently published novel distributed designs.
Regularized forward selection is viewed as a method for obtaining a sparse representation in a nonparametric regression problem. In regularized forward selection, the regression output is represented by a weighted sum of several significant basis functions that are selected from a large number of candidates by a greedy training procedure in terms of a regularized cost function, together with an appropriate model selection method. In this paper, we propose a model selection method for regularized forward selection. To this end, we focus on the reduction of the cost function brought about by appending a new basis function in the greedy training procedure. We first clarify a bias and variance decomposition of the cost reduction and then derive a probabilistic upper bound for the variance of the cost reduction under some conditions. The derived upper bound reflects an essential feature of the greedy training procedure, namely that it selects the basis function that maximally reduces the cost function. We then propose a thresholding method for determining significant basis functions by applying the derived upper bound as a threshold level and combining it with leave-one-out cross validation. Several numerical experiments show that the generalization performance of the proposed method is comparable to that of the other methods, while the number of basis functions selected by the proposed method is much smaller. The proposed method is therefore able to yield a sparse representation while keeping relatively good generalization performance. Moreover, our method has the advantage of being free from the selection of a regularization parameter.
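A simplified, assumed sketch of the greedy procedure: at each step the candidate basis function giving the largest reduction of the regularized cost is appended, and selection stops once this reduction falls below a threshold (in the paper, a threshold derived from the probabilistic upper bound combined with leave-one-out cross validation). Previously selected weights are not re-fitted here, which is a simplification.

```python
# Hedged sketch of regularized forward selection (generic version, details assumed).
import numpy as np

def forward_selection(Phi, y, lam, threshold, max_terms=50):
    """Phi: (N, M) candidate basis outputs, y: (N,) targets."""
    selected, residual = [], y.copy()
    for _ in range(max_terms):
        best_j, best_reduction, best_w = None, 0.0, 0.0
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            phi = Phi[:, j]
            w = phi @ residual / (phi @ phi + lam)            # regularized LS weight
            reduction = 2 * w * (phi @ residual) - (w ** 2) * (phi @ phi + lam)
            if reduction > best_reduction:
                best_j, best_reduction, best_w = j, reduction, w
        if best_j is None or best_reduction < threshold:      # stop via threshold
            break
        selected.append(best_j)
        residual = residual - best_w * Phi[:, best_j]
    return selected
```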
Eunchul YOON Suhan CHOI Unil YUN
Two channel correlation estimation (CCE) schemes exploiting pilots are presented for an OFDM system with a comb-type pilot pattern, under the assumption that there exist virtual subcarriers in the OFDM block. Whereas the first scheme is designed based on the conventional regularized least-squares (LS) approach, the second scheme is designed by a newly devised LS-based technique. Because the second scheme removes the need to compute a matrix inverse by keeping the minimum eigenvalue of the matrix to be inverted positive, it leads to reduced implementation complexity and improved performance. It is shown by simulation that the proposed CCE schemes substantially enhance the mean square error and symbol error rate performance of MMSE-based channel estimation by providing more accurate channel correlation information.
Asahi TAKAOKA Satoshi TAYU Shuichi UENO
We consider the minimum feedback vertex set problem for some bipartite graphs and degree-constrained graphs. We show that the problem is solvable in linear time for bipartite permutation graphs and NP-hard for grid intersection graphs. We also show that the problem is solvable in O(n^2 log^6 n) time for n-vertex graphs with maximum degree at most three.
Wittawat JITKRITTUM Hirotaka HACHIYA Masashi SUGIYAMA
Feature selection is a technique to screen out less important features. Many existing supervised feature selection algorithms use redundancy and relevancy as the main criteria to select features. However, feature interaction, potentially a key characteristic in real-world problems, has not received much attention. As an attempt to take feature interaction into account, we propose
Makoto NAKASHIZUKA Yu ASHIHARA Youji IIGUNI
This paper proposes an adaptation method for the structuring elements of morphological filters. The structuring element of a morphological filter specifies the shape of the local structures that are eliminated or preserved in the output. The adaptation of the structuring element is hence a crucial problem for image denoising using morphological filters. Existing adaptation methods for structuring elements require preliminary training on example images. We propose an adaptation method for the structuring elements of morphological opening filters that does not require such training. In our approach, the opening filter is interpreted as an approximation method using the union of translated structuring elements. In order to eliminate noise components, a penalty derived from an assumption of image smoothness is imposed on the structuring element. Image denoising is achieved by decreasing the objective function, which is the sum of an approximation error term and the penalty function. In experiments, we use the proposed method to demonstrate the reduction of positive impulsive noise in images.
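Roughly, the adaptation can be pictured as minimizing an objective of the following assumed form, where the error norm and the penalty are placeholders rather than the paper's exact choices:

```latex
% Hedged sketch: f is the noisy image, B the structuring element, f \circ B the
% morphological opening of f by B, and R(B) a smoothness penalty on B; the structuring
% element is adapted by decreasing this objective.
\min_{B} \; J(B) = \| f - f \circ B \|_{1} + \lambda\, R(B)
```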
Hyunduk KIM Sang-Heon LEE Myoung-Kyu SOHN Dong-Ju KIM Byungmin KIM
Super-resolution (SR) reconstruction is the process of fusing a sequence of low-resolution images into one high-resolution image. Many researchers have introduced various SR reconstruction methods. However, these traditional methods are limited in the extent to which they can recover high-frequency information. Moreover, owing to the self-similarity of face images, most facial SR algorithms are machine-learning based. In this paper, we introduce a facial SR algorithm that combines learning-based and regularized SR image reconstruction algorithms. Our approach involves two main ideas. First, we employ separated frequency components to reconstruct high-resolution images. Second, we separate the regions of the training face images. These approaches help to recover high-frequency information. In our experiments, we demonstrate the effectiveness of these ideas.
Ro-Yu WU Jou-Ming CHANG An-Hang CHEN Ming-Tat KO
A non-regular tree T with a prescribed branching sequence (s_1, s_2, ..., s_n) is a rooted and ordered tree such that its internal nodes are numbered from 1 to n in preorder and every internal node i in T has s_i children. Recently, Wu et al. (2010) introduced a concise representation called RD-sequences to represent all non-regular trees and proposed a loopless algorithm for generating all non-regular trees in a Gray-code order. In this paper, based on this Gray-code order, we present efficient ranking and unranking algorithms for non-regular trees with n internal nodes. Moreover, we show that the ranking algorithm and the unranking algorithm run in O(n^2) time and O(n^2 + nS_{n-1}) time, respectively, provided that a preprocessing taking O(n^2 S_{n-1}) time and space is carried out in advance.
Ran LI Zong-Liang GAN Zi-Guan CUI Xiu-Chang ZHU
A novel joint motion-compensated interpolation method using eight-neighbor block motion vectors (8J-MCI) is presented. The proposed method uses bi-directional motion estimation (BME) to obtain the motion vector field of the interpolated frame and adopts the motion vectors of the interpolated block and its eight neighboring blocks to jointly predict the target block. Because the smoothness of the motion vector field makes the motion vectors of the eight neighboring blocks quite close to the true motion vector of the interpolated block, the proposed algorithm has better fault tolerance than traditional ones. Experiments show that the proposed algorithm outperforms the motion-aligned auto-regressive algorithm (MAAR), one of the state-of-the-art frame rate up-conversion (FRUC) schemes, in terms of average PSNR on the test image sequences and offers better subjective visual quality.
We propose a novel network traffic matrix decomposition method named Stable Principal Component Pursuit with Frequency-Domain Regularization (SPCP-FDR), which improves the Stable Principal Component Pursuit (SPCP) method by using a frequency-domain noise regularization function. An experiment demonstrates the feasibility of this new decomposition method.
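A minimal sketch of the kind of objective involved (the standard SPCP form plus an assumed frequency-domain term; the exact formulation is given in the paper):

```latex
% Hedged sketch: the traffic matrix X is split into a low-rank part A, a sparse
% anomaly part E, and noise; SPCP-FDR replaces the plain noise term by a regularization
% g(.) applied in the frequency domain (F denotes an assumed Fourier-type transform).
\min_{A,E} \; \|A\|_{*} + \lambda \|E\|_{1} + \mu\, g\bigl(\mathcal{F}(X - A - E)\bigr)
```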
Yuichi ASAHIRO Hiroshi ETO Eiji MIYANO
Given a connected graph G = (V, E) on n vertices, the MAXIMUM r-REGULAR INDUCED CONNECTED SUBGRAPH (r-MaxRICS) problem asks for a maximum-sized subset of vertices S ⊆ V such that the induced subgraph G[S] on S is connected and r-regular. It is known that 2-MaxRICS and 3-MaxRICS are NP-hard. Moreover, 2-MaxRICS cannot be approximated within a factor of n^{1-ε} in polynomial time for any ε > 0 unless P = NP. In this paper, we show that r-MaxRICS is NP-hard for any fixed integer r ≥ 4. Furthermore, we show that for any fixed integer r ≥ 3, r-MaxRICS cannot be approximated within a factor of n^{1/6-ε} in polynomial time for any ε > 0 unless P = NP.