Lingyan LI Xiaoyan ZHOU Yuan ZONG Wenming ZHENG Xiuzhen CHEN Jingang SHI Peng SONG
Over the past several years, research on micro-expression recognition (MER) has become an active topic in affective computing and computer vision because of its potential value in many application fields, e.g., lie detection. However, most previous works assumed an ideal scenario in which both training and testing samples belong to the same micro-expression database, an assumption that is easily broken in practice. In this letter, we therefore consider a more challenging scenario in which the training and testing samples come from different micro-expression databases, and investigate unsupervised cross-database MER, where the source database is labeled while the label information of the target database is entirely unseen. To solve this problem, we propose an effective method called target-adapted least-squares regression (TALSR). The basic idea of TALSR is to learn a regression coefficient matrix from the source samples and their label information while also adapting this matrix to the target micro-expression database. The learned regression coefficient matrix can then be used to predict the micro-expression categories of the target samples. Extensive experiments on the CASME II and SMIC micro-expression databases are conducted to evaluate the proposed TALSR. The experimental results show that TALSR outperforms many recent well-performing domain adaptation methods on unsupervised cross-database MER tasks.
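As a rough illustration of the preceding idea, the sketch below fits a regularized least-squares regression matrix on labeled source features and adds a simple mean-alignment penalty so that the learned matrix also "suits" the unlabeled target data. The specific adaptation term, the function name, and all parameters are assumptions for illustration only, not the authors' exact TALSR formulation.

```python
import numpy as np

def target_adapted_lsr(Xs, Ys, Xt, lam=1.0, mu=1.0):
    """Illustrative target-adapted least-squares regression (not the exact TALSR).

    Xs: (n_s, d) source features, Ys: (n_s, c) one-hot source labels,
    Xt: (n_t, d) unlabeled target features.
    A mean-alignment penalty ||(mean_s - mean_t)^T W||^2 stands in for the
    target-adaptation term (an assumption made for this sketch).
    """
    d = Xs.shape[1]
    diff = (Xs.mean(axis=0) - Xt.mean(axis=0)).reshape(-1, 1)   # (d, 1)
    A = Xs.T @ Xs + lam * np.eye(d) + mu * (diff @ diff.T)
    W = np.linalg.solve(A, Xs.T @ Ys)                           # (d, c)
    return W

# Target micro-expression categories are then read off the regression outputs:
# y_hat = np.argmax(Xt @ W, axis=1)
```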
Hiroyasu ISHIKAWA Hiroki ONUKI Hideyuki SHINONAGA
Unmanned aircraft systems (UASs) have been developed and studied as temporary communication systems for emergency and rescue services during disasters, such as earthquakes and serious accidents. In a typical UAS model, several unmanned aerial vehicles (UAVs) are used to provide services over a large area. Each UAV carries a transmitter and a receiver to exchange signals with terrestrial stations and terminals. The carrier frequencies of the transmitted and received signals therefore experience Doppler shifts due to variations in the line-of-sight velocity between the UAV and the terrestrial terminal. Thus, by observing multiple Doppler shifts from different UAVs, it is possible to detect the position of a user carrying a communication terminal for the UAS. This study presents a position-detection methodology based on applying the least-squares method to the Doppler-shift frequencies. Furthermore, a positioning accuracy index is newly proposed, which can be used to assess positioning accuracy in place of the dilution-of-precision (DOP) measure used in global positioning systems (GPSs). A computer simulation was conducted for two different flight-route models to confirm the applicability of the proposed positioning method and the positioning accuracy index. The simulation results confirm that parameters such as the flight route and the initial position and velocity of the UAVs can be optimized by using the proposed positioning accuracy index.
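A minimal sketch of the kind of least-squares fit described above: given several UAVs with known positions and velocities and the Doppler shifts they observe, the terminal position is found by minimizing the Doppler residuals. The static-terminal assumption, the sign convention, and all names are our choices for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light [m/s]

def doppler_residuals(p, uav_pos, uav_vel, f_obs, fc):
    """Observed minus predicted Doppler shifts for a static terminal at p.

    p: (3,) terminal position, uav_pos/uav_vel: (m, 3) UAV states,
    f_obs: (m,) observed Doppler shifts [Hz], fc: carrier frequency [Hz]."""
    los = uav_pos - p                                        # terminal-to-UAV vectors
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)
    range_rate = np.sum(uav_vel * unit, axis=1)              # d(range)/dt [m/s]
    f_pred = -fc * range_rate / C                            # positive when approaching
    return f_obs - f_pred

def estimate_position(p0, uav_pos, uav_vel, f_obs, fc):
    """Least-squares position estimate from Doppler shifts of several UAVs."""
    sol = least_squares(doppler_residuals, p0,
                        args=(uav_pos, uav_vel, f_obs, fc))
    return sol.x
```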
Takahiro OGAWA Akira TANAKA Miki HASEYAMA
A Wiener-based inpainting quality prediction method is presented in this paper. The proposed method is the first that can predict inpainting quality both before and after intensities become missing, even when the inpainting method is unknown. When the target image does not yet include any missing areas, the proposed method estimates the importance of the intensities of all pixels, so we can identify which areas should not be removed. Since the same measure can also be derived in the same manner for a corrupted image that already includes missing areas, the expected difficulty of reconstructing those missing pixels is predicted, i.e., we can know which missing areas can be successfully reconstructed. The proposed method focuses on expected errors derived from the Wiener filter, which enables least-squares reconstruction, to predict the inpainting quality. The greatest advantage of the proposed method is that the same inpainting quality prediction scheme can be used in both of these situations, and their results show common trends. Experimental results show that the inpainting quality predicted by the proposed method can successfully be used as a universal quality measure.
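A small sketch of the kind of expected-error quantity such a Wiener (LMMSE) approach builds on: for a zero-mean patch with known covariance and a mask of missing pixels, the expected squared error of the least-squares reconstruction of the missing intensities from the observed ones is the trace of the conditional covariance. The zero-mean Gaussian patch model and the function name are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def expected_wiener_error(cov, missing):
    """Expected squared error of the LMMSE (Wiener) reconstruction of the
    missing entries of a zero-mean patch with covariance `cov`.

    cov: (d, d) patch covariance, missing: boolean mask of length d.
    Returns tr(S_mm - S_mo S_oo^{-1} S_om): the error the Wiener filter is
    expected to leave on the missing pixels."""
    m, o = missing, ~missing
    S_mm = cov[np.ix_(m, m)]
    S_mo = cov[np.ix_(m, o)]
    S_oo = cov[np.ix_(o, o)]
    cond = S_mm - S_mo @ np.linalg.solve(S_oo, S_mo.T)
    return np.trace(cond)
```

A large value for a candidate mask suggests that area would be hard to inpaint (i.e., its intensities are important), while a small value suggests the area could be removed and reconstructed successfully.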
Bing DENG Zhengbo SUN Le YANG Dexiu HU
A linear-correction method is developed for source position and velocity estimation using time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements. The proposed technique first obtains an initial source location estimate using the first-step processing of an existing algebraic algorithm. It then refines the initial localization result by estimating its estimation error via weighted least-squares (WLS) optimization and subtracting this error out. The new solution is shown to achieve the Cramer-Rao lower bound (CRLB) accuracy, and it has better accuracy than several benchmark methods at relatively high noise levels.
Junil AHN Jaewon CHANG Chiho LEE
The integer least-squares (ILS) problem frequently arises in wireless communication systems. Sphere decoding (SD) is a systematic search scheme for solving the ILS problem. The enumeration of candidates is a key part of SD, as it selects the lattice points that will be searched by the algorithm. Herein, the authors present a computationally efficient Schnorr-Euchner enumeration (SEE) algorithm for solving constrained ILS problems, where the solution is restricted to a finite integer lattice. To trace only valid lattice points within the underlying finite lattice, the authors devise an adaptive computation of the enumeration step together with a count of the valid points enumerated. In contrast to previous SEE methods based on zig-zag enumeration, the proposed method completely avoids enumerating invalid points outside the finite lattice, and it further reduces real arithmetic and logical operations.
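For orientation, the sketch below is a compact textbook-style sphere decoder with Schnorr-Euchner ordering restricted to a finite alphabet: at each layer, only valid symbols are visited, in order of distance from the unconstrained center, so no invalid point is ever enumerated. This is a generic illustration of the constrained SEE idea, not the authors' optimized algorithm.

```python
import numpy as np

def sphere_decode(H, y, alphabet):
    """Solve min_s ||y - H s||^2 with each s_k restricted to `alphabet`
    (a finite symbol set), via depth-first Schnorr-Euchner search."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best, best_cost = None, np.inf
    s = np.zeros(n)

    def search(k, partial_cost):
        nonlocal best, best_cost
        # Unconstrained (Babai) center for layer k given the fixed s[k+1:].
        center = (z[k] - R[k, k + 1:] @ s[k + 1:]) / R[k, k]
        # SE order: valid symbols sorted by distance from the center.
        for v in sorted(alphabet, key=lambda a: abs(a - center)):
            cost = partial_cost + (R[k, k] * (v - center)) ** 2
            if cost >= best_cost:      # later symbols are even farther: prune
                break
            s[k] = v
            if k == 0:
                best, best_cost = s.copy(), cost
            else:
                search(k - 1, cost)

    search(n - 1, 0.0)
    return best, best_cost
```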
The goal of dimension reduction is to represent high-dimensional data in a lower-dimensional subspace while preserving intrinsic properties of the original data as much as possible. An important challenge in unsupervised dimension reduction is the choice of tuning parameters, because no supervised information is available and parameter selection therefore tends to be subjective and heuristic. In this paper, we propose an information-theoretic approach to unsupervised dimension reduction that allows objective tuning parameter selection. We employ quadratic mutual information (QMI) as our information measure, which is known to be less sensitive to outliers than ordinary mutual information, and estimate QMI analytically by a least-squares method in a computationally efficient way. We then provide an eigenvector-based efficient implementation for performing unsupervised dimension reduction based on the QMI estimator. The usefulness of the proposed method is demonstrated through experiments.
Squared-loss mutual information (SMI) is a robust measure of the statistical dependence between random variables. The sample-based SMI approximator called least-squares mutual information (LSMI) has been demonstrated to be useful in various machine learning tasks such as dimension reduction, clustering, and causal inference. The original LSMI approximates the pointwise mutual information using a kernel model, which is a linear combination of kernel basis functions located on paired data samples. Although LSMI was proved to achieve the optimal approximation accuracy asymptotically, its approximation capability is limited when the sample size is small due to an insufficient number of kernel basis functions. Increasing the number of kernel basis functions can mitigate this weakness, but a naive implementation of this idea significantly increases the computation costs. In this article, we show that the computational complexity of LSMI with the multiplicative kernel model, which locates kernel basis functions on unpaired data samples so that the number of kernel basis functions is the square of the sample size, is the same as that of the plain kernel model. We experimentally demonstrate that LSMI with the multiplicative kernel model is more accurate than that with plain kernel models in small-sample cases, with only a mild increase in computation time.
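To make the LSMI machinery concrete, the sketch below implements the plain (paired) kernel model: the density-ratio coefficients are obtained in closed form as (H + lambda*I)^{-1} h from Gaussian kernel matrices, and a plug-in SMI estimate is read off. The kernel widths, the regularizer, and the particular plug-in estimator are illustrative choices (cross-validation is omitted), and the multiplicative-model speedup discussed in the article, which exploits the factorized structure of the n^2 basis functions, is not shown.

```python
import numpy as np

def gauss_kernel(A, B, sigma):
    """Gaussian kernel matrix between rows of A (n, d) and B (b, d)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def lsmi(X, Y, sigma_x=1.0, sigma_y=1.0, lam=1e-3):
    """LSMI with the plain kernel model (basis functions on the n paired samples).

    Returns the plug-in SMI estimate (theta^T h - 1) / 2."""
    n = X.shape[0]
    Kx = gauss_kernel(X, X, sigma_x)          # (n, n): samples vs. centers
    Ky = gauss_kernel(Y, Y, sigma_y)
    h = np.mean(Kx * Ky, axis=0)              # h_l = (1/n) sum_i Kx[i,l] Ky[i,l]
    H = (Kx.T @ Kx) * (Ky.T @ Ky) / n**2      # averages over all (i, j) pairs
    theta = np.linalg.solve(H + lam * np.eye(n), h)
    return 0.5 * (theta @ h - 1.0)
```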
Hyunha NAM Hirotaka HACHIYA Masashi SUGIYAMA
Multi-label classification allows a sample to belong to multiple classes simultaneously, which is often the case in real-world applications such as text categorization and image annotation. In multi-label scenarios, taking into account correlations among multiple labels can boost the classification accuracy. However, this makes classifier training more challenging because handling multiple labels induces a high-dimensional optimization problem. In this paper, we propose a scalable multi-label method based on the least-squares probabilistic classifier. Through experiments, we show the usefulness of our proposed method.
Masashi SUGIYAMA Makoto YAMADA
The Hilbert-Schmidt independence criterion (HSIC) is a kernel-based statistical independence measure that can be computed very efficiently. However, it requires the kernel parameters to be determined heuristically because no objective model selection method is available. Least-squares mutual information (LSMI) is another statistical independence measure that is based on direct density-ratio estimation. Although LSMI is computationally more expensive than HSIC, LSMI is equipped with cross-validation, and thus its kernel parameters can be determined objectively. In this paper, we show that HSIC can actually be regarded as an approximation to LSMI, which allows us to utilize the cross-validation of LSMI for determining kernel parameters in HSIC. Consequently, both computational efficiency and cross-validation can be achieved.
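For reference, the empirical HSIC mentioned above is just a trace of doubly-centered Gram matrices; the sketch below shows the standard biased estimator whose kernel parameters the paper proposes to tune via LSMI cross-validation. The function name is ours.

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC from (n, n) Gram matrices K (on x) and L (on y):
    HSIC = tr(K H L H) / (n - 1)^2, with centering matrix H = I - 11^T / n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```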
Tsubasa KOBAYASHI Masashi SUGIYAMA
The objective of pool-based incremental active learning is to choose a sample to label from a pool of unlabeled samples in an incremental manner so that the generalization error is minimized. In this scenario, the generalization error often reaches a minimum in the middle of the incremental active learning procedure and then starts to increase. In this paper, we address the problem of early stopping of labeling in probabilistic classification for minimizing the generalization error and the labeling cost. Among several possible strategies, we propose to stop labeling when the empirical class-posterior approximation error is maximized. Experiments on benchmark datasets demonstrate the usefulness of the proposed strategy.
Aihua WANG Kai YANG Jianping AN Xiangyuan BU
Locating a source is of considerable interest in wireless sensor networks, and the source position can be estimated from passive measurements of signal arrival times. A novel algorithm is proposed for source location using time-of-arrival (TOA) measurements of a signal received at spatially separated sensors. The algorithm is based on the total least-squares (TLS) method, a generalization of the least-squares method for solving an overdetermined set of equations whose coefficients are noisy, and it gives an explicit solution. Performance comparisons with the standard least-squares (LS) method are made, and Monte Carlo simulations are performed. Simulation results indicate that the proposed TLS algorithm gives better results than the LS algorithm.
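The core TLS computation referred to above is the classical SVD construction sketched below for a generic overdetermined system A x ~= b with noise in both A and b; in the TOA setting, the rows of A and b typically come from linearizing the squared-range equations, so both sides are contaminated by range noise, which is what motivates TLS over ordinary LS. The function name is ours; the paper's exact linearization may differ.

```python
import numpy as np

def tls_solve(A, b):
    """Total least-squares solution of A x ~= b, allowing noise in both A and b.

    Classical construction: take the SVD of [A | b] and read the solution off
    the right-singular vector v associated with the smallest singular value,
    x = -v[:n] / v[n]."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # smallest-singular-value direction
    if np.isclose(v[n], 0.0):
        raise np.linalg.LinAlgError("TLS solution does not exist (v[n] ~ 0)")
    return -v[:n] / v[n]
```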
Jacob BENESTY Constantin PALEOLOGU Silviu CIOCHINĂ
Regularization plays a fundamental role in adaptive filtering. There are, very likely, many different ways to regularize an adaptive filter. In this letter, we propose one possible way to do so, based on a condition that makes intuitive sense. From this condition, we show how to regularize the recursive least-squares (RLS) algorithm.
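For context, the sketch below is the textbook exponentially-weighted RLS recursion with an explicit regularization parameter delta in the initialization P(0) = I/delta; choosing such a regularization in a principled way is exactly what the letter addresses, and its specific condition is not reproduced here.

```python
import numpy as np

class RegularizedRLS:
    """Standard exponentially-weighted RLS with initial regularization delta:
    P is initialized to I/delta, i.e. the cost implicitly contains delta*||w||^2."""

    def __init__(self, order, lam=0.999, delta=1.0):
        self.w = np.zeros(order)
        self.P = np.eye(order) / delta
        self.lam = lam

    def update(self, x, d):
        """One RLS step with input vector x (order,) and desired sample d."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        e = d - self.w @ x                  # a priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```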
Identifying the statistical independence of random variables is one of the important tasks in statistical data analysis. In this paper, we propose a novel non-parametric independence test based on a least-squares density-ratio estimator. Our method, called the least-squares independence test (LSIT), is distribution-free and thus more flexible than parametric approaches. Furthermore, it is equipped with a model selection procedure based on cross-validation. This is a significant advantage over existing non-parametric approaches, which often require manual parameter tuning. The usefulness of the proposed method is shown through numerical experiments.
Makoto YAMADA Masashi SUGIYAMA Gordon WICHERN Jaak SIMM
The least-squares probabilistic classifier (LSPC) is a computationally efficient alternative to kernel logistic regression. However, to ensure that its learned probabilities are non-negative, LSPC involves a post-processing step of rounding up negative parameters to zero, which can unexpectedly influence classification performance. In order to mitigate this problem, we propose a simple alternative scheme that directly rounds up the classifier's negative outputs, rather than its negative parameters. Through extensive experiments including real-world image classification and audio tagging tasks, we demonstrate that the proposed modification significantly improves classification accuracy while preserving the computational advantage of the original LSPC.
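The sketch below contrasts the two rounding strategies described above on a simplified LSPC-style model: per-class ridge regression of one-hot labels on Gaussian kernel features, with the posterior obtained either by clipping negative outputs (the proposed scheme) or by clipping negative parameters (the original post-processing). Kernel width, regularization, and names are placeholders, not the paper's exact settings.

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def lspc_fit(X, y, n_classes, rho=0.1, sigma=1.0):
    """Fit one ridge-regression parameter vector per class to one-hot labels
    over Gaussian kernel features (a simplified LSPC-style fit)."""
    K = gauss_kernel(X, X, sigma)
    Y = np.eye(n_classes)[y]                                  # (n, c) one-hot labels
    Theta = np.linalg.solve(K.T @ K + rho * np.eye(len(X)), K.T @ Y)
    return Theta

def lspc_posterior(Theta, K_test, round_outputs=True):
    """Class-posterior estimates on test kernel features K_test (m, n).

    round_outputs=True : clip negative *outputs* to zero (proposed scheme).
    round_outputs=False: clip negative *parameters* to zero (original scheme)."""
    if round_outputs:
        F = np.maximum(K_test @ Theta, 0.0)
    else:
        F = K_test @ np.maximum(Theta, 0.0)
    return F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)
```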
Kazuya UEKI Masashi SUGIYAMA Yasuyuki IHARA
In recent years, a great deal of effort has been made to estimate age from face images. It has been reported that age can be accurately estimated under controlled environments such as frontal faces, neutral expressions, and static lighting conditions. However, it is not straightforward to achieve the same level of accuracy in a real-world environment due to considerable variations in camera settings, facial poses, and illumination conditions. In this paper, we apply a recently proposed machine learning technique called covariate shift adaptation to alleviate the lighting-condition change between laboratory and practical environments. Through real-world age estimation experiments, we demonstrate the usefulness of the proposed method.
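The standard covariate-shift correction underlying such adaptation is importance weighting: each training loss is weighted by the density ratio p_test(x)/p_train(x), which for a linear least-squares model gives the closed-form weighted fit sketched below. The weights are assumed to be supplied by some density-ratio estimator; this is a generic linear-model illustration, not the authors' full age-estimation pipeline.

```python
import numpy as np

def importance_weighted_lsq(X, y, w, lam=1e-3):
    """Importance-weighted regularized least squares for covariate shift.

    X: (n, d) training features, y: (n,) targets,
    w: (n,) importance weights w_i ~ p_test(x_i) / p_train(x_i).
    Minimizes sum_i w_i (theta^T x_i - y_i)^2 + lam * ||theta||^2."""
    d = X.shape[1]
    XW = X * w[:, None]
    theta = np.linalg.solve(X.T @ XW + lam * np.eye(d), XW.T @ y)
    return theta
```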
Masashi SUGIYAMA Hirotaka HACHIYA Hisashi KASHIMA Tetsuro MORIMURA
Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers in the observed rewards. In this paper, we propose an alternative method that employs the absolute loss to enhance robustness and reliability. The proposed method is formulated as a linear programming problem that can be solved efficiently by standard optimization software, so robustness and reliability are gained without sacrificing the computational advantage. We demonstrate the usefulness of the proposed approach through a simulated robot-control task.
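The computational point above, that an absolute-loss fit can be cast as a linear program, is sketched below for a generic linear value-function fit with slack variables; the full policy-iteration loop and the paper's exact formulation are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def absolute_loss_fit(Phi, r):
    """Least-absolute-deviations fit: min_theta sum_i |phi_i^T theta - r_i|,
    written as an LP with slacks t_i >= |phi_i^T theta - r_i|.

    Phi: (n, d) feature matrix, r: (n,) observed rewards/returns."""
    n, d = Phi.shape
    c = np.concatenate([np.zeros(d), np.ones(n)])            # minimize sum of slacks
    A_ub = np.block([[ Phi, -np.eye(n)],                      #  Phi theta - t <= r
                     [-Phi, -np.eye(n)]])                     # -Phi theta - t <= -r
    b_ub = np.concatenate([r, -r])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:d]
```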
Chee-Hyun PARK Kwang-Seok HONG
Estimating the location of a mobile phone or a sound source is of considerable interest in wireless communications and signal processing. In this letter, we propose a squared-range weighted least-squares (SRWLS) method using range estimates obtained from Taylor-series-based maximum likelihood estimation. The weights can be determined more accurately with the proposed method than with existing methods that use the noise variance. The simulation results show that the proposed method is superior to the existing methods in terms of RMSE as the sensor measurement noise increases.
This letter deals with the non-line-of-sight (NLOS) problem in cellular systems used for localization. In conjunction with a variable loading technique, we present an efficient technique that makes the covariance shaping least-squares estimator robust against NLOS effects. Compared with other methods, the proposed improved estimator achieves high accuracy under white Gaussian measurement noise and NLOS effects.
Kai YANG Jianping AN Xiangyuan BU Zhan XU
A novel algorithm is proposed for source location using the time difference of arrival (TDOA) of a signal received at spatially separated sensors. The algorithm is based on the constrained total least-squares (CTLS) technique and gives an explicit solution. Simulation results demonstrate that the proposed algorithm has high location accuracy and that its performance is close to the Cramer-Rao lower bound (CRLB).
Seiichi NAKAMORI Raquel CABALLERO-AGUILA Aurora HERMOSO-CARAZO Jose D. JIMENEZ-LOPEZ Josefa LINARES-PEREZ
The least-squares linear filtering and fixed-point smoothing problems for uncertainly observed signals are considered when the signal and the additive observation noise are correlated at any sampling time. Recursive algorithms based on an innovation approach are proposed that do not require knowledge of the state-space model generating the signal, but only the autocovariance and cross-covariance functions of the signal and the observation white noise, as well as the probability that the signal is present in the observations.