Yoshikazu FUJISHIRO Takahiko YAMAMOTO Kohji KOSHIJI
This study proposes a novel method for evaluating the transmission characteristics of a three-phase filter using the “Fortescue-mode S-parameters,” which are S-parameters whose variables are transformed into symmetrical coordinates (i.e., zero-/positive-/negative-phase sequences). The behavior of the filter under three-phase current, including its non-symmetry, can be represented by these S-parameters regardless of frequency. This paper also describes a methodology for creating modal equivalent circuits that reflect the Fortescue-mode S-parameters, allowing the effects of circuit components on filter characteristics to be estimated. Thus, this method is useful not only for the measurement and evaluation but also for the analysis and design of a three-phase filter. In addition, the physical interpretation of asymmetrical/symmetrical insertion losses and the conversion method based on Fortescue-mode S-parameters are clarified.
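A minimal sketch of the Fortescue (symmetrical-component) transformation underlying these modal S-parameters, applied as a similarity transform to a 3×3 phase-domain S-matrix; the sequence convention and normalization below are common textbook choices, not necessarily the ones used in the paper.

```python
import numpy as np

# Fortescue (symmetrical-component) transformation: phase quantities (a, b, c)
# -> zero-/positive-/negative-sequence components.
a = np.exp(2j * np.pi / 3)
T = (1 / 3) * np.array([[1, 1, 1],
                        [1, a, a**2],
                        [1, a**2, a]])
T_inv = np.array([[1, 1, 1],
                  [1, a**2, a],
                  [1, a, a**2]])

def fortescue_mode_s(S_phase):
    """Similarity-transform a 3x3 phase-domain S-matrix into the modal
    (zero/positive/negative-sequence) domain."""
    return T @ S_phase @ T_inv

# A perfectly symmetric three-phase network yields a diagonal modal S-matrix,
# i.e., no coupling between the sequence components.
S_phase = np.array([[0.10, 0.05, 0.05],
                    [0.05, 0.10, 0.05],
                    [0.05, 0.05, 0.10]])
print(np.round(fortescue_mode_s(S_phase), 3))
```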
Integrating the visual attention (VA) model into an objective image quality metric is a rapidly evolving area in modern image quality assessment (IQA) research due to the significant opportunities that VA information presents. So far, the literature has suggested using either a task-free saliency map or a quality-task saliency map for integration into a quality metric. A hybrid integration approach that takes advantage of both saliency maps is presented in this paper. We compare our hybrid integration scheme with existing integration schemes using simple quality metrics. Results show that the proposed method performs better than the previous techniques in terms of prediction accuracy.
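One way such a hybrid integration could be realized is a convex combination of the two saliency maps used as a pooling weight for a local quality map; the mixing rule and parameter below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def hybrid_weighted_quality(quality_map, sal_task_free, sal_quality_task, alpha=0.5):
    """Pool a per-pixel quality map with a hybrid saliency weight.

    quality_map      : local quality scores from a simple metric (e.g., an SSIM map)
    sal_task_free    : saliency map from a task-free VA model
    sal_quality_task : saliency map recorded under a quality-assessment task
    alpha            : convex mixing coefficient (an assumed parameter)
    """
    hybrid = alpha * sal_task_free + (1.0 - alpha) * sal_quality_task
    hybrid = hybrid / (hybrid.sum() + 1e-12)     # normalize to a pooling weight
    return float((hybrid * quality_map).sum())   # saliency-weighted quality score
```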
Wei LI Masayuki MUKUNOKI Yinghui KUANG Yang WU Michihiko MINOH
Re-identifying the same person in different images is a distinct challenge for visual surveillance systems. Building an accurate correspondence between highly variable images requires a suitable dissimilarity measure. To date, most existing measures have used an adapted distance based on a learned metric. Unfortunately, real-world human image data, which tends to show large intra-class variations and small inter-class differences, continues to prevent these measures from achieving satisfactory re-identification performance. Recognizing that the neighboring distribution can provide additional useful information to help tackle the deviation of the to-be-measured samples, we propose a novel dissimilarity measure from the perspective of neighborhood-wise relative information, which transfers the effectiveness of well-distributed samples to badly-distributed samples so as to make intra-class dissimilarities smaller than inter-class dissimilarities in a learned discriminative space. The effectiveness of this method is demonstrated by explanation and experimentation.
Guanwen ZHANG Jien KATO Yu WANG Kenji MASE
There exist two intrinsic issues in multiple-shot person re-identification: (1) large differences in camera view, illumination, and non-rigid deformation of posture make the intra-class variance even larger than the inter-class variance; (2) only a few training samples are available for learning tasks in a realistic re-identification scenario. In our previous work, we proposed a local distance comparison framework to deal with the first issue. In this paper, to deal with the second issue (i.e., to derive a reliable distance metric from limited training data), we propose an adaptive learning method to learn an adaptive distance metric, which integrates prior knowledge learned from a large existing auxiliary dataset and task-specific information extracted from a much smaller training dataset. Experimental results on several public benchmark datasets show that, combined with the local distance comparison framework, our adaptive learning method is superior to conventional approaches.
Keisuke KODAIRA Mihoko WADA Tomoharu SHIBUYA
The amplitude damping (AD) quantum channel is one of the models describing the evolution of quantum states. A construction of quantum error-correcting codes for the AD channel based on classical codes has been presented, and Shor et al. proposed a class of classical codes over F3 that are efficiently applicable to this construction. In this study, we extend Shor's construction to codes over F7, and succeed in constructing an AD code that has better parameters than the AD codes constructed by Shor et al.
Takuma WATANABE Hiroyoshi YAMADA Motofumi ARII Ryoichi SATO Sang-Eun PARK Yoshio YAMAGUCHI
Soil moisture retrieval from polarimetric synthetic aperture radar (SAR) imagery over forested terrain is quite a challenging problem, because the radar backscatter is affected not only by the moisture content but also by large vegetation structures such as trunks and branches. Although a large number of algorithms that exploit radar backscatter to infer soil moisture have been developed, most of them are limited to the case of bare soil or little vegetation cover, where an incident wave can easily reach the soil surface without serious disturbance. However, natural land surfaces are rarely free from vegetation, and the disturbance in radar backscatter must be properly compensated to achieve accurate soil moisture measurement over a diversity of terrain surfaces. In this paper, a simple polarimetric parameter, the co-polarized backscattering ratio, is shown to be a criterion for inferring the moisture content of forested terrain, on the basis of both a theoretical forest scattering simulation and an experimental validation under well-controlled conditions. Although modeling of forested terrain requires a number of scattering mechanisms to be taken into account, it is essential to isolate them one by one to better understand how soil moisture affects a specific and principal scattering component. For this purpose, we consider a simplified microwave scattering model for forested terrain, which consists of a cloud of dielectric cylinders, representing trunks, standing vertically on a flat dielectric soil surface. This simplified model can be regarded as a simple boreal forest model, and it is revealed that the co-polarization ratio in the ground-trunk double-bounce backscattering can be a useful index for monitoring the relative variation in the moisture content of the boreal forest.
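For reference, the co-polarized backscattering ratio is conventionally defined from the co-polarized backscattering coefficients (equivalently, ensemble-averaged scattering-matrix elements); whether HH or VV appears in the numerator and whether a linear or dB scale is used are not stated in the abstract.

```latex
\rho_{\mathrm{co}} \;=\; \frac{\sigma^{0}_{HH}}{\sigma^{0}_{VV}}
\;=\; \frac{\langle \lvert S_{HH} \rvert^{2} \rangle}{\langle \lvert S_{VV} \rvert^{2} \rangle}
```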
Chuang SHI Hideyuki NOMURA Tomoo KAMAKURA Woon-Seng GAN
Earlier attempts to deploy two units of parametric loudspeakers have shown encouraging results in improving the accuracy of spatial audio reproductions. As compared to a pair of conventional loudspeakers, this improvement is mainly a result of being free of crosstalk due to the sharp directivity of the parametric loudspeaker. By replacing the normal parametric loudspeaker with the steerable parametric loudspeaker, a flexible sweet spot can be created that tolerates head movements of the listener. However, spatial aliasing effects of the primary frequency waves are always observed in the steerable parametric loudspeaker. We are motivated to make use of the spatial aliasing effects to create two sound beams from one unit of the steerable parametric loudspeaker. Hence, a reduction of power consumption and physical size can be achieved by cutting down the number of loudspeakers used in an audio system. By introducing a new parameter, namely the relative steering angle, we propose a stereophonic beamsteering method that can control the amplitude difference corresponding to the interaural level difference (ILD) between two sound beams. Currently, this proposed method does not support the reproduction of interaural time differences (ITD).
Masaki KUBO Kensuke NAKANISHI Kentaro YANAGIHARA Shinsuke HARA
The use of cooperative nodes is effective for enhancing the reliability of wireless data transmission between a source and a destination by means of the transmit diversity effect. However, in its application to wireless multi-hop networks, how to form cooperative node candidates and how to select multiple cooperative nodes from among them have not been well investigated. In this paper, we propose a multiple cooperative node selection method based on a criterion composed of “quality” and “angle” metrics, which can select and order adequate cooperative nodes. Computer simulation results show that the proposed method can effectively reduce the packet error rate without any knowledge of node location.
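A rough sketch of how a combined “quality”/“angle” criterion might rank cooperative-node candidates; the definitions of the two metrics and their weighting are assumptions, since the abstract does not give them.

```python
import math

def rank_candidates(candidates, w_quality=0.5, w_angle=0.5):
    """Order cooperative-node candidates by a combined quality/angle score.

    Each candidate is a dict with
      'quality' : link-quality metric normalized to [0, 1]          (assumed form)
      'angle'   : angular deviation (rad) from the source-destination
                  direction, smaller being better                   (assumed form)
    """
    def score(c):
        angle_term = 1.0 - abs(c['angle']) / math.pi   # map [0, pi] -> [1, 0]
        return w_quality * c['quality'] + w_angle * angle_term
    return sorted(candidates, key=score, reverse=True)

# Example: select the two best cooperative nodes out of three candidates.
cands = [{'id': 1, 'quality': 0.9, 'angle': 0.8},
         {'id': 2, 'quality': 0.7, 'angle': 0.1},
         {'id': 3, 'quality': 0.5, 'angle': 1.5}]
print([c['id'] for c in rank_candidates(cands)[:2]])
```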
Minjia SHI Yan LIU Patrick SOLÉ
The Lee complete ρ weight enumerator and the exact complete ρ weight enumerator over M_{n×s}(F_l + vF_l + v²F_l), where v³ = v, are defined, and the MacWilliams identities with respect to the RT metric for these two weight enumerators of linear codes over M_{n×s}(F_l + vF_l + v²F_l) are obtained, respectively. Finally, we give two examples to illustrate the obtained results.
Thao-Ngoc NGUYEN Bac LE Kazunori MIYATA
This paper introduces a novel approach to feature description by integrating the intensity order and textures in different support regions into a compact vector. We first propose the Intensity Order Local Binary Pattern (IO-LBP) operator, which simultaneously encodes the gradient and texture information in the local neighborhood of a pixel. We divide each region of interest into segments according to the order of pixel intensities, build one histogram of IO-LBP patterns for each segment, and then concatenate all histograms to obtain a feature descriptor. Furthermore, multiple support regions are adopted to enhance the distinctiveness. The proposed descriptor effectively describes a region at both local and global levels, and thus high performance is expected. Experimental results on the Oxford benchmark and on images of cast shadows show that our approach is invariant to common photometric and geometric transformations, such as illumination change and image rotation, and robust to complex lighting effects caused by shadows. It achieves accuracy comparable to that of state-of-the-art methods while performing considerably faster.
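A rough sketch of an intensity-order LBP-style descriptor for a single support region, based on a reading of the abstract; the particular LBP variant, the number of order segments, and the normalization are assumptions.

```python
import numpy as np

def lbp8(patch):
    """Basic 8-neighbour LBP code for each interior pixel of a grayscale patch."""
    c = patch[1:-1, 1:-1]
    neighbours = [patch[:-2, :-2], patch[:-2, 1:-1], patch[:-2, 2:], patch[1:-1, 2:],
                  patch[2:, 2:], patch[2:, 1:-1], patch[2:, :-2], patch[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.int32) << bit
    return code  # values in [0, 255]

def io_lbp_descriptor(region, n_segments=4):
    """Single-support-region descriptor: LBP histograms per intensity-order segment.

    1. Compute an LBP code for every interior pixel of the region.
    2. Partition the pixels into segments by the order of their intensities.
    3. Build one histogram per segment and concatenate them.
    """
    codes = lbp8(region).ravel()
    intensities = region[1:-1, 1:-1].ravel()
    order = np.argsort(intensities)                 # pixels sorted by intensity
    segments = np.array_split(order, n_segments)    # equal-size order segments
    hists = [np.bincount(codes[idx], minlength=256).astype(float) for idx in segments]
    desc = np.concatenate(hists)
    return desc / (np.linalg.norm(desc) + 1e-12)

# Example on a random patch; multiple support regions would simply concatenate
# several such descriptors computed at different region sizes.
rng = np.random.default_rng(0)
print(io_lbp_descriptor(rng.integers(0, 256, (32, 32))).shape)  # (1024,)
```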
Jinhee CHUN Akiyoshi SHIOURA Truong MINH TIEN Takeshi TOKUYAMA
We give a unified view of greedy geometric routing algorithms in ad hoc networks. To this end, we first present a general form of greedy routing algorithm using a class of objective functions that are invariant under congruent transformations of a point set. We show that several known greedy routing algorithms, such as Greedy Routing, Compass Routing, and Midpoint Routing, can be regarded as special cases of the generalized greedy routing algorithm. In addition, inspired by this unified view, we propose three new greedy routing algorithms. We then derive a sufficient condition for our generalized greedy routing algorithm to guarantee packet delivery on every Delaunay graph. This condition makes it easier to check whether a given routing algorithm guarantees packet delivery, and it is closed under convex linear combinations of objective functions. It is shown that Greedy Routing, Midpoint Routing, and the three new greedy routing algorithms proposed in this paper satisfy the sufficient condition, i.e., they guarantee packet delivery on Delaunay graphs. We also discuss the merits and demerits of these methods.
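A minimal sketch of a generalized greedy forwarding rule together with objective functions reproducing some of the known special cases; the stall handling and the exact form of the paper's class of objective functions are assumptions made for illustration.

```python
import math

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(pos, adj, s, t, objective, max_hops=1000):
    """Generic greedy forwarding: at node u, send to the neighbour v minimizing
    objective(pos[u], pos[v], pos[t]); returns the path or None if it stalls."""
    path, u = [s], s
    for _ in range(max_hops):
        if u == t:
            return path
        v = min(adj[u], key=lambda w: objective(pos[u], pos[w], pos[t]))
        if v in path:                      # simple loop/stall guard (an assumption)
            return None
        path.append(v)
        u = v
    return None

# Known special cases expressed as objective functions.
greedy   = lambda u, v, t: d(v, t)                                        # Greedy Routing
midpoint = lambda u, v, t: d(v, ((u[0] + t[0]) / 2, (u[1] + t[1]) / 2))   # Midpoint Routing

def compass(u, v, t):                                                     # Compass Routing
    ang = abs(math.atan2(v[1] - u[1], v[0] - u[0]) - math.atan2(t[1] - u[1], t[0] - u[0]))
    ang %= 2 * math.pi
    return min(ang, 2 * math.pi - ang)

# Tiny example network.
pos = {0: (0, 0), 1: (1, 0.5), 2: (1, -0.5), 3: (2, 0)}
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(greedy_route(pos, adj, 0, 3, greedy))   # [0, 1, 3]
```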
Lechang LIU Keisuke ISHIKAWA Tadahiro KURODA
Parametric-resonance-based solutions for a sub-gigahertz radio-frequency transceiver with a 0.3V supply voltage are proposed in this paper. As an implementation example, a 0.3V 720µW variation-tolerant injection-locked frequency multiplier is developed in 90nm CMOS. It features a parametric-resonance-based multi-phase synthesis scheme, thereby achieving the lowest supply voltage among state-of-the-art frequency synthesizers, with -110dBc @ 600kHz phase noise and an 873MHz-1.008GHz locking range.
Recently, the wavelet-based estimation method has gradually become popular as a new tool for software reliability assessment. The wavelet transform possesses both spatial and temporal resolution, which makes the wavelet-based estimation method powerful in extracting necessary information from observed software fault data, from global and local points of view at the same time. This enables us to estimate the software reliability measures with higher accuracy. However, existing works have focused only on point estimation in the wavelet-based approach, where the underlying stochastic process describing the software-fault detection phenomena is modeled by a non-homogeneous Poisson process. In this paper, we propose an interval estimation method for the wavelet-based approach, aiming to take account of the uncertainty that is left out of consideration in point estimation. More specifically, we employ the simulation-based bootstrap method and derive the confidence intervals of software reliability measures such as the software intensity function and the expected cumulative number of software faults. To this end, we extend the well-known thinning algorithm for the purpose of generating multiple sample data from one set of software-fault count data. The results of numerical analysis with real software fault data make it clear that our proposal is a decision support method that enables practitioners to make flexible decisions in software development project management.
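The classical thinning algorithm that the proposal extends can be sketched as follows; the intensity function and its bound in the example are purely illustrative, and the paper's extension to generating multiple bootstrap samples from one fault-count data set is not reproduced here.

```python
import math
import random

def thinning_nhpp(intensity, lam_max, t_end, rng=None):
    """Lewis-Shedler thinning: sample event times of a non-homogeneous Poisson
    process with intensity(t) <= lam_max on [0, t_end]."""
    rng = rng or random.Random(0)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)                 # candidate from a homogeneous PP
        if t > t_end:
            return times
        if rng.random() <= intensity(t) / lam_max:    # accept with prob. intensity(t)/lam_max
            times.append(t)

# Example with an (illustrative) exponentially decaying software intensity function.
faults = thinning_nhpp(lambda t: 5.0 * math.exp(-0.1 * t), lam_max=5.0, t_end=30.0)
print(len(faults), [round(s, 2) for s in faults[:3]])
```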
Rubing HUANG Dave TOWEY Jinfu CHEN Yansheng LU
Combinatorial interaction testing has been well studied in recent years and has been widely applied in practice. It generally aims at generating an effective test suite (an interaction test suite) in order to identify faults that are caused by parameter interactions. Due to some constraints in practical applications (e.g. limited testing resources), for example in combinatorial interaction regression testing, prioritized interaction test suites (called interaction test sequences) are often employed. Consequently, many strategies have been proposed to guide interaction test suite prioritization. It is, therefore, important to be able to evaluate the different interaction test sequences created by different strategies. A well-known metric is the Average Percentage of Combinatorial Coverage (APCCλ for short), which assesses the rate of interaction coverage of a strength λ (level of interaction among parameters) covered by a given interaction test sequence S. However, APCCλ has two drawbacks: firstly, it has two requirements (that all test cases in S be executed, and that all possible λ-wise parameter value combinations be covered by S); and secondly, it can only use a single strength λ (rather than multiple strengths) to evaluate the interaction test sequence, which means that it is not a comprehensive evaluation. To overcome the first drawback, we propose an enhanced metric, Normalized APCCλ (NAPCC), to replace APCCλ. Additionally, to overcome the second drawback, we propose three new metrics: the Average Percentage of Strengths Satisfied (APSS); the Average Percentage of Weighted Multiple Interaction Coverage (APWMIC); and the Normalized APWMIC (NAPWMIC). These metrics comprehensively assess a given interaction test sequence by considering interaction coverage at different strengths. Empirical studies show that the proposed metrics can be used to distinguish different interaction test sequences, and hence can be used to compare different test prioritization strategies.
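A rough sketch of the kind of λ-wise coverage rate that APCCλ-style metrics summarize; this is an illustrative formalization rather than the exact APCCλ or NAPCC formula, and the universe of combinations is taken here from the suite itself.

```python
from itertools import combinations

def coverage_rate(test_sequence, strength):
    """Cumulative fraction of lambda-wise value combinations covered after each
    test in the sequence, and its average over the sequence."""
    universe, covered, rates = set(), set(), []
    for test in test_sequence:                  # all combinations attainable by the suite
        for cols in combinations(range(len(test)), strength):
            universe.add((cols, tuple(test[c] for c in cols)))
    for test in test_sequence:
        for cols in combinations(range(len(test)), strength):
            covered.add((cols, tuple(test[c] for c in cols)))
        rates.append(len(covered) / len(universe))
    return rates, sum(rates) / len(rates)

# Example: 3 parameters with 2 values each, evaluated at strength (lambda) = 2.
suite = [(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 0, 0)]
print(coverage_rate(suite, strength=2))
```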
Takayuki NOZAKI Kenta KASAI Kohichi SAKANIWA
In this paper, we propose a message passing decoding algorithm which lowers decoding error rates in the error floor regions for non-binary low-density parity-check (LDPC) codes transmitted over the binary erasure channel (BEC) and memoryless binary-input output-symmetric (MBIOS) channels. In the case of the BEC, this decoding algorithm combines belief propagation (BP) decoding with maximum a posteriori (MAP) decoding on zigzag cycles, which cause decoding errors in the error floor region. We show that MAP decoding on the zigzag cycles can be realized by means of a message passing algorithm. Moreover, we extend this decoding algorithm to the MBIOS channels. Simulation results demonstrate that the decoding error rates in the error floor regions achieved by the proposed decoding algorithm are lower than those of the BP decoder.
Takao MURAKAMI Kenta TAKAHASHI Kanta MATSUURA
Biometric identification has recently attracted attention because of its convenience: it requires neither a user ID nor a smart card. However, both the identification error rate and the response time increase as the number of enrollees increases. In this paper, we combine a score level fusion scheme and a metric space indexing scheme to improve the accuracy and response time of biometric identification, using only scores as information sources. We first propose a score level indexing and fusion framework which can be constructed from the following three schemes: (I) a pseudo-score based indexing scheme, (II) a multi-biometric search scheme, and (III) a score level fusion scheme which handles missing scores. A multi-biometric search scheme can be newly obtained by applying a pseudo-score based indexing scheme to multi-biometric identification. We then propose the NBS (Naive Bayes search) scheme as a multi-biometric search scheme and discuss its optimality with respect to the retrieval error rate. We evaluated our proposal using datasets of multiple fingerprints and face scores from multiple matchers. The results showed that our proposal significantly improved the accuracy of the unimodal biometrics while reducing the average number of score computations on both datasets.
Trung Thanh NGO Yasushi MAKIHARA Hajime NAGAHARA Yasuhiro MUKAIGAWA Yasushi YAGI
Gait-based owner authentication using accelerometers has recently been studied extensively owing to the development of wearable electronic devices. An actual gait signal is always subject to change due to many factors, including variation of sensor attachment. In this research, we tackle the practical problem of sensor-orientation inconsistency, in which signal sequences are captured at different sensor orientations. We present an iterative signal matching algorithm based on a phase-registration technique that simultaneously estimates the relative sensor orientation and registers the 3D acceleration signals. The iterative framework is initialized using 1D orientation-invariant resultant signals, which are computed from the 3D signals. As a result, the matching algorithm is robust to any initial sensor orientation. This matching algorithm is used to match a probe signal and a gallery signal in the proposed owner authentication method. Experiments using actual gait signals under various conditions, such as different days, sensors, weights being carried, and sensor orientations, show that our authentication method achieves positive results.
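A minimal sketch of the orientation-invariant 1D resultant signal mentioned above, taken as the per-sample Euclidean norm of the 3D acceleration; the iterative orientation/phase matching itself is not reproduced here.

```python
import numpy as np

def resultant_signal(acc_xyz):
    """Orientation-invariant 1D resultant of a 3D acceleration sequence:
    the Euclidean norm of each (x, y, z) sample."""
    return np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)

# The resultant is unchanged by an arbitrary rotation of the sensor frame.
rng = np.random.default_rng(0)
sig = rng.normal(size=(100, 3))                # a 3D acceleration sequence
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal matrix
print(np.allclose(resultant_signal(sig), resultant_signal(sig @ Q.T)))  # True
```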
Osamu TODA Masahiro YUKAWA Shigenobu SASAKI Hisakazu KIKUCHI
We propose a novel adaptive filtering scheme named metric-combining normalized least mean square (MC-NLMS). The proposed scheme is based on iterative metric projections with a metric designed by combining multiple metric matrices convexly in an adaptive manner, thereby taking advantage of metrics which rely on multiple pieces of information. We compare the improved PNLMS (IPNLMS) algorithm with the natural proportionate NLMS (NPNLMS) algorithm, which is a special case of MC-NLMS, and show that the performance of NPNLMS is controllable via the combination coefficient, as opposed to IPNLMS. We also present an application to an acoustic echo cancellation problem and show the efficacy of the proposed scheme.
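A minimal sketch of an NLMS-type update whose metric is a convex combination of metric matrices, in the spirit of the scheme described above; the fixed combination coefficients and the example metrics are assumptions, since the paper adapts the coefficients.

```python
import numpy as np

def metric_nlms_step(w, x, d, metrics, coeffs, mu=0.5, eps=1e-8):
    """One NLMS-type update whose step uses a convex combination of metric matrices.

    w, x    : filter coefficients and input (regressor) vector, shape (L,)
    d       : desired sample
    metrics : list of positive-definite (L, L) metric matrices
    coeffs  : non-negative combination coefficients summing to 1 (fixed here)
    """
    M = sum(c * Mi for c, Mi in zip(coeffs, metrics))   # combined metric
    e = d - w @ x                                        # a priori error
    return w + mu * e * (M @ x) / (x @ M @ x + eps)

# Example: identity metric (plain NLMS) combined 50/50 with a proportionate-style
# diagonal metric; the true system is identified from noiseless data.
L = 8
rng = np.random.default_rng(0)
w_true, w = rng.normal(size=L), np.zeros(L)
for _ in range(2000):
    x = rng.normal(size=L)
    prop = np.diag(np.abs(w) + 1e-3)
    prop *= L / np.trace(prop)                           # keep a comparable scale
    w = metric_nlms_step(w, x, w_true @ x, [np.eye(L), prop], [0.5, 0.5])
print(np.linalg.norm(w - w_true))                        # small residual error
```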
Two classes of 3rd-order correlation immune symmetric Boolean functions have been constructed in [1] and [2], respectively, in which some interesting phenomena concerning the algebraic degree have been observed as well. However, a good explanation has not been given. In this paper, we obtain formulas for the degree of these functions, which explain the behavior of their degree well.
Regularized forward selection is viewed as a method for obtaining a sparse representation in a nonparametric regression problem. In regularized forward selection, the regression output is represented by a weighted sum of several significant basis functions that are selected from among a large number of candidates by using a greedy training procedure in terms of a regularized cost function and applying an appropriate model selection method. In this paper, we propose a model selection method for regularized forward selection. For this purpose, we focus on the reduction of the cost function that is brought about by appending a new basis function in the greedy training procedure. We first clarify a bias and variance decomposition of the cost reduction and then derive a probabilistic upper bound for the variance of the cost reduction under some conditions. The derived upper bound reflects an essential feature of the greedy training procedure, i.e., it selects the basis function which maximally reduces the cost function. We then propose a thresholding method for determining significant basis functions by applying the derived upper bound as a threshold level and effectively combining it with the leave-one-out cross validation method. Several numerical experiments show that the generalization performance of the proposed method is comparable to that of the other methods, while the number of basis functions selected by the proposed method is much smaller than that selected by the other methods. We can therefore say that the proposed method is able to yield a sparse representation while keeping relatively good generalization performance. Moreover, our method has the advantage that it does not require selection of a regularization parameter.
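A generic sketch of a greedy, regularized forward-selection loop of the kind described above; the ridge-style cost and the fixed stopping tolerance are placeholders, not the paper's variance-bound-based thresholding rule.

```python
import numpy as np

def regularized_forward_selection(Phi, y, lam=1e-2, max_terms=10, tol=1e-6):
    """Greedy forward selection under an L2-regularized least-squares cost.

    Phi : (n, m) matrix of candidate basis-function outputs
    y   : (n,) target vector
    At each step the candidate that maximally reduces the regularized cost is
    appended; selection stops when the reduction falls below a threshold.
    """
    m = Phi.shape[1]
    selected, weights, cost_prev = [], None, float(y @ y)
    for _ in range(max_terms):
        best = None
        for j in range(m):
            if j in selected:
                continue
            A = Phi[:, selected + [j]]
            w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
            r = y - A @ w
            cost = float(r @ r + lam * w @ w)            # regularized cost
            if best is None or cost < best[1]:
                best = (j, cost, w)
        if best is None or cost_prev - best[1] < tol:    # reduction too small: stop
            break
        selected.append(best[0])
        weights, cost_prev = best[2], best[1]
    return selected, weights

# Example: recover a sparse combination of Gaussian basis functions.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
centers = np.linspace(0, 1, 30)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.01)
y = 2.0 * Phi[:, 5] - 1.5 * Phi[:, 20] + 0.01 * rng.normal(size=100)
print(regularized_forward_selection(Phi, y)[0])
```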