Ryo NAKAMATA Ryo OYAMA Shouhei KIDERA Tetsuo KIRIMOTO
Synthetic aperture radar (SAR) is an indispensable tool for ground surface measurement under low visibility owing to its robustness against optically harsh environments such as adverse weather or darkness. As a leading-edge approach to SAR image processing, the coherent change detection (CCD) technique has recently been established; it detects a temporal change in the same region through phase interferometry of two complex SAR images. However, for general damage assessment following an earthquake or mudslide, not only the detection of surface change but also an assessment of the height change, such as that caused by a building collapse or road subsidence, is required. While the interferometric SAR (InSAR) approach is suitable for height assessment, it is essentially unable to detect temporal change when only a single observation is made. To address this issue, we previously proposed a method of estimating height change from the phase interferometry of the coherence function obtained from two band-divided SAR images. However, the accuracy of this method degrades significantly in noisy situations owing to its use of the phase difference. To resolve this problem, this paper proposes a novel height estimation method that exploits the frequency characteristic of the coherence phases obtained from multiple band-divided sub-images of each SAR image. The results obtained from numerical simulations and experimental data demonstrate that the proposed method offers accurate height change estimation while avoiding degradation of the spatial resolution.
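For orientation, the windowed sample coherence is the standard CCD statistic, and one plausible reading of the frequency characteristic exploited above is a coherence phase linear in the sub-band frequency. The sketch below is illustrative only; the authors' exact geometry, window, and sign conventions may differ:

```latex
% Sample coherence between two complex SAR images s_1, s_2,
% estimated over a local window W (the standard CCD statistic):
\hat{\gamma} = \frac{\sum_{n \in W} s_1(n)\, s_2^{*}(n)}
  {\sqrt{\sum_{n \in W} |s_1(n)|^{2} \, \sum_{n \in W} |s_2(n)|^{2}}}

% Illustrative frequency dependence: a height change \Delta h
% (incidence angle \theta) shifts the two-way path, giving a
% coherence phase linear in the sub-band centre frequency f_i:
\angle \hat{\gamma}(f_i) \approx -\frac{4\pi f_i \, \Delta h \cos\theta}{c}
```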
Hiroshi FUJIWARA Yasuhiro KONNO Toshihiro FUJITO
The multislope ski-rental problem is an extension of the classical ski-rental problem in which the player has several options, each combining a per-time fee and an initial fee, in addition to the pure renting and buying options. Damaschke gave a lower bound of 3.62 on the competitive ratio for the case where an arbitrary number of options can be offered. In this paper we propose a scheme that, for the number of options given as an input, provides a lower bound on the competitive ratio, by extending the method of Damaschke. This is the first result to establish a lower bound for each of the cases with five or more options: for example, a lower bound of 2.95 for the 5-option case, 3.08 for the 6-option case, and 3.18 for the 7-option case. Moreover, it turns out that our lower bounds for the 3- and 4-option cases coincide with the known upper bounds. We therefore conjecture that our scheme in general derives matching lower and upper bounds.
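For readers new to the setting, the classical two-option case (rent at 1 per day, or buy outright at price B) already shows how such competitive ratios arise: the break-even rule attains ratio 2 - 1/B, which the multislope options discussed above improve upon. A minimal sketch (the function name and horizon bound are illustrative):

```python
def competitive_ratio(buy_price: int, switch_day: int) -> float:
    """Worst-case ratio of the online cost of 'rent until switch_day,
    then buy' (rent = 1 per day, buying costs buy_price) to the
    offline optimum, maximized over all season lengths t."""
    worst = 0.0
    for t in range(1, 10 * buy_price):  # longer horizons add nothing new
        online = t if t < switch_day else (switch_day - 1) + buy_price
        offline = min(t, buy_price)
        worst = max(worst, online / offline)
    return worst

# The break-even rule (switch on day B) attains ratio 2 - 1/B.
print(competitive_ratio(100, 100))  # prints 1.99
```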
Kenshi SAHO Hiroaki HOMMA Takuya SAKAMOTO Toru SATO Kenichi INOUE Takeshi FUKUDA
Recent studies have focused on developing security systems that use micro-Doppler radars to detect human bodies. However, the resolution of these conventional methods is unsuitable for identifying bodies, and moreover, most of them were designed for a solitary target or sufficiently well-spaced targets. This paper proposes a solution to these problems: an image separation method for two closely spaced pedestrian targets. The proposed method first develops an image of the targets using ultra-wideband (UWB) Doppler imaging radar. Next, the targets in the image are separated using a supervised learning-based separation method trained on a data set extracted from a range profile. We experimentally evaluated the separation performance of several representative supervised separation methods and selected the most appropriate one. Finally, we reject false points caused by target interference based on the separation result. An experiment with two pedestrians walking with a body separation of 0.44 m shows that our method accurately separates their images using a UWB Doppler radar with a nominal down-range resolution of 0.3 m. We also examine various target positions, establish the achievable performance, and derive optimal settings for our method.
Atsushi TAKAYASU Noboru KUNIHIRO
At CaLC 2001, Howgrave-Graham proposed a polynomial-time algorithm for solving univariate linear equations modulo an unknown divisor of a known composite integer, the so-called partially approximate common divisor problem. So far, two multivariate generalizations of the problem have been considered in the context of cryptanalysis. The first is simultaneous modular univariate linear equations, for which a polynomial-time algorithm was proposed at ANTS 2012 by Cohn and Heninger. The second is modular multivariate linear equations, for which a polynomial-time algorithm was proposed at Asiacrypt 2008 by Herrmann and May. Both algorithms cover Howgrave-Graham's algorithm in the univariate case. Moreover, both multivariate problems become identical to Howgrave-Graham's problem in the asymptotic cases of the root bounds; however, the former algorithms do not cover Howgrave-Graham's algorithm in those cases. In this paper, we introduce a strategy for natural algorithm constructions that takes the sizes of the root bounds into account. We work out the selection of polynomials used to construct the lattices. Our algorithms are superior to all known attacks that solve these multivariate equations and generalize to an arbitrary number of variables. They also achieve better cryptanalytic bounds for some applications related to RSA cryptosystems.
Yu TSAO Ting-Yao HU Sakriani SAKTI Satoshi NAKAMURA Lin-shan LEE
This study proposes a variable selection linear regression (VSLR) adaptation framework to improve the accuracy of automatic speech recognition (ASR) with only limited and unlabeled adaptation data. The proposed framework can be divided into three phases. The first phase prepares multiple variable subsets by applying a ranking filter to the original regression variable set. The second phase determines the best variable subset based on a pre-determined performance evaluation criterion and computes a linear regression (LR) mapping function based on the determined subset. The third phase performs adaptation in either the model or the feature space. The three phases can select the optimal components and effectively remove redundancies in the LR mapping function, and thus enable VSLR to provide satisfactory adaptation performance even with a very limited number of adaptation statistics. We formulate model-space VSLR and feature-space VSLR by integrating the VS techniques into conventional LR adaptation systems. Experimental results on the Aurora-4 task show that model-space VSLR and feature-space VSLR, respectively, outperform standard maximum likelihood linear regression (MLLR) and feature-space MLLR (fMLLR) and their extensions, achieving notable word error rate (WER) reductions under per-utterance unsupervised adaptation.
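The three-phase idea above can be sketched generically with ordinary least squares: rank variables, pick the best-ranked subset on a validation criterion, and fit the final mapping on that subset. This is a toy stand-in (function name, ranking filter, and criterion are all illustrative), not the authors' HMM-space procedure:

```python
import numpy as np

def vslr_sketch(X, y, n_val=20):
    """Toy three-phase sketch: (1) rank regression variables with a
    filter, (2) choose the best-ranked subset on a validation
    criterion, (3) fit the final linear mapping on that subset."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(y))
    tr, va = idx[n_val:], idx[:n_val]
    # Phase 1: ranking filter = |correlation with the target|
    corr = np.abs([np.corrcoef(X[tr, j], y[tr])[0, 1]
                   for j in range(X.shape[1])])
    order = np.argsort(-corr)
    # Phase 2: choose the subset size minimizing validation error
    best = None
    for k in range(1, X.shape[1] + 1):
        S = order[:k]
        w, *_ = np.linalg.lstsq(X[tr][:, S], y[tr], rcond=None)
        err = np.mean((X[va][:, S] @ w - y[va]) ** 2)
        if best is None or err < best[0]:
            best = (err, S, w)
    # Phase 3: the selected mapping would then be applied for adaptation
    _, S, w = best
    return S, w
```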
Ryuichi HARASAWA Yutaka SUEYOSHI Aichi KUDO
In the paper [4], the authors generalized the Cipolla-Lehmer method [2][5] for computing square roots in finite fields to the case of r-th roots with r prime, and compared it with the Adleman-Manders-Miller method [1] from the experimental point of view. In this paper, we compare these two methods from the theoretical point of view.
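The square-root member of the Cipolla-Lehmer family admits a compact sketch; the r-th-root generalization compared in the paper works analogously in higher extension fields. A minimal implementation of Cipolla's method for an odd prime p and a quadratic residue n (illustrative, not the paper's generalized algorithm):

```python
def cipolla_sqrt(n, p):
    """Cipolla's method: return r with r*r % p == n % p,
    for an odd prime p and a quadratic residue n mod p."""
    n %= p
    if n == 0:
        return 0
    # Find a such that a^2 - n is a quadratic non-residue (Euler's criterion).
    a = 0
    while pow((a * a - n) % p, (p - 1) // 2, p) != p - 1:
        a += 1
    w2 = (a * a - n) % p  # work in F_p[x] / (x^2 - w2)

    def mul(u, v):
        # (u0 + u1*x) * (v0 + v1*x), reducing x^2 to w2
        return ((u[0] * v[0] + u[1] * v[1] * w2) % p,
                (u[0] * v[1] + u[1] * v[0]) % p)

    # Square-and-multiply: (a + x)^((p+1)/2) lies in F_p and squares to n.
    r, base, e = (1, 0), (a, 1), (p + 1) // 2
    while e:
        if e & 1:
            r = mul(r, base)
        base = mul(base, base)
        e >>= 1
    return r[0]

print(cipolla_sqrt(10, 13))  # prints 6 (the other root is 7)
```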
Chidambaram CHIDAMBARAM Hugo VIEIRA NETO Leyza Elmeri Baldo DORINI Heitor Silvério LOPES
Face recognition plays an important role in security applications, but in real-world conditions face images are typically subject to issues that compromise recognition performance, such as geometric transformations, occlusions and changes in illumination. Most face detection and recognition works to date deal with single face images using global features and supervised learning. Differently from that context, here we propose a multiple face recognition approach based on local features which does not rely on supervised learning. In order to deal with multiple face images under varying conditions, the extraction of invariant and discriminative local features is achieved by using the SURF (Speeded-Up Robust Features) approach, and the search for regions from which optimal features can be extracted is done by an improved ABC (Artificial Bee Colony) algorithm. Thresholds and parameters for SURF and improved ABC algorithms are determined experimentally. The approach was extensively assessed on 99 different still images - more than 400 trials were conducted using 20 target face images and still images under different acquisition conditions. Results show that our approach is promising for real-world face recognition applications concerning different acquisition conditions and transformations.
Iakovos OURANOS Kazuhiro OGATA Petros STEFANEAS
In this paper we report on experiences gained and lessons learned through the use of the Timed OTS/CafeOBJ method in the formal verification of the TESLA source authentication protocol. These experiences can serve as a useful guide for users of the OTS/CafeOBJ method, especially when dealing with similarly complex systems and protocols.
Collaborative business has been developing rapidly in an environment of globalization and advanced information technologies. In a collaboration involving multiple organizations, participants from different organizations often hold different views about how to model the overall business process, owing to their different knowledge and cultural backgrounds. Moreover, flexible support, privacy preservation, and process reuse are important issues that should be considered in business process management across organizational boundaries. This paper presents a novel approach to modeling interorganizational business processes for collaboration. Our approach allows loosely coupled interorganizational business processes to be modeled while accommodating the different views of the organizations. In the proposed model, organizations keep their own local process views instead of sharing pre-defined global processes. During process cooperation, the local process of an organization can remain invisible to other organizations. Further, we propose coordination mechanisms for the different local process views to detect incompatibilities among organizations. We illustrate the proposed approach with a case study of interorganizational software development collaboration.
Numerous studies have focused on improving bag of features (BOF), histogram of oriented gradients (HOG), and the scale-invariant feature transform (SIFT). However, few works have attempted to explore the connection between them, even though the latter two are widely used as local feature descriptors for the former. Motivated by the resemblance between BOF and HOG/SIFT in descriptor construction, we improve the performance of HOG/SIFT by (a) interpreting HOG/SIFT as a variant of BOF in descriptor construction, and then (b) introducing recently proposed BOF techniques such as locality preservation, data-driven vocabularies, and spatial information preservation into the descriptor construction of HOG/SIFT, which yields BOF-driven HOG/SIFT. Experimental results show that BOF-driven HOG/SIFT outperforms the originals in pedestrian detection (for HOG) and in scene matching and image classification (for SIFT). The proposed BOF-driven HOG/SIFT can easily replace the original HOG/SIFT in current systems, since they are generalized versions of the originals.
Masahiro FUKUI Shigeaki SASAKI Yusuke HIWASAKI Kimitaka TSUTSUMI Sachiko KURIHARA Hitoshi OHMURO Yoichi HANEDA
We propose a new adaptive spectral masking method for algebraic vector quantization (AVQ) of non-sparse signals in the modified discrete cosine transform (MDCT) domain. This paper also proposes switching the adaptive spectral masking on and off depending on whether or not the target signal is non-sparse, with the switching decision based on the results of an MDCT-domain sparseness analysis. When the target signal is categorized as non-sparse, the masking level of the target MDCT coefficients is adaptively controlled using spectral envelope information. The performance of the proposed method, as a part of ITU-T G.711.1 Annex D, is evaluated in comparison with conventional AVQ. Subjective listening test results showed that the proposed method improves sound quality by more than 0.1 points on a five-point scale on average for speech, music, and mixed content, which indicates a significant improvement.
Jaeyul CHOO Chihyun CHO Hosung CHOO
This paper designs tag antennas that satisfy three key goals: mountability on very small objects, an extended reading range with a planar structure, and stable performance on various materials. First, the tag size is reduced by up to 17% relative to a half-wavelength dipole, without a large reduction in bandwidth or efficiency, by introducing an inductively coupled feed structure. Second, the reading range is increased to 1.68 times that of the reference dipole tags, while the planar structure is maintained, by using circular polarization characteristics. Finally, a stable reading range, with a deviation of only 30% of that of commercial tags on various objects, is achieved by employing capacitive loading and a T-matching network.
Hatsuhiro KATO Hatsuyoshi KATO
Flexural waves on a thin elastic plate are governed by a fourth-order differential equation, which is attractive not only from the viewpoint of harmonic analysis but also as the basis for efficient numerical methods in elastodynamics. In this paper, we propose two novel ideas: (1) the use of tensor bases to describe flexural waves on inhomogeneous elastic plates, and (2) a weak-form discretization that derives a second-order difference equation from the fourth-order differential equation. The discretization method proposed in this study is a preliminary step toward applying the recursive transfer method (RTM) to the scattering problem of flexural waves. More importantly, the proposed discretization method can be applied to any system that can be formulated within the weak-form framework. The accuracy of the difference equation derived by the proposed discretization method is confirmed by comparing analytical and numerical solutions for waveguide modes. As a typical problem confirming the validity of the resulting governing equation, the influence of a spatially modulated elastic constant on the waveguide modes is discussed.
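For reference, the fourth-order equation in question is, in the homogeneous Kirchhoff-Love case (the paper treats the inhomogeneous tensor version), the classical plate equation; its time-harmonic weak form shows why only second-order derivatives remain after discretization. A simplified sketch with boundary terms omitted:

```latex
% Kirchhoff-Love equation for flexural waves on a thin homogeneous
% plate (flexural rigidity D = Eh^3/12(1-\nu^2), thickness h,
% density \rho, deflection w):
D\,\nabla^{4} w + \rho h\,\frac{\partial^{2} w}{\partial t^{2}} = 0

% Time-harmonic weak form: two integrations by parts shift two
% derivatives onto the test function v, leaving only second-order
% derivatives to discretize:
\int_{\Omega} D\,\nabla^{2} w\,\nabla^{2} v \, dA
  = \rho h\,\omega^{2} \int_{\Omega} w\, v \, dA
```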
Recently, the wavelet-based estimation method has gradually become popular as a new tool for software reliability assessment. The wavelet transform possesses both spatial and temporal resolution, which makes the wavelet-based estimation method powerful in extracting the necessary information from observed software fault data, from global and local points of view at the same time. This enables us to estimate software reliability measures with higher accuracy. However, existing works have focused only on point estimation for the wavelet-based approach, where the underlying stochastic process describing the software-fault detection phenomena is modeled by a non-homogeneous Poisson process. In this paper, we propose an interval estimation method for the wavelet-based approach, aiming to take account of the uncertainty that point estimation leaves out of consideration. More specifically, we employ the simulation-based bootstrap method and derive confidence intervals for software reliability measures such as the software intensity function and the expected cumulative number of software faults. To this end, we extend the well-known thinning algorithm so as to generate multiple sample data sets from one set of software-fault count data. The results of numerical analysis with real software fault data make it clear that our proposal is a decision support method that enables practitioners to make flexible decisions in software development project management.
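The classical Lewis-Shedler thinning algorithm that the paper extends can be sketched as follows; the intensity function and parameters here are illustrative, not taken from the paper:

```python
import math
import random

def thinning_nhpp(intensity, t_max, lam_max, rng=None):
    """Lewis-Shedler thinning: draw event times of a non-homogeneous
    Poisson process with rate intensity(t) <= lam_max on [0, t_max].
    Candidates from a homogeneous Poisson(lam_max) stream are kept
    with probability intensity(t) / lam_max."""
    rng = rng or random.Random()
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_max:
            return times
        if rng.random() < intensity(t) / lam_max:
            times.append(t)

# e.g. an exponentially decaying software intensity (illustrative)
lam = lambda t: 3.0 * math.exp(-0.5 * t)
sample = thinning_nhpp(lam, 10.0, 3.0, random.Random(7))
```

Bootstrap confidence intervals would repeat such sampling many times and take quantiles of the replicated reliability measures.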
In this paper, we propose a contrast enhancement algorithm based on Adaptive Histogram Equalization (AHE) to improve image quality. Most histogram-based contrast enhancement methods suffer from excessive or insufficient enhancement, which results in unnatural output images and the loss of visual information. The proposed method modifies the slope of the probability density function (PDF) of the input histogram. We also propose a pixel redistribution method that uses convolution to compensate for the excess pixels left over after the slope modification procedure. Our method adaptively enhances the contrast of the input image and shows good simulation results compared with conventional methods.
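The core idea of bounding the histogram's slope can be illustrated with a simple clipped global equalization, where clipping the PDF limits the slope of the mapping (the CDF) and the clipped excess is redistributed. This is a generic stand-in for the slope modification and convolution-based redistribution described above, with an illustrative uniform redistribution:

```python
import numpy as np

def clipped_hist_eq(img, clip_ratio=2.0):
    """Histogram equalization with PDF clipping: limiting histogram
    peaks bounds the slope of the equalization mapping, taming
    over-enhancement. Excess counts are redistributed uniformly."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    clip = clip_ratio * hist.mean()
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess / 256.0
    cdf = np.cumsum(hist) / hist.sum()
    lut = np.round(255 * cdf).astype(np.uint8)  # monotone tone curve
    return lut[img]
```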
A new scheme based on multi-order visual comparison is proposed for full-reference image quality assessment. Inspired by the observation that various image derivatives have strong but different effects on visual perception, we perform separate comparisons on different orders of image derivatives. To obtain an overall image quality score, we adaptively integrate the results of the different comparisons via a perception-inspired strategy. Experimental results on public databases demonstrate that the proposed method outperforms some state-of-the-art methods when benchmarked against subjective assessments given by human observers.
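A toy version of the multi-order idea compares the images themselves (order 0), their gradient magnitudes (order 1), and their Laplacians (order 2) with a standard similarity map, then pools the per-order scores. The orders, similarity map, and uniform pooling here are illustrative; the paper's perception-inspired integration is adaptive:

```python
import numpy as np

def multi_order_score(ref, dist, c=1e-3):
    """Toy full-reference score: compare three derivative orders with
    s = (2ab + c)/(a^2 + b^2 + c) and average the pooled scores."""
    def grad_mag(x):
        gy, gx = np.gradient(x.astype(float))
        return np.hypot(gx, gy)
    def lap(x):
        gy, gx = np.gradient(x.astype(float))
        return np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    def sim(a, b):
        return np.mean((2 * a * b + c) / (a * a + b * b + c))
    orders = [(ref, dist),
              (grad_mag(ref), grad_mag(dist)),
              (lap(ref), lap(dist))]
    return float(np.mean([sim(a, b) for a, b in orders]))
```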
Akihiro NAGASE Nami NAKANO Masako ASAMURA Jun SOMEYA Gosuke OHASHI
The authors have evaluated SGRAD, a method of expanding the bit depth of image signals that requires fewer calculations while degrading image sharpness less. When noise is superimposed on image signals, the conventional method for obtaining high bit depth sometimes detects image contours incorrectly, making it unable to correct the gradation sufficiently. The conventional method also requires many line memories when the process is applied to vertical gradation. To address the former issue, SGRAD improves the detection of contours with transiting gradation, so that the gradation of image signals on which noise is superimposed is corrected effectively. In addition, the use of a prediction algorithm for detecting gradation reduces the circuit scale, at the cost of reduced correction of the vertical gradation.
Daichi KITAMURA Hiroshi SARUWATARI Kosuke YAGI Kiyohiro SHIKANO Yu TAKAHASHI Kazunobu KONDO
In this letter, we address monaural source separation based on supervised nonnegative matrix factorization (SNMF) and propose a new penalized SNMF. Conventional SNMF often degrades the separation performance owing to the basis-sharing problem. Our penalized SNMF forces the nontarget bases to differ from the target bases, which increases the quality of the separated sound.
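The unpenalized baseline underlying SNMF is standard NMF with Lee-Seung multiplicative updates, sketched below; the penalized variant above adds a term pushing nontarget bases away from the pretrained target bases, which is omitted here:

```python
import numpy as np

def nmf_euclid(V, k, n_iter=200, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F^2.
    Nonnegativity of W and H is preserved by the update rules."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```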
With the development of global navigation satellite systems (GNSS), the interference among these systems, known as the radio frequency compatibility problem, has become a matter of great concern to system providers and user communities. An acceptable compatibility threshold must be determined in the radio frequency compatibility assessment process; however, there is no common standard for this threshold. This paper first introduces a comprehensive radio frequency compatibility assessment methodology combining the spectral separation coefficient (SSC) and the code tracking spectral sensitivity coefficient (CT_SSC). A method for determining the acceptable compatibility threshold is then proposed. The proposed method considers the receiver processing phases, including acquisition, code and carrier tracking, and data demodulation. Simulations accounting for the interference effects are carried out at each time step and at every location on Earth, mainly considering the GPS, Galileo, and BeiDou Navigation Satellite System (BDS) signals in the L1 band. Results show that each system is compatible with the other GNSS systems with respect to the specific receiver configuration used in the simulations.
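For reference, the SSC used in such assessments is commonly defined as follows (Betz's formulation; the CT_SSC mentioned above adds a code-tracking weighting not shown here):

```latex
% Spectral separation coefficient between desired signal s and
% interferer i, with unit-power normalized PSDs G over the receiver
% front-end bandwidth \beta_r:
\kappa_{si} = \int_{-\beta_r/2}^{\beta_r/2} G_{s}(f)\, G_{i}(f)\, df

% Aggregate interference then degrades the effective carrier-to-noise
% density ratio as
\left(\frac{C}{N_0}\right)_{\mathrm{eff}}
  = \frac{C}{N_0 + \sum_{j} C_{j}\,\kappa_{sj}}
```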
Hung V. LE Hasan Md. MOHIBUL Takuichi HIRANO Toru TANIGUCHI Akira YAMAGUCHI Jiro HIROKAWA Makoto ANDO
The millimeter-wave band suffers strong attenuation due to rain. When calculating the link budget for a wireless system using this frequency band, the behavior of rain, the attenuation it causes, and the amount of degradation must be accurately understood. This paper presents an evaluation of the influence of rain and rain attenuation on link performance in the Tokyo Institute of Technology (Tokyo Tech) millimeter-wave model mesh network. Conventional statistical analyses, including the cumulative rain rate distribution and specific rain attenuation constants, are performed on data collected from 2009 onwards. The unique effects arising from the highly localized behavior of strong rainfall become clear and are characterized in terms of variograms rather than correlation coefficients. Even in this small network, with links of less than 1 km, spatial separation provides effective diversity branches for better availability performance.