Riku AKEMA Masao YAMAGISHI Isao YAMADA
The Canonical Polyadic Decomposition (CPD) is the tensor analog of the Singular Value Decomposition (SVD) of a matrix and has many data science applications, including signal processing and machine learning. The Alternating Least Squares (ALS) algorithm has been used extensively for the CPD. Although the ALS algorithm is simple, it is sensitive to noise in the data tensor in these applications. In this paper, we propose a novel strategy to realize noise suppression for the CPD. The proposed strategy consists of two steps: (Step 1) denoising the given tensor and (Step 2) solving the exact CPD of the denoised tensor. Step 1 is realized by solving a structured low-rank approximation with the Douglas-Rachford splitting algorithm, and Step 2 is then realized by solving the simultaneous diagonalization of a matrix tuple constructed from the denoised tensor with the DODO method. Numerical experiments show that the proposed algorithm works well even in typical cases where the ALS algorithm suffers from the so-called bottleneck/swamp effect.
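For context on the baseline, ALS alternately solves a linear least-squares problem for each factor matrix of a rank-R CPD. The following is a minimal NumPy sketch for a 3-way tensor (illustrative only: the function names are ours, and this is the baseline ALS, not the paper's proposed two-step algorithm):

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I*J) x R, entry (i*J + j, r) = A[i,r]*B[j,r]
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cpd_als(T, R, iters=300, seed=0):
    """Minimal ALS for a rank-R CPD of a 3-way tensor T (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode unfoldings (C-order): column index of T1 is j*K + k, etc.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(iters):
        # Each update is the normal-equations solution of a linear LS problem.
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

On exact low-rank tensors with generic factors this typically converges quickly; the bottleneck/swamp effect mentioned in the abstract arises when factor columns are nearly collinear.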
Xueyan ZHANG Libin QU Zhangkai LUO
A coprime pair of DFT filter banks (coprime DFTFB), which processes signals like a spectral analyzer in the time domain, divides the power spectrum equally into MN bands by employing two DFT filter banks (DFTFBs) of sizes only M and N, respectively, where M and N are coprime integers. With the coprime DFTFB, frequencies in wide-sense stationary (WSS) signals can be estimated effectively at much lower sampling rates than the Nyquist rate. However, the imperfection of practical FIR filters and the correlation-based detection mode give rise to two kinds of spurious peaks in power spectrum estimation, which greatly limit the application of the coprime DFTFB. Through a detailed analysis of the spurious peaks, this paper proposes a modified spectral analyzer based on dual coprime DFTFBs and sub-decimation, which not only suppresses the spurious peaks but also improves the frequency estimation accuracy. A mathematical proof of the principle of the proposed spectral analyzer is also provided. In the discussion of simultaneous signal detection, an O-extended MN-band coprime DFTFB (OExt M-N coprime DFTFB) structure is naturally deduced, where M, N, and O are pairwise coprime. The original MN-band coprime DFTFB (M-N coprime DFTFB) can be seen as a special case of the OExt M-N coprime DFTFB with the extending factor O equal to 1. In the numerical simulation section, BPSK signals with random carrier frequencies are employed to test the proposed spectral analyzer. Detection probability versus SNR curves obtained from 1000 Monte Carlo experiments verify the effectiveness of the proposed spectrum analyzer.
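The coprime band-indexing underlying such an analyzer can be illustrated with a toy sketch (not the paper's correlation-based detector): a tone falling in band b of the MN-band grid appears in band b mod M of the M-channel DFTFB and in band b mod N of the N-channel DFTFB, and b is uniquely recovered by the Chinese remainder theorem because M and N are coprime.

```python
def crt_band(r_m, r_n, m, n):
    """Recover the band index b in [0, m*n) from its residues b mod m and
    b mod n, assuming m and n are coprime (Chinese remainder theorem)."""
    for b in range(m * n):
        if b % m == r_m and b % n == r_n:
            return b
    raise ValueError("no solution: m and n must be coprime")
```

For example, with M = 4 and N = 5, a tone in band 13 of the 20-band grid shows residues (1, 3) and is uniquely recovered.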
Jun GOTO Akimichi HIROTA Kyosuke MOCHIZUKI Satoshi YAMAGUCHI Kazunari KIHIRA Toru TAKAHASHI Hideo SUMIYOSHI Masataka OTSUKA Naofumi YONEDA Jiro HIROKAWA
We present a novel circularly polarized ring microstrip antenna and its design. Shorting pins discretely disposed on the inner edge of the ring microstrip antenna are introduced as a new degree of freedom for improving resonance frequency control. The number and diameter of the shorting pins control the resonance frequency; the resonance frequency can be kept almost constant with respect to the inner/outer diameter ratio, which expands the use of the ring microstrip antenna. A dual-band antenna in which the proposed antenna encloses another ring microstrip antenna is designed and measured, and the simulated results agree well with the measured ones.
Active network monitoring based on Boolean network tomography is a promising technique for localizing link failures instantly in transport networks. However, the required set of monitoring trails must be recomputed after each link failure occurs in order to handle succeeding link failures. Existing heuristic methods cannot compute the required monitoring trails in a sufficiently short time when multiple-link failures must be localized across the whole of a large-scale managed network. This paper proposes an approach for computing the required monitoring trails within an allowable expected period specified beforehand. A random-walk-based analysis estimates the number of monitoring trails to be computed in the proposed approach. The estimated number of monitoring trails is computed by a lightweight method that guarantees only partial localization within restricted areas. The lightweight method is executed repeatedly until a set of monitoring trails achieving unambiguous localization in the entire managed network is obtained. This paper demonstrates that the proposed approach can compute a small number of monitoring trails for localizing all independent dual-link failures in managed networks made up of thousands of links within a given short expected period.
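The unambiguous-localization condition from Boolean network tomography can be sketched as follows (an illustrative check for single-link failures only, not the paper's random-walk-based method): each link failure must raise an alarm on a distinct, non-empty subset of the monitoring trails.

```python
def localizes_single_failures(trails, links):
    """Boolean-network-tomography check: return True if every single-link
    failure produces a unique, non-empty set of alarmed monitoring trails.
    `trails` is a list of sets of link ids; `links` is an iterable of link ids."""
    seen = set()
    for link in links:
        syndrome = tuple(link in trail for trail in trails)
        if not any(syndrome) or syndrome in seen:
            return False  # failure is undetected or its syndrome is ambiguous
        seen.add(syndrome)
    return True
```

With two trails {1, 2} and {2, 3}, the failures of links 1, 2, and 3 yield the distinct syndromes (1,0), (1,1), and (0,1), so every single-link failure is localized.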
Tomoki KANEKO Hirobumi SAITO Akira HIROSE
This paper proposes an analytical method for designing septum-type polarizers that models a polarizer as a series of four septum elements under a short ridge-waveguide approximation. We determine the parameters of the respective elements such that, at the center frequency, the reflection coefficient of the first element is equal to that of the second, the reflection coefficient of the third is equal to that of the fourth, and the electrical lengths of the first, second, and third elements are 90 deg. We name this method the Short Ridge-waveguide Approximation Method (SRAM). We fabricated an X-band polarizer that achieves a cross-polarization discrimination (XPD) of 40.7-64.1 dB over 8.0-8.4 GHz, without any numerical optimization.
Masayuki ODAGAWA Takumi OKAMOTO Tetsushi KOIDE Toru TAMAKI Shigeto YOSHIDA Hiroshi MIENO Shinji TANAKA
In this paper, we present a classification method for a Computer-Aided Diagnosis (CAD) system in colorectal magnified Narrow Band Imaging (NBI) endoscopy. In an endoscopic video image, color shift, blurring, or reflection of light occurs in a lesion area, which affects the discrimination result produced by a computer. Therefore, in order to identify lesions robustly and classify them stably despite these video-frame-specific disturbances, we implement a CAD system for colorectal endoscopic images with Convolutional Neural Network (CNN) features and Support Vector Machine (SVM) classification on an embedded DSP core. To improve the robustness of the CAD system, we train the SVM on data sets of multiple image sizes so as to adapt to the noise peculiar to video images. We confirmed that the proposed method achieves robust, stable, and highly accurate classification on endoscopic video images. The proposed method can also cope with differences in resolution between old and new endoscopes and performs stably with respect to the input endoscopic video image.
Xiaoyu CHEN Huanchang LI Yihan ZHANG Yubo LI
A new construction of shift sequences is proposed under the condition P|L, and inter-group complementary (IGC) sequence sets are then constructed based on the shift sequences. By adjusting the parameter q, two or three IGC sequence sets can be obtained. Compared with previous methods, the proposed construction can provide more sequence sets for both synchronous and asynchronous code-division multiple access communication systems.
Aye Mon HTUN Maung SANN MAW Iwao SASASE P. Takis MATHIOPOULOS
In this paper, we propose a novel user selection scheme that jointly combines channel gain (CG) and signal-to-interference-plus-noise ratio (SINR) to improve the sum-rate and reduce the computational complexity of multi-user massive multi-input multi-output (MU-massive MIMO) downlink transmission with a block diagonalization (BD) precoding technique. By jointly considering CG-based and SINR-based user sets, the sum-rate can be improved by selecting higher-gain users with better SINR conditions and by eliminating users who cause a low sum-rate in the system. With this approach, the number of possible outcomes of the user selection scheme can be reduced by counting the common users of every pair of user combinations in the selection process, since the common users of the CG-based and SINR-based sets possess both higher channel gains and better SINR conditions. The common-user set offers not only sum-rate improvement but also reduced computational complexity in the proposed scheme. It is shown by means of computer simulation experiments that the proposed scheme can increase the sum-rate for various numbers of users compared to conventional schemes, while requiring the same or less computational complexity.
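The common-user idea can be sketched as the intersection of two top-k user sets (a toy illustration only; the actual scheme applies this within BD precoding and iterative selection):

```python
def common_users(cg, sinr, k):
    """Return the users that appear in both the top-k channel-gain set and
    the top-k SINR set. `cg` and `sinr` are lists indexed by user id."""
    top_cg = set(sorted(range(len(cg)), key=lambda u: cg[u], reverse=True)[:k])
    top_sinr = set(sorted(range(len(sinr)), key=lambda u: sinr[u], reverse=True)[:k])
    return top_cg & top_sinr  # users strong in both metrics
```

Restricting the search to this intersection is what shrinks the number of user combinations that must be evaluated.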
This paper presents an X-band power-combined pulsed high-power amplifier (HPA) based on a low-insertion-loss waveguide combiner. The relationships between the return loss and the isolation of the magic tee (MT) are analyzed, and an accurate design technique is given. The combining network is validated by measurements of a single MT and a four-way passive network, and the combined HPA module is designed, fabricated, and characterized. The HPA delivers 200 W output power with an associated power-added efficiency close to 40% within the frequency range of 7.8 GHz to 12.3 GHz. The combining efficiency is higher than 93%.
Huakang XIA Yidie YE Xiudeng WANG Ge SHI Zhidong CHEN Libo QIAN Yinshui XIA
A self-powered flyback pulse resonant circuit (FPRC) is proposed to extract energy from piezoelectric generators (PEGs) and thermoelectric generators (TEGs) simultaneously. The FPRC is able to cold-start from the PEG voltage regardless of the TEG voltage, which means the TEG energy is extracted at no additional cost. Measurements show that the FPRC can output 102 µW under input PEG and TEG voltages of 2.5 V and 0.5 V, respectively. The extracted power is increased by 57.6% compared to the case without TEGs. Additionally, the power improvement with respect to an ideal full-wave bridge rectifier is 2.71× with an efficiency of 53.9%.
Itaru KAMOHARA Ulrich WELLING Ulrich KLOSTERMANN Wolfgang DEMMERLE
This paper presents a simulation study on the printing behavior of three different EUV resist systems. Stochastic models for a negative metal-based resist and a conventional chemically amplified resist (CAR) were calibrated and then validated. For the negative-tone development (NTD) CAR, we started from a calibrated positive-tone development (PTD) CAR material model and NTD development models, since state-of-the-art measurements are not available. A conceptual study comparing PTD CAR and NTD CAR shows that the stochastic inhibitor fluctuation differs for PTD CAR: the inhibitor level exhibits small fluctuation (Mack development). For NTD CAR, the inhibitor fluctuation depends on the NTD type, which is defined by categorizing the difference between the NTD and PTD development thresholds. The respective NTD types have different inhibitor concentration levels. Moreover, contact hole printing with the negative metal-based resist and the NTD CAR was compared to clarify the stochastic process window (PW) for the tone-reversed mask. For the latter comparison, the aerial image (AI) and the secondary electron effect are comparable. Finally, the local CD uniformity (LCDU) for the same 20 nm size, 40 nm pitch contact hole was compared among the three different resists. The dose-dependent behavior of the LCDU and the stochastic PW for the NTD CAR differed from those of the PTD CAR and the metal-based resist. For the NTD CAR, a small inhibitor level and a large inhibitor fluctuation around the development threshold were observed, causing an LCDU increase, which is specific to the inverse Mack development resist.
Yukasa MURAKAMI Masateru TSUNODA
Although many software engineering studies have been conducted, it is not clear whether they meet the needs of software development practitioners. Some studies have had practitioners evaluate the effectiveness of software engineering research, to clarify whether the research satisfies practitioners' needs. We performed a replication of these studies, recruiting practitioners who mainly belong to SMEs (small and medium-sized enterprises) for the survey. We asked 16 practitioners to evaluate cutting-edge software engineering studies presented at ICSE 2016. In the survey, we set the viewpoint of the evaluation as the effectiveness for the respondent's own work. As a result, the ratio of positive answers (i.e., answers greater than 2 on a 5-point scale) was 33.3%, which is lower than in past studies. The result was not affected by the number of employees in the respondent's company, but may be affected by the viewpoint of the evaluation.
Due to the rapid development of different processors, e.g., x86 and Sunway, software porting between different platforms is becoming more frequent. However, the migrated software's execution efficiency on the target platform differs from that on the source platform, and most previous studies have investigated efficiency improvements from the hardware perspective. To the best of our knowledge, this is the first paper to focus exclusively on which software factors can cause performance change after software migration. To perform our study, we used SonarQube to detect and measure five software factors, namely Duplicated Lines (DL), Code Smell Density (CSD), Big Functions (BF), Cyclomatic Complexity (CC), and Complex Functions (CF), in 13 selected projects of the SPEC CPU2006 benchmark suite. Then, we measured the change in software performance by calculating the acceleration ratio of execution time before (x86) and after (Sunway) software migration. Finally, we fit a multiple linear regression model to analyze the relationship between the software performance change and the software factors. The results indicate that the performance change of software migrated from the x86 platform to the Sunway platform is mainly affected by three software factors, i.e., Code Smell Density (CSD), Cyclomatic Complexity (CC), and Complex Functions (CF). The findings can benefit both researchers and practitioners.
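The analysis step, regressing the acceleration ratio on the measured software factors, can be sketched with an ordinary least-squares fit (a generic sketch with illustrative names, not the paper's exact model or data):

```python
import numpy as np

def fit_linear(factors, accel_ratio):
    """Ordinary least-squares fit: accel_ratio ≈ intercept + factors @ coeffs.
    `factors` is an (n_projects, n_factors) matrix (e.g. CSD, CC, CF columns);
    returns [intercept, coeff_1, ..., coeff_p]."""
    X = np.column_stack([np.ones(len(factors)), factors])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, accel_ratio, rcond=None)
    return beta
```

The fitted coefficients indicate how strongly each factor is associated with the performance change.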
In infrastructure-as-a-service platforms, cloud users can adjust their database (DB) service scale to dynamic workloads by changing the number of virtual machines running a DB management system (DBMS), called DBMS instances. Replicating a DBMS instance is a non-trivial task, since DBMS replication is time-consuming given the trend of cloud vendors offering high-spec DBMS instances. This paper presents BalenaDB, which performs urgent DBMS replication for handling sudden workload increases. Unlike conventional replication schemes, which implicitly assume that DBMS replicas are generated on remote machines, BalenaDB generates a warmed-up DBMS replica on an instance running on the local machine where the master DBMS instance runs, by leveraging the master DBMS resources. We prototyped BalenaDB on MySQL 5.6.21, Linux 3.17.2, and Xen 4.4.1. The experimental results show that the time for generating the warmed-up DBMS replica instance on BalenaDB is up to 30× shorter than with an existing DBMS instance replication scheme, while achieving significantly efficient memory utilization.
Zihao SONG Peng SONG Chao SHENG Wenming ZHENG Wenjing ZHANG Shaokai LI
Unsupervised feature selection is an important dimensionality reduction technique for coping with high-dimensional data. It does not require prior label information and has recently attracted much attention. However, it cannot fully utilize the discriminative information of samples, which may degrade feature selection performance. To tackle this problem, in this letter, we propose a novel discriminative virtual label regression method (DVLR) for unsupervised feature selection. In DVLR, we develop a virtual label regression function to guide the subspace-learning-based feature selection, which can select more discriminative features. Moreover, a linear discriminant analysis (LDA) term is used to make the model more discriminative. To further make the model more robust and select more representative features, we impose the ℓ2,1-norm on the regression and feature selection terms. Finally, extensive experiments are carried out on several public datasets, and the results demonstrate that the proposed DVLR achieves better performance than several state-of-the-art unsupervised feature selection methods.
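The ℓ2,1-norm used for the regression and feature-selection terms is the sum of the Euclidean norms of a matrix's rows; penalizing it drives whole rows of the projection matrix to zero, which is what discards uninformative features. A minimal sketch (illustrative, not part of the DVLR model itself):

```python
import numpy as np

def l21_norm(W):
    """ℓ2,1-norm of a matrix: the sum of the Euclidean (ℓ2) norms of its rows."""
    return float(np.sum(np.linalg.norm(W, axis=1)))
```

For example, a matrix whose rows are (3,4), (0,0), and (5,12) has ℓ2,1-norm 5 + 0 + 13 = 18; the all-zero row contributes nothing, reflecting the row-sparsity the penalty promotes.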
Bodin CHINTHANET Raula GAIKOVINA KULA Rodrigo ELIZA ZAPATA Takashi ISHIO Kenichi MATSUMOTO Akinori IHARA
It has become common practice for software projects to adopt third-party dependencies. Developers are encouraged to update any outdated dependency to remain safe from potential threats of vulnerabilities. In this study, we present an approach to help developers determine whether or not vulnerable code is reachable in JavaScript projects. Our prototype, SōjiTantei, is evaluated in two ways: (i) its accuracy compared to a manual approach and (ii) a larger-scale analysis of 780 clients from 78 security vulnerability cases. The first evaluation shows that SōjiTantei has a high accuracy of 83.3%, with an analysis speed of less than a second per client. The second evaluation reveals that 68 out of the 78 studied vulnerabilities had at least one clean client. The study shows that automation is promising, with the potential for further improvement.
Shinnosuke KURATA Toshinori OTAKA Yusuke KAMEDA Takayuki HAMAMOTO
We propose an HDR (high dynamic range) reconstruction method for an image sensor with a pixel-parallel ADC (analog-to-digital converter) that non-destructively reads out intermediate exposure images. We report the circuit design of such an image sensor and an evaluation of the basic HDR reconstruction method.
Chen LI Junjun ZHENG Hiroyuki OKAMURA Tadashi DOHI
Utilization data (a kind of incomplete data) is defined as the fraction of a fixed period in which the system is busy. In computer systems, utilization data, such as CPU utilization, is very common and easily observable. Unlike inter-arrival times and waiting times, it is of greater practical significance to consider the parameter estimation of transaction-based systems from utilization data. In our previous work [7], a novel parameter estimation method using utilization data for an Mt/M/1/K queueing system was presented to estimate the parameters of a non-homogeneous Poisson process (NHPP). Since the NHPP is a simple counting process, it may not fit actual arrival streams very well. As a generalization of the NHPP, the Markovian arrival process (MAP) takes account of the dependency between consecutive arrivals and is often used to model complex, bursty, and correlated traffic streams. In this paper, we concentrate on the parameter estimation of an MAP/M/1/K queueing system using utilization data. In particular, the parameters are estimated by the maximum likelihood estimation (MLE) method. Numerical experiments on real utilization data validate the proposed approach and evaluate the effective traffic intensity of the arrival stream of the MAP/M/1/K queueing system. In addition, three kinds of utilization datasets are created from a simulation to assess the effect of the observation time interval on both estimation accuracy and computational cost. The numerical results show that the MAP-based approach outperforms the existing method in terms of both estimation accuracy and computational cost.
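The utilization data itself is straightforward to compute from a busy/idle trace; a minimal sketch of the observation step (illustrative only, not part of the MLE procedure):

```python
def utilization(busy_intervals, t0, t1):
    """Fraction of the observation window [t0, t1) during which the server is
    busy, given a list of (start, end) busy intervals. Intervals are clipped
    to the window before their lengths are summed."""
    busy = sum(max(0.0, min(e, t1) - max(s, t0)) for s, e in busy_intervals)
    return busy / (t1 - t0)
```

For instance, busy intervals (0,1) and (2,4) within the window [0,4) give a utilization of 3/4, the kind of per-period value the estimation method takes as input.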
Gergely HUSZAK Hiroyoshi MORITA George ZIMMERMAN
IEEE P802.3cg established a new pair of Ethernet physical layer devices (PHYs), one of which, the short-reach 10BASE-T1S, uses 4B/5B mapping over Differential Manchester Encoding to maintain a data rate of 10 Mb/s at the MAC/PLS interface while providing in-band signaling between transmitters and receivers. However, 10BASE-T1S has no built-in error correcting capability. In response to emerging building, industrial, and transportation requirements, this paper outlines research that leads to the possibility of establishing low-complexity, backward-compatible Forward Error Correction with per-frame configurable guaranteed burst error and erasure correcting capabilities over any 10BASE-T1S Ethernet network segment. The proposed technique combines a specialized systematic Reed-Solomon code with a novel three-tier technique to avoid the appearance of certain inadmissible codeword symbols at the output of the encoder. In this way, the proposed technique enables error and erasure correction while maintaining backward compatibility with the current version of the standard.
Masayuki FUKUMITSU Shingo HASEGAWA
Multisignatures enable multiple users to sign a message interactively. Many instantiations of multisignatures have been proposed; however, most of them are not quantum-secure, because they are based on the integer factoring assumption or the discrete logarithm assumption. Although some constructions based on lattice problems, which are believed to be quantum-secure, do exist, their security reductions are loose. In this paper, we aim to improve the tightness of the security reductions of lattice-based multisignature schemes. Our basic strategy is to combine the multisignature scheme proposed by El Bansarkhani and Sturm with the lattice-based signature scheme by Abdalla, Fouque, Lyubashevsky, and Tibouchi, which has a tight security reduction from the Ring-LWE (Ring Learning with Errors) assumption. Our result shows that proof techniques for standard signature schemes can be applied to multisignature schemes, and we can thereby improve the polynomial loss factor concerning the Ring-LWE assumption. Our second result addresses a problem with the security proofs of existing lattice-based multisignature schemes pointed out by Damgård, Orlandi, Takahashi, and Tibouchi. We employ a new cryptographic assumption called the Rejected-Ring-LWE assumption to complete the security proof.