Ouyang JUNJIE Naoto YANAI Tatsuya TAKEMURA Masayuki OKADA Shingo OKAMURA Jason Paul CRUZ
The BGPsec protocol, an extension of the Border Gateway Protocol (BGP) for Internet routing, uses digital signatures to guarantee the validity of routing information. However, the use of digital signatures in routing information on BGPsec exhausts the memory of BGP routers, and this problem hinders the practical realization and deployment of BGPsec, leaving a gaping security hole in today's Internet. In this paper, we present APVAS (AS path validation based on aggregate signatures), a new protocol that reduces the memory consumption of routers running BGPsec when validating paths in routing information. APVAS relies on a novel aggregate signature scheme that compresses individually generated signatures into a single signature. Furthermore, we implement a prototype of APVAS on the BIRD Internet Routing Daemon and demonstrate its efficiency on actual BGP connections. Our results show that the routing tables of routers running BGPsec with APVAS consume 20% less memory than those running conventional BGPsec. We also confirm the effectiveness of APVAS in the real world by using 800,000 routes, which is equivalent to the full route information on a global scale.
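As an editorial illustration of the aggregate-signature idea (not the authors' implementation, which builds on a pairing-based scheme), the sketch below mimics the interface: each AS on the path signs its update, the per-hop signatures are compressed into one fixed-size value, and a router stores only that value. The XOR-of-HMACs construction is deliberately a toy and is not secure; it shows only the API shape and the memory saving.

```python
# Toy model of the aggregate-signature interface behind APVAS.
# NOTE: XOR-of-HMACs is NOT a secure aggregate signature; it merely
# mimics the interface of a pairing-based scheme such as BLS.
import hmac, hashlib

def sign(secret_key: bytes, message: bytes) -> bytes:
    return hmac.new(secret_key, message, hashlib.sha256).digest()

def aggregate(signatures: list[bytes]) -> bytes:
    agg = bytes(32)
    for sig in signatures:
        agg = bytes(a ^ b for a, b in zip(agg, sig))
    return agg

def verify_aggregate(agg: bytes, keys_and_msgs: list[tuple[bytes, bytes]]) -> bool:
    expected = aggregate([sign(k, m) for k, m in keys_and_msgs])
    return hmac.compare_digest(agg, expected)

# An AS path of 5 hops: 5 x 32 bytes of per-hop signatures vs one 32-byte aggregate.
path = [(b"key-AS%d" % i, b"UPDATE 10.0.0.0/8 via AS%d" % i) for i in range(5)]
sigs = [sign(k, m) for k, m in path]
agg = aggregate(sigs)
print("per-hop storage:", sum(len(s) for s in sigs), "bytes")  # 160
print("aggregate storage:", len(agg), "bytes")                 # 32
print("valid:", verify_aggregate(agg, path))                   # True
```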
Zheying HUANG Ji XU Qingwei ZHAO Pengyuan ZHANG
Although end-to-end speech recognition for Mandarin-English code-switching has attracted increasing interest, it remains challenging due to data scarcity. Meta-learning is a popular approach to low-resource modeling that exploits high-resource data, but it does not make full use of the low-resource code-switching data itself. We therefore propose a two-fold cross-validation training framework combined with a meta-learning approach. Experiments on the SEAME corpus demonstrate the effectiveness of our method.
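A minimal sketch, based only on our reading of the abstract, of how a two-fold scheme could be wired into a meta-learning loop so that each fold of the scarce code-switching data serves once as the support set and once as the query set; `inner_step` and `outer_step` are hypothetical stand-ins for gradient steps of the underlying ASR model.

```python
# Two-fold cross-validation meta-training sketch (our reconstruction,
# not the authors' code): every code-switching utterance is used both
# for adaptation (support) and for the meta-update (query).
import random

def two_fold_episodes(cs_data, seed=0):
    rng = random.Random(seed)
    data = cs_data[:]
    rng.shuffle(data)
    half = len(data) // 2
    fold_a, fold_b = data[:half], data[half:]
    return [(fold_a, fold_b), (fold_b, fold_a)]   # each fold plays both roles

def meta_train(model, high_resource_tasks, cs_data, inner_step, outer_step):
    for support, query in two_fold_episodes(cs_data):
        adapted = inner_step(model, high_resource_tasks + [support])
        model = outer_step(model, adapted, query)  # meta-update on held-out fold
    return model

# Trivial stand-ins so the sketch runs end-to-end:
inner = lambda m, tasks: m + 0.1 * len(tasks)      # pretend adaptation
outer = lambda m, adapted, query: 0.5 * (m + adapted)
print(meta_train(0.0, ["mandarin", "english"], list(range(10)), inner, outer))
```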
A fast cross-validation algorithm for model selection in kernel ridge regression problems is proposed, which aims to further reduce the computational cost of the algorithm proposed by An et al. by exploiting the eigenvalue decomposition of the Gram matrix.
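The general trick this line of work builds on can be made concrete: one eigendecomposition of the Gram matrix is reused to evaluate the leave-one-out error for every candidate ridge parameter. The sketch below illustrates that standard identity; the specific speed-up proposed in the paper may differ.

```python
# Eigendecomposition-based fast leave-one-out CV for kernel ridge regression.
import numpy as np

def loo_errors(K, y, ridge_params):
    evals, V = np.linalg.eigh(K)          # K = V diag(evals) V^T, done once
    Vty = V.T @ y
    errors = {}
    for mu in ridge_params:
        h = evals / (evals + mu)          # eigenvalues of the hat matrix H
        yhat = V @ (h * Vty)              # in-sample predictions H y
        H_diag = (V ** 2) @ h             # diag(H) without forming H
        loo = (y - yhat) / (1.0 - H_diag) # leave-one-out residuals
        errors[mu] = float(np.mean(loo ** 2))
    return errors

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
K = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))  # RBF Gram
scores = loo_errors(K, y, [1e-3, 1e-2, 1e-1, 1.0])
print(min(scores, key=scores.get), scores)
```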
Naoto ISHIDA Takashi ISHIO Yuta NAKAMURA Shinji KAWAGUCHI Tetsuya KANDA Katsuro INOUE
Defects in spacecraft software may result in loss of life and serious economic damage. To avoid such consequences, the software development process incorporates code review activities. A code review conducted by a third-party organization, independently of the software development team, can effectively identify defects in software. However, such review activity is difficult for third-party reviewers, because they need to understand the entire structure of the code within a limited time and without prior knowledge. In this study, we propose a tool to visualize inter-module dataflow in the source code of spacecraft software systems. To evaluate the method, an autonomous rover control program was reviewed using this visualization. While the tool does not decrease the time required for a code review, the reviewers considered the visualization to be effective for reviewing the code.
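To illustrate the kind of output such a tool can produce (a sketch of the idea, not the authors' tool), the snippet below turns per-module read/write tables into a Graphviz DOT graph with one edge per producer-to-consumer variable; the module and variable names are invented, and the real tool would extract the tables from the spacecraft sources.

```python
# Emit a Graphviz DOT graph of inter-module dataflow from read/write tables.
def dataflow_dot(writes, reads):
    lines = ["digraph dataflow {"]
    for var, writers in writes.items():
        for w in writers:
            for r in reads.get(var, []):
                if r != w:
                    lines.append(f'  "{w}" -> "{r}" [label="{var}"];')
    lines.append("}")
    return "\n".join(lines)

writes = {"wheel_speed": ["motor_ctrl"], "attitude": ["imu_driver"]}
reads  = {"wheel_speed": ["navigator", "logger"], "attitude": ["navigator"]}
print(dataflow_dot(writes, reads))   # paste the output into Graphviz `dot`
```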
Takashi WATANABE Akito MONDEN Zeynep YÜCEL Yasutaka KAMEI Shuji MORISAKI
Association rule mining discovers relationships among variables in a data set and represents them as rules. These rules are expected to have predictive abilities, that is, to be able to predict future events, but commonly used rule interestingness measures, such as support and confidence, do not directly assess their predictive power. This paper proposes a cross-validation-based metric that quantifies the predictive power of such rules for characterizing software defects. The results of evaluating this metric experimentally on four open-source data sets (Mylyn, NetBeans, Apache Ant and jEdit) show that it can improve rule prioritization performance over conventional metrics (support, confidence and odds ratio) by 72.8% for Mylyn, 15.0% for NetBeans, 10.5% for Apache Ant and 0% for jEdit in terms of the SumNormPre(100) precision criterion. This suggests that the proposed metric provides better rule prioritization performance than conventional metrics and, even in the worst case, provides comparable performance.
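The core idea can be sketched as follows (our reconstruction of the principle, not the paper's exact metric): score a rule not by its in-sample support or confidence but by its precision on held-out folds; the toy defect records and the rule itself are invented for illustration.

```python
# Score an association rule by its held-out precision under k-fold CV.
import random

def cv_predictive_power(records, rule, k=4, seed=0):
    ante, cons = rule                       # antecedent / consequent predicates
    rng = random.Random(seed)
    data = records[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    precisions = []
    for i in range(k):
        hits = [r for r in folds[i] if ante(r)]
        if hits:
            precisions.append(sum(cons(r) for r in hits) / len(hits))
    return sum(precisions) / len(precisions) if precisions else 0.0

# Toy defect data: modules with lines-of-code and a defect flag.
records = [{"loc": loc, "defect": loc > 300} for loc in range(0, 600, 13)]
rule = (lambda r: r["loc"] > 250, lambda r: r["defect"])
print(round(cv_predictive_power(records, rule), 3))
```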
Yong JIN Kunitaka KAKOI Nariyoshi YAMAI Naoya KITAGAWA Masahiko TOMOISHI
The widespread use of computers and communication networks underpins people's social activities, and such communication generally begins with domain name resolution, which is mainly provided by the Domain Name System (DNS). Meanwhile, continuous cyber threats to DNS, such as cache poisoning, critically affect computer networks. DNSSEC (DNS Security Extensions) is designed to provide secure name resolution between authoritative zone servers and DNS full resolvers. However, the high workload of DNSSEC validation on DNS full resolvers and the complex key management on authoritative zone servers hinder its wide deployment. Moreover, querying clients use the name resolution results validated on DNS full resolvers and therefore receive only an error when DNSSEC validation fails or times out. In addition, name resolution failures can occur on querying clients due to technical and operational issues of DNSSEC. In this paper, we propose a client-based DNSSEC validation system with an adaptive alert mechanism that takes a minimal querying-client timeout into consideration. The proposed system notifies the user with alert messages along with the answers even when DNSSEC validation on the client fails or times out, so that the user can decide how to handle the received answers. We also implemented a prototype system and evaluated its features on a local experimental network as well as on the Internet. The contribution of this article is that the proposed system not only mitigates the workload of DNS full resolvers but also covers querying clients with secure name resolution, and, by solving existing operational issues in DNSSEC, it can also promote DNSSEC deployment.
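A heavily simplified sketch of the alert-instead-of-fail behavior, using the dnspython library (assumed available) and a public validating resolver as a stand-in: real client-side validation must build the full DNSKEY/DS chain of trust (e.g., with dns.dnssec.validate); here we only set the DO bit, apply a short client-side timeout, and turn every failure mode into an alert plus a best-effort answer, mirroring the proposed system's user-facing behavior.

```python
# Alert-style client-side DNSSEC check (simplified sketch, not the prototype).
import dns.exception, dns.message, dns.query, dns.rdatatype

def resolve_with_alert(qname, resolver_ip="8.8.8.8", timeout=1.0):
    query = dns.message.make_query(qname, dns.rdatatype.A, want_dnssec=True)
    try:
        resp = dns.query.udp(query, resolver_ip, timeout=timeout)
    except dns.exception.Timeout:
        return None, "ALERT: DNSSEC resolution timed out; no answer available"
    answers = [r.to_text() for rrset in resp.answer
               if rrset.rdtype == dns.rdatatype.A for r in rrset]
    signed = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in resp.answer)
    if not signed:
        return answers, "ALERT: answer is unsigned; use at your own risk"
    return answers, "OK: signed answer (chain validation omitted in this sketch)"

print(resolve_with_alert("www.ietf.org"))
```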
An automotive control system is typical safety-critical embedded software, which requires extensive verification and validation (V&V) activities. This article introduces a toolset for automated V&V of automotive control systems, including a test generator for automotive operating systems, a task simulator for validating the task design of control software, and an API-call constraint checker that checks emergent properties when composing control software with its underlying operating system. To the best of our knowledge, it is the first integrated toolset that supports V&V activities for both control software and operating systems in the same framework.
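A toy sketch of what an API-call constraint checker can look like (the OSEK-like rule table below is invented for illustration, not taken from the toolset): given a trace of (context, API) events, it flags OS API calls that are illegal in the calling context.

```python
# Flag OS API calls that are illegal in the calling context.
FORBIDDEN = {
    "ISR":  {"WaitEvent", "Schedule", "TerminateTask"},  # blocking calls in ISRs
    "HOOK": {"ActivateTask", "WaitEvent"},
}

def check_trace(trace):
    violations = []
    for i, (context, api) in enumerate(trace):
        if api in FORBIDDEN.get(context, set()):
            violations.append(f"event {i}: {api} called from {context}")
    return violations

trace = [("TASK", "ActivateTask"), ("ISR", "WaitEvent"), ("TASK", "TerminateTask")]
print(check_trace(trace))   # -> ['event 1: WaitEvent called from ISR']
```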
Daiki MAEHARA Gia Khanh TRAN Kei SAKAGUCHI Kiyomichi ARAKI
This paper empirically validates battery-less sensor activation via wireless energy transmission to free sensors from wires and batteries. To seamlessly extend the coverage and activate sensor nodes distributed in any indoor environment, we proposed multi-point wireless energy transmission with carrier shift diversity. In this scheme, multiple transmitters are employed to compensate for path-loss attenuation, and orthogonal frequencies are allocated to the transmitters to avoid the destructive interference that occurs when all transmitters use the same frequency. In our previous works, the effectiveness of the proposed scheme was validated theoretically and also empirically, using only a spectrum analyzer to measure the received power. In this paper, we develop low-energy battery-less sensor nodes whose power consumption and required received power for activation are 142µW and 400µW, respectively. In addition, we conduct indoor experiments in which the received power and the activation of a battery-less sensor node are observed simultaneously by using the developed battery-less sensor node and a spectrum analyzer. The results show that the coverages of single-point and multi-point wireless energy transmission without carrier shift diversity are 84.4% and 83.7%, respectively, while the coverage of the proposed scheme is 100%. We conclude that our experiments using real battery-less sensor nodes verify the effectiveness of the proposed scheme.
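A back-of-the-envelope sketch of the activation condition reported here: a node activates when the received RF power reaches 400µW (the node itself consumes 142µW). Received power is modeled with the free-space Friis equation; the carrier frequency, transmit power, and antenna gains below are assumptions for illustration, not the paper's setup.

```python
# Friis-equation check of the 400 uW activation threshold (illustrative values).
import numpy as np

C = 3e8
FREQ = 920e6                      # assumed ISM-band carrier
LAMBDA = C / FREQ
P_TX, G_TX, G_RX = 1.0, 2.0, 1.6  # transmit power [W] and gains: assumptions

def friis_rx_power(d):
    return P_TX * G_TX * G_RX * (LAMBDA / (4 * np.pi * d)) ** 2

ACTIVATION_W = 400e-6
for d in [1.0, 2.0, 4.0, 8.0]:
    p = friis_rx_power(d)
    print(f"d={d:4.1f} m: {p*1e6:8.1f} uW ->",
          "active" if p >= ACTIVATION_W else "dead")
```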
Xia YIN Jiangyuan YAO Zhiliang WANG Xingang SHI Jun BI Jianping WU
Research on model-based testing has mainly focused on models with a single component, such as the FSM and EFSM. For network protocols in which multiple components communicate with messages, the CFSM is a widely accepted solution. However, in some network protocols, parallel and data-sharing components may exist in the same network entity, and it is infeasible to precisely specify such protocols with existing models. In this paper we present a new model, the Parallel Parameterized Extended Finite State Machine (PaP-EFSM). A protocol system can be modeled as a group of PaP-EFSMs, which work in parallel and can read external variables from each other. We present a two-stage test generation approach for the new model. First, we generate test sequences for the internal variables of each machine; these may be non-executable due to external variables. Second, we process the external variables, making the sequences for internal variables executable and generating further test sequences for the external variables. For validation, we apply this method to the conformance testing of real-life protocols. Devices from different vendors are tested, and implementation faults are exposed.
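A much-simplified sketch of the two-stage idea as we read it (the data structures are invented for illustration): machine M2 has a transition guarded on an external variable owned by M1, so a test sequence generated for M2 alone (stage 1) is non-executable until stage 2 inserts an M1 transition that sets the external variable.

```python
# Stage-2 repair of a non-executable sequence by inserting setter transitions.
M1 = {"transitions": [("t1", {"sets": {"ready": 1}})]}
M2 = {"transitions": [("u1", {"needs": {}}),
                      ("u2", {"needs": {"ready": 1}})]}   # reads M1's variable

def make_executable(seq, env_machine):
    env, out = {}, []
    for name, t in seq:
        for var, val in t.get("needs", {}).items():
            if env.get(var) != val:                      # guard not yet satisfied
                for ename, et in env_machine["transitions"]:
                    if et.get("sets", {}).get(var) == val:
                        out.append(ename)                # insert the setter
                        env.update(et["sets"])
                        break
        out.append(name)
    return out

stage1 = M2["transitions"]                               # covers internal behavior
print(make_executable(stage1, M1))                       # -> ['u1', 't1', 'u2']
```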
Dong-Geun CHOI Ki-Hwea KIM Jaehoon CHOI
New target specific absorption rate (SAR) values, calculated using a proposed reference dipole antenna and the reference flat phantom, are presented for an SAR validation test at 150MHz. The reference flat phantom recommended by the International Electrotechnical Commission (IEC) standard for 150MHz requires a significant amount of liquid owing to its large size. We conduct a numerical analysis in order to reduce the size of the flat phantom. The optimum size of the flat phantom is 780 (L) × 540 (W) × 200 (H) mm³, which is approximately a 64% reduction in volume compared to the reference flat phantom. The length of the reference dipole antenna required for the optimized flat phantom (extrapolated from the reference values at 300MHz) becomes 760mm. The calculated and measured return losses (S11) of the antenna at 150MHz are 24.1dB and 22dB, respectively. The calculated and measured results for the return loss of the dipole antenna agree well and satisfy the IEC standard (> 20dB). The target SAR values derived from the numerical analysis are 1.08W/kg for 1g of tissue and 0.77W/kg for 10g of tissue for an SAR validation test at 150MHz.
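For readers unfamiliar with return-loss figures, the reported values can be translated into reflection terms with standard arithmetic: a return loss of RL dB corresponds to |S11| = 10^(-RL/20), so the IEC criterion (> 20dB) means less than 1% of the incident power is reflected.

```python
# Worked arithmetic for the reported return losses.
for rl_db in (24.1, 22.0, 20.0):
    gamma = 10 ** (-rl_db / 20)                 # |reflection coefficient|
    reflected_pct = 100 * gamma ** 2            # reflected power fraction
    vswr = (1 + gamma) / (1 - gamma)
    print(f"RL={rl_db:4.1f} dB: |S11|={gamma:.3f}, "
          f"reflected={reflected_pct:.2f}%, VSWR={vswr:.2f}")
```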
Daiki MAEHARA Gia Khanh TRAN Kei SAKAGUCHI Kiyomichi ARAKI Minoru FURUKAWA
This paper presents a method to seamlessly extend the coverage of the energy supply field for wireless sensor networks in order to free sensors from wires and batteries; the multi-point scheme is employed to overcome path-loss attenuation, while carrier shift diversity is introduced to mitigate the interference between multiple wave sources. As we focus on the energy transmission part, sensor and communication schemes are outside the scope of this paper. To verify the effectiveness of the proposed wireless energy transmission, this paper conducts indoor experiments in which we compare the power distribution and coverage performance of different energy transmission schemes: conventional single-point, simple multi-point, and our proposed multi-point scheme. To easily observe the effect of the standing waves caused by multipath and by interference between multiple wave sources, 3D measurements are performed in an empty room. The results of our experiments, together with those of a simulation assuming a similar antenna setting in a free-space environment, show that the coverages of single-point and multi-point wireless energy transmission without carrier shift diversity are limited by path-loss and by the standing waves created by multipath and interference between multiple wave sources. In contrast, the proposed scheme overcomes the power attenuation due to path-loss as well as the effect of the standing waves created by multipath and interference between multiple wave sources.
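A one-dimensional toy computation of why carrier shift diversity helps (geometry and powers are invented for illustration): two coherent sources at the same frequency produce deep interference nulls because their fields add as phasors, while sources on orthogonal frequencies add in power and leave no nulls.

```python
# Same-frequency (phasor sum) vs carrier-shift (power sum) along a line.
import numpy as np

C, F = 3e8, 920e6                 # assumed carrier frequency
k = 2 * np.pi * F / C
x = np.linspace(1.0, 9.0, 401)    # receiver positions between Tx at 0 m and 10 m
d1, d2 = x, 10.0 - x              # distances to the two transmitters
a1, a2 = 1.0 / d1, 1.0 / d2       # free-space amplitude decay ~ 1/d

same_freq = np.abs(a1 * np.exp(-1j * k * d1) + a2 * np.exp(-1j * k * d2)) ** 2
carrier_shift = a1 ** 2 + a2 ** 2            # orthogonal carriers: powers add

print("same-frequency min/median power:", same_freq.min(), np.median(same_freq))
print("carrier-shift  min/median power:", carrier_shift.min(), np.median(carrier_shift))
# The same-frequency minimum is near zero (a standing-wave null); the
# carrier-shift curve never drops below the sum of the two path losses.
```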
Regularized forward selection can be viewed as a method for obtaining a sparse representation in a nonparametric regression problem. In regularized forward selection, the regression output is represented by a weighted sum of several significant basis functions that are selected from a large number of candidates by a greedy training procedure in terms of a regularized cost function, together with an appropriate model selection method. In this paper, we propose a model selection method for regularized forward selection. For this purpose, we focus on the reduction of the cost function achieved by appending a new basis function in the greedy training procedure. We first clarify a bias and variance decomposition of the cost reduction and then derive a probabilistic upper bound on the variance of the cost reduction under some conditions. The derived upper bound reflects an essential feature of the greedy training procedure, namely, that it selects the basis function which maximally reduces the cost function. We then propose a thresholding method for determining significant basis functions by applying the derived upper bound as a threshold level and effectively combining it with leave-one-out cross validation. Several numerical experiments show that the generalization performance of the proposed method is comparable to that of the other methods, while the number of basis functions it selects is much smaller. We can therefore say that the proposed method yields a sparse representation while keeping relatively good generalization performance. Moreover, our method has the advantage of being free from the selection of a regularization parameter.
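A generic sketch of the greedy procedure the paper analyzes (with a hand-picked stopping threshold, whereas the paper derives the threshold from its probabilistic variance bound): at each step, append the candidate basis function that most reduces the regularized cost, and stop when the reduction becomes too small.

```python
# Regularized greedy forward selection with a simple stopping threshold.
import numpy as np

def forward_select(Phi, y, lam=1e-2, threshold=1e-3):
    n, m = Phi.shape
    selected, cost = [], float(y @ y) / n
    while len(selected) < m:
        best = None
        for j in set(range(m)) - set(selected):
            cols = selected + [j]
            A = Phi[:, cols]
            w = np.linalg.solve(A.T @ A + lam * np.eye(len(cols)), A.T @ y)
            c = float(np.sum((y - A @ w) ** 2)) / n + lam * float(w @ w)
            if best is None or c < best[1]:
                best = (j, c)
        j, c = best
        if cost - c < threshold:          # cost reduction too small: stop
            break
        selected.append(j)
        cost = c
    return selected

rng = np.random.default_rng(1)
Phi = rng.normal(size=(100, 20))
y = 2 * Phi[:, 3] - Phi[:, 7] + 0.05 * rng.normal(size=100)
print(forward_select(Phi, y))             # typically picks columns 3 and 7
```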
String analysis is a static analysis of the dynamically generated strings in a target program, and it is applied to check well-formed string construction in web applications. String analysis constructs a finite state automaton that approximates the set of possible strings generated for a particular string variable at a program location at runtime. A drawback of string analysis is the imprecision of its results, which leads to false positives in well-formedness checkers. To address this imprecision, this paper proposes a technique that improves string analysis so that it performs a more precise analysis with respect to input validation in web applications. The improvement is realized by annotations that represent the screening of a set of possible strings, and it is evaluated empirically through experiments with the improved analyzer on real-world web applications.
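A toy sketch of the annotation idea (finite sets stand in for the automata a real string analyzer uses): the analyzer over-approximates the strings a variable may hold, an annotation declares the screening that input validation performs, and intersecting the two removes strings the validation would reject, eliminating downstream false positives.

```python
# Screening annotation applied to an over-approximated string set.
import re

analyzed = {"abc", "abc'; DROP TABLE users;--", "42", "O'Brien"}  # over-approximation
screen = re.compile(r"[A-Za-z0-9 ]*")          # annotation: validator's whitelist

refined = {s for s in analyzed if screen.fullmatch(s)}
print(refined)                                  # {'abc', '42'} -- injection string gone
```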
Amir Masoud GHAREHBAGHI Masahiro FUJITA
In this paper, we address the problem of ordering transactions in networks-on-chip (NoCs) for post-silicon validation. The main idea is to extract the order of the transactions from the local partial orders in each NoC tile based on a set of "happened-before" rules, assuming transactions do not have a timestamp. This assumption is based on the fact that implementing and using a global time as a timestamp in such systems may not be practical or efficient. When a new transaction is received in a tile, we send special messages to the neighboring tiles to inform them of the new transaction. The process of sending these special messages continues recursively in all the tiles that receive them until another such special message is detected. In this way, we relate the local orders of different tiles to each other. We show that our method can reconstruct the correct transaction orders when communication delays are deterministic. We have shown the effectiveness of our method by correctly ordering the transactions in NoCs with mesh and torus topologies of different sizes, from 5×5 to 9×9. We have also implemented the proposed method in hardware to show its feasibility.
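The reconstruction step can be pictured as a graph problem (a sketch of the principle with invented edge lists, not the authors' algorithm): local per-tile orders plus the cross-tile "happened-before" edges established by the notification messages form a DAG, and a topological sort of that DAG yields a consistent global transaction order.

```python
# Rebuild a global transaction order from local orders + cross-tile edges.
from graphlib import TopologicalSorter

local_orders = {
    "tile00": ["T1", "T4"],          # T1 observed before T4 in tile (0,0)
    "tile01": ["T2", "T3"],
}
cross_edges = [("T1", "T2"), ("T3", "T4")]   # from notification messages

deps = {}                                     # node -> set of predecessors
for order in local_orders.values():
    for earlier, later in zip(order, order[1:]):
        deps.setdefault(later, set()).add(earlier)
for earlier, later in cross_edges:
    deps.setdefault(later, set()).add(earlier)

print(list(TopologicalSorter(deps).static_order()))  # ['T1', 'T2', 'T3', 'T4']
```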
Masashi SUGIYAMA Makoto YAMADA
The Hilbert-Schmidt independence criterion (HSIC) is a kernel-based statistical independence measure that can be computed very efficiently. However, it requires us to determine the kernel parameters heuristically because no objective model selection method is available. Least-squares mutual information (LSMI) is another statistical independence measure that is based on direct density-ratio estimation. Although LSMI is computationally more expensive than HSIC, LSMI is equipped with cross-validation, and thus the kernel parameter can be determined objectively. In this paper, we show that HSIC can actually be regarded as an approximation to LSMI, which allows us to utilize cross-validation of LSMI for determining kernel parameters in HSIC. Consequently, both computational efficiency and cross-validation can be achieved.
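For reference, the empirical HSIC statistic has a compact closed form: with Gram matrices K (for x) and L (for y) and the centering matrix H, HSIC = tr(KHLH)/(n-1)². The sketch below computes it with Gaussian kernels, whose widths are exactly the parameters that, per the paper, can be tuned by borrowing LSMI's cross-validation.

```python
# Empirical HSIC with Gaussian kernels (sigma values are the free parameters).
import numpy as np

def gram_gauss(z, sigma):
    d2 = (z[:, None] - z[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sx=1.0, sy=1.0):
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K, L = gram_gauss(x, sx), gram_gauss(y, sy)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=300)
print("dependent  :", hsic(x, np.sin(2 * x) + 0.1 * rng.normal(size=300)))
print("independent:", hsic(x, rng.normal(size=300)))
```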
Shigeaki TAGASHIRA Yutaka KAMINISHI Yutaka ARAKAWA Teruaki KITASUKA Akira FUKUDA
Data caching is widely known as an effective power-saving technique in which mobile devices use local caches instead of the original data placed on a server, thereby reducing the power consumed by network accesses. In such data caching, a cache invalidation mechanism is important to prevent devices from unintentionally accessing invalid data. In this paper, we propose a broadcast-based protocol for cache invalidation in a location-aware system. The proposed protocol is designed to reduce the access time required for obtaining the necessary invalidation reports through broadcast media and to avoid client-side sleep fragmentation while retrieving the reports. In the proposed protocol, a Bloom filter is used as the data structure of an invalidation report, in order to probabilistically check the invalidation of caches. Furthermore, we propose three broadcast scheduling methods intended to achieve flexible broadcasting structured by the Bloom filter: the fragmentation avoidance scheduling method (FASM), the metrics balancing scheduling method (MBSM), and the minimizing access time scheduling method (MASM). The broadcast schedule is arranged for consecutive accesses to geographically neighboring invalidation reports. The effectiveness of the proposed methods is evaluated by simulation. The results indicate that the MBSM and MASM produce high-performance schedules. Compared to the FASM, the MBSM reduces the access time by 34% while increasing the number of fragmentations in the resulting schedule by 40%, and the MASM reduces the access time by 40% with an 85% increase in the number of fragmentations.
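A minimal Bloom filter of the kind used as the invalidation-report data structure (sizes, hash counts, and item IDs below are illustrative): the server inserts the IDs of invalidated items, clients test their cached IDs, and false positives only cause harmless re-fetches, never a stale read.

```python
# Bloom-filter invalidation report (toy parameters).
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

report = BloomFilter()
for invalid_id in ("poi:cafe-12", "poi:atm-7"):
    report.add(invalid_id)
print(report.maybe_contains("poi:cafe-12"))   # True  -> discard cached copy
print(report.maybe_contains("poi:park-3"))    # False -> cache still usable
```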
Kei HASHIMOTO Heiga ZEN Yoshihiko NANKAKU Akinobu LEE Keiichi TOKUDA
This paper proposes Bayesian context clustering using cross validation for hidden Markov model (HMM) based speech recognition. The Bayesian approach is a statistical technique for estimating reliable predictive distributions by treating model parameters as random variables. The variational Bayesian method, which is widely used as an efficient approximation of the Bayesian approach, has been applied to HMM-based speech recognition and shows good performance. Moreover, the Bayesian approach can select an appropriate model structure while taking into account the amount of training data. Since the prior distributions that represent prior information about model parameters affect both the estimation of the posterior distributions and the selection of model structure (e.g., decision-tree-based context clustering), the determination of prior distributions is an important problem. However, it has not been thoroughly investigated in speech recognition, and existing techniques for determining prior distributions have not performed well. The proposed method can determine reliable prior distributions without any tuning parameters and select an appropriate model structure while taking into account the amount of training data. Continuous phoneme recognition experiments show that the proposed method achieves higher performance than conventional methods.
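To illustrate the role cross validation plays in structure selection (a sketch of the principle with toy Gaussians standing in for HMM state output distributions, not the paper's variational Bayesian criterion): a candidate decision-tree split is accepted only if it improves held-out log-likelihood, which automatically accounts for the amount of training data.

```python
# Accept a context split only if it improves cross-validated log-likelihood.
import numpy as np

def cv_loglik(data, k=4):
    folds = np.array_split(data, k)
    total = 0.0
    for i in range(k):
        held = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        mu, var = train.mean(), train.var() + 1e-6
        total += np.sum(-0.5 * (np.log(2 * np.pi * var) + (held - mu) ** 2 / var))
    return total

def accept_split(data, mask):
    return cv_loglik(data[mask]) + cv_loglik(data[~mask]) > cv_loglik(data)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
rng.shuffle(data)
mask = data < 0                       # candidate context question
print(accept_split(data, mask))       # True: the split helps on held-out data
```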
Micro-gap electrostatic discharge (ESD) events caused by a human with charge voltages below 1000 V cause serious malfunctions in high-tech information devices. To clarify this mechanism, it is indispensable to understand the spark process of such micro-gap ESDs. For this purpose, the two spark-resistance laws proposed by Rompe-Weizel and Toepler have often been used; they were derived from the hypotheses that spark conductivity is proportional to the internal energy and to the charge injected into the spark channel, respectively. However, their validity has not been well verified. To examine which spark-resistance formula can be applied to micro-gap ESDs, we previously measured, with a 12-GHz digital oscilloscope, the discharge currents through a hand-held metal piece from a charged human for charge voltages of 200 V and 2000 V, and thereby derived the conductance of the spark gap, revealing that both hypotheses are roughly valid in the initial stage of sparks. In this study, to further verify the above spark hypotheses, we derived closed-form expressions for the discharge voltage across a spark gap based on the above spark-resistance formulae, and we investigated which formula is applicable to micro-gap ESDs by comparing the spark gaps estimated from the measured discharge currents. As a result, we found that Rompe-Weizel's formula explains the spark properties of micro-gap ESDs better than Toepler's, regardless of charge voltage and approach speed.
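The two laws can be made concrete numerically (a sketch under textbook assumptions: the simple C-R discharge circuit, gap length, and the constants below are typical literature values, not the paper's measured setup). Toepler gives R = k_T·d / ∫i dt, Rompe-Weizel gives R = d / √(2a·∫i² dt), and both can be stepped forward in time through an RC discharge.

```python
# Numeric comparison of Toepler vs Rompe-Weizel spark-resistance laws
# in a human-body-model-like C-R discharge (all constants are assumptions).
import math

C, R_BODY, V0 = 150e-12, 330.0, 1000.0       # capacitance, body resistance, charge voltage
D = 50e-6                                     # 50 um spark gap (assumption)
K_T = 0.5e-4                                  # Toepler constant [V s/m], typical value
A_RW = 1e-4                                   # Rompe-Weizel constant [m^2/(V^2 s)], typical

def simulate(law, dt=1e-12, steps=4000):
    v, q_int, e_int, out = V0, 1e-15, 1e-15, []
    for n in range(steps):
        r_spark = (K_T * D / q_int if law == "toepler"
                   else D / math.sqrt(2 * A_RW * e_int))
        i = v / (R_BODY + r_spark)
        q_int += i * dt                       # integral of i   (Toepler)
        e_int += i * i * dt                   # integral of i^2 (Rompe-Weizel)
        v -= i * dt / C                       # capacitor discharges
        out.append((n * dt, i))
    return out

for law in ("toepler", "rompe-weizel"):
    peak = max(i for _, i in simulate(law))
    print(f"{law:13s}: peak current ~ {peak:.2f} A")
```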
Noboru HATTORI Shuichiro YAMAMOTO Tsuneo AJISAKA Tsuyoshi KITANI
We propose requirement validation criteria and a method based on the interaction between actors in an information system. We focus on the cyclical transitions of one actor's situation relative to another and clarify the observable stimuli and responses based on these transitions. Both actors' situations can be listed in a state transition table that describes the observable stimuli or responses they send or receive. Examining the interaction between the two actors in the state transition tables enables us to detect missing or defective observable stimuli or responses. Typically, this method can be applied to examining the interaction between a resource managed by the information system and its user. As a case study, we analyzed 332 requirement defect reports from an actual system development project in Japan. We found a certain number of defects involving missing or defective stimuli and responses, which could have been detected by our proposed method had it been used in the requirement definition phase. This means that we can reach a more complete requirement definition with our proposed method.
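A toy version of the proposed check (the transition tables below are invented for illustration): pair each stimulus one actor sends with a transition on the other actor that consumes it, and flag any stimulus with no consuming transition as a missing or defective requirement.

```python
# Detect stimuli one actor sends that the other actor never handles.
user = {("Idle", "insert_card"): "WaitPIN",
        ("WaitPIN", "enter_pin"): "WaitCash"}
system = {("Ready", "insert_card"): "AskPIN"}   # no transition for "enter_pin"!

def missing_handlers(sender, receiver):
    sent = {event for (_, event) in sender}
    handled = {event for (_, event) in receiver}
    return sent - handled

print(missing_handlers(user, system))   # {'enter_pin'}: defect detected
```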
Youzheng WU Hideki KASHIOKA Satoshi NAKAMURA
Given a question and a set of its candidate answers, the task of answer validation (AV) aims to return a Boolean value indicating whether a given candidate answer is the correct answer to the question. Unlike previous works, this paper presents an unsupervised model for AV, called the U-model. This approach regards AV as a classification task and investigates how to effectively incorporate the redundancy of the Web into the proposed architecture. Experimental results on TREC factoid test sets and Chinese test sets indicate that the proposed U-model with redundancy information is very effective for AV. For example, the top@1/mrr@5 scores on the TREC05 and TREC06 tracks are 40.1/51.5% and 35.8/47.3%, respectively. Furthermore, a cross-model comparison experiment demonstrates that the U-model is the best among the redundancy-based models considered. Even compared with a syntax-based approach, a supervised machine learning approach, and a pattern-based approach, the U-model performs much better.
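The redundancy intuition behind such models can be sketched as follows (our illustration, not the U-model itself): a candidate answer that co-occurs with the question's key terms across many documents scores higher. A tiny in-memory corpus stands in for the Web, which would normally be queried through a search engine.

```python
# Redundancy-based scoring of candidate answers over a toy corpus.
corpus = [
    "mount everest is the highest mountain on earth",
    "the highest mountain in the world is mount everest",
    "k2 is the second highest mountain",
]

def redundancy_score(question_terms, candidate):
    # Count pseudo-documents where the candidate and all key terms co-occur.
    return sum(all(t in doc for t in question_terms + [candidate])
               for doc in corpus)

q = ["highest", "mountain"]
for cand in ("mount everest", "k2"):
    print(cand, redundancy_score(q, cand))   # mount everest: 2, k2: 1
```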