Bo GU Zhi LIU Cheng ZHANG Kyoko YAMORI Osamu MIZUNO Yoshiaki TANAKA
The demand for wireless traffic is increasing rapidly, posing huge challenges to mobile network operators (MNOs). A heterogeneous network (HetNet) framework composed of a macrocell and femtocells has proven to be an effective way to cope with the fast-growing traffic demand. In this paper, we assume that both the macrocell and the femtocells are owned by the same MNO, whose ultimate goal is revenue optimization. We propose a pricing strategy for macro-femto HetNets with a user-centric vision: mobile users act in their own interest, making rational decisions when selecting between the macrocell and femtocells to maximize their individual benefit. We formulate a Stackelberg game to analyze the interactions between the MNO and the users, and obtain the equilibrium solution of the game. Via extensive simulations, we evaluate the efficiency of the proposed pricing strategy with respect to revenue optimization.
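As a toy illustration of the leader-follower structure described above (not the paper's actual utility or demand model), the following Python sketch has the MNO scan candidate femtocell prices while users best-respond by choosing the tier with the higher net utility:

```python
# Toy Stackelberg pricing sketch; the utility and demand functions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_users = 1000
u_femto = rng.uniform(0.5, 2.0, n_users)   # per-user valuation of femtocell service
u_macro = rng.uniform(0.2, 1.0, n_users)   # per-user valuation of macrocell service
p_macro = 0.4                              # fixed macrocell price (hypothetical)

def revenue(p_femto):
    # Followers (users) best-respond: pick the tier with the higher net utility.
    choose_femto = (u_femto - p_femto) > (u_macro - p_macro)
    return p_femto * choose_femto.sum() + p_macro * (~choose_femto).sum()

# Leader (MNO) anticipates user responses and picks the revenue-maximizing price.
prices = np.linspace(0.0, 2.0, 201)
best = max(prices, key=revenue)
print(f"equilibrium femtocell price ~ {best:.2f}, revenue = {revenue(best):.1f}")
```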
Osamu MIZUNO Daisuke SHIMODA Tohru KIKUNO Yasunari TAKAGI
This paper presents an enhancement of a software project simulator that adds risk prediction with cost estimation capability. Previously, we developed a software project simulator in which a development process is described by a Petri net model; it was successfully applied to actual project data from a certain company. Separately, we presented a risk prediction system that finds "risky" projects by statistical analysis of risk questionnaires answered by project managers. That approach, however, calculated only the probability that a project is risky, and the managers in the company wanted concrete evidence of why a software project becomes risky. In this paper, to provide such evidence, we enhance the previous project simulator so that it can deal with risk factors. Specifically, we modify the simulator so that both the fluctuation of skill level and deadline pressure are represented as simulator parameters. Through a case study, we confirm that the enhanced simulator can estimate the development cost under some typical risks. As a result, we can expect the simulator to show how much the development cost of a risky project exceeds its estimate.
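As a minimal illustration of the idea (with hypothetical parameters and a far simpler cost model than the Petri-net simulator), skill-level fluctuation and deadline pressure might be exposed as simulator inputs like this:

```python
# Minimal sketch, not the paper's simulator: risk factors as explicit parameters.
import random

def simulate_project(base_cost=100.0, skill_sigma=0.1, deadline_pressure=0.0, runs=1000):
    """Return the mean development cost under the given (hypothetical) risk parameters."""
    costs = []
    for _ in range(runs):
        skill = max(0.1, random.gauss(1.0, skill_sigma))   # fluctuating skill level
        rework = 1.0 + 0.5 * deadline_pressure             # pressure induces rework
        costs.append(base_cost / skill * rework)
    return sum(costs) / runs

print("no risk:      ", round(simulate_project(), 1))
print("risky project:", round(simulate_project(skill_sigma=0.3, deadline_pressure=0.4), 1))
```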
This paper describes a computer-aided service creation environment (CSCE) for the intelligent network that supports easier graphical specification description for service designers of various skill levels, as well as service logic program (SLP) generation. The CSCE design concept consists of stepwise service specification description and SLP generation, a message sequence chart description language (LSDL: Layered Service Specification Description Language), computer-aided sophisticated interfaces (IEDs: Intelligent Editors), automatic specification verification, and rapid service prototyping. A service specification is described in three steps, in LSDL or SDL, and SLPs are generated through three converters that refer to two knowledge databases. Three tests are conducted on the described specifications. The effectiveness of the CSCE is demonstrated by results showing that, for five new practical services, the amount of SLP description in LSDL is reduced to less than about 20% of the corresponding C language description.
Takeshi SUMI Osamu MIZUNO Tohru KIKUNO Masayuki HIRAYAMA
With the proliferation of ubiquitous computing, various products containing large-scale embedded software have been developed. One of the most typical features of embedded software is the concurrency of software and hardware factors: the software is deeply connected to hardware devices. The variety of hardware involved makes quality assurance of embedded software more difficult. To assure the quality of embedded software more effectively, this paper discusses the features of embedded software and an effective quality assurance method for it. We first analyze a failure distribution of embedded software and discuss the effects of hardware devices on its quality. Currently, reducing hardware-related faults requires a huge testing effort with a large number of test items; thus, one of the most important issues for quality assurance of embedded software is how to reduce the cost and effort of software testing. Next, focusing on hardware constraints as well as software specifications, we propose evaluation metrics for determining the functions important to the quality of embedded software. Furthermore, by referring to the metrics, undesirable behaviors of important functions are identified as root nodes for fault tree analysis. From the results of a case study applying the proposed method to actual project data, we confirmed that test items reflecting the properties of embedded software are constructed, and that the constructed test items are appropriate for detecting hardware-related faults in embedded systems.
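A hypothetical sketch of the prioritization step (the metric fields and weights below are assumptions, not the paper's actual metrics) might rank functions and take their undesirable behaviors as fault-tree root nodes:

```python
# Hypothetical sketch: rank functions by a hardware-awareness score, then derive
# fault-tree root nodes from the top-ranked functions' undesirable behaviors.
functions = [
    {"name": "motor_control", "hw_constraints": 5, "spec_complexity": 3},
    {"name": "ui_render",     "hw_constraints": 1, "spec_complexity": 4},
    {"name": "sensor_read",   "hw_constraints": 4, "spec_complexity": 2},
]

def importance(f, w_hw=0.7, w_spec=0.3):
    # Weighted score emphasizing hardware constraints (weights are assumptions).
    return w_hw * f["hw_constraints"] + w_spec * f["spec_complexity"]

for f in sorted(functions, key=importance, reverse=True)[:2]:
    print(f"FTA root node: undesirable behavior of '{f['name']}'")
```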
Satoru UEHARA Osamu MIZUNO Tohru KIKUNO
In this paper we discuss the estimation of the effort needed to update program code according to given design specification changes. In Object-Oriented incremental development (OOID), requirement changes occur frequently and regularly. When a requirement change occurs, the design specification is changed accordingly, and the program code is then updated for the given design specification change. To construct the development plan dynamically, a simple and fast method of estimating code-updating effort is strongly required by both developers and managers. However, existing estimation methods cannot be applied to the OOID. We therefore propose a straightforward approach to estimating code-updating effort that reflects the specific properties of the OOID. We identify the following factors for the effort estimation: (1) updating activities consist of creation, deletion, and modification; (2) the target to be updated has one of four types (void type, basic type, library type, and custom type); (3) the degree of information hiding is classified into private, protected, and public; and (4) the degree of inheritance affects updating effort. We then propose a new formula E(P,σ) to calculate the effort needed to update a program P according to a set of design specification changes σ. The formula E(P,σ) includes weighting parameters Wupd, Wtype, Winf-h, and Winht corresponding to characteristics (1), (2), (3), and (4), respectively. Finally, we conduct experimental evaluations by applying the formula E(P,σ) to actual project data from a certain company. The evaluation results statistically showed the validity of the proposed approach to some extent.
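The abstract does not give the closed form of E(P,σ); one plausible reading, consistent with factors (1)-(4) and offered here only as an assumption, sums a product of the four weights over the individual changes:

```latex
E(P,\sigma) \;=\; \sum_{c \,\in\, \sigma}
  W_{\mathrm{upd}}(c)\; W_{\mathrm{type}}(c)\; W_{\mathrm{inf\text{-}h}}(c)\; W_{\mathrm{inht}}(c)
```

where each change c in σ contributes a weight for its update kind, target type, information-hiding level, and inheritance depth.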
Osamu MIZUNO Yuichi SHIMAMURA Kazuhiro NAGAYAMA
The market for IP convergence services is expanding rapidly due to the rising number of Internet users. To respond to market trends, service systems must provide services quickly. This paper discusses an application server, called the service agent, which provides IP convergence services. The service agent meets four requirements for application servers: centralized intelligence, support for various interfaces, service creativity, and scalability. Its architecture is based on that of AIN systems, but the whole system is written in Java, especially to achieve service creativity and scalability. A trial implementation confirmed the feasibility and scalability of the service agent, and sufficient performance for commercial services was also confirmed.
Osamu MIZUNO Shinji KUSUMOTO Tohru KIKUNO Yasunari TAKAGI Keishi SAKAMOTO
In this paper, we consider a simple development process consisting of design and debug phases, derived from an actual concurrent development process for embedded software at a certain company. We then propose two-phase project control, which examines the initial development plan at the end of the design phase, updates it to reflect the current status of the development process, and executes the debug phase under the new plan. To show its usefulness, we define three imaginary projects based on projects actually executed in a certain company: project A, which executes the debug phase under the initial plan; project B, which applies the proposed approach; and project C, which follows a uniform plan. To execute these projects, we use the project simulator we previously developed based on a GSPN model. Judging from the number of residual faults in all products, we found that project B is the best among them.
Khine Yin MON Masanari KONDO Eunjong CHOI Osamu MIZUNO
Defect prediction approaches have contributed greatly to software quality assurance activities such as code review and unit testing. Just-in-time defect prediction approaches have been developed to predict whether a commit is defect-inducing or not. Prior research has shown that commit-level prediction is insufficient in terms of inspection effort, and that a defective commit may contain both defective and non-defective files. As the defect prediction community moves toward finer-grained prediction, in this research we propose a novel class-level prediction approach, finer-grained than file-level prediction, based on the files of the commits. We designed our model for Python projects and tested it with ten open-source Python projects. We performed our experiment with two settings: one with product metrics only and one with product metrics plus commit information. Our investigation was conducted with three different classifiers and two validation strategies. We found that the model built with the random forest classifier performs best, and that commit information contributes significantly beyond the product metrics in 10-fold cross-validation. We also created a commit-based file-level prediction for the Python files which do not contain classes. The file-level model showed similar behavior to the class-level model. However, the results showed a large deviation in time-series validation at both levels, highlighting the challenge of predicting Python classes and files in a realistic scenario.
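A minimal sketch of the evaluation setting described above, assuming scikit-learn and stand-in data in place of the ten Python projects:

```python
# Random forest under 10-fold cross-validation; X would hold product metrics
# (optionally plus commit information), y the defect labels. Data here is synthetic.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=42)  # stand-in data
model = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(model, X, y, cv=10, scoring="f1")
print(f"10-fold CV F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```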
This paper describes a novel approach for detecting fault-prone modules using a spam filtering technique. Detecting fault-prone modules in source code is important for the assurance of software quality. Most previous fault-prone detection approaches have been based on software metrics; such approaches, however, have difficulties in collecting the metrics and constructing mathematical models based on them. Driven by the growing need for spam e-mail detection, spam filtering has matured into a convenient and effective text-mining technique. In our approach, fault-prone modules are detected by treating source code modules as text files and applying them to the spam filter directly. To show the applicability of our approach, we conducted experimental applications using source code repositories of Java-based open source projects. The experimental results show that our approach can correctly predict 78% of actual fault-prone modules as fault-prone.
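The following sketch illustrates the core idea with a Bayesian text classifier standing in for the authors' spam filter; the module texts and labels are hypothetical:

```python
# Treat each source module as a text document and train a Bayesian text classifier.
# This scikit-learn pipeline is a stand-in, not the authors' actual spam filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

modules = ["int div(int a,int b){return a/b;}",      # hypothetical module texts
           "void log(const char*m){puts(m);}"]
labels = [1, 0]                                      # 1 = fault-prone, 0 = not

clf = make_pipeline(CountVectorizer(token_pattern=r"\w+"), MultinomialNB())
clf.fit(modules, labels)
print(clf.predict(["int mod(int a,int b){return a%b;}"]))  # classify a new module
```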
Masaki YOSHII Ryohei BANNO Osamu MIZUNO
New services can use fog nodes to distribute Internet of Things (IoT) data. To distribute IoT data, we apply the publish/subscribe messaging model to a fog computing system. A service provider assigns a unique identifier, called a Tag ID, to a player who owns data. A Tag ID is matched against multiple IDs and resolves the naming rule used for data acquisition. However, when users configure their fog node and distribute IoT data to multiple players, the distributed data may contain private information. To address this issue, we propose a table-based access control list (ACL) to manage data transmission permissions. A table-based ACL makes it possible to avoid unnecessary transmission of private data. Furthermore, because there are fewer data transmissions, the table-based ACL reduces traffic; consequently, the overall system's average processing delay time can be reduced. Simulation results confirmed the performance of the proposed method; in particular, the table-based ACL reduced processing delay time by approximately 25% under certain conditions. We also examined system security, performing a qualitative evaluation to demonstrate that the proposed method preserves security.
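A minimal sketch of a table-based ACL at a fog node (all names and fields are hypothetical) could filter each message down to the fields a subscriber is permitted to receive before forwarding:

```python
# Table-based ACL sketch: (tag_id, subscriber) -> set of fields that may be forwarded.
# Anything not permitted is dropped before transmission, which also saves traffic.
ACL = {
    ("tag-001", "playerA"): {"temperature", "humidity"},
    ("tag-001", "playerB"): {"temperature"},            # no private fields
}

def filter_publish(tag_id, subscriber, message: dict) -> dict:
    allowed = ACL.get((tag_id, subscriber), set())
    return {k: v for k, v in message.items() if k in allowed}

msg = {"temperature": 22.5, "humidity": 40, "owner_name": "Alice"}
print(filter_publish("tag-001", "playerB", msg))   # -> {'temperature': 22.5}
```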
Osamu MIZUNO Akira SHIBATA Toshiya OKAMOTO Yoshihiro NIITSU
An advanced intelligent network (IN) provides service management along with telecommunication services and has a two-layer architecture, i.e., a transmission layer and an intelligent layer. An advanced IN's programmability is achieved by a service-independent platform of nodes in the intelligent layer together with service-dependent software called logic programs. In contrast to telecommunication services, models for service management have not yet been established. This paper presents both execution and specification models for service management. The execution model is composed of three hierarchies that apply to various kinds of management operations. The specification model has the capability to define the details of data items. A specification language for service management is also proposed. Simulation on a dynamic-SQL-based DBMS showed that (1) logic programs for service management can be kept small under the model, and (2) to provide efficient database operation, programmability must be enhanced when service management involves operations on tables with a variable number of fields.
Osamu MIZUNO Joji URATA Yoshiko SUEDA Yoshihiro NIITSU
The Advanced Intelligent Network (Advanced IN) is now commercialized, and the Internet is becoming popular all over the world. If these two networks were connected, the potential would exist for new services. This paper surveys and analyzes the possibility of improving both the Internet and the Advanced IN through an Advanced IN-Internet connection. Service customization, which allows customers to define their own service specifications, is one of the most important service applications for the Advanced IN. However, some issues must be resolved before that service can be offered. This paper proposes a solution in which Internet technologies are applied to the IN. We review the system architecture of Service Logic Program (SLP) definition and execution in NTT's IN for service customization. Version management and delivery cost are the major issues for service customization with the SLP(C) creation tool, and we suggest an Internet version of the SLP(C) creation tool to solve these problems. Results from a prototype show that connecting the IN and the Internet for service customization will benefit both customers and telecommunication operators.
Sousuke AMASAKI Yasunari TAKAGI Osamu MIZUNO Tohru KIKUNO
Recently, software development projects have been required to produce highly reliable systems within a short period and at low cost. In such a situation, software quality prediction helps to confirm that the software product satisfies the required quality expectations. In this paper, using a Bayesian Belief Network (BBN), we construct a prediction model based on relationships elicited from the embedded software development process. Reflecting a characteristic of embedded software development, we propose classifying test and debug activities into two distinct activities, one for software and one for hardware; we call the resulting model "the BBN for an embedded software development process". For comparison, we define "the BBN for a general software development process" as a model that does not make this classification but merges the activities into a single one. Finally, we conducted experimental evaluations by applying these two BBNs to actual project data. As the results of our experiments show, the BBN for the embedded software development process is superior to the BBN for the general development process and is effective for practical use.
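As a toy sketch of the modeling idea, with software-test and hardware-test outcomes as separate parent nodes of a quality node (the structure and probabilities are invented for illustration, not the paper's elicited relationships):

```python
# Toy two-parent Bayesian network: marginalize quality over both test outcomes.
P_sw_ok = 0.8                     # P(software test succeeds)  -- invented
P_hw_ok = 0.7                     # P(hardware test succeeds)  -- invented
# P(final quality is high | sw test outcome, hw test outcome)
P_quality = {(True, True): 0.95, (True, False): 0.6,
             (False, True): 0.5, (False, False): 0.1}

p_high = sum(P_quality[(sw, hw)]
             * (P_sw_ok if sw else 1 - P_sw_ok)
             * (P_hw_ok if hw else 1 - P_hw_ok)
             for sw in (True, False) for hw in (True, False))
print(f"P(quality = high) = {p_high:.3f}")
```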
Masayuki HIRAYAMA Osamu MIZUNO Tohru KIKUNO
To respond to the market's active demand for software with various new functions, system testing must be completed within a limited period. Additionally, important faults, which are closely related to functions essential to users or the target system, have to be removed, preferably during system testing. Many techniques have been proposed for effective software testing; among them, selective software testing is one of the most cost-effective. However, most previous techniques cannot be applied to short-term development or to the initial development of software with various new functions, because their testing preparation is costly. In this paper, we propose a new method for selective system testing in which priorities assigned to functions play an essential role in the execution of testing. The priorities are determined from the evaluation results of three metrics for each function: the frequency of use, the complexity of the use scenario, and the fault impact on users. Detailed testing instructions are assigned to test items with high priority, and short, ordinary instructions are assigned to those with low priority; the difference in the volume of testing instructions controls the effort spent checking test items. As a result of an experimental application to actual software testing in a certain company, we confirmed that the proposed selective system testing can detect both fatal faults related to key functions and faults critical to the system.
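A hypothetical sketch of the priority assignment (the weights and threshold are assumptions) combines the three metrics and switches between detailed and short instructions:

```python
# Combine the three function metrics into a priority score, then choose the
# testing-instruction style. Weights, scales, and threshold are assumptions.
def priority(freq_of_use, scenario_complexity, fault_impact, w=(0.4, 0.3, 0.3)):
    return w[0] * freq_of_use + w[1] * scenario_complexity + w[2] * fault_impact

test_items = {"call_setup": (5, 4, 5), "ring_tone": (3, 1, 2)}  # hypothetical functions
for name, metrics in test_items.items():
    kind = "detailed" if priority(*metrics) >= 3.0 else "short"
    print(f"{name}: {kind} testing instruction")
```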
Juntong HONG Eunjong CHOI Osamu MIZUNO
Code search is the task of retrieving the most relevant code given a natural language query. Several recent studies have proposed deep-learning-based methods that use a multi-encoder model to parse code into multiple fields for code representation. These methods enhance model performance by distinguishing between similar code fragments and by utilizing a relation matrix to bridge the code and the query. However, such models require more computational resources and parameters than single-encoder models, and a relation matrix that relies solely on max-pooling discards word-alignment information. To alleviate these problems, we propose a combined alignment model for code search. We concatenate the multiple code fields into one sequence to represent the code and use a single encoder to encode the code features. Moreover, we transform the relation matrix using trainable vectors to avoid information loss. We then combine intra-modal and cross-modal attention to emphasize the salient words while matching the corresponding code and query. Finally, we apply the attention weights to the code/query embeddings and compute the cosine similarity. To evaluate the performance of our model, we compare it with six previous models on two popular datasets. The results show that our model achieves Top@1 performance of 0.614 and 0.687, outperforming the best comparison models by 12.2% and 9.3%, respectively.
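A sketch of the final scoring step only, with random arrays standing in for the encoder outputs and attention modules (which are omitted here):

```python
# Attention-weighted pooling of token embeddings followed by cosine similarity.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
code_tokens = rng.normal(size=(12, 64))    # stand-in for code-encoder token embeddings
query_tokens = rng.normal(size=(6, 64))    # stand-in for query-encoder token embeddings
code_attn = rng.random(12); code_attn /= code_attn.sum()     # stand-in attention weights
query_attn = rng.random(6); query_attn /= query_attn.sum()

code_vec = code_attn @ code_tokens         # weighted pooling -> sequence embedding
query_vec = query_attn @ query_tokens
print(f"relevance score = {cosine(code_vec, query_vec):.3f}")
```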
Keisuke TSUNODA Akihiro CHIBA Kazuhiro YOSHIDA Tomoki WATANABE Osamu MIZUNO
In this paper, we propose a low-invasive framework for predicting changes in cognitive performance using only heart rate variability (HRV). Although many studies have tried to estimate cognitive performance using multiple vital signs or electroencephalogram data, these methods are invasive because they force users to attach many sensor units or electrodes to their bodies. To address this problem, we previously proposed a method for estimating cognitive performance using only HRV, which can be measured with as few as two electrodes. However, estimation alone cannot prevent loss of worker productivity: by the time a worker's current cognitive performance is estimated to be low, productivity has already decreased. In this paper, we propose a framework for predicting changes in cognitive performance in the near future. This paper makes three principal contributions: (1) An experiment with 45 healthy male participants clarified that changes in cognitive performance caused by mental workload can be predicted using only HRV. (2) The proposed framework, which includes a support vector machine and principal component analysis, predicts changes in cognitive performance caused by mental workload with 84.4% accuracy. (3) Significant differences were found in some HRV features across participants, depending on whether or not their cognitive performance changes had been predicted accurately. These results lead us to conclude that the framework has the potential to help both workers and managerial personnel predict near-future performance, making it possible to proactively suggest rest periods or changes in work duties to prevent productivity losses caused by decreased cognitive performance.
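A minimal sketch of the PCA-plus-SVM core of the framework, assuming scikit-learn; the HRV features and labels below are placeholders, not the experiment's data:

```python
# Pipeline: standardize HRV features, reduce with PCA, classify with an SVM.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(45, 10))          # 45 participants x 10 HRV features (dummy)
y = rng.integers(0, 2, size=45)        # 1 = performance will drop, 0 = stable (dummy)

clf = Pipeline([("scale", StandardScaler()),
                ("pca", PCA(n_components=5)),
                ("svm", SVC(kernel="rbf"))])
print(cross_val_score(clf, X, y, cv=5).mean())   # placeholder evaluation
```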