Kosetsu TSUKUDA Keisuke ISHIDA Masahiro HAMASAKI Masataka GOTO
This paper describes a public web service called Kiite Cafe that lets users get together virtually to listen to music. When users listen to music on Kiite Cafe, their experiences are enhanced by two architectures: (i) visualization of each user's reactions and (ii) selection of songs from users' favorite songs. These architectures enable users to feel a social connection with others and the joy of introducing others to their favorite songs, as if they were listening to music together in person. In addition, the architectures provide three user experiences: (1) motivation to react to played songs, (2) the opportunity to listen to a diverse range of songs, and (3) the opportunity to contribute as a curator. By analyzing the behavior logs of 2,399 Kiite Cafe users over a year, we quantitatively show that these user experiences can generate various effects (e.g., users react to a more diverse range of songs on Kiite Cafe than when listening alone). We also discuss how our proposed architectures can enrich music listening experiences with others.
Hongwei YANG Fucheng XUE Dan LIU Li LI Jiahui FENG
Service composition optimization is a classic NP-hard problem. How to quickly select high-quality services that meet user needs from a large number of candidate services is a hot topic in cloud service composition research. This study proposes an efficient second-order beetle swarm optimization algorithm with global search ability to solve the cloud service composition optimization problem. First, the beetle antennae search algorithm is introduced into a modified particle swarm optimization algorithm, the population is initialized using a chaotic sequence, and modified nonlinear dynamic trigonometric learning factors are adopted to control the particles' exploration capacity and global convergence capability. Second, modified secondary oscillation factors are incorporated to increase the search precision and global searching ability of the algorithm, and an adaptive step adjustment is utilized to improve its stability. Experimental results based on a real data set indicate that the proposed global optimization algorithm can solve web service composition optimization problems in a cloud environment: it exhibits excellent global searching ability, comparatively fast convergence, favorable stability, and low time cost.
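Two of the ingredients named above can be illustrated roughly with the Python sketch below: chaotic population initialization and a nonlinear trigonometric schedule for the PSO learning factors. The logistic map, its parameters, and the cosine schedule are assumptions for illustration; the abstract does not give the paper's exact formulas.

```python
import numpy as np

def chaotic_init(pop_size, dim, lower, upper, mu=4.0, seed=0.37):
    """Initialize a swarm with a logistic-map chaotic sequence.

    A generic logistic-map scheme; mu and seed are illustrative
    assumptions, not the paper's settings.
    """
    positions = np.empty((pop_size, dim))
    z = seed
    for i in range(pop_size):
        for j in range(dim):
            z = mu * z * (1.0 - z)                     # logistic map in (0, 1)
            positions[i, j] = lower + z * (upper - lower)
    return positions

def trig_learning_factors(t, t_max, c_start=2.5, c_end=0.5):
    """Nonlinear trigonometric schedule for PSO learning factors.

    A cosine ramp is assumed: the cognitive factor c1 decays from
    c_start to c_end while the social factor c2 grows, as iteration t
    approaches t_max.
    """
    ratio = 0.5 * (1.0 + np.cos(np.pi * t / t_max))    # goes from 1 to 0
    c1 = c_end + (c_start - c_end) * ratio
    c2 = c_start + (c_end - c_start) * ratio
    return c1, c2

if __name__ == "__main__":
    swarm = chaotic_init(pop_size=30, dim=10, lower=0.0, upper=1.0)
    print(swarm.shape, trig_learning_factors(t=50, t_max=200))
```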
Rupasingha A. H. M. RUPASINGHA Incheon PAIK Banage T. G. S. KUMARA
With the expansion of the Internet, the number of available Web services has increased. Web service clustering to identify functionally similar clusters has become a major approach to the efficient discovery of suitable Web services. In this study, we propose a Web service clustering approach that uses novel ontology learning and a similarity calculation method based on the specificity of ontology terms in a domain, defined in terms of information theory. Instead of using traditional methods, we generate the ontology using a novel method that considers the specificity and similarity of terms. The specificity of a term describes the amount of domain-specific information contained in that term. While general terms contain little domain-specific information, specific terms may contain much more domain-related information. The generated ontology is used in the similarity calculations. New logic-based filters are introduced for the similarity-calculation procedure. If similarity calculations using the specified filters fail, then information-retrieval-based methods are applied to the similarity calculations. Finally, an agglomerative clustering algorithm, based on the calculated similarity values, is used for the clustering. We achieved highly efficient and accurate results with this clustering approach, as measured by improved average precision, recall, F-measure, purity, and entropy values. According to the results, the specificity of terms plays a major role when classifying domain information. Our novel ontology-based clustering approach outperforms comparable existing approaches that do not consider the specificity of terms.
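The notion of term specificity can be illustrated with a small information-theoretic sketch. The measure below (self-information of the term in a general corpus minus its self-information in the domain corpus, with add-one smoothing) is assumed for illustration and is not the paper's exact formula.

```python
import math

def term_specificity(term, domain_docs, general_docs):
    """Estimate how domain-specific a term is.

    Terms that are rare in general text but frequent in the domain
    corpus receive a high score; common words score low.
    """
    def self_information(t, docs):
        total = sum(len(d) for d in docs)
        count = sum(d.count(t) for d in docs) + 1      # add-one smoothing
        return -math.log2(count / (total + 1))

    return self_information(term, general_docs) - self_information(term, domain_docs)

if __name__ == "__main__":
    domain = [["endpoint", "wsdl", "binding", "soap", "endpoint"]]
    general = [["the", "cat", "sat", "on", "the", "mat", "binding"]]
    print(term_specificity("endpoint", domain, general))   # high specificity
    print(term_specificity("binding", domain, general))    # lower specificity
```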
Kosetsu TSUKUDA Keisuke ISHIDA Masahiro HAMASAKI Masataka GOTO
Creating new content based on existing original work is becoming popular, especially among amateur creators. Such new content is called a derivative work and can itself be transformed into the next new derivative work. Such derivative work creation is called “N-th order derivative creation.” Although derivative creation is popular, the reason an individual derivative work was created is not observable. To infer the factors that trigger derivative work creation, we have proposed a model that incorporates three factors: (1) the original work's attractiveness, (2) the original work's popularity, and (3) the derivative work's popularity. Based on this model, in this paper, we describe a public web service for browsing derivation factors called Songrium Derivation Factor Analysis. Our service is implemented by applying our model to original works and derivative works uploaded to a video sharing service. Songrium Derivation Factor Analysis provides various visualization functions: an Original Works Map, Derivation Tree, Popularity Influence Transition Graph, Creator Distribution Map, and Creator Profile. By displaying such information when users browse and watch videos, we aim to enable them to find new content and understand N-th order derivative creation activity at a deeper level.
Wei LU Weidong WANG Ergude BAO Liqiang WANG Weiwei XING Yue CHEN
Web Service Composition (WSC) has been well recognized as a convenient and flexible way of service sharing and integration in service-oriented application fields. WSC aims at selecting and composing a set of initial services with respect to the Quality of Service (QoS) values of their attributes (e.g., price), in order to complete a complex task and meet user requirements. A major research challenge of the QoS-aware WSC problem is to select a proper set of services that maximizes the QoS of the composite service while meeting several QoS constraints on various attributes, e.g., total price or runtime. In this article, a fast algorithm based on QoS-aware sampling (FAQS) is proposed, which can efficiently find a near-optimal composition from sampled services. FAQS consists of the following five steps. 1) QoS normalization is performed to unify different metrics for QoS attributes. 2) The normalized services are sampled and categorized while guaranteeing a similar number of services in each class. 3) The frequencies of the sampled services are calculated to guarantee that the composed services are the most frequent ones; this ensures that the sampled services cover as many of the initial services as possible. 4) The sampled services are composed by solving a linear programming problem. 5) The initial composition results are further optimized by solving a modified multi-choice multi-dimensional knapsack problem (MMKP). Experimental results indicate that FAQS is much faster than existing algorithms and obtains stable near-optimal results.
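Steps 1) and 2) can be sketched roughly as follows. The min-max normalization and quantile-based classes are standard choices assumed here, and the per-class sample size is illustrative; the abstract does not fix these details.

```python
import numpy as np

def normalize_qos(values, benefit=True):
    """Min-max normalize one QoS attribute to [0, 1].

    Benefit attributes (e.g., availability) keep larger-is-better;
    cost attributes (e.g., price, runtime) are inverted.
    """
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if hi == lo:
        return np.ones_like(values)
    scaled = (values - lo) / (hi - lo)
    return scaled if benefit else 1.0 - scaled

def sample_by_class(scores, n_classes=4, per_class=2, rng=None):
    """Categorize services by normalized score and sample from each class.

    Splitting on sorted order keeps a similar number of services per
    class; drawing per_class services from each class is illustrative.
    """
    rng = np.random.default_rng(rng)
    order = np.argsort(scores)
    classes = np.array_split(order, n_classes)
    sampled = [rng.choice(c, size=min(per_class, len(c)), replace=False)
               for c in classes if len(c) > 0]
    return np.concatenate(sampled)

if __name__ == "__main__":
    price = [10, 25, 7, 40, 18, 31, 5, 22]
    score = normalize_qos(price, benefit=False)
    print(score)
    print(sample_by_class(score, n_classes=4, per_class=1, rng=0))
```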
Hao HAN Yinxing XUE Keizo OYAMA Yang LIU
The rendering mechanism plays an indispensable role in browser-based Web applications. It generates active webpages dynamically and provides human-readable layout through template engines, which are used as a standard programming model to separate the business logic and data computations from the webpage presentation. Owing to advances in rich application technologies, the client-side rendering mechanism has been widely adopted. This adoption brings not only various merits but also new problems. In this paper, we propose and construct “pagelet”, a segment-based template engine for developing flexible and extensible Web applications. By presenting the principles, practice, and usage experience of pagelet, we conduct a comprehensive analysis of the possible advantages and disadvantages brought by the client-side rendering mechanism from the viewpoints of both developers and end-users.
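The segment-based idea can be sketched minimally as follows. The Segment class and render_page helper are hypothetical illustrations of rendering a page from independently re-renderable segments; they are not the actual pagelet API, which the abstract does not specify.

```python
from string import Template

class Segment:
    """One independently renderable page segment (illustrative sketch)."""

    def __init__(self, name, template):
        self.name = name
        self.template = Template(template)

    def render(self, data):
        return self.template.safe_substitute(data)

def render_page(segments, data_by_segment):
    """Assemble a page from segments; any one segment can later be
    re-rendered with new data without touching the others."""
    return "\n".join(seg.render(data_by_segment.get(seg.name, {}))
                     for seg in segments)

if __name__ == "__main__":
    header = Segment("header", "<header>$title</header>")
    items = Segment("items", "<ul><li>$first</li><li>$second</li></ul>")
    page = render_page([header, items],
                       {"header": {"title": "Catalog"},
                        "items": {"first": "Book", "second": "Pen"}})
    print(page)
    # Re-render only the "items" segment, as client-side code would.
    print(items.render({"first": "Lamp", "second": "Desk"}))
```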
Dajuan FAN Zhiqiu HUANG Lei TANG
One of the most important problems in web service applications is the integration of different existing services into a new composite service. Existing work has the following disadvantages: (i) developers are often required to provide a composite service model first and perform formal verification to check whether the model is correct, which makes the synthesis process of composite services semi-automatic, complex, and inefficient; (ii) there is no assurance that composite services synthesized by fully automatic approaches are correct; and (iii) some approaches only handle simple composition problems in which the existing services are atomic. To address these problems, we propose a correctness assurance approach for automatically synthesizing composite services based on a finite state machine model. The syntax and semantics of the requirement model specifying composition requirements are also proposed. Given a set of abstract BPEL descriptions of existing services and a composition requirement, our approach automatically generates the BPEL implementation of the composite service. Compared with existing approaches, a composite service generated with our approach is guaranteed to be correct and does not require any formal verification. The correctness of our approach is proved. Moreover, a case analysis indicates that our approach is feasible and effective.
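The finite-state-machine machinery that such composition approaches build on can be sketched as follows. The synchronous product shown here is a generic building block, not the paper's synthesis algorithm, and the two service FSMs in the example are hypothetical.

```python
from itertools import product

def states(fsm):
    """Collect all states of an FSM given as (initial_state, transitions)."""
    initial, trans = fsm
    result = {initial}
    for (s, _), ns in trans.items():
        result.update({s, ns})
    return result

def fsm_product(fsm_a, fsm_b):
    """Synchronous product of two FSMs.

    Each FSM is (initial_state, {(state, action): next_state}). Shared
    actions must be taken jointly; an FSM not owning an action stays in
    its current state.
    """
    init = (fsm_a[0], fsm_b[0])
    trans = {}
    actions_a = {a for (_, a) in fsm_a[1]}
    actions_b = {a for (_, a) in fsm_b[1]}
    for sa, sb in product(states(fsm_a), states(fsm_b)):
        for act in actions_a | actions_b:
            na = fsm_a[1].get((sa, act)) if act in actions_a else sa
            nb = fsm_b[1].get((sb, act)) if act in actions_b else sb
            if na is not None and nb is not None:
                trans[((sa, sb), act)] = (na, nb)
    return init, trans

if __name__ == "__main__":
    pay = ("p0", {("p0", "pay"): "p1"})
    ship = ("s0", {("s0", "pay"): "s1", ("s1", "ship"): "s2"})
    init, trans = fsm_product(pay, ship)
    print(init)
    print(trans)
```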
Gang WANG Li ZHANG Yonggang HUANG Yan SUN
How a web service stands out among functionally similar services is a key concern for service providers. QoS is a distinct and decisive factor in service selection among functionally similar services. Therefore, how to design services to meet customers' QoS requirements is an urgent problem for service providers. This paper proposes an approach using QFD (Quality Function Deployment), a quality methodology, to transfer services' QoS requirements into service design attribute characteristics. Fuzzy sets are utilized to deal with subjective and vague assessments such as the importance of QoS properties. A TCI (Technical Competitive Index) is defined to compare the technical competitive capacity of a web service with those of other functionally similar services in terms of QoS. Optimization solutions for the target values of the service design attributes are determined by a GA (Genetic Algorithm) so that the technical performance of the improved service exceeds those of all rival service products with the lowest improvement effort. Finally, we evaluate candidate improvement solutions on cost-effectiveness. As the output of the QFD process, the optimization targets and priority order of the service design attributes can be used as an important basis for developing and improving service products.
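One plausible reading of a competitiveness index of this kind is sketched below: a weighted QoS score for the service relative to the best functionally similar rival. Both the weighted-sum scoring and the ratio-to-best-rival definition are assumptions for illustration, since the abstract does not give the TCI formula.

```python
def weighted_qos_score(qos, weights):
    """Weighted sum of normalized QoS values (assumed in [0, 1], larger is better)."""
    return sum(weights[k] * qos[k] for k in weights)

def technical_competitive_index(our_qos, rival_qos_list, weights):
    """Illustrative competitiveness index: our weighted score divided by the
    best rival's score. A value above 1 means the service outperforms every
    functionally similar rival under the given weights."""
    ours = weighted_qos_score(our_qos, weights)
    best_rival = max(weighted_qos_score(r, weights) for r in rival_qos_list)
    return ours / best_rival if best_rival > 0 else float("inf")

if __name__ == "__main__":
    weights = {"availability": 0.5, "throughput": 0.3, "price": 0.2}
    ours = {"availability": 0.9, "throughput": 0.7, "price": 0.6}
    rivals = [{"availability": 0.8, "throughput": 0.8, "price": 0.9},
              {"availability": 0.95, "throughput": 0.5, "price": 0.4}]
    print(round(technical_competitive_index(ours, rivals, weights), 3))
```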
The globalization of commerce has increased the importance of retrieving and updating complex and distributed information efficiently. Web services currently show the most promise for building distributed application systems, and model-driven architecture is a new approach to developing such applications. The expanding scale and complexity of enterprise information systems (EISs) in distributed computing environments have made sharing and exchanging data particularly challenging. Data services are applications tailored specifically for information-oriented tasks to deal with business service requirements, and are heavily dependent on the distributed architecture of consumer data processing. The implementation of a data service can eliminate inconsistency among various application systems in the exchange of data. This paper proposes a data-oriented model-driven development framework to deal with these issues, in which a platform-independent model (PIM) is divided into a service model, a logical data model, and a service composition model. We also divide a platform-specific model (PSM) into a physical data model and a data service model. In this development method, we define five meta-models and outline a set of rules governing the transformation from PIMs into PSMs. A code generator is also included to transform each PSM into application code. Finally, we include a case study to demonstrate the feasibility and merits of the proposed development framework.
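The flavor of a PIM-to-PSM transformation rule can be sketched as a toy example: a logical data model entity is mapped to a physical table definition and a data service skeleton. The entity format, type mapping, and generated operation names are illustrative assumptions, not the paper's meta-models or rules.

```python
# Toy type mapping from logical attribute types to physical column types.
TYPE_MAP = {"string": "VARCHAR(255)", "int": "INTEGER", "date": "DATE"}

def pim_entity_to_ddl(entity):
    """Logical data model entity (PIM) -> physical data model DDL (PSM)."""
    cols = ",\n  ".join(f"{name} {TYPE_MAP[t]}" for name, t in entity["attributes"])
    return f"CREATE TABLE {entity['name']} (\n  {cols}\n);"

def pim_entity_to_service_stub(entity):
    """Logical data model entity (PIM) -> data service model skeleton (PSM)."""
    ops = [f"get{entity['name']}ById", f"create{entity['name']}",
           f"update{entity['name']}", f"delete{entity['name']}"]
    return {"service": f"{entity['name']}DataService", "operations": ops}

if __name__ == "__main__":
    order = {"name": "Order",
             "attributes": [("order_id", "int"), ("customer", "string"),
                            ("placed_on", "date")]}
    print(pim_entity_to_ddl(order))
    print(pim_entity_to_service_stub(order))
```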
Shinji KIKUCHI Yoshihiro KANNA Yohsuke ISOZAKI
In recent years, there has been increasing demand for composing new services from elemental services provided by independent firms. Currently, however, when it is difficult to maintain the required level of quality of a new composite web service, assigning new computing resources through provisioning at the data center is not always effective, especially with respect to performance for composite web service providers. Thus, a new approach is required. This paper presents a new control method aiming to maintain the performance requirements of composite web services. Our method applies three elements: first, the theory of constraints (TOC) proposed by E.M. Goldratt; second, an evaluation process within a non-linear feed-forward control method; and finally, multiple trials in applying policies with verification. In particular, we discuss the architectural and theoretical aspects of the method in detail, and show, as a result of our evaluation, the insufficiency of combining the feedback control approach with TOC.
Takamichi SAITO Kiyomi SEKIGUCHI Ryosuke HATSUGAI
While the Secure Sockets Layer and Transport Layer Security (SSL/TLS) protocols are assumed to provide secure communications over the Internet, many web applications use the basic or digest authentication of the Hypertext Transfer Protocol (HTTP) over SSL/TLS. That is, two different authentication schemes coexist in a single session. Since they are separated by a layer, this is inconvenient for a web application and may also cause problems in establishing secure communication. We therefore provide a scheme of authentication binding between SSL/TLS and HTTP without modifying the SSL/TLS protocols or their implementations, and we show the effectiveness of our proposed scheme.
Yoshitoshi MURATA Tsuyoshi TAKAYAMA Nobuyoshi SATO Kei KIKUCHI
The IP Multimedia Subsystem (IMS) establishes a session between end terminals as a client/server application in the Next Generation Network (NGN). These days, many application services are being provided as Web services. In this letter, we propose a new NGN architecture conforming to the architectural styles of Representational State Transfer (REST), which is a Web service technology for solving interoperability and traffic concentration problems in the Session Initiation Protocol (SIP).
Yukio TSUKISHIMA Michiaki HAYASHI Tomohiro KUDOH Akira HIRANO Takahiro MIYAMOTO Atsuko TAKEFUSA Atsushi TANIGUCHI Shuichi OKAMOTO Hidemoto NAKADA Yasunori SAMESHIMA Hideaki TANAKA Fumihiro OKAZAKI Masahiko JINNO
Platforms of hosting services are expected to provide a virtual private computing infrastructure with guaranteed levels of performance to support each reservation request sent by a client. To enhance the performance of the computing infrastructure in responding to reservation requests, the platforms are required to reserve, coordinate, and control globally distributed computing and network resources across multiple domains. This paper proposes Grid Network Service -- Web Services Interface version 2 (GNS-WSI2). GNS-WSI2 is a resource-reservation messaging protocol that establishes a client-server relationship. A server is a kind of management system in the management plane, and it allocates available network resources within its own domain in response to each reservation request from a client. GNS-WSI2 has the ability to reserve network resources rapidly and reliably over multiple network domains. This paper also presents the results of feasibility tests on a transpacific testbed that validate GNS-WSI2 in terms of the scalable reservation of network resources over multiple network domains. In the tests, two computing infrastructures over multiple network domains are dynamically provided for scientific computing and remote-visualization applications. The applications are successfully executed on the provided infrastructures.
Yuna KIM Wan Yeon LEE Kyong Hoon KIM Jong KIM
In this paper, we propose a novel Web service composition framework that dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting requirement-matched failure handling without manual development, and it contributes to flexible composite Web service design in that service architects need not be concerned with failure handling or the varying requirements of users. As a proof of concept, we implement a prototype system of AdaFF, which automatically generates a composite service instance in the Web Services Business Process Execution Language (WS-BPEL) according to a user's requirement specified in XML format and executes the generated instance on the ActiveBPEL engine.
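The selection step can be sketched as follows: parse a user requirement document and pick matching failure-handling submodules from a prepared registry. The requirement XML format and the submodule names are assumptions for illustration; the paper defines its own schema and submodules.

```python
import xml.etree.ElementTree as ET

# Registry of prepared failure-handling submodules (illustrative names and
# BPEL snippets; the real submodules are defined at composite-service design time).
SUBMODULES = {
    "retry":      "<scope><faultHandlers><!-- retry invoke --></faultHandlers></scope>",
    "compensate": "<scope><compensationHandler/></scope>",
    "notify":     "<scope><faultHandlers><!-- notify admin --></faultHandlers></scope>",
}

def select_submodules(requirement_xml):
    """Pick the failure-handling submodules named in a user requirement.

    Assumes a <requirement> document with <failureHandling name="..."/>
    children, which is a hypothetical format.
    """
    root = ET.fromstring(requirement_xml)
    wanted = [e.get("name") for e in root.findall("failureHandling")]
    return {name: SUBMODULES[name] for name in wanted if name in SUBMODULES}

if __name__ == "__main__":
    req = """<requirement>
               <failureHandling name="retry"/>
               <failureHandling name="compensate"/>
             </requirement>"""
    for name, snippet in select_submodules(req).items():
        print(name, "->", snippet)
```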
Choonhwa LEE Sunghoon KO Eunsam KIM Wonjun LEE
This letter describes combining OSGi and Web Services in service composition. According to our approach, a composite service is described in WS-BPEL. Each component service in the description may be resolved to either an OSGi service or Web Service at runtime. The proposal can overcome current limitations with OSGi technology in terms of its geographical coverage and candidate service population available for service composition.
Tomoyuki IIJIMA Hiroyasu KIMURA Makoto KITANI Yoshifumi ATARASHI
To develop a network management system (NMS) more easily, the authors developed an application programming interface (API) for configuring network devices. Because this API is used in a Java development environment, an NMS can be developed by utilizing the API together with other commonly available Java libraries. It is thus possible to easily develop an NMS that is highly compatible with other IT systems. The operations that are generated from the API and exchanged between the NMS and network devices are based on NETCONF, which is standardized by the Internet Engineering Task Force (IETF) as a next-generation network-configuration protocol. Adopting a standardized technology ensures that an NMS developed by using the API can manage network devices provided by multiple vendors in a unified manner. Furthermore, the configuration items exchanged over NETCONF are specified in an object-oriented design. They are therefore easier to manage than such items in the Management Information Base (MIB), which is defined as the data to be managed by the Simple Network Management Protocol (SNMP). We actually developed several NMSs by using the API. Evaluation of these NMSs showed that, in terms of configuration time and development time, the NMS developed by using the API performed as well as NMSs developed by using a command line interface (CLI) and SNMP. The NMS developed by using the API also demonstrated the feasibility of achieving "autonomic network management" and "high interoperability with IT systems."
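A NETCONF configuration operation of the kind exchanged between an NMS and a device can be assembled as an <edit-config> RPC (RFC 6241), sketched below in Python. The VLAN payload under <config> is purely illustrative and is not the API described in the paper.

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(config_xml, message_id="101"):
    """Build a NETCONF <edit-config> RPC targeting the running datastore.

    The payload under <config> is device- and vendor-specific; the <vlan>
    example in the usage below is an assumption for illustration only.
    """
    rpc = ET.Element(f"{{{NETCONF_NS}}}rpc", {"message-id": message_id})
    edit = ET.SubElement(rpc, f"{{{NETCONF_NS}}}edit-config")
    target = ET.SubElement(edit, f"{{{NETCONF_NS}}}target")
    ET.SubElement(target, f"{{{NETCONF_NS}}}running")
    config = ET.SubElement(edit, f"{{{NETCONF_NS}}}config")
    config.append(ET.fromstring(config_xml))
    return ET.tostring(rpc, encoding="unicode")

if __name__ == "__main__":
    payload = "<vlan><id>100</id><name>staff</name></vlan>"
    print(build_edit_config(payload))
```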
The steady approach of advanced nations toward realization of ubiquitous computing societies has given birth to rapidly growing demands for new-generation distributed computing (DC) applications. Consequently, economic and reliable construction of new-generation DC applications is currently a major issue faced by the software technology research community. What is needed is a new-generation DC software engineering technology which is at least multiple times more effective in constructing new-generation DC applications than the currently practiced technologies are. In particular, this author believes that a new-generation building-block (BB), which is much more advanced than the current-generation DC object that is a small extension of the object model embedded in languages C++, Java, and C#, is needed. Such a BB should enable systematic and economic construction of DC applications that are capable of taking critical actions with 100-microsecond-level or even 10-microsecond-level timing accuracy, fault tolerance, and security enforcement while being easily expandable and taking advantage of all sorts of network connectivity. Some directions considered worth pursuing for finding such BBs are discussed.
Changes in the recent business and scientific environment have created a necessity for more efficient and effective workflow infrastructure. With the increasing emphasis on service-oriented architecture, service composition has become a hot topic in workflow research. This paper proposes a novel approach of using ECA rules to realize workflow modeling and implementation for service composition. First, the concept and formalization of ECA rule-based workflow are presented, and special activities and data structures are customized for the purpose of service composition. Second, an automatic event composition and decomposition algorithm is developed to ensure the correctness and validity of service composition at design time. Finally, the proposed ECA rule-based approach for service composition is illustrated through the implementation of a workflow prototype system.
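The ECA idea itself can be sketched minimally as follows: a rule fires its action when its event occurs and its condition holds on the workflow context. The rule structure and the order-handling example are illustrative assumptions, not the paper's formalization.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ECARule:
    """Event-Condition-Action rule: when `event` occurs and `condition`
    holds on the workflow context, run `action`."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def fire(rules: List[ECARule], event: str, context: dict) -> None:
    """Evaluate every rule registered for an event against the context."""
    for rule in rules:
        if rule.event == event and rule.condition(context):
            rule.action(context)

if __name__ == "__main__":
    rules = [
        ECARule("order_received",
                condition=lambda ctx: ctx["amount"] > 1000,
                action=lambda ctx: ctx.setdefault("next", []).append("manual_approval")),
        ECARule("order_received",
                condition=lambda ctx: ctx["amount"] <= 1000,
                action=lambda ctx: ctx.setdefault("next", []).append("invoke_payment_service")),
    ]
    ctx = {"amount": 1500}
    fire(rules, "order_received", ctx)
    print(ctx["next"])   # ['manual_approval']
```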
Wei-Tek TSAI Xiao WEI Yinong CHEN Ray PAUL Bingnan XIAO
Current Web services testing techniques are unable to assure the desired level of trustworthiness, which presents a barrier to WS applications in mission- and business-critical environments. This paper presents a framework that assures the trustworthiness of Web services. New assurance techniques are developed within the framework, including specification verification via completeness and consistency checking, test case generation, and automated Web services testing. Traditional test case generation methods only generate positive test cases that verify the functionality of software. The proposed Swiss Cheese test case generation method is designed to generate both positive and negative test cases that also reveal the vulnerability of Web services. This integrated development process is implemented in a case study. The experimental evaluation demonstrates the effectiveness of the approach. It also reveals that Swiss Cheese negative testing detects even more faults than positive testing and thus significantly reduces the vulnerability of Web services.
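The contrast between positive and negative test cases can be illustrated with the sketch below, which derives in-range, out-of-range, and missing-value inputs from a parameter range. The spec format and the boundary/violation choices are assumptions for illustration, not the actual Swiss Cheese generation rules.

```python
def generate_test_cases(param_spec):
    """Generate positive and negative test inputs from parameter ranges.

    param_spec maps a parameter name to an (inclusive) numeric range.
    Positive cases stay within the range; negative cases violate it or
    omit the value entirely.
    """
    positive, negative = [], []
    for name, (low, high) in param_spec.items():
        mid = (low + high) / 2
        positive += [{name: v} for v in (low, mid, high)]           # in range
        negative += [{name: v} for v in (low - 1, high + 1, None)]  # out of range / missing
    return positive, negative

if __name__ == "__main__":
    pos, neg = generate_test_cases({"quantity": (1, 100)})
    print("positive:", pos)
    print("negative:", neg)
```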
Eul Gyu IM Hoh Peter IN Dae-Sik CHOI Yong Ho SONG
The emergence of intelligent and sophisticated attack techniques makes web services, which are becoming an important business tool in e-commerce, more vulnerable than ever. Many techniques have been proposed to remove security vulnerabilities, yet they have limitations. This paper proposes an adaptive mechanism for a web-server intrusion-tolerant system (WITS) to prevent unknown patterns of attacks by adapting known attack patterns. SYN flooding attacks and their adaptive defense mechanisms are simulated as a case study to evaluate the performance of the proposed adaptation mechanism.
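One simple form of such adaptation against SYN flooding can be sketched as follows: a detection threshold that tracks recent traffic rather than staying fixed. The moving-average baseline and multiplicative margin are illustrative assumptions, not the WITS mechanism itself.

```python
from collections import deque

class AdaptiveSynGuard:
    """Adapt a SYN-rate alarm threshold from recent traffic.

    Raises an alarm when the current SYN rate exceeds the recent mean
    by a multiplicative margin; the window and margin are illustrative.
    """

    def __init__(self, window=60, margin=3.0):
        self.history = deque(maxlen=window)
        self.margin = margin

    def observe(self, syn_per_second):
        alarm = False
        if self.history:
            baseline = sum(self.history) / len(self.history)
            alarm = syn_per_second > self.margin * max(baseline, 1.0)
        self.history.append(syn_per_second)
        return alarm

if __name__ == "__main__":
    guard = AdaptiveSynGuard(window=10, margin=3.0)
    traffic = [20, 22, 19, 21, 23, 20, 300]      # last sample is a burst
    print([guard.observe(v) for v in traffic])   # alarm only on the burst
```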