
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E97-D No.5 (Publication Date: 2014/05/01)

    Special Section on Knowledge-Based Software Engineering
  • FOREWORD Open Access

    Saeko MATSUURA  

     
    FOREWORD

      Page(s):
    1016-1016
  • Rule-Based Verification Method of Requirements Ontology

    Dang Viet DZUNG  Bui Quang HUY  Atsushi OHNISHI  

     
    PAPER

      Page(s):
    1017-1027

    There has been much research on the construction and application of ontologies, notably the use of ontologies to support requirements engineering. The effectiveness of ontology-based requirements engineering depends on the quality of the ontology, and as an ontology grows it becomes difficult to verify the correctness of the information it stores. This paper proposes a method that uses rules to verify the correctness of a requirements ontology. We provide a rule description language for specifying properties that the requirements ontology should satisfy. Then, by checking whether the rules are consistent with the requirements ontology, we verify the correctness of the ontology. We have developed a verification tool to support the method and evaluated the tool through experiments.

  • Ontology-Based Checking Method of Requirements Specification

    Dang Viet DZUNG  Atsushi OHNISHI  

     
    PAPER

      Page(s):
    1028-1038

    This paper introduces an ontology-based method for checking requirements specifications. A requirements ontology is a knowledge structure that contains functional requirements (FR), attributes of FR, and relations among FR. A requirements specification is compared with the functional nodes in the requirements ontology, and then rules are used to find errors in the requirements. On the basis of the results, the requirements team can ask customers questions and revise the requirements correctly and efficiently. To support this method, an ontology-based checking tool for the verification of requirements has been developed. Finally, the requirements checking method is evaluated through an experiment.

  • Estimation of the Maturation Type of Requirements from Their Accessibility and Stability

    Takako NAKATANI  Shozo HORI  Keiichi KATAMINE  Michio TSUDA  Toshihiko TSUMAKI  

     
    PAPER

      Page(s):
    1039-1048

    The success of any project can be affected by requirements changes. Requirements elicitation is a series of activities of adding, deleting, and modifying requirements. We refer to the completion of requirements elicitation for a software component as requirements maturation. When the requirements of a component have reached the 100% maturation point, no further requirements will come to the component. This does not mean that a requirements analyst (RA) will reject the addition of requirements, but simply that additional requirements will no longer arrive at the project. Our motivation is to provide measurements by which an RA can estimate one of the maturation periods: the early, middle, or late period of the project. We proceed by introducing the requirements maturation efficiency (RME). The RME of the requirements represents how quickly the requirements of a component reach 100% maturation. We then estimate the requirements maturation period of every component by applying the RME. We assume that the RME is derived from the accessibility of the RA to the requirements source and the stability of the requirements. We model accessibility as the number of information flows from the source of the requirements to the RA, and model stability with the requirements maturation index (RMI). A multiple regression analysis of one case yields an equation for the RME derived from these two factors at a significance level of 5%. We evaluated the result by comparing it to another case, and then discuss the effectiveness of the measurements.
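
    The regression step described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the sample values and the use of ordinary least squares are assumptions for illustration only.

```python
import numpy as np

# Hypothetical observations per component: accessibility (number of
# information flows from the requirements source to the RA), RMI
# (requirements maturation index), and the observed RME.
accessibility = np.array([1, 2, 2, 3, 4, 5, 6], dtype=float)
rmi           = np.array([0.2, 0.3, 0.5, 0.4, 0.6, 0.7, 0.9])
rme           = np.array([0.15, 0.22, 0.35, 0.33, 0.48, 0.60, 0.78])

# Ordinary least squares: RME ~ b0 + b1*accessibility + b2*RMI
X = np.column_stack([np.ones_like(accessibility), accessibility, rmi])
coef, *_ = np.linalg.lstsq(X, rme, rcond=None)
b0, b1, b2 = coef
print(f"RME = {b0:.3f} + {b1:.3f}*accessibility + {b2:.3f}*RMI")

# Estimated RME for a new component, which would then be mapped to a
# maturation period (early / middle / late) via project-specific thresholds.
print(b0 + b1 * 3.0 + b2 * 0.5)
```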

  • A Semantic-Based Topic Knowledge Map System (STKMS) for Lesson-Learned Documents Reuse in Product Design

    Ywen HUANG  Zhua JIANG  

     
    PAPER

      Page(s):
    1049-1057

    In product design, engineers often find it difficult to locate and reuse the empirical knowledge of others, which is stored in the form of lesson-learned documents. This study proposes a novel approach that uses a semantic-based topic knowledge map system (STKMS) to support the timely and precise finding and reuse of lesson-learned documents. The architecture of STKMS is designed with five major functional modules: lesson-learned document pre-processing, topic extraction, topic relation computation, topic weight computation, and topic knowledge map generation. The implementation of STKMS is then briefly introduced. We conducted two sets of experiments to evaluate the quality of the knowledge maps and the performance of using STKMS in the outfitting design of a shipbuilding company. The first experiment shows that the knowledge maps generated by STKMS are accepted by domain experts, since precision and recall are high. The second experiment shows that the STKMS-based group outperforms the browse-based group in both learning score and satisfaction level, two measures of the performance of using STKMS. These promising results confirm the feasibility of STKMS in helping engineers find needed lesson-learned documents and reuse the related knowledge easily and precisely.

  • Interval Estimation Method for Decision Making in Wavelet-Based Software Reliability Assessment

    Xiao XIAO  Tadashi DOHI  

     
    PAPER

      Page(s):
    1058-1068

    Recently, the wavelet-based estimation method has been gaining popularity as a new tool for software reliability assessment. The wavelet transform possesses both spatial and temporal resolution, which makes the wavelet-based estimation method powerful in extracting the necessary information from observed software fault data, from global and local points of view at the same time. This enables us to estimate software reliability measures with higher accuracy. However, existing work has focused only on point estimation for the wavelet-based approach, where the underlying stochastic process describing the software-fault detection phenomenon is modeled by a non-homogeneous Poisson process. In this paper, we propose an interval estimation method for the wavelet-based approach, aiming to take account of the uncertainty that is left out of consideration in point estimation. More specifically, we employ the simulation-based bootstrap method and derive confidence intervals of software reliability measures such as the software intensity function and the expected cumulative number of software faults. To this end, we extend the well-known thinning algorithm for the purpose of generating multiple sample data sets from one set of software-fault count data. The results of numerical analysis with real software fault data make it clear that our proposal is a decision support method that enables practitioners to make flexible decisions in software development project management.
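
    For readers unfamiliar with the thinning algorithm mentioned above, the following is a minimal sketch of the standard Lewis-Shedler thinning procedure for sampling fault-detection times from a non-homogeneous Poisson process; the example intensity function is an assumption, and this is not the authors' extended bootstrap variant.

```python
import numpy as np

def thinning(intensity, lam_max, t_end, rng=np.random.default_rng(0)):
    """Sample event times of an NHPP with rate intensity(t) on [0, t_end]
    by thinning a homogeneous Poisson process of rate lam_max
    (lam_max must bound intensity(t) from above on the interval)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)            # candidate from HPP(lam_max)
        if t > t_end:
            break
        if rng.uniform() <= intensity(t) / lam_max:    # accept with prob intensity(t)/lam_max
            times.append(t)
    return np.array(times)

# Example: an exponentially decaying software intensity function (assumed).
intensity = lambda t: 5.0 * np.exp(-0.1 * t)
sample = thinning(intensity, lam_max=5.0, t_end=30.0)
print(len(sample), "simulated fault-detection times")
```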

  • Mining API Usage Patterns by Applying Method Categorization to Improve Code Completion

    Rizky Januar AKBAR  Takayuki OMORI  Katsuhisa MARUYAMA  

     
    PAPER

      Page(s):
    1069-1083

    Developers often face difficulties while using APIs. API usage patterns, which are extracted from source code stored in software repositories, can help them use APIs efficiently. Previous approaches have mined repositories for API usage patterns by simply applying data mining techniques to collections of method invocations on API objects. In these approaches, the respective functional roles of the invoked methods within API objects are ignored. A functional role represents the type of purpose each method actually achieves, and a method has a specific predefined order of invocation in accordance with its role. Therefore, the simple application of conventional mining techniques fails to produce API usage patterns that are helpful for code completion. This paper proposes an improved approach that extracts API usage patterns at a higher abstraction level rather than directly mining the actual method invocations. It embraces a multilevel sequential mining technique and uses a categorization of method invocations based on their functional roles. We have implemented a mining tool and extended Eclipse's code completion facility with the extracted API usage patterns. Evaluation results for this tool show that our approach improves existing code completion.

  • Method for Consistent GUI Arrangements by Analyzing Existing Windows and Its Evaluation

    Junko SHIROGANE  Seitaro SHIRAI  Hajime IWATA  Yoshiaki FUKAZAWA  

     
    PAPER

      Page(s):
    1084-1096

    To realize usability in software, GUI (Graphical User Interface) layouts must be consistent, because consistency allows end users to operate software based on their previous experience. Consistency can often be achieved by following user interface guidelines, which realize consistency within a software package as well as between various software packages on a platform. However, because end users have different experiences and perceptions, GUIs based on guidelines are not always usable for them. Thus, it is necessary to realize consistency without guidelines. Herein we propose a method for realizing consistent GUIs in which existing software packages are surveyed and common patterns for window layouts, which we call layout rules, are identified. Our method uses these layout rules to arrange the windows of GUIs. Concretely, the source programs of developed GUIs are analyzed to identify the layout rules, and these rules are then used to extract parameters for generating the source programs of undeveloped GUIs. To evaluate our method, we applied it to existing GUIs in software packages to extract the layout rules from several windows and to generate other windows. The evaluation confirms that our method easily realizes layout consistency.

  • Verifying Business Rules Using Model-Checking Techniques for Non-specialist in Model-Checking

    Yoshitaka AOKI  Saeko MATSUURA  

     
    PAPER

      Page(s):
    1097-1108

    Software programs often include many defects that are not easy to detect because of developers' mistakes, misunderstandings caused by inadequate requirements definitions, and the complexity of the implementation. Because of the varying skill levels of testers, the significant increase in testing person-hours interferes with the progress of development projects. It is therefore desirable for even an inexperienced developer to be able to identify the cause of defects. Model checking has been favored as a technique for improving reliability earlier in the software development process. In this paper, we propose a verification method in which the control sequence of Java source code is converted into finite automata so that the cause of defects can be detected with the model-checking tool UPPAAL, which has an exhaustive checking mechanism. We also propose a tool, implemented as an Eclipse plug-in, to assist general developers who have little knowledge of model checking. Because source code is generally complicated and large, the tool provides a step-wise verification mechanism based on the functional structure of the code and makes it easy to verify the business rules in the specification documents by adding a user-defined specification-based model to the source code model.

  • Test Scenario Generation for Web Application Based on Past Test Artifacts

    Rogene LACANIENTA  Shingo TAKADA  Haruto TANNO  Morihide OINUMA  

     
    PAPER

      Page(s):
    1109-1118

    For the past couple of decades, the Web has become incredibly popular as a platform for deploying software products. Web applications have become more prevalent as well as more complex. Countless Web applications have already been designed, developed, tested, and deployed on the Internet, and many common functionalities are present among this vast number of applications. This paper proposes an approach based on a database containing information from previous test artifacts. The information is used to generate test scenarios for Web applications under test. We have developed a tool based on our proposed approach, with the aim of reducing the effort required from software test engineers and professionals during the test planning and creation stages of software engineering. We evaluated our approach from three viewpoints: a comparison between our approach and manual generation, a qualitative evaluation by professional software engineers, and a comparison between our approach and two open-source tools.

  • Coordination of Local Process Views in Interorganizational Business Process

    Donghui LIN  Toru ISHIDA  

     
    PAPER

      Page(s):
    1119-1126

    Collaborative business has been developing rapidly in an environment of globalization and advanced information technologies. In a collaboration environment with multiple organizations, participants from different organizations often have different views about modeling the overall business process due to different knowledge and cultural backgrounds. Moreover, flexible support, privacy preservation, and process reuse are important issues that should be considered in business process management across organizational boundaries. This paper presents a novel approach to modeling interorganizational business processes for collaboration. Our approach allows loosely coupled interorganizational business processes to be modeled while considering the different views of the organizations. In the proposed model, organizations have their own local process views for modeling business processes instead of sharing pre-defined global processes. During process cooperation, the local process of an organization can remain invisible to other organizations. Further, we propose coordination mechanisms for different local process views to detect incompatibilities among organizations. We illustrate the proposed approach with a case study of interorganizational software development collaboration.

  • Motivation Process Formalization and Its Application to Education Improvement for the Personal Software Process Course

    Masanobu UMEDA  Keiichi KATAMINE  Keiichi ISHIBASHI  Masaaki HASHIMOTO  Takaichi YOSHIDA  

     
    PAPER

      Page(s):
    1127-1138

    Software engineering education at universities plays an increasingly important role as software quality becomes essential for realizing a safe and dependable society. This paper proposes a practical state transition model (Practical-STM) based on the Organizational Expectancy Model for the improvement of software process education based on the Personal Software Process (PSP) from a motivational point of view. The Practical-STM treats an individual trainee of the PSP course as a state machine, and formalizes the motivation process of a trainee using a set of states represented by motivational factors and a set of operations carried out by the course instructors. The state transition function of this model represents the features or characteristics of a trainee in terms of motivation. The model allows a formal description of the states of a trainee in terms of motivation and of the educational actions of the instructors in the PSP course. The instructors are able to decide objectively on effective and efficient actions to take toward the trainees by formally inferring a trainee's state and state transition function. Typical patterns of state transitions from an initial state to a final state, called scenarios, are useful for inferring the possible transitions of a trainee and taking proactive operations from a motivational point of view. Therefore, the model is useful not only for improving the educational effect of the PSP course, but also for standardizing course management and the quality management of the instructors.

  • Special Section on Formal Approach
  • FOREWORD Open Access

    Yoshinao ISOBE  

     
    FOREWORD

      Page(s):
    1139-1139
  • Bisimilarity Control of Nondeterministic Discrete Event Systems under Event and State Observations

    Katsuyuki KIMURA  Shigemasa TAKAI  

     
    PAPER-Formal Verification

      Page(s):
    1140-1148

    In this paper, we study a supervisory control problem for plants and specifications modeled by nondeterministic automata. This problem requires synthesizing a nondeterministic supervisor such that the supervised plant is bisimilar to a given specification. We assume that a supervisor can observe not only event occurrences but also the current state of the plant, and we introduce a notion of completeness of a supervisor which guarantees that all nondeterministic transitions caused by events enabled by the supervisor are defined in the supervised plant. We define a notion of partial bisimulation between a given specification and the plant, and prove that it serves as a necessary and sufficient condition for the existence of a bisimilarity-enforcing complete supervisor.

  • A Formal Verification of a Subset of Information-Based Access Control Based on Extended Weighted Pushdown System

    Pablo LAMILLA ALVAREZ  Yoshiaki TAKATA  

     
    PAPER-Formal Verification

      Page(s):
    1149-1159

    Information-Based Access Control (IBAC) has been proposed as an improvement on the History-Based Access Control (HBAC) model. In modern component-based systems, these access control models verify that all the code responsible for a security-sensitive operation is sufficiently authorized to execute that operation. The HBAC model, although safe, may incorrectly prevent the execution of operations that should be executed. IBAC has been shown to be more precise than HBAC, maintaining its safety level while allowing sufficiently authorized operations to be executed. However, the verification problem for IBAC programs has not been discussed. This paper presents a formal model of IBAC programs based on extended weighted pushdown systems (EWPDS). The mapping between the original IBAC semantics and the EWPDS structure is described. Moreover, the verification problem for IBAC programs is discussed, and several typical IBAC program examples are implemented using our model.

  • TESLA Source Authentication Protocol Verification Experiment in the Timed OTS/CafeOBJ Method: Experiences and Lessons Learned

    Iakovos OURANOS  Kazuhiro OGATA  Petros STEFANEAS  

     
    PAPER-Formal Verification

      Page(s):
    1160-1170

    In this paper we report on the experiences gained and lessons learned through the use of the Timed OTS/CafeOBJ method in the formal verification of the TESLA source authentication protocol. These experiences can be a useful guide for users of the OTS/CafeOBJ method, especially when dealing with such complex systems and protocols.

  • An Approach for Synthesizing Intelligible State Machine Models from Choreography Using Petri Nets

    Toshiyuki MIYAMOTO  Yasuwo HASEGAWA  Hiroyuki OIMURA  

     
    PAPER-Formal Construction

      Page(s):
    1171-1180

    A service-oriented architecture builds the entire system using a combination of independent software components. Such an architecture can be applied to a wide variety of computer systems. The problem of synthesizing service implementation models from choreography representing the overall specifications of service interaction is known as the choreography realization problem. In automatic synthesis, software models should be simple enough to be easily understood by software engineers. In this paper, we discuss a semi-formal method for synthesizing hierarchical state machine models for the choreography realization problem. The proposed method is evaluated using metrics for intelligibility.

  • Protocol Inheritance Preserving Soundizability Problem and Its Polynomial Time Procedure for Acyclic Free Choice Workflow Nets

    Shingo YAMAGUCHI  Huan WU  

     
    PAPER-Formal Construction

      Page(s):
    1181-1187

    A workflow may be extended to adapt to market growth, legal reform, and so on. The extended workflow must be logically correct and must inherit the behavior of the existing workflow. Even if the extended workflow inherits the behavior, it may not be logically correct. Can we modify it so that it satisfies not only behavioral inheritance but also logical correctness? This is called the behavioral inheritance preserving soundizability problem. There are two kinds of behavioral inheritance: protocol inheritance and projection inheritance. In this paper, we tackle the protocol inheritance preserving soundizability problem using a subclass of Petri nets called workflow nets. Limiting our analysis to acyclic free choice workflow nets, we formalize the problem and give a necessary and sufficient condition for it, namely the existence of a key structure of free choice workflow nets called a TP-handle. Based on this condition, we also construct a polynomial-time procedure to solve the problem.

  • Regular Section
  • Retargeting Derivative-ASIP with Assembly Converter Tool

    Agus BEJO  Dongju LI  Tsuyoshi ISSHIKI  Hiroaki KUNIEDA  

     
    PAPER-Computer System

      Page(s):
    1188-1195

    This paper first presents a processor design using a derivative-ASIP approach. The processor architecture is designed by using the instruction set of a well-known embedded processor as a base architecture. To improve its performance, the architecture is enhanced with additional hardware resources such as registers, interfaces, and instruction extensions so that it can achieve the target specifications. Secondly, a new approach to retargeting the compiler by means of an assembly converter tool is proposed. Our retargeting approach is practical because it is performed by the assembly converter tool with a simple configuration file and is independent of the base compiler. With the proposed approach, both architectural flexibility and high-quality assembly code can be obtained at once. Experiments show that our approach is capable of generating code as efficient as that of the base compiler, and that the developed ASIP achieves better performance than its base processor.

  • Area-Efficient Microarchitecture for Reinforcement of Turbo Mode

    Shinobu MIWA  Takara INOUE  Hiroshi NAKAMURA  

     
    PAPER-Computer System

      Page(s):
    1196-1210

    Turbo mode, which accelerates many applications without major changes to existing systems, is widely used in commercial processors. Since the duration or strength of turbo mode depends on the peak temperature of a processor chip, reducing the peak temperature can reinforce turbo mode. This paper shows that adding a small amount of hardware allows microprocessors to reduce the peak temperature drastically and thus to reinforce turbo mode successfully. Our approach is to identify the few small units that become heat sources in a processor and to duplicate them appropriately in order to reduce their power density. By duplicating only these units and using the copies evenly, the processor can achieve significant performance improvement in an area-efficient way. The experimental results show that the proposed method achieves up to 14.5% performance improvement in exchange for a 2.8% area increase.

  • ParaLite: A Parallel Database System for Data-Intensive Workflows

    Ting CHEN  Kenjiro TAURA  

     
    PAPER-Computer System

      Page(s):
    1211-1224

    To better support data-intensive workflows, which are typically built out of various independently developed executables, this paper proposes extensions to parallel database systems called User-Defined eXecutables (UDX) and collective queries. UDX facilitates the description of workflows by enabling the seamless integration of external executables into SQL statements without any effort to write programs conforming to strict database specifications. A collective query is an SQL query whose results are distributed to multiple clients and then processed by them in parallel using arbitrary UDX. It provides efficient parallelization of executables through data transfer optimization algorithms that distribute query results to multiple clients, taking both communication cost and computational load into account. We implement this concept in a system called ParaLite, a parallel database system based on the popular lightweight database SQLite. Our experiments show that ParaLite has several times higher performance than Hive for typical SQL tasks and a 10x speedup compared to a commercial DBMS for executables. In addition, this paper studies a real-world text processing workflow and builds it on top of ParaLite, Hadoop, Hive, and plain files. Our experience indicates that ParaLite outperforms the other systems in both productivity and performance for this workflow.

  • Reconfigurable Out-of-Order System for Fluid Dynamics Computation Using Unstructured Mesh

    Takayuki AKAMINE  Mohamad Sofian ABU TALIP  Yasunori OSANA  Naoyuki FUJITA  Hideharu AMANO  

     
    PAPER-Computer System

      Page(s):
    1225-1234

    Computational fluid dynamics (CFD) is an important tool for designing aircraft components. FaSTAR (Fast Aerodynamics Routines) is one of the most recent CFD packages and has various subroutines. However, its irregular and complicated data structure makes it difficult to execute FaSTAR on parallel machines because of memory access problems. The use of a reconfigurable platform based on field-programmable gate arrays (FPGAs) is a promising approach to accelerating memory-bottlenecked applications like FaSTAR. However, even with hardware execution, a large number of pipeline stalls can occur due to read-after-write (RAW) data hazards. Moreover, it is difficult to predict when such stalls will occur because of the unstructured mesh used in FaSTAR. To eliminate this problem, we developed an out-of-order mechanism that permutes the data order so as to prevent RAW hazards. It uses an execution monitor and a wait buffer: the former identifies the state of the computation units, and the latter temporarily stores data to be processed by the computation units. This out-of-order mechanism can be applied to various types of computations with data dependencies by changing the number of execution monitors and wait buffers in accordance with the equations used in the target computation. The out-of-order system can be reconfigured by automatically changing these parameters. Applying the proposed mechanism to five subroutines in FaSTAR showed that it reduces the number of stalls to less than 1% of that without the mechanism. In-order execution was sped up 2.6-fold, and software execution on an Intel Core 2 Duo processor was sped up 2.9-fold, with a reasonable amount of overhead.

  • Towards the Identification of Cross-Cutting Concerns: A Comprehensive Dynamic Approach Based on Execution Relations

    Dongjin YU  Xiang SU  Yunlei MU  

     
    PAPER-Software System

      Page(s):
    1235-1243

    Aspect-oriented software development (AOSD) helps to solve the problems of low scalability and high maintenance cost in legacy systems caused by code scattering and tangling, by extracting cross-cutting concerns and encapsulating them in aspects. Identifying the cross-cutting concerns of legacy systems is the key to reconstructing such systems with AOSD. However, current dynamic approaches to identifying cross-cutting concerns simply check the execution sequence of methods and do not consider their calling context, which may lead to low precision. In this paper, we propose an improved, comprehensive approach to identifying candidate cross-cutting concerns in legacy systems based on the combined analysis of recurring execution relations and fan-ins. We first analyze the execution trace for a given test case and identify four types of execution relations between neighbouring methods: exit-entry, entry-exit, entry-entry, and exit-exit. We then measure each method's left cross-cutting degree and right cross-cutting degree. The former ensures that a candidate recurs in a similar running context, whereas the latter indicates how many times the candidate cross-cuts different methods. The final candidates are obtained from the high fan-in methods that not only cross-cut other methods more times than a predefined threshold but are also always entered or left in the same running context. An experiment conducted on three open-source systems shows that our approach improves the precision of identifying cross-cutting concerns compared with traditional ones.
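
    As a rough illustration of the trace analysis described above, the sketch below counts the four execution relations between neighbouring events in a method-level trace; the trace format, event names, and threshold are assumptions, not the authors' implementation.

```python
from collections import Counter

# Hypothetical execution trace: (event, method) pairs in chronological order.
trace = [
    ("entry", "Order.place"), ("entry", "Logger.log"), ("exit", "Logger.log"),
    ("exit", "Order.place"), ("entry", "Cart.checkout"), ("entry", "Logger.log"),
    ("exit", "Logger.log"), ("exit", "Cart.checkout"),
]

# Count the four relation types (exit-entry, entry-exit, entry-entry, exit-exit)
# between each pair of neighbouring events, keyed by the second method.
relations = Counter()
for (ev1, m1), (ev2, m2) in zip(trace, trace[1:]):
    relations[(f"{ev1}-{ev2}", m2)] += 1

# A method that repeatedly shows up in the same relation under different callers
# is a cross-cutting candidate; 2 is an arbitrary threshold for this toy trace.
candidates = {m for (rel, m), n in relations.items() if rel == "entry-entry" and n >= 2}
print(candidates)  # {'Logger.log'}
```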

  • An Investigation into the Characteristics of Merged Code Clones during Software Evolution

    Eunjong CHOI  Norihiro YOSHIDA  Katsuro INOUE  

     
    PAPER-Software Engineering

      Page(s):
    1244-1253

    Although code clones (i.e., code fragments that have similar or identical counterparts elsewhere in the source code) are regarded as a factor that increases the complexity of software maintenance, tools supporting clone refactoring (i.e., merging a set of code clones into a single method or function) are not commonly used. To promote the development of refactoring tools that can be more widely utilized, we present an investigation of clone refactoring carried out during the development of open-source software systems. In the investigation, we identified the most frequently used refactoring patterns and discovered how merged code clone token sequences and differences in token sequence lengths vary for each refactoring pattern.

  • Clausius Normalized Field-Based Shape-Independent Motion Segmentation

    Eunjin KOH  Chanyoung LEE  Dong Gil JEONG  

     
    PAPER-Pattern Recognition

      Page(s):
    1254-1263

    We propose a novel motion segmentation method based on a Clausius Normalized Field (CNF), a probabilistic model for treating time-varying imagery that estimates entropy variations following the entropy definitions of Clausius and Boltzmann. As the pixels of an image are viewed as the states of lattice-like molecules in a thermodynamic system, estimating the entropy variations of pixels is the same as estimating their degrees of disorder. A greater increase in entropy means that a pixel has a higher chance of belonging to a moving object rather than to the background, because of its higher disorder. In addition to these homologous operations, a CNF naturally takes both spatial and temporal information into consideration to avoid local maxima, which substantially improves the accuracy of motion segmentation. Our motion segmentation system using CNF clearly separates moving objects from their backgrounds. It also eliminates noise to a level comparable to that achieved when refined post-processing steps are applied to the results of general motion segmentation. It requires less computational power than other random fields and generates automatically normalized outputs without additional post-processing.

  • Adaptive Spectral Masking of AVQ Coding and Sparseness Detection for ITU-T G.711.1 Annex D and G.722 Annex B Standards

    Masahiro FUKUI  Shigeaki SASAKI  Yusuke HIWASAKI  Kimitaka TSUTSUMI  Sachiko KURIHARA  Hitoshi OHMURO  Yoichi HANEDA  

     
    PAPER-Speech and Hearing

      Page(s):
    1264-1272

    We propose a new adaptive spectral masking method for algebraic vector quantization (AVQ) of non-sparse signals in the modified discrete cosine transform (MDCT) domain. This paper also proposes switching the adaptive spectral masking on and off depending on whether or not the target signal is non-sparse. The switching decision is based on the results of an MDCT-domain sparseness analysis. When the target signal is categorized as non-sparse, the masking level of the target MDCT coefficients is adaptively controlled using spectral envelope information. The performance of the proposed method, as part of ITU-T G.711.1 Annex D, is evaluated in comparison with conventional AVQ. Subjective listening test results show that the proposed method improves sound quality by more than 0.1 points on a five-point scale on average for speech, music, and mixed content, which indicates a significant improvement.

  • Developing an HMM-Based Speech Synthesis System for Malay: A Comparison of Iterative and Isolated Unit Training

    Mumtaz Begum MUSTAFA  Zuraidah Mohd DON  Raja Noor AINON  Roziati ZAINUDDIN  Gerry KNOWLES  

     
    PAPER-Speech and Hearing

      Page(s):
    1273-1282

    The development of an HMM-based speech synthesis system for a new language requires resources such as a speech database and segment-phonetic labels. As an under-resourced language, Malay lacks the resources necessary for the development of such a system, especially segment-phonetic labels. This research aims at developing an HMM-based speech synthesis system for Malay. We propose the use of two types of HMM training: the benchmark iterative training incorporating the DAEM algorithm, and isolated unit training applying segment-phonetic labels of Malay. The preferred method for preparing segment-phonetic labels is automatic segmentation. The automatic segmentation of the Malay speech database is performed using two approaches: uniform segmentation, which applies fixed phone durations, and a cross-lingual approach that adopts the acoustic model of English. We measured the segmentation error of the two segmentation approaches to ascertain their relative effectiveness. A listening test was used to evaluate the intelligibility and naturalness of the synthetic speech produced by the iterative and isolated unit training. We also compare the performance of the HMM-based speech synthesis system with existing Malay speech synthesis systems.

  • Study of Reducing Circuit Scale Associated with Bit Depth Expansion Using Predictive Gradation Detection Algorithm

    Akihiro NAGASE  Nami NAKANO  Masako ASAMURA  Jun SOMEYA  Gosuke OHASHI  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    1283-1292

    The authors have evaluated a method called SGRAD for expanding the bit depth of image signals, which requires fewer calculations and degrades the sharpness of images less. Where noise is superimposed on image signals, the conventional method for obtaining high bit depth sometimes incorrectly detects the contours of images, making it unable to correct the gradation sufficiently. Requiring many line memories is another issue with the conventional method when the process is applied to vertical gradation. As a solution to these issues, SGRAD improves the detection of contours with transiting gradation to effectively correct the gradation of image signals on which noise is superimposed. In addition, the use of a prediction algorithm for detecting gradation reduces the scale of the circuit while slightly reducing the correction of vertical gradation.

  • Improvements of Local Descriptor in HOG/SIFT by BOF Approach

    Zhouxin YANG  Takio KURITA  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    1293-1303

    Numerous studies have focused on improving bag of features (BOF), histogram of oriented gradients (HOG), and scale-invariant feature transform (SIFT). However, few works have attempted to learn the connection between them, even though the latter two are widely used as local feature descriptors for the former. Motivated by the resemblance between BOF and HOG/SIFT in descriptor construction, we improve the performance of HOG/SIFT by a) interpreting HOG/SIFT as a variant of BOF in descriptor construction, and then b) introducing recently proposed BOF approaches such as locality preservation, data-driven vocabularies, and spatial information preservation into the descriptor construction of HOG/SIFT, which yields BOF-driven HOG/SIFT. Experimental results show that the BOF-driven HOG/SIFT outperforms the original descriptors in pedestrian detection (for HOG) and in scene matching and image classification (for SIFT). The proposed BOF-driven HOG/SIFT can easily be applied as a replacement for the original HOG/SIFT in current systems, since it is a generalized version of the original.

  • Detecting Trace of Seam Carving for Forensic Analysis

    Seung-Jin RYU  Hae-Yeoun LEE  Heung-Kyu LEE  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    1304-1311

    Seam carving, which preserves semantically important image content during the resizing process, has been actively researched in recent years. This paper proposes a novel forensic technique to detect traces of seam carving. We exploit the energy bias and noise level of the images under analysis to reliably unveil the evidence of seam carving. Furthermore, we design a detector that investigates the relationship among neighboring pixels to estimate the inserted seams. Experimental results on a large set of test images indicate the superior performance of the proposed methods for both seam carving and seam insertion.

  • Quality Analysis of Discretization Methods for Estimation of Distribution Algorithms

    Chao-Hong CHEN  Ying-ping CHEN  

     
    PAPER-Biocybernetics, Neurocomputing

      Page(s):
    1312-1323

    Estimation of distribution algorithms (EDAs) have, since their introduction, been successfully used to solve discrete optimization problems and have thus proven to be an effective methodology for discrete optimization. To enhance the applicability of EDAs, researchers started to integrate EDAs with discretization methods so that EDAs designed for discrete variables can also solve continuous optimization problems. In order to further our understanding of the collaboration between EDAs and discretization methods, in this paper we propose a quality measure of discretization methods for EDAs. We then utilize the proposed quality measure to analyze three discretization methods: fixed-width histogram (FWH), fixed-height histogram (FHH), and greedy random split (GRS). Analytical measurements are obtained for FHH and FWH, and sampling measurements are conducted for FHH, FWH, and GRS. Furthermore, we integrate the Bayesian optimization algorithm (BOA), a representative EDA, with the three discretization methods to conduct experiments and observe the performance differences. A good agreement is reached between the discretization quality measurements and the numerical optimization results. The empirical results show that the proposed quality measure can be considered an indicator of the suitability of a discretization method for working with EDAs.
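
    The two histogram-based discretizations compared above, fixed-width (FWH) and fixed-height (FHH), correspond to equal-width and equal-frequency binning. The sketch below illustrates that distinction on a toy sample; the data and bin count are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def fixed_width_bins(x, n_bins):
    """FWH: split the value range into n_bins intervals of equal width."""
    return np.linspace(x.min(), x.max(), n_bins + 1)

def fixed_height_bins(x, n_bins):
    """FHH: choose bin edges so each bin holds (roughly) the same number of samples."""
    return np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1000)   # skewed toy data

edges_fwh = fixed_width_bins(x, 5)
edges_fhh = fixed_height_bins(x, 5)
print("FWH edges:", np.round(edges_fwh, 2))          # equal spacing, uneven counts
print("FHH edges:", np.round(edges_fhh, 2))          # unequal spacing, ~200 samples/bin
print("FWH counts:", np.histogram(x, edges_fwh)[0])
print("FHH counts:", np.histogram(x, edges_fhh)[0])
```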

  • Locating Fetal Facial Surface, Oral Cavity and Airways by a 3D Ultrasound Calibration Using a Novel Cones' Phantom

    Rong XU  Jun OHYA  Yoshinobu SATO  Bo ZHANG  Masakatsu G. FUJIE  

     
    PAPER-Biological Engineering

      Page(s):
    1324-1335

    Toward the realization of an automatic navigation system for fetoscopic tracheal occlusion (FETO) surgery, this paper proposes a 3D ultrasound (US) calibration-based approach that can locate the fetal facial surface, oral cavity, and airways by registration between a 3D fetal model and 3D US images. The proposed approach consists of an offline process and an online process. The offline process first reconstructs the 3D fetal model with the anatomies of the oral cavity and airways. Then, a point-based 3D US calibration system, based on real-time 3D US images, an electromagnetic (EM) tracking device, and a novel cones' phantom, computes the matrix that transforms the 3D US image space into the world coordinate system. In the online process, 3D US images containing the fetus are obtained by scanning the mother's body with a 3D US probe. The fetal facial surface extracted from the 3D US images is registered to the 3D fetal model using an ICP-based (iterative closest point) algorithm and the calibration matrices, so that the fetal facial surface as well as the oral cavity and airways are located. The results indicate that the 3D US calibration system achieves an FRE (fiducial registration error) of 1.49±0.44 mm and a TRE (target registration error) of 1.81±0.56 mm using 24 fiducial points from two US volumes. A mean TRE of 1.55±0.46 mm is also achieved when measuring the location accuracy of the 3D fetal facial surface extracted from 3D US images with 14 target markers, and mean location errors of 2.51±0.47 mm and 3.04±0.59 mm are achieved when indirectly measuring the location accuracy of the pharynx and the entrance of the trachea, respectively, which satisfies the requirements of FETO surgery.

  • An Improved Low Complexity Detection Scheme in MIMO-OFDM Systems

    Jang-Kyun AHN  Hyun-Woo JANG  Hyoung-Kyu SONG  

     
    LETTER-Fundamentals of Information Systems

      Page(s):
    1336-1339

    Although QR decomposition-M algorithm (QRD-M) detection reduces complexity and achieves near-optimal detection performance, its complexity is still very high. In the proposed scheme, the received symbols passing through bad channel conditions are arranged in reverse order, because the performance of the system depends on the detection capability of the first layer. Simulation results show that the proposed scheme provides almost the same performance as the QRD-M. Moreover, the complexity is about 33.6% of that of the QRD-M for the same bit error rate (BER) in a 4×4 multiple-input multiple-output (MIMO) system.

  • Multiple Kernel Learning for Quadratically Constrained MAP Classification

    Yoshikazu WASHIZAWA  Tatsuya YOKOTA  Yukihiko YAMASHITA  

     
    LETTER-Fundamentals of Information Systems

      Page(s):
    1340-1344

    Most recent classification methods require tuning of hyper-parameters, such as the kernel function parameter and the regularization parameter. Cross-validation or the leave-one-out method is often used for this tuning; however, their computational cost is much higher than that of obtaining a classifier. Quadratically constrained maximum a posteriori (QCMAP) classifiers, which are based on the Bayes classification rule, have no regularization parameter and exhibit higher classification accuracy than the support vector machine (SVM). In this paper, we propose multiple kernel learning (MKL) for QCMAP to tune the kernel parameter automatically and improve the classification performance. By introducing MKL, QCMAP has no parameters to be tuned. Experiments show that the proposed classifier has classification performance comparable to or higher than that of conventional MKL classifiers.

  • A Fast Parallel Algorithm for Indexing Human Genome Sequences

    Woong-Kee LOH  Kyoung-Soo HAN  

     
    LETTER-Data Engineering, Web Information Systems

      Page(s):
    1345-1348

    A suffix tree is widely adopted for indexing genome sequences. While supporting highly efficient search, the suffix tree has a few shortcomings, such as its very large size and very long construction time. In this paper, we propose a very fast parallel algorithm to construct a disk-based suffix tree for human genome sequences. Our algorithm constructs a suffix array for part of the suffixes in the human genome sequence and then converts it into a suffix tree very quickly. It outperformed the previous algorithms by Loh et al. and Barsky et al. by up to 2.09 and 3.04 times, respectively.
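
    As background for the suffix array to suffix tree conversion mentioned above, here is a minimal, naive suffix array sketch (simple sorting, not the paper's parallel disk-based algorithm); genome-scale indexing requires far more efficient construction, and the toy sequence is an assumption.

```python
def suffix_array(text: str) -> list[int]:
    """Return the starting positions of all suffixes of `text` in sorted order.
    Naive O(n^2 log n) construction; fine for a toy example only."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def lcp_array(text: str, sa: list[int]) -> list[int]:
    """Longest common prefix length between each suffix and its predecessor in
    suffix-array order (lcp[0] = 0). A suffix array plus LCP values is the usual
    starting point for converting a suffix array into a suffix tree."""
    lcp = [0] * len(sa)
    for k in range(1, len(sa)):
        a, b = text[sa[k - 1]:], text[sa[k]:]
        n = 0
        while n < min(len(a), len(b)) and a[n] == b[n]:
            n += 1
        lcp[k] = n
    return lcp

seq = "GATTACA$"          # '$' is a unique terminator, as is conventional
sa = suffix_array(seq)
print(sa)                  # [7, 6, 4, 1, 5, 0, 3, 2]
print(lcp_array(seq, sa))
```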

  • Fast Density-Based Clustering Using Graphics Processing Units

    Woong-Kee LOH  Yang-Sae MOON  Young-Ho PARK  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    1349-1352

    Due to recent technical advances, GPUs are used for general-purpose applications as well as for screen display. Many research results have been proposed that improve the performance of previous CPU-based algorithms by a few hundred times using GPUs. In this paper, we propose a density-based clustering algorithm called GSCAN, which reduces the number of unnecessary distance computations using a grid structure. In our experiments, GSCAN outperformed CUDA-DClust [2] and DBSCAN [3] by up to 13.9 and 32.6 times, respectively.
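
    To make the grid-based pruning idea concrete, here is a minimal CPU-side sketch that restricts the ε-neighborhood search of density-based clustering to a point's own grid cell and its adjacent cells (a common DBSCAN acceleration); the data, ε, and cell size are assumptions, and this is not GSCAN itself.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def region_query_grid(points, eps):
    """Return, for each point, the indices of its eps-neighbors, examining only
    points whose grid cell (side length eps) is the same or adjacent."""
    cells = defaultdict(list)
    keys = np.floor(points / eps).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        cells[key].append(idx)

    neighbors = []
    offsets = list(product((-1, 0, 1), repeat=points.shape[1]))
    for idx, key in enumerate(map(tuple, keys)):
        cand = [j for off in offsets
                  for j in cells.get(tuple(k + o for k, o in zip(key, off)), [])]
        d = np.linalg.norm(points[cand] - points[idx], axis=1)
        neighbors.append([j for j, dist in zip(cand, d) if dist <= eps and j != idx])
    return neighbors

rng = np.random.default_rng(1)
pts = rng.random((200, 2))
print(sum(len(n) for n in region_query_grid(pts, eps=0.05)))
```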

  • One-Class Naïve Bayesian Classifier for Toll Fraud Detection

    Pilsung KANG  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    1353-1357

    In this paper, a one-class Naïve Bayesian classifier (One-NB) for detecting toll fraud in a VoIP service is proposed. Since toll frauds occur irregularly and their patterns are too diverse to be generalized as one class, conventional binary-class classification is not effective for toll fraud detection. In addition, conventional novelty detection algorithms have struggled to optimize their parameters to achieve stable detection performance. In order to resolve these limitations, the original Naïve Bayesian classifier is modified to handle the novelty detection problem. In addition, a genetic algorithm (GA) is employed to increase efficiency by selecting significant variables. In order to verify the performance of One-NB, comparative experiments using five well-known novelty detectors and three binary classifiers are conducted on real call data records (CDRs) provided by a Korean VoIP service company. The experimental results show that One-NB detects toll fraud more accurately than the other novelty detectors and binary classifiers when the toll fraud rate is relatively low. In addition, the performance of One-NB is found to be more stable than that of the benchmark methods, since no parameter optimization is required for One-NB.
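
    The following is a minimal sketch of the general idea behind a one-class Naïve Bayes detector: fit per-feature distributions on normal data only and flag records whose joint log-likelihood falls below a threshold. The Gaussian feature model, threshold rule, and toy data are assumptions, not the paper's One-NB formulation.

```python
import numpy as np

class OneClassNB:
    """Per-feature Gaussian Naive Bayes density fitted on normal data only;
    a record is flagged as novel if its log-likelihood is unusually low."""

    def fit(self, X, quantile=0.05):
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0) + 1e-9
        self.threshold = np.quantile(self._loglik(X), quantile)
        return self

    def _loglik(self, X):
        z = (X - self.mu) / self.sigma
        return (-0.5 * z**2 - np.log(self.sigma) - 0.5 * np.log(2 * np.pi)).sum(axis=1)

    def predict(self, X):
        return self._loglik(X) < self.threshold   # True = suspected fraud

rng = np.random.default_rng(0)
normal_calls = rng.normal([60, 1.0], [20, 0.5], size=(500, 2))  # duration, cost (toy)
suspicious   = np.array([[600, 40.0]])                          # extreme record
model = OneClassNB().fit(normal_calls)
print(model.predict(suspicious))  # [ True]
```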

  • Class Prior Estimation from Positive and Unlabeled Data

    Marthinus Christoffel DU PLESSIS  Masashi SUGIYAMA  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    1358-1362

    We consider the problem of learning a classifier using only positive and unlabeled samples. In this setting, it is known that a classifier can be successfully learned if the class prior is available. However, in practice, the class prior is unknown and thus must be estimated from data. In this paper, we propose a new method to estimate the class prior by partially matching the class-conditional density of the positive class to the input density. By performing this partial matching in terms of the Pearson divergence, which we estimate directly without density estimation via lower-bound maximization, we can obtain an analytical estimator of the class prior. We further show that an existing class prior estimation method can also be interpreted as performing partial matching under the Pearson divergence, but in an indirect manner. The superiority of our direct class prior estimation method is illustrated on several benchmark datasets.
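
    The partial matching idea in this abstract can be written out as follows; this is a standard way to express the described estimator (matching π p(x|y=1) to p(x) under the Pearson divergence), shown here in LaTeX for clarity rather than quoted from the paper.

```latex
\[
  \hat{\pi} \;=\; \operatorname*{arg\,min}_{\pi\in[0,1]} \; \mathrm{PE}(\pi),
  \qquad
  \mathrm{PE}(\pi) \;=\; \frac{1}{2}\int
      \left(\frac{\pi\,p(x\mid y=1)}{p(x)} - 1\right)^{2} p(x)\,dx ,
\]
where $p(x\mid y=1)$ is the class-conditional density of the positive class and
$p(x)$ is the input density of the unlabeled data; the Pearson divergence
$\mathrm{PE}(\pi)$ is estimated directly via lower-bound maximization rather
than through separate density estimation.
```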

  • Evaluation of Large-Sized LCD Touch Panel Using Differential Sensing Circuit and Algorithm

    Sang Hyuck BAE  Jaewon PARK  CheolSe KIM  SeokWoo LEE  Woosup SHIN  Yong-Surk LEE  

     
    LETTER-Human-computer Interaction

      Page(s):
    1363-1366

    In this letter, we evaluate the parasitic capacitance of an LCD touch panel and describe the implementation of a differential input sensing circuit and an algorithm suitable for large LCDs with an integrated touch function. When projected-capacitive touch sensors are integrated with a liquid crystal display, the sensors have a very large parasitic capacitance with the display elements. A differential input sensing circuit can detect the small changes in mutual capacitance caused by the touch of a finger. The circuit is realized using discrete components, and a printed-circuit-board touch panel is used for the evaluation of a large-sized LCD touch panel.

  • A Combing Top-Down and Bottom-Up Discriminative Dictionaries Learning for Non-specific Object Detection

    Yurui XIE  Qingbo WU  Bing LUO  Chao HUANG  Liangzhi TANG  

     
    LETTER-Pattern Recognition

      Page(s):
    1367-1370

    In this letter, we exploit a new framework for detecting non-specific objects by combining top-down and bottom-up cues. Specifically, a novel supervised discriminative dictionary learning method is proposed to learn coupled dictionaries for the object and non-object feature spaces from the top-down cue. Unlike previous dictionary learning methods, the data reconstruction residual terms of the coupled feature spaces, sparsity penalties on the representations, and an inconsistency regularizer on the learned dictionaries are all incorporated into a unified objective function. We then derive an iterative algorithm to alternately optimize all the variables efficiently. Considering the bottom-up cue, the proposed discriminative dictionary learning is then integrated with unsupervised dictionary learning to capture objectness windows in an image. Experimental results show that the non-specific object detection problem can be effectively solved by the proposed dictionary learning framework, which outperforms some established methods.

  • Feature-Level Fusion of Finger Veins and Finger Dorsal Texture for Personal Authentication Based on Orientation Selection

    Wenming YANG  Guoli MA  Fei ZHOU  Qingmin LIAO  

     
    LETTER-Pattern Recognition

      Page(s):
    1371-1373

    This study proposes a feature-level fusion method that uses finger veins (FVs) and finger dorsal texture (FDT) for personal authentication based on orientation selection (OS). The orientation codes obtained by the filters correspond to different parts of an image (foreground or background) and thus different orientations offer different levels of discrimination performance. We have conducted an orientation component analysis on both FVs and FDT. Based on the analysis, an OS scheme is devised which combines the discriminative orientation features of both modalities. Our experiments demonstrate the effectiveness of the proposed method.

  • An Efficient Strategy for Bit-Quad-Based Euler Number Computing Algorithm

    Bin YAO  Hua WU  Yun YANG  Yuyan CHAO  Atsushi OHTA  Haruki KAWANAKA  Lifeng HE  

     
    LETTER-Pattern Recognition

      Page(s):
    1374-1378

    The Euler number of a binary image is an important topological property for pattern recognition, and can be calculated by counting certain bit-quads in the image. This paper proposes an efficient strategy for improving the bit-quad-based Euler number computing algorithm. By use of the information obtained when processing the previous bit quad, the number of times that pixels must be checked in processing a bit quad decreases from 4 to 2. Experiments demonstrate that an algorithm with our strategy significantly outperforms conventional Euler number computing algorithms.
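
    For context on the bit-quad counting that the strategy above accelerates, here is a minimal sketch of the classical Gray-style bit-quad Euler number computation for 4-connectivity; it scans every 2×2 quad naively, without the paper's optimization, and the toy image is an assumption.

```python
import numpy as np

def euler_number_4(img):
    """Euler number (4-connectivity) of a binary image via bit-quad counting:
    E = (Q1 - Q3 + 2*Qd) / 4, where Q1/Q3 count 2x2 quads with exactly one/three
    foreground pixels and Qd counts the two diagonal quads."""
    p = np.pad(img.astype(int), 1)            # pad so border quads are included
    q1 = q3 = qd = 0
    for i in range(p.shape[0] - 1):
        for j in range(p.shape[1] - 1):
            quad = p[i:i + 2, j:j + 2]
            s = quad.sum()
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad[0, 0] == quad[1, 1]:   # diagonal patterns
                qd += 1
    return (q1 - q3 + 2 * qd) // 4

# One solid object containing one hole: Euler number = #objects - #holes = 0.
img = np.array([[1, 1, 1],
                [1, 0, 1],
                [1, 1, 1]])
print(euler_number_4(img))   # 0
```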

  • Image Quality Assessment Based on Multi-Order Visual Comparison

    Fei ZHOU  Wen SUN  Qingmin LIAO  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1379-1381

    A new scheme based on multi-order visual comparison is proposed for full-reference image quality assessment. Inspired by the observation that various image derivatives have great but different effects on visual perception, we perform respective comparison on different orders of image derivatives. To obtain an overall image quality score, we adaptively integrate the results of different comparisons via a perception-inspired strategy. Experimental results on public databases demonstrate that the proposed method is more competitive than some state-of-the-art methods, benchmarked against subjective assessment given by human beings.

  • Image Contrast Enhancement Using Adaptive Slope

    Hwa-Soo WOO  Jong-Wha CHONG  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1382-1385

    In this paper, we propose a contrast enhancement algorithm based on Adaptive Histogram Equalization (AHE) to improve image quality. Most histogram-based contrast enhancement methods suffer from excessive or insufficient contrast enhancement, which results in unnatural output images and the loss of visual information. The proposed method manipulates the slope of the input image's probability density function (PDF) histogram. We also propose a pixel redistribution method using convolution to compensate for the excess pixels after the slope modification procedure. Our method adaptively enhances the contrast of the input image and shows good simulation results compared with conventional methods.
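
    As a rough illustration of slope-limited histogram equalization (the general family this letter builds on), the sketch below clips the histogram so that the slope of the equalization mapping stays bounded and redistributes the clipped mass uniformly; the clip limit, uniform redistribution, and test image are assumptions, not the proposed adaptive-slope method or its convolution-based redistribution.

```python
import numpy as np

def slope_limited_equalize(img, max_slope=3.0):
    """Histogram equalization whose transfer-function slope is bounded by
    clipping the histogram at max_slope * (uniform bin height) and spreading
    the clipped counts evenly over all bins (CLAHE-style clipping, global)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    limit = max_slope * img.size / 256.0
    excess = np.clip(hist - limit, 0, None).sum()
    hist = np.minimum(hist, limit) + excess / 256.0   # uniform redistribution
    cdf = np.cumsum(hist) / hist.sum()
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)  # toy image
out = slope_limited_equalize(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", out.min(), out.max())
```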

  • An Improved White-RGB Color Filter Array Based CMOS Imaging System for Cell Phones in Low-Light Environments

    Chang-shuai WANG  Jong-wha CHONG  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1386-1389

    In this paper, a novel White-RGB (WRGB) color filter array-based imaging system for cell phones is presented to reduce noise and reproduce color under low illumination. The core process is based on adaptive diagonal color separation, which recovers the color components from the white signal using diagonal reference blocks and location-based color ratio estimation in the luminance space. Experiments comparing the system with RGB and state-of-the-art WRGB approaches show that our imaging system performs well for images of various spatial frequencies and for color restoration in low-light environments.

  • Texture Direction Based Optimization for Intra Prediction in HEVC

    Zhengcong WANG  Peng WANG  Hongguang ZHANG  Hongjun ZHANG  Shibao ZHENG  Li SONG  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1390-1393

    High Efficiency Video Coding (HEVC) is the latest video coding standard, developed by the JCT-VC. In this letter, an encoding algorithm for the early termination of Coding Unit (CU) and Prediction Unit (PU) decisions based on texture direction is proposed for HEVC intra prediction. Experimental results show that the proposed algorithm provides an average 40% reduction in total encoding time with negligible rate-distortion loss.

  • Deformable Part-Based Model Transfer for Object Detection

    Zhiwei RUAN  Guijin WANG  Xinggang LIN  Jing-Hao XUE  Yong JIANG  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1394-1397

    The transfer of prior knowledge from source domains can improve learning performance when the training data in a target domain are insufficient. In this paper we propose a new strategy for transferring deformable part models (DPMs) for object detection, using offline-trained auxiliary DPMs of similar categories as source models to improve the performance of the target object detector. A DPM represents an object by a root filter and several part filters. We use the filters of the auxiliary DPMs as prior knowledge and adapt them to the target object. With a latent transfer learning method, appropriate local features are extracted for the transfer of the part filters. Our experiments demonstrate that this strategy can lead to a detector superior to some state-of-the-art methods.

  • Adaptive Subscale Entropy Based Quantification of EEG

    Young-Seok CHOI  

     
    LETTER-Biological Engineering

      Page(s):
    1398-1401

    This letter presents a new entropy measure for electroencephalograms (EEGs) that reflects the underlying dynamics of the EEG over multiple time scales. The motivation behind this study is that neurological signals such as the EEG possess distinct dynamics over different spectral modes. To deal with the nonlinear and nonstationary nature of EEG, the recently developed empirical mode decomposition (EMD) is incorporated, allowing an EEG to be decomposed into its inherent spectral components, referred to as intrinsic mode functions (IMFs). By calculating the Shannon entropy of the IMFs in a time-dependent manner and summing them over adaptive multiple scales, the result is an adaptive subscale entropy measure of the EEG. Simulation and experimental results show that the proposed entropy properly reveals dynamical changes over multiple scales.
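
    The entropy computation described above can be sketched roughly as follows: given precomputed IMFs, compute a windowed Shannon entropy per IMF and sum across scales. The windowing, binning, and synthetic IMFs are assumptions; the decomposition itself (EMD) is treated as given rather than implemented here.

```python
import numpy as np

def windowed_shannon_entropy(x, win=128):
    """Shannon entropy of amplitude-histogram probabilities in successive windows."""
    ent = []
    for start in range(0, len(x) - win + 1, win):
        seg = x[start:start + win]
        p, _ = np.histogram(seg, bins=16)
        p = p / p.sum()
        p = p[p > 0]
        ent.append(-(p * np.log2(p)).sum())
    return np.array(ent)

def subscale_entropy(imfs, win=128):
    """Sum the windowed entropies over all IMFs (scales)."""
    return np.sum([windowed_shannon_entropy(imf, win) for imf in imfs], axis=0)

# Synthetic "IMFs": two oscillations whose variability changes halfway through,
# standing in for the output of an EMD of an EEG segment.
t = np.arange(2048)
imf1 = np.sin(2 * np.pi * t / 32) * np.where(t < 1024, 1.0, 3.0)
imf2 = np.sin(2 * np.pi * t / 256) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(subscale_entropy([imf1, imf2], win=256))
```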