
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4


Volume E89-D No.4  (Publication Date:2006/04/01)

    Special Section on Knowledge-Based Software Engineering
  • FOREWORD

    Toyohiko HIROTA  

     
    FOREWORD

      Page(s):
    1319-1320
  • Sizing Data-Intensive Systems from ER Model

    Hee Beng Kuan TAN  Yuan ZHAO  

     
    PAPER

      Page(s):
    1321-1326

    Sizing software remains problematic despite the existence of well-known software sizing methods such as the Function Point method. Many developers continue to use ad hoc or so-called "expert" approaches, mainly because the existing methods require much information that is difficult to identify or estimate in the early stages of a software project. The accuracy of ad hoc and "expert" methods is also problematic. The entity-relationship (ER) model is widely used in conceptual modeling (requirements analysis) for data-intensive systems, and the characteristics of a data-intensive system, and therefore of its source code, are well captured by the ER diagram that models its data. This paper proposes a method for building a software size model from an extended ER diagram through the use of regression models. We have collected real data from industry for a preliminary validation of the proposed method, and the result is very encouraging.

  • Design and Implementation of a Software Inspection Support System for UML Diagrams

    Yoshihide OHGAME  Atsuo HAZEYAMA  

     
    PAPER

      Page(s):
    1327-1336

    Software inspection, which detects and removes defects in software artifacts, is a widely acknowledged and effective quality improvement method in software development. Constructing computer-supported inspection systems is a major topic in software inspection research, and many systems have been reported. However, few inspection support systems for model diagrams, especially UML diagrams, have emerged. We identified four key requirements an inspection support system for UML diagrams should satisfy: 1) annotations can be attached directly to model diagrams, 2) version management is provided so that the evolution of artifacts can be managed, 3) the whole inspection process is supported, and 4) horizontal and vertical readings are supported. This paper describes the design and implementation of our inspection support system for UML diagrams, which realizes these four requirements.

  • Improvement of the Correctness of Scenarios with Rules

    Atsushi OHNISHI  

     
    PAPER

      Page(s):
    1337-1346

    Scenarios that describe concrete situations of software operation play an important role in software development, especially in requirements engineering. Since scenarios are informal, their correctness is hard to verify. The authors have developed a language for describing scenarios in which simple action traces are embellished to include typed frames based on a simple case grammar of actions and to describe the sequence among events. Based on this scenario language, this paper describes (1) a correctness-checking method that uses rules to detect errors in a scenario (missing events, extra events, and wrong event sequences) and (2) a method for retrieving, from a rule database, the rules applicable to a given scenario using pre- and post-conditions.
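The abstract names the error classes a rule should detect but does not reproduce the rule language. As a minimal illustrative sketch (the event names and the `check_sequence` helper are hypothetical, not taken from the paper), an ordering rule over a scenario's event trace might be checked like this:

```python
def check_sequence(scenario, required):
    """Check a scenario's event trace against a rule requiring the events
    in `required` to occur in order; report missing and mis-ordered events."""
    errors = []
    position = {}
    for i, event in enumerate(scenario):
        position.setdefault(event, i)  # first occurrence of each event
    last = -1
    for event in required:
        if event not in position:
            errors.append(("missing event", event))
        elif position[event] < last:
            errors.append(("wrong sequence", event))
        else:
            last = position[event]
    return errors

trace = ["enter PIN", "insert card", "withdraw cash"]
rule = ["insert card", "enter PIN", "withdraw cash"]
print(check_sequence(trace, rule))  # [('wrong sequence', 'enter PIN')]
```

A real rule base would also cover extra events and pre-/post-condition matching; this sketch only illustrates the missing-event and wrong-sequence checks.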

  • A Model for Detecting Cost-Prone Classes Based on Mahalanobis-Taguchi Method

    Hirohisa AMAN  Naomi MOCHIDUKI  Hiroyuki YAMADA  

     
    PAPER

      Page(s):
    1347-1358

    In software development, comprehensive software reviews and testing are important activities for preserving high quality and controlling maintenance cost. In practice, however, comprehensive reviews and testing are difficult to perform because of the large number of components, limited manpower, and other realistic restrictions. To improve the performance of reviews and testing of object-oriented software, this paper proposes a novel model for detecting cost-prone classes based on the Mahalanobis-Taguchi method, an extended statistical discriminant method merged with a pattern recognition approach. Experimental results on a large body of Java software statistically demonstrate that the proposed model has a high ability to detect cost-prone classes.

  • High-Volume Continuous XPath Querying in XML Message Brokers

    Hyunho LEE  Wonsuk LEE  

     
    PAPER

      Page(s):
    1359-1367

    The core technical issue in XML message brokers, which play a key role in exchanging information in ubiquitous environments, is processing a large set of continuous XPath queries over incoming XML streams. This paper proposes a new system to address this issue. The system is designed to minimize the runtime workload of continuous query processing by transforming XPath expressions and XML streams into newly proposed data structures and matching them efficiently. The system's performance is estimated in terms of both space and time and verified through a variety of experiments, which show that the proposed system is practically linear-scalable and stable in processing a set of XPath queries in a continuous and timely fashion.

  • A Graphical RDF-Based Meta-Model Management Tool

    Takeshi MORITA  Noriaki IZUMI  Naoki FUKUTA  Takahira YAMAGUCHI  

     
    PAPER

      Page(s):
    1368-1377

    We propose a tool to manage several kinds of relationships between RDF (Resource Description Framework) and RDFS (RDF Schema) descriptions. Our tool consists of three main functions: graphical editing of RDF descriptions, graphical editing of RDFS descriptions, and meta-model management. In this paper, we focus on meta-model management, a key concept defined as the appropriate management of the correspondence between a model and its meta-model: in particular, between the classes and properties in the meta-model and the types of RDF resources and properties in the model. These facilities are implemented as a plug-in system, and we provide basic plug-in modules for checking the consistency of RDFS classes and properties. The prototype tool, called MR3 (Meta-Model Management based on RDFs Revision Reflection), is implemented in Java. Through an experiment using MR3, we show how it contributes to the Semantic Web paradigm from the standpoint of RDFs description management.

  • Supporting Application Framework Selection Based on Labeled Transition Systems

    Teruyoshi ZENMYO  Takashi KOBAYASHI  Motoshi SAEKI  

     
    PAPER

      Page(s):
    1378-1389

    Framework technology is a promising approach to effective software reuse. Its key issues are 1) selecting frameworks suitable for the software requirements specification and 2) filling the suitable hot spots with application-specific code (customization). This paper proposes a new technique for automated support of these issues using labeled transition systems (LTSs) together with a metrics technique. We model the behavior of a framework and the system behavior specified in the requirements specification as two LTSs. By establishing a bisimilarity relationship between the two LTSs, we check whether the behavior of the framework can match the requirements and explore how to fill its hot spots. This is done by constructing a graph that extracts the bisimilarity relationships; each path of the graph denotes one implementation of the requirements by the framework. We attach measures to the LTS of the framework, such as the number of hot spots to be filled and the number of parameters to be set up when filling a hot spot. These measures estimate the developer's effort in filling the hot spots for each implementation, i.e., for each path of the graph. The estimated effort guides developers in selecting an implementation, and the structure of the application-specific code to be filled in can be generated automatically from the selected implementation. Furthermore, we discuss case studies in the area of Web applications, where Struts and Turbine can be used.

  • Meta-Modeling Based Version Control System for Software Diagrams

    Takafumi ODA  Motoshi SAEKI  

     
    PAPER

      Page(s):
    1390-1402

    In iterative software development, a version control system is used to record and manage the modification histories of products such as source code and models described in diagrams. However, conventional version control systems cannot manage models as logical units because they mainly handle source code. In this paper, we propose a version control technique that handles diagrammatic models as logical units. We then illustrate the feasibility of our approach by implementing version control functions in a meta-CASE tool that can generate modeling tools for various diagrams.

  • Supporting Refactoring Activities Using Histories of Program Modification

    Shinpei HAYASHI  Motoshi SAEKI  Masahito KURIHARA  

     
    PAPER

      Page(s):
    1403-1412

    Refactoring is a promising technique for improving program design through behavior-preserving program transformation, and it is widely applied in practice. However, it is difficult for engineers to identify how and where to refactor programs, because doing so requires considerable knowledge and skill. In this paper, we propose a technique that suggests how and where to refactor a program by using the sequence of its modifications. We consider that the history of program modifications reflects the developers' intentions, and focusing on it allows us to provide suitable refactoring guidance. Our technique can be automated by storing correspondences between modification patterns and suitable refactoring operations. We show its feasibility by implementing an automated supporting tool as a plug-in for the Eclipse IDE; the tool selects refactoring operations by matching a sequence of program modifications against the stored modification patterns.

  • Goal-Oriented Methodology for Agent System Development

    Zhiqi SHEN  Chunyan MIAO  Robert GAY  Dongtao LI  

     
    PAPER

      Page(s):
    1413-1420

    Goal orientation is one of the key features of agent systems. This paper proposes a new methodology for multi-agent system development based on the Goal Net model. The methodology covers the whole life cycle of agent system development, from requirements analysis and architecture design through detailed design to implementation. A Multi-Agent Development Environment (MADE) that facilitates the design and implementation of agent systems is presented, and a case study of an agent-based e-learning system developed using the proposed methodology is illustrated.

  • A Flexible Connection Model for Software Components

    Hironori WASHIZAKI  Daiki HOSHI  Yoshiaki FUKAZAWA  

     
    PAPER

      Page(s):
    1421-1431

    A component connection enables a component to use the functionality of other components directly, without generating adapters or other mechanisms at run time. In conventional component connection models, connecting components, particularly third-party components, is very costly for code reuse because the component source code must be modified if the requester-side and provider-side types differ. This paper proposes a new component model, built upon an existing component architecture, which abandons component service types and connects components based on the method type collections of the provider and requester components. Our model enables flexible connections owing to relaxed component matching, in which the system implementing our model automatically converts the values of parameters, return values, and exceptions between required and provided methods within a well-defined range. Experimental evaluations show that our model is superior to conventional models in terms of component-use cost and the capability of changing connections.

  • Regular Section
  • Generalization of Sorting in Single Hop Wireless Networks

    Shyue-Horng SHIAU  Chang-Biau YANG  

     
    PAPER-Computation and Computational Models

      Page(s):
    1432-1439

    The generalized sorting problem is to find the first k largest of n input elements and to report them in sorted order. In this paper, we propose a fast generalized sorting algorithm under the single-hop wireless networks with collision detection (WNCD) model. The algorithm is based on a maximum-finding algorithm and a sorting algorithm. Its key point is to use successful broadcasts to build broadcasting layers logically and then to distribute the data elements into those logical layers properly, which reduces the number of broadcast conflicts. We prove that the average time complexity of our generalized sorting algorithm is Θ(k + log(n - k)). When k = 1, the algorithm performs maximum finding, and when k = n, it performs sorting. The analysis of our algorithm thus builds a connection between these two extreme special cases, maximum finding and sorting.
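The problem statement above is independent of the WNCD broadcast protocol: report the k largest of n elements in sorted order, with k = 1 reducing to maximum finding and k = n to full sorting. A plain sequential reference in Python (not the paper's distributed algorithm) makes the two extreme cases concrete:

```python
import heapq

def generalized_sort(elements, k):
    """Return the k largest of the input elements, in descending order.

    k = 1 reduces to maximum finding; k = len(elements) reduces to a
    full (descending) sort -- the two extreme cases of the problem.
    """
    return heapq.nlargest(k, elements)

data = [7, 3, 9, 1, 5, 8]
print(generalized_sort(data, 1))          # [9]            (maximum finding)
print(generalized_sort(data, 3))          # [9, 8, 7]
print(generalized_sort(data, len(data)))  # full descending sort
```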

  • Hybrid Evolutionary Soft-Computing Approach for Unknown System Identification

    Chunshien LI  Kuo-Hsiang CHENG  Zen-Shan CHANG  Jiann-Der LEE  

     
    PAPER-Computation and Computational Models

      Page(s):
    1440-1449

    A hybrid evolutionary neuro-fuzzy system (HENFS) is proposed in this paper, in which the weighted Gaussian function (WGF) is used as the membership function for improved premise construction. With the WGF, different types of membership functions (MFs) can be accommodated in the rule base of the HENFS. A new hybrid algorithm combining random optimization (RO) with least-squares estimation (LSE) is presented. Based on this RO-LSE hybridization, the proposed soft-computing approach overcomes the disadvantages of other widely used algorithms. The proposed HENFS is applied to chaotic time series identification and industrial process modeling to verify its feasibility. The illustrations and comparisons show impressive performance in unknown system identification.

  • An Energy-Efficient Partitioned Instruction Cache Architecture for Embedded Processors

    CheolHong KIM  SungWoo CHUNG  ChuShik JHON  

     
    PAPER-Computer Systems

      Page(s):
    1450-1458

    Energy efficiency of cache memories is crucial in designing embedded processors. Reducing energy consumption in the instruction cache is especially important, since the instruction cache consumes a significant portion of total processor energy. This paper proposes a new instruction cache architecture, named the Partitioned Instruction Cache (PI-Cache), which reduces dynamic energy consumption in the instruction cache by partitioning it into smaller (less power-consuming) sub-caches. When the proposed PI-Cache is accessed, only one sub-cache is accessed, by exploiting the temporal/spatial locality of applications; the other sub-caches are not accessed, reducing dynamic energy. The PI-Cache also reduces dynamic energy consumption by eliminating the energy consumed in tag lookup and comparison. Moreover, the performance gap between the conventional instruction cache and the proposed PI-Cache becomes small when the physical cache access time is considered. We evaluated the energy efficiency by running a cycle-accurate simulator, SimpleScalar, with power parameters obtained from CACTI. Simulation results show that the PI-Cache improves the energy-delay product by 20%-54% compared to a conventional direct-mapped instruction cache.

  • Generating Test Sequences from Statecharts for Concurrent Program Testing

    Heui-Seok SEO  In Sang CHUNG  Yong Rae KWON  

     
    PAPER-Software Engineering

      Page(s):
    1459-1469

    This paper presents an approach to specification-based testing of concurrent programs using representative test sequences generated from Statecharts. Representative test sequences are a subset of all possible interleavings of the concurrent events that define the behaviors of a concurrent program. Because a program's correctness may be determined by checking whether the program implements all behaviors in its specification, the program can be regarded as correct if, for each representative test sequence, it can supply an execution that has the same effects. Based on this observation, we employ each representative test sequence as a seed for generating an automaton that accepts the sequences equivalent to it, i.e., those revealing the same behavior. To test a concurrent program effectively, the automaton thus generated accepts all sequences equivalent to the representative test sequence and is used to control test execution. This paper describes an automated process for generating automata from a Statecharts specification and shows how the proposed approach works on Statecharts specifications through examples.

  • Effectiveness of an Integrated CASE Tool for Productivity and Quality of Software Developments

    Michio TSUDA  Sadahiro ISHIKAWA  Osamu OHNO  Akira HARADA  Mayumi TAKAHASHI  Shinji KUSUMOTO  Katsuro INOUE  

     
    PAPER-Software Engineering

      Page(s):
    1470-1479

    It is commonly thought that CASE tools reduce programming effort and increase development productivity, but no paper has provided quantitative data supporting this. This paper discusses productivity improvement through the use of an integrated CASE tool system named EAGLE (Effective Approach to Achieving High Level Software Productivity), as shown by various data collected at Hitachi from the 1980s to the 2000s. We evaluated productivity using three metrics: 1) the program generation rate using reusable program skeletons and components, 2) the fault density at two test phases, and 3) the learning curve for the education of inexperienced programmers. We show that productivity has been improved by the various facilities of EAGLE.

  • An Efficient Schema-Based Technique for Querying XML Data

    Dao Dinh KHA  Masatoshi YOSHIKAWA  

     
    PAPER-Database

      Page(s):
    1480-1489

    As data integration over the Web has become an increasing demand, there is a growing desire to use XML as a standard format for data exchange. To share their grammars efficiently, most XML documents in use are associated with a document structure description, such as a DTD or XML Schema. However, previously proposed XML query processing techniques do not use this document structure information efficiently. In this paper, we present a novel technique that reduces the disk I/O complexity of XML query processing. We design a schema-based numbering scheme called SPAR that incorporates both structure information and tag names extracted from the DTD or XML Schema. Based on SPAR, we develop a mechanism called VirtualJoin that significantly reduces the disk I/O workload of processing XML queries. Experiments show that VirtualJoin outperforms many prior techniques.

  • A Memory Grouping Method for Reducing Memory BIST Logic of System-on-Chips

    Masahide MIYAZAKI  Tomokazu YONEDA  Hideo FUJIWARA  

     
    PAPER-Dependable Computing

      Page(s):
    1490-1497

    With the increasing demand for SoCs with rich functionality, SoCs are being designed with hundreds of small memories of different sizes and frequencies. If memory BIST logic were added individually to each of these memories, the area overhead would be very high; to reduce it, memory BIST logic must be shared. This paper proposes a memory-grouping method for sharing memory BIST logic. A memory-grouping problem is formulated and an algorithm to solve it is proposed. Experimental results show that the proposed method reduces the area of the memory BIST wrapper by up to 40.55%. The results also show that being able to select between two connection methods produces a greater area reduction than using a single connection method.

  • Image Authentication Based on Modular Embedding

    Moon Ho LEE  Valery KORZHIK  Guillermo MORALES-LUNA  Sergei LUSSE  Evgeny KURBATOV  

     
    PAPER-Application Information Security

      Page(s):
    1498-1506

    We consider a watermark application that assists in maintaining and verifying the integrity of the associated images. Using a watermark (WM) for authentication has a great benefit: it requires no additional storage space for supplementary metadata, in contrast with cryptographic signatures, for instance. However, exact authentication poses a fundamental problem: how can a signature be embedded into a cover message in such a way that the watermarked cover image can be restored to its original state without any error? There are different approaches to this problem. We use a watermarking method consisting of modulo addition of a mark and investigate it in detail. Our contribution lies in investigating modified techniques of both watermark embedding and detection in order to provide the best reliability of watermark authentication. Simulation results for different types of embedders and detectors are given, together with pictures of watermarked images.
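The abstract does not spell out the embedding formula. Under a common reading of "modulo addition of a mark" (an assumption here, not the paper's exact scheme), the mark is added to each 8-bit pixel value modulo 256, which makes exact restoration of the cover possible by modular subtraction — the property exact authentication requires:

```python
def embed(cover, mark):
    """Embed a mark into a cover by pixel-wise addition modulo 256."""
    return [(c + m) % 256 for c, m in zip(cover, mark)]

def restore(marked, mark):
    """Invert the embedding: the original cover is recovered exactly."""
    return [(s - m) % 256 for s, m in zip(marked, mark)]

cover = [250, 17, 128, 3]
mark = [10, 240, 1, 7]
marked = embed(cover, mark)           # [4, 1, 129, 10] -- wraps around 256
assert restore(marked, mark) == cover  # lossless restoration
```

The wrap-around is exactly what distinguishes modular embedding from plain addition, which would clip at 255 and lose information.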

  • Low Power Block-Based Watermarking Algorithm

    Yu-Ting PAI  Shanq-Jang RUAN  

     
    PAPER-Application Information Security

      Page(s):
    1507-1514

    In recent years, digital watermarking has become a popular technique for labeling digital images by hiding secret information that can protect the copyright. The goal of this paper is to develop a DCT-based watermarking algorithm for low power and high performance. Our energy-efficient technique focuses on reducing the computation required for block-based permutation: instead of using the spatial coefficients proposed in Hsu and Wu's algorithm [1], we use DCT coefficients to pair blocks directly. The approach is implemented in C, and power dissipation is estimated with the Wattch toolset. The experimental results show that our approach not only reduces the energy consumption of the pairing mechanism by 99% but also increases the PSNR by 0.414 dB in the best case. Moreover, the proposed approach is robust to a variety of signal distortions, such as JPEG compression, image cropping, sharpening, blurring, and intensity adjustment.
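Neither Hsu and Wu's permutation nor the paper's exact coefficient choice is reproduced in the abstract. As an illustrative sketch only (the DC-coefficient criterion, the sort-and-pair-neighbors rule, and both helper names are assumptions), blocks can be paired directly on a DCT coefficient — here the DC term, which is proportional to the block mean — rather than by permuting in the spatial domain:

```python
def dc_coefficient(block):
    """DC term of the block's 2-D DCT, up to a constant factor:
    proportional to the mean intensity of the block."""
    rows, cols = len(block), len(block[0])
    return sum(sum(row) for row in block) / (rows * cols)

def pair_blocks(blocks):
    """Pair blocks with the closest DC coefficients:
    sort block indices by DC value, then pair adjacent indices."""
    order = sorted(range(len(blocks)), key=lambda i: dc_coefficient(blocks[i]))
    return [(order[i], order[i + 1]) for i in range(0, len(order) - 1, 2)]

blocks = [[[10, 10], [10, 10]],   # mean 10.0
          [[1, 1], [1, 1]],       # mean  1.0
          [[9, 9], [9, 9]],       # mean  9.0
          [[2, 2], [2, 2]]]       # mean  2.0
print(pair_blocks(blocks))  # [(1, 3), (2, 0)]
```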

  • Affinity Based Lateral Interaction Artificial Immune System

    Hongwei DAI  Zheng TANG  Yu YANG  Hiroki TAMURA  

     
    PAPER-Human-computer Interaction

      Page(s):
    1515-1524

    The immune system protects the living body from attacks by various foreign invaders. Based on immune response principles, we propose an improved lateral-interaction artificial immune system model. Considering that the different epitopes on the surface of an antigen can be recognized by a set of different paratopes expressed on the surfaces of immune cells, we build a neighborhood set consisting of immune cells with different affinities to a given input antigen, and we update the weights of all immune cells in the neighborhood set according to their affinities. Simulations on noisy pattern recognition illustrate that the proposed model has stronger noise tolerance and recognizes noisy patterns more effectively than our previous models.

  • Design of Fuzzy Controller of the Cycle-to-Cycle Control for Swing Phase of Hemiplegic Gait Induced by FES

    Achmad ARIFIN  Takashi WATANABE  Nozomu HOSHIMIYA  

     
    PAPER-Rehabilitation Engineering and Assistive Technology

      Page(s):
    1525-1533

    The goal of this study was to design a practical fuzzy controller for cycle-to-cycle control of multi-joint movements in the swing phase of functional electrical stimulation (FES)-induced gait. First, we designed three fuzzy controllers (a fixed fuzzy controller, a fuzzy controller with parameter adjustment based on the gradient descent method, and a fuzzy controller with parameter adjustment based on a fuzzy model) and two PID controllers (a fixed PID controller and an adaptive PID controller) for controlling two-joint (knee and ankle) movements. The control capabilities of the designed controllers were tested in automatic generation of stimulation burst duration and in compensation of muscle fatigue through computer simulations using a musculo-skeletal model. The fuzzy controllers responded better than the PID controllers in both respects, and parameter adjustment based on the fuzzy model proved effective when inter-subject variability caused oscillating responses. Based on these results, we designed a fuzzy controller with fuzzy-model-based parameter adjustment for controlling three-joint (hip, knee, and ankle) movements. The controlled gait pattern obtained by computer simulation did not differ significantly from the normal gait pattern and could be qualitatively accepted for clinical FES gait control. The fuzzy controller designed for cycle-to-cycle control of multi-joint movements during the swing phase of FES gait is expected to be examined clinically.

  • A Linear Time Algorithm for Binary Fingerprint Image Denoising Using Distance Transform

    Xuefeng LIANG  Tetsuo ASANO  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    1534-1542

    Fingerprints are useful for biometric purposes because of their well-known distinctiveness and persistence over time. However, owing to skin conditions or incorrect finger pressure, original fingerprint images always contain noise. In particular, some contain useless components, which are often mistaken for terminations, an essential minutia of a fingerprint. Mathematical morphology (MM) is a powerful tool in image processing. In this paper, we propose a linear-time algorithm that eliminates impulsive noise and useless components by employing generalized and ordinary morphological operators based on the Euclidean distance transform. There are two contributions. The first is a simple and efficient MM method for eliminating impulsive noise that can be restricted to a minimum number of pixels. The performance of MM depends heavily on the structuring elements (SEs), and finding an optimal SE is a difficult and nontrivial task; our second contribution is therefore an automatic approach, with no empirical parameters, for choosing appropriate SEs to eliminate useless components. We have developed a novel algorithm for the binarization of fingerprint images [1], and the distance transform values can be obtained directly from the binarization phase. The results show that this method is faster than existing denoising methods on fingerprint images with impulsive noise and useless components and achieves better quality than earlier methods.

  • Generating Category Hierarchy for Classifying Large Corpora

    Fumiyo FUKUMOTO  Yoshimi SUZUKI  

     
    PAPER-Natural Language Processing

      Page(s):
    1543-1554

    We address the problem of dealing with large collections of data and investigate automatically constructing domain-specific category hierarchies to improve text classification. We use two well-known techniques, the partitioning clustering method k-means and a loss function, to create the category hierarchy. The k-means method iterates through the data the system is permitted to classify during each iteration and constructs a hierarchical structure. In general, the number of clusters k is not given beforehand, so we use a loss function, which measures the degree of disappointment in any differences between the true distribution over inputs and the learner's prediction, to select the appropriate number of clusters k. Once the optimal k is selected, the procedure is repeated for each cluster. Our evaluation using the 1996 Reuters corpus, which consists of 806,791 documents, showed that automatically constructed hierarchies improve classification accuracy.
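The loss function itself is not specified in the abstract. As a stand-in sketch (the SSE-plus-penalty criterion, the deterministic quantile initialization, and the one-dimensional data are all simplifying assumptions, not the paper's method), the idea of selecting k before recursing into each cluster can be illustrated like this:

```python
def kmeans_1d(points, k, iters=50):
    """Plain 1-D k-means with deterministic quantile initialization.
    Returns (centers, within-cluster sum of squared errors)."""
    pts = sorted(points)
    centers = [pts[(2 * i + 1) * len(pts) // (2 * k)] for i in range(k)]
    assign = [0] * len(pts)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: (p - centers[j]) ** 2) for p in pts]
        for j in range(k):
            members = [p for p, a in zip(pts, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    sse = sum((p - centers[a]) ** 2 for p, a in zip(pts, assign))
    return centers, sse

def choose_k(points, k_max=5, penalty=1.0):
    """Stand-in for the paper's loss function: pick the k minimizing
    within-cluster SSE plus a per-cluster complexity penalty."""
    return min(range(1, k_max + 1),
               key=lambda k: kmeans_1d(points, k)[1] + penalty * k)

data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]  # two obvious groups
print(choose_k(data))  # 2
```

In the hierarchical procedure, the same selection would then be applied recursively within each of the chosen clusters.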

  • A Continuous Valued Neural Network with a New Evaluation Function of Degree of Unsatisfaction for Solving CSP

    Takahiro NAKANO  Masahiro NAGAMATU  

     
    PAPER-Biocybernetics, Neurocomputing

      Page(s):
    1555-1562

    We have proposed a neural network called the Lagrange programming neural network with polarized high-order connections (LPPH) for solving the satisfiability problem (SAT) of propositional calculus. The LPPH has gradient descent dynamics for the variables and gradient ascent dynamics for the Lagrange multipliers, which represent the weights of the clauses of the SAT instance. Each weight wr increases according to the degree of unsatisfaction of clause Cr, changing the energy landscape of the Lagrangian function, on which the values of the variables change in the gradient descent direction. It was proved that the LPPH is not trapped by any point that is not a solution of the SAT, and experimental results showed that it finds solutions faster than existing methods. In the LPPH dynamics, a function hr(x) calculates the degree of unsatisfaction of clause Cr via multiplication. However, this definition of hr(x) has a disadvantage when the number of literals in a clause is large. In the present paper, we propose a new definition of hr(x) that overcomes this disadvantage using the "min" operator. In addition, we extend the LPPH to solve the constraint satisfaction problem (CSP). Our neural network can update all neurons simultaneously to solve the CSP, whereas conventional discrete methods must update the variables sequentially; this is advantageous for VLSI implementation.
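The exact form of hr(x) is not given in the abstract. In the sketch below, the per-literal degree of unsatisfaction is an assumed convention (1 - x for a positive literal, x for a negated one); it shows why a multiplicative definition shrinks toward zero as the clause grows, while a "min"-based one does not:

```python
def literal_unsat(x, lit):
    """Degree to which one literal is unsatisfied; lit > 0 is a positive
    literal on variable |lit|, lit < 0 a negated one (assumed convention)."""
    v = x[abs(lit)]
    return 1.0 - v if lit > 0 else v

def h_product(x, clause):
    """Multiplicative degree of unsatisfaction: the product of per-literal
    degrees, which collapses toward 0 for clauses with many literals."""
    h = 1.0
    for lit in clause:
        h *= literal_unsat(x, lit)
    return h

def h_min(x, clause):
    """'min'-based degree of unsatisfaction: the least-unsatisfied literal
    sets the clause's degree, so it does not vanish with clause length."""
    return min(literal_unsat(x, lit) for lit in clause)

x = {i: 0.5 for i in range(1, 11)}  # ten half-satisfied variables
clause = list(range(1, 11))         # one clause of ten positive literals
print(h_product(x, clause))  # 0.5**10, nearly zero
print(h_min(x, clause))      # 0.5
```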

  • Graphical Gaussian Modeling for Gene Association Structures Based on Expression Deviation Patterns Induced by Various Chemical Stimuli

    Tetsuya MATSUNO  Nobuaki TOMINAGA  Koji ARIZONO  Taisen IGUCHI  Yuji KOHARA  

     
    PAPER-Biological Engineering

      Page(s):
    1563-1574

    We focused on the activity patterns of metabolic subnetworks, each of which can be regarded as a biological function module, in order to clarify the biological meanings of observed deviation patterns of gene expression induced by various chemical stimuli. We inferred gene association structures by applying the multivariate statistical method called graphical Gaussian modeling to the gene expression data in a subnetwork-wise manner; the obtained graphical models are expected to provide reasonable relationships between gene expression and macroscopic biological functions. In this study, the gene expression patterns of nematodes under various conditions (stresses from chemicals such as heavy metals and endocrine disrupters) were observed using DNA microarrays, and graphical models for the metabolic subnetworks were obtained from these expression data. The obtained models (independence graphs) represent gene association structures of gene cooperativity. We compared each independence graph with the corresponding metabolic subnetwork and then obtained a pattern, a set of characteristic values for these graphs; we found that the pattern for heavy metals differs considerably from that for endocrine disrupters. This implies that a set of characteristic values of the graphs can represent a macroscopic biological meaning.

  • Improvement of Authenticated Encryption Schemes with Message Linkages for Message Flows

    Min-Shiang HWANG  Jung-Wen LO  Shu-Yin HSIAO  Yen-Ping CHU  

     
    LETTER-Application Information Security

      Page(s):
    1575-1577

    An authenticated encryption scheme provides a mechanism for signing and encrypting simultaneously; furthermore, the receiver can verify and decrypt the signature at the same time. Tseng et al. proposed two efficient authenticated encryption schemes that can check the validity of the sent data before message recovery, but in fact their schemes cannot fully achieve this function. In this article, we point out the flaw and propose an improved scheme.

  • A Block Smoothing-Based Method for Flicker Removal in Image Sequences

    Lei ZHOU  Qiang NI  Yuanhua ZHOU  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1578-1581

    An automatic and efficient algorithm for the removal of intensity flicker is proposed. The repair process is founded on block-based estimation and restoration of luminance variation. It is easily implemented and controlled, removes most intensity flicker, and preserves intended effects such as fade-ins and fade-outs.
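
A minimal sketch of block-wise luminance correction in this spirit (a simplified per-block gain/offset match against a reference frame; the letter's actual estimation and smoothing steps are not reproduced here):

```python
import numpy as np

def block_flicker_correct(cur, ref, bs=16, eps=1e-6):
    """Match each block's mean and standard deviation in `cur` to the
    co-located block in `ref` (both 2-D grayscale arrays)."""
    out = cur.astype(np.float64).copy()
    H, W = cur.shape
    for y in range(0, H, bs):
        for x in range(0, W, bs):
            b = out[y:y+bs, x:x+bs]
            r = ref[y:y+bs, x:x+bs].astype(np.float64)
            gain = r.std() / (b.std() + eps)  # per-block luminance gain
            out[y:y+bs, x:x+bs] = (b - b.mean()) * gain + r.mean()
    return np.clip(out, 0.0, 255.0)
```

Because the correction is linear per block, a frame degraded by a global gain/offset flicker is restored almost exactly; the published method additionally smooths the block estimates so that genuine effects such as fades are not undone.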

  • An Unsupervised Approach for Video Text Localization

    Jian WANG  Yuan-Hua ZHOU  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1582-1585

    A new video text localization approach is proposed. First, some pre-processing techniques, including color space conversion and histogram equalization, are applied to the input video frames to obtain enhanced gray-scale images. Features are then extracted using the wavelet transform to represent the texture property of text regions. Next, an unsupervised fuzzy c-means classifier is applied to discriminate candidate text pixels from the background. Effective operations such as morphological dilation and a logical AND operation are applied to locate text blocks. A projection analysis technique is then employed to extract text lines. Finally, some geometric heuristics are used to remove noise regions and refine the location of text lines. Experimental results indicate that the proposed approach is superior to three other representative approaches in terms of total detection rate.
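
The fuzzy c-means step can be sketched generically as follows; this is the standard algorithm applied to an arbitrary feature matrix, not the authors' specific wavelet-feature pipeline:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means. X: (n, d) feature matrix; m: fuzzifier (> 1).
    Returns (membership matrix U of shape (n, c), cluster centers)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))            # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```

In the localization pipeline each pixel would be a row of X (its wavelet-texture features), and the cluster with the more text-like statistics supplies the candidate text pixels.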

  • Novel Block Motion Estimation Based on Adaptive Search Patterns

    Byung-Gyu KIM  Seon-Tae KIM  Seok-Kyu SONG  Pyeong-Soo MAH  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1586-1591

    An improved algorithm for fast motion estimation based on the block matching algorithm (BMA) is presented for use in a block-based video coding system. To achieve enhanced motion estimation performance, we propose an adaptive search pattern length for each iteration for the current macroblock (MB). In addition, the search points that must be checked are determined by means of directional information from the error surface, thus reducing intermediate searches. The proposed algorithm is tested with several sequences, and excellent performance is verified.
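
For reference, the baseline computation that such adaptive-pattern methods accelerate can be sketched as an exhaustive block-matching search minimizing the sum of absolute differences (SAD); the adaptive pattern length and directional pruning themselves are not reproduced here:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def block_match(cur, ref, y, x, bs=16, search=7):
    """Full search: best motion vector (dy, dx) for the block of `cur`
    at (y, x) within a +/- search window in `ref`."""
    blk = cur[y:y+bs, x:x+bs]
    best, mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cost = sad(blk, ref[ry:ry+bs, rx:rx+bs])
            if best is None or cost < best:
                best, mv = cost, (dy, dx)
    return mv, best
```

Fast algorithms such as the one in this letter aim to reach (nearly) the same minimum while checking far fewer of the (2*search+1)^2 candidate points.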

  • A Variable-Length Encoding Method to Prevent the Error Propagation Effect in Video Communication

    Linhua MA  Yilin CHANG  Jun LIU  Xinmin DU  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1592-1595

    A novel variable-length code (VLC), called alternate VLC (AVLC), is proposed, which employs two types of VLC to encode source symbols alternately. Its advantage is that it not only stops the symbol error propagation effect but also corrects symbol insertion and deletion errors, which is very important in video communication.
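
The alternation idea can be illustrated with two toy prefix-code tables (assumed for illustration only; they are not the codes from the letter). Symbols are encoded with the two tables in turn, and the decoder switches tables in the same rhythm, so a bit slip may surface as a parse failure instead of propagating silently:

```python
# two toy prefix codes for an assumed 3-symbol alphabet
TABLE_A = {'a': '0', 'b': '10', 'c': '11'}
TABLE_B = {'a': '00', 'b': '01', 'c': '1'}

def avlc_encode(symbols):
    """Encode symbols with TABLE_A and TABLE_B alternately."""
    tables = [TABLE_A, TABLE_B]
    return ''.join(tables[i % 2][s] for i, s in enumerate(symbols))

def avlc_decode(bits):
    """Decode, switching tables each symbol; a failed parse in the
    expected table signals desynchronization."""
    tables = [TABLE_A, TABLE_B]
    out, i, pos = [], 0, 0
    while pos < len(bits):
        inv = {v: k for k, v in tables[i % 2].items()}
        for L in (1, 2):  # codeword lengths used by the toy tables
            if bits[pos:pos+L] in inv:
                out.append(inv[bits[pos:pos+L]])
                pos += L
                break
        else:
            raise ValueError(f"desynchronization detected at bit {pos}")
        i += 1
    return out
```

The published AVLC additionally exploits the alternation to correct insertion and deletion errors; this toy version only shows the encode/decode rhythm.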

  • An Approach to Extracting Trunk from an Image

    Chin-Hung TENG  Yung-Sheng CHEN  Wen-Hsing HSU  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1596-1600

    Rendering realistic trees is quite important for simulating a 3D natural scene. Separating the trunk from its background is the first step toward constructing a 3D model of the tree. In this paper, a three-phase algorithm is developed to extract the trunk structure of the tree and hence segment the trunk from the image. Some experiments were conducted, and the results confirmed the feasibility of the proposed algorithm.

  • A Definitional Question Answering System Based on Phrase Extraction Using Syntactic Patterns

    Kyoung-Soo HAN  Young-In SONG  Sang-Bum KIM  Hae-Chang RIM  

     
    LETTER-Natural Language Processing

      Page(s):
    1601-1605

    We propose a definitional question answering system that extracts phrases using syntactic patterns that are easily constructed manually and can reduce the coverage problem. Experimental results show that our phrase extraction system outperforms a sentence extraction system, especially for selecting concise answers, in terms of recall and precision, and indicate that the choice of text unit for answer candidates and the final answer has a significant effect on system performance.
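
A toy version of pattern-based phrase extraction (the patterns below are hypothetical, written in the spirit of the letter, and are not the authors' actual pattern set):

```python
import re

# hypothetical definitional patterns; {t} is replaced by the target term
PATTERNS = [
    r"{t} (?:is|are|was|were) (?:an?|the) ([^.,;]+)",   # "X is a Y"
    r"{t}, (?:an?|the) ([^.,;]+),",                     # appositive
    r"{t}, also known as ([^.,;]+)",                    # alias
]

def extract_definition_phrases(target, text):
    """Collect candidate definitional phrases for `target` from `text`."""
    t = re.escape(target)
    phrases = []
    for pat in PATTERNS:
        phrases += re.findall(pat.format(t=t), text, flags=re.IGNORECASE)
    return [p.strip() for p in phrases]
```

Such patterns yield short phrases rather than whole sentences, which is what allows a phrase-level system to return more concise answer candidates than a sentence-extraction baseline.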