
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.72

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E88-D No.12  (Publication Date:2005/12/01)

    IEICE/IEEE Joint Special Section on Autonomous Decentralized Systems
  • FOREWORD

    Masaki AIDA  

     
    FOREWORD

      Page(s):
    2621-2621
  • The Future of High-Speed Train

    Takashi ENDO  

     
    INVITED PAPER

      Page(s):
    2625-2629

    High-speed intercity railways have grown into a profitable business, achieving a renaissance in rail transport. High-speed railways need constant updating with new systems if they are to be winners in this age of competing transportation modes. In view of this situation, JR East started an R&D project to achieve even higher speeds of more than 300 km/h. A test train that can run at an operational speed of 360 km/h is under development, and JR East plans to commence high-speed tests in the summer of 2005.

  • Autonomic Peer-to-Peer Service Directory

    Tim Hsin-ting HU  Aruna SENEVIRATNE  

     
    PAPER

      Page(s):
    2630-2639

    Service registration and discovery are functionalities central to any service-oriented architecture, and in today's systems they are often provided by centralized entities. However, further decentralizing these functionalities to all participants in the system offers advantages in scalability, robustness, and the distribution of control and cost. Peer-to-peer networks are great enablers toward this goal, as they are designed to be scalable and autonomic; redundancy and automatic reconfiguration are built into these systems, enabling peers to form and maintain the network autonomously. This article describes a fully decentralized service directory infrastructure built on top of the peer-to-peer protocol Chord. Service registration is performed implicitly by embedding semantic information into the peer identifiers, grouping peers by service category and forming islands on the ring topology. Service discovery is performed by sending queries and anycast messages to peers registered in the appropriate islands. The routing protocol is further modified to take advantage of the island topology, with a reputation mechanism and multi-path routing implemented to counter the threat of misbehaving peers dropping transit messages. Simulations were performed to assess the efficacy of both the new routing scheme and the misbehavior avoidance.
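
    The identifier-embedding idea in this abstract can be illustrated as follows; this is a minimal sketch assuming SHA-1 hashing, a 16-bit ring, and 4 category bits, all illustrative parameters rather than the paper's actual design:

```python
import hashlib

ID_BITS = 16        # size of the (illustrative) Chord identifier space
CATEGORY_BITS = 4   # high-order bits reserved for the service category

def peer_id(category: str, node_key: str) -> int:
    """Embed the service category into the high-order bits of the peer ID.

    Peers offering the same service category then occupy a contiguous arc
    ("island") of the Chord ring, so category-directed queries and anycast
    messages can be routed toward that arc."""
    cat = int(hashlib.sha1(category.encode()).hexdigest(), 16) % (1 << CATEGORY_BITS)
    node = int(hashlib.sha1(node_key.encode()).hexdigest(), 16) % (1 << (ID_BITS - CATEGORY_BITS))
    return (cat << (ID_BITS - CATEGORY_BITS)) | node

# Two peers registering the same category land in the same island:
a = peer_id("printing", "peer-A")
b = peer_id("printing", "peer-B")
assert a >> (ID_BITS - CATEGORY_BITS) == b >> (ID_BITS - CATEGORY_BITS)
```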

  • Autonomous Semantic Grid: Principles of Autonomous Decentralized Systems for Grid Computing

    Muhammad Omair SHAFIQ  Hafiz Farooq AHMAD  Hiroki SUGURI  Arshad ALI  

     
    PAPER

      Page(s):
    2640-2650

    Grid computing is an open, heterogeneous and highly dynamic environment based on the principles of service-oriented computing. It focuses on basic infrastructure for coordinated resource sharing among virtual organizations to achieve high performance and availability. However, using the existing Grid computing environment is quite complex and requires a great deal of human intervention. Avoiding this intervention requires bringing autonomy and semantics into the existing Grid infrastructure, where semantics acts as the glue for autonomy in efficient resource discovery and utilization. Several ontologies and ontology languages have been proposed in this regard, but they not only have shortcomings of their own; they also impose overhead on the Grid environment. On the other hand, agents are autonomous problem-solving entities that can negotiate semantically to interoperate with each other in dynamic environments. Inspired by the concept of Autonomous Decentralized Systems, we propose that the above goals can be achieved by integrating FIPA Multi Agent Systems with the Grid Service Architecture, thereby laying the foundation for the Autonomous Semantic Grid. The Autonomous Semantic Grid system architecture aims to provide an improved infrastructure by bringing autonomy, semantic interoperability and decentralization to Grid computing for emerging applications. This paper then presents implementation details of the first milestone toward realizing the Autonomous Semantic Grid: a middleware, namely the AgentWeb Gateway, for integration of Multi Agent Systems and the Grid Service Architecture. The system has also been evaluated over a number of application scenarios.

  • CIGMA: Active Inventory Service in Global E-Market Based on Efficient Catalog Management

    Su Myeon KIM  Seungwoo KANG  Heung-Kyu LEE  Junehwa SONG  

     
    PAPER

      Page(s):
    2651-2663

    A fully Internet-connected business environment is subject to frequent changes, and online shopping under such a dynamic environment can be frustrating for ordinary customers. We propose a new e-commerce service called CIGMA to assist online customers in such an environment. CIGMA provides catalog comparison and purchase mediation services over multiple shopping sites for ordinary online customers. The service is based on up-to-date information, reflecting frequent changes in catalog information instantaneously, and it meets online customers' desire for fast response. This paper describes CIGMA along with its architecture and the implementation of a working prototype.

  • Evaluation of Hybrid Message-Delivering Scheme for Massive Mobile Agents' Intercommunication

    Gen HATTORI  Chihiro ONO  Kazunori MATSUMOTO  Fumiaki SUGAYA  

     
    PAPER

      Page(s):
    2664-2671

    Mobile agent technology is applied to enhance remote network management of large-scale networks, real-world oriented entertainment systems, and so forth. To communicate, agents exchange messages and migrate repeatedly among terminals. Although these systems accomplish their tasks efficiently by using a large number of mobile agents, they have a serious problem in that the number of messages between agents increases in proportion to the square of the number of agents. These systems must reduce communication costs, such as the number of hosts relaying messages; however, no single conventional message-delivering scheme can keep communication costs to a minimum under all conditions. To minimize communication costs, we propose a hybrid message-delivering scheme that dynamically selects the optimal scheme. We first evaluate the communication costs of conventional schemes and design the hybrid message-delivering scheme, and then perform simulation evaluations to derive the threshold value for switching schemes so as to minimize communication costs.
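
    The dynamic scheme selection described above can be sketched as follows; the quadratic and linear cost models and the setup constant are illustrative assumptions, not the paper's measured communication costs:

```python
def direct_cost(n: int, c: float = 1.0) -> float:
    """Direct delivery: every agent exchanges messages with every other
    agent, so the cost grows quadratically with the number of agents."""
    return c * n * (n - 1)

def relayed_cost(n: int, setup: float = 50.0, c: float = 1.0) -> float:
    """Delivery via a relay host: a fixed setup cost plus one hop in and one
    hop out per agent, i.e. linear growth (parameters are hypothetical)."""
    return setup + 2 * c * n

def choose_scheme(n: int) -> str:
    """Hybrid scheme: pick whichever delivery scheme is cheaper for the
    current number of agents; the crossover point plays the role of the
    switching threshold derived in the paper's simulations."""
    return "direct" if direct_cost(n) <= relayed_cost(n) else "relayed"

# With few agents direct delivery wins; with many, relaying wins.
assert choose_scheme(3) == "direct"
assert choose_scheme(100) == "relayed"
```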

  • A Coalition Formation Framework Based on Transitive Dependence

    Bo AN  Chunyan MIAO  Daijie CHENG  

     
    PAPER

      Page(s):
    2672-2680

    Coalition formation in multi-agent systems (MAS) is becoming increasingly important, as it increases the ability of agents to execute tasks and maximize their payoffs. Dependence relations are regarded as the foundation of coalition formation. This paper proposes a novel dependence theory, namely transitive dependence theory, for dynamic coalition formation in multi-agent systems. Transitive dependence is an extension of direct dependence that supports an agent's reasoning about other social members during coalition formation. Based on the proposed theory, a dynamic coalition formation framework is developed that includes information gathering, transitive-dependence-based reasoning for finding coalition partners, and coalition resolution. Nested coalitions and how to deal with incomplete knowledge while forming coalitions are also discussed.
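
    The notion of transitive dependence extending direct dependence can be illustrated with a simple closure computation; the dict-of-sets representation is an assumption for illustration, not the paper's formalism:

```python
def transitive_dependence(direct: dict) -> dict:
    """Compute the transitive closure of a direct-dependence relation.

    `direct` maps each agent to the set of agents it directly depends on;
    the result adds every agent reachable through chains of dependence,
    which is what lets an agent reason about indirect coalition partners."""
    closure = {a: set(deps) for a, deps in direct.items()}
    changed = True
    while changed:
        changed = False
        for a in closure:
            for b in list(closure[a]):
                extra = closure.get(b, set()) - closure[a]
                if extra:
                    closure[a] |= extra
                    changed = True
    return closure

# A depends on B, and B on C, so A transitively depends on C:
deps = transitive_dependence({"A": {"B"}, "B": {"C"}, "C": set()})
assert deps["A"] == {"B", "C"}
```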

  • Behavioral Analysis of a Fault-Tolerant Software System with Rejuvenation

    Koichiro RINSAKA  Tadashi DOHI  

     
    PAPER

      Page(s):
    2681-2690

    In recent years, considerable attention has been devoted to continuously running software systems whose performance characteristics degrade gradually over time. Software aging often affects the performance of a software system and eventually causes it to fail. A novel approach to handling transient software failures due to software aging is software rejuvenation, which can be regarded as a preventive and proactive solution that is particularly useful for counteracting the aging phenomenon. In this paper, we focus on a high-assurance software system with fault tolerance and preventive rejuvenation, and analyze the stochastic behavior of such a highly critical software system. More precisely, we consider a fault-tolerant software system with a two-version redundant structure and a random rejuvenation schedule, and quantitatively evaluate dependability measures such as the steady-state system availability and MTTF based on the familiar Markovian analysis. In numerical examples, we examine how two fault-tolerance techniques, design diversity and environment diversity, affect the system dependability measures.

  • Swiss Cheese Test Case Generation for Web Services Testing

    Wei-Tek TSAI  Xiao WEI  Yinong CHEN  Ray PAUL  Bingnan XIAO  

     
    PAPER

      Page(s):
    2691-2698

    Current Web services testing techniques are unable to assure the desired level of trustworthiness, which presents a barrier to WS applications in mission and business critical environments. This paper presents a framework that assures the trustworthiness of Web services. New assurance techniques are developed within the framework, including specification verification via completeness and consistency checking, test case generation, and automated Web services testing. Traditional test case generation methods only generate positive test cases that verify the functionality of software. The proposed Swiss Cheese test case generation method is designed to generate both positive and negative test cases that also reveal the vulnerability of Web services. This integrated development process is implemented in a case study. The experimental evaluation demonstrates the effectiveness of this approach. It also reveals that the Swiss Cheese negative testing detects even more faults than positive testing and thus significantly reduces the vulnerability of Web services.

  • Autonomous Decentralized High-Speed Processing Technology and the Application in an Integrated IC Card Fixed-Line and Wireless System

    Akio SHIIBASHI  

     
    PAPER

      Page(s):
    2699-2707

    Improving the processing speed of the automatic fare collection gate (AFC gate) is one of the most important problems in handling the dense flow of passengers boarding and alighting at peak times. On the other hand, high reliability is indispensable for handling tickets, which are equivalent to banknotes. A ticket system must therefore provide both high-speed processing and high reliability. A wireless IC card ticket system is desirable for improving passenger convenience and reducing maintenance costs. However, because communication between an IC card and an AFC gate is wireless, high-speed processing and high reliability conflict in such a system: the faster the AFC gate processes the ticket, the poorer the reliability becomes. This paper proposes autonomous decentralized processing technology that meets the requirements of both high-speed processing and high reliability in a wireless IC card ticket system. The IC card, the AFC gate and the central server are each treated as an autonomous subsystem. A decentralized algorithm in which the fare is calculated jointly by the IC card and the AFC gate achieves high-speed processing, and autonomous decentralized consistency technology in each subsystem ensures high reliability. In addition, a technology for enlarging the wireless communication area (the "touch and go" method) and a command system for high-speed data processing make these techniques effective. These technologies have been introduced into the Suica system of East Japan Railway, and their effectiveness has been proven.

  • The Design of Diagnosis System in Maglev Train

    Zhigang LIU  

     
    PAPER

      Page(s):
    2708-2714

    The diagnosis system of the Maglev train is one of its most important parts; it obtains various status messages from the electric and electronic devices in the vehicle to ensure the safety of the whole train. In this paper, the structure and diagnosis method of the existing diagnosis system are analyzed and discussed in detail, and its disadvantages are described. Drawing on the theory of autonomous decentralized systems (ADS), some basic ideas of ADS are applied to a new diagnosis system. The structure, components and diagnosis method of the new system are proposed, designed and discussed in detail. The analysis results show that the new diagnosis system not only embodies ADS ideas but also better meets the demands of the Maglev train diagnosis system.

  • Regular Section
  • Primitive Inductive Theorems Bridge Implicit Induction Methods and Inductive Theorems in Higher-Order Rewriting

    Keiichirou KUSAKARI  Masahiko SAKAI  Toshiki SAKABE  

     
    PAPER-Computation and Computational Models

      Page(s):
    2715-2726

    Automated reasoning about inductive theorems is considered important in program verification. To verify inductive theorems automatically, several implicit induction methods, such as inductionless induction and rewriting induction, have been proposed. In studying inductive theorems in higher-order rewriting, we found that the class of theorems provable by known implicit induction methods does not coincide with the class of inductive theorems, and the gap between them is a barrier to developing mechanized methods for disproving inductive theorems. This paper fills this gap by introducing the notion of primitive inductive theorems and clarifying the relation between inductive theorems and primitive inductive theorems. Based on this relation, we obtain mechanized methods for both proving and disproving inductive theorems.

  • A Grammatical Approach to the Alignment of Structure-Annotated Strings

    Shinnosuke SEKI  Satoshi KOBAYASHI  

     
    PAPER-Automata and Formal Language Theory

      Page(s):
    2727-2737

    In this paper, we address a structural ambiguity problem of tree adjoining grammars (TAGs), which is essential when modeling consensus structures of a given set of ribonucleic acid (RNA) secondary structures by TAGs. RNA secondary structures can be represented as strings with structural information, what we call structure-annotated strings, and TAGs have the descriptive capability for this kind of string; thus we can model RNA secondary structures by TAGs. Existing alignment methods suffice for merely computing the optimal alignment between RNA secondary structures. However, if we also want to model the resulting alignment by grammars, these existing methods may fail to produce an alignment that grammars can model. It is therefore important to introduce a new alignment method whose results can be appropriately modeled by grammars. We propose an alignment method based on TAG derivations, each corresponding to a given RNA secondary structure. For a single RNA secondary structure, there exist many TAG derivations corresponding to it, and from the grammatical point of view the question is how to choose a derivation from these candidates so as to obtain an optimal alignment. This is the structural ambiguity problem of TAGs, which is the main topic of this paper. To deal with this problem, we propose an edit distance between two structure-annotated strings and present an algorithm that computes an optimal alignment based on this edit distance.
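
    As background for the proposed distance, the classic string edit distance is computed by dynamic programming as below; the paper's distance additionally accounts for the structure annotations, which this plain-string sketch omits:

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance between two plain strings via dynamic
    programming; d[i][j] is the cost of aligning s[:i] with t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            subst = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + subst)  # match/substitution
    return d[m][n]

assert edit_distance("GCAU", "GCU") == 1   # one deletion aligns the strings
```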

  • Classification of Sequential Circuits Based on τk Notation and Its Applications

    Chia Yee OOI  Thomas CLOUQUEUR  Hideo FUJIWARA  

     
    PAPER-VLSI Systems

      Page(s):
    2738-2747

    This paper introduces τk notation for assessing the test generation complexity of classes of sequential circuits. Using τk notation, we reconsider and restate the time complexity of test generation for existing classes of acyclic sequential circuits. We also introduce a new DFT method called the feedback shift register (FSR) scan design technique, an extension of the scan design technique; for a given sequential circuit, the corresponding FSR scan designed circuit always has area overhead and test application time equal to or lower than those of the corresponding scan designed circuit. Furthermore, we identify some new classes of sequential circuits, containing some cyclic sequential circuits, that are τ-equivalent and τ2-bounded: the l-length-bounded testable circuits, l-length-bounded validity-identifiable circuits, t-time-bounded testable circuits and t-time-bounded validity-identifiable circuits. In addition, we provide two examples of circuits belonging to these classes, namely counter-cycle finite state machine realizations and state-shiftable finite state machine realizations. Instead of using a DFT method, a given sequential circuit described at the finite state machine (FSM) level can be synthesized, using a test methodology called synthesis for testability (SFT), into a circuit that belongs to one of the easily testable classes of cyclic sequential circuits.

  • A Low Power-Consuming Embedded System Design by Reducing Memory Access Frequencies

    Ching-Wen CHEN  Chih-Hung CHANG  Chang-Jung KU  

     
    PAPER-Computer Systems

      Page(s):
    2748-2756

    When an embedded system is designed, system performance and power consumption have to be taken carefully into consideration. In this paper, we focus on reducing the number of memory accesses in embedded systems to improve performance and save power, exploiting the locality of running programs. We use shorter code words to encode the instructions that are executed most frequently and then pack consecutive code words into a pseudo instruction. Once the decompression engine fetches one pseudo instruction, it can extract multiple instructions, so the number of memory accesses is efficiently reduced thanks to spatial locality. However, the number of most frequently executed instructions differs with the program size of different applications; the number of memory accesses increases when fewer encoded instructions fit in a pseudo instruction, which degrades system performance and power consumption. To solve this problem, we also propose the use of multiple reference tables. Multiple reference tables give the most frequently executed instructions shorter encoded code words, thereby improving the performance and power of an embedded system. Our simulation results show that our method reduces the memory access frequency by about 60% when a reference table with 256 instructions is used. In addition, when two reference tables each containing 256 instructions are used, the memory access ratio is 10.69% less than that resulting from one reference table with 512 instructions.
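
    The reference-table idea can be sketched as follows; this toy encoder only shows how frequent instructions receive short table indices. The table size, tuple encoding and instruction trace are illustrative and do not reproduce the paper's pseudo-instruction packing format:

```python
from collections import Counter

def build_reference_table(trace, table_size):
    """Put the most frequently executed instructions into the reference
    table; an instruction's table index then serves as its short codeword."""
    return [ins for ins, _ in Counter(trace).most_common(table_size)]

def encode(trace, table):
    """Encode each instruction either as a short table index or, when it is
    not in the table, as the full uncompressed instruction."""
    index = {ins: i for i, ins in enumerate(table)}
    return [("short", index[ins]) if ins in index else ("full", ins)
            for ins in trace]

trace = ["add", "add", "ld", "add", "st", "ld", "add"]
table = build_reference_table(trace, table_size=2)
coded = encode(trace, table)
assert coded[0] == ("short", 0)      # frequent "add" gets a short codeword
assert ("full", "st") in coded       # rare "st" stays uncompressed
```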

  • A Top-Down Approach to Quality Driven Architectural Engineering of Software Systems

    Kwanwoo LEE  

     
    PAPER-Software Engineering

      Page(s):
    2757-2766

    Designing a software architecture that satisfies multiple quality requirements is a difficult undertaking. This is mainly because architects must be able to explore a broad range of architectural choices and analyze tradeoffs among them in light of multiple quality requirements. As the size and complexity of the system increase, the architectural design space to be explored and analyzed becomes more complex. To manage this complexity systematically, this paper proposes a method that guides architects to explore and analyze architectural decisions in a top-down manner. In the method, architectural decisions that have global impacts on given quality requirements are explored and analyzed first, and those that have local impacts are then considered in the context of the decisions made in the previous step. This approach copes with the complexity of large-scale architectural design systematically, as architectural decisions are analyzed and made following the abstraction hierarchy of quality requirements. To illustrate the concepts and applicability of the proposed method, we have applied it to the architectural design of the computer used in the continuous casting process of an iron and steel manufacturer.

  • Security against Inference Attacks on Negative Information in Object-Oriented Databases

    Yasunori ISHIHARA  Shuichiro AKO  Toru FUJIWARA  

     
    PAPER-Database

      Page(s):
    2767-2776

    Inference attacks mean that a user derives information on the execution results of unauthorized queries from the execution results of authorized queries. Most of the studies on inference attacks so far have focused on only inference of positive information (i.e., what value is the execution result of a given unauthorized query). However, negative information (i.e., what value is never the execution result of a given unauthorized query) is also sensitive in many cases. This paper presents the following results on the security against inference attacks on negative information in object-oriented databases. First, inference of negative information is formalized under a model of object-oriented databases called method schemas. Then, the following two types of security problems are defined: (1) Is a given database instance secure against inference attacks on given negative information? (2) Are all of the database instances of a given database schema secure against inference attacks on given negative information? It is shown that the first problem is decidable in polynomial time in the description size of the database instance while the second one is undecidable. A decidable sufficient condition for any database instance of a given database schema to be secure is also proposed. Finally, it is shown that for a monadic schema (i.e., every method has exactly one parameter), this sufficient condition is also a necessary one.

  • Scan Design for Two-Pattern Test without Extra Latches

    Kazuteru NAMBA  Hideo ITO  

     
    PAPER-Dependable Computing

      Page(s):
    2777-2785

    There are three well-known approaches to using scan design for two-pattern testing: broadside testing (functional justification), skewed-load testing and enhanced scan testing. Broadside and skewed-load testing both use the standard scan design, so their area overheads are not high, but their fault coverage is low. Enhanced scan testing uses the enhanced scan design, which employs extra latches and allows scanning in any two-pattern test; while it achieves high fault coverage, the extra latches cause high area overhead. This paper presents a new scan design in which two-pattern testing with high fault coverage can be performed with area overhead as low as that of the standard scan design. The proposed scan-FFs are based on master-slave FFs. The input of each scan-FF is connected to the output of the master latch, not the slave latch, of the previous FF. Every scan-FF maintains its output value during scan-shift operations.

  • A Coordinator for Workflow Management Systems with Information Access Control

    Shih-Chien CHOU  Chien-Jung WU  

     
    PAPER-Application Information Security

      Page(s):
    2786-2792

    This paper proposes a coordinator for workflow management systems (WFMSs). It is a basic module for developing WFMSs and also coordinates multiple WFMSs. The coordinator provides functions to facilitate executing workflows and to ensure secure access to workflow information. Facilitating workflow execution is well understood, but ensuring secure access to workflow information has been identified as important only recently. Although many models ensure secure access to workflow information, they fail to offer the features we need, so we developed a new model for this control. This paper presents the coordinator and its access control model.

  • Designing a Web-CAI System Incorporated with MATHEMATICA

    Changqing DING  Mitsuru SAKAI  Hiroyuki HASE  Masaaki YONEDA  

     
    PAPER-Educational Technology

      Page(s):
    2793-2801

    This paper describes an approach to extending the learning experience using a Web-CAI system incorporating MATHEMATICA, advanced computational software widely used in science and engineering. This approach extends access to the courses that students have learned at school. Variables can be used in mathematical formulas so that different problems can be shown to students, and algebraic formulas can be used as well. In addition, MATHEMATICA automatically produces the answer to a problem by applying it to the given solution process. Two types of answer input are accepted: filling in text with the keyboard and selecting by click. This paper presents the design of the system, its implementation and the technical issues involved in realizing it. Finally, the learning evaluations and the design of the problem-editing interface are discussed.

  • Contour-Based Window Extraction Algorithm for Bare Printed Circuit Board Inspection

    Shih-Yuan HUANG  Chi-Wu MAO  Kuo-Sheng CHENG  

     
    PAPER-Pattern Recognition

      Page(s):
    2802-2810

    Pattern extraction is an indispensable step in bare printed circuit board (PCB) inspection and plays an important role in automatic inspection system design. A good approach to pattern definition and extraction makes the subsequent PCB diagnosis easy and efficient. The window-based technique has great potential for PCB pattern extraction due to its simplicity. The conventional window-based pattern extraction methods, such as the Small Seeds Window Extraction (SSWE) and Large Seeds Window Extraction (LSWE) methods, have the problems of losing some useful copper traces and splitting slanted lines into too many small similar windows, which makes automatic inspection difficult and computationally intensive. In this paper, a novel method called the Contour Based Window Extraction (CBWE) algorithm is proposed as an improvement. Compared with the SSWE and LSWE methods, the CBWE algorithm has several advantages. Firstly, all traces can be segmented and enclosed by a valid window. Secondly, entire horizontal or vertical copper-trace lines are preserved as such. Thirdly, the number of valid windows is smaller than that extracted by SSWE and LSWE. The experimental results demonstrate that the proposed CBWE algorithm is very effective for basic pattern extraction in bare PCB image analysis.

  • Robust Speech Recognition Using Discrete-Mixture HMMs

    Tetsuo KOSAKA  Masaharu KATOH  Masaki KOHDA  

     
    PAPER-Speech and Hearing

      Page(s):
    2811-2818

    This paper introduces new methods of robust speech recognition using discrete-mixture HMMs (DMHMMs). The aim of this work is to develop robust speech recognition for adverse conditions that contain both stationary and non-stationary noise. In particular, we focus on impulsive noise, a major problem in practical speech recognition systems. We employ two strategies. In the first, adverse conditions are represented by an acoustic model; this requires a large amount of training data and accurate acoustic models to represent a variety of acoustic environments, and is suitable for recognition in stationary or slowly varying noise conditions. The second is based on the idea of compensating corrupted frames to reduce their adverse effect; since impulsive noise has a wide variety of features and is difficult to model, this strategy is employed for it. To realize these strategies, we propose two methods, both based on the DMHMM framework, which is one type of discrete HMM (DHMM). The first is a MAP-based estimation method for DMHMM parameters that aims to improve trainability. The second compensates the observation probabilities of DMHMMs with a threshold to reduce the adverse effect of outlier values. Observation probabilities of impulsive noise tend to be much smaller than those of normal speech, so flooring the observation probability reduces the adverse effect caused by impulsive noise. Experimental evaluations on Japanese LVCSR for read newspaper speech showed that the proposed method achieved an average error rate reduction of 48.5% in impulsive noise conditions. Experimental results in adverse conditions containing both stationary and impulsive noise showed that the proposed method achieved an average error rate reduction of 28.1%.
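
    The flooring idea behind the second method can be sketched as follows; the floor value and the per-frame probabilities are illustrative assumptions, not the thresholds used in the paper:

```python
import math

def sequence_log_score(frame_probs, floor=1e-6):
    """Sum of log observation probabilities with flooring: clamping each
    frame's probability to `floor` keeps a single impulsive-noise frame,
    whose probability is vanishingly small, from dominating the path score."""
    return sum(math.log(max(p, floor)) for p in frame_probs)

clean = [0.5, 0.4, 0.6]
noisy = [0.5, 1e-12, 0.6]            # middle frame hit by impulsive noise
# Without flooring the corrupted frame contributes log(1e-12); with flooring
# it contributes only log(1e-6), so the path score is far less distorted.
assert sequence_log_score(noisy) > sum(math.log(p) for p in noisy)
```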

  • A Fast Multi-Resolution Block Matching Algorithm for Multiple-Frame Motion Estimation

    Myung Jun KIM  Yun Gu LEE  Jong Beom RA  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2819-2827

    In this paper, we propose a fast multi-resolution block matching algorithm with three resolution levels (upper, middle, and lower levels) for multiple-frame motion estimation (MFME). The main concept of the algorithm is to perform a fast search while maintaining a PSNR performance similar to a full search block matching algorithm (FSBMA). The algorithm combines motion vector prediction using the spatial correlation of motion vectors and a multiple candidate search based on a multi-resolution search. To further reduce the computational complexity, we propose two temporal reduction schemes. To reduce the number of previous reference frames to be processed, the first scheme is applied to the upper level by using the information obtained from the search results of the spatio-temporally adjacent macroblocks (MBs) and the result from the current MB in the middle level of the first reference frame. The other scheme is applied to the lower level by using statistical information. Experimental results show that the proposed algorithm guarantees an average PSNR loss of less than 0.23 dB with dramatically reduced computational complexity as compared to the FSBMA. In particular, for sequences with fast motion or frame skipping, the proposed method provides a more prominent PSNR performance than those of existing fast schemes with a comparable computational complexity.

  • Optimal Sampling Operator for Signal Restoration in the Presence of Signal Space and Observation Space Noises

    Aqeel SYED  Hidemitsu OGAWA  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2828-2838

    The partial projection filter (PTPF) for a given observation operator provides an optimal signal restoration in the presence of both the signal space and observation space noises. However, restoration error by the filter still depends on the observation operator which consists of measurement and sampling processes. In this paper, we determine a sampling operator which minimizes the restoration error by the PTPF. We see that under some assumptions about noise statistics, the restoration error by the PTPF is divided into two terms corresponding to the error arising from the signal space noise and that from the observation space noise. It has been found that although the restoration error due to the signal space noise is independent of the sampling operator, the restoration error arising from the observation space noise can arbitrarily be decreased by increasing the number of sample points in the proposed sampling operator. An illustrative example of optimal sampling in the trigonometric polynomial space is also given.

  • JPEG 2000 Encoding Method for Reducing Tiling Artifacts

    Masayuki HASHIMOTO  Kenji MATSUO  Atsushi KOIKE  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2839-2848

    This paper proposes an effective JPEG 2000 encoding method for reducing tiling artifacts, one of the biggest problems in JPEG 2000 encoders. Symmetric pixel extension is generally thought to be the main cause of these artifacts. However, this paper shows that differences in quantization accuracy between tiles are a more significant cause of tiling artifacts at middle or low bit rates. It also proposes an algorithm that predicts, during rate control, whether tiling artifacts will occur at a tile boundary, and that locally improves quantization accuracy through an original post-quantization control. The paper further proposes a method for reducing processing time, another serious problem in JPEG 2000 encoders, by predicting truncation points from the entropy of wavelet transform coefficients prior to arithmetic coding. These encoding methods require no additional processing in the decoder. Experiments confirmed that tiling artifacts were greatly reduced and that the coding process was considerably accelerated.

  • Variable Frame Skipping Scheme Based on Estimated Quality of Non-coded Frames at Decoder for Real-Time Video Coding

    Tien-Ying KUO  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2849-2856

    This paper proposes a block-based video encoder employing variable frame skipping (VFS) to improve video quality over low bit rate channels. The basic idea of VFS is to decide on and skip a suitable, non-fixed number of frames in the temporal domain to reduce bit usage; the saved bits can then be allocated to enhance the spatial quality of the video. Several frame-skipping decision methods have been proposed in the literature, but most consider only the similarity between neighboring coded frames as the decision criterion. Our method also takes into account the reconstruction of skipped frames by motion-compensated frame interpolation at the decoder. The proposed VFS models the reconstructed objective quality of a skipped frame and can therefore quickly estimate, at the encoder, the effect of skipping it. It determines suitable frame skipping in real time and yields encoded video with better spatio-temporal bit allocation.
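
    The decision principle can be illustrated with a minimal sketch. This is the general idea behind quality-driven frame skipping, not the paper's model: it uses zero-motion averaging instead of motion-compensated interpolation, and the 35 dB threshold is an arbitrary illustrative value.

```python
import math

# Sketch of a quality-driven frame-skip decision (illustrative only):
# skip a frame when the decoder's interpolated reconstruction of it
# would already be good enough.

def psnr(a, b, peak=255.0):
    """PSNR in dB between two equally sized pixel sequences."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float('inf') if mse == 0 else 10 * math.log10(peak * peak / mse)

def interpolate(prev, nxt):
    """Zero-motion temporal interpolation: pixel-wise average of the
    neighbouring coded frames (a stand-in for MC interpolation)."""
    return [(p + n) / 2.0 for p, n in zip(prev, nxt)]

def should_skip(prev, frame, nxt, threshold_db=35.0):
    """Skip `frame` if its estimated reconstruction quality is high."""
    return psnr(frame, interpolate(prev, nxt)) >= threshold_db
```

    The paper's contribution is precisely a fast model of this reconstruction quality, so the encoder does not have to run the decoder-side interpolation explicitly.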

  • Scale-Adaptive Face Detection and Tracking in Real Time with SSR Filters and Support Vector Machine

    Shinjiro KAWATO  Nobuji TETSUTANI  Kenichi HOSAKA  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    2857-2863

    In this paper, we propose a method for detecting and tracking faces in video sequences in real time over a wide range of face scales. Our basic strategy for detection is fast extraction of face candidates with a Six-Segmented Rectangular (SSR) filter, followed by face verification with a support vector machine. A motion cue is used in a simple way to avoid picking up false candidates in the background. In face tracking, the between-the-eyes pattern is tracked while the matching template is updated. To cope with various face scales, we use a series of successively scaled-down images, and an appropriate scale is selected according to the distance between the eyes. We tested our algorithm on 7146 video frames of a news broadcast featuring sign language at a 320×240 frame size, in which one or two persons appeared. Although gesturing hands often hid faces and interrupted tracking, 89% of faces were correctly tracked. We implemented the system on a PC with a Xeon 2.2-GHz CPU, running at 15 frames/second without any special hardware.
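
    The SSR filter idea can be sketched with an integral image, which makes each segment sum O(1). The 3x2 segment layout and the brightness conditions below (eye segments darker than the nose bridge and the cheeks) follow the common description of the filter; the paper's exact conditions and thresholds may differ.

```python
# Sketch of the Six-Segmented Rectangular (SSR) filter candidate test
# using an integral image (assumed layout; see lead-in).

def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in [x0, x1) x [y0, y1) via four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

def ssr_candidate(ii, x, y, sw, sh):
    """Test the SSR condition for a filter of sw x sh segments anchored
    at top-left corner (x, y). Segment layout:
        b1 b2 b3   (eye - bridge - eye)
        b4 b5 b6   (cheeks)
    A between-the-eyes candidate has dark b1/b3 (eyes) relative to the
    bridge (b2) and the cheeks below (b4/b6)."""
    b = [rect_sum(ii, x + c * sw, y + r * sh,
                  x + (c + 1) * sw, y + (r + 1) * sh)
         for r in range(2) for c in range(3)]
    b1, b2, b3, b4, _, b6 = b
    return b1 < b2 and b3 < b2 and b1 < b4 and b3 < b6
```

    Because the test is a handful of additions per position, the filter can be swept over every pixel and scale cheaply, leaving the expensive SVM verification for the few surviving candidates.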

  • Efficient Space-Leaping Using Optimal Block Sets

    Sukhyun LIM  Byeong-Seok SHIN  

     
    PAPER-Computer Graphics

      Page(s):
    2864-2870

    Several optimization techniques are available for improving the rendering speed of direct volume rendering. An acceleration method using a hierarchical min-max map requires little preprocessing and data storage while preserving image quality. However, this method introduces computational overhead because of unnecessary comparisons and level shifts between blocks. In this paper, we propose an efficient space-leaping method using optimally sized blocks. To determine the block size, our method partitions the image plane into several uniform grids and computes the minimum and maximum depth values for each grid. From these values we acquire optimal block sets suitable for individual rays. Experimental results show that our method reduces rendering time compared with the previous min-max octree method.

  • A Practical Approach to the Scheduling of Manufacturing System Using Fuzzy Optimization Technique

    Seung Kyu PARK  Kwang Bang WOO  

     
    LETTER-Computation and Computational Models

      Page(s):
    2871-2875

    This paper presents a fuzzy-optimization-based scheduling method for manufacturing systems with uncertain production capacities. To handle the uncertainties efficiently, the scheduling problem is defined using a fuzzy optimization technique. Based on the symmetric approach of fuzzy optimization and the Lagrangian relaxation technique, a practical fuzzy-optimization-based algorithm is developed. Computational experiments on real factory data demonstrate that the proposed method provides robust schedules that hedge against uncertainties.

  • Design and Evaluation of Hardware Pseudo-Random Number Generator MT19937

    Shiro KONUMA  Shuichi ICHIKAWA  

     
    LETTER-VLSI Systems

      Page(s):
    2876-2879

    MT19937 is a variant of the Mersenne Twister pseudo-random number generator. This study presents new designs for an MT19937 circuit suitable for custom computing machinery for high-performance scientific simulations. Our designs can generate multiple random numbers per cycle (multi-port designs). The estimated throughput of a 52-port design was 262 Gbps, 115 times higher than software on a Pentium 4 (2.53 GHz) processor. Multi-port designs proved more cost-effective than using multiple single-port designs. The initialization circuit can be included without performance loss in exchange for a slight increase in logic scale.
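
    For reference, the algorithm the circuit implements is compact. The sketch below is a software restatement of the standard MT19937 recurrence and tempering (constants from Matsumoto and Nishimura's reference implementation), not the paper's hardware design; the multi-port idea amounts to computing several of these state updates per clock cycle.

```python
# Software reference sketch of MT19937 (32-bit Mersenne Twister).

class MT19937:
    N, M = 624, 397
    MATRIX_A = 0x9908B0DF
    UPPER = 0x80000000   # most significant bit
    LOWER = 0x7FFFFFFF   # least significant 31 bits

    def __init__(self, seed=5489):
        # Knuth-style initialization of the 624-word state.
        self.mt = [0] * self.N
        self.mt[0] = seed & 0xFFFFFFFF
        for i in range(1, self.N):
            prev = self.mt[i - 1]
            self.mt[i] = (1812433253 * (prev ^ (prev >> 30)) + i) & 0xFFFFFFFF
        self.index = self.N

    def _generate(self):
        # Twist: refresh all 624 state words.
        for i in range(self.N):
            y = (self.mt[i] & self.UPPER) | (self.mt[(i + 1) % self.N] & self.LOWER)
            self.mt[i] = self.mt[(i + self.M) % self.N] ^ (y >> 1)
            if y & 1:
                self.mt[i] ^= self.MATRIX_A
        self.index = 0

    def next_u32(self):
        if self.index >= self.N:
            self._generate()
        y = self.mt[self.index]
        self.index += 1
        # Tempering improves equidistribution of the raw state word.
        y ^= y >> 11
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        y ^= y >> 18
        return y
```

    With the default seed 5489, the first output is 3499211612, which matches the reference implementation and can be used to validate a hardware port.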

  • Adaptive Clustering Technique Using Genetic Algorithms

    Nam Hyun PARK  Chang Wook AHN  Rudrapatna S. RAMAKRISHNA  

     
    LETTER-Data Mining

      Page(s):
    2880-2882

    This paper proposes a genetically inspired adaptive clustering algorithm for numerical and categorical data sets. To this end, a unique encoding method and fitness functions are developed. The algorithm automatically discovers the actual number of clusters and performs clustering efficiently without unduly compromising cluster purity. Moreover, it outperforms existing clustering algorithms.

  • Application-Level Causally Ordered Broadcast for Large-Scale Group Communication

    ChaYoung KIM  JinHo AHN  ChongSun HWANG  

     
    LETTER-Dependable Computing

      Page(s):
    2883-2889

    Gossip-based reliable broadcast protocols with reasonably weak reliability properties scale well to large groups and, compared with traditional protocols, degrade system performance gracefully even as node failure or message loss rates increase. However, although many distributed applications need only causally ordered delivery, which is more lightweight than totally ordered delivery, no existing gossip-based protocol offers it. This paper presents an application-level broadcast algorithm that guarantees causally ordered delivery semantics based on peer-to-peer interaction models, for scalability, reasonable reliability, and stable throughput. Processes propagate each message together with a vector timestamp, much like the spread of a rumor in society, for a fixed number of rounds. Upon receipt of these messages, correct processes deliver them to the application layer in causal order. Simulation results show that the proposed algorithm outperforms existing ones in terms of delivery throughput.
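
    The causal-delivery condition the letter builds on is standard and can be sketched independently of the gossip dissemination. Assuming each broadcast carries the sender's vector timestamp, a receiver delivers a message only once every causally preceding message has been delivered:

```python
# Sketch of vector-timestamp causal delivery (gossip layer omitted).

def can_deliver(msg_vt, sender, local_vt):
    """m from `sender` is deliverable iff it is the next message from
    that sender and all of m's causal dependencies are satisfied."""
    if msg_vt[sender] != local_vt[sender] + 1:
        return False
    return all(msg_vt[k] <= local_vt[k]
               for k in range(len(msg_vt)) if k != sender)

def try_deliver(pending, local_vt, delivered):
    """Deliver every pending message whose check passes, repeating
    until no more progress is possible (a delivery may unblock others)."""
    progress = True
    while progress:
        progress = False
        for m in list(pending):
            vt, sender, payload = m
            if can_deliver(vt, sender, local_vt):
                local_vt[sender] += 1
                delivered.append(payload)
                pending.remove(m)
                progress = True
```

    For example, if a process receives a reply before the message it answers, the reply is buffered in `pending` until the earlier message arrives, and both are then delivered in causal order.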

  • Verifiable Oblivious Transfer Protocol

    Narn-Yih LEE  Chien-Chih WANG  

     
    LETTER-Application Information Security

      Page(s):
    2890-2892

    Oblivious Transfer (OT), introduced by Rabin in 1981, has become an important and fundamental cryptographic technique. An OT protocol should have two properties: sender privacy and chooser privacy. The sender is the party who delivers a secret to the chooser; the chooser is the party who acts as receiver and learns some information about the sender's input. The chooser learns certain information about the sender's input, while the sender must not learn what the chooser has learned; moreover, the chooser cannot acquire any messages that he/she did not choose. Naor and Pinkas have proposed an efficient oblivious transfer protocol (EOT) that implements a 1-out-of-n protocol, but this EOT has a flaw: it cannot withstand the "same message attack." In this paper, we improve the Naor-Pinkas EOT to make it resistant to this attack.
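
    To make the primitive concrete, here is a toy 1-out-of-2 OT in the classic Even-Goldreich-Lempel RSA style. This is an illustration of the OT functionality only, not the Naor-Pinkas EOT discussed in the letter, and the tiny key makes it completely insecure.

```python
# Toy 1-out-of-2 oblivious transfer (Even-Goldreich-Lempel style).
# Illustration only: the RSA modulus is far too small to be secure.
import random

def toy_ot(m0, m1, b, rng=random.Random(0)):
    # Sender: tiny RSA key (p, q, e are fixed for the demo).
    p, q, e = 61, 53, 17
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                      # private exponent
    x0, x1 = rng.randrange(n), rng.randrange(n)  # sent to chooser
    # Chooser: blind x_b with a random k; b stays hidden from the sender.
    k = rng.randrange(n)
    v = ((x0, x1)[b] + pow(k, e, n)) % n     # sent to sender
    # Sender: derive both candidate keys; exactly one equals k, but the
    # sender cannot tell which.
    k0 = pow((v - x0) % n, d, n)
    k1 = pow((v - x1) % n, d, n)
    c0, c1 = (m0 + k0) % n, (m1 + k1) % n    # sent to chooser
    # Chooser: only the chosen ciphertext unblinds correctly.
    return ((c0, c1)[b] - k) % n
```

    The chooser recovers exactly one of the two messages, and the sender never learns which one, which is the pair of privacy properties the abstract describes.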

  • High Quality and Low Complexity Speech Analysis/Synthesis Based on Sinusoidal Representation

    Jianguo TAN  Wenjun ZHANG  Peilin LIU  

     
    LETTER-Speech and Hearing

      Page(s):
    2893-2896

    Sinusoidal representation has been widely applied to speech modification and to low bit rate speech and audio coding. Usually, a speech signal is analyzed and synthesized using the overlap-add algorithm or the peak-picking algorithm; however, the overlap-add algorithm is well known for its high computational complexity, and the peak-picking algorithm cannot track transient and syllabic variation well. In this letter, ideas from both algorithms are applied to speech analysis/synthesis. Peaks are picked in the power spectral density curve of the speech signal, and the corresponding frequencies are arranged in descending order of their power spectral densities. These frequencies are taken as candidate frequencies, and the corresponding amplitudes and initial phases are determined according to the least mean square error criterion. The sum of the extracted sinusoidal components successively approximates the original speech signal. The results show that the proposed algorithm tracks transient and syllabic variation and attains good synthesized speech with low computational complexity.
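
    The analysis/synthesis pipeline can be sketched in a simplified form: pick peaks in the power spectrum, read each picked component's amplitude and initial phase off its DFT coefficient, and synthesize the sum of sinusoids. This omits the letter's least-mean-square refinement and successive-approximation loop, and the naive DFT is for clarity, not speed.

```python
import cmath, math

# Toy sinusoidal analysis/synthesis: peak picking on the power
# spectrum, then resynthesis as a sum of cosines.

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def pick_peaks(spec, count):
    """Indices of the `count` largest-magnitude bins in 1..n/2-1."""
    half = len(spec) // 2
    bins = sorted(range(1, half), key=lambda k: -abs(spec[k]))
    return sorted(bins[:count])

def analyze(x, count):
    """Extract (normalized frequency, amplitude, phase) per picked peak."""
    spec = dft(x)
    n = len(x)
    params = []
    for k in pick_peaks(spec, count):
        amp = 2 * abs(spec[k]) / n        # exact for on-bin sinusoids
        phase = cmath.phase(spec[k])
        params.append((k / n, amp, phase))
    return params

def synthesize(params, n):
    """Sum of cosines with the extracted amplitudes and phases."""
    return [sum(a * math.cos(2 * math.pi * f * t + p)
                for f, a, p in params) for t in range(n)]
```

    For a signal made of on-bin sinusoids this round trip is exact up to floating-point error; real speech needs the letter's refinements because components fall between bins and change over time.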