IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4

Volume E86-D No.10  (Publication Date:2003/10/01)

    Special Issue on Development of Advanced Computer Systems
  • FOREWORD

    Shinji TOMITA   

     
    FOREWORD

      Page(s):
    1945-1946
  • The Development of the Earth Simulator

    Shinichi HABATA  Mitsuo YOKOKAWA  Shigemune KITAWAKI  

     
    INVITED PAPER

      Page(s):
    1947-1954

    The Earth Simulator (ES), developed under the Japanese government's "Earth Simulator project" initiative, is a highly parallel vector supercomputer system. In May 2002, the ES was proven to be the most powerful computer in the world by achieving 35.86 teraflops on the LINPACK benchmark and 26.58 teraflops for a global atmospheric circulation model with the spectral method. Three architectural features enabled these achievements: vector processors, shared memory, and a high-bandwidth, non-blocking crossbar interconnection network. This paper gives an overview of the ES, describes the three architectural features and the results of performance evaluation, and pays particular attention to the hardware realization of the interconnection among the 640 processor nodes.
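    To put the quoted LINPACK figure in context, a quick back-of-the-envelope check against the ES's publicly documented configuration (640 nodes of 8 vector processors at 8 GFLOPS peak each; these figures come from public ES documentation, not from this abstract):

```python
# Peak performance and LINPACK efficiency of the Earth Simulator,
# using configuration figures from public ES documentation.
nodes, procs_per_node, gflops_per_proc = 640, 8, 8
peak_tflops = nodes * procs_per_node * gflops_per_proc / 1000
print(f"peak: {peak_tflops} TFLOPS")                      # 40.96 TFLOPS
print(f"LINPACK efficiency: {35.86 / peak_tflops:.1%}")   # ~87.5%
```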

  • Design Development of SPARC64 V Microprocessor

    Mariko SAKAMOTO  Akira KATSUNO  Aiichiro INOUE  Takeo ASAKAWA  Kuniki MORITA  Tsuyoshi MOTOKURUMADA  Yasunori KIMURA  

     
    INVITED PAPER

      Page(s):
    1955-1965

    We developed a SPARC-V9 processor, the SPARC64 V. It operates at 1.35 GHz and contains 191 million transistors, fabricated using 0.13-µm CMOS technology with eight-layer copper metallization. Its SPECjbb2000 result (CPU# 32) is 492,683, the highest on the market and 42% higher than that of the next highest system. Its SPEC CPU2000 performance is 858 for SPECint and 1228 for SPECfp. The processor is designed to provide the high system performance and high reliability required of enterprise server systems, and it is also designed to address the performance requirements of high-performance computing. During our development of several generations of mainframe processors, we conducted many related experiments and obtained enterprise server system (EPS) development skills, an understanding of EPS workload characteristics, and technology that provides high reliability, availability, and serviceability. We used these as the basis for developing the new processor; the approach quite effectively bridges the differences between mainframe and SPARC systems. At the beginning of development, before the start of hardware design, we built a software performance simulator so that we could understand the performance impact of proposed specifications, enabling us to make appropriate decisions about the hardware design. We took this approach to solve performance problems before tape-out and to avoid spending additional time on design updates and physical machine reconstruction. We were successful, completing the high-performance processor development on schedule and in a short time. This paper describes the SPARC64 V microprocessor and the performance analyses used in developing its design.

  • A High-Performance Tree-Block Pipelining Architecture for Separable 2-D Inverse Discrete Wavelet Transform

    Yeu-Horng SHIAU  Jer Min JOU  

     
    PAPER

      Page(s):
    1966-1975

    In this paper, a high-performance pipelining architecture for the 2-D inverse discrete wavelet transform (IDWT) is proposed. We use a tree-block pipeline-scheduling scheme to increase computation performance and reduce temporary buffers. The scheme divides the input subbands into several wavelet blocks and processes these blocks one by one, so the size of the buffers for storing temporary subbands is greatly reduced. After scheduling the data flow, we fold the computations of all wavelet blocks into the same low-pass and high-pass filters to achieve higher hardware utilization and minimize hardware cost, and we pipeline these two filters efficiently to reach a higher throughput rate. For an N×N-sample 2-D IDWT with filter length K, our architecture takes at most (2/3)N² cycles and requires 2N(K-2) registers. In addition, each filter is designed regularly and modularly, so it is easily scalable to different filter lengths and different levels. Because of its small storage, regularity, and high performance, the architecture can be applied to time-critical image decompression.
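    The two cost formulas quoted above are easy to evaluate for concrete sizes; a minimal sketch (the function name and example sizes are illustrative only):

```python
# Cycle and register bounds stated in the abstract for an N x N
# 2-D IDWT with filter length K.
def idwt_costs(N: int, K: int) -> dict:
    return {
        "cycles": 2 * N * N // 3,      # at most (2/3) * N^2 cycles
        "registers": 2 * N * (K - 2),  # 2N(K-2) temporary registers
    }

# Example: a 512 x 512 image with a 9-tap filter.
print(idwt_costs(512, 9))  # {'cycles': 174762, 'registers': 7168}
```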

  • A Low Power Baseband Processor for a Portable Dual Mode DECT/GSM Terminal

    Christos DROSOS  Chrissavgi DRE  Spyridon BLIONAS  Dimitrios SOUDRIS  

     
    PAPER

      Page(s):
    1976-1986

    The architecture and implementation of a novel processor suitable for wireless terminal applications is introduced. The wireless terminal is based on a novel dual-mode baseband processor for DECT and GSM, which supports both heterodyne and direct-conversion terminal architectures and is capable of undertaking all baseband signal processing, together with an innovative direct-conversion low-power modulator/demodulator for DECT and GSM. State-of-the-art design methodologies for embedded applications and innovative low-power design steps were followed to arrive at a single-chip solution. The performance of the implemented dual-mode direct-conversion wireless terminal was tested and measured for compliance with the standards. The developed terminal fulfils all the requirements and specifications imposed by the DECT and GSM standards.

  • Architecture and Evaluation of a Third-Generation RHiNET Switch for High-Performance Parallel Computing

    Hiroaki NISHI  Shinji NISHIMURA  Katsuyoshi HARASAWA  Tomohiro KUDOH  Hideharu AMANO  

     
    PAPER

      Page(s):
    1987-1995

    RHiNET-3/SW is the third-generation switch used in the RHiNET-3 system. It provides both low-latency processing and flexible connection due to its use of a credit-based flow-control mechanism, topology-free routing, and deadlock-free routing. The aggregate throughput of RHiNET-3/SW is 80 Gbps, and the latency is 140 ns. RHiNET-3/SW also provides a hop-by-hop retransmission mechanism. Simulation demonstrated that the effective throughput at a node in a 64-node torus RHiNET-3 system is equivalent to the effective throughput of a 64-bit 33-MHz PCI bus and that the performance of RHiNET-3/SW almost equals or exceeds the best performance of RHiNET-2/SW, the second-generation switch. Although credit-based flow control requires 26% more gates than rate-based flow control to manage the virtual channels (VCs), it requires less VC memory than rate-based flow control. Moreover, its use in a network system reduces latency and increases the maximum throughput compared to rate-based flow control.
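    A minimal sketch of the credit-based flow control the abstract credits for the switch's behavior (buffer sizes and class names are illustrative; the real switch implements this per virtual channel in hardware):

```python
from collections import deque

class CreditedLink:
    """Sender transmits only while it holds credits; the receiver
    returns one credit for each buffer slot it frees."""

    def __init__(self, buffer_slots: int = 8):
        self.credits = buffer_slots   # one credit per receive-buffer slot
        self.rx_buffer = deque()

    def send(self, packet) -> bool:
        if self.credits == 0:
            return False              # back-pressure: downstream buffer full
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def receive(self):
        packet = self.rx_buffer.popleft()
        self.credits += 1             # credit flows back to the sender
        return packet

link = CreditedLink(buffer_slots=2)
print(link.send("p1"), link.send("p2"), link.send("p3"))  # True True False
link.receive()
print(link.send("p3"))                                    # True again
```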

  • Performance Evaluation of Instruction Set Architecture of MBP-Light in JUMP-1

    Noriaki SUZUKI  Hideharu AMANO  

     
    PAPER

      Page(s):
    1996-2005

    The instruction set architecture of MBP-light, a dedicated processor for the DSM (Distributed Shared Memory) management of JUMP-1, is analyzed on a real prototype. The Buffer-Register Architecture proposed for the MBP-core improves performance by 5.64% in the home cluster and 6.27% in a remote cluster. Only the special instruction for hashing cluster addresses is effective, improving performance by 2.80%; the other special instructions are almost useless. It appears that the dominant operations in the DSM management program are those handling the packet queues assigned to the local cluster. Thus, common RISC instructions, especially load/store instructions, are used frequently. Separating instruction and data memory improves performance by 33%. The results suggest that an alternative providing separate on-chip caches and instructions dedicated to packet-queue management would be advantageous.

  • ReVolver/C40: A Scalable Parallel Computer for Volume Rendering--Design and Implementation--

    Shin-ichiro MORI  Tomoaki TSUMURA  Masahiro GOSHIMA  Yasuhiko NAKASHIMA  Hiroshi NAKASHIMA  Shinji TOMITA  

     
    PAPER

      Page(s):
    2006-2015

    This paper describes the architecture of ReVolver/C40, a scalable parallel machine for volume rendering, and its prototype implementation. The most important feature of ReVolver/C40 is view-independent real-time rendering of translucent 3D objects using perspective projection. To realize this feature, the authors propose a parallel volume memory architecture based on the principal-axis-oriented sampling method and parallel treble volume memory. This paper also discusses the implementation issues of ReVolver/C40, explaining the various kinds of parallelism exploited to achieve high-performance rendering. Prototype systems were developed, and their performance evaluation results are presented. Based on the evaluation of the prototypes, ReVolver/C40 with 32 parallel volume memories is estimated to achieve more than 10 frames per second for 256³ volume data on a 256² screen using perspective projection. The authors also review the development of ReVolver/C40 from several viewpoints.

  • REX: A Reconfigurable Experimental System for Evaluating Parallel Computer Systems

    Yuetsu KODAMA  Toshihiro KATASHITA  Kenji SAYANO  

     
    PAPER

      Page(s):
    2016-2024

    REX is a reconfigurable experimental system for evaluating and developing parallel computer systems. It consists of large-scale FPGAs and enables systems to be reconfigured from their processors down to the network topology in order to support their evaluation and development. We evaluated REX using several implementations of parallel computer systems and showed that it has sufficient scalability in gates, memory throughput, and network throughput. We also showed that REX is an effective tool for developing systems because of its emulation speed and reconfigurability.

  • Pot: A General Purpose Monitor for Parallel Computers

    Yuso KANAMORI  Oki MINABE  Masaki WAKABAYASHI  Hideharu AMANO  

     
    PAPER

      Page(s):
    2025-2033

    At the initial stage of developing parallel machines, a software monitor that manages communication with host computers, program loading, and debugging is necessary. However, developing such a monitoring system is often a cumbersome job, especially when the target takes a parallel architecture. To solve this problem, we developed an integrated monitor system called "Pot". "Pot" consists of a system that runs on the host computer and simple code on the target machine. In order to reduce development costs, the program on the target machine is kept as simple as possible, while "Pot" on the host computer provides various functions for system development.

  • Software TLB Management for Embedded Systems

    Yukikazu NAKAMOTO  

     
    LETTER

      Page(s):
    2034-2039

    Virtual memory functions provided by real-time operating systems have come into use in embedded systems. Recent RISC processors support virtual memory through a software-managed Translation Lookaside Buffer (TLB). For the real-time aspects of embedded systems, management of the TLB entries matters most, because the overhead of a TLB miss strongly affects the overall performance of the system. In this paper, we propose several TLB management algorithms for MIPS processors. In these algorithms, the TLB entry to be replaced is either chosen at random or explicitly managed. We analyze the algorithms by comparing their overheads at task-switching and TLB-miss times.
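    A toy model of the random-replacement policy the letter analyzes (MIPS exposes a hardware Random register for this; the sketch below just uses Python's PRNG, and the entry format and sizes are illustrative):

```python
import random

class SoftwareTLB:
    def __init__(self, entries: int = 48):          # R4000-class TLB size
        self.slots = [None] * entries               # (vpn, pfn) pairs
        self.misses = 0

    def translate(self, vpn: int, page_table: dict) -> int:
        for slot in self.slots:
            if slot is not None and slot[0] == vpn:
                return slot[1]                      # TLB hit
        self.misses += 1                            # miss: run refill handler
        victim = random.randrange(len(self.slots))  # random victim entry
        self.slots[victim] = (vpn, page_table[vpn])
        return page_table[vpn]

tlb = SoftwareTLB()
page_table = {vpn: vpn + 0x1000 for vpn in range(256)}
for vpn in [1, 2, 1, 3, 1]:
    tlb.translate(vpn, page_table)
print("misses:", tlb.misses)   # counts refill-handler invocations
```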

  • An Integrated Approach for Implementing Imprecise Computations

    Hidenori KOBAYASHI  Nobuyuki YAMASAKI  

     
    PAPER

      Page(s):
    2040-2048

    The imprecise computation model is one of the flexible computation models used to construct real-time systems. It is especially useful when the worst-case execution times are difficult to estimate or the execution times vary widely. Although there are several ways to implement this model, they have not attracted much attention from real-world application programmers to date due to their unrealistic assumptions and high dependency on the execution environment. In this paper, we present an integrated approach for implementing the imprecise computation model. In particular, our research covers three aspects. First, we present a new imprecise computation model that consists of a mandatory part, an optional part, and another mandatory part called the wind-up part. The wind-up part allows application programmers to explicitly incorporate into their programs the exact operations needed for safe degradation of performance when there is a shortage of resources. Second, we describe a scheduling algorithm called Mandatory-First with Wind-up Part (M-FWP), which is based on the Earliest Deadline First strategy. This algorithm, unlike scheduling algorithms developed for the classical imprecise computation model, is capable of scheduling a mandatory portion after an optional portion. Third, we present a dynamic priority server method for an efficient implementation of the M-FWP algorithm. We also show that at most one such server is needed per node. In order to estimate the performance of the proposed approach, we have implemented a real-time operating system called RT-Frontier. The experimental analyses have proven its ability to implement tasks based on the imprecise computation model without requiring any knowledge of the execution time of the optional part. Moreover, it also showed a performance gain over the traditional checkpointing technique.
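    A deliberately simplified, single-task illustration of the mandatory/optional/wind-up structure (the slack rule below is an editor's assumption for illustration, not the authors' M-FWP algorithm, which schedules multiple tasks under EDF):

```python
def run_imprecise_task(now, deadline, c_mand, c_windup, optional_step):
    """Run the mandatory part, refine optionally while slack remains,
    and always leave room for the wind-up part."""
    t = now + c_mand                   # mandatory part always runs
    iterations = 0
    # Optional refinement runs only while the wind-up part still fits.
    while t + optional_step + c_windup <= deadline:
        t += optional_step
        iterations += 1
    t += c_windup                      # wind-up: safe degradation steps
    assert t <= deadline
    return t, iterations

print(run_imprecise_task(now=0, deadline=100, c_mand=30,
                         c_windup=10, optional_step=7))
# (96, 8): eight refinement steps fit before the wind-up must start
```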

  • Normalizing Syntactic Structures Using Part-of-Speech Tags and Binary Rules

    Seongyong KIM  Kong-Joo LEE  Key-Sun CHOI  

     
    PAPER

      Page(s):
    2049-2056

    We propose a normalization scheme for syntactic structures using a binary phrase structure grammar with composite labels. The normalization adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of its sub-trees, so that it can represent the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labelling algorithm. Since the proposed normalization scheme is binary and uses only part-of-speech information, it can readily be used to compare the results of different syntactic analyses independently of their syntactic descriptions and can be applied to other languages as well. It can also be used for syntactic analysis, where it performs better than the previous syntactic description for a Korean corpus. We implement a tool that transforms a syntactic description into a normalized one based on the proposed scheme. It can help construct a unified syntactic corpus and extract syntactic information from various types of syntactic corpora in a uniform way.
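    A schematic of the basic transformation: an n-ary constituent is rebinarized, and every binary node receives a composite label built from one attribute of each child. The left-branching order and the take-the-first-tag labelling rule below are placeholders for the paper's automatic labelling algorithm:

```python
def binarize(children):
    """Left-branching binarization with composite labels.
    A leaf is a (pos_tag, word) pair."""
    tree = children[0]
    for right in children[1:]:
        left_attr = tree[0].split("+")[0]    # one attribute per sub-tree
        right_attr = right[0].split("+")[0]
        tree = (f"{left_attr}+{right_attr}", tree, right)
    return tree

print(binarize([("DET", "the"), ("ADJ", "old"), ("NOUN", "man")]))
# ('DET+NOUN', ('DET+ADJ', ('DET', 'the'), ('ADJ', 'old')), ('NOUN', 'man'))
```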

  • IEICE/IEEE Joint Special Issue on Assurance Systems and Networks
  • FOREWORD

    Yoshiaki KAKUDA  

     
    FOREWORD

      Page(s):
    2057-2058
  • Railways and Space -- Their Assurance System

    Shuichiro YAMANOUCHI  

     
    INVITED PAPER

      Page(s):
    2063-2069

    Assurance improvement is the most challenging task in both the railway and space areas, even though the technologies they require are different. The railways have achieved an extremely high assurance level through their long history and experience, while the space area has not accumulated comparable experience and, in addition, requires highly advanced technology. As a result, the latter places more emphasis on theoretical analyses for ensuring assurance. This paper introduces the two different approaches of the railway and space areas toward assurance and compares the two methods.

  • JR East Contact-less IC Card Automatic Fare Collection System "Suica"

    Yasutomo SHIRAKAWA  Akio SHIIBASHI  

     
    INVITED PAPER

      Page(s):
    2070-2076

    Suica is the nickname of our contact-less IC card: the Super Urban Intelligent CArd. There are two types of IC card: the Suica IO (SF) Card and the Suica Commuter Pass, which combines the functions of a stored-fare card and a commuter pass. There were 6.54 million Suica holders (about 3.33 million Suica Season Pass holders and 3.21 million Suica IO Card holders) as of June 16, 2003.

  • QoS Certification of Real-Time Distributed Computing Systems: Issues and Promising Approaches

    K.H. (Kane) KIM  

     
    INVITED PAPER

      Page(s):
    2077-2086

    The general public is expected to demand, in the not-too-distant future, more stringent certification procedures for the computing parts of traditional and new-generation safety-critical application systems. Such quality-of-service (QoS) certification processes will not and cannot rely solely on the testing approach. Design-time guaranteeing of the timely service capabilities of various subsystems is an inevitable part of such processes. Although some promising developments in this area have occurred in recent years, the technological challenges yet to be overcome are enormous. This paper is a summary of the author's perspective on the remaining challenges and promising directions for tackling them.

  • Autonomous Integration and Optimal Allocation of Heterogeneous Information Services for High-Assurance in Distributed Information Service System

    Xiaodong LU  Kinji MORI  

     
    PAPER-Agent-Based Systems

      Page(s):
    2087-2094

    Information service provision and utilization is an important infrastructure in high-assurance distributed information service systems. In order to cope with the rapidly evolving situation of providers' and users' heterogeneous requirements, an autonomous information service system called the Faded Information Field (FIF) has been proposed. FIF is a distributed information service system architecture, sustained by push/pull mobile agents, that recursively provides the most popular information closer to the users in a demand-oriented fashion, making a tradeoff between the costs of service allocation and access. In this system, users' requests are autonomously driven by pull mobile agents in charge of finding the relevant service. In the case of a mono-service request, the system is designed to reduce the time needed for users to access the information and to preserve the consistency of the replicas. However, when the user requests joint selection of multiple services, synchronization of atomic actions and timeliness have to be assured by the system. In this paper, the relationship that exists among the contents, properties, and access ratios of information services is clarified. Based on these factors, the ratio of correlation and the degree of satisfaction are defined, and the autonomous integration and optimal allocation of information services for heterogeneous FIFs to provide one-stop service for users' multi-service requirements are proposed. The effectiveness of the proposed technology is shown through evaluation, and the results show that the integrated services can reduce the total user access time and increase service consumption compared with separate systems.

  • Assuring Interoperability of Heterogeneous Multi-Agent Systems

    Hiroki SUGURI  Eiichiro KODAMA  Masatoshi MIYAZAKI  

     
    PAPER-Agent-Based Systems

      Page(s):
    2095-2103

    In order for agent-based applications to be truly autonomous and decentralized, heterogeneous multi-agent systems themselves must communicate and interoperate with each other. To solve this problem, we take a two-step approach. First, message-level interoperability is realized by a gateway agent that interconnects heterogeneous multi-agent systems. Second, higher-level interoperation of conversations, which consist of bi-directional streams of messages, is achieved by dynamically negotiating the interaction protocols. We demonstrate the concept, technique, and implementation of integrating multi-agent systems and show how the method improves the assurance of real-world applications in autonomous decentralized systems.

  • Loosely-Consistency Management Technology in Distributed Database Systems for Assurance

    Carlos PEREZ LEGUIZAMO  Dake WANG  Kinji MORI  

     
    PAPER-Agent-Based Systems

      Page(s):
    2104-2113

    To meet the highly competitive and dynamic needs of the market, an e-Business company needs to flexibly integrate its heterogeneous database systems, e.g., integrating makers and retailers in a Supply Chain Management (SCM) system. Customers demand one-click response, and their access requirements change frequently. Moreover, the different retailers and makers in an SCM, being autonomous entities, have their own specific requirements for stock cost and opportunity loss, depending on their local situations, which also change with time. Against this background, the integrated DBs of the SCM are required to provide real-time response, satisfaction of heterogeneous requirements, and flexibility to adapt to changing requirements. The conventional approach of strict consistency leads to slow response and low flexibility due to the strong interdependence of the systems. In this paper, the Autonomous Decentralized Database System is proposed as an application-oriented database technology based on the concept of autonomy and loose consistency among the distributed DB systems, thus providing real-time response, flexibility, and high availability. Autonomy in the system is achieved by defining a data attribute, the Allowable Volume (AV), within which each component DB has the autonomy to update the data in real time. Moreover, the system adapts to the dynamically changing heterogeneous access requirements at each DB by managing the distribution of the AV among the different DBs through an active coordination mechanism. Because of the dynamic and unpredictable environment, the component DBs are given complete autonomy in their local and coordination decisions, thus diminishing interdependency and improving response time. As the system consists of loosely connected subsystems, it also has high availability. Therefore, the proposed system provides a highly decentralized architecture with flexibility and high availability. Simulation of an Internet-based SCM system shows the system to be significantly effective in terms of communication cost and response time.
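    A toy rendering of the Allowable Volume idea: a coordinator splits a shared stock total among site databases, and each site then commits orders locally, without coordination, while its AV lasts. The proportional split and the numbers are illustrative, not the paper's coordination mechanism:

```python
def distribute_av(total_stock: int, demand: dict) -> dict:
    """Split the shared total among sites in proportion to demand."""
    total_demand = sum(demand.values())
    return {site: total_stock * d // total_demand for site, d in demand.items()}

class SiteDB:
    def __init__(self, av: int):
        self.av = av                 # units this site may sell autonomously

    def sell(self, qty: int) -> bool:
        if qty > self.av:
            return False             # AV exhausted: would need re-coordination
        self.av -= qty               # local, real-time commit
        return True

av = distribute_av(100, {"tokyo": 60, "osaka": 40})
tokyo = SiteDB(av["tokyo"])
print(av, tokyo.sell(50), tokyo.sell(20))
# {'tokyo': 60, 'osaka': 40} True False
```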

  • Testing for High Assurance System by FSM

    Juichi TAKAHASHI  Yoshiaki KAKUDA  

     
    PAPER-Testing

      Page(s):
    2114-2120

    Software and its systems are more complicated than they were a decade ago, and they are used for mission-critical business, flight control, and so on, which often require high assurance. In these circumstances, black-box testing is often used. The problem is that black-box testing yields an empirical rather than a numerical measure of the testing result. Thus, in this research, we develop and enhance an FSM (Finite State Machine) testing method that can produce the code coverage rate as a numerical value. Our FSM testing by code coverage focuses not only on software system behavior but also on data. We found that this method yields a higher code coverage rate, which indicates the quality of the system, than existing black-box testing methods.
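    The core move is to model the system under test as a finite state machine and generate input sequences that exercise every transition, so coverage becomes a number rather than an impression. The vending-machine FSM and the greedy tour below are made-up illustrations (the walk assumes the FSM is strongly connected):

```python
FSM = {  # state -> {input: next_state}
    "idle":    {"coin": "paid"},
    "paid":    {"select": "vending", "refund": "idle"},
    "vending": {"done": "idle"},
}

def transition_tour(start="idle"):
    """Greedy walk that continues until every (state, input) edge
    has been exercised at least once."""
    all_edges = {(s, i) for s, outs in FSM.items() for i in outs}
    covered, path, state = set(), [], start
    while covered != all_edges:
        untried = [i for i in FSM[state] if (state, i) not in covered]
        inp = untried[0] if untried else next(iter(FSM[state]))
        covered.add((state, inp))
        path.append((state, inp))
        state = FSM[state][inp]
    return path, len(covered) / len(all_edges)

path, coverage = transition_tour()
print(path)
print(f"transition coverage: {coverage:.0%}")   # 100%
```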

  • The Theory of Software Reliability Corroboration

    Bojan CUKIC  Erdogan GUNEL  Harshinder SINGH  Lan GUO  

     
    PAPER-Testing

      Page(s):
    2121-2129

    Software certification is a notoriously difficult problem. From a software reliability engineering perspective, the certification process must provide evidence that the program meets or exceeds the required level of reliability. When certifying the reliability of a high assurance system, very few failures, if any, are observed by testing. In statistical estimation theory, the probability of an event is estimated by determining the proportion of times it occurs in a fixed number of trials. In the absence of failures, the number of required certification tests becomes impractically large. We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with the hypothesized probability. The presumed reliability needs to be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis increases, we change the degree of belief in the hypothesis. Depending on the corroboration evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required system certification tests, even for high assurance systems so far considered impossible to certify.
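    The impracticality argument can be made concrete with the classical demonstration-test bound: to show a failure probability of at most p with confidence C using only failure-free runs, one needs n ≥ ln(1-C)/ln(1-p) tests. This standard formula is the baseline the paper's corroboration approach is meant to relax; the numbers below are illustrative:

```python
import math

def failure_free_tests(p: float, confidence: float) -> int:
    """Failure-free runs needed to demonstrate failure prob <= p."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (1e-3, 1e-6, 1e-9):
    print(f"p = {p:g}: {failure_free_tests(p, 0.99):,} tests")
# p = 0.001: 4,603 tests
# p = 1e-06: 4,605,168 tests
# p = 1e-09: ~4.6 billion tests -- impractical, as the abstract argues
```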

  • Scenario-Based Web Services Testing with Distributed Agents

    Wei-Tek TSAI  Ray PAUL  Lian YU  Akihiro SAIMI  Zhibin CAO  

     
    PAPER-Testing

      Page(s):
    2130-2144

    Web Services (WS) have received significant attention recently. Delivering Quality of Service (QoS) on the Internet is a critical and significant challenge for the WS community. This article proposes a Web Services Testing Framework (WSTF) for WS participants to perform WS testing. WSTF provides three main distributed components: a test master, test agents, and a test monitor. The test master manages scenarios and generates test scripts, and it initiates WS testing by sending the test scripts to the test agents. The test agents dynamically bind to and invoke the WS. Test monitors capture the synchronous/asynchronous messages sent and received, attach timestamps, and trace state-change information. The benefit of using WSTF is that the user only needs to specify system scenarios based on the system requirements, without needing to write test code. To validate the proposed approach, this paper used the framework to test a supply-chain system implemented using WS.

  • Autonomous Step-by-Step System Construction Technique Based on Assurance Evaluation

    Kazuo KERA  Keisuke BEKKI  Kinji MORI  

     
    PAPER-Reliability and Availability

      Page(s):
    2145-2153

    Recent real-time systems need to be expandable with heterogeneous functions and operations, and high assurance is very important for such systems. In order to realize high assurance systems, we study an autonomous step-by-step construction technique based on assurance evaluation. In this paper we propose the average functional reliability as the best index of assurance performance for system construction. We also propose an autonomous step-by-step construction technique that decides the construction sequence so as to maximize assurance performance.

  • Scalable Evolution of Highly Available Systems

    Jason O. HALLSTROM  William M. LEAL  Anish ARORA  

     
    PAPER-Reliability and Availability

      Page(s):
    2154-2164

    The demand for highly available software systems has increased dramatically over the past several years. Such systems must be developed using a discipline that supports unanticipated runtime evolution. We characterize the desiderata of a programming model that provides such support, and describe the design and implementation of an architecture satisfying these criteria. The Dynamic Reconfiguration Subsystem (DRSS) is an interceptor-based open container architecture that supports the development of highly available systems by enabling the scalable, dynamic deployment of cross-cutting software modifications. We have implemented a prototype of DRSS using Microsoft's .NET Framework.
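    The interceptor idea at the heart of such containers can be sketched generically: calls pass through a chain of interceptors that can grow while the system runs, which is how cross-cutting modifications are deployed without a restart. This is a generic pattern sketch in Python, not DRSS's .NET implementation:

```python
class Container:
    """Open container: component calls pass through interceptors."""
    def __init__(self, component):
        self.component = component
        self.interceptors = []          # may grow at run time

    def deploy(self, interceptor):
        self.interceptors.append(interceptor)   # hot-deployed change

    def invoke(self, *args):
        call = self.component
        for wrap in reversed(self.interceptors):
            call = wrap(call)           # wrap the call chain
        return call(*args)

def logging(next_call):                 # a cross-cutting concern
    def wrapper(x):
        print("call with", x)
        return next_call(x)
    return wrapper

c = Container(lambda x: x * 2)
print(c.invoke(21))                     # 42, no interceptors yet
c.deploy(logging)                       # added without stopping the system
print(c.invoke(21))                     # logs, then returns 42
```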

  • Fine-Grained Shock Models to Rejuvenate Software Systems

    Hiroki FUJIO  Hiroyuki OKAMURA  Tadashi DOHI  

     
    LETTER

      Page(s):
    2165-2171

    Software rejuvenation is a proactive fault management technique for operational software systems that age due to error conditions accruing with time and/or load, and it is important in high assurance systems design. In this paper, fine-grained shock models are developed to determine the optimal rejuvenation policies that maximize system availability. We introduce three kinds of rejuvenation schemes and calculate the optimal software rejuvenation schedules maximizing system availability for the respective schemes. The stochastic models with the three rejuvenation policies are extensions of Bobbio et al. (1998, 2001) and represent the failure phenomenon due to the exhaustion of software resources caused by memory leaks, fragmentation, etc. Numerical examples are devoted to comparing the three control schemes quantitatively.
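    The optimization being solved can be sketched numerically: pick the rejuvenation interval T that maximizes steady-state availability, trading planned rejuvenation downtime against the longer repair after an age-related crash. The Weibull aging law and downtime constants below are stand-ins chosen only to make the trade-off visible, not the paper's shock models:

```python
import math

BETA, ETA = 2.0, 100.0          # Weibull shape/scale: failure rate grows with age
T_REJUV, T_REPAIR = 1.0, 10.0   # downtime: planned rejuvenation vs. crash repair

def survival(t):                # P(no failure by age t)
    return math.exp(-((t / ETA) ** BETA))

def availability(T, dt=0.01):
    # Expected up-time per cycle = integral of survival(t) over [0, T].
    up = sum(survival(k * dt) * dt for k in range(int(T / dt)))
    p_fail = 1.0 - survival(T)  # crashed before the scheduled rejuvenation
    down = p_fail * T_REPAIR + (1.0 - p_fail) * T_REJUV
    return up / (up + down)

best = max(range(5, 200, 5), key=availability)
print(f"best interval ~ {best}, availability ~ {availability(best):.4f}")
```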

  • Regular Section
  • Topological Properties of Bi-Rotator Graphs

    Hon-Ren LIN  Chiun-Chieh HSU  

     
    PAPER-Theory/Models of Computation

      Page(s):
    2172-2178

    This paper presents a new interconnection network called the bi-rotator graph, which is derived from the rotator graph. The rotator graph has many unidirectional edges, and the bi-rotator graph is constructed by making the edges of the rotator graph bidirectional. The bidirectional edges help reduce the average routing distance and increase the flexibility of applications. Therefore, we propose the bi-rotator graph as an alternative to the rotator graph. In this paper, we first illustrate how to construct the bi-rotator graph and present a node-to-node routing algorithm. Next, we propose an algorithm for building a Hamiltonian cycle, which demonstrates that the bi-rotator graph is Hamiltonian. Finally, we provide a dilation-one algorithm for embedding cycles of arbitrary size into the bi-rotator graph and show that the bi-rotator graph is Hamiltonian-connected.
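    The construction the paper starts from is easy to state concretely: in the rotator graph on permutations of n symbols, each node has a directed edge for each left-rotation of its first i symbols (i = 2..n); the bi-rotator graph adds the inverse (right) rotations, making every edge bidirectional. The tuple representation below is illustrative:

```python
def rotator_neighbors(p):
    """Directed neighbors: left-rotate the first i symbols, i = 2..n."""
    return [p[1:i] + (p[0],) + p[i:] for i in range(2, len(p) + 1)]

def bi_rotator_neighbors(p):
    """Bi-rotator: left rotations plus the inverse right rotations."""
    right = [(p[i - 1],) + p[:i - 1] + p[i:] for i in range(2, len(p) + 1)]
    return rotator_neighbors(p) + right

node = (1, 2, 3)
print(rotator_neighbors(node))      # [(2, 1, 3), (2, 3, 1)]
print(bi_rotator_neighbors(node))   # adds the right rotations, e.g. (3, 1, 2)
```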

  • A Flexible Architecture for Digital Signal Processing

    Wichai BOONKUMKLAO  Yoshikazu MIYANAGA  Kobchai DEJHAN  

     
    PAPER-VLSI Systems

      Page(s):
    2179-2186

    In this paper, we introduce a flexible design for intellectual property (IP), which has become important in designing system LSIs. The proposed IPs have high flexibility with respect to user requirements: the design priority is determined by setting parameters such as the number of arithmetic units, the internal bit length, the clock speed, and so on. The design time can thus be reduced. The designed IP is based on a reconfigurable architecture in which many structures can be dynamically selected. This paper shows an implementation of a Frequency Response Masking (FRM) digital filter and Principal Components Analysis (PCA) using the reconfigurable architecture. We show the method used to realize the designed circuit and the results of experiments using a field-programmable gate array (FPGA).

  • Sufficient Conditions for Update Operations on Object-Oriented Databases to Preserve the Security against Inference Attacks

    Yasunori ISHIHARA  Kengo MORI  Toru FUJIWARA  

     
    PAPER-Databases

      Page(s):
    2187-2197

    Detecting the possibility of inference attacks is necessary in order to keep a database secure. In an inference attack, a user tries to infer the result of a query that the user is not authorized to execute. For method schemas, which are a formal model of object-oriented databases, it is known that the security problem against inference attacks is decidable in polynomial time in the size of a given database instance. However, when the database instance or authorization has been only slightly updated, it is undesirable, for efficiency, to check the entire database again. In this paper, we propose several sufficient conditions for update operations to preserve security. Furthermore, we show that some of the proposed sufficient conditions can be decided much more efficiently than the entire security check. Thus, the sufficient conditions are useful for incremental security checking.

  • A Technique for Constructing Dependable Internet Server Cluster

    Mamoru OHARA  Masayuki ARAI  Satoshi FUKUMOTO  Kazuhiko IWASAKI  

     
    PAPER-Fault Tolerance

      Page(s):
    2198-2208

    An approach is proposed for constructing a dependable server cluster composed only of server nodes with all nodes running the same algorithm. The cluster propagates an IP multicast address as the server address, and clients multicast requests to the cluster. A local proxy running on each client machine enables conventional client software designed for unicasting to communicate with the cluster without having to be modified. Evaluation of a prototype system providing domain name service showed that a cluster using this technique has high dependability with acceptable performance degradation.
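    The request path described above can be sketched with a client-side local proxy that turns an ordinary unicast request into a multicast to the cluster's group address. The group address, port, and payload are illustrative placeholders:

```python
import socket

MCAST_GROUP, MCAST_PORT = "239.1.2.3", 5300   # hypothetical cluster address

def forward_to_cluster(request: bytes, timeout: float = 1.0) -> bytes:
    """Local proxy: multicast the request; first reply from any node wins."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.settimeout(timeout)
    sock.sendto(request, (MCAST_GROUP, MCAST_PORT))
    reply, _server = sock.recvfrom(4096)
    return reply

try:
    print(forward_to_cluster(b"placeholder request"))  # not real DNS wire format
except socket.timeout:
    print("no cluster on this network (expected when run stand-alone)")
```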

  • The Correlation Deduction Method for Intrusion Decision Based on Heterogeneous Sensors

    Minsoo KIM  Bong-Nam NOH  

     
    PAPER-Applications of Information Security Techniques

      Page(s):
    2209-2217

    An anomaly detection sensor, which detects abnormal use of system resources or abnormal behavior by authorized users, uses various measures and makes its decisions on the basis of threshold values. However, such sensors have high false-alarm rates, which makes them hard to commercialize, and it is not easy to choose a threshold suited to the installation environment. In this paper, we propose a method for automatically generating a proper threshold for each sensor, with the thresholds then applied in an integrated decision. We also propose a method for computing the correlation of heterogeneous detection sensors. By using the correlation to integrate the opinions of the sensors into one decision, false positives can be greatly reduced.
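    The correlation computation can be illustrated on binary sensor verdicts: estimate how strongly two sensors agree over past events, then let a fused decision discount sensors that add little independent evidence. The Pearson measure and the example data are editor's assumptions for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 1 = alarm, 0 = no alarm, over the same ten events.
sensor_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
sensor_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"correlation = {pearson(sensor_a, sensor_b):.2f}")   # 0.60
# Highly correlated sensors add little independent evidence, so a fused
# decision can require agreement before alarming, cutting false positives.
```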

  • Memory-Enhanced MMSE Decoding in Vector Quantization

    Heng-Iang HSU  Wen-Whei CHANG  Xiaobei LIU  Soo Ngee KOH  

     
    PAPER-Speech and Hearing

      Page(s):
    2218-2222

    An approach to minimum mean-squared error (MMSE) decoding for vector quantization over channels with memory is presented. The decoder is based on the Gilbert channel model that allows the exploitation of both intra- and inter-block correlation of bit error sequences. We also develop a recursive algorithm for computing the a posteriori probability of a transmitted index sequence, and illustrate its performance in quantization of Gauss-Markov sources under noisy channel conditions.
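    The Gilbert model underlying the decoder is a two-state Markov chain whose bad state produces bursty bit errors, which is exactly the intra- and inter-block correlation the decoder exploits. A small simulation (all probabilities illustrative):

```python
import random

def gilbert_errors(n_bits, p_gb=0.01, p_bg=0.3, e_bad=0.5, e_good=0.0):
    """Yield 1 where the channel flips a bit, 0 otherwise."""
    state = "G"
    for _ in range(n_bits):
        err_prob = e_bad if state == "B" else e_good
        yield 1 if random.random() < err_prob else 0
        # State transitions: Good <-> Bad, giving bursty error runs.
        if state == "G" and random.random() < p_gb:
            state = "B"
        elif state == "B" and random.random() < p_bg:
            state = "G"

random.seed(0)
errors = list(gilbert_errors(10_000))
print("error rate:", sum(errors) / len(errors))   # errors arrive in bursts
```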

  • Analysis and Synthesis of Pitch Contour of Thai Tone Using Fujisaki's Model

    Pusadee SERESANGTAKUL  Tomio TAKARA  

     
    PAPER-Speech and Hearing

      Page(s):
    2223-2230

    We have developed Thai speech synthesis by rule using cepstral parameters. In order to synthesize the pitch contours of Thai tones, we applied an extension of Fujisaki's model. The mid tone is unique to Thai when compared with Chinese. For the extension of Fujisaki's model to Thai tones, we assumed that the mid tone is neutral and adopted its phrase component as the phrase component for all tones. According to our study of the pitch contours of the five Thai tones using this model, the command pattern for the local F0 components needs both positive and negative commands. Listening tests showed that the intelligibility of the Thai tones, measured in terms of error rate, was 0.0%, 0.7%, and 2.7% for analysis/synthesis, Fujisaki's model, and the polynomial model, respectively. Therefore, the extension of Fujisaki's model is shown to be effective for Thai.
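    For reference, Fujisaki's model expresses log F0 as a baseline plus phrase components (impulse responses) and tone/accent components (step responses); the paper's finding is that Thai needs negative as well as positive tone commands. The constants below are generic textbook values, not the paper's Thai parameters:

```python
import math

ALPHA, BETA, GAMMA = 3.0, 20.0, 0.9   # phrase rate, tone rate, ceiling

def G_phrase(t):                       # phrase-control impulse response
    return ALPHA ** 2 * t * math.exp(-ALPHA * t) if t >= 0 else 0.0

def G_tone(t):                         # tone-control step response
    if t < 0:
        return 0.0
    return min(1.0 - (1.0 + BETA * t) * math.exp(-BETA * t), GAMMA)

def ln_f0(t, fb, phrases, tones):
    """phrases: [(Ap, T0)]; tones: [(Aa, onset T1, offset T2)]."""
    total = math.log(fb)
    total += sum(Ap * G_phrase(t - T0) for Ap, T0 in phrases)
    total += sum(Aa * (G_tone(t - T1) - G_tone(t - T2)) for Aa, T1, T2 in tones)
    return total

# One phrase command plus one positive tone command (e.g. a high tone);
# a negative Aa would model the falling contours the paper reports.
f0 = math.exp(ln_f0(0.3, fb=110, phrases=[(0.5, 0.0)], tones=[(0.4, 0.1, 0.5)]))
print(f"F0 at t = 0.3 s: {f0:.1f} Hz")
```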

  • Method to Generate Images for a Motion-Base in an Immersive Display Environment

    Toshio MORIYA  Haruo TAKEDA  

     
    PAPER-Image Processing, Image Pattern Recognition

      Page(s):
    2231-2239

    We propose an image generation method for an immersive multi-screen environment that contains a motion ride. To allow a player to look around freely in a virtual world, a method of generating an image for an arbitrary viewing direction is required, and this technology has already been established. In our environment, displayed images must also be updated according to the movement of the motion ride in order to keep consistency between the player's viewpoint and the virtual world. In this paper, we show that this updating process can be performed by a method similar to that used to generate look-around images and that the same data format is applicable. We then discuss the format in terms of data size and the amount of calculation needed, considering the performance of our display environment, and we propose new image formats that improve on widely used formats such as the perspective and fish-eye formats.

  • Color Image Segmentation Using a Gaussian Mixture Model and a Mean Field Annealing EM Algorithm

    Jong-Hyun PARK  Wan-Hyun CHO  Soon-Young PARK  

     
    PAPER-Image Processing, Image Pattern Recognition

      Page(s):
    2240-2248

    In this paper we present an unsupervised color image segmentation algorithm based on statistical models. We adopt the Gaussian mixture model to represent the distribution of color feature vectors. A novel deterministic annealing EM algorithm and mean field theory from statistical mechanics are used to compute the posterior probability distribution of each pixel and to estimate the parameters of the Gaussian mixture model. We describe a non-contextual segmentation algorithm that uses the deterministic annealing approach and a contextual segmentation algorithm that uses mean field theory. The experimental results show that deterministic annealing EM and mean field theory provide a globally optimal solution for the maximum likelihood estimators and that these algorithms can efficiently segment real images.
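    The core estimator being wrapped is EM for a Gaussian mixture over pixel colors; a bare-bones version with diagonal covariances is sketched below. The annealing temperature schedule and the mean-field spatial smoothing of the paper are deliberately omitted:

```python
import numpy as np

def gmm_em(X, k, iters=50, seed=0):
    """Plain EM for a diagonal-covariance Gaussian mixture."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]       # component means
    var = np.full((k, d), X.var(axis=0))          # diagonal variances
    pi = np.full(k, 1.0 / k)                      # mixing weights
    for _ in range(iters):
        # E-step: responsibilities of each component for each pixel.
        log_p = (-0.5 * (((X[:, None] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibilities.
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
        pi = nk / n
    return r.argmax(axis=1), mu                   # labels, cluster colors

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(0.2, 0.05, (500, 3)),    # two synthetic
                    rng.normal(0.8, 0.05, (500, 3))])   # color clusters
labels, centers = gmm_em(pixels, k=2)
print(centers.round(2))
```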