Hiroaki AKUTSU Ko ARAI
Lanxi LIU Pengpeng YANG Suwen DU Sani M. ABDULLAHI
Xiaoguang TU Zhi HE Gui FU Jianhua LIU Mian ZHONG Chao ZHOU Xia LEI Juhang YIN Yi HUANG Yu WANG
Yingying LU Cheng LU Yuan ZONG Feng ZHOU Chuangao TANG
Jialong LI Takuto YAMAUCHI Takanori HIRANO Jinyu CAI Kenji TEI
Wei LEI Yue ZHANG Hanfeng XIE Zebin CHEN Zengping CHEN Weixing LI
David CLARINO Naoya ASADA Atsushi MATSUO Shigeru YAMASHITA
Takashi YOKOTA Kanemitsu OOTSU
Xiaokang Jin Benben Huang Hao Sheng Yao Wu
Tomoki MIYAMOTO
Ken WATANABE Katsuhide FUJITA
Masashi UNOKI Kai LI Anuwat CHAIWONGYEN Quoc-Huy NGUYEN Khalid ZAMAN
Takaharu TSUBOYAMA Ryota TAKAHASHI Motoi IWATA Koichi KISE
Chi ZHANG Li TAO Toshihiko YAMASAKI
Ann Jelyn TIEMPO Yong-Jin JEONG
Haruhisa KATO Yoshitaka KIDANI Kei KAWAMURA
Jiakun LI Jiajian LI Yanjun SHI Hui LIAN Haifan WU
Gyuyeong KIM
Hyun KWON Jun LEE
Fan LI Enze YANG Chao LI Shuoyan LIU Haodong WANG
Guangjin Ouyang Yong Guo Yu Lu Fang He
Yuyao LIU Qingyong LI Shi BAO Wen WANG
Cong PANG Ye NI Jia Ming CHENG Lin ZHOU Li ZHAO
Nikolay FEDOROV Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Yukasa MURAKAMI Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Kazuya KAKIZAKI Kazuto FUKUCHI Jun SAKUMA
Yitong WANG Htoo Htoo Sandi KYAW Kunihiro FUJIYOSHI Keiichi KANEKO
Waqas NAWAZ Muhammad UZAIR Kifayat ULLAH KHAN Iram FATIMA
Haeyoung Lee
Ji XI Pengxu JIANG Yue XIE Wei JIANG Hao DING
Weiwei JING Zhonghua LI
Sena LEE Chaeyoung KIM Hoorin PARK
Akira ITO Yoshiaki TAKAHASHI
Rindo NAKANISHI Yoshiaki TAKATA Hiroyuki SEKI
Chuzo IWAMOTO Ryo TAKAISHI
Chih-Ping Wang Duen-Ren Liu
Yuya TAKADA Rikuto MOCHIDA Miya NAKAJIMA Syun-suke KADOYA Daisuke SANO Tsuyoshi KATO
Yi Huo Yun Ge
Rikuto MOCHIDA Miya NAKAJIMA Haruki ONO Takahiro ANDO Tsuyoshi KATO
Koichi FUJII Tomomi MATSUI
Yaotong SONG Zhipeng LIU Zhiming ZHANG Jun TANG Zhenyu LEI Shangce GAO
Souhei TAKAGI Takuya KOJIMA Hideharu AMANO Morihiro KUGA Masahiro IIDA
Jun ZHOU Masaaki KONDO
Tetsuya MANABE Wataru UNUMA
Kazuyuki AMANO
Takumi SHIOTA Tonan KAMATA Ryuhei UEHARA
Hitoshi MURAKAMI Yutaro YAMAGUCHI
Jingjing Liu Chuanyang Liu Yiquan Wu Zuo Sun
Zhenglong YANG Weihao DENG Guozhong WANG Tao FAN Yixi LUO
Yoshiaki TAKATA Akira ONISHI Ryoma SENDA Hiroyuki SEKI
Dinesh DAULTANI Masayuki TANAKA Masatoshi OKUTOMI Kazuki ENDO
Kento KIMURA Tomohiro HARAMIISHI Kazuyuki AMANO Shin-ichi NAKANO
Ryotaro MITSUBOSHI Kohei HATANO Eiji TAKIMOTO
Genta INOUE Daiki OKONOGI Satoru JIMBO Thiem Van CHU Masato MOTOMURA Kazushi KAWAMURA
Hikaru USAMI Yusuke KAMEDA
Yinan YANG
Takumi INABA Takatsugu ONO Koji INOUE Satoshi KAWAKAMI
Fengshan ZHAO Qin LIU Takeshi IKENAGA
Naohito MATSUMOTO Kazuhiro KURITA Masashi KIYOMI
Tomohiro KOBAYASHI Tomomi MATSUI
Shin-ichi NAKANO
Ming PAN
Shinichi HABATA Mitsuo YOKOKAWA Shigemune KITAWAKI
The Earth Simulator (ES), developed under the Japanese government's initiative "Earth Simulator project," is a highly parallel vector supercomputer system. In May 2002, the ES was proven to be the most powerful computer in the world, achieving 35.86 teraflops on the LINPACK benchmark and 26.58 teraflops on a global atmospheric circulation model with the spectral method. Three architectural features enabled these achievements: vector processors, shared memory, and a high-bandwidth, non-blocking crossbar interconnection network. This paper describes an overview of the ES, the three architectural features, and the results of performance evaluation, focusing in particular on the hardware realization of the interconnection among the 640 processor nodes.
Mariko SAKAMOTO Akira KATSUNO Aiichiro INOUE Takeo ASAKAWA Kuniki MORITA Tsuyoshi MOTOKURUMADA Yasunori KIMURA
We developed a SPARC-V9 processor, the SPARC64 V. It has an operating frequency of 1.35 GHz and contains 191 million transistors, fabricated using 0.13-µm CMOS technology with eight-layer copper metallization. Its SPECjbb2000 score with 32 CPUs is 492,683, the highest on the market and 42% higher than that of the next-highest system. SPEC CPU2000 performance is 858 for SPECint and 1228 for SPECfp. The processor is designed to provide the high system performance and high reliability required of enterprise server systems, and also to address the performance requirements of high-performance computing. During our development of several generations of mainframe processors, we conducted many related experiments and accumulated enterprise server system (EPS) development skills, an understanding of EPS workload characteristics, and technology providing high reliability, availability, and serviceability. We used these as the basis of the new processor's development; this approach quite effectively bridges the differences between mainframe and SPARC systems. At the beginning of development, before the start of hardware design, we built a software performance simulator so that we could understand the performance impact of proposed specifications, enabling us to make appropriate hardware-design decisions. We took this approach to solve performance problems before tape-out and to avoid spending additional time on design updates and physical machine reconstruction. We were successful, completing the high-performance processor development on schedule and in a short time. This paper describes the SPARC64 V microprocessor and the performance analyses conducted during its design.
In this paper, a high-performance pipelining architecture for the 2-D inverse discrete wavelet transform (IDWT) is proposed. We use a tree-block pipeline-scheduling scheme to increase computation performance and reduce temporary buffers. The scheme divides the input subbands into several wavelet blocks and processes these blocks one by one, so the size of the buffers for storing temporary subbands is greatly reduced. After scheduling the data flow, we fold the computations of all wavelet blocks into the same low-pass and high-pass filters to achieve higher hardware utilization and minimize hardware cost, and we pipeline these two filters efficiently to reach a higher throughput rate. For the computations of N
Christos DROSOS Chrissavgi DRE Spyridon BLIONAS Dimitrios SOUDRIS
The architecture and implementation of a novel processor suitable for wireless terminal applications is introduced. The wireless terminal is based on a novel dual-mode baseband processor for DECT and GSM, which supports both heterodyne and direct-conversion terminal architectures and is capable of undertaking all baseband signal processing, and on an innovative direct-conversion, low-power modulator/demodulator for DECT and GSM. State-of-the-art design methodologies for embedded applications and innovative low-power design steps were followed to obtain a single-chip solution. The performance of the implemented dual-mode direct-conversion wireless terminal was tested and measured for compliance with the standards. The developed terminal fulfils all the requirements and specifications imposed by the DECT and GSM standards.
Hiroaki NISHI Shinji NISHIMURA Katsuyoshi HARASAWA Tomohiro KUDOH Hideharu AMANO
RHiNET-3/SW is the third-generation switch used in the RHiNET-3 system. It provides both low-latency processing and flexible connection due to its use of a credit-based flow-control mechanism, topology-free routing, and deadlock-free routing. The aggregate throughput of RHiNET-3/SW is 80 Gbps, and the latency is 140 ns. RHiNET-3/SW also provides a hop-by-hop retransmission mechanism. Simulation demonstrated that the effective throughput at a node in a 64-node torus RHiNET-3 system is equivalent to the effective throughput of a 64-bit 33-MHz PCI bus and that the performance of RHiNET-3/SW almost equals or exceeds the best performance of RHiNET-2/SW, the second-generation switch. Although credit-based flow control requires 26% more gates than rate-based flow control to manage the virtual channels (VCs), it requires less VC memory than rate-based flow control. Moreover, its use in a network system reduces latency and increases the maximum throughput compared to rate-based flow control.
The instruction set architecture of MBP-light, a dedicated processor for the distributed shared memory (DSM) management of JUMP-1, is analyzed on a real prototype. The Buffer-Register Architecture proposed for the MBP-core improves performance by 5.64% in the home cluster and by 6.27% in a remote cluster. Only the special instruction for hashing the cluster address is effective, improving performance by 2.80%; the other special instructions are almost useless. It appears that the dominant operations in the DSM management program are the handling of packet queues assigned to the local cluster. Thus, common RISC instructions, especially load/store instructions, are frequently used. Separating instruction and data memory improves performance by 33%. The results suggest that an alternative design providing separate on-chip caches and instructions dedicated to packet-queue management would be advantageous.
Shin-ichiro MORI Tomoaki TSUMURA Masahiro GOSHIMA Yasuhiko NAKASHIMA Hiroshi NAKASHIMA Shinji TOMITA
This paper describes the architecture of ReVolver/C40, a scalable parallel machine for volume rendering, and its prototype implementation. The most important feature of ReVolver/C40 is view-independent, real-time rendering of translucent 3D objects using perspective projection. To realize this feature, the authors propose a parallel volume memory architecture based on the principal-axis-oriented sampling method and parallel treble volume memory. This paper also discusses the implementation issues of ReVolver/C40, explaining the various kinds of parallelism extracted to achieve high-performance rendering. Prototype systems have been developed, and their performance evaluation results are reported. Based on the evaluation of the prototypes, ReVolver/C40 with 32 parallel volume memories is estimated to achieve more than 10 frames per second for 256³ volume data on a 256² screen using perspective projection. The authors also review the development of ReVolver/C40 from several points of view.
Yuetsu KODAMA Toshihiro KATASHITA Kenji SAYANO
REX is a reconfigurable experimental system for evaluating and developing parallel computer systems. It consists of large-scale FPGAs and enables systems to be reconfigured, from their processors to the network topology, in order to support their evaluation and development. We evaluated REX using several implementations of parallel computer systems and showed that it has sufficient scalability in gates, memory throughput, and network throughput. We also showed that REX is an effective development tool because of its emulation speed and reconfigurability.
Yuso KANAMORI Oki MINABE Masaki WAKABAYASHI Hideharu AMANO
At the initial stage of developing parallel machines, a software monitor, which manages communication with host computers, program loading, and debugging, is necessary. However, developing such a monitoring system is often a cumbersome job, especially when the target takes a parallel architecture. To solve this problem, we developed an integrated monitor system called "Pot". "Pot" consists of a system that runs on the host computer and simple code on the target machine. To reduce development costs, the program on the target machine is kept as simple as possible, while "Pot" on the host computer itself provides various functions for system development.
Virtual memory functions in real-time operating systems have been used in embedded systems. Recent RISC processors provide virtual memory support through a software-managed Translation Lookaside Buffer (TLB). For the real-time aspects of embedded systems, the management of TLB entries is most important, because the overhead incurred at TLB-miss time greatly affects the overall performance of the system. In this paper, we propose several TLB management algorithms for MIPS processors. In these algorithms, the TLB entry to be replaced is either chosen randomly or managed explicitly. We analyze the algorithms by comparing their overheads at task-switching and TLB-miss times.
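A toy simulation in the spirit of this comparison (ours, not the authors' code; the entry count and refill penalty are assumptions) measures the overhead of random replacement:

    import random

    class SoftwareTLB:
        """Toy model of a software-managed TLB with random replacement."""
        def __init__(self, num_entries=48):
            self.num_entries = num_entries
            self.resident = set()            # resident virtual page numbers

        def access(self, vpn):
            """Cycle cost of one access: 1 on a hit, an assumed 40-cycle
            software-refill penalty on a miss."""
            if vpn in self.resident:
                return 1
            if len(self.resident) >= self.num_entries:
                self.resident.remove(random.choice(list(self.resident)))
            self.resident.add(vpn)
            return 40

    # Replay a synthetic page-reference string and total the overhead.
    tlb = SoftwareTLB()
    refs = [random.randrange(64) for _ in range(10000)]
    print(sum(tlb.access(v) for v in refs), "cycles")

Swapping the random victim choice for a managed policy and replaying the same reference string gives the kind of head-to-head overhead comparison the paper performs.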
Hidenori KOBAYASHI Nobuyuki YAMASAKI
The imprecise computation model is one of the flexible computation models used to construct real-time systems. It is especially useful when the worst-case execution times are difficult to estimate or the execution times vary widely. Although there are several ways to implement this model, they have not attracted much attention from real-world application programmers to date, owing to their unrealistic assumptions and high dependency on the execution environment. In this paper, we present an integrated approach to implementing the imprecise computation model. In particular, our research covers three aspects. First, we present a new imprecise computation model consisting of a mandatory part, an optional part, and another mandatory part called the wind-up part. The wind-up part allows application programmers to explicitly incorporate into their programs the exact operations needed for safe degradation of performance when resources run short. Second, we describe a scheduling algorithm called Mandatory-First with Wind-up Part (M-FWP), which is based on the Earliest Deadline First strategy. Unlike scheduling algorithms developed for the classical imprecise computation model, this algorithm is capable of scheduling a mandatory portion after an optional portion. Third, we present a dynamic priority server method for an efficient implementation of the M-FWP algorithm, and we show that at most one such server is needed per node. To assess the performance of the proposed approach, we implemented a real-time operating system called RT-Frontier. The experimental analyses proved its ability to execute tasks based on the imprecise computation model without requiring any knowledge of the execution time of the optional part. Moreover, it also showed a performance gain over the traditional checkpointing technique.
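A minimal sketch of the task structure and the mandatory-first selection rule (the full M-FWP algorithm with its dynamic priority server is more involved; the names and fields here are ours):

    from dataclasses import dataclass

    @dataclass
    class ImpreciseTask:
        """A task split into mandatory, optional, and wind-up parts."""
        deadline: float        # absolute deadline
        mandatory_left: float  # remaining first mandatory part
        optional_left: float   # remaining optional part
        windup: float          # wind-up part, always reserved before the deadline

    def pick_next(ready, now):
        """Mandatory-first selection: serve mandatory work in EDF order;
        spend time on optional parts only while each candidate can still
        fit its wind-up part before its deadline."""
        hard = [t for t in ready if t.mandatory_left > 0]
        if hard:
            return min(hard, key=lambda t: t.deadline)
        soft = [t for t in ready
                if t.optional_left > 0 and now + t.windup < t.deadline]
        return min(soft, key=lambda t: t.deadline) if soft else None

The wind-up reservation is what lets the scheduler safely place mandatory work after optional work, which classical imprecise-computation schedulers cannot do.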
Seongyong KIM Kong-Joo LEE Key-Sun CHOI
We propose a normalization scheme for syntactic structures using a binary phrase structure grammar with composite labels. The normalization adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of its sub-trees, so that it can represent the compositional information of the tree. The composite labels are generated from part-of-speech tags using an automatic labelling algorithm. Since the proposed normalization scheme is binary and uses only part-of-speech information, it can readily be used to compare the results of different syntactic analyses independently of their syntactic descriptions and can be applied to other languages as well. It can also be used for syntactic analysis, where it performs better than the previous syntactic description for a Korean corpus. We implemented a tool that transforms a syntactic description into a normalized one based on the proposed scheme. It can help construct a unified syntactic corpus and extract syntactic information from various types of syntactic corpora in a uniform way.
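A toy sketch of the idea (ours; the paper's automatic labelling algorithm is not reproduced here, and the head rule is an assumption): fold an n-ary constituent into a binary tree whose labels are composed from the POS attributes of the two sub-trees:

    def head_pos(tree):
        """POS attribute of a sub-tree: a leaf is its own tag; otherwise
        take the rightmost child's (a naive head rule, our assumption)."""
        if isinstance(tree, str):
            return tree
        label, children = tree
        return head_pos(children[-1])

    def normalize(children):
        """Fold an n-ary constituent into a binary tree whose every node
        carries a composite label built from its two sub-trees."""
        tree = children[0]
        for right in children[1:]:
            tree = (head_pos(tree) + "+" + head_pos(right), [tree, right])
        return tree

    # e.g. a four-word constituent given only as (hypothetical) POS tags:
    print(normalize(["NNG", "JKS", "VV", "EF"]))

Because every label is derived solely from POS tags, two differently annotated parses of the same sentence normalize to comparable trees.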
Assurance improvement is the most challenging task in both the railway and space fields, even though the technologies they require differ. The railways have achieved an extremely high assurance level through their long history and experience, whereas the space field has not accumulated comparable experience and, in addition, demands highly advanced technology. As a result, the latter places more emphasis on theoretical analyses for ensuring assurance. This paper introduces the two fields' different approaches toward assurance and compares the two methods.
Yasutomo SHIRAKAWA Akio SHIIBASHI
Suica is the nickname of our contactless IC card: Super Urban Intelligent CArd. There are two types of the card: the Suica IO (stored-fare, SF) Card and the Suica Commuter Pass, which functions as both a stored-fare card and a commuter pass. There are 6.54 million Suica holders (about 3.33 million Suica Season Pass holders and 3.21 million Suica IO Card holders) as of June 16, 2003.
The general public is expected to demand, in the not-too-distant future, more stringent certification procedures for the computing parts of traditional and new-generation safety-critical application systems. Such quality-of-service (QoS) certification processes will not and cannot rely solely on testing. Design-time guarantees of the timely service capabilities of various subsystems are an inevitable part of such processes. Although some promising developments in this area have occurred in recent years, the technological challenges yet to be overcome are enormous. This paper summarizes the author's perspective on the remaining challenges and on promising directions for tackling them.
Information service provision and utilization is an important infrastructure in a high-assurance distributed information service system. To cope with the rapidly evolving situation of providers' and users' heterogeneous requirements, an autonomous information service system called the Faded Information Field (FIF) has been proposed. FIF is a distributed information service system architecture, sustained by push/pull mobile agents, in which the most popular information is recursively allocated, demand-oriented, closer to the users, striking a trade-off between the costs of service allocation and access. In this system, users' requests are autonomously served by pull mobile agents in charge of finding the relevant service. In the case of a mono-service request, the system is designed to reduce the time users need to access the information and to preserve the consistency of the replicas. However, when a user requests joint selection of multiple services, the synchronization of atomic actions and timeliness have to be assured by the system. In this paper, the relationship among the contents, properties, and access ratios of information services is clarified. Based on these factors, the ratio of correlation and the degree of satisfaction are defined, and the autonomous integration and optimal allocation of information services across heterogeneous FIFs, providing one-stop service for users' multi-service requirements, are proposed. The effectiveness of the proposed technology is shown through evaluation, and the results show that the integrated services can reduce total user access time and increase service consumption compared with separate systems.
Hiroki SUGURI Eiichiro KODAMA Masatoshi MIYAZAKI
In order for agent-based applications to be truly autonomous and decentralized, heterogeneous multi-agent systems themselves must communicate and interoperate with each other. To solve this problem, we take a two-step approach. First, message-level interoperability is realized by a gateway agent that interconnects heterogeneous multi-agent systems. Second, higher-level interoperation of conversations, which consist of bi-directional streams of messages, is achieved by dynamically negotiating the interaction protocols. We demonstrate the concept, technique, and implementation of integrating multi-agent systems and show how the method improves the assurance of real-world applications in autonomous decentralized systems.
Carlos PEREZ LEGUIZAMO Dake WANG Kinji MORI
To meet the highly competitive and dynamic needs of the market, an e-business company needs to flexibly integrate its heterogeneous database systems, e.g., integrating makers and retailers in a supply chain management (SCM) system. Customers demand one-click response, and their access requirements change frequently. Moreover, the retailers and makers in an SCM, being autonomous entities, have their own specific requirements for stock cost and opportunity loss, depending on local situations that also change with time. Under this background, the integrated DBs of the SCM are required to provide real-time response, satisfaction of heterogeneous requirements, and the flexibility to adapt to changing requirements. The conventional approach of strict consistency leads to slow response and little flexibility, owing to the strong interdependence of the systems. In this paper, an Autonomous Decentralized Database System is proposed as an application-oriented database technology based on the concept of autonomy and loose consistency among the distributed DB systems, thus providing real-time response, flexibility, and high availability. Autonomy in the system is achieved by defining a data attribute, the Allowable Volume (AV), within which each component DB has the autonomy to update the data in real time. Moreover, the system adapts to the dynamically changing heterogeneous access requirements at each DB by managing the distribution of AV among the DBs through an active coordination mechanism. Because of the dynamic and unpredictable environment, the component DBs are given complete autonomy over their local and coordination decisions, diminishing interdependency and improving response time. As the system consists of loosely connected subsystems, it also has high availability. Therefore, the proposed system provides a highly decentralized architecture with flexibility and high availability. The performance of the system is shown to be significantly effective, from the standpoints of communication cost and response time, by simulating an Internet-based SCM system.
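As a toy illustration of the Allowable Volume mechanism (our sketch; the paper's active coordination algorithm is more elaborate), each site commits updates locally within its AV, while a background process redistributes unused AV:

    class SiteDB:
        """One autonomous component DB: it may commit updates in real time
        as long as they stay within its Allowable Volume (AV)."""
        def __init__(self, name, av):
            self.name, self.av = name, av

        def sell(self, qty):
            if qty <= self.av:      # purely local decision: real-time response
                self.av -= qty
                return True
            return False            # exceeds local AV: defer to coordination

    def rebalance(sites):
        """Background coordination (a stand-in for the paper's active
        coordination mechanism): redistribute the remaining AV evenly."""
        total = sum(s.av for s in sites)
        share, rem = divmod(total, len(sites))
        for i, s in enumerate(sites):
            s.av = share + (1 if i < rem else 0)

    stores = [SiteDB("tokyo", 60), SiteDB("osaka", 10)]
    stores[1].sell(8)
    rebalance(stores)               # osaka's depleted AV is replenished
    print([(s.name, s.av) for s in stores])

Because no site ever exceeds its AV, global consistency holds loosely without per-update coordination, which is where the response-time gain comes from.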
Juichi TAKAHASHI Yoshiaki KAKUDA
Software and its systems are more complicated than they were a decade ago, and these systems are used for mission-critical business, flight control, and so on, which often require high assurance. In such circumstances, black-box testing is frequently used, but it yields only empirical rather than numerical measures of the testing result. In this research, we therefore develop an enhanced FSM (finite state machine) testing method that can produce the code coverage rate as a numerical value. The developed FSM testing with code coverage considers not only software system behavior but also data. With this method we obtained a higher code coverage rate, an indicator of system quality, than with existing black-box testing methods.
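As an illustration of turning FSM-driven testing into a numerical result, the sketch below (ours; the machine and test runs are hypothetical) measures transition coverage:

    def transition_coverage(fsm, runs):
        """Drive the machine through event sequences and report the
        fraction of FSM transitions exercised, a numerical test result."""
        covered = set()
        for events in runs:
            state = fsm['start']
            for ev in events:
                nxt = fsm['delta'].get((state, ev))
                if nxt is None:
                    continue                 # event not accepted in this state
                covered.add((state, ev, nxt))
                state = nxt
        return len(covered) / len(fsm['delta'])

    # Hypothetical two-state machine and two test runs:
    fsm = {'start': 'idle',
           'delta': {('idle', 'open'): 'busy', ('busy', 'close'): 'idle',
                     ('busy', 'read'): 'busy'}}
    print(transition_coverage(fsm, [['open', 'read', 'close'], ['open', 'close']]))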
Bojan CUKIC Erdogan GUNEL Harshinder SINGH Lan GUO
Software certification is a notoriously difficult problem. From the software reliability engineering perspective, the certification process must provide evidence that the program meets or exceeds the required level of reliability. When certifying the reliability of a high-assurance system, very few failures, if any, are observed during testing. In statistical estimation theory, the probability of an event is estimated by determining the proportion of times it occurs in a fixed number of trials. In the absence of failures, the number of required certification tests becomes impractically large. We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with a given probability, and this presumed reliability must then be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis accumulates, we change the degree of belief in the hypothesis, and depending on the corroborating evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required certification tests, even for high-assurance systems so far considered impossible to certify.
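The corroboration step can be pictured with a standard Beta-binomial update (a minimal sketch of the idea; the authors' exact formulation may differ, and the prior and numbers below are assumptions):

    from scipy.stats import beta

    def certify(prior_a, prior_b, tests, failures, p_req, confidence=0.99):
        """Corroborate a presumed failure probability with test evidence.
        A Beta(a, b) prior encodes belief from the development lifecycle;
        certify when P(p <= p_req | data) reaches the required confidence."""
        post_a = prior_a + failures
        post_b = prior_b + tests - failures
        belief = beta.cdf(p_req, post_a, post_b)   # posterior P(p <= p_req)
        return belief >= confidence, belief

    # e.g. optimistic prior Beta(1, 999), 2000 failure-free tests, target p <= 1e-3
    ok, belief = certify(1, 999, 2000, 0, 1e-3)
    print(ok, round(belief, 4))

A strong prior from lifecycle evidence shifts part of the burden off testing, which is exactly why the required number of certification tests drops to an economical level.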
Wei-Tek TSAI Ray PAUL Lian YU Akihiro SAIMI Zhibin CAO
Web Services (WS) have received significant attention recently. Delivering Quality of Service (QoS) on the Internet is a critical and significant challenge for the WS community. This article proposes a Web Services Testing Framework (WSTF) that WS participants can use to perform WS testing. WSTF provides three main distributed components: a test master, test agents, and test monitors. The test master manages scenarios and generates test scripts, initiating WS testing by sending the test scripts to the test agents. The test agents dynamically bind to and invoke the WS. The test monitors capture the synchronous/asynchronous messages sent and received, attach timestamps, and trace state-change information. The benefit of using WSTF is that the user only needs to specify system scenarios based on the system requirements, without having to write test code. To validate the proposed approach, this paper uses the framework to test a supply-chain system implemented with WS.
Kazuo KERA Keisuke BEKKI Kinji MORI
Recent real-time systems need to be expandable with heterogeneous functions and operations, and high assurance is very important for such systems. To realize high-assurance systems, we study an autonomous step-by-step construction technique based on assurance evaluation. In this paper, we propose the average functional reliability as the best index of assurance performance for system construction, and we propose an autonomous step-by-step construction technique that decides the construction sequence so as to maximize the assurance performance.
Jason O. HALLSTROM William M. LEAL Anish ARORA
The demand for highly available software systems has increased dramatically over the past several years. Such systems must be developed using a discipline that supports unanticipated runtime evolution. We characterize the desiderata of a programming model that provides such support, and describe the design and implementation of an architecture satisfying these criteria. The Dynamic Reconfiguration Subsystem (DRSS) is an interceptor-based open container architecture that supports the development of highly available systems by enabling the scalable, dynamic deployment of cross-cutting software modifications. We have implemented a prototype of DRSS using Microsoft's .NET Framework.
Hiroki FUJIO Hiroyuki OKAMURA Tadashi DOHI
Software rejuvenation is a proactive fault-management technique for operational software systems that age owing to error conditions accruing with time and/or load, and it is important for high-assurance system design. In this paper, fine-grained shock models are developed to determine optimal rejuvenation policies that maximize system availability. We introduce three kinds of rejuvenation schemes and calculate the optimal software rejuvenation schedule maximizing system availability for each scheme. The stochastic models with the three rejuvenation policies are extensions of Bobbio et al. (1998, 2001) and represent the failure phenomenon due to the exhaustion of software resources caused by memory leaks, fragmentation, etc. Numerical examples are devoted to comparing the three control schemes quantitatively.
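As a simple point of comparison, the classical renewal-reward formulation (not the paper's fine-grained shock models; the Weibull aging and downtime parameters are assumptions of ours) already shows how an availability-maximizing schedule arises:

    import numpy as np

    def availability(tau, shape=2.0, scale=1000.0, t_rej=1.0, t_rep=20.0):
        """Steady-state availability when the software is rejuvenated every
        tau hours: expected uptime per cycle over expected cycle length."""
        t = np.linspace(0.0, tau, 2000)
        R = np.exp(-(t / scale) ** shape)           # survival function of aging failure
        up = np.trapz(R, t)                         # expected uptime in one cycle
        down = (1 - R[-1]) * t_rep + R[-1] * t_rej  # unplanned repair vs. planned rejuvenation
        return up / (up + down)

    taus = np.linspace(50, 2000, 40)
    best = max(taus, key=availability)
    print(f"optimal rejuvenation interval ~ {best:.0f} h, A = {availability(best):.5f}")

Because repair after failure (t_rep) costs more downtime than planned rejuvenation (t_rej) while aging makes failure ever more likely, availability is maximized at a finite interval.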
This paper presents a new interconnection network called the bi-rotator graph, which originates from the rotator graph. The rotator graph has many unidirectional edges; the bi-rotator graph is constructed by making the edges of the rotator graph bidirectional. The bidirectional edges help to reduce the average routing distance and increase the flexibility of applications. Therefore, we propose the bi-rotator graph as an alternative to the rotator graph. In this paper, we first illustrate how to construct the bi-rotator graph and present a node-to-node routing algorithm. Next, we propose an algorithm for building a Hamiltonian cycle, which demonstrates that the bi-rotator graph is Hamiltonian. Finally, we provide a dilation-one algorithm for embedding cycles of arbitrary size into the bi-rotator graph and show that the bi-rotator graph is Hamiltonian-connected.
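A minimal sketch of the construction, assuming the usual definition of the rotator graph as a Cayley graph on permutations generated by left-rotations of prefixes; the bi-rotator simply adds the reverse rotations:

    def neighbors(perm):
        """Neighbors of a node in the bi-rotator graph on n symbols: for
        each prefix length i = 2..n, rotate the prefix left (the original
        rotator edge) and right (the added reverse edge)."""
        n = len(perm)
        out = []
        for i in range(2, n + 1):
            left = perm[1:i] + perm[:1] + perm[i:]        # rotate first i left
            right = perm[i-1:i] + perm[:i-1] + perm[i:]   # rotate first i right
            out.extend([left, right])
        return out

    # Degree check for n = 4: the two rotations of a length-2 prefix
    # coincide, so each node has 2(n-1)-1 = 5 distinct neighbors.
    print(len(set(neighbors((1, 2, 3, 4)))))

Routing can then proceed greedily on this neighbor set; the reverse edges are what shorten the average routing distance relative to the unidirectional rotator graph.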
Wichai BOONKUMKLAO Yoshikazu MIYANAGA Kobchai DEJHAN
In this paper, we introduce a flexible design for intellectual property (IP), which has become important in system LSI design. The proposed IPs have high flexibility with respect to user requirements: the design priorities are determined by setting parameters such as the number of arithmetic units, the internal bit length, the clock speed, and so on, so the design time can be reduced. The designed IP is based on a reconfigurable architecture in which many structures can be dynamically selected. This paper presents implementations of a frequency-response masking (FRM) digital filter and principal component analysis (PCA) using the reconfigurable architecture. We show the method for realizing the designed circuits and the results of experiments using a field-programmable gate array (FPGA).
Yasunori ISHIHARA Kengo MORI Toru FUJIWARA
Detecting the possibility of inference attacks is necessary in order to keep a database secure. In an inference attack, a user tries to infer the results of queries that he or she is not authorized to issue. For method schemas, a formal model of object-oriented databases, it is known that the security problem against inference attacks is decidable in polynomial time in the size of a given database instance. However, when the database instance or the authorization has been only slightly updated, re-checking the entire database is undesirable for efficiency reasons. In this paper, we propose several sufficient conditions under which update operations preserve security. Furthermore, we show that some of the proposed sufficient conditions can be decided much more efficiently than the entire security check. Thus, the sufficient conditions are useful for incremental security checking.
Mamoru OHARA Masayuki ARAI Satoshi FUKUMOTO Kazuhiko IWASAKI
An approach is proposed for constructing a dependable server cluster composed only of server nodes, with all nodes running the same algorithm. The cluster propagates an IP multicast address as the server address, and clients multicast requests to the cluster. A local proxy running on each client machine enables conventional client software designed for unicasting to communicate with the cluster without modification. Evaluation of a prototype system providing domain name service showed that a cluster using this technique has high dependability with acceptable performance degradation.
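A minimal sketch of such a local proxy for UDP traffic (our illustration; the group address, ports, and first-reply handling are assumptions, and a real proxy would add timeouts and duplicate suppression):

    import socket

    # Conventional client software sends unicast UDP to 127.0.0.1:5300;
    # the proxy re-addresses each request to the cluster's multicast group.
    GROUP, GPORT, LPORT = '239.1.1.1', 5300, 5300

    listen = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listen.bind(('127.0.0.1', LPORT))

    mcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mcast.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    while True:
        data, client = listen.recvfrom(4096)   # request from the local client
        mcast.sendto(data, (GROUP, GPORT))     # multicast to the server cluster
        reply, _ = mcast.recvfrom(4096)        # take the first reply that arrives
        listen.sendto(reply, client)           # relay it back to the client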
An anomaly detection sensor, which detects abnormal use of system resources or abnormal behavior of authorized users, relies on various measures and makes decisions on the basis of threshold values. However, such sensors have a high false-alarm rate, which makes them hard to commercialize, and it is not easy to set a threshold suited to the installation environment. In this paper, we propose a method for automatically generating a proper threshold for each sensor, and the thresholds are applied in an integrated decision. We also propose a method for computing the correlation among heterogeneous detection sensors. By using the correlation to integrate the sensors' opinions into a decision, false positives can be greatly reduced.
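A sketch of the idea under simple assumptions of ours (baseline-derived thresholds and correlation-weighted voting; the paper's actual derivations may differ):

    import numpy as np

    def auto_threshold(baseline_scores, k=3.0):
        """Derive a per-sensor threshold from attack-free baseline data:
        mean + k sigma (an assumed rule for illustration)."""
        return baseline_scores.mean() + k * baseline_scores.std()

    def fused_alarm(scores, thresholds, corr):
        """Integrated decision: weight each sensor's vote by how strongly
        it correlates, on average, with the other sensors."""
        votes = (scores > thresholds).astype(float)
        weights = corr.mean(axis=1)
        return votes @ weights / weights.sum() > 0.5

    rng = np.random.default_rng(0)
    base = rng.normal(0, 1, (1000, 3))             # baseline scores of 3 sensors
    thr = np.array([auto_threshold(base[:, i]) for i in range(3)])
    corr = np.corrcoef(base, rowvar=False)         # pairwise sensor correlation
    print(fused_alarm(np.array([4.0, 3.5, 0.1]), thr, corr))

A single sensor exceeding its threshold no longer raises an alarm by itself, which is the mechanism by which fusion suppresses false positives.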
Heng-Iang HSU Wen-Whei CHANG Xiaobei LIU Soo Ngee KOH
An approach to minimum mean-squared error (MMSE) decoding for vector quantization over channels with memory is presented. The decoder is based on the Gilbert channel model that allows the exploitation of both intra- and inter-block correlation of bit error sequences. We also develop a recursive algorithm for computing the a posteriori probability of a transmitted index sequence, and illustrate its performance in quantization of Gauss-Markov sources under noisy channel conditions.
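The recursion for the a posteriori probabilities is essentially a hidden-Markov forward pass over the two Gilbert states; the sketch below (our illustration with assumed transition and error rates, filtering the channel state only rather than the full transmitted index sequence) shows the core update:

    import numpy as np

    # Two-state Gilbert model (G = good, B = bad); all rates are assumptions.
    A = np.array([[0.99, 0.01],      # P(next state | current), rows = [G, B]
                  [0.10, 0.90]])
    p_err = np.array([0.001, 0.20])  # bit-error probability in each state

    def forward_posterior(errors):
        """Filtered posterior P(state_t | bit-error sequence so far)."""
        alpha = np.array([0.9, 0.1])            # initial belief
        for e in errors:
            like = p_err if e else 1.0 - p_err  # P(observation | state)
            alpha = like * (alpha @ A)          # predict, then update
            alpha /= alpha.sum()                # normalize
        return alpha

    print(forward_posterior([0, 0, 1, 1, 0]))   # [P(G), P(B)] after the sequence

Conditioning the index posteriors on this state estimate is what lets the MMSE decoder exploit both intra- and inter-block correlation of the error sequence.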
Pusadee SERESANGTAKUL Tomio TAKARA
We have developed Thai speech synthesis by rule using cepstral parameters. To synthesize the pitch contours of Thai tones, we applied an extension of Fujisaki's model. The mid tone is unique to Thai when compared with Chinese. For the extension of Fujisaki's model to Thai tones, we assumed that the mid tone is neutral and adopted its phrase component as the phrase component for all tones. According to our study of the pitch contours of the five Thai tones using this model, the command pattern for the local F0 components needs both positive and negative commands. Listening tests showed that the intelligibility of the Thai tones, measured in terms of error rate, was 0.0%, 0.7%, and 2.7% for analysis/synthesis, Fujisaki's model, and the polynomial model, respectively. Thus, the extension of Fujisaki's model is shown to be effective for Thai.
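For reference, Fujisaki's model superposes phrase and local (here, tone) commands on a baseline in the log-F0 domain; a standard formulation (conventional notation, not taken from this abstract, with the local-command amplitudes allowed to be negative as found above) is:

    \ln F_0(t) = \ln F_b + \sum_i A_{p_i}\, G_p(t - T_{0i})
               + \sum_j A_{a_j} \bigl[ G_a(t - T_{1j}) - G_a(t - T_{2j}) \bigr],

    G_p(t) = \alpha^2 t\, e^{-\alpha t}, \qquad
    G_a(t) = \min\bigl\{ 1 - (1 + \beta t)\, e^{-\beta t},\; \gamma \bigr\}
    \quad (t \ge 0),

with both response functions zero for t < 0. Taking the mid tone's phrase component as common to all five tones then reduces tone modeling to choosing the local-command pattern (A_a positive or negative) per tone.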
We propose an image generation method for an immersive multi-screen environment that contains a motion ride. To allow a player to look around freely in a virtual world, a method for generating images in arbitrary directions is required, and this technology has already been established. In our environment, the displayed images must also be updated according to the movement of the motion ride in order to keep the player's viewpoint consistent with the virtual world. In this paper, we show that this updating process can be performed by a method similar to that used to generate look-around images and that the same data format is applicable. We then discuss the format in terms of data size and the amount of calculation, which must be considered for performance in our display environment, and we propose new image formats that improve on widely used formats such as the perspective and fish-eye formats.
Jong-Hyun PARK Wan-Hyun CHO Soon-Young PARK
In this paper we present an unsupervised color image segmentation algorithm based on statistical models. We adopt the Gaussian mixture model to represent the distribution of color feature vectors. A novel deterministic annealing EM and the mean field theory from statistical mechanics are used to compute the posterior probability distribution of each pixel and to estimate the parameters of the Gaussian mixture model. We describe a non-contextual segmentation algorithm that uses the deterministic annealing approach and a contextual segmentation algorithm that uses the mean field theory. The experimental results show that the deterministic annealing EM and mean field theory provide a globally optimal solution for the maximum likelihood estimators and that these algorithms can efficiently segment real images.
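A compact sketch of the non-contextual part, i.e., deterministic-annealing EM for a Gaussian mixture (our simplification to spherical covariances; the contextual mean-field step is omitted):

    import numpy as np

    def daem_gmm(X, K, betas=(0.2, 0.5, 0.8, 1.0), iters=30, seed=0):
        """Deterministic-annealing EM for a spherical Gaussian mixture:
        responsibilities are tempered by beta, which is raised toward 1,
        making EM less likely to get trapped in poor local maxima."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        mu = X[rng.choice(n, K, replace=False)]
        var, pi = np.full(K, X.var()), np.full(K, 1.0 / K)
        for beta in betas:
            for _ in range(iters):
                # E-step: tempered responsibilities r_nk ~ (pi_k N_k)^beta
                d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
                logp = np.log(pi) - 0.5 * (d * np.log(var) + d2 / var)
                r = np.exp(beta * (logp - logp.max(1, keepdims=True)))
                r /= r.sum(1, keepdims=True)
                # M-step: standard weighted maximum-likelihood updates
                nk = r.sum(0)
                mu = (r.T @ X) / nk[:, None]
                d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
                var = (r * d2).sum(0) / (d * nk)
                pi = nk / n
        return r.argmax(1)          # per-pixel segment labels

Applied to an image, X holds the per-pixel color feature vectors and the returned labels give the segmentation; lowering the initial beta flattens the posterior, which is what drives the annealing toward a good optimum.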