
Keyword Search Result

[Keyword] OMP (3945 hits)

Showing 3081-3100 of 3945 hits

  • A Reconfiguration Algorithm for Memory Arrays Containing Faulty Spares

    Keiichi HANDA  Kazuhito HARUKI

    PAPER
    Vol: E83-A No:6  Page(s): 1123-1130

    Reconfiguration of memory arrays using spare lines is known to be an NP-complete problem. In this paper, we present an algorithm that reconfigures a memory array into a fault-free one, using spare lines effectively even when the spares themselves contain faulty elements. First, the reconfiguration problem is transformed into an equivalent covering problem in which faulty elements are covered by imaginary fault-free spare lines. Next, the covering problem is solved heuristically using the Dulmage-Mendelsohn decomposition. Experiments on recently designed memory arrays show that the proposed algorithm is fast and practical.
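
    As a rough illustration of the covering formulation (not the paper's DM-decomposition heuristic), the following sketch greedily repairs a fault map with spare rows and columns, assuming the spares themselves are fault-free:

```python
# Greedy baseline for the fault-covering formulation of memory-array repair.
# A minimal sketch assuming fully fault-free spares, to illustrate the
# covering problem itself; not the paper's DM-decomposition heuristic.

def greedy_repair(faults, spare_rows, spare_cols):
    """faults: set of (row, col) faulty cells; returns chosen lines or None."""
    faults = set(faults)
    chosen = []
    while faults:
        rows, cols = {}, {}
        for r, c in faults:
            rows[r] = rows.get(r, 0) + 1
            cols[c] = cols.get(c, 0) + 1
        best_r = max(rows, key=rows.get)
        best_c = max(cols, key=cols.get)
        # Pick the spare line covering the most remaining faults.
        if rows[best_r] >= cols[best_c] and spare_rows > 0:
            spare_rows -= 1
            chosen.append(("row", best_r))
            faults = {(r, c) for r, c in faults if r != best_r}
        elif spare_cols > 0:
            spare_cols -= 1
            chosen.append(("col", best_c))
            faults = {(r, c) for r, c in faults if c != best_c}
        else:
            return None  # out of spares: repair fails
    return chosen

print(greedy_repair({(0, 1), (0, 3), (2, 3)}, spare_rows=1, spare_cols=1))
```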

  • Design and Analysis of a Packet Concentrator

    Yiu-Wing LEUNG

    PAPER-Switching
    Vol: E83-B No:5  Page(s): 1115-1121

    Packet concentrators are used in many high-speed computer communication systems such as fast packet switches. In these systems, the time available for concentration is very short, so it is desirable to realize packet concentrators as hardware chips for fast concentration. The knockout concentrator was proposed for hardware realization. In this paper, we improve this concentrator to reduce the probability of packet loss; the improved design is called the wraparound knockout concentrator. It has several wraparound paths within it and requires no additional pins per chip. After contention among the packets in a slot, each winner goes to a distinct output, some losers circulate along the wraparound paths to contend in the subsequent slot, and the remaining losers are discarded. In this manner, some losers are not discarded immediately and still have the chance to reach the outputs in the subsequent slot, thereby reducing the probability of packet loss. We analyze the number of logic gates required and the probability of packet loss. The numerical results show that with only a few wraparound paths, the probability of packet loss is already reduced by orders of magnitude.
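
    To see why a few wraparound paths help, the following Monte Carlo sketch compares loss probabilities under a simplified slot model; the parameters (N inputs, L outputs, W wraparound paths, Bernoulli arrivals) are illustrative assumptions, not the paper's gate-level analysis:

```python
# Monte Carlo sketch of packet loss in a wraparound knockout concentrator.
# Simplified slot model with hypothetical parameters.
import random

def loss_probability(N=32, L=8, W=4, p=0.2, slots=200_000, seed=1):
    rng = random.Random(seed)
    recirculated = 0
    offered = lost = 0
    for _ in range(slots):
        arrivals = sum(rng.random() < p for _ in range(N))
        offered += arrivals
        contenders = arrivals + recirculated
        winners = min(contenders, L)      # at most L packets exit per slot
        losers = contenders - winners
        recirculated = min(losers, W)     # some losers try again next slot
        lost += losers - recirculated     # the rest are discarded
    return lost / offered

for W in (0, 2, 4):                       # W=0 is the plain knockout
    print(f"W={W}: P(loss) ~ {loss_probability(W=W):.2e}")
```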

  • On the Concept of "Stability" in Asynchronous Distributed Decision-Making Systems

    Tony S. LEE  Sumit GHOSH

    PAPER-Real Time Control
    Vol: E83-B No:5  Page(s): 1023-1038

    Asynchronous, distributed, decision-making (ADDM) systems constitute a special class of distributed problems and are characterized as large, complex systems whose principal elements are geographically dispersed entities that communicate among themselves, asynchronously, through message passing and are permitted autonomy in local decision-making. A fundamental property of ADDM systems is stability, which refers to their behavior under representative perturbations of their operating environments, given that such systems are intended to be real, complex, and to some extent mission-critical, and are subject to unexpected changes in their operating conditions. ADDM systems are closely related to autonomous decentralized systems (ADS) in their principal elements, the difference being that the characteristics and boundaries of ADDM systems are defined rigorously. This paper introduces the concept of stability in ADDM systems and proposes an intuitive yet practical and usable definition inspired by those used in control systems and physics. A comprehensive stability analysis on an accurate simulation model provides the necessary assurance, with a high level of confidence, that the system will perform adequately. An ADDM system is defined as stable if, initiated in a steady-state, it returns to a steady-state in finite time following a perturbation. Equilibrium, or steady-state, is defined by placing bounds on the measured error in the system. Where the final steady-state is equivalent to the initial one, a system is referred to as strongly stable; if the final steady-state is potentially worse than the initial one, a system is deemed marginally stable; when a system fails to return to steady-state following the perturbation, it is unstable. Perturbations are classified as either changes in the input pattern or changes in one or more environmental characteristics of the system, such as hardware failures. Thus, the key elements in the study of stability are the steady-state and the perturbations. Since the development of rigorous analytical models for most ADDM systems is difficult, if not impossible, the definitions of these key elements proposed in this paper constitute a general framework for investigating stability. For a given ADDM system, the definitions are based on performance indices that must be judiciously identified by the system architect and are likely to be unique to it. While a comprehensive study of all possible perturbations is too complex and time-consuming, this paper focuses on a key subset of perturbations that are important and likely to occur with greater frequency. To facilitate the understanding of stability in representative real-world systems, this paper reports the analysis of two basic manifestations of ADDM systems reported in the literature: (i) a decentralized military command and control problem, MFAD, and (ii) a novel distributed algorithm with soft reservation for efficient scheduling and congestion mitigation in railway networks, RYNSORD. Stability analysis of MFAD and RYNSORD yields key stable and unstable conditions.
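
    The stability definitions above can be made concrete with a small classifier. The sketch below is a minimal interpretation assuming a scalar error index and an illustrative bound; the paper's actual performance indices are system-specific:

```python
# Minimal sketch of the stability classification: steady-state is defined by
# a bound on a measured error index; a run is strongly stable, marginally
# stable, or unstable depending on the post-perturbation behavior.
# The error series and bound below are illustrative assumptions.

def classify(error, bound, perturb_at, settle_window=10):
    """error: per-step error index; perturbation injected at perturb_at."""
    def steady(t):  # steady-state: error stays within bound over a window
        w = error[t:t + settle_window]
        return len(w) == settle_window and all(e <= bound for e in w)

    if not steady(0):
        raise ValueError("system must start in a steady-state")
    # first time after the perturbation where steady-state resumes
    t = next((t for t in range(perturb_at, len(error)) if steady(t)), None)
    if t is None:
        return "unstable"            # never returns to steady-state
    before = max(error[:perturb_at])
    after = max(error[t:])
    return "strongly stable" if after <= before else "marginally stable"

series = [0.1] * 20 + [0.9, 0.7, 0.4, 0.2] + [0.15] * 20
print(classify(series, bound=0.3, perturb_at=20))  # -> marginally stable
```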

  • Speech Enhancement Using Nonlinear Microphone Array Based on Noise Adaptive Complementary Beamforming

    Hiroshi SARUWATARI  Shoji KAJITA  Kazuya TAKEDA  Fumitada ITAKURA

    PAPER-Engineering Acoustics
    Vol: E83-A No:5  Page(s): 866-876

    This paper describes an improved complementary beamforming microphone array based on a new noise-adaptation algorithm. Complementary beamforming uses two beamformers designed to have directivity patterns complementary to each other. In this system, during a pause in the target speech, the two directivity patterns are adapted to the noise directions of arrival so that the expected value of each noise power spectrum is minimized in the array output. Using this technique, we can realize directional nulls for each noise source even when the number of sound sources exceeds the number of microphones. To evaluate the effectiveness, speech enhancement and speech recognition experiments are performed in computer simulations with a two-element array and three sound sources under various noise conditions. In comparison with the conventional adaptive beamformer and the conventional spectral subtraction method cascaded with the adaptive beamformer, it is shown that (1) the proposed array improves the signal-to-noise ratio (SNR) of degraded speech by more than 6 dB when the interfering noise is two speakers and the input SNR is below 0 dB, (2) the proposed array improves the SNR by about 2 dB when the interfering noise is babble noise, and (3) the recognition rate improves by more than 18% when the interfering noise is two speakers or two overlapped signals of several speakers at an input SNR of 10 dB.
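
    For context, the conventional spectral subtraction baseline that the proposed array is compared against can be sketched as follows (a generic textbook version with illustrative parameters, not the authors' implementation):

```python
# Sketch of the *conventional* spectral subtraction baseline (not the
# proposed complementary beamformer). Noise magnitude is estimated during
# a target-speech pause; frame/hop sizes are illustrative.
import numpy as np

def spectral_subtraction(noisy, noise_only, frame=256, hop=128):
    win = np.hanning(frame)
    # average noise magnitude spectrum from the speech pause
    noise_frames = [np.abs(np.fft.rfft(win * noise_only[i:i + frame]))
                    for i in range(0, len(noise_only) - frame, hop)]
    noise_mag = np.mean(noise_frames, axis=0)
    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(win * noisy[i:i + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract, floor at 0
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out
```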

  • A Distributed Approach against Computer Viruses Inspired by the Immune System

    Takeshi OKAMOTO  Yoshiteru ISHIDA

    PAPER-Communication and Computer Architecture/Assurance Systems
    Vol: E83-B No:5  Page(s): 908-915

    More than forty thousand computer viruses have appeared since the first one, and on average six new viruses appear every day. The enormous expansion of computer networks has opened the threat of explosive virus spread. In this paper, we propose a distributed, agent-based approach against computer viruses that exploits the computer network itself. Our system is composed of an immunity-based subsystem, similar to the biological immune system, and a recovery subsystem, similar to recovery by cell division. The immunity-based subsystem recognizes "non-self" (which includes computer viruses) using "self" information, employing agents analogous to an antibody, a natural killer cell, and a helper T-cell. The recovery subsystem uses a copy agent, which sends an uninfected copy to an infected computer on the LAN, or receives one from an uninfected computer. We implemented a prototype in Java™, a multi-platform language. In experiments, we confirmed that the proposed system works against some existing computer viruses that infect MS-DOS™ programs.
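
    The "self"/"non-self" recognition idea can be illustrated with a minimal integrity checker: hashes of known-clean files form the "self" database, and any deviation is flagged as "non-self". This is a simplified sketch; the paper's antibody, killer-cell, helper-T-cell, and copy agents are not modeled:

```python
# Minimal "self"/"non-self" sketch: record hashes of clean files ("self"),
# then flag any file whose hash no longer matches as "non-self" (possibly
# infected). File paths are illustrative.
import hashlib
import pathlib

def snapshot(paths):
    """Build the 'self' database from known-clean files."""
    return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in paths}

def detect_nonself(self_db):
    """Return files whose current contents deviate from 'self'."""
    infected = []
    for p, digest in self_db.items():
        current = hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
        if current != digest:
            infected.append(p)  # candidate for a copy agent to restore
    return infected
```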

  • An Automatic Signature Scheme Using a Compiler in Distributed Systems

    Whe-Dar LIN  Jinn-Ke JAN

    PAPER-Communication and Computer Architecture/Assurance Systems
    Vol: E83-B No:5  Page(s): 935-941

    A novel protocol scheme is proposed for compiling programs and running software packages. Infection of a file can be detected by checking the consistency of the original file with its accompanying digital signature. When an executable program is created, however, it may be infected by a virus before the signature is attached; such an infection cannot be detected by signature verification, and its origin cannot be identified either. We therefore propose a signature scheme in which signing takes place in the same atomic step as the creation of the executable program. Our security-related cryptographic protocol is used to establish secure communication over insecure open networks and distributed systems. When a server compiles a source program, the compiler automatically creates both the executable program and its signature, so no virus can infect the executable without being detected. In our scheme, the server signature is created from a set of proxy secret integers, which is calculated from a compiler maker's secret key. Each server compiler is held by its corresponding client user and can be used only when a server secret value is fed into it. Infections of files are detected by ordinary server digital signatures; combined with the signature against infection in the preprocessing step, this enables the origin of an infection to be identified. We also provide a message recovery capability for restoring infected files to their original contents. The most natural extension of this scheme is a server-based signature, integrated with application packages, that allows the client and the server to commit themselves to one another.
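
    The core idea, signing in the same atomic step as compilation, can be sketched as follows. This uses an ordinary Ed25519 signature from the 'cryptography' package as a stand-in; the paper's proxy-secret-integer scheme derived from the compiler maker's key is not reproduced:

```python
# Sketch: the compiler driver produces the executable and immediately
# attaches a signature, leaving no window for undetected infection.
# Ed25519 here is a stand-in, not the paper's proxy-key scheme.
import subprocess
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def compile_and_sign(source, output, private_key):
    subprocess.run(["cc", source, "-o", output], check=True)  # compile step
    binary = open(output, "rb").read()
    signature = private_key.sign(binary)                      # sign immediately
    open(output + ".sig", "wb").write(signature)

def verify(output, public_key):
    binary = open(output, "rb").read()
    sig = open(output + ".sig", "rb").read()
    public_key.verify(sig, binary)  # raises InvalidSignature if tampered

key = Ed25519PrivateKey.generate()
compile_and_sign("main.c", "a.out", key)   # illustrative file names
verify("a.out", key.public_key())
```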

  • Verification of a Microcomputer Program Specification Embedded in a Reactive System

    Yasunori ISHIHARA  Kiichiro NINOMIYA  Hiroyuki SEKI  Daisuke TAKAHARA  Yutaka YAMADA  Shigesada OMOTO

    PAPER-Software Engineering
    Vol: E83-D No:5  Page(s): 1082-1091

    This paper proposes a model checking method for microcomputer programs. To deal with the state explosion problem, we adopt a compositional verification approach. Based on the proposed method, a microcomputer program for a real-life air conditioner is verified. The program is large enough to cause state explosion. Among fourteen typical properties of the program, five are successfully verified by our method.

  • High-Availability Scheme for Shared Servers of Cluster Systems Using Commands Transfer

    Yuzuru MAYA  Soichi ISONO  Akira OHTSUJI

    PAPER-Computer Systems
    Vol: E83-D No:5  Page(s): 1073-1081

    For cluster systems consisting of multiple processing nodes and a pair of shared servers (an on-line and a backup shared server), we propose a hot-standby scheme for the shared servers. In this scheme, when the on-line shared server receives a command from a node, it sends only update commands, together with their data identifiers, to the backup shared server. Both servers execute each update command independently, so the command results on the on-line shared server are identical to those on the backup. When the on-line shared server fails, the backup reconstructs the shared data using its own shared data and the user data from each node. We evaluated the system recovery time and the performance overhead of this scheme: the performance overhead is negligible, and the system recovery time is shortened to 20 seconds.
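
    A minimal sketch of the command-transfer idea: only update commands (with their data identifiers) are forwarded to the backup, and both servers execute them independently so their states stay identical. Class and method names are illustrative:

```python
# Sketch of hot-standby via command transfer: reads stay local, updates are
# forwarded and replayed independently on the backup.

class SharedServer:
    def __init__(self):
        self.data = {}

    def execute(self, command, key, value=None):
        if command == "read":
            return self.data.get(key)
        self.data[key] = value                # update command

class OnlineServer(SharedServer):
    def __init__(self, backup):
        super().__init__()
        self.backup = backup

    def execute(self, command, key, value=None):
        result = super().execute(command, key, value)
        if command != "read":                 # reads are not forwarded
            self.backup.execute(command, key, value)
        return result

backup = SharedServer()
online = OnlineServer(backup)
online.execute("write", "x", 42)
assert backup.data == online.data             # states remain identical
```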

  • CM1: Communication Monitor for Applications Adaptive to Execution Environments

    Takamichi TATEOKA  Hideki SUNAHARA  Fumio TERAOKA  Yoshikatsu TADA

    PAPER
    Vol: E83-D No:5  Page(s): 1020-1027

    In this paper, we propose an architecture for environmental information services that makes it possible for applications to adapt to dynamically changing environments. These services provide the information applications need in order to adapt. Unlike other information services, the information provided includes not only raw information but also abstracted or policy-applied information, and this variety enables each application to choose a suitable level of detail: a simple adaptive application can use highly abstract information instead of the detailed raw information required by complicated adaptive applications. Policy-applied information lets adaptive applications share the user's decisions and cooperate with one another. Applications can adapt efficiently to changes in the environment since our architecture provides notification of these changes, and such notification does not disturb applications because the conditions for notification are controlled by each application. We apply the proposed architecture to a mobile internetworking environment and present a prototype implementation of environmental information services called CM1. We also discuss our preliminary evaluation of CM1 with the Personal File System, a network file system with dynamic adaptation features for mobile internetworking environments.
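
    The per-application notification conditions can be sketched as a simple publish/subscribe monitor in which each application registers its own predicate; the names below are illustrative, not CM1's actual API:

```python
# Sketch of per-application notification conditions: an application is
# notified only when its own registered predicate holds, so notifications
# do not disturb uninterested applications.

class EnvironmentMonitor:
    def __init__(self):
        self.subscribers = []   # (condition, callback) pairs

    def subscribe(self, condition, callback):
        self.subscribers.append((condition, callback))

    def update(self, env):
        for condition, callback in self.subscribers:
            if condition(env):
                callback(env)

monitor = EnvironmentMonitor()
# a simple adaptive application: only cares whether bandwidth is "low"
monitor.subscribe(lambda e: e["bandwidth_kbps"] < 64,
                  lambda e: print("switching to compact file transfers"))
monitor.update({"bandwidth_kbps": 32})   # triggers the callback
monitor.update({"bandwidth_kbps": 512})  # silently ignored
```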

  • Complete Exchange Algorithms in Wormhole-Routed Torus Networks

    Si-Gwan KIM  Seung Ryoul MAENG  Jung Wan CHO

    PAPER-Algorithms
    Vol: E83-D No:4  Page(s): 766-776

    We present efficient all-to-all personalized communication (complete exchange) algorithms for a 2D torus in wormhole-routed networks. Our complete exchange algorithms reduce the number of start-ups by a factor of up to 2; the start-up count is a good metric for network performance in wormhole networks. Our algorithms divide the whole network into 2 × 2 basic cells, giving two contention-free subnetworks of size N/2 × N/2. After specially designated nodes called master nodes have collected the messages whose destinations are the rest of their basic cells, only the master nodes perform a complete exchange on the reduced network. When this complete exchange among master nodes is finished, the master nodes distribute the messages to the rest of the nodes, which yields the desired complete exchange. We then present a modified algorithm that further reduces the data transmission time at the cost of start-up time. After presenting the new algorithms, we analyze their time complexities and compare several algorithms, showing that our practical algorithm is more efficient by a factor of 2 in the required start-up time and is therefore well suited to wormhole-routed networks.

  • A Fast Method of Calculating High-Order Backward LP Coefficients for Wideband CELP Coders

    Masahiro SERIZAWA  Kazunori OZAWA  Atsushi MURASHIMA

    PAPER-Speech and Hearing
    Vol: E83-D No:4  Page(s): 870-875

    This paper proposes a fast method of calculating high-order backward Linear Prediction (LP) coefficients for wideband Code Excited LP (CELP) coders operating at around 16 kbit/s. The fast calculation is achieved by a recursive calculation of the high-order autocorrelation of the decoded signal. The recursive calculation can be employed thanks to a novel method of converting the autocorrelation of the decoded signal to that of the residual signal. The high-order backward LP coefficients are then computed from the autocorrelation of the residual signal using the Levinson-Durbin (LD) procedure. The conversion approximately performs inverse filtering using LP coefficients that represent the corresponding envelope spectrum. Owing to the recursion, the proposed method achieves a 30% to 45% reduction in the computation required to calculate the high-order backward LP coefficients compared to the conventional method. Subjective tests show that a wideband Multi-Pulse based CELP (MP-CELP) coder at 16 kbit/s with the proposed method achieves coding quality comparable to that of the conventional method, with a 35% reduction in the computation needed to calculate the backward LP coefficients.
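
    The LD procedure mentioned above is the standard Levinson-Durbin recursion, sketched below; the paper's contribution, the recursive high-order autocorrelation update, is not reproduced here:

```python
# Standard Levinson-Durbin recursion: converts an autocorrelation sequence
# into LP coefficients in O(p^2) operations.
import numpy as np

def levinson_durbin(r, order):
    """r: autocorrelation r[0..order]; returns LP coefficients a[1..order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update previous coefficients
        a[i] = k
        err *= (1.0 - k * k)                  # shrink the prediction error
    return a[1:]

print(levinson_durbin(np.array([1.0, 0.5, 0.25]), order=2))
```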

  • Creating Virtual Environment Based on Video Data with Forward Motion

    Xiaohua ZHANG  Hiroki TAKAHASHI  Masayuki NAKAJIMA

    PAPER-Multimedia Pattern Processing
    Vol: E83-D No:4  Page(s): 931-936

    The construction of photo-realistic 3D scenes from video data is an active and competitive research area in computer vision, image processing, and computer graphics. In this paper we describe our recent work in this area. Unlike most methods of 3D scene construction, we consider the generation of virtual environments from a video sequence captured by a camera moving forward. Each frame is decomposed into sub-images, which are registered using the Levenberg-Marquardt iterative algorithm to estimate motion parameters. The registered sub-images are then pasted together to form a pseudo-3D space. By controlling its position and direction, a virtual camera can walk through this virtual space and generate novel 2D views that give an immersive impression. Even when the virtual camera moves deep into the virtual environment, it still obtains novel views at relatively high resolution.

  • NP-Completeness of Reallocation Problems with Restricted Block Volume

    Hiroyoshi MIWA  Hiro ITO

    PAPER
    Vol: E83-A No:4  Page(s): 590-597

    A reallocation problem asks whether blocks can be moved from their current boxes to their destination boxes in a number of moves less than or equal to a given positive integer. The problem is simply defined and has many practical applications. We previously proved the following results: the reallocation problem in which every block has a volume of one unit can be solved in linear time, but the reallocation problem with unrestricted block volumes is NP-complete, even if no blocks have bypass boxes. The computational complexity of reallocation problems in which the range of block volumes is restricted, however, has remained open. We prove that the reallocation problem is NP-complete even if every block has a volume of one or two units and no blocks have bypass boxes. Furthermore, we prove that the problem remains NP-complete even if there are only two blocks whose volume is a fixed integer greater than or equal to two, the volume of every other block is one unit, and no blocks have bypass boxes. Next, we consider blocks that must stay in the same box in both the initial and final states but may be moved temporarily to another box to make room for other blocks. We prove that the reallocation problem with such blocks is also NP-complete even if every block has a volume of one unit.
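
    To make the problem definition concrete, the following brute-force sketch decides a small reallocation instance by breadth-first search over block placements (exponential in general, as the NP-completeness results lead one to expect); the instance data are illustrative:

```python
# Brute-force sketch of the reallocation decision problem: can the blocks
# reach their destination boxes in at most k single-block moves, never
# exceeding box capacities?
from collections import deque

def reallocatable(start, goal, volume, capacity, k):
    """start/goal: box index per block; volume: per block; capacity: per box."""
    def free(state):
        used = [0] * len(capacity)
        for b, box in enumerate(state):
            used[box] += volume[b]
        return [c - u for c, u in zip(capacity, used)]

    queue, seen = deque([(tuple(start), 0)]), {tuple(start)}
    while queue:
        state, moves = queue.popleft()
        if state == tuple(goal):
            return True
        if moves == k:
            continue
        room = free(state)
        for b, box in enumerate(state):
            for dest in range(len(capacity)):
                if dest != box and volume[b] <= room[dest]:
                    nxt = state[:b] + (dest,) + state[b + 1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, moves + 1))
    return False

# two unit blocks must swap boxes via a third, empty box: needs 3 moves
print(reallocatable([0, 1], [1, 0], volume=[1, 1], capacity=[1, 1, 1], k=3))
```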

  • Server-Based Maintenance Approach for Computer Classroom Workstations

    Chiung-San LEE

    PAPER-Software Systems
    Vol: E83-D No:4  Page(s): 807-814

    This paper presents a server-based approach to maintaining software integrity for all workstations in a computer classroom. The approach offers several advantages: (1) it is applicable to the FAT (file allocation table) and NTFS file systems, (2) it restores all workstations to a workable state, (3) it quickly adds or removes software systems on all workstations for teachers conducting new courses, and (4) it automatically changes each computer's name and IP (Internet Protocol) address to an appointed one. The basic concept is to install the complete software stack, including the operating system and applications, on a reference workstation, make an image copy of that workstation's hard disk, store the image on a network server, and restore it from the server to the remaining workstations. To change the computer name and IP address automatically, this paper presents a searching heuristic for finding their locations in the image file. The heuristic is a modification of the Boyer-Moore (BM) algorithm and improves the BM algorithm's performance by over 9%.
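
    The BM algorithm that the heuristic modifies can be sketched with its bad-character rule alone; this is a textbook version for locating a marker such as the computer name in a disk image, not the paper's modified heuristic:

```python
# Boyer-Moore search with the bad-character rule only: compare the pattern
# right to left and skip ahead on a mismatch.

def bm_search(text: bytes, pattern: bytes) -> int:
    """Return index of the first occurrence of pattern in text, or -1."""
    last = {b: i for i, b in enumerate(pattern)}    # rightmost position of each byte
    m, n = len(pattern), len(text)
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1                                  # compare right to left
        if j < 0:
            return i                                # full match
        i += max(1, j - last.get(text[i + j], -1))  # bad-character shift
    return -1

image = b"...\x00COMPUTERNAME=LAB-042\x00..."       # illustrative image bytes
print(bm_search(image, b"COMPUTERNAME="))
```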

  • Knowledge-Based Software Composition Using Rough Set Theory

    Yoshiyuki SHINKAWA  Masao J. MATSUMOTO

    PAPER-Theory and Methodology
    Vol: E83-D No:4  Page(s): 691-700

    Software composition is one of the major concerns in component-based software development (CBSD). In this paper, we present a formal approach to constructing software systems from requirements models using available components. We focus on the knowledge residing in the requirements and in the components in order to deal with these heterogeneous concepts. The approach consists of three steps. The first step selects components adaptable to the requirements model: the requirements and the components are transformed into Σ algebras, and component adaptability is evaluated through Σ homomorphisms; Rough Set Theory (RST) is used to make the carriers of the two Σ algebras, derived from the requirements and the components, common. The second step identifies the control structure of the requirements: decision tables represent the knowledge on the requirements, and RST is used to optimize the control structure. The third step implements the control structure as glue code that invokes the components appropriately. Although this paper focuses mainly on enterprise back-office applications, the approach can easily be applied to other domains, since it assumes only that the requirements are expressed in Colored Petri Nets (CPN), which can express many problem domains beyond enterprise back-office applications.
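
    The RST machinery used in the first two steps rests on indiscernibility classes and lower approximations, sketched below on an illustrative decision table (not an actual requirements model):

```python
# Minimal Rough Set Theory sketch: indiscernibility classes over chosen
# attributes and the lower approximation of a decision class.

def partition(rows, attrs):
    """Group rows by their values on attrs (indiscernibility classes)."""
    classes = {}
    for row in rows:
        classes.setdefault(tuple(row[a] for a in attrs), []).append(row)
    return classes.values()

def lower_approximation(rows, attrs, target):
    """Rows certainly in the target set, judged only by attrs."""
    return [row
            for cls in partition(rows, attrs)
            if all(r in target for r in cls)
            for row in cls]

rows = [{"type": "batch",  "volume": "high", "component": "C1"},
        {"type": "batch",  "volume": "low",  "component": "C1"},
        {"type": "online", "volume": "low",  "component": "C2"}]
target = [r for r in rows if r["component"] == "C1"]
print(lower_approximation(rows, ["type"], target))   # both 'batch' rows
```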

  • Detection of Conserved Domains in Protein Sequences Using a Maximum-Density Subgraph Algorithm

    Hideo MATSUDA

    PAPER
    Vol: E83-A No:4  Page(s): 713-721

    In this paper, we propose a method for detecting conserved domains in a set of amino acid sequences belonging to a protein family. The method detects the domains as follows: first, generate fixed-length subsequences from the sequences; second, construct a weighted graph connecting any two subsequences (vertices) whose similarity exceeds a predefined threshold; third, search for the maximum-density subgraph of each connected component of the graph; finally, identify conserved domains in the sequences by combining the results of the previous step. Applying the method to several protein families with complex conserved domains, we found that it detects those domains even when some are only weakly conserved.
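
    The first two steps, generating fixed-length subsequences and linking similar pairs, can be sketched as follows; a naive identity count stands in for an amino-acid similarity score, and the maximum-density-subgraph search is omitted:

```python
# Sketch of the graph-construction steps: fixed-length subsequences as
# vertices, edges between pairs scoring above a threshold.

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def similarity(a, b):
    return sum(x == y for x, y in zip(a, b))  # naive match count

def build_graph(sequences, k=8, threshold=6):
    nodes = [km for s in sequences for km in kmers(s, k)]
    edges = [(i, j, similarity(nodes[i], nodes[j]))
             for i in range(len(nodes))
             for j in range(i + 1, len(nodes))
             if similarity(nodes[i], nodes[j]) >= threshold]
    return nodes, edges   # weighted graph for the density search

nodes, edges = build_graph(["MKVLAAGICQW", "MKVLSAGICQF"], k=8, threshold=6)
print(len(nodes), "vertices,", len(edges), "edges")
```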

  • A Parallel Algorithm for Constructing Strongly Convex Superhulls of Points

    Carla Denise CASTANHO  Wei CHEN  Koichi WADA

    PAPER
    Vol: E83-A No:4  Page(s): 722-732

    Let S be a set of n points in the plane and CH(S) be the convex hull of S. We consider the problem of constructing an approximate convex hull that contains CH(S) with strong convexity. An ε-convex δ-superhull of S is a convex polygon P satisfying the following conditions: (1) P has at most O(n) vertices, (2) P contains S, (3) no vertex of P lies farther than δ outside CH(S), and (4) P remains convex even if its vertices are perturbed by as much as ε. The parameters ε and δ represent the strength of convexity of P and the degree of approximation of P to CH(S), respectively. This paper presents the first parallel method for the problem. We show that an ε-convex (8+4√2)ε-superhull of S can be constructed in O(log n) time using O(n) processors, or in O(log n) time using O(n/log n) processors if S is sorted, in the EREW-PRAM model. We implemented the algorithm and found that its average performance is even better than the theoretical analysis shows: the results are more strongly convex and approximate CH(S) much more closely.

  • Architecture of a VOD System with Proxy Servers

    Kyung-Ah AHN  Hoon CHOI  Won-Ok KIM

    PAPER-Multimedia Systems
    Vol: E83-B No:4  Page(s): 850-857

    We present an architecture for a VOD system employing proxy servers. The proposed system provides efficient and reliable VOD services and avoids the problems of traditional centralized, hierarchical, and distributed VOD architectures. Proxy servers are placed between the video servers and the user systems; a proxy server is a small video server that provides not only caching but also intelligence, such as VCR-like video stream control and navigation of other proxy/video servers to locate a selected video program. A VOD system with the proposed architecture can serve more users because it reduces the workload of the video servers and the network traffic. We provide a performance model of the system and analyze its service availability. The proposed architecture shows better performance and availability than the traditional VOD architectures.

  • Wavelet-Based Broadband Beamformers with Dynamic Subband Selection

    Yung-Yi WANG  Wen-Hsien FANG

    PAPER-Antenna and Propagation
    Vol: E83-B No:4  Page(s): 819-826

    In this paper, we present a new approach to the design of partially adaptive broadband beamformers with the generalized sidelobe canceller (GSC) as the underlying structure. The approach designs the blocking matrix using a set of P-regular, M-band wavelet filters, whose vanishing-moment property is shown to meet the requirements of a blocking matrix in the GSC structure. Furthermore, based on the subband decomposition property of these wavelet filters, we introduce a new dynamic subband selection scheme following the blocking matrix. The scheme retains only the principal subband components of the blocking-matrix outputs, chosen by a prescribed statistical hypothesis test, and thus further reduces the number of weights in adaptive processing. The overall computational complexity, which is dictated mainly by the number of adaptive weights, is therefore substantially reduced. Simulations show that the new approach offers performance comparable to existing fully adaptive beamformers but with reduced computation.

  • Approximating the Maximum Weight of Linear Codes is APX-Complete

    Toshiya ITOH

    PAPER
    Vol: E83-A No:4  Page(s): 606-613

    The minimum distance of a linear code C is a useful metric for estimating the error-correction upper bound of C, and maximum likelihood decoding of a linear code C is also of practical importance and theoretical interest. These problems are known to be NP-hard to approximate within any constant relative error to the optimum. As a related problem, we consider the maximization problem MAX-WEIGHT: given a generator matrix of a linear code C, find a codeword c ∈ C whose weight is as close to the maximum weight of C as possible. It is known that MAX-WEIGHT ∉ PTAS unless P=NP; however, no nontrivial approximation upper and lower bounds were known. In this paper, we investigate the complexity of MAX-WEIGHT to make the approximation upper and lower bounds more precise, and show that (1) MAX-WEIGHT is APX-complete; (2) MAX-WEIGHT is approximable within relative error 1/2 of the optimum; (3) MAX-WEIGHT is not approximable within relative error 1/10 of the optimum unless P=NP.
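
    For intuition, the weight landscape of a small code can be explored by brute force: enumerate all 2^k codewords from a generator matrix and take the maximum Hamming weight. This is exponential in k, which is why approximation hardness matters; the matrix below is illustrative:

```python
# Brute-force illustration of MAX-WEIGHT for a binary [n, k] code given by
# its generator matrix G (message . G over GF(2)).
from itertools import product

def max_weight(G):
    k = len(G)
    best = 0
    for msg in product([0, 1], repeat=k):           # all 2^k messages
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]             # encode over GF(2)
        best = max(best, sum(codeword))             # Hamming weight
    return best

G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]
print(max_weight(G))   # -> 4
```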
