
Keyword Search Result

[Keyword] OMP (3945 hits)

Showing 2981-3000 of 3945 hits

  • Complexity of the Type-Consistency Problem for Acyclic Object-Oriented Database Schemas

    Shougo SHIMIZU  Yasunori ISHIHARA  Junji YOKOUCHI  Minoru ITO  

     
    PAPER-Databases

      Vol:
    E84-D No:5
      Page(s):
    623-634

    The method invocation mechanism is one of the essential features of object-oriented programming languages. It contributes to data encapsulation and code reuse, but it carries a risk of runtime type errors. In object-oriented databases (OODBs), a runtime error causes a rollback. It is therefore desirable to ensure that a given OODB schema is consistent, i.e., that no runtime error occurs during the execution of queries under any database instance of the schema. This paper discusses the computational complexity of the type-consistency problem. As a model of OODB schemas, we adopt the update schemas introduced by Hull et al., which have all of the basic features of OODBs, such as class hierarchies, inheritance, and complex objects. The type-consistency problem for update schemas is known to be undecidable. We introduce a subclass of update schemas, called acyclic schemas, and show that the type-consistency problem for acyclic schemas is in coNEXPTIME. Furthermore, we show that the problem for recursion-free acyclic schemas is coNEXPTIME-hard and that the problem for retrieval acyclic schemas is PSPACE-complete.
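
    The consistency question can be illustrated with a toy model (an illustration only, not the update-schema formalism of Hull et al.; the class and method names are hypothetical): a method invocation fails at run time when lookup along the class hierarchy finds no matching method.

```python
# Toy class table: name -> (parent or None, {method: result class}).
# All names here are hypothetical.
SCHEMA = {
    "Object":   (None,     {}),
    "Person":   ("Object", {"spouse": "Person"}),
    "Employee": ("Person", {"boss": "Employee"}),
}

def resolve(cls, method, schema):
    """Walk up the hierarchy looking for `method`; None means a
    run-time type error (no class on the path defines it)."""
    while cls is not None:
        parent, methods = schema[cls]
        if method in methods:
            return methods[method]
        cls = parent
    return None

def invocation_safe(cls, chain, schema):
    """Statically check a chain of invocations o.m1().m2()...
    for an object of class `cls`: every step must resolve."""
    for m in chain:
        cls = resolve(cls, m, schema)
        if cls is None:
            return False
    return True
```

    Here `invocation_safe("Employee", ["boss", "boss"], SCHEMA)` holds, while `invocation_safe("Employee", ["spouse", "boss"], SCHEMA)` fails, since `spouse` yields a `Person`, which has no `boss` method — the kind of error a consistent schema must rule out for every database instance.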

  • On Detecting Digital Line Components in a Binary Image

    Tetsuo ASANO  Koji OBOKATA  Takeshi TOKUYAMA  

     
    PAPER

      Vol:
    E84-A No:5
      Page(s):
    1120-1129

    This paper addresses the problem of detecting digital line components in a given binary image consisting of n black dots arranged on an N × N integer grid. The most popular method in computer vision for this purpose is the Hough transform, which maps each black point to a sinusoidal curve in the dual plane and detects digital line components by voting. We start with a definition of the line components to be detected and present several different algorithms based on this definition. At one extreme is the conventional algorithm based on voting on the subdivided dual plane; at the other is an algorithm based on a topological walk on the arrangement of sinusoidal curves defined by the Hough transform. An intermediate algorithm based on half-planar range counting is also presented. Finally, we discuss how to incorporate several practical conditions associated with minimum density and restricted maximality.
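
    As a concrete reminder of the voting formulation (a minimal sketch of standard Hough voting on a subdivided dual plane, not of the paper's topological-walk or range-counting algorithms):

```python
import math
from collections import Counter

def hough_votes(points, n_theta=180, rho_step=1.0):
    """Each black dot (x, y) votes along its sinusoid
    rho = x*cos(theta) + y*sin(theta) in the subdivided dual plane."""
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))] += 1
    return votes

# 20 dots on the digital line y = x all vote for the same cell:
# theta = 135 degrees (t = 135), rho = 0
dots = [(i, i) for i in range(20)]
votes = hough_votes(dots)
```

    A peak cell in the accumulator corresponds to a candidate digital line through many dots.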

  • A Digit-Recurrence Algorithm for Cube Rooting

    Naofumi TAKAGI  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E84-A No:5
      Page(s):
    1309-1314

    A digit-recurrence algorithm for cube rooting is proposed. In cube rooting, the digit-recurrence equation of the residual includes the square of the partial result of the cube root. In the proposed algorithm, the square of the partial result is kept, and the square, as well as the residual, is updated by addition/subtraction, shift, and multiplication by one or two digits. Different specific versions of the algorithm are possible, depending on the radix, the digit set of the cube root, and so on. Any version of the algorithm can be implemented as a sequential (folded) circuit or a combinational (unfolded) circuit suitable for VLSI realization.
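
    A decimal (radix-10) sketch may help fix ideas — this is the generic pencil-and-paper digit recurrence, shown only to illustrate how the square of the partial result enters the digit-selection step; the paper's hardware versions use redundant digit sets and small-digit multiplications instead:

```python
def icbrt_digitwise(x):
    """Integer cube root of x >= 0 by digit recurrence, radix 10.
    The selection test expands (10*y + d)**3 - (10*y)**3, so the
    square of the partial result y appears explicitly (300*y*y*d)."""
    s = str(x).zfill(((len(str(x)) + 2) // 3) * 3)
    y = 0  # partial result (cube root digits so far)
    w = 0  # residual: current prefix minus y**3
    for i in range(0, len(s), 3):
        w = 1000 * w + int(s[i:i + 3])  # shift in the next three digits
        d = 9
        while 300 * y * y * d + 30 * y * d * d + d ** 3 > w:
            d -= 1
        w -= 300 * y * y * d + 30 * y * d * d + d ** 3
        y = 10 * y + d
    return y
```

    For example, `icbrt_digitwise(2_000_000)` yields 125, since 125³ = 1 953 125 ≤ 2 000 000 < 126³.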

  • Round Optimal Parallel Algorithms for the Convex Hull of Sorted Points

    Naoki OSHIGE  Akihiro FUJIWARA  

     
    PAPER

      Vol:
    E84-A No:5
      Page(s):
    1152-1160

    In this paper, we present deterministic parallel algorithms for the convex hull of sorted points and their application to a related problem. The algorithms are proposed for the coarse grained multicomputer (CGM) model. We first propose a cost-optimal parallel algorithm for computing the problem with a constant number of communication rounds for n/p ≥ p^2, where n is the size of the input and p is the number of processors. Next, we propose a more involved cost-optimal algorithm for n/p ≥ p^ε, where 0 < ε < 2. From these two results, we can compute the convex hull of sorted points in O(n/p) computation time with a constant number of communication rounds for n/p ≥ p^ε, where ε > 0. Finally, we show an application of our convex hull algorithms: we solve the convex layers for d lines in O((n log n)/p) computation time with a constant number of communication rounds. This algorithm is also cost optimal for the problem.
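
    The sequential building block behind such bounds is linear-time hull construction on x-sorted input (a standard monotone-chain sketch, not the paper's CGM communication scheme); O(n/p) local work per processor is what makes a cost-optimal parallel bound plausible:

```python
def hull_of_sorted(points):
    """Convex hull of points pre-sorted by (x, y), in O(n) time
    (Andrew's monotone chain)."""
    if len(points) <= 2:
        return list(points)

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def chain(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = chain(points), chain(points[::-1])
    return lower[:-1] + upper[:-1]  # hull in counter-clockwise order

hull = hull_of_sorted([(0, 0), (1, -1), (1, 0), (1, 1), (2, 0)])
```

    Because each point is pushed and popped at most once, sortedness removes the usual O(n log n) sorting cost.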

  • Bandwidth Allocation Considering Priorities among Multimedia Components in Mobile Networks

    Shigeki SHIOKAWA  Shuji TASAKA  

     
    PAPER-Wireless Communication Technology

      Vol:
    E84-B No:5
      Page(s):
    1344-1355

    This paper proposes a bandwidth allocation scheme that mitigates the degradation of communication quality caused by handoffs in mobile multimedia networks. In general, a multimedia call consists of several component calls; for example, a video phone call consists of a voice call and a video call. In realistic environments, the component calls of a single multimedia call may have different quality-of-service (QoS) requirements, and there often exist priorities among them with respect to their importance for communication. When the available bandwidth is not sufficient for a handoff call, the proposed scheme eliminates a low-priority component call and defers bandwidth allocation for component calls whose delay-related QoS is not strict. Moreover, in the allocation, the scheme gives priority to new calls and handoff calls over deferred calls, and it also performs bandwidth reallocation for eliminated component calls. Through computer simulation, we evaluate performance measures such as the call dropping probability and show the effectiveness of the proposed scheme.
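
    The core allocation decision can be sketched as follows (a simplification with hypothetical component parameters; the paper's scheme additionally prioritizes new and handoff calls over deferred ones and reallocates bandwidth to eliminated components later):

```python
def admit_handoff(components, available):
    """components: (name, bandwidth, priority, delay_tolerant) tuples,
    lower priority number = more important. Allocates in priority order;
    a component that does not fit is deferred if its delay-related QoS
    is loose, otherwise eliminated."""
    allocated, deferred, eliminated = [], [], []
    for name, bw, _prio, delay_tolerant in sorted(components, key=lambda c: c[2]):
        if bw <= available:
            available -= bw
            allocated.append(name)
        elif delay_tolerant:
            deferred.append(name)   # allocate later when bandwidth frees up
        else:
            eliminated.append(name)
    return allocated, deferred, eliminated

# video phone call handing off into a cell with 3 spare bandwidth units
call = [("voice", 1, 0, False), ("video", 4, 1, True)]
```

    With 3 units available, the voice component is admitted immediately and the delay-tolerant video component is deferred rather than dropping the whole call.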

  • Composing Collaborative Component Systems Using Colored Petri Nets

    Yoshiyuki SHINKAWA  Masao J. MATSUMOTO  

     
    PAPER

      Vol:
    E84-A No:5
      Page(s):
    1209-1217

    Adapting software components to the requirements is one of the key concerns in Component-Based Software Development (CBSD). In this paper, we propose a formal approach to composing component-based systems that are adaptable to the requirements. We focus on the functional aspects of software components and requirements, which are expressed as S-sorted functions. These S-sorted functions are transformed into Colored Petri Net (CPN) models in order to evaluate the connectivity between components and the adaptability of composed systems to the requirements. Connectivity is measured on the basis of colors, or data types, in CPN, while adaptability is measured on the basis of functional equivalence. We introduce simple glue codes to connect the components to each other. The paper focuses on business applications; however, the proposed approach can be applied to other domains as far as functional adaptability is concerned.
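
    Connectivity-by-color can be illustrated in miniature (the component names and color sets below are invented for illustration; the actual approach works on CPN models derived from S-sorted functions):

```python
def connectable(producer, consumer, glue=None):
    """Two components connect when every output color (data type) of
    the producer is accepted by the consumer, possibly after a glue
    adapter renames/converts colors."""
    out_colors = set(producer["out"])
    if glue:
        out_colors = {glue.get(c, c) for c in out_colors}
    return out_colors <= set(consumer["in"])

# hypothetical business components
order_entry = {"out": {"OrderId", "Amount"}}
billing     = {"in":  {"OrderId", "Money"}}
```

    A direct connection fails because `Amount` is not a color `billing` accepts; a small glue that converts `Amount` to `Money` makes the composition well-typed.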

  • Superior Noise Performance and Wide Dynamic Range Erbium Doped Fiber Amplifiers Employing Variable Attenuation Slope Compensator

    Haruo NAKAJI  Motoki KAKUI  Hitoshi HATAYAMA  Chisai HIROSE  Hiroyuki KURATA  Masayuki NISHIMURA  

     
    PAPER-Optical Fibers and Cables

      Vol:
    E84-C No:5
      Page(s):
    598-604

    In order to realize automatic-level-controlled (ALC) erbium-doped fiber amplifiers (EDFAs) with both a wide dynamic range and good noise performance, we propose EDFAs employing an automatic power control (APC) scheme and a variable attenuation slope compensator (VASC). The VASC consists of two asymmetrical Mach-Zehnder interferometers (MZIs) concatenated in series, with thermo-optic (TO) heaters attached to the arms of each MZI. By adjusting the electric power supplied to the TO heaters, an almost linear attenuation slope can be varied by ±5 dB or more over the 30-nm operational wavelength band. The EDFA employing the APC scheme and the VASC exhibited a dynamic range as large as 20 dB with an output power variation as small as 0.7 dB, as good as that of an EDFA employing the APC scheme and a variable optical attenuator (VOA). The noise figure (NF) of the EDFA employing the VASC degraded by about 4.1 dB when the input power was increased by 20 dB, whereas the NF of the EDFA employing the VOA degraded by about 7.3 dB when the input power was increased by only 15 dB. The EDFA employing the VASC can thus realize ALC operation over a wider dynamic range with reduced noise figure degradation. In this EDFA, the power excursion was suppressed to less than 1.1 dB when the input signal level was changed between -23 dBm/ch and -18 dBm/ch with a rise/fall time of 8 ms.

  • Route Optimization by the Use of Two Care-of Addresses in Hierarchical Mobile IPv6

    Youn-Hee HAN  Joon-Min GIL  Chong-Sun HWANG  Young-Sik JEONG  

     
    PAPER

      Vol:
    E84-B No:4
      Page(s):
    892-902

    The IETF Mobile IPv6 enables any IPv6 node to cache the care-of address associated with a mobile node's home address and to send packets directly to the mobile node at that care-of address using the IPv6 routing header. Support for route optimization is built in as a fundamental part of the protocol. Several hierarchical schemes built on top of Mobile IPv6 have been presented recently. These schemes separate micro-mobility from macro-mobility and exploit a mobile node's locality; they can reduce the number of signaling messages sent to the home network and improve handoff performance. However, existing hierarchical schemes do not achieve route optimization: when external correspondent nodes send packets to a mobile node, the packets are intercepted by an intermediate mobility agent, encapsulated, and routed to the mobile node. In this paper, we propose a new hierarchical scheme that enables any correspondent node to cache two care-of addresses: the mobile node's temporary address and the intermediate mobility agent's address. We also introduce two lifetimes for managing the two care-of addresses. Until the lifetime associated with the mobile node's temporary address expires, a correspondent node can send packets directly to the mobile node. If that lifetime expires but the lifetime associated with the intermediate mobility agent's address has not, the correspondent node sends packets to the intermediate mobility agent. This proposal reduces packet delivery delay and optimizes routing. Furthermore, based on the mobility of a mobile node, we achieve a lower binding-update frequency and longer lifetimes than the existing hierarchical schemes. Therefore, our proposal reduces both the binding-update bandwidth and the packet delivery bandwidth below those of IETF Mobile IPv6 and the existing hierarchical schemes.
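
    The forwarding rule at a correspondent node reduces to a two-level lifetime check (a sketch; the field names and addresses are illustrative, not protocol syntax):

```python
def next_hop(binding, now):
    """Pick the destination for a packet to the mobile node:
    the direct care-of address while its (shorter) lifetime is valid,
    then the intermediate mobility agent, then the home network."""
    if now < binding["mn_lifetime_expires"]:
        return binding["mn_coa"]     # optimal: straight to the mobile node
    if now < binding["agent_lifetime_expires"]:
        return binding["agent_coa"]  # near-optimal: via the mobility agent
    return "home_agent"              # binding stale: fall back to home

binding = {"mn_coa": "2001:db8::2", "mn_lifetime_expires": 100,
           "agent_coa": "2001:db8::1", "agent_lifetime_expires": 400}
```

    Because the agent-side lifetime outlives the node-side one, an expired direct binding degrades gracefully to agent-relayed delivery instead of an immediate detour through the home agent.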

  • Heart Sound Recognition by New Methods Using the Full Cardiac Cycled Sound Data

    Sang Min LEE  In Young KIM  Seung Hong HONG  

     
    PAPER-Medical Engineering

      Vol:
    E84-D No:4
      Page(s):
    521-529

    Recently, much research on heart sound analysis has been carried out, driven by advances in digital signal processing and electronic components, but there has been little work on the recognition of heart sounds, especially of full-cardiac-cycle heart sounds. In this paper, three new methods for recognizing full-cardiac-cycle heart sounds are proposed. The first method recognizes the characteristics of a heart sound by integrating important peaks and analyzing statistical variables in the time domain. The second method builds a database by principal component analysis of a training heart sound set in the time domain; this database is then used to recognize new heart sound inputs. The third method builds the same sort of database in the time-frequency domain instead. We classify heart sounds into seven classes: normal (NO), pre-systolic murmur (PS), early systolic murmur (ES), late systolic murmur (LS), early diastolic murmur (ED), late diastolic murmur (LD), and continuous murmur (CM). As a result, we verified that the third method recognizes the characteristics of heart sounds more efficiently than the other two and than any preceding research. The recognition rates of the third method are 100% for NO, 80% for PS and ES, 67% for LS, 93% for ED, 80% for LD, and 30% for CM.

  • Lossless and Near-Lossless Color Image Coding Using Edge Adaptive Quantization

    Takayuki NAKACHI  Tatsuya FUJII  

     
    PAPER-Coding Theory

      Vol:
    E84-A No:4
      Page(s):
    1064-1073

    This paper proposes a unified coding algorithm for the lossless and near-lossless compression of still color images. The proposed scheme can control the peak signal-to-noise ratio (PSNR) of the reconstructed image while suppressing the distortion on the RGB plane to within a preset magnitude. To control the PSNR, the distortion level is adaptively changed at each pixel, and an adaptive quantizer that controls the distortion is designed on the basis of psychovisual criteria. Finally, experiments on Super High Definition (SHD) images show the effectiveness of the proposed algorithm.
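
    The near-lossless guarantee rests on a uniform quantizer whose reconstruction error is bounded by the per-pixel distortion level δ (a textbook sketch; the paper chooses δ adaptively per pixel from psychovisual criteria rather than using one global value):

```python
def quantize(x, delta):
    """Map a pixel value to an index; the step 2*delta + 1
    bounds the reconstruction error |x - x_hat| by delta."""
    return (x + delta) // (2 * delta + 1)

def reconstruct(q, delta):
    return q * (2 * delta + 1)

# delta = 0 degenerates to step 1, i.e. exactly lossless coding
```

    Sweeping δ per pixel is what lets a single scheme span the lossless-to-near-lossless range while keeping the worst-case RGB error under a preset bound.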

  • Burst Error Recovery for VF Arithmetic Coding

    Hongyuan CHEN  Masato KITAKAMI  Eiji FUJIWARA  

     
    PAPER-Coding Theory

      Vol:
    E84-A No:4
      Page(s):
    1050-1063

    One of the disadvantages of compressed data is its vulnerability: even a single corrupted bit in compressed data may destroy the decompressed data completely. Variable-to-fixed length arithmetic coding (VFAC) with error-detecting capability has therefore been discussed; however, no implementable error recovery method for compressed data has been proposed. This paper proposes burst-error-recovery variable-to-fixed length arithmetic coding (BERVFAC) as well as error-detecting variable-to-fixed length arithmetic coding (EDVFAC). Both VFAC schemes achieve VF coding by inserting the internal states of the decompressor into the compressed data. The internal states consist of the width and offset of the sub-interval corresponding to the decompressed symbol and are also used for error detection. Convolutional operations are applied in encoding and decoding in order to propagate errors and improve the error-control capability. The proposed EDVFAC and BERVFAC are evaluated by theoretical analysis and computer simulations. The simulation results show that more than 99.99% of errors can be detected by EDVFAC. For BERVFAC, over 99.95% of l-burst errors can be corrected for l ≤ 32, and greater than 99.99% of other errors can be detected. The simulation results also show that the time overhead needed to decode BERVFAC is about 12% when 10% of the received words are erroneous.

  • Direction of Arrival Estimation Using Nonlinear Microphone Array

    Hidekazu KAMIYANAGIDA  Hiroshi SARUWATARI  Kazuya TAKEDA  Fumitada ITAKURA  Kiyohiro SHIKANO  

     
    PAPER

      Vol:
    E84-A No:4
      Page(s):
    999-1010

    This paper describes a new method for estimating the direction of arrival (DOA) using a nonlinear microphone array system based on complementary beamforming. Complementary beamforming uses two beamformers designed to obtain directivity patterns complementary to each other. Since the resultant directivity pattern is proportional to the product of these directivity patterns, the proposed method can estimate the DOAs of 2(K-1) sound sources with a K-element microphone array. First, DOA estimation experiments are performed using both computer simulation and actual devices in real acoustic environments. The results show that DOA estimation for two sound sources can be accomplished by the proposed method with only two microphones. Furthermore, by comparing the resolution of DOA estimation by the proposed method with that of the conventional minimum variance method, we show that the proposed method outperforms the minimum variance method under all reverberant conditions tested.
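
    For orientation, here is a baseline two-microphone DOA estimate from the cross-correlation lag (this is the conventional delay-based approach, not the paper's complementary beamforming, which is precisely what lifts the source-count limit of such simple estimators):

```python
import math
import random

def doa_two_mics(ref, other, fs, d, c=343.0):
    """Estimate the arrival angle from the sample lag that maximizes
    the cross-correlation, using tau = d*sin(theta)/c."""
    max_lag = int(fs * d / c)
    n = len(ref)

    def score(lag):
        return sum(ref[i - lag] * other[i] for i in range(max_lag, n - max_lag))

    best = max(range(-max_lag, max_lag + 1), key=score)
    return math.degrees(math.asin(best * c / (fs * d)))

# synthetic check: a noise source arriving 5 samples later at the second mic
random.seed(0)
ref = [random.random() - 0.5 for _ in range(2000)]
other = [0.0] * 5 + ref[:-5]
angle = doa_two_mics(ref, other, fs=16000, d=0.2)
```

    With fs = 16 kHz and d = 0.2 m, a 5-sample lag corresponds to roughly a 32-degree arrival angle.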

  • A Pipeline Chip for Quasi Arithmetic Coding

    Yair WISEMAN  

     
    PAPER-Digital Signal Processing

      Vol:
    E84-A No:4
      Page(s):
    1034-1041

    A combined software and systolic hardware implementation of the quasi-arithmetic compression algorithm is presented. The hardware is implemented as a pipeline. The implementation does not change the algorithm; it merely splits it into two parts. The combination of parallel software and pipeline hardware yields very fast compression without any loss of compression efficiency.

  • Improvised Assembly Mechanism for Component-Based Mobile Applications

    Masahiro MOCHIZUKI  Hideyuki TOKUDA  

     
    PAPER

      Vol:
    E84-B No:4
      Page(s):
    910-919

    We propose a mechanism that facilitates the development of component-based mobile applications with adaptive behaviors. The design principles and communication patterns of legacy software systems will change greatly in the forthcoming environment, in which a variety of computing devices are embedded in homes and offices, users move around with or without portable computing devices, and all the devices are interconnected through wired and wireless networks. The proposed mechanism, the Improvised Assembly Mechanism (IAM), provides the functionality to compose an application in an ad hoc manner and to adapt applications by adding, replacing, supplementing, and relocating components at system runtime in response to environmental changes such as the relocation of computing devices and users. The mechanism is implemented as built-in functionality of the Soul component, one of the fundamental elements of the Possession model.

  • PRIME ARQ: A Novel ARQ Scheme for High-Speed Wireless ATM

    Atsushi OHTA  Masafumi YOSHIOKA  Masahiro UMEHIRA  

     
    PAPER

      Vol:
    E84-B No:3
      Page(s):
    474-483

    Automatic repeat request (ARQ) for wireless ATM (WATM) operating at 20 Mbit/s or higher must achieve high throughput performance as well as high transmission quality, i.e., a low cell loss ratio (CLR). Selective Repeat (SR) and Go-Back-N (GBN) are the typical ARQ schemes. Though SR-ARQ is superior to GBN-ARQ in throughput performance, the implementation complexity of SR-ARQ's control procedures is a disadvantage for high-speed wireless systems. In addition, when the PDU (protocol data unit) length on the wireless link is short, the overhead of ARQ control messages can be significant. GBN-ARQ, on the other hand, is simple to implement but cannot avoid serious throughput degradation under the fairly high BER caused by multipath fading and shadowing. To solve these problems, this paper proposes a novel ARQ scheme named PRIME-ARQ (Partial selective Repeat superIMposEd on gbn ARQ). PRIME-ARQ achieves throughput performance almost equal to selective repeat ARQ with a simple algorithm, resulting in reduced implementation complexity for high-speed operation. This paper describes the design, implementation, and performance of the proposed PRIME-ARQ. In addition, it shows experimental results using an experimental PRIME-ARQ hardware processor and prototype AWA equipment.

  • Filter Banks with Nonlinear Lifting Steps for Lossless Image Compression

    Masahiro OKUDA  Sanjit K. MITRA  Masaaki IKEHARA  Shin-ichi TAKAHASHI  

     
    PAPER-Digital Signal Processing

      Vol:
    E84-A No:3
      Page(s):
    797-801

    Most natural images are well modeled as smooth areas separated by edges. The smooth areas are well represented, with few coefficients, by a wavelet transform with high regularity, which requires highpass filters with some vanishing moments; for the regions around edges, however, short highpass filters are preferable. In one recently proposed approach, this problem was solved by switching filter banks, using longer filters for the smooth areas of the image and shorter filters for areas with edges. That approach was applied to lossy image coding, resulting in a reduction of ringing artifacts. As edges were predicted using neighboring pixels, the nonlinear transforms made the decorrelation more flexible. In this paper, we propose a time-varying filter bank and apply it to lossless image coding. In this scheme, we estimate the standard deviation of the neighboring pixels of the current pixel by solving a maximum-likelihood problem, and we switch among three filter banks depending on the estimated standard deviation.
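
    Integer lifting is what makes the lossless property immediate: each lifting step is exactly invertible regardless of the predictor, which is why the predict step can be made nonlinear or switched per region. A minimal Haar-style sketch (the paper's filters are longer, and the bank is selected by the estimated local standard deviation):

```python
def lifting_forward(x):
    """Split into even/odd samples, predict odds from evens, update
    evens. Integer-to-integer, so the inverse reconstructs x exactly.
    Assumes an even-length input list of integers."""
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odds, evens)]            # predict
    approx = [e + (d >> 1) for e, d in zip(evens, detail)]   # update
    return approx, detail

def lifting_inverse(approx, detail):
    evens = [a - (d >> 1) for a, d in zip(approx, detail)]   # undo update
    odds = [d + e for d, e in zip(detail, evens)]            # undo predict
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out
```

    Any predictor, linear or not, can replace the predict step without breaking perfect reconstruction, since the decoder recomputes the identical prediction from the same data.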

  • A High-Speed, Highly-Reliable Network Switch for Parallel Computing System Using Optical Interconnection

    Shinji NISHIMURA  Tomohiro KUDOH  Hiroaki NISHI  Koji TASHO  Katsuyoshi HARASAWA  Shigeto AKUTSU  Shuji FUKUDA  Yasutaka SHIKICHI  

     
    PAPER-Optical Interconnection Systems

      Vol:
    E84-C No:3
      Page(s):
    288-294

    RHiNET-2/SW is a network switch for the RHiNET-2 parallel computing system. It enables high-speed, long-distance data transmission between PC nodes for parallel computing. In RHiNET-2/SW, a one-chip CMOS switch LSI and eight pairs of 800-Mbit/s 12-channel parallel optical interconnection modules are mounted on a single compact board. The switch allows high-speed 8-Gbit/s/port parallel optical data transmission over distances of up to 100 m, and the aggregate throughput is 64 Gbit/s/board. The CMOS-ASIC switching LSI provides high-throughput (64 Gbit/s) packet switching with a single chip, and the parallel optical interconnection modules enable high-speed, low-latency data transmission over long distances. The structure and layout of the printed circuit board are optimized for high-speed, high-density device implementation to overcome electrical problems such as signal propagation loss and crosstalk. All of the electrical interfaces are composed of high-speed CMOS-LVDS logic (800 Mbit/s/pin). We evaluated the reliability of the optical I/O ports through long-term data transmission: no errors were detected during 50 hours of continuous transmission at a data rate of 800 Mbit/s × 10 bits (BER < 2.44 × 10^-14). This result shows that RHiNET-2/SW can provide high-throughput, long-transmission-length, and highly reliable data transmission in a practical parallel computing system.

  • Motion Estimation and Compensation Hardware Architecture for a Scene-Adaptive Algorithm on a Single-Chip MPEG-2 Video Encoder

    Koyo NITTA  Toshihiro MINAMI  Toshio KONDO  Takeshi OGURA  

     
    PAPER-VLSI Systems

      Vol:
    E84-D No:3
      Page(s):
    317-325

    This paper describes a unique motion estimation and compensation (ME/MC) hardware architecture for a scene-adaptive algorithm. By statistically analyzing the characteristics of the scene being encoded and controlling the encoding parameters according to the scene, the quality of the decoded image can be enhanced. The most significant feature of the architecture is that the two modules for ME/MC can work independently. Since a time interval can be inserted between the operations of the two modules, a scene-adaptive algorithm can be implemented in the architecture. The ME/MC architecture is loaded on a single-chip MPEG-2 video encoder.

  • A Search Algorithm for Bases of Calderbank-Shor-Steane Type Quantum Error-Correcting Codes

    Kin-ichiroh TOKIWA  Hatsukazu TANAKA  

     
    PAPER-Coding Theory

      Vol:
    E84-A No:3
      Page(s):
    860-865

    Recently, Vatan, Roychowdhury, and Anantram presented two revised versions of the Calderbank-Shor-Steane code construction, together with an exhaustive procedure for determining bases of quantum error-correcting codes. In this paper, we investigate the revised versions given by Vatan et al. and point out that there is no essential difference between them. In addition, we propose an efficient algorithm for searching for bases of quantum error-correcting codes. The proposed algorithm is based on fundamental properties of classical linear codes and has much lower complexity than Vatan et al.'s procedure.
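
    The classical-code condition behind the CSS construction is itself a simple GF(2) check (background to the construction, not the proposed search algorithm): the dual of C2 is contained in C1 iff every row of C2's parity-check matrix is orthogonal, mod 2, to every row of C1's.

```python
def css_compatible(H1, H2):
    """Rows are bitmask ints. The dual of C2 is contained in C1 iff each
    generator of C2's dual (row of H2) is orthogonal mod 2 to each
    generator of C1's dual (row of H1), i.e. their AND has even weight."""
    return all(bin(a & b).count("1") % 2 == 0 for a in H1 for b in H2)

# Parity-check matrix of the [7,4] Hamming code: the code contains its
# own dual, which yields the Steane [[7,1,3]] code via the CSS construction.
HAMMING_H = [0b0001111, 0b0110011, 0b1010101]
```

    Taking C1 = C2 = the Hamming code, the check passes, confirming the dual-containing property that makes the Steane code possible.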
