
Keyword Search Result

[Keyword] OMP(3945hit)

3261-3280hit(3945hit)

  • A Performance Comparison of Single-Stream and Multi-Stream Approaches to Live Media Synchronization

    Shuji TASAKA  Yutaka ISHIBASHI  

     
    PAPER-Media Management
    Vol: E81-B No:11  Page(s): 1988-1997

    This paper presents a performance comparison between the single-stream and the multi-stream approaches to lip synchronization of live media (voice and video). The former transmits a single transport stream of interleaved voice and video, while the latter treats the two media as separate transport streams. Each approach has the option of not exerting synchronization control at the destination, which leads to four basic schemes. On an interconnected ATM-wireless LAN, we implemented the four basic schemes with RTP/RTCP on top of UDP, as well as two variants which exercise dynamic resolution control of JPEG video. We measured the performance of the six schemes and compared them to identify and evaluate the advantages and disadvantages of each approach. We then show that the performance difference between the two approaches is small and that the dynamic resolution control improves the synchronization quality.

  • Maximum Order Complexity for the Minimum Changes of an M-Sequence

    Satoshi UEHARA  Tsutomu MORIUCHI  Kyoki IMAMURA  

     
    PAPER-Information Security
    Vol: E81-A No:11  Page(s): 2407-2411

    The maximum order complexity (MOC) of a sequence is a very natural generalization of the well-known linear complexity (LC), obtained by allowing nonlinear feedback functions for the feedback shift register which generates a given sequence. MOC is expected to be effective in reducing an instability of LC, namely the extreme increase caused by the minimum changes of a periodic sequence, i.e., a one-symbol substitution, insertion, or deletion per period. In this paper we give lower and upper bounds on the MOC for the minimum changes of an m-sequence over GF(q) with period q^n - 1, which shows that MOC is a much more natural measure than LC for the randomness of sequences in this case.
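
    As a concrete illustration of the quantity being bounded (not taken from the paper), the maximum order complexity of a finite sequence can be found by brute force: it is the smallest order m for which every length-m window is always followed by the same symbol. A minimal sketch for a finite, non-cyclically-extended sequence:

```python
def max_order_complexity(seq):
    """Smallest m such that every length-m window of seq is always followed
    by the same symbol (brute force; fine only for short sequences)."""
    n = len(seq)
    for m in range(1, n):
        successor = {}
        consistent = True
        for i in range(n - m):
            window = tuple(seq[i:i + m])
            nxt = seq[i + m]
            if successor.setdefault(window, nxt) != nxt:
                consistent = False
                break
        if consistent:
            return m
    return n

# One period of the m-sequence 0010111 over GF(2) (period 2**3 - 1 = 7).
print(max_order_complexity([0, 0, 1, 0, 1, 1, 1]))
```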

  • Efficient Hybrid Allocation of Processor Registers for Compiling Telephone Call Control Programs

    Norio SATO  

     
    PAPER-Communication Software
    Vol: E81-B No:10  Page(s): 1868-1880

    An efficient hybrid scheme has been developed for optimizing register allocation applicable to CISC and RISC processors, which is crucial for maximizing their execution speed. Graph-coloring at the function level is combined with a powerful local register assigner. This assigner uses accurate program flows and access patterns of variables, and optimizes a wider local range, called an extended basic-block (EBB), than other optimizing compilers. The EBB is a set of basic-blocks that constitute a tree-shaped control flow, which is suitable for the large nested branches that frequently appear in embedded system-control programs, such as those for telephone call processing. The coloring at the function level involves only the live-ranges of program variables that span EBBs. The interference graph is therefore very small even for large functions, so it can be constructed quickly. Instead of iterative live-range splitting or spilling, the unallocated live-ranges are optimized by the EBB-based register assigner, so neither load/store insertion nor code motion is needed. This facilitates generating reliable code and debug symbols. The information provided for the EBB-based assigner facilitates the priority-based heuristics, fine-grained interference checking, and deferred coloring, all of which increase the colorability. Performance measurements using a thread-support package for CHILL as a sample program showed that local variables are successfully allocated to registers and that static cycles are reduced by about 20-30%. Further improvements include using double registers and improving debuggability.
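
    For orientation only (a generic greedy graph-coloring pass, not the paper's hybrid allocator), register allocation by coloring assigns each live range a register so that interfering live ranges never share one; uncolorable ranges are left for spilling or, as in this paper, for a local assigner. A minimal sketch over a hypothetical interference graph:

```python
def color_interference_graph(graph, k):
    """Greedy k-coloring of an interference graph given as
    {live_range: set(of interfering live_ranges)}.
    Returns {live_range: register index}, with None meaning 'not colored'."""
    # Color in decreasing order of degree (simple heuristic).
    order = sorted(graph, key=lambda v: len(graph[v]), reverse=True)
    assignment = {}
    for v in order:
        taken = {assignment[u] for u in graph[v] if u in assignment}
        free = [r for r in range(k) if r not in taken]
        assignment[v] = free[0] if free else None  # None: spill / leave to local assigner
    return assignment

# Hypothetical live ranges a..d; an edge means "simultaneously live".
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(color_interference_graph(graph, k=2))
```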

  • A Nonlinear Distortion Compensation on Layered Multicarrier Modulation Using Iterative Restoration

    Shoichiro YAMASAKI  Hirokazu TANAKA  

     
    PAPER-Spread Spectrum System
    Vol: E81-A No:10  Page(s): 2109-2116

    A multicarrier modulation called orthogonal frequency division multiplexing (OFDM) is attracting attention as a transmission scheme which is robust against multipath propagation. A major disadvantage of OFDM is that it is sensitive to nonlinear distortion because of its wide transmission amplitude range. This study addresses this nonlinearity problem. We propose a nonlinear distortion compensation scheme using an iterative method that has previously been applied to image signal restoration.
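
    As a rough sketch of the general idea (decision-aided iterative restoration of a clipped OFDM symbol, not the authors' exact scheme), the receiver re-estimates the nonlinear distortion from hard symbol decisions and removes it, repeating a few times. A toy noiseless example with an assumed soft limiter:

```python
import numpy as np

N, A = 64, 2.0                                            # subcarriers, assumed clipping level
rng = np.random.default_rng(0)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)     # QPSK spectrum
x = np.fft.ifft(X) * np.sqrt(N)                           # time-domain OFDM symbol

clip = lambda s: np.where(np.abs(s) > A, A * s / np.abs(s), s)   # soft limiter
r = clip(x)                                               # received (nonlinearly distorted)

Y = np.fft.fft(r) / np.sqrt(N)
for _ in range(3):                                        # iterative restoration
    Xhat = np.sign(Y.real) + 1j * np.sign(Y.imag)         # hard QPSK decisions
    xhat = np.fft.ifft(Xhat) * np.sqrt(N)
    d = clip(xhat) - xhat                                 # re-estimated clipping distortion
    Y = np.fft.fft(r - d) / np.sqrt(N)                    # compensate and decide again

print("error power before:", np.mean(np.abs(np.fft.fft(r) / np.sqrt(N) - X) ** 2))
print("error power after :", np.mean(np.abs(Y - X) ** 2))
```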

  • Evaluation of Microwave Complex Conductivities of YBa2Cu3Ox Thin Films

    Keiji YOSHIDA  Tetsuo ADOU  Shido NISHIOKA  Yutaka KANDA  Hisashi SHIMAKAGE  Zhen WANG  

     
    PAPER-High-Frequency Properties of Thin Films
    Vol: E81-C No:10  Page(s): 1565-1572

    The complex conductivities of high-Tc superconducting YBa2Cu3Ox thin films have been studied using the coplanar waveguide resonator technique. In order to evaluate the magnetic penetration depth precisely, we measured the temperature dependence of the resonant frequency and compared it self-consistently with numerical results. The observed temperature dependence of the complex conductivities is shown to distinguish the effects of the weak links from the intrinsic properties of the grains of an epitaxial thin film, demonstrating the weakly coupled grain model of YBa2Cu3Ox thin films.
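
    For context, the standard two-fluid relation (not a result of this paper) links the imaginary part of the complex conductivity sigma = sigma1 - j*sigma2 to the magnetic penetration depth lambda via sigma2 = 1/(omega * mu0 * lambda^2); once lambda(T) is extracted from the resonant-frequency shift, sigma2(T) follows directly. A small numeric illustration with assumed values:

```python
import math

mu0 = 4 * math.pi * 1e-7        # vacuum permeability [H/m]
f = 10e9                        # assumed measurement frequency: 10 GHz
lam = 200e-9                    # assumed penetration depth: 200 nm

omega = 2 * math.pi * f
sigma2 = 1.0 / (omega * mu0 * lam ** 2)     # two-fluid relation
print(f"sigma2 = {sigma2:.3e} S/m")
```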

  • On Symbol Error Probability of DC Component Suppressing Systems

    Akiomi KUNISA  Nobuo ITOH  

     
    LETTER-Coding Theory
    Vol: E81-A No:10  Page(s): 2174-2179

    The DC component suppressing method called Guided Scrambling (GS) has been proposed, in which a source bit stream within a data block is subjected to several kinds of scrambling and an RLL (Run Length Limited) coding to form a selection set of channel bit streams, and the one having the least DC component is selected. Typically, this technique uses a convolutional operation or GF (Galois field) conversion. A review of their respective symbol error properties has revealed important findings. In the former case, the RS (Reed-Solomon) decoding capability is reduced because error propagation occurs in descrambling. In the latter case, error propagation over a data block length occurs when erroneous conversion data appear after RS decoding. This paper introduces expressions for determining the decoded symbol error probabilities of the two schemes based on these properties. The paper also discusses the difference in code rates between the two schemes on the basis of calculations using these expressions.
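
    As a toy illustration of the guided-scrambling idea itself (selection only; the convolutional and GF-conversion variants, the RLL coding, and the selection-bit overhead analyzed in the letter are omitted), each data block is scrambled in several candidate ways and the candidate with the smallest running digital sum excursion, i.e. the least DC component, is transmitted. A minimal sketch with hypothetical XOR masks:

```python
def running_digital_sum_peak(bits):
    """Peak absolute running digital sum, mapping 0 -> -1 and 1 -> +1."""
    rds, peak = 0, 0
    for b in bits:
        rds += 1 if b else -1
        peak = max(peak, abs(rds))
    return peak

def guided_scramble(block, masks):
    """Pick the XOR mask whose scrambled block has the smallest RDS peak."""
    candidates = [(running_digital_sum_peak([b ^ m for b, m in zip(block, mask)]), i)
                  for i, mask in enumerate(masks)]
    _, best = min(candidates)
    return best, [b ^ m for b, m in zip(block, masks[best])]

block = [1, 1, 1, 1, 0, 1, 1, 1]
masks = [[0] * 8, [1, 0] * 4, [0, 1] * 4, [1] * 8]   # hypothetical selection set
print(guided_scramble(block, masks))
```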

  • Asymptotic Optimality of the Block Sorting Data Compression Algorithm

    Mitsuharu ARIMURA  Hirosuke YAMAMOTO  

     
    PAPER-Source Coding
    Vol: E81-A No:10  Page(s): 2117-2122

    In this paper the performance of the Block Sorting algorithm proposed by Burrows and Wheeler is evaluated theoretically. It is proved that the Block Sorting algorithm is asymptotically optimal for stationary ergodic finite-order Markov sources. Our proof is based on the facts that symbols with the same Markov state (or context) in the original data sequence are grouped together in the output sequence of the Burrows-Wheeler transform, and that the codeword length of each group can be bounded by a function of the frequencies of the symbols included in the group.
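
    To make the transform concrete (a standard textbook construction, not part of the paper's proof), the Burrows-Wheeler transform sorts all rotations of the input and emits the last column, which groups together symbols that share a context. A minimal sketch with an end-of-string sentinel:

```python
def bwt(s, sentinel="\0"):
    """Burrows-Wheeler transform: last column of the sorted rotation matrix."""
    s += sentinel                       # unique terminator so rotations sort uniquely
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def inverse_bwt(last, sentinel="\0"):
    """Invert the BWT by repeatedly prepending the last column and sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith(sentinel))[:-1]

text = "banana"
L = bwt(text)
print(repr(L), inverse_bwt(L) == text)
```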

  • Redundant Exception Check Elimination by Assertions

    Norio SATO  

     
    PAPER-Communication Software
    Vol: E81-B No:10  Page(s): 1881-1893

    Exception handling is not only useful for increasing program readability, but also provides an effective means to check and locate errors, so it increases productivity in large-scale program development. Some typical and frequent program errors, such as out-of-range indexing, null dereferencing, and narrowing violations, cause exceptions that are otherwise unlikely to be caught. Moreover, the absence of a catcher for exceptions thrown by API procedures also causes uncaught exceptions. This paper discusses how the exception handling mechanism should be supported by the compiler together with the operating system and debugging facilities. This mechanism is implemented in the compiler by inserting inline check code and accompanying propagation code. One drawback to this approach is the runtime overhead imposed by the inline check code, which should therefore be optimized. However, there has been little discussion of appropriate optimization techniques and their efficiency in the literature. Therefore, a new solution is proposed that formulates the optimization problem as common assertion elimination (CAE). Assertions consist of check code and useful branch conditions. The latter are effective in removing redundant check code. The redundancy can be checked and removed precisely with a forward iterative data-flow analysis. Even in performance-sensitive applications such as telecommunications software, figures obtained by a CHILL optimizing compiler indicate that CAE optimizes the code well enough to be competitive with check-suppressed code.
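
    As a highly simplified illustration (a straight-line pass, not the CAE data-flow analysis over extended basic-blocks), redundant exception checks can be dropped by recording which asserted facts already hold and killing them when the checked variable is redefined. A sketch over a hypothetical three-address IR:

```python
def eliminate_redundant_checks(instructions):
    """Forward pass over straight-line code: drop a check whose asserted
    fact is already available; kill facts about a variable when it is redefined."""
    available = set()        # facts like ("nonnull", "p")
    out = []
    for op, *args in instructions:
        if op == "check":
            fact = tuple(args)
            if fact in available:
                continue                     # redundant: already guaranteed
            available.add(fact)              # the check itself establishes the fact
            out.append((op, *args))
        elif op == "assign":
            dest = args[0]
            available = {f for f in available if dest not in f[1:]}
            out.append((op, *args))
        else:
            out.append((op, *args))
    return out

code = [("check", "nonnull", "p"), ("load", "x", "p"),
        ("check", "nonnull", "p"),            # redundant, removed
        ("assign", "p"),                      # p redefined: facts about p killed
        ("check", "nonnull", "p")]            # needed again
print(eliminate_redundant_checks(code))
```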

  • An Estimation by Interval Analysis of Region Guaranteeing Existence of a Solution Path in Homotopy Method

    Mitsunori MAKINO  

     
    PAPER-Numerical Analysis
    Vol: E81-A No:9  Page(s): 1886-1891

    The quality of computation for the so-called homotopy method, in terms of accuracy, computational complexity, and so on, has been discussed recently. In this paper, we propose an interval-analysis-based method for estimating the region in which a unique solution path of the homotopy equation is guaranteed to exist, when the method is applied to a certain class of uniquely solvable nonlinear equations. The estimation yields the region a posteriori and an upper bound on the region a priori.
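
    For background (a generic scalar predictor-corrector continuation, not the interval estimation proposed here), the homotopy method traces the solution path of H(x, t) = t*f(x) + (1 - t)*(x - x0) from the known solution x = x0 at t = 0 to a solution of f(x) = 0 at t = 1. A minimal sketch with Newton correction at each step:

```python
def homotopy_solve(f, df, x0, steps=50, newton_iters=5):
    """Trace H(x, t) = t*f(x) + (1 - t)*(x - x0) = 0 from t = 0 to t = 1."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):                 # Newton correction at fixed t
            h = t * f(x) + (1 - t) * (x - x0)
            dh = t * df(x) + (1 - t)
            x -= h / dh
    return x

# Example: solve x**3 + 2*x - 5 = 0 starting from x0 = 0.
f = lambda x: x ** 3 + 2 * x - 5
df = lambda x: 3 * x ** 2 + 2
print(homotopy_solve(f, df, x0=0.0))
```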

  • Design of Kronecker and Combination Sequences and Comparison of Their Correlation, CDMA and Information Security Properties

    Kari H. A. KARKKAINEN  Pentti A. LEPPANEN  

     
    PAPER-Mobile Communication
    Vol: E81-B No:9  Page(s): 1770-1778

    Two families of rapidly synchronizable spreading codes are compared using the same component codes. The influence of the component code choice is also discussed. It is concluded that the correlation, code-division multiple-access (CDMA) and information security (measured by the value of linear complexity) properties of Kronecker sequences are considerably better than those of Combination sequences. Combination sequences cannot be recommended for CDMA use unless the number of active users is small. The CDMA performance of Kronecker sequences is almost comparable with that of linear pseudonoise (PN) code families of equal length when a Gold or Kasami code is used as the innermost code and the Barker code is used as the outermost code to guarantee satisfactory correlation and CDMA properties. Kronecker sequences possess a considerably higher value of linear complexity than the corresponding non-linear Geffe and majority-logic type combination sequences, which implies that they are highly non-linear codes, owing to the Kronecker product construction method. It is also observed that the Geffe-type Boolean combiner results in better correlation and CDMA performance than the majority-logic combiner. The use of the purely linear exclusive-or combiner to considerably reduce code synchronization time is not recommended, although it results in good CDMA performance.
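
    To make the construction concrete (with a short Barker outer code and a toy inner code, not the actual Gold or Kasami components compared in the paper), a Kronecker sequence is the Kronecker product of an outer and an inner +/-1 code, and its periodic correlations can be checked directly. A minimal sketch:

```python
import numpy as np

def kronecker_sequence(outer, inner):
    """Kronecker product of two +/-1 component codes: each outer chip
    is replaced by a copy of the inner code scaled by that chip."""
    return np.kron(np.asarray(outer), np.asarray(inner))

def periodic_autocorrelation(seq):
    """Periodic autocorrelation for all cyclic shifts."""
    n = len(seq)
    return np.array([np.dot(seq, np.roll(seq, k)) for k in range(n)])

outer = [1, 1, -1]                        # length-3 Barker code as the outermost code
inner = [1, -1, 1, 1]                     # toy inner code (stand-in for e.g. a Gold code)
seq = kronecker_sequence(outer, inner)
print(seq)
print(periodic_autocorrelation(seq))
```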

  • A Method of Proving the Existence of Simple Turning Points of Two-Point Boundary Value Problems Based on the Numerical Computation with Guaranteed Accuracy

    Takao SOMA  Shin'ichi OISHI  Yuchi KANZAWA  Kazuo HORIUCHI  

     
    PAPER-Numerical Analysis
    Vol: E81-A No:9  Page(s): 1892-1897

    This paper is concerned with the validation of simple turning points of two-point boundary value problems of nonlinear ordinary differential equations. It is usually hard to validate approximate solutions at turning points numerically because of their singularity. In this paper, it is pointed out that, by applying the infinite-dimensional Krawczyk-based interval validation method to an enlarged system, the existence of simple turning points can be verified. A validation result for an example problem is also presented.
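
    For reference (the finite-dimensional scalar case with naive interval arithmetic, not the infinite-dimensional operator used in the paper), the Krawczyk test verifies the existence of a zero of f in an interval X: with midpoint c, preconditioner y close to 1/f'(c), and an interval enclosure F' of f' over X, if K(X) = c - y*f(c) + (1 - y*F')(X - c) lies inside X, then f has a zero in X. A minimal sketch:

```python
# Naive interval arithmetic: an interval is a (lo, hi) pair of floats.
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))
def contains(outer, inner): return outer[0] <= inner[0] and inner[1] <= outer[1]

def krawczyk_test(f, dfc, dF, X):
    """Scalar Krawczyk existence test: if K(X) is a subset of X, then f has a
    zero in X.  dfc(c) is the point derivative, dF(X) an enclosure of f' on X."""
    c = 0.5 * (X[0] + X[1])
    y = 1.0 / dfc(c)                                       # preconditioner
    center = (c - y * f(c), c - y * f(c))
    slope = i_sub((1.0, 1.0), i_mul((y, y), dF(X)))        # 1 - y*F'(X)
    K = i_add(center, i_mul(slope, i_sub(X, (c, c))))
    return K, contains(X, K)

# Example: f(x) = x**2 - 2 on X = [1.3, 1.6] (zero at sqrt(2)).
f = lambda x: x ** 2 - 2
dfc = lambda c: 2 * c
dF = lambda X: (2 * X[0], 2 * X[1])                        # f'(x) = 2x is increasing on X
print(krawczyk_test(f, dfc, dF, (1.3, 1.6)))
```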

  • Information Integration Architecture for Agent-Based Computer Supported Cooperative Work System

    Shigeki NAGAYA  Yoshiaki ITOH  Takashi ENDO  Jiro KIYAMA  Susumu SEKI  Ryuichi OKA  

     
    PAPER
    Vol: E81-D No:9  Page(s): 976-987

    We propose an information integration architecture for a man-machine interface to construct a new agent-based Computer Supported Cooperative Work (CSCW) system. The system acts as a clerk in cooperative work, giving users the advantage of a cooperative work space. The system allows users to work in the style of an ordinary meeting: spontaneous expressions of speech and gestures by users are detected by sensors and integrated with a task model at several levels to create suitable responses in the man-machine interface. As a result, users can concentrate on mutual understanding with other meeting members without being conscious of directing the CSCW system. In this paper, we describe the whole system and its information integration architecture for the man-machine interface, including the principles of its functions, the current status of the system, and future directions.

  • An Improved Recursive Decomposition Ordering for Higher-Order Rewrite Systems

    Munehiro IWAMI  Masahiko SAKAI  Yoshihito TOYAMA  

     
    PAPER-Automata, Languages and Theory of Computing
    Vol: E81-D No:9  Page(s): 988-996

    Simplification orderings, like the recursive path ordering and the improved recursive decomposition ordering, are widely used for proving the termination of term rewriting systems. The improved recursive decomposition ordering is known as the most powerful simplification ordering. Recently, Jouannaud and Rubio extended the recursive path ordering to higher-order rewrite systems by introducing an ordering on type structure. In this paper we extend the improved recursive decomposition ordering to higher-order rewrite systems for proving their termination. The key idea of our ordering is a new concept of pseudo-terminal occurrences.
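
    As a first-order reference point only (the classical lexicographic path ordering; neither the improved recursive decomposition ordering nor its higher-order extension), a path ordering compares terms via a precedence on function symbols. A minimal sketch with terms as nested tuples and variables as strings:

```python
def variables(t):
    """All variables in a term; a term is a string (variable)
    or a tuple (function_symbol, arg1, ..., argn)."""
    if isinstance(t, str):
        return {t}
    return set().union(*[variables(a) for a in t[1:]]) if len(t) > 1 else set()

def lpo_gt(s, t, prec):
    """s >_lpo t under the precedence prec (dict: symbol -> rank)."""
    if isinstance(t, str):                                    # t is a variable
        return s != t and t in variables(s)
    if isinstance(s, str):                                    # a variable is never greater
        return False
    f, ss = s[0], s[1:]
    g, ts = t[0], t[1:]
    if any(si == t or lpo_gt(si, t, prec) for si in ss):      # (a) some argument covers t
        return True
    if prec[f] > prec[g]:                                     # (b) bigger head symbol
        return all(lpo_gt(s, tj, prec) for tj in ts)
    if f == g:                                                # (c) equal heads: lexicographic
        if not all(lpo_gt(s, tj, prec) for tj in ts):
            return False
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec)
        return len(ss) > len(ts)
    return False

# Example: with precedence mul > add, distributivity is oriented left to right.
prec = {"add": 1, "mul": 2}
s = ("mul", "x", ("add", "y", "z"))
t = ("add", ("mul", "x", "y"), ("mul", "x", "z"))
print(lpo_gt(s, t, prec))
```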

  • Evaluating Dialogue Strategies under Communication Errors Using Computer-to-Computer Simulation

    Taro WATANABE  Masahiro ARAKI  Shuji DOSHITA  

     
    PAPER-Artificial Intelligence and Cognitive Science
    Vol: E81-D No:9  Page(s): 1025-1033

    In this paper, experimental results of evaluating dialogue strategies of confirmation over a noisy channel are presented. First, the types of errors in task-oriented dialogues are investigated and classified as communication, dialogue, knowledge, problem-solving, or objective errors. Since these errors occur at different levels, the methods for recovering from them must be examined separately. We found that the dialogue and knowledge errors generated by communication errors can be recovered from through system confirmation with the user. In addition, we examined how the manner in which a system initiates dialogue, namely its dialogue strategy, influences the cooperativity of the interaction, depending on the frequency of confirmations and the amount of information conveyed. Furthermore, the choice of dialogue strategy is influenced by the rate of communication errors in the channel and is related to properties of the task, for example, the difficulty of achieving a goal or the frequency of initiative transfers. To verify these hypotheses, we prepared a testbed task, the Group Scheduling Task, and examined it through a computer-to-computer dialogue simulation in which one system took the part of a scheduling system and the other acted as a user. In this simulation, erroneous input to the scheduling system was also generated. The user system was designed to act randomly so that it could simulate a real human user, while the scheduling system was devised to strictly follow a particular dialogue strategy of confirmation. The experimental results showed that a certain amount of confirmation was required to overcome errors when the communication error rate was high, but that excessive confirmation did not serve to resolve errors, depending on the task involved.
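
    As a toy sketch of this kind of computer-to-computer evaluation (a hypothetical slot-filling task, not the Group Scheduling Task itself), a random "user" sends values over a channel that garbles each message with a given probability, and the "system" either confirms every value or accepts it silently; counting turns and residual errors exposes the trade-off discussed above:

```python
import random

def simulate_dialogue(error_rate, confirm, slots=5, trials=2000, seed=0):
    """Toy simulation: one value per slot is sent over a noisy channel; with
    confirm=True the system confirms each value (one extra turn) and the user
    resends it once if it was garbled.  Returns (avg turns, residual error rate)."""
    rng = random.Random(seed)
    total_turns = total_errors = 0
    for _ in range(trials):
        for _ in range(slots):
            received_ok = rng.random() >= error_rate
            total_turns += 1
            if confirm:
                total_turns += 1                 # confirmation turn
                if not received_ok:
                    total_turns += 1             # user's correction turn
                    received_ok = rng.random() >= error_rate
            if not received_ok:
                total_errors += 1
    return total_turns / trials, total_errors / (trials * slots)

for p in (0.05, 0.2):
    print(p, "no confirm:", simulate_dialogue(p, confirm=False),
             "confirm:", simulate_dialogue(p, confirm=True))
```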

  • Planar Projection Stereopsis Method for Road Extraction

    Kazunori ONOGUCHI  Nobuyuki TAKEDA  Mutsumi WATANABE  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E81-D No:9  Page(s): 1006-1018

    This paper presents a method which can effectively acquire free space on a plane for moving forward safely by using height information of objects. The method can be applied to free space extraction on a road; in short, it is a road extraction method for an autonomous vehicle. Since a road area can be assumed to be a sequence of flat planes in front of a vehicle, it is effective to apply the inverse perspective projection model to the ground plane. However, conventional methods using this model have a drawback in that some areas on the road plane are wrongly detected as obstacle areas, since these methods are sensitive to errors in the camera geometry with respect to the assumed plane. In order to overcome this drawback, the proposed approach, named the Planar Projection Stereopsis (PPS) method, adds to the road extraction method based on the inverse perspective projection model a mechanism for removing these erroneous areas effectively. Since PPS uses the inverse perspective projection model, both the left and right images are projected onto the road plane and obstacle areas are detected by examining the difference between the projected images. Because the detected obstacle areas include many erroneous areas, PPS examines the shapes of the obstacle areas and eliminates falsely detected areas on the road plane by using the following properties: obstacles whose heights differ from the road plane are projected into shapes that fall backward from the location where the obstacles touch the road plane; and the length of these backward-falling shapes depends on the location of the obstacles relative to the stereoscopic cameras and on the height of the obstacles relative to the road plane. Experimental results for real road scenes have shown the effectiveness of the proposed method. The quantitative evaluation of the results has shown that on average 89.3% of the real road area can be extracted and that the average falsely extracted ratio is 1.4%. Since the road area can be extracted by simple projection of the stereo images and subtraction of the projected images, our method can be applied to real-time operation.
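
    As a minimal sketch of the projection-and-difference step only (the shape-based elimination of backward-falling regions, which is the core of PPS, is not shown), both stereo images are warped onto the road plane with calibrated image-to-ground homographies and differenced; the homographies H_left and H_right are assumed to be known from the camera geometry:

```python
import cv2
import numpy as np

def detect_obstacle_candidates(left_img, right_img, H_left, H_right, size, thresh=30):
    """Project both stereo images onto the road plane with the assumed
    image-to-ground homographies and difference them: points on the road plane
    coincide, whereas objects with height produce large differences that
    'fall backward' from their footprints."""
    left_ground = cv2.warpPerspective(left_img, H_left, size)
    right_ground = cv2.warpPerspective(right_img, H_right, size)
    diff = cv2.absdiff(left_ground, right_ground)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask   # candidate obstacle regions; shape-based filtering would follow

# Hypothetical usage, with H_left and H_right as 3x3 numpy arrays obtained
# elsewhere from the known camera pose relative to the road:
# mask = detect_obstacle_candidates(cv2.imread("left.png", 0),
#                                   cv2.imread("right.png", 0),
#                                   H_left, H_right, size=(400, 600))
```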

  • Processor Pipeline Design for Fast Network Message Handling in RWC-1 Multiprocessor

    Hiroshi MATSUOKA  Kazuaki OKAMOTO  Hideo HIRONO  Mitsuhisa SATO  Takashi YOKOTA  Shuichi SAKAI  

     
    PAPER
    Vol: E81-C No:9  Page(s): 1391-1397

    In this paper we describe the pipeline design and enhanced hardware for fast message handling in the RICA-1 processor, a processing element (PE) in the RWC-1 multiprocessor. The RWC-1 is based on the reduced inter-processor communication architecture (RICA), in which communication is combined with computation in the processor pipeline. The pipeline is enhanced with hardware mechanisms to support fine-grain parallel execution. The data paths of the RICA-1 super-scalar processor are shared between communication and instruction execution to minimize the implementation cost. A 128-PE system was built in January 1998 and is currently used for hardware debugging, software development and performance evaluation.

  • Plastic Cell Architecture: A Scalable Device Architecture for General-Purpose Reconfigurable Computing

    Kouichi NAGAMI  Kiyoshi OGURI  Tsunemichi SHIOZAWA  Hideyuki ITO  Ryusuke KONISHI  

     
    PAPER
    Vol: E81-C No:9  Page(s): 1431-1437

    We propose an architectural reference for programmable devices that we call Plastic Cell Architecture (PCA). PCA is a reference for implementing a device with autonomous reconfigurability, which we also introduce in this paper. This reconfigurability is a further step toward new reconfigurable computing, which introduces variable- and programmable-grained parallelism to wired logic computing. This computing follows the object-oriented paradigm: it regards configured circuits as objects. These objects will be described in a new hardware description language dealing with the semantics of dynamic module instantiation. PCA is the fusion of SRAM-based FPGAs and cellular automata (CA), where the CA are dedicated to supporting the run-time activities of objects. This paper mainly focuses on autonomous reconfigurability and PCA. The discussion examines a research direction toward general-purpose reconfigurable computing.

  • Network Access Control for DHCP Environment

    Kazumasa KOBAYASHI  Suguru YAMAGUCHI  

     
    PAPER-Communication Networks and Services
    Vol: E81-B No:9  Page(s): 1718-1723

    In the IETF, discussions on authentication methods for Dynamic Host Configuration Protocol (DHCP) messages are active and several methods have been proposed. The related specifications have been published and circulated as IETF Internet-Drafts. However, they still have several drawbacks. One of the major drawbacks is that any user can reuse addresses illegally: a user can use an expired address that was allocated to a host. This kind of "illegal use" of the addresses managed by the DHCP server may cause serious security problems. In order to solve them, we propose a new access control method to be used as the DHCP message authentication mechanism. Furthermore, we have designed and developed the DAG (DHCP Access Control Gateway) according to our method. The DAG serves as a gateway that allows network access only from clients with addresses legally allocated by the DHCP server. This provides secure DHCP service when DHCP servers have no authentication mechanism, which is the most common case. If a DHCP server has an authentication scheme such as those proposed in the IETF Internet-Drafts, the DAG can offer a way to restrict network access to specific clients.
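
    As a conceptual model only (not the DAG implementation), a gateway of this kind forwards a client's traffic only while the client's IP/MAC pair matches an unexpired lease in the DHCP server's table. A minimal sketch of that check:

```python
import time

class LeaseTable:
    """Current DHCP leases as seen by the gateway: (ip, mac) -> expiry time."""
    def __init__(self):
        self.leases = {}

    def update(self, ip, mac, lease_seconds):
        """Called when the gateway observes (or is told about) a DHCP ACK."""
        self.leases[(ip, mac)] = time.time() + lease_seconds

    def is_allowed(self, ip, mac):
        """Forward traffic only for unexpired, matching leases."""
        expiry = self.leases.get((ip, mac))
        return expiry is not None and expiry > time.time()

table = LeaseTable()
table.update("192.0.2.10", "00:11:22:33:44:55", lease_seconds=3600)
print(table.is_allowed("192.0.2.10", "00:11:22:33:44:55"))   # True: valid lease
print(table.is_allowed("192.0.2.10", "66:77:88:99:aa:bb"))   # False: spoofed MAC
print(table.is_allowed("192.0.2.99", "00:11:22:33:44:55"))   # False: address not leased
```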

  • 40 Gbit/s Single-Channel Soliton Transmission Using Periodic Dispersion Compensation

    Itsuro MORITA  Masatoshi SUZUKI  Noboru EDAGAWA  Keiji TANAKA  Shu YAMAMOTO  

     
    PAPER
    Vol: E81-C No:8  Page(s): 1309-1315

    The effectiveness of periodic dispersion compensation in a single-channel 40 Gbit/s soliton transmission system was experimentally investigated. This technique requires only dispersion compensation fibers and wideband optical filters in the transmission line, so it presents no difficulty for use in practical systems. By using polarization-division multiplexing together with periodic dispersion compensation, single-channel 40 Gbit/s transmission over 4700 km was demonstrated. Single-polarization 40 Gbit/s transmission experiments, which are more suitable for system implementation and compatible with WDM, were also conducted. We investigated the transmission characteristics and pulse dynamics for different dispersion maps, and with the optimized dispersion map, single-channel, single-polarization 40 Gbit/s transmission over 6300 km was successfully demonstrated.

  • A Characterization of Infinite Binary Sequences with Partial Randomness

    Hiroaki NAGOYA  

     
    PAPER-Algorithm and Computational Complexity
    Vol: E81-D No:8  Page(s): 801-805

    K-randomness and Martin-Löf randomness are two of the many formalizations of the randomness of infinite sequences, and the two are known to be equivalent. The former can naturally be modified into a definition of partial randomness. However, it is not obvious how to modify the latter into a definition of partial randomness. In this paper, we show that Martin-Löf randomness can be modified into a definition of partial randomness that is equivalent to the definition obtained by naturally modifying K-randomness. The basic idea is to modify the notion of measure used in the definition of Martin-Löf tests.
