
Keyword Search Result

[Keyword] dependency (56 hits)

Showing results 21-40 of 56

  • A Scalable Communication-Induced Checkpointing Algorithm for Distributed Systems

    Alberto CALIXTO SIMON  Saul E. POMARES HERNANDEZ  Jose Roberto PEREZ CRUZ  Pilar GOMEZ-GIL  Khalil DRIRA  

     
    PAPER-Fundamentals of Information Systems

    Vol: E96-D No:4  Page(s): 886-896

    Communication-induced checkpointing (CIC) has two main advantages: first, it allows processes in a distributed computation to take asynchronous checkpoints, and second, it avoids the domino effect. To achieve these, CIC algorithms piggyback information on the application messages and take forced local checkpoints when they recognize potentially dangerous patterns. The main disadvantages of CIC algorithms are the amount of overhead per message and the induced storage overhead. In this paper we present a communication-induced checkpointing algorithm called Scalable Fully-Informed (S-FI) that attacks the problem of message overhead. To this end, our algorithm modifies the Fully-Informed algorithm by integrating it with the immediate dependency principle. The S-FI algorithm was simulated, and the results show that it is scalable, since the message overhead exhibits sublinear growth as the number of processes and/or the message density increases.
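
    Illustrative sketch (not taken from the paper): the general CIC pattern of piggybacking checkpoint information on messages and forcing a checkpoint before delivering a message that reveals a newer checkpoint interval. The class names and the simple "any newer index" trigger below are assumptions for illustration; the actual S-FI condition is more refined.

        # Minimal communication-induced checkpointing sketch (illustrative only;
        # the forced-checkpoint rule below is a simplification, not S-FI's).
        class Process:
            def __init__(self, pid, n_procs):
                self.pid = pid
                self.ckpt = [0] * n_procs      # piggybacked checkpoint-index vector
                self.log = []

            def take_checkpoint(self, forced=False):
                self.ckpt[self.pid] += 1
                self.log.append(("forced" if forced else "basic", list(self.ckpt)))

            def send(self, msg):
                # every application message carries the current index vector
                return (msg, list(self.ckpt))

            def receive(self, payload):
                msg, their = payload
                # force a checkpoint before delivery if the sender knows a newer interval
                if any(t > m for t, m in zip(their, self.ckpt)):
                    self.take_checkpoint(forced=True)
                self.ckpt = [max(t, m) for t, m in zip(their, self.ckpt)]
                return msg

        p0, p1 = Process(0, 2), Process(1, 2)
        p0.take_checkpoint()              # basic checkpoint at p0
        p1.receive(p0.send("m1"))         # forces a checkpoint at p1 before delivery
        print(p1.log)                     # [('forced', [0, 1])]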

  • Static Dependency Pair Method in Rewriting Systems for Functional Programs with Product, Algebraic Data, and ML-Polymorphic Types

    Keiichirou KUSAKARI  

     
    PAPER

    Vol: E96-D No:3  Page(s): 472-480

    For simply-typed term rewriting systems (STRSs) and higher-order rewrite systems (HRSs) a la Nipkow, we proposed a method for proving termination, namely the static dependency pair method. The method combines the dependency pair method introduced for first-order rewrite systems with the notion of strong computability introduced for typed λ-calculi. This method analyzes a static recursive structure based on definition dependency. By solving suitable constraints generated by the analysis, we can prove termination. In this paper, we extend the method to rewriting systems for functional programs (RFPs) with product, algebraic data, and ML-polymorphic types. Although the type system in STRSs contains only product and simple types and the type system in HRSs contains only simple types, our RFPs allow product types, type constructors (algebraic data types), and type variables (ML-polymorphic types). Hence, our RFPs are more representative of existing functional programs than STRSs and HRSs. Therefore, our result makes a large contribution to applying theoretical rewriting techniques to actual problems, that is, to proving the termination of existing functional programs.
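
    Illustrative sketch (not taken from the paper): the first-order dependency pair construction that the static method generalizes, applied to a toy rewrite system for addition. The term encoding and marking convention are assumptions for this example; the paper's setting (typed RFPs with λ-abstraction) is not covered.

        # Dependency pair extraction for a toy first-order TRS (illustrative only).
        # Terms: ('f', t1, ..., tn) for function applications, strings for variables.
        RULES = [
            (('add', ('s', 'x'), 'y'), ('s', ('add', 'x', 'y'))),   # add(s(x),y) -> s(add(x,y))
            (('add', ('0',), 'y'), 'y'),                            # add(0,y)    -> y
        ]
        defined = {lhs[0] for lhs, _ in RULES}      # root symbols of left-hand sides

        def subterms(t):
            yield t
            if isinstance(t, tuple):
                for arg in t[1:]:
                    yield from subterms(arg)

        def mark(t):                                # mark the root symbol: add -> ADD
            return (t[0].upper(),) + t[1:]

        dependency_pairs = [
            (mark(lhs), mark(s))
            for lhs, rhs in RULES
            for s in subterms(rhs)
            if isinstance(s, tuple) and s[0] in defined
        ]
        print(dependency_pairs)   # [(('ADD', ('s', 'x'), 'y'), ('ADD', 'x', 'y'))]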

  • Dependency Chart Parsing Algorithm Based on Ternary-Span Combination

    Meixun JIN  Yong-Hun LEE  Jong-Hyeok LEE  

     
    PAPER-Natural Language Processing

    Vol: E96-D No:1  Page(s): 93-101

    This paper presents a new span-based dependency chart parsing algorithm that models the relations between the left and right dependents of a head. Such relations cannot be modeled in existing span-based algorithms, despite their popularity in dependency corpora. We address this problem through ternary-span combination during the subtree derivation. By modeling the relations between the left and right dependents of a head, our proposed algorithm provides a better capability of coordination disambiguation when the conjunction is annotated as the head of the left and right conjuncts. This eventually leads to state-of-the-art performance of dependency parsing on the Chinese data of the CoNLL shared task.
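
    Illustrative sketch (not the paper's chart algorithm): the arc-factored objective that span-based parsers optimize, solved here by brute force over a toy sentence. The arc scores, the projectivity test, and the single-root constraint are assumptions for the example; ternary-span combination is not reproduced.

        # Brute-force arc-factored projective dependency parsing on a toy sentence
        # (illustrative; an O(n^3) chart algorithm would be used in practice).
        from itertools import product

        words = ["economic", "news", "had", "effect"]          # positions 1..4, 0 = root
        score = {(0, 3): 10, (3, 2): 8, (2, 1): 7, (3, 4): 6}  # (head, dependent) -> score
        n = len(words)

        def reaches(heads, start, stop_at=0):
            node, steps = start, 0
            while node != stop_at and node != 0 and steps <= n:
                node, steps = heads[node], steps + 1
            return node == stop_at

        def is_tree(heads):
            return all(reaches(heads, d) for d in range(1, n + 1))

        def is_projective(heads):
            for d in range(1, n + 1):
                h = heads[d]
                for k in range(min(h, d) + 1, max(h, d)):
                    if not reaches(heads, k, stop_at=h):
                        return False           # a word between h and d escapes h
            return True

        def is_valid(heads):
            return (all(heads[d] != d for d in range(1, n + 1))
                    and sum(heads[d] == 0 for d in range(1, n + 1)) == 1
                    and is_tree(heads) and is_projective(heads))

        def tree_score(heads):
            return sum(score.get((heads[d], d), 0) for d in range(1, n + 1))

        best = max((heads for heads in ((0,) + c for c in product(range(n + 1), repeat=n))
                    if is_valid(heads)), key=tree_score)
        print(best, tree_score(best))   # (0, 2, 3, 0, 3) 31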

  • Decidability of the Security against Inference Attacks Using a Functional Dependency on XML Databases

    Kenji HASHIMOTO  Hiroto KAWAI  Yasunori ISHIHARA  Toru FUJIWARA  

     
    PAPER-Database Security

    Vol: E95-D No:5  Page(s): 1365-1374

    This paper discusses verification of the security against inference attacks on XML databases in the presence of a functional dependency. So far, we have provided a verification method for k-secrecy, which is a metric for the security against inference attacks on databases. Intuitively, k-secrecy means that the number of candidates for the sensitive data (i.e., the result of an unauthorized query) of a given database instance cannot be narrowed down to k-1 or fewer by using available information such as authorized queries and their results. In this paper, we consider a functional dependency on database instances as part of the available information. A functional dependency helps attackers reduce the number of candidates for the sensitive information. The verification method we have provided cannot be naively extended to the k-secrecy problem with a functional dependency. The method requires that the candidate set can be captured by a tree automaton, but when a functional dependency is considered, the candidate set cannot always be captured by a tree automaton. We show that the ∞-secrecy problem in the presence of a functional dependency is decidable when a given unauthorized query is represented by a deterministic top-down tree transducer, without explicitly computing the candidate set.
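
    Illustrative sketch (not the paper's XML/tree-automaton machinery): a flat toy instance showing how a functional dependency shrinks the candidate set for an unauthorized query. The schema, the authorized answers, and the FD dept -> salary are invented for the example.

        # Candidate counting under a functional dependency (toy, relational).
        from itertools import product

        salaries = [40, 60]
        dept = {"alice": "sales", "bob": "sales"}   # authorized: departments are public
        bob_salary = 40                             # authorized: bob's salary is public

        def alice_candidates(use_fd):
            """Values of the unauthorized query (alice's salary) consistent with
            the authorized answers, optionally under the FD dept -> salary."""
            found = set()
            for a_sal, b_sal in product(salaries, repeat=2):
                if b_sal != bob_salary:
                    continue                        # must agree with the public answer
                if use_fd and dept["alice"] == dept["bob"] and a_sal != b_sal:
                    continue                        # would violate dept -> salary
                found.add(a_sal)
            return found

        print("without FD:", alice_candidates(False))   # {40, 60}: still 2-secret
        print("with FD   :", alice_candidates(True))    # {40}: the secret is disclosed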

  • Conflict-Based Checking the Integrity of Linux Package Dependencies

    Yuqing LAN  Mingxia KUANG  Wenbin ZHOU  

     
    PAPER-Software Engineering

    Vol: E94-D No:12  Page(s): 2431-2439

    A Linux operating system release is composed of a large number of software packages with complex dependencies. Managing these dependency relationships is the foundation of building and maintaining a Linux release, and checking the integrity of the dependencies is the key part of dependency management. The widespread adoption of Linux operating systems in many areas of information technology has drawn attention to the questions of how to check the integrity of the complex dependencies among Linux packages and how to manage a huge number of packages in a consistent and effective way. Linux distributions already provide tools for installing, removing and upgrading the packages they are made of, and a number of tools handle these tasks on the client side. However, there is a lack of tools that help distribution editors maintain the integrity of Linux package dependencies on the server side. In this paper we present a conflict-based method for checking the integrity of Linux package dependencies. Starting from conflicts, the method checks the integrity of package dependencies on the server side by removing the conflicts associated with the packages. Our contribution provides an effective and automatic way to support distribution editors in handling these issues. Experiments show that this method is very successful in checking the integrity of package dependencies in Linux software distributions.
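
    Illustrative sketch (a simplification, not the paper's method): checking a package's dependency closure for declared conflicts. Real package metadata carries versions, provides, and alternatives, which are ignored here; the repository contents below are invented.

        # Simplified installability check: compute the dependency closure of the
        # requested packages and reject the set if the closure contains a conflict.
        REPO = {
            "editor":         {"depends": ["gui-toolkit"], "conflicts": []},
            "gui-toolkit":    {"depends": ["libc"], "conflicts": ["legacy-toolkit"]},
            "legacy-plugin":  {"depends": ["legacy-toolkit"], "conflicts": []},
            "legacy-toolkit": {"depends": ["libc"], "conflicts": []},
            "libc":           {"depends": [], "conflicts": []},
        }

        def closure(pkgs):
            todo, seen = list(pkgs), set()
            while todo:
                p = todo.pop()
                if p in seen:
                    continue
                seen.add(p)
                todo.extend(REPO[p]["depends"])
            return seen

        def conflict_free(pkgs):
            installed = closure(pkgs)
            return not any(c in installed
                           for p in installed for c in REPO[p]["conflicts"])

        print(conflict_free(["editor"]))                     # True
        print(conflict_free(["editor", "legacy-plugin"]))    # False: toolkit conflict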

  • A 530 Mpixels/s Intra Prediction Architecture for Ultra High Definition H.264/AVC Encoder

    Gang HE  Dajiang ZHOU  Jinjia ZHOU  Tianruo ZHANG  Satoshi GOTO  

     
    PAPER

    Vol: E94-C No:4  Page(s): 419-427

    Intra coding in H.264/AVC significantly enhances video compression efficiency. However, due to the high data dependency of intra prediction in H.264, the application of both pipelining and parallel processing techniques is limited. Moreover, it is difficult to achieve high hardware utilization and throughput because of the long block/MB-level reconstruction loops. This paper proposes a high-performance intra prediction architecture that can support the H.264/AVC high profile. The proposed MB/block co-reordering can avoid data dependencies and improve pipeline utilization. Therefore, the timing constraint of real-time 4096×2160 encoding can be met with negligible quality loss. A 16×16 prediction engine and an 8×8 prediction engine work in parallel for prediction and coefficient generation. A reordering interlaced reconstruction is also designed for a fully pipelined architecture. It takes only 160 cycles to process one macroblock (MB). Hardware utilization of the prediction and reconstruction modules is almost 100%. Furthermore, a PE-reusable 8×8 intra predictor and hybrid SAD & SATD mode decision are proposed to save hardware cost. The design is implemented in 90 nm CMOS technology with 113.2 k gates and can encode 4096×2160 video sequences at 60 fps with an operating frequency of 332 MHz.
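
    Illustrative sketch (generic H.264 concepts, not the paper's hardware): comparing SAD and SATD costs for two intra prediction candidates of a 4×4 block. The random data and the two prediction modes shown are assumptions; the paper's hybrid SAD & SATD decision logic and pipeline are not reproduced.

        # SAD vs. SATD cost for two intra prediction candidates of a 4x4 block.
        import numpy as np

        H4 = np.array([[1,  1,  1,  1],
                       [1,  1, -1, -1],
                       [1, -1, -1,  1],
                       [1, -1,  1, -1]])             # 4x4 Hadamard-type transform

        def sad(block, pred):
            return int(np.abs(block - pred).sum())

        def satd(block, pred):
            diff = block - pred
            return int(np.abs(H4 @ diff @ H4.T).sum())

        rng = np.random.default_rng(0)
        top = rng.integers(0, 256, 4)                # reconstructed row above the block
        block = rng.integers(0, 256, (4, 4))
        pred_vertical = np.tile(top, (4, 1))         # vertical mode: copy the top row
        pred_dc = np.full((4, 4), int(top.mean()))   # DC mode: mean of the neighbours

        for name, pred in [("vertical", pred_vertical), ("DC", pred_dc)]:
            print(name, "SAD:", sad(block, pred), "SATD:", satd(block, pred))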

  • Linear Time Calculation of On-Chip Power Distribution Network Capacitance Considering State-Dependence

    Shiho HAGIWARA  Koh YAMANAGA  Ryo TAKAHASHI  Kazuya MASU  Takashi SATO  

     
    PAPER-Device and Circuit Modeling and Analysis

    Vol: E93-A No:12  Page(s): 2409-2416

    A fast calculation tool for the state-dependent capacitance of power distribution networks is proposed. The proposed method achieves linear time complexity and can be more than four orders of magnitude faster than conventional SPICE-based capacitance calculation. Large circuits that have been unanalyzable with the conventional method become analyzable, enabling more comprehensive exploration of capacitance variation. The capacitance obtained with the proposed method agrees with the SPICE-based method completely (up to 5 digits), and the linearity in time is confirmed through numerical experiments on various circuits. The maximum and minimum capacitances are also calculated using average and variance estimation, again in linear time. The proposed tool facilitates building an accurate macro model of an LSI.

  • Improved Sequential Dependency Analysis Integrating Labeling-Based Sentence Boundary Detection

    Takanobu OBA  Takaaki HORI  Atsushi NAKAMURA  

     
    PAPER-Natural Language Processing

    Vol: E93-D No:5  Page(s): 1272-1281

    A dependency structure interprets modification relationships between words or phrases and is recognized as an important element in semantic information analysis. Conventional approaches to extracting this dependency structure assume that the complete sentence is known before the analysis starts. For spontaneous speech data, however, this assumption does not necessarily hold, since sentence boundaries are not marked in the data. Although sentence boundaries can be detected before dependency analysis, this cascaded implementation is not suitable for online processing since it delays the responses of the application. To solve these problems, we previously proposed a sequential dependency analysis (SDA) method for online spontaneous speech processing, which enabled us to analyze incomplete sentences sequentially and detect sentence boundaries simultaneously. In this paper, we propose an improved SDA integrating a labeling-based sentence boundary detection (SntBD) technique based on Conditional Random Fields (CRFs). In the new method, we use a CRF for soft decisions on sentence boundaries and combine it with SDA to retain its online framework. Since CRF-based SntBD yields better estimates of sentence boundaries, SDA can provide better results in which the dependency structure and sentence boundaries are consistent. Experimental results using spontaneous lecture speech from the Corpus of Spontaneous Japanese show that our improved SDA outperforms the original SDA in SntBD accuracy and provides better dependency analysis results.

  • Dependency Parsing with Lattice Structures for Resource-Poor Languages

    Sutee SUDPRASERT  Asanee KAWTRAKUL  Christian BOITET  Vincent BERMENT  

     
    PAPER-Natural Language Processing

    Vol: E92-D No:10  Page(s): 2122-2136

    In this paper, we present a new dependency parsing method for languages which have a very small annotated corpus and for which segmentation and morphological analysis methods that produce a unique (automatically disambiguated) result are very unreliable. Our method works on a morphosyntactic lattice factorizing all possible segmentation and part-of-speech tagging results. The quality of the input to syntactic analysis is hence much better than that of an unreliable unique sequence of lemmatized and tagged words. We propose an adaptation of Eisner's algorithm for finding the k-best dependency trees in a morphosyntactic lattice structure encoding multiple results of morphosyntactic analysis. Moreover, we show how to use a Dependency Insertion Grammar to adjust the scores and filter out invalid trees, how to use a language model to rescore the parse trees, and how to extend our parsing model to k-best output. The highest parsing accuracy reported in this paper is 74.32%, which represents a 6.31% improvement over the model taking its input from the unreliable morphosyntactic analysis tools.
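
    Illustrative sketch (data structure only): a morphosyntactic lattice over character positions, with one segmentation/POS hypothesis per edge, and a brute-force enumeration of its complete paths. The toy hypotheses and scores are invented; the paper's Eisner-style parsing over the lattice and its k-best extension are not reproduced.

        # A toy morphosyntactic lattice: (start, end, surface, POS, score) edges
        # over character positions of a 4-character input.
        LATTICE = [
            (0, 2, "AB", "NOUN",   0.6),
            (0, 1, "A",  "PREFIX", 0.3),
            (1, 2, "B",  "NOUN",   0.4),
            (2, 4, "CD", "VERB",   0.7),
            (2, 3, "C",  "NOUN",   0.2),
            (3, 4, "D",  "VERB",   0.5),
        ]
        END = 4

        def paths(pos=0):
            """Enumerate complete segmentation/tagging paths through the lattice."""
            if pos == END:
                yield []
                return
            for edge in LATTICE:
                if edge[0] == pos:
                    for rest in paths(edge[1]):
                        yield [edge] + rest

        for p in sorted(paths(), key=lambda p: -sum(e[4] for e in p)):
            print([(e[2], e[3]) for e in p], round(sum(e[4] for e in p), 2))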

  • Static Dependency Pair Method Based on Strong Computability for Higher-Order Rewrite Systems

    Keiichirou KUSAKARI  Yasuo ISOGAI  Masahiko SAKAI  Frederic BLANQUI  

     
    PAPER-Computation and Computational Models

    Vol: E92-D No:10  Page(s): 2007-2015

    Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.

  • Static Dependency Pair Method for Simply-Typed Term Rewriting and Related Techniques

    Keiichirou KUSAKARI  Masahiko SAKAI  

     
    PAPER

    Vol: E92-D No:2  Page(s): 235-247

    The static dependency pair method, proposed by us, can effectively prove termination of simply-typed term rewriting systems (STRSs). Its theoretical basis is given by the notion of strong computability. The method analyzes a static recursive structure based on definition dependency. By solving suitable constraints generated by the analysis, we can prove termination. Since this method is not applicable to every system, we proposed a class, namely plain function-passing, as a restriction. In this paper, we first propose the class of safe function-passing, which relaxes the restriction of plain function-passing. To solve constraints, we often use the notion of reduction pairs, which are designed from a reduction order by the argument filtering method. Next, we improve the argument filtering method for STRSs. Unlike the existing method for STRSs, our argument filtering method does not destroy the type structure. Hence, our method can effectively apply reduction orders that make use of type information. To reduce constraints, the notion of usable rules is proposed. Finally, we enhance the effectiveness of reducing constraints by incorporating argument filtering into usable rules for STRSs.
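
    Illustrative sketch (the standard first-order notion, not the paper's type-preserving variant): an argument filtering maps each function symbol either to a list of argument positions to keep or to a single position to collapse to. The symbols and the filtering below are invented for the example.

        # Applying an argument filtering to first-order terms (illustrative only).
        # Terms: ('f', t1, ..., tn) for applications, plain strings for variables.
        PI = {
            "cons": [1, 2],   # keep both arguments
            "map":  [2],      # drop the function argument, keep the list
            "s":    1,        # collapse s(x) to x
        }

        def filter_term(t, pi=PI):
            if isinstance(t, str):
                return t
            f, args = t[0], [filter_term(a, pi) for a in t[1:]]
            spec = pi.get(f, list(range(1, len(args) + 1)))
            if isinstance(spec, int):
                return args[spec - 1]                     # collapsing case
            return (f,) + tuple(args[i - 1] for i in spec)

        term = ("map", "F", ("cons", ("s", "x"), "xs"))
        print(filter_term(term))    # ('map', ('cons', 'x', 'xs'))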

  • New Quasi-Deadbeat FIR Smoother for Discrete-Time State-Space Signal Models: An LMI Approach

    ChoonKi AHN  

     
    LETTER-Digital Signal Processing

    Vol: E91-A No:9  Page(s): 2671-2674

    In this letter, we propose a new H2 smoother (H2S) with a finite impulse response (FIR) structure for discrete-time state-space signal models. This smoother is called an H2 FIR smoother (H2FS). Constraints such as linearity, the quasi-deadbeat property, the FIR structure, and independence of the initial state information are required in advance to design an H2FS that is optimal in the sense of the H2 performance criterion. It is shown that the H2FS design problem can be converted into a convex programming problem written in terms of a linear matrix inequality (LMI) with a linear equality constraint. A simulation study illustrates that the proposed H2FS is more robust against uncertainties and converges faster than the conventional H2S.
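
    Illustrative sketch (a generic FIR estimator structure written as LaTeX; the horizon N, delay d, and gain notation are assumptions, and the letter's actual LMI is not reproduced): the smoother is a finite window of output gains, and the quasi-deadbeat requirement becomes a linear equality constraint on those gains.

        % FIR smoother with horizon N and smoothing delay d (assumed notation)
        \[
          \hat{x}_{k-d} \;=\; \sum_{i=0}^{N} H_i\, y_{k-i},
          \qquad
          x_{k+1} = A x_k + B w_k, \quad y_k = C x_k + v_k .
        \]
        % Quasi-deadbeat property: on noise-free data, x_{k-i} = A^{N-i} x_{k-N},
        % so exact recovery of x_{k-d} forces the linear equality constraint
        \[
          \sum_{i=0}^{N} H_i\, C A^{\,N-i} \;=\; A^{\,N-d},
        \]
        % and the H_2-optimal gains are the feasible H_i minimizing the H_2
        % performance criterion, which the letter casts as an LMI problem.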

  • Online Chat Dependency: The Influence of Social Anxiety

    Chih-Chien WANG  Shu-Chen CHANG  

     
    PAPER-Media Communication

    Vol: E91-D No:6  Page(s): 1622-1627

    Recent developments in information technology have made it easy for people to "chat" online with others in real time, and many do so regularly. "Virtual" relationships can be attractive, especially for people with social interaction problems in the "real world". This study examines the influence on online chat dependency of three dimensions of social anxiety: general social situation fear, negative evaluation fear, and novel social situation fear. The participants in this study were 454 college students. The survey results show that negative evaluation fear and general social situation fear are related to online chat dependency, while novel social situation fear does not seem to be a relevant factor.

  • On the Use of Structures for Spoken Language Understanding: A Two-Step Approach

    Minwoo JEONG  Gary Geunbae LEE  

     
    PAPER-Natural Language Processing

    Vol: E91-D No:5  Page(s): 1552-1561

    Spoken language understanding (SLU) aims to map a user's speech into a semantic frame. Since most previous work uses semantic structures for SLU, we verify that such structure is useful even for noisy input. We apply a structured prediction method to the SLU problem and compare it to an unstructured one. In addition, we present a combined method to embed long-distance dependencies between entities in a cascaded manner. On air travel data, we show that our approach improves performance over baseline models.

  • New H∞ FIR Smoother for Linear Discrete-Time State-Space Models

    ChoonKi AHN  SooHee HAN  

     
    LETTER-Fundamental Theories for Communications

    Vol: E91-B No:3  Page(s): 896-899

    This letter proposes a new H∞ smoother (HIS) with a finite impulse response (FIR) structure for discrete-time state-space models. This smoother is called an H∞ FIR smoother (HIFS). Constraints such as linearity, the quasi-deadbeat property, the FIR structure, and independence of the initial state information are required in advance. Among smoothers with these requirements, we choose the HIFS that optimizes the H∞ performance criterion. The HIFS is obtained by solving a linear matrix inequality (LMI) problem with a parametrization of a linear equality constraint. It is shown through simulation that the proposed HIFS is more robust against uncertainties and converges faster than the conventional HIS.

  • Cost Reduction of Acoustic Modeling for Real-Environment Applications Using Unsupervised and Selective Training

    Tobias CINCAREK  Tomoki TODA  Hiroshi SARUWATARI  Kiyohiro SHIKANO  

     
    PAPER-Acoustic Modeling

    Vol: E91-D No:3  Page(s): 499-507

    Development of an ASR application such as a speech-oriented guidance system for a real environment is expensive. Most of the costs are due to human labeling of newly collected speech data to construct the acoustic model for speech recognition. Employment of existing models or sharing models across multiple applications is often difficult, because the characteristics of speech depend on various factors such as possible users, their speaking style and the acoustic environment. Therefore, this paper proposes a combination of unsupervised learning and selective training to reduce the development costs. The employment of unsupervised learning alone is problematic due to the task-dependency of speech recognition and because automatic transcription of speech is error-prone. A theoretically well-defined approach to automatic selection of high-quality and task-specific speech data from an unlabeled data pool is presented. Only those unlabeled data which increase the model likelihood given the labeled data are employed for unsupervised training. The effectiveness of the proposed method is investigated with a simulation experiment to construct adult and child acoustic models for a speech-oriented guidance system. A completely human-labeled database which contains real-environment data collected over two years is available for the development simulation. It is shown experimentally that the employment of selective training alleviates the problems of unsupervised learning, i.e., it is possible to select speech utterances of a certain speaker group but discard noise inputs and utterances with lower recognition accuracy. The simulation experiment is carried out for several selected combinations of data collection and human transcription periods. It is found empirically that the proposed method is especially effective if only relatively few of the collected data can be labeled and transcribed by humans.

  • A New Low-Power 13.56-MHz CMOS Ring Oscillator with Low Sensitivity of fOSC to VDD

    Felix TIMISCHL  Takahiro INOUE  Akio TSUNEDA  Daisuke MASUNAGA  

     
    PAPER

    Vol: E91-A No:2  Page(s): 504-512

    A design of a low-power CMOS ring oscillator for application as a 13.56 MHz clock generator in an implantable RFID tag is proposed. The circuit is based on a novel voltage inverter, which is an improved version of the conventional current-source-loaded inverter. The proposed circuit enables low-power operation and low sensitivity of the oscillation frequency, fOSC, to decay of the power supply VDD. By employing a gm-boosting subcircuit, power dissipation is decreased to 49 µW at fOSC=13.56 MHz. The sensitivity of fOSC to VDD is reduced to -0.02 at fOSC=13.56 MHz thanks to the use of composite high-impedance current sources.

  • Pairwise Test Case Generation Based on Module Dependency

    Jangbok KIM  Kyunghee CHOI  Gihyun JUNG  

     
    LETTER-Software Engineering

    Vol: E89-D No:11  Page(s): 2811-2813

    This letter proposes a modified Pairwise test case generation algorithm. The proposed algorithm produces additional test cases that may not be covered by the Pairwise algorithm because of dependencies between the internal modules of software systems. By utilizing these internal module dependencies, the algorithm effectively increases the coverage of testing without significantly increasing the number of test cases.
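
    Illustrative sketch (a generic greedy pairwise generator, not the letter's algorithm): after covering all parameter-value pairs, extra cases are appended for an invented inter-module dependency that pairwise coverage alone does not guarantee.

        # Greedy pairwise (all-pairs) generation plus dependency-driven extra cases.
        from itertools import combinations, product

        PARAMS = {"os": ["linux", "windows"], "db": ["mysql", "sqlite"], "net": ["ipv4", "ipv6"]}
        names = list(PARAMS)

        def all_cases():
            return (dict(zip(names, vals)) for vals in product(*PARAMS.values()))

        def pairs_of(case):
            return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

        uncovered = set().union(*(pairs_of(c) for c in all_cases()))
        tests = []
        while uncovered:                 # pick the case covering the most uncovered pairs
            case = max(all_cases(), key=lambda c: len(pairs_of(c) & uncovered))
            tests.append(case)
            uncovered -= pairs_of(case)

        # Invented module dependency: module B's behaviour depends jointly on (db, net),
        # but only when module A runs with os == "linux", so force those triples too.
        for db, net in product(PARAMS["db"], PARAMS["net"]):
            if not any(t == {"os": "linux", "db": db, "net": net} for t in tests):
                tests.append({"os": "linux", "db": db, "net": net})

        for t in tests:
            print(t)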

  • Utterance-Based Selective Training for the Automatic Creation of Task-Dependent Acoustic Models

    Tobias CINCAREK  Tomoki TODA  Hiroshi SARUWATARI  Kiyohiro SHIKANO  

     
    PAPER-Speech Recognition

    Vol: E89-D No:3  Page(s): 962-969

    To obtain a robust acoustic model for a certain speech recognition task, a large amount of speech data is necessary. However, the preparation of speech data, including recording and transcription, is very costly and time-consuming. Although there are attempts to build generic acoustic models which are portable among different applications, speech recognition performance is typically task-dependent. This paper introduces a method for automatically building task-dependent acoustic models based on selective training. Instead of setting up a new database, only a small amount of task-specific development data needs to be collected. Based on the likelihood of the target model parameters given this development data, utterances which are acoustically close to the development data are selected from existing speech data resources. Since there are in general too many possibilities for selecting a data subset from a larger database, a heuristic has to be employed. The proposed algorithm deletes single utterances temporarily or alternates between successive deletion and addition of multiple utterances. In order to make selective training computationally practical, model retraining and likelihood calculation need to be fast. It is shown that the model likelihood can be calculated quickly and easily from sufficient statistics, without the need for explicit reconstruction of model parameters. The algorithm is applied to obtain infant- and elderly-dependent acoustic models with only very little development data available. There is an improvement in word accuracy of up to 9% in comparison to conventional EM training without selection. Furthermore, the approach was also better than MLLR and MAP adaptation with the development data.
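
    Illustrative sketch (a single 1-D Gaussian instead of HMM state distributions, and a simplified deletion criterion): the log-likelihood of pooled data under its own ML fit is a closed form of the sufficient statistics (n, sum x, sum x^2), so utterances can be tentatively removed or added without refitting a model. All data below are invented.

        # Likelihood bookkeeping from sufficient statistics (toy, 1-D Gaussian).
        import math

        def stats(samples):
            return (len(samples), sum(samples), sum(x * x for x in samples))

        def combine(a, b, sign=1):
            return tuple(x + sign * y for x, y in zip(a, b))

        def loglik(s):
            """Log-likelihood of the pooled data under its own ML Gaussian fit."""
            n, sx, sxx = s
            var = max(sxx / n - (sx / n) ** 2, 1e-12)
            return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

        utterances = {"utt1": [1.0, 1.2, 0.9], "utt2": [1.1, 0.8], "utt3": [5.0, 5.2]}
        pool = (0, 0.0, 0.0)
        for s in map(stats, utterances.values()):
            pool = combine(pool, s)

        # Tentatively delete each utterance; a positive per-sample gain suggests the
        # utterance does not fit the target data (here, utt3 is the obvious outlier).
        for name, samples in utterances.items():
            trial = combine(pool, stats(samples), sign=-1)
            gain = loglik(trial) / trial[0] - loglik(pool) / pool[0]
            print(name, round(gain, 3))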

  • Japanese Dependency Structure Analysis Using Information about Multiple Pauses and F0

    Meirong LU  Kazuyuki TAKAGI  Kazuhiko OZEKI  

     
    PAPER-Speech and Hearing

    Vol: E89-D No:1  Page(s): 298-304

    Syntax and prosody are closely related to each other. This paper is concerned with the problem of exploiting pause information for recovering the dependency structures of read Japanese sentences. Our parser handles, in a unified way, both symbolic information, such as dependency rules, and numerical information, such as the probability of the dependency distance of a phrase, as linguistic information. In our past work, the post-phrase pause, which immediately follows the phrase in question, was employed as prosodic information. In this paper, we employed two kinds of pauses in addition to the post-phrase pause: the post-post-phrase pause, which immediately follows the phrase that follows the phrase in question, and the pre-phrase pause, which immediately precedes the phrase in question. By combining the three kinds of pause information linearly with optimal combination weights determined experimentally, the parsing accuracy was improved compared to the case where only the post-phrase pause was used, as in our previous work. Linear combination of pause and fundamental frequency information yielded a further improvement in parsing accuracy.
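
    Illustrative sketch (invented feature values and weights): scoring candidate heads by linearly adding weighted pause durations to a linguistic dependency score, in the spirit of the combination described above; the paper determines the weights experimentally and works with dependency probabilities rather than raw durations.

        # Linear combination of three pause features with a linguistic score (toy).
        candidates = [   # candidate heads for one phrase, with pause durations in seconds
            {"head": "kaita",  "ling": 0.55, "post": 0.30, "post_post": 0.05, "pre": 0.10},
            {"head": "hon-wo", "ling": 0.45, "post": 0.02, "post_post": 0.20, "pre": 0.01},
        ]
        w_post, w_post_post, w_pre = 0.8, 0.4, 0.3     # assumed combination weights

        def combined(c):
            return (c["ling"] + w_post * c["post"]
                    + w_post_post * c["post_post"] + w_pre * c["pre"])

        best = max(candidates, key=combined)
        print(best["head"], round(combined(best), 3))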
