
Keyword Search Result

[Keyword] C language (19 hits)

1-19 of 19 hits
  • Model Checking in the Presence of Schedulers Using a Domain-Specific Language for Scheduling Policies

    Nhat-Hoa TRAN  Yuki CHIBA  Toshiaki AOKI  

     
    PAPER-Software System

      Publicized:
    2019/03/29
      Vol:
    E102-D No:7
      Page(s):
    1280-1295

    A concurrent system consists of multiple processes that run simultaneously. The execution order of these processes is determined by a scheduler. In model checking, the scheduling policy is closely related to the search algorithm that explores all of the system's states. To ensure the correctness of the system, the scheduling policy needs to be taken into account during verification. Current approaches, which use fixed strategies, support only limited kinds of policies and are difficult to extend to handle variations of the schedulers. To address these problems, we propose a method that uses a domain-specific language (DSL) for the succinct specification of different scheduling policies. The artifacts needed to analyze the behavior of the system are generated automatically from the specification. We also propose a search algorithm for exploring the state space. Based on this method, we have developed a tool to verify a system together with its scheduler. Our experiments show that our method easily accommodates variations of the schedulers and verifies systems accurately.
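    The policy-parameterized state exploration described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' tool: the `enabled`/`step`/`policy` interfaces and the toy two-counter system are assumptions made for the example.

```python
def explore(init_state, enabled, step, policy):
    """Enumerate reachable states of a concurrent system, letting a
    pluggable scheduling policy decide which enabled processes may run."""
    seen = set()
    stack = [init_state]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        # The policy restricts the enabled set (e.g. priority, round-robin).
        for p in policy(s, enabled(s)):
            stack.append(step(s, p))
    return seen

# Toy system: two processes, each incrementing its own counter up to 2.
def enabled(s):
    return [i for i, v in enumerate(s) if v < 2]

def step(s, p):
    t = list(s)
    t[p] += 1
    return tuple(t)

# Strict priority: only the lowest-index enabled process may run.
prio = lambda s, en: en[:1]
# Nondeterministic: any enabled process may run.
any_order = lambda s, en: en

print(len(explore((0, 0), enabled, step, prio)))       # 5 states
print(len(explore((0, 0), enabled, step, any_order)))  # 9 states
```

Swapping the policy function changes the explored state space without touching the search algorithm, which is the separation the DSL-based approach aims for.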

  • PROVIT-CI: A Classroom-Oriented Educational Program Visualization Tool

    Yu YAN  Kohei HARA  Takenobu KAZUMA  Yasuhiro HISADA  Aiguo HE  

     
    PAPER-Educational Technology

      Publicized:
    2017/11/01
      Vol:
    E101-D No:2
      Page(s):
    447-454

    Studies have shown that program visualization (PV) is effective for supporting student programming exercises and self-study. However, very few instructors actively use PV tools in programming lectures. This article discusses the impediments instructors face when integrating PV tools into lecture classrooms and proposes a classroom instruction support tool for C programming based on program visualization, PROVIT-CI (PROgram VIsualization Tool for Classroom Instruction). PROVIT-CI has been used consecutively and actively by instructors at the authors' university to enhance their lectures since 2015. An evaluation of its application in an introductory C programming course shows that PROVIT-CI is effective and helpful for classroom use by instructors.

  • Register-Based Process Virtual Machine Acceleration Using Hardware Extension with Hybrid Execution

    Surachai THONGKAEW  Tsuyoshi ISSHIKI  Dongju LI  Hiroaki KUNIEDA  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E98-A No:12
      Page(s):
    2505-2518

    The Process Virtual Machine (VM) is typical software that runs applications inside operating systems. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware and operating system, and allows bytecode (portable code) to be executed in the same way on any platform. Process VMs are implemented with an interpreter that interprets bytecode instead of executing host machine code directly, so bytecode execution is slower than that of a compiled programming language. Several techniques, including the "Fetch/Decode Hardware Extension" of our previous paper, have been proposed to speed up the interpretation of Process VMs. In this paper, we propose an additional methodology, the "Hardware Extension with Hybrid Execution", to further enhance the performance of Process VM interpretation, focusing on the register-based model. This new technique provides an additional decoder that classifies bytecodes into simple and complex instructions. With "Hybrid Execution", simple instructions are executed directly on the hardware of the native processor, while complex instructions are emulated by the "extra optimized bytecode software handler" of the native processor. To eliminate the overhead of retrieving and storing operands in memory, we utilize physical registers instead of (low-address) virtual registers. Moreover, the combination of three techniques, delay scheduling, a mode-predictor HW, and a branch/goto controller, eliminates all of the mode-switching overhead between native mode and bytecode mode. The experimental results show that improvements in execution speed of up to 16.9x, 16.1x and 3.1x can be achieved on arithmetic instructions, loop & conditional instructions, and method invocation & return instructions, respectively. The approximate size of the proposed hardware extension is 0.04 mm² (equivalent to 14.81k gates), and it consumes an additional power of only 0.24 mW.
The stated results are obtained from logic synthesis using the TSMC 90nm technology @ 200MHz.
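    As a rough illustration of the hybrid-execution idea, the sketch below classifies each bytecode of a toy register-based VM as simple (dispatched directly, standing in for native hardware execution) or complex (routed to a software handler). The opcode set and instruction encoding are invented for the example and are not taken from the paper.

```python
# Decoder table for "simple" instructions: executed on the fast path,
# playing the role of the native-hardware execution unit.
SIMPLE = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def software_handler(op, regs, dst, a):
    # Slow path: emulate complex bytecodes in "software".
    if op == "li":                      # load immediate (complex here, for illustration)
        regs[dst] = a
    else:
        raise ValueError(f"unknown bytecode {op}")

def run(program, nregs=4):
    regs = [0] * nregs                  # physical registers, not memory slots
    for op, dst, a, b in program:
        if op in SIMPLE:                # fast path: direct "hardware" execution
            regs[dst] = SIMPLE[op](regs[a], regs[b])
        else:                           # slow path: software emulation
            software_handler(op, regs, dst, a)
    return regs

print(run([("li", 0, 6, None), ("li", 1, 7, None), ("mul", 2, 0, 1)]))
# [6, 7, 42, 0]
```

The paper's contribution is doing this split in hardware, with extra machinery (delay scheduling, mode prediction, branch/goto control) to make the mode switch free; the sketch only shows the classification-and-dispatch structure.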

  • Teachability of a Subclass of Simple Deterministic Languages

    Yasuhiro TAJIMA  

     
    PAPER-Fundamentals of Information Systems

      Vol:
    E96-D No:12
      Page(s):
    2733-2742

    We show the teachability of a subclass of simple deterministic languages. The subclass we define is called the stack uniform simple deterministic languages. Teachability is derived by giving a query learning algorithm for this language class. Our learning algorithm uses membership, equivalence and superset queries, and terminates in polynomial time. It is already known that simple deterministic languages are polynomial-time query learnable using context-free grammars. In contrast, our algorithm guesses a hypothesis in the form of a stack uniform simple deterministic grammar, so our result is strict teachability of this subclass of simple deterministic languages. In addition, we discuss the parameters of the polynomial for teachability. The "thickness" is an important parameter for parsing, and it should be one of the parameters used to evaluate the time complexity.

  • Multi-Stage Automatic NE and PoS Annotation Using Pattern-Based and Statistical-Based Techniques for Thai Corpus Construction

    Nattapong TONGTEP  Thanaruk THEERAMUNKONG  

     
    PAPER-Natural Language Processing

      Vol:
    E96-D No:10
      Page(s):
    2245-2256

    Automated or semi-automated annotation is a practical solution for large-scale corpus construction. However, special characteristics of the Thai language, such as the lack of word-boundary and sentence-boundary markers, raise several issues in automatic corpus annotation. This paper presents a multi-stage annotation framework containing two stages of chunking and three stages of tagging. The two chunking stages are pattern-matching-based named entity (NE) extraction and dictionary-based word segmentation, while the three succeeding tagging stages are dictionary-, pattern- and statistical-based tagging. Applying heuristics of ambiguity priority, NE extraction is performed first on the original text using a set of patterns, in the order of pattern ambiguity. Next, the remaining text is segmented into words with a dictionary. The obtained chunks are then tagged with named entity types or parts-of-speech (PoS) using dictionaries, patterns and statistics. Focusing on the reduction of human intervention in corpus construction, our experimental results show that the dictionary-based tagging process can assign unique tags to 64.92% of the words, leaving 24.14% unknown words and 10.94% ambiguously tagged words. The pattern-based tagging then reduces the unknown words to only 13.34%, while the statistical-based tagging resolves the ambiguously tagged words down to only 3.01%.
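    The dictionary-based word segmentation stage can be illustrated with a greedy longest-match sketch. The tiny English lexicon and the single-character fallback for unknown text are assumptions for the example; the paper's pipeline operates on Thai and surrounds this stage with NE extraction and tagging.

```python
def segment(text, lexicon):
    """Greedy longest-match, dictionary-based word segmentation.
    Characters not covered by any lexicon entry become 1-char chunks
    (the 'unknown word' residue the later tagging stages must handle)."""
    out, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest candidate first
            if text[i:j] in lexicon:
                out.append(text[i:j])
                i = j
                break
        else:
            out.append(text[i])             # unknown: emit a single character
            i += 1
    return out

lex = {"the", "there", "in"}                # toy lexicon
print(segment("therein", lex))              # ['there', 'in']
print(segment("xin", lex))                  # ['x', 'in']
```

Greedy longest match is only one heuristic; ambiguity between overlapping lexicon entries is exactly what motivates the statistical disambiguation stages in the paper.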

  • Probabilistic Treatment for Syntactic Gaps in Analytic Language Parsing

    Prachya BOONKWAN  Thepchai SUPNITHI  

     
    PAPER

      Vol:
    E94-D No:3
      Page(s):
    440-447

    This paper presents a syntax-based framework for gap resolution in analytic languages. CCG, well known for dealing with deletion under coordination, is extended with a memory mechanism similar to the slot-and-filler mechanism, resulting in wider coverage of syntactic gap patterns. Though our grammar formalism is more expressive than canonical CCG, its generative power is bounded by Partially Linear Indexed Grammar. Despite the spurious ambiguity originating from the memory mechanism, we also show that probabilistic parsing remains feasible using the dual decomposition algorithm.

  • Decomposition of Task-Level Concurrency on C Programs Applied to the Design of Multiprocessor SoC

    Mohammad ZALFANY URFIANTO  Tsuyoshi ISSHIKI  Arif ULLAH KHAN  Dongju LI  Hiroaki KUNIEDA  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E91-A No:7
      Page(s):
    1748-1756

    A simple extension to assist the decomposition of task-level concurrency within C programs is presented in this paper. The concurrency decomposition is meant to be used as the entry point for the design flow of Multiprocessor System-on-Chip (MPSoC) architectures. Our methodology allows the (re)use of readily available reference C programs and enables easy and rapid exploration of various alternative task partitioning strategies, a crucial step that greatly influences the overall quality of the designed MPSoC. A test case using a JPEG encoder application has been performed and the results are presented in this paper.

  • Synchronization Verification in System-Level Design with ILP Solvers

    Thanyapat SAKUNKONCHAK  Satoshi KOMATSU  Masahiro FUJITA  

     
    PAPER-System Level Design

      Vol:
    E89-A No:12
      Page(s):
    3387-3396

    Concurrency is one of the most important issues in system-level design. Interleaving among parallel processes can cause an extremely large number of different behaviors, making design and verification difficult tasks. In this work, we propose a synchronization verification method for system-level designs described in the SpecC language. Instead of modeling the design with timed FSMs and using a model checker for timed automata (such as UPPAAL or KRONOS), we formulate the timing constraints with equalities/inequalities that can be solved by integer linear programming (ILP) tools. Verification is conducted in two steps. First, similar to other software model checkers, we compute the reachability of an error state in the absence of timing constraints. Then, if a path to an error state exists, its feasibility is checked by using the ILP solver to evaluate the timing constraints along the path. This approach can drastically increase the sizes of the designs that can be verified. Abstraction and abstraction refinement techniques based on the Counterexample-Guided Abstraction Refinement (CEGAR) paradigm are applied.
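    The second verification step, checking whether the timing constraints along a counterexample path are feasible, can be mimicked with a toy exhaustive search standing in for a real ILP solver. The constraint encoding and variable bounds below are invented for the example.

```python
from itertools import product

def feasible(constraints, bounds):
    """Return an integer assignment satisfying every constraint, or None.
    Brute force over bounded integer variables -- a toy stand-in for the
    ILP solver used in the paper's second verification step."""
    names = sorted(bounds)
    for vals in product(*(range(lo, hi + 1) for lo, hi in (bounds[n] for n in names))):
        env = dict(zip(names, vals))
        if all(c(env) for c in constraints):
            return env
    return None

# Suppose the path to the error state requires t1 >= 5 and t1 + t2 <= 4:
# the system is infeasible, so the counterexample is spurious and pruned.
cons = [lambda e: e["t1"] >= 5, lambda e: e["t1"] + e["t2"] <= 4]
print(feasible(cons, {"t1": (0, 10), "t2": (0, 10)}))  # None
```

A real ILP formulation handles unbounded integers and scales far beyond enumeration; the point here is only the two-step structure: reachability first, then timing feasibility of the reported path.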

  • Synchronization Mechanism for Timed/Untimed Mixed-Signal System Level Design Environment

    Yu LIU  Satoshi KOMATSU  Masahiro FUJITA  

     
    PAPER

      Vol:
    E89-A No:4
      Page(s):
    1018-1026

    Recently, system level design languages (SLDLs), which can describe both the hardware and software aspects of a design, have been receiving attention. Mixed-signal extensions of SLDLs enable current discrete-oriented SLDLs to describe and simulate not only digital systems but also digital-analog mixed-signal systems. The synchronization between discrete and continuous behaviors is widely regarded as a critical part of these extensions. In this paper, we present an event-driven synchronization mechanism for both timed and untimed system level designs, through which discrete and continuous behaviors are synchronized via AD events and DA events. We also demonstrate how the synchronization mechanism can be incorporated into the kernel of an SLDL such as SpecC. In the extended kernel, a new simulation cycle, the AMS cycle, is introduced. Three case studies show that the extended SpecC-based system level design environment using our synchronization mechanism works well with timed/untimed mixed-signal system level descriptions.

  • A Study on Acoustic Modeling for Speech Recognition of Predominantly Monosyllabic Languages

    Ekkarit MANEENOI  Visarut AHKUPUTRA  Sudaporn LUKSANEEYANAWIN  Somchai JITAPUNKUL  

     
    PAPER

      Vol:
    E87-D No:5
      Page(s):
    1146-1163

    This paper presents a study on acoustic modeling for speech recognition of predominantly monosyllabic languages. Various speech units used in speech recognition systems have been investigated. To evaluate the effectiveness of these acoustic models, the Thai language is selected, since it is a predominantly monosyllabic language and has a complex vowel system. Several experiments have been carried out to find the proper speech unit that can accurately create an acoustic model and give a higher recognition rate. Recognition rates under different acoustic models are given and compared. In addition, this paper proposes a new speech unit for speech recognition, namely the onset-rhyme unit. Two models are proposed: the Phonotactic Onset-Rhyme Model (PORM) and the Contextual Onset-Rhyme Model (CORM). The models comprise a pair of onset and rhyme units, which together make up a syllable. An onset comprises an initial consonant and its transition towards the following vowel; the rhyme consists of a steady vowel segment and a final consonant. Experimental results show that the onset-rhyme model improves on the efficiency of other speech units. It improves on the accuracy of the inter-syllable triphone model by nearly 9.3% and of the context-dependent Initial-Final model by nearly 4.7% for speaker-dependent systems using only an acoustic model, and by 5.6% and 4.5%, respectively, for speaker-dependent systems using both acoustic and language models. The results show that the onset-rhyme models attain a high recognition rate and are also more efficient in terms of system complexity.
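    For romanized text, the onset/rhyme split can be sketched as a simple string operation. This toy ignores tones and language-specific phonotactics and is only meant to show where the unit boundary falls; the paper's models are acoustic, not orthographic, and the example syllables are invented.

```python
def onset_rhyme(syllable, vowels="aeiou"):
    """Split a romanized syllable into an onset (initial consonant cluster)
    and a rhyme (vowel nucleus plus any final consonant)."""
    for k, ch in enumerate(syllable):
        if ch in vowels:
            return syllable[:k], syllable[k:]
    return syllable, ""       # no vowel found: treat the whole thing as onset

print(onset_rhyme("krap"))    # ('kr', 'ap')
print(onset_rhyme("mai"))     # ('m', 'ai')
```

The acoustic motivation is that the onset captures the consonant together with its transition into the vowel, while the rhyme captures the steady vowel and final; pairing the two units reconstructs the syllable.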

  • Verification of Synchronization in SpecC Description with the Use of Difference Decision Diagrams

    Thanyapat SAKUNKONCHAK  Satoshi KOMATSU  Masahiro FUJITA  

     
    PAPER-Logic and High Level Synthesis

      Vol:
    E86-A No:12
      Page(s):
    3192-3199

    The SpecC language is designed to handle the design of an entire system, from specification to implementation, including hardware/software co-design. Concurrency, one of the features of SpecC, expresses the parallel execution of processes. Systems containing concurrent behaviors exchange or transfer data among those behaviors, so the synchronization semantics (notify/wait) of events must be incorporated. An actual design, usually sophisticated in its characteristics and functionality, may contain a great deal of event synchronization code, which makes the design difficult and time-consuming to verify. In this paper, we introduce a technique that helps verify the synchronization of events in SpecC. The original SpecC code containing synchronization semantics is parsed and translated into Boolean SpecC code. Difference decision diagrams (DDDs) are used to verify event synchronization on the Boolean SpecC code. When verification fails, counterexamples are given for tracing back to the original source. We also introduce an idea for automatic refinement in the failing case and present some preliminary results.

  • Hardware Algorithm Optimization Using Bach C

    Kazuhisa OKADA  Akihisa YAMADA  Takashi KAMBE  

     
    PAPER

      Vol:
    E85-A No:4
      Page(s):
    835-841

    The Bach compiler is a behavioral synthesis tool, which synthesizes RT-level circuits from behavioral descriptions written in the Bach C language. It shortens the design period of LSI and helps designers concentrate on algorithm design and refinement. In this paper, we propose methods for optimizing the area and performance of algorithms described in Bach C. In our experiments, we optimized a Viterbi decoder algorithm using our proposed methods and synthesized the circuit using the Bach compiler. The conclusion is that the circuit produced using Bach is both smaller and faster than the hand-coded register transfer level (RTL) design. This proves that the Bach compiler produces high-quality results and the Bach C language is effective for describing the behavior of hardware at a high-level.

  • Polynomial Time Learnability of Simple Deterministic Languages from MAT and a Representative Sample

    Yasuhiro TAJIMA  Etsuji TOMITA  Mitsuo WAKATSUKI  

     
    PAPER-Theory of Automata, Formal Language Theory

      Vol:
    E83-D No:4
      Page(s):
    757-765

    We propose a learning algorithm for simple deterministic languages from queries and a priori knowledge. The learner is provided with a special finite subset of the target language, called a representative sample, at the beginning, and two types of queries, equivalence queries and membership queries, are available. The learning algorithm constructs the nonterminals of a hypothesis grammar based on Ishizaka (1990)'s idea. In Ishizaka (1990)'s algorithm, the learner makes as many rules as possible from positive counterexamples and diagnoses wrong rules from negative counterexamples. In contrast, our algorithm guesses a simple deterministic grammar and diagnoses it using positive and negative counterexamples, based on Angluin (1987)'s algorithm.

  • A Reference Model of CAD System Generation from Various Object Model-Based Specification Description Languages Specific to Individual Domains

    Lukman EFENDY  Masaaki HASHIMOTO  Keiichi KATAMINE  Toyohiko HIROTA  

     
    PAPER-System

      Vol:
    E83-D No:4
      Page(s):
    713-721

    This paper proposes a reference model of CAD system generation and describes its prototype implementation. The problems encountered in using CAD systems in industry involve complicated data handling and unsatisfied demands for domain knowledge, owing to the lack of a way of extracting that knowledge and adopting it in the system. In the example domain of architecture, the authors have already defined the domain-specific BDL (Building Design Language), which lets architecture experts themselves describe the modelers of architectural structure in CAD systems. Moreover, the authors have developed a CAD system generator based on BDL descriptions. However, the different domain-specific languages required for individual domains make it difficult to develop the corresponding CAD system generators. The proposed reference model solves this problem by applying a common intermediate language based on the object model. Moreover, the model allows the creation of an integrated CAD system covering the multiple domains required by a field of industry. A prototype implementation demonstrates its feasibility.

  • Hardware Synthesis from C Programs with Estimation of Bit Length of Variables

    Osamu OGAWA  Kazuyoshi TAKAGI  Yasufumi ITOH  Shinji KIMURA  Katsumasa WATANABE  

     
    PAPER

      Vol:
    E82-A No:11
      Page(s):
    2338-2346

    In hardware synthesis methods based on high-level languages such as C, the optimization quality of the compiler has a great influence on the area and speed of the synthesized circuits. Among the hardware-oriented optimizations required in such compilers, minimization of the bit length of the data-paths is one of the most important. In this paper, we propose an algorithm that estimates the necessary bit length of variables for this purpose. The algorithm analyzes the control/data-flow graph translated from the C program and decides the bit length of each variable. In several experiments, the bit length of variables could be reduced by half with respect to the declared length. This method is effective not only for reducing the circuit area but also for reducing the delay of operation units such as adders.
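    The core of such an estimation is range propagation: compute a value interval for each variable along the data flow, then take the minimum width that holds the interval. The sketch below assumes two's-complement widths and a hand-unrolled flow graph; the paper's algorithm works on the control/data-flow graph of the full C program, which this toy does not model.

```python
def bits_needed(lo, hi):
    """Minimum two's-complement width whose range covers [lo, hi]."""
    width = 1
    while not (-(1 << (width - 1)) <= lo and hi <= (1 << (width - 1)) - 1):
        width += 1
    return width

# Propagate value ranges through a tiny dataflow: c = a + b; d = c * 3.
ranges = {"a": (0, 100), "b": (0, 27)}
ranges["c"] = (ranges["a"][0] + ranges["b"][0], ranges["a"][1] + ranges["b"][1])
ranges["d"] = (ranges["c"][0] * 3, ranges["c"][1] * 3)

for v, (lo, hi) in ranges.items():
    print(v, bits_needed(lo, hi))   # far narrower than a declared 32-bit int
```

Even in this toy, `d` needs only 10 bits rather than the 32 of a declared `int`, which is the kind of data-path narrowing that shrinks both area and adder delay.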

  • Applying Program Transformation to Type Inference for a Logic Language

    Yuuichi KAWAGUCHI  Kiyoshi AKAMA  Eiichi MIYAMOTO  

     
    PAPER-Automata, Languages and Theory of Computing

      Vol:
    E81-D No:11
      Page(s):
    1141-1147

    This paper presents a type inference algorithm for a logic language, HS. The algorithm applies a program transformation, SPS, to given programs to perform type inference. This method is theoretically clear, because applying the transformation to a program is equivalent to partially executing it, and no additional framework is needed for our approach. In contrast, many studies on type inference for logic languages are based on Mycroft and O'Keefe's famous algorithm, which was initially developed for functional languages, so the meanings of those algorithms are theoretically unclear in the domain of logic languages. Our type inference is also flexible: users of the type inference system can specify the types of objects abstractly (weakly) if the types are not exactly known, or specify them precisely (strongly) if they are, and types are inferred for both kinds of programs. In contrast, many type inference systems accept only purely untyped programs. With these two features, our method is simple and flexible.

  • Succeeding Word Prediction for Speech Recognition Based on Stochastic Language Model

    Min ZHOU  Seiichi NAKAGAWA  

     
    PAPER-Speech Processing and Acoustics

      Vol:
    E79-D No:4
      Page(s):
    333-342

    For the purpose of automatic speech recognition, language models (LMs) are used to predict possible succeeding words for a given partial word sequence and thereby reduce the search space. In this paper several kinds of stochastic language models (SLMs) are evaluated: bigram, trigram, hidden Markov model (HMM), bigram-HMM, stochastic context-free grammar (SCFG) and a hand-written Bunsetsu grammar. To compare the predictive power of these SLMs, the evaluation was conducted from two points of view: (1) the relationship between the number of model parameters and entropy, and (2) the predictive rate of the succeeding part of speech (POS) and succeeding word. We propose a new type of bigram-HMM and compare it with the other models. Two kinds of approximations are tried and examined through experiments. Results on both the English Brown Corpus and the Japanese ATR dialogue database showed that the extended bigram-HMM performed better than the other models and was more suitable as a language model.
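    As a minimal, self-contained stand-in for the bigram member of such a comparison, the sketch below estimates add-one-smoothed bigram probabilities from a training sequence and reports test-set perplexity, the quantity behind the entropy comparison. The toy corpora and the choice of add-one smoothing are assumptions for the example.

```python
import math
from collections import Counter

def bigram_perplexity(train, test):
    """Perplexity of an add-one-smoothed bigram model on a test sequence."""
    vocab = set(train) | set(test)
    uni = Counter(train)                      # unigram history counts
    bi = Counter(zip(train, train[1:]))       # bigram counts
    V = len(vocab)
    logp = 0.0
    for w1, w2 in zip(test, test[1:]):
        p = (bi[(w1, w2)] + 1) / (uni[w1] + V)   # add-one smoothing
        logp += math.log2(p)
    return 2 ** (-logp / (len(test) - 1))

train = "a b a b a c".split()
test = "a b a c".split()
print(round(bigram_perplexity(train, test), 3))  # ≈ 2.154
```

Lower perplexity means the model spreads less probability mass over impossible continuations, which is exactly what makes an LM useful for pruning the recognizer's search space.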

  • Speech Recognition Using Function-Word N-Grams and Content-Word N-Grams

    Ryosuke ISOTANI  Shoichi MATSUNAGA  Shigeki SAGAYAMA  

     
    PAPER

      Vol:
    E78-D No:6
      Page(s):
    692-697

    This paper proposes a new stochastic language model for speech recognition based on function-word N-grams and content-word N-grams. Conventional word N-gram models are effective for speech recognition, but they represent only local constraints within a few successive words and lack the ability to capture global syntactic or semantic relationships between words. To represent more global constraints, the proposed language model gives the N-gram probabilities of word sequences, with attention given only to function words or only to content words. The sequences of function words and of content words are expected to represent syntactic and semantic constraints, respectively. Probabilities of function-word bigrams and content-word bigrams were estimated from a 10,000-sentence text database, and analysis using an information-theoretic measure showed that the expected constraints were extracted appropriately. As an application of this model to speech recognition, a post-processor was constructed to select the optimum sentence candidate from a phrase lattice obtained by a phrase recognition system. The phrase candidate sequence with the highest total acoustic and linguistic score was found by dynamic programming. The results of experiments carried out on the utterances of 12 speakers showed that the proposed method is more accurate than a CFG-based method, demonstrating its effectiveness in improving speech recognition performance.

  • Task Adaptation in Syllable Trigram Models for Continuous Speech Recognition

    Sho-ichi MATSUNAGA  Tomokazu YAMADA  Kiyohiro SHIKANO  

     
    PAPER

      Vol:
    E76-D No:1
      Page(s):
    38-43

    In speech recognition systems dealing with unlimited vocabulary and based on stochastic language models, when the target recognition task is changed, recognition performance decreases because the language model is no longer appropriate. This paper describes two approaches for adapting a specific/general syllable trigram model to a new task. One uses a small amount of text data similar to the target task, and the other uses supervised learning on the most recent input phrases together with similar text. In this paper, these adaptation methods are called "preliminary learning" and "successive learning", respectively. The adaptations are evaluated using syllable perplexity and phrase recognition rates. The perplexity was reduced from 24.5 to 14.3 by preliminary learning on 1000 phrases of similar text, and to 12.1 by successive learning on 1000 phrases including the 100 most recent phrases. The recognition rates were likewise improved from 42.3% to 51.3% and 52.9%, respectively. Text similarity for the two approaches is also studied in this paper.