Keyword Search Result

[Keyword] logic programming (20 hits)

1-20 of 20 hits
  • Encoding Argumentation Semantics by Boolean Algebra

    Fuan PU  Guiming LUO  Zhou JIANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/01/18
      Vol:
    E100-D No:4
      Page(s):
    838-848

    In this paper, a Boolean algebra approach is proposed to encode various acceptability semantics for abstract argumentation frameworks, where each semantics can be equivalently encoded into several Boolean constraint models based on Boolean matrices and a family of Boolean operations between them. Then, we show that these models can be easily translated into logic programs, and can be solved by a constraint solver over Boolean variables. In addition, we propose some querying strategies to accelerate the calculation of the grounded, stable and complete extensions. Finally, we describe an experimental study on the performance of our encodings according to different semantics and querying strategies.
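
    As a rough illustration of this style of encoding (a sketch only, not the models proposed in the paper), the following Python snippet tests whether a set of arguments is a stable extension using nothing but a Boolean attack matrix and Boolean operations; the attack matrix D and the function is_stable are assumptions introduced for illustration.

    ```python
    import numpy as np

    def is_stable(D, s):
        """A set s (Boolean vector) is stable iff an argument belongs to s
        exactly when it is not attacked by any member of s."""
        attacked_by_s = (s.astype(int) @ D.astype(int)) > 0   # args attacked by s
        return bool(np.array_equal(s, ~attacked_by_s))

    # Tiny framework: a attacks b, b attacks c (indices 0, 1, 2).
    D = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]], dtype=bool)
    print(is_stable(D, np.array([True, False, True])))   # {a, c}: True
    print(is_stable(D, np.array([False, True, False])))  # {b}: False
    ```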

  • Speculative Computation and Abduction for an Autonomous Agent

    Ken SATOH  

     
    PAPER

      Vol:
    E88-D No:9
      Page(s):
    2031-2038

    In this paper, we propose an agent architecture for a combination of speculative computation and abduction. Speculative computation is a tentative computation performed when complete information is not available. We use a default value to complement such incomplete information. Unlike usual default reasoning, the real value for the information can be obtained during the computation, and the computation can be revised on the fly. In previous work, we applied this technique to distributed problem solving under incomplete communication environments in the context of multi-agent systems and proposed correct procedures in abductive logic programming in terms of perfect model semantics. In that work, however, we regarded assumptions as defaults and used them only for speculative computation; thus, we could not perform hypothetical reasoning, that is, the original usage of abduction. In this paper, we extend our framework so that speculative computation and abduction can both be performed. As a result, our procedure becomes an extension of the abductive procedure developed by Kakas and Mancarella, augmented with a dynamic belief revision mechanism for the outside world.
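
    The general idea can be pictured with a small sketch (all names and the dictionary-based structure below are illustrative assumptions, not the paper's procedure): the agent computes with a default answer and revises its conclusion if the real answer later turns out to differ.

    ```python
    # Speculative computation sketch: use a default for an unanswered query,
    # revise the result when the real answer arrives.
    defaults = {"weather(tomorrow)": "fine"}   # default values for askable queries
    answers  = {}                              # real answers arriving from outside

    def plan(weather):
        return "go hiking" if weather == "fine" else "stay home"

    def speculative_plan(query):
        # Compute with the real answer if known, otherwise with the default.
        value = answers.get(query, defaults[query])
        return plan(value)

    print(speculative_plan("weather(tomorrow)"))  # tentative: 'go hiking'
    answers["weather(tomorrow)"] = "rain"         # real answer arrives later
    print(speculative_plan("weather(tomorrow)"))  # revised:   'stay home'
    ```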

  • Deriving Discrete Behavior of Hybrid Systems under Incomplete Knowledge

    Kunihiko HIRAISHI  

     
    PAPER-Hybrid Systems

      Vol:
    E87-A No:11
      Page(s):
    2913-2918

    We study the analysis of hybrid systems under incomplete knowledge. The class of hybrid systems considered is assumed to have the form of a rectangular hybrid automaton in which each constant in invariants and guards is given as a parameter. We develop a method based on symbolic computation that computes an approximation of the discrete behavior of the automaton. We also show an implementation in a constraint logic programming language.
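
    A very small sketch of this style of approximation, under assumptions of our own (one continuous variable and numeric interval bounds instead of symbolic parameters), shows how rectangular flow bounds yield an over-approximation of reachable values, from which the possibly enabled discrete transitions can be read off; this is not the paper's constraint-logic-programming implementation.

    ```python
    def reach_interval(x_lo, x_hi, rate_lo, rate_hi, t_max):
        """Values reachable from [x_lo, x_hi] within time [0, t_max] when the
        derivative stays inside the rectangular bound [rate_lo, rate_hi]."""
        lo = x_lo + min(0.0, rate_lo * t_max)
        hi = x_hi + max(0.0, rate_hi * t_max)
        return lo, hi

    def may_fire(guard_lo, guard_hi, reach_lo, reach_hi):
        """A guard interval may be satisfied iff it intersects the reach interval."""
        return max(guard_lo, reach_lo) <= min(guard_hi, reach_hi)

    lo, hi = reach_interval(0.0, 1.0, rate_lo=1.0, rate_hi=2.0, t_max=3.0)
    print((lo, hi))                     # (0.0, 7.0)
    print(may_fire(5.0, 6.0, lo, hi))   # guard x in [5, 6] may become enabled: True
    ```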

  • Semantics of Normal Goals as Acquisitors Caused by Negation as Failure

    Susumu YAMASAKI  

     
    PAPER-Theory and Models of Software

      Vol:
    E86-D No:6
      Page(s):
    993-1000

    We are concerned with semantic views on an extended version of SLD resolution with negation as failure (SLDNF resolution) for normal logic programs, which Eshghi and Kowalski (1989) presented by making SLDNF resolution capable of keeping negated predicates in memory and of extracting abducible predicates. This paper deals with its formal representation in relational form, for the purpose of interpreting the normal goal as an acquisitor of negated predicates stored in memory. A set acquired by the derivations that the normal goal evokes is defined as a semantics of the goal, under the constraint that the set is as large as possible and does not violate consistency in model theory. The semantics is discussed in relation to the 3-valued logic model theory, where the model theory is represented by alternating fixpoint semantics (Van Gelder, 1993). For simplicity of treatment, this paper is concerned with normal logic programs in propositional logic.

  • Collaborative Constraint Functional Logic Programming System in an Open Environment

    Norio KOBAYASHI  Mircea MARIN  Tetsuo IDA  

     
    PAPER-Cooperation in Distributed Systems and Agents

      Vol:
    E86-D No:1
      Page(s):
    63-70

    In this paper we describe collaborative constraint functional logic programming and the system called Open CFLP that supports this programming paradigm. The system solves equations through the collaboration of various equational constraint solvers. The solvers include higher-order lazy narrowing calculi that serve as the interpreter of higher-order functional logic programming, and specialized solvers for solving equations over specific domains, such as a polynomial solver and a differential equation solver. The constraint solvers are distributed in an open environment such as the Internet and act as providers of constraint solving services. The collaboration between solvers is programmed in a coordination language embedded in a host language. In Open CFLP the user can solve equations in a higher-order functional logic programming style and still exploit solving resources on the Internet without writing low-level programs for distributing resources or specifying the details of the solvers deployed there.
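
    The collaboration pattern can be pictured with a minimal dispatch sketch (the registry, solver names, and routing rule are assumptions for illustration, not Open CFLP's coordination language): a coordinator forwards each constraint to the solver that handles its domain and collects the partial answers.

    ```python
    # Each "solver" maps a constraint string to a (possibly partial) answer.
    def polynomial_solver(c):
        return f"roots of {c}"        # placeholder for a real polynomial solver

    def differential_solver(c):
        return f"solution of {c}"     # placeholder for a real ODE solver

    registry = {
        "poly": polynomial_solver,
        "ode":  differential_solver,
    }

    def coordinate(constraints):
        """Route each (kind, constraint) pair to its solver and collect answers."""
        return [registry[kind](c) for kind, c in constraints]

    print(coordinate([("poly", "x^2 - 1 = 0"), ("ode", "y' = y")]))
    ```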

  • Discovering Knowledge from Graph Structured Data by Using Refutably Inductive Inference of Formal Graph Systems

    Tetsuhiro MIYAHARA  Tomoyuki UCHIDA  Takayoshi SHOUDAI  Tetsuji KUBOYAMA  Kenichi TAKAHASHI  Hiroaki UEDA  

     
    PAPER

      Vol:
    E84-D No:1
      Page(s):
    48-56

    We present a new method for discovering knowledge from structured data represented by graphs, in the framework of Inductive Logic Programming. A graph, or network, is widely used for representing relations between various data and for expressing a small and easily understandable hypothesis, so an analysis system that directly manipulates graphs is useful for knowledge discovery. Our method uses Formal Graph System (FGS) as a knowledge representation language for graph structured data. FGS is a kind of logic programming system that deals directly with graphs in the role of first-order terms, and our method employs a refutably inductive inference algorithm as its learning algorithm. A refutably inductive inference algorithm is a special type of inductive inference algorithm with refutability of hypothesis spaces, and is suitable for knowledge discovery. We give a sufficiently large hypothesis space, the set of weakly reducing FGS programs, and show that this hypothesis space is refutably inferable from complete data. We have designed and implemented a prototype of a knowledge discovery system, KD-FGS, which is based on our method and acquires knowledge directly from graph structured data. Finally, we discuss the applicability of our method to graph structured data with experimental results on some graph-theoretical notions.

  • Inductive Logic Programming: From Logic of Discovery to Machine Learning

    Hiroki ARIMURA  Akihiro YAMAMOTO  

     
    INVITED PAPER

      Vol:
    E83-D No:1
      Page(s):
    10-18

    Inductive Logic Programming (ILP) is the study of machine learning systems that use clausal theories in first-order logic as a representation language. In this paper, we survey the theoretical foundations of ILP from the viewpoints of the Logic of Discovery and Machine Learning, and try to unify these two views with the support of the modern theory of Logic Programming. First, we define several hypothesis construction methods in ILP and give their proof-theoretic foundations by treating them as procedures that complete incomplete proofs. Next, we discuss the design of individual learning algorithms using these hypothesis construction methods. We review known results on learning logic programs in computational learning theory, and show that these algorithms are instances of a generic learning strategy with proof completion methods.

  • On the Necessity of Special Mechanisms for Handling Types in Inductive Logic Programming

    Yutaka SASAKI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E82-D No:10
      Page(s):
    1401-1408

    This paper demonstrates the necessity of special handling mechanisms for type (or sort) information when learning logic programs on the basis of background knowledge that includes a type hierarchy. We have developed a novel relational learner, RHB, which incorporates special operations for computing the least general generalization (lgg) of examples and the code length of logic programs with types. Previous learners, such as FOIL, GOLEM and Progol, can generate logic programs that include type information represented as is_a relations. However, this expedient has two problems: one in the computation of the code length and the other in performance. We illustrate that simply adding is_a relations to background knowledge as ordinary literals causes a problem in computing the code length of logic programs with is_a literals. Experimental results on artificial data show that the learning speed of FOIL slows exponentially as the number of types in the background knowledge increases. The hypotheses generated by GOLEM are about 30% less accurate than those of RHB, and Progol is two times slower than RHB. Compared to these three learners, RHB can efficiently handle about 3000 is_a relations while still achieving high accuracy. This indicates that type information should be handled specially when learning logic programs with types.
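
    The role of a type hierarchy in lgg computation can be illustrated with a simplified sketch (the hierarchy, names, and generalization rule below are assumptions for illustration, not RHB's actual operations): two distinct constants generalize to a variable typed by the least common ancestor of their types in the is_a hierarchy.

    ```python
    is_a = {"dog": "mammal", "cat": "mammal", "mammal": "animal",
            "sparrow": "bird", "bird": "animal", "animal": None}

    def ancestors(t):
        chain = []
        while t is not None:
            chain.append(t)
            t = is_a[t]
        return chain

    def least_common_type(t1, t2):
        a1 = ancestors(t1)
        return next(t for t in ancestors(t2) if t in a1)

    def lgg_constant(c1, t1, c2, t2):
        """lgg of two typed constants: the constant itself if equal, otherwise
        a fresh variable constrained to the least common type."""
        if c1 == c2:
            return c1
        return f"X:{least_common_type(t1, t2)}"

    print(lgg_constant("fido", "dog", "tom", "cat"))        # X:mammal
    print(lgg_constant("fido", "dog", "tweety", "sparrow")) # X:animal
    ```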

  • The Underlying Ontology of a DSS Generator for Transportation Demand Forecasting

    Cristina FIERBINTEANU  Toshio OKAMOTO  Naotugu NOZUE  

     
    PAPER-Theory and Methodology

      Vol:
    E81-D No:12
      Page(s):
    1330-1338

    We introduce an ontology for transportation systems demand forecasting and its implementation in a decision support system (DSS) generator. The term ontology, as we use it here, means a collection of building blocks necessary and sufficient to construct a skeleton of a specific DSS, that is, a task ontology. The ontology is specified in constraint logic, which also ensures good support for modularity.

  • Finding Priorities of Circumscription Policy as a Skeptical Explanation in Abduction

    Toshiko WAKAKI  Ken SATOH  Katsumi NITTA  Seiichiro SAKURAI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E81-D No:10
      Page(s):
    1111-1119

    In commonsense reasoning, priorities among rules often need to be found in order to derive the desired conclusion as a theorem of the reasoning. In this paper, we first present bottom-up and top-down abduction procedures to compute skeptical explanations, and then show that the priorities of circumscription needed to infer a desired theorem can be abduced as a skeptical explanation. In our approach, the required priorities can be computed with the procedure for computing skeptical explanations provided in this paper, together with Wakaki and Satoh's method of compiling circumscription into extended logic programs. The method, for example, enables us to automatically find the adequate priority w.r.t. the Yale Shooting Problem to express natural human reasoning in the framework of circumscription.

  • Implementation of a Parallel Prolog System on a Distributed Memory Parallel Computer

    Hideo MATSUDA  Toru KAWABATA  Yukio KANEDA  

     
    PAPER

      Vol:
    E80-D No:4
      Page(s):
    504-509

    In this paper we propose a new method for the parallel execution of Prolog programs and present its implementation on a distributed memory parallel computer, the Fujitsu AP1000. In our method a number of processes (named Prolog engines) explore different branches of a search tree (named tasks) in parallel, as in OR-parallelism. Unlike OR-parallelism, however, the mapping between Prolog engines and tasks is determined statically, as in data parallelism: each Prolog engine can decide which tasks it executes without communicating with the other engines. In many search problems, however, such static task mapping may cause an imbalance in the processing time of each engine, since the computational costs of exploring branches can vary substantially. To cope with this issue, we devise a method that adjusts the task imbalance by periodically exchanging the number of tasks processed by each engine. To reduce the communication overhead of load balancing, we also limit the scope of engines that exchange load information with each other. The effectiveness of our method is evaluated by measuring execution times for N Queens and the Traveling Salesman Problem on the AP1000. Using 512 processors, we obtained a 355-fold speedup for N Queens and a 343-fold speedup for the Traveling Salesman Problem.
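
    The two ideas, static task mapping and periodic exchange of load information, can be sketched as follows (an illustrative sketch with assumed names, not the AP1000 implementation).

    ```python
    NUM_ENGINES = 4

    def owner(task_id):
        """Static task mapping: ownership is a pure function of the task id,
        so no communication is needed to assign work."""
        return task_id % NUM_ENGINES

    def my_tasks(engine_id, num_tasks):
        return [t for t in range(num_tasks) if owner(t) == engine_id]

    def most_and_least_loaded(processed_counts):
        """After exchanging processed-task counts, identify the busiest and
        idlest engines (candidates for shifting work in the next period)."""
        busiest = max(range(len(processed_counts)), key=processed_counts.__getitem__)
        idlest  = min(range(len(processed_counts)), key=processed_counts.__getitem__)
        return busiest, idlest

    print(my_tasks(engine_id=1, num_tasks=10))       # [1, 5, 9]
    print(most_and_least_loaded([120, 40, 95, 60]))  # (0, 1)
    ```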

  • Reversible Functor: Immutable Aggregate with Constant Time Update Operation

    Tatsuya AOYAGI  

     
    PAPER-Software Theory

      Vol:
    E79-D No:12
      Page(s):
    1646-1654

    In logic programming and functional programming languages, data objects such as terms and lists are immutable. In a basic implementation of such a language, updating one element of an aggregate (a contiguous data structure, such as an array) involves making a new copy of the whole aggregate. Such copying can be expensive, and it can be avoided by using a destructive update. We introduce the concept of a wrapper that enables destructive operations on an immutable object. Based on this concept, we designed the reversible functor as a solution to the aggregate update problem. We implemented the reversible functor in the existing SB-Prolog system and carried out several benchmarks, whose results show its effectiveness. When a large functor is used and updated many times, performance improves dramatically with the reversible functor. It incurs some overhead at runtime, but the amount is small and acceptable.
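
    The idea of presenting an immutable interface over a destructively updated buffer can be sketched as below (a conceptual sketch with assumed names, not the SB-Prolog implementation): each update mutates the buffer in constant time while recording the reverse delta needed to restore an earlier version.

    ```python
    class ReversibleArray:
        def __init__(self, data):
            self._buf = list(data)    # the single, destructively updated buffer
            self._undo = []           # reverse deltas: (index, old_value)

        def set(self, i, value):
            """Constant-time 'functional' update: mutate in place, log the undo."""
            self._undo.append((i, self._buf[i]))
            self._buf[i] = value
            return len(self._undo)    # version handle for the new state

        def get(self, i):
            return self._buf[i]       # latest version: direct O(1) access

        def rollback(self, version):
            """Restore an earlier version by replaying the recorded undos."""
            while len(self._undo) > version:
                i, old = self._undo.pop()
                self._buf[i] = old

    a = ReversibleArray([0, 0, 0])
    v1 = a.set(1, 42)
    a.set(2, 7)
    a.rollback(v1)
    print(a._buf)   # [0, 42, 0]
    ```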

  • Conceptual Graph Programs and Their Declarative Semantics

    Bikash Chandra GHOSH  Vilas WUWONGSE  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E78-D No:9
      Page(s):
    1208-1217

    Conceptual graph formalism is a knowledge representation language in AI based on a graphical form of logic. Although logic is the basis of the conceptual graph theory, there is a strongly felt absence of a formal treatment of conceptual graphs as a logic programming language. In this paper, we develop the notion of a conceptual graph program as a kind of graph-based order-sorted logic program. First, we define the syntax of the conceptual graph program by specifying its major syntactic elements. Then, we develop a kind of model theoretic semantics and fixpoint semantics of the conceptual graph program. Finally, we show that the two types of semantics coincide for the conceptual graph programs.

  • Learning Logic Programs Using Definite Equality Theories as Background Knowledge

    Akihiro YAMAMOTO  

     
    PAPER-Computational Learning Theory

      Vol:
    E78-D No:5
      Page(s):
    539-544

    In this paper we investigate the learnability of relations in Inductive Logic Programming, using equality theories as background knowledge. We assume that a hypothesis is a definite program and that an observation is a set of ground literals. The targets of our learning algorithm are relations. By using equality theories as background knowledge we introduce a tree structure into definite programs, and this structure enables us to narrow the search space of hypotheses. We give pairs of a hypothesis language and a knowledge language in order to discuss the learnability of relations from the viewpoint of inductive inference and PAC learning.

  • High-Level VLSI Design Specification Validation Using Algorithmic Debugging

    Jiro NAGANUMA  Takeshi OGURA  Tamio HOSHINO  

     
    PAPER

      Vol:
    E77-A No:12
      Page(s):
    1988-1998

    This paper proposes a new environment for high-level VLSI design specification validation using "Algorithmic Debugging" and evaluates its benefits on three significant examples (a protocol processor, an 8-bit CPU, and a Prolog processor). A design is specified at a high level using the structured analysis (SA) method, which is useful for analyzing and understanding the functionality to be realized. The specification written in SA is transformed into a logic programming language and simulated in it. The errors (which terminate with an incorrect output in the simulation) contained in the three large examples are efficiently located by answering just a few queries from the algorithmic debugger. The number of interactions between the designer and the debugger is reduced by a factor of ten to a hundred compared to conventional simulation-based validation methodologies. The correct SA specification can be automatically translated into a Register Transfer Level (RTL) specification suitable for logic synthesis. In this environment, a designer is freed from the tedious task of debugging an RTL specification and can concentrate on the design itself. This environment promises to be an important step towards efficient high-level VLSI design specification validation.
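
    Algorithmic debugging itself can be sketched in a few lines (an illustrative sketch, not the paper's environment): the debugger walks the computation tree, asks the designer whether each subcomputation's result is as intended, and blames a node whose result is wrong while all of its children are right.

    ```python
    class Node:
        def __init__(self, label, result, children=()):
            self.label, self.result, self.children = label, result, list(children)

    def debug(node, oracle):
        """Return the node blamed for an incorrect top-level result."""
        for child in node.children:
            if not oracle(child):           # child produced a wrong result
                return debug(child, oracle)
        return node                         # wrong result, all children correct

    # Toy computation tree: the top result is wrong because 'decode' is wrong.
    tree = Node("cpu_step", "wrong", [
        Node("fetch",   "ok"),
        Node("decode",  "wrong", [Node("read_field", "ok")]),
        Node("execute", "ok"),
    ])
    oracle = lambda n: n.result == "ok"     # designer answering yes/no queries
    print(debug(tree, oracle).label)        # -> 'decode'
    ```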

  • Outside-In Conditional Narrowing

    Tetsuo IDA  Satoshi OKUI  

     
    PAPER-Automata, Languages and Theory of Computing

      Vol:
    E77-D No:6
      Page(s):
    631-641

    We present outside-in conditional narrowing for orthogonal conditional term rewriting systems, and show the completeness of leftmost-outside-in conditional narrowing with respect to normalizable solutions. We consider orthogonal conditional term rewriting systems whose conditions consist of strict equality only. Completeness results are obtained for systems both with and without extra variables. The result bears practical significance since orthogonal conditional term rewriting systems can be viewed as a computation model for functional-logic programming languages and leftmost-outside-in conditional narrowing is the computing mechanism for the model.

  • cu-Prolog for Constraint-Based Natural Language Processing

    Hiroshi TSUDA  

     
    PAPER

      Vol:
    E77-D No:2
      Page(s):
    171-180

    This paper introduces a constraint logic programming (CLP) language, cu-Prolog, as an implementation framework for constraint-based natural language processing. Compared to other CLP languages, cu-Prolog has several unique features. Most CLP languages take algebraic equations or inequations as constraints; cu-Prolog, on the other hand, takes Prolog atomic formulas in terms of user-defined predicates. cu-Prolog can thus describe the symbolic and combinatorial constraints occurring in constraint-based grammar formalisms. As a constraint solver, cu-Prolog applies the unfold/fold transformation, well known as a program transformation technique, dynamically with some heuristics. To treat the information partiality described with feature structures, cu-Prolog uses the PST (Partially Specified Term) as its data structure. Sections 1 and 2 give an introduction to the constraint-based grammar formalisms on which this paper is based; the outline of cu-Prolog is explained in Sect. 3, with implementation issues described in Sect. 4. Section 5 illustrates its linguistic application to disjunctive feature structures (DFS) and to parsing constraint-based grammar formalisms such as Japanese Phrase Structure Grammar (JPSG). In both applications, a disambiguation process is realized by transforming constraints, which gives a picture of constraint-based NLP.
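
    A single unfold step can be pictured with a deliberately simplified, propositional sketch (cu-Prolog itself works on atomic formulas with terms and applies unfold/fold with heuristics; the program and names below are assumptions for illustration): an atom in a constraint is replaced by the body of each clause defining it, yielding alternative, more solved constraints.

    ```python
    program = {                      # predicate -> list of clause bodies
        "member_xy": [["x_is_a"], ["y_is_b"]],
        "x_is_a":    [[]],           # a fact: empty body
    }

    def unfold(constraint, atom):
        """Replace `atom` in `constraint` by each defining clause body."""
        rest = [a for a in constraint if a != atom]
        return [sorted(set(rest + body)) for body in program[atom]]

    print(unfold(["member_xy", "agree"], "member_xy"))
    # -> [['agree', 'x_is_a'], ['agree', 'y_is_b']]
    ```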

  • Derivation of a Parallel Bottom-Up Parser from a Sequential Parser

    Kazuko TAKAHASHI  

     
    PAPER-Software Theory

      Vol:
    E75-D No:6
      Page(s):
    852-860

    This paper describes the derivation of a parallel program from a nondeterministic sequential program using a bottom-up parser as an example. The derivation procedure consists of two stages: exploitation of AND-parallelism and exploitation of OR-parallelism. An interpreter of the sequential parser BUP is first transformed so that processes for the nodes in a parsing tree can run in parallel. Then, the resultant program is transformed so that a nondeterministic search of a parsing tree can be done in parallel. The former stage is performed by hand-simulation, and the latter is accomplished by the compiler of ANDOR-, which is an AND/OR parallel logic programming language. The program finally derived, written in KL1 (Kernel Language of the FGCS Project), achieves an all-solution search without side effects. The program generated corresponds to an interpreter of PAX, a revised parallel version of BUP. This correspondence shows that the derivation method proposed in this paper is effective for creating efficient parallel programs.

  • Refining Theory with Multiple Faults

    Somkiat TANGKITVANICH  Masamichi SHIMURA  

     
    PAPER

      Vol:
    E75-D No:4
      Page(s):
    470-476

    This paper presents a system that automatically refines theories expressed in function-free first-order logic. Our system can efficiently correct multiple faults in both the concept and the subconcepts of a theory, given only classified examples of the concept. It can refine larger classes of theory than existing systems because it overcomes many of their limitations. Our system is based on a new combination of inductive and explanation-based learning algorithms, which we call the biggest-first multiple-example EBL (BM-EBL). From a learning perspective, our system improves on the FOIL learning system in that it can accept a theory as well as examples. An experiment shows that even when our system is given a theory with a classification error rate as high as 50%, it still learns faster and more accurately than when it is given no theory at all.

  • Analogical Reasoning as a Form of Hypothetical Reasoning

    Ryohei ORIHARA  

     
    PAPER

      Vol:
    E75-D No:4
      Page(s):
    477-486

    The meaning of analogical reasoning in locally stratified logic programs is described by generalized stable model (GSM) semantics. Although studies on the theoretical aspects of analogical reasoning have recently been on the increase, there have been few attempts to give declarative semantics for analogical reasoning. This paper builds on the fact that GSM semantics gives meaning to the effect that negated predicates represent exceptional cases. We define predicates that denote unusual cases regarding analogical reasoning, for example ab(x) ← p(x) ∧ ¬q(x), where p(s), q(s), and p(t) are given. We also add rules with negated occurrences of such predicates to the original program. In this way, analogical models for original programs are given in the form of GSMs of extended programs. A proof procedure for this semantics is presented. The main objective of this paper is not to construct a practical analogical reasoning system, but rather to present a framework for analyzing the characteristics of analogical reasoning.
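
    A naive procedural approximation of the analogical step (for illustration only; the paper's semantics is given by generalized stable models, not by this check, and all names below are assumptions) reads: if some base case has both p and q, then any target with p that is not a known exception is analogically concluded to satisfy q.

    ```python
    p_facts  = {"s", "t"}     # p(s), p(t) are given
    q_facts  = {"s"}          # q(s) is given
    ab_facts = set()          # known exceptional cases (ab) -- none yet

    def analogical_q(x):
        """Conclude q(x) by analogy with some base case having both p and q."""
        has_base = any(b in q_facts for b in p_facts)
        return x in p_facts and x not in ab_facts and (x in q_facts or has_base)

    print(analogical_q("t"))  # True: t is analogous to s, so q(t) is concluded
    ab_facts.add("t")         # if t is declared exceptional, the analogy is blocked
    print(analogical_q("t"))  # False
    ```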