
Refining Theory with Multiple Faults

Somkiat TANGKITVANICH, Masamichi SHIMURA


Summary:

This paper presents a system that automatically refines theories expressed in function-free first-order logic. Our system can efficiently correct multiple faults in both the concept and the subconcepts of a theory, given only classified examples of the concept. It can refine larger classes of theories than existing systems because it overcomes many of their limitations. The system is based on a new combination of an inductive learning algorithm and an explanation-based learning algorithm, which we call biggest-first multiple-example EBL (BM-EBL). From a learning perspective, our system improves on the FOIL learning system in that it accepts a theory as well as examples. An experiment shows that even when the system is given a theory whose classification error rate is as high as 50%, it still learns faster and more accurately than when it is given no theory at all.
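The BM-EBL procedure itself is not detailed in this summary, but the general idea of theory refinement from classified examples can be illustrated with a toy sketch. The propositional clause/example representation and the identifiers below (covers, predict, refine) are invented for illustration only; this is not the paper's BM-EBL algorithm, just a minimal revise-then-induce loop in the same spirit: retract clauses that misclassify negative examples, then induce new clauses for the positives the revised theory no longer covers.

# Toy theory-refinement sketch (not the paper's BM-EBL): propositional clauses
# over boolean attributes, revised against classified examples.
from typing import Dict, List, Tuple

Example = Dict[str, bool]          # attribute name -> truth value
Clause = List[Tuple[str, bool]]    # conjunction of (attribute, required value) tests

def covers(clause: Clause, ex: Example) -> bool:
    """A clause covers an example if every test in the conjunction holds."""
    return all(ex.get(attr, False) == val for attr, val in clause)

def predict(theory: List[Clause], ex: Example) -> bool:
    """The theory classifies an example as positive if any clause covers it."""
    return any(covers(c, ex) for c in theory)

def refine(theory: List[Clause], pos: List[Example], neg: List[Example]) -> List[Clause]:
    # Step 1: retract faulty clauses, i.e. any clause covering a negative example.
    revised = [c for c in theory if not any(covers(c, ex) for ex in neg)]
    # Step 2: for each positive the revised theory misses, induce a new clause:
    # start from the example's full description, then greedily drop tests
    # as long as the generalized clause still covers no negative example.
    for ex in pos:
        if predict(revised, ex):
            continue
        clause: Clause = sorted(ex.items())
        for test in list(clause):
            trial = [t for t in clause if t != test]
            if trial and not any(covers(trial, n) for n in neg):
                clause = trial
        revised.append(clause)
    return revised

# Usage: a faulty one-clause theory is retracted and replaced.
faulty_theory = [[("lays_eggs", True)]]
pos = [{"has_wings": True, "lays_eggs": True}]
neg = [{"has_wings": False, "lays_eggs": True}]
print(refine(faulty_theory, pos, neg))   # -> [[('has_wings', True)]]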

Publication
IEICE TRANSACTIONS on Information Vol.E75-D No.4 pp.470-476
Publication Date
1992/07/25
Type of Manuscript
Special Section PAPER (Special Issue on Algorithmic Learning Theory)