
Keyword Search Results

[Keyword] decision trees (8 hits)

Hits 1-8 of 8
  • Batch Updating of a Posterior Tree Distribution Over a Meta-Tree

    Yuta NAKAHARA  Toshiyasu MATSUSHIMA  

     
    LETTER-Learning

    Publicized: 2023/08/23
    Vol: E107-A No:3
    Page(s): 523-525

    Previously, we proposed a probabilistic data generation model represented by an unobservable tree and a sequential updating method to calculate a posterior distribution over a set of trees. The set is called a meta-tree. In this paper, we propose a more efficient batch updating method.
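
    The distinction between sequential and batch posterior updating can be illustrated with a plain Bayesian model average over a finite set of candidate trees. The sketch below, assuming NumPy, is only a flat-weight illustration of that distinction, not the authors' meta-tree algorithm; all names in it are hypothetical.

```python
import numpy as np

def sequential_update(log_prior, log_lik_per_point):
    """Posterior over M candidate trees, updated one point at a time.

    log_prior:         (M,)   log prior over the candidates.
    log_lik_per_point: (N, M) log-likelihood of each of N data points
                              under each candidate tree.
    """
    log_post = log_prior.copy()
    for lp in log_lik_per_point:
        log_post = log_post + lp
        log_post -= np.logaddexp.reduce(log_post)  # renormalize each step
    return log_post

def batch_update(log_prior, log_lik_per_point):
    """The same posterior computed in a single pass over the batch."""
    log_post = log_prior + log_lik_per_point.sum(axis=0)
    return log_post - np.logaddexp.reduce(log_post)
```

    For i.i.d. data the two functions return identical posteriors; the batch form simply avoids the per-point renormalization. The paper's contribution lies in doing this efficiently over the exponentially many trees encoded by a meta-tree, which this flat version does not capture.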

  • Decision Tree-Based Acoustic Models for Speech Recognition with Improved Smoothness

    Masami AKAMINE  Jitendra AJMERA  

     
    PAPER-Speech and Hearing

    Vol: E94-D No:11
    Page(s): 2250-2258

    This paper proposes likelihood smoothing techniques to improve decision tree-based acoustic models, where decision trees are used as replacements for Gaussian mixture models to compute the observation likelihoods for a given HMM state in a speech recognition system. Decision trees have a number of advantageous properties, such as not imposing restrictions on the number or types of features, and automatically performing feature selection. This paper describes basic configurations of decision tree-based acoustic models and proposes two methods to improve the robustness of the basic model: DT mixture models and soft decisions for continuous features. Experimental results for the Aurora 2 speech database show that a system using decision trees offers state-of-the-art performance even without taking full advantage of its potential, and that soft decisions improve the performance of DT-based acoustic models, with a 16.8% relative error-rate reduction over hard decisions.
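
    One common way to realize soft decisions for continuous features is to replace the hard threshold at each internal node with a sigmoid gate that blends both subtrees. The sketch below shows that generic idea only; it is not the paper's exact formulation, and the Node class and beta parameter are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, value=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
        self.value = value  # leaf likelihood when left/right are None

def soft_likelihood(node, x, beta=4.0):
    """Blend both children with a sigmoid gate instead of a hard split.

    beta controls the gate sharpness; beta -> infinity recovers the
    hard decision.
    """
    if node.value is not None:  # leaf node
        return node.value
    w = sigmoid(beta * (x[node.feature] - node.threshold))
    return ((1.0 - w) * soft_likelihood(node.left, x, beta)
            + w * soft_likelihood(node.right, x, beta))

# Toy tree: one split on feature 0 at threshold 0.5.
root = Node(feature=0, threshold=0.5,
            left=Node(value=0.2), right=Node(value=0.9))
print(soft_likelihood(root, [0.45]))  # near the threshold, blends both leaves
```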

  • Image Categorization Using Scene-Context Scale Based on Random Forests

    Yousun KANG  Hiroshi NAGAHASHI  Akihiro SUGIMOTO  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E94-D No:9
    Page(s): 1809-1816

    Scene context plays an important role in scene analysis and object recognition. Among various sources of scene context, we focus on the scene-context scale, i.e., the effective scale of local context for classifying an image pixel in a scene. This paper presents random-forest-based image categorization using the scene-context scale. The proposed method uses random forests, which are ensembles of randomized decision trees. Since random forests are extremely fast in both training and testing, classification, clustering, and regression can be performed in real time. We train multi-scale texton forests, which efficiently provide both a hierarchical clustering into semantic textons and local classification at various scale levels. The scene-context scale can be estimated from the entropy of the leaf nodes in the multi-scale texton forests. For image categorization, we combine the classified category distributions at each scale with the estimated scene-context scale. We evaluate the method on the MSRC21 segmentation dataset and find that using the scene-context scale improves image categorization performance; our results outperform the state of the art in image categorization accuracy.
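
    The entropy of the leaf-node class distributions that a sample reaches is the quantity the abstract uses to estimate the scene-context scale. Below is a minimal sketch of computing such leaf entropies with an off-the-shelf random forest, assuming scikit-learn; it does not reproduce the multi-scale texton forests themselves.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def leaf_entropies(forest, X):
    """Entropy of the leaf class distribution each sample in X reaches,
    averaged over the trees of the forest."""
    leaf_ids = forest.apply(X)  # (n_samples, n_trees) leaf indices
    ent = np.zeros(X.shape[0])
    for t, est in enumerate(forest.estimators_):
        dist = est.tree_.value[leaf_ids[:, t], 0]        # per-leaf class weights
        dist = dist / dist.sum(axis=1, keepdims=True)    # normalize to a distribution
        ent += -(dist * np.log(np.clip(dist, 1e-12, None))).sum(axis=1)
    return ent / len(forest.estimators_)

# Toy usage on random data (illustration only):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(leaf_entropies(forest, X[:5]))  # low entropy = confident leaves
```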

  • Lower Bounds on Quantum Query Complexity for Read-Once Formulas with XOR and MUX Operators

    Hideaki FUKUHARA  Eiji TAKIMOTO  

     
    PAPER

    Vol: E93-D No:2
    Page(s): 280-289

    We introduce a complexity measure r for the class of read-once formulas over the basis {AND, OR, NOT, XOR, MUX} and show that, for any Boolean formula F in this class, r(F) is a lower bound on the quantum query complexity of the Boolean function that F represents. We also show that, for any Boolean function f represented by a formula in this class, the deterministic query complexity of f is only quadratically larger than its quantum query complexity. Thus, the paper gives further evidence for the conjecture that the gap between deterministic and quantum query complexity is at most quadratic for all Boolean functions.
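
    In standard query-complexity notation, with Q(f) and D(f) denoting the quantum and deterministic query complexity of the Boolean function f represented by the read-once formula F, the two results in the abstract can be stated compactly as:

```latex
\[
  r(F) \le Q(f), \qquad D(f) = O\!\left(Q(f)^{2}\right).
\]
```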

  • Speaker Recognition Using Adaptively Boosted Classifiers

    Say-Wei FOO  Eng-Guan LIM  

     
    PAPER-Speech and Speaker Recognition

    Vol: E86-D No:3
    Page(s): 474-482

    In this paper, a novel approach to speaker recognition is proposed. The approach makes use of adaptive boosting (AdaBoost) with classifiers such as multilayer perceptrons (MLP) and C4.5 decision trees for closed-set, text-dependent speaker recognition. The performance of the systems is assessed using a subset of utterances drawn from the YOHO speaker verification corpus. Experiments show that significant improvement in accuracy can be achieved by applying adaptive boosting techniques. Results also reveal that an accuracy of 98.8% for speaker identification can be achieved using the adaptively boosted C4.5 system.
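
    As a rough sketch of the boosted-tree side of this setup, the snippet below trains an AdaBoost ensemble of shallow decision trees with scikit-learn. The synthetic features stand in for the YOHO utterance features, CART stands in for C4.5 (scikit-learn does not ship C4.5), and the `estimator` parameter name assumes scikit-learn 1.2 or later (`base_estimator` in older versions).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for utterance features; the paper uses YOHO data.
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=10, n_classes=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Shallow CART trees serve as the boosted weak learner.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),
    n_estimators=50,
    random_state=0,
).fit(X_tr, y_tr)

print("held-out accuracy:", clf.score(X_te, y_te))
```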

  • Suitable Domains for Using Ordered Attribute Trees to Impute Missing Values

    Oscar-Ortega LOBO  Masayuki NUMAO  

     
    PAPER-Databases

    Vol: E84-D No:2
    Page(s): 262-270

    Using decision trees to fill in missing values in data has been shown experimentally to be useful in some domains. However, this is not the case in general: in other domains, imputing missing attribute values with decision trees does not outperform other methods. Identifying the reasons behind the success or failure of the various imputation methods on different domains can be useful for deciding which technique to use when learning concepts from a new domain with missing values. This paper presents a technique for approaching this goal and reports the results of applying it to predict the success or failure of a method that uses decision trees to fill in missing values in an ordered manner. The results are encouraging because the obtained decision tree is simple, and it can even provide hints for further improving the use of decision trees to impute missing attribute values.
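
    A simplified reading of ordered imputation with decision trees: fill columns one at a time, in order of increasing missingness, training each tree on the columns that are already complete. The sketch below illustrates that idea for discrete-valued attributes under assumptions of mine (a pandas DataFrame, at least one complete predictor column); the paper's ordered attribute trees may differ in detail.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def impute_with_trees(df, target_cols):
    """Fill missing values column by column with decision trees,
    visiting columns in order of increasing missingness."""
    df = df.copy()
    order = sorted(target_cols, key=lambda c: df[c].isna().sum())
    for col in order:
        miss = df[col].isna()
        if not miss.any():
            continue
        # Use only columns that are complete at this point as predictors;
        # a column imputed earlier becomes usable for later columns.
        predictors = [c for c in df.columns
                      if c != col and not df[c].isna().any()]
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(df.loc[~miss, predictors], df.loc[~miss, col])
        df.loc[miss, col] = tree.predict(df.loc[miss, predictors])
    return df

# Toy usage (hypothetical categorical data):
df = pd.DataFrame({"a": [0, 1, 1, 0], "b": [1, 1, 0, 0],
                   "c": [1.0, None, 0.0, None]})
print(impute_with_trees(df, ["c"]))
```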

  • Expressive Tests for Classification and Regression

    Shinichi MORISHITA  Akihiro NAKAYA  

     
    INVITED PAPER

    Vol: E83-D No:1
    Page(s): 52-60

    We address the problem of computing various types of expressive tests for decision trees and regression trees. Using expressive tests is promising because it may improve the prediction accuracy of trees and may also provide hints for scientific discovery. The drawback is that computing an optimal test can be costly. We present a unified framework for approaching this problem and revisit the design of efficient algorithms for computing important special cases. We also prove that it is intractable to compute an optimal conjunction or disjunction.
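
    For the simplest kind of test, a threshold on a single numeric attribute, an optimal test can be found by a linear scan over sorted candidate thresholds, as sketched below. The expressive tests the paper studies (e.g., conjunctions and disjunctions of conditions) are exactly the ones for which such exhaustive optimization becomes intractable.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_threshold_test(x, y):
    """Scan all candidate thresholds on one numeric attribute and
    return the test with the largest information gain."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    base = entropy(y)
    best_gain, best_t = -1.0, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue  # no threshold separates equal values
        t = (x[i] + x[i - 1]) / 2.0
        cond = (i * entropy(y[:i]) + (len(y) - i) * entropy(y[i:])) / len(y)
        if base - cond > best_gain:
            best_gain, best_t = base - cond, t
    return best_t, best_gain

x = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
y = np.array([0, 0, 0, 1, 1])
print(best_threshold_test(x, y))  # threshold 0.6, gain ~0.97 bits
```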

  • Data Analysis by Positive Decision Trees

    Kazuhisa MAKINO  Takashi SUDA  Hirotaka ONO  Toshihide IBARAKI  

     
    PAPER-Theoretical Aspects

    Vol: E82-D No:1
    Page(s): 76-88

    Decision trees are used as a convenient means to explain given positive and negative examples, which is a form of data mining and knowledge discovery. Standard methods such as ID3 may produce non-monotonic decision trees, in the sense that data with larger values in all attributes are sometimes classified into a class with a smaller output value. (In the case of binary data, this is equivalent to saying that the discriminant Boolean function that the decision tree represents is not positive.) A motivation for this study comes from the observation that real-world data are often positive, and in such cases it is natural to build decision trees that represent positive (i.e., monotone) discriminant functions. To this end, we propose how to modify existing procedures such as ID3 so that the resulting decision tree represents a positive discriminant function. In this procedure, we add some new data to recover the positivity of the data, which the original data had but which was lost in the process of decomposing data sets by methods such as ID3. To compare the performance of our method with existing methods, we test (1) positive data, randomly generated from a hidden positive Boolean function after adding dummy attributes, and (2) breast cancer data as an example of real-world data. The experimental results on (1) show that, although positive decision trees are relatively larger than those built without the positivity assumption, they exhibit higher accuracy and tend to choose the correct attributes, on which the hidden positive Boolean function is defined. For the breast cancer data set, we observe a similar tendency: positive decision trees are larger but give higher accuracy.
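
    Positivity of binary data can be checked directly: a dataset is consistent with some positive (monotone) discriminant function iff no example dominates another componentwise while carrying a smaller label. The sketch below implements that check; the paper's procedure of adding data inside the ID3-style decomposition to restore positivity is more involved and is not reproduced here.

```python
from itertools import combinations

def dominates(a, b):
    """True if binary vector a >= b componentwise."""
    return all(ai >= bi for ai, bi in zip(a, b))

def positivity_violations(examples):
    """Return pairs (x, y) with x >= y componentwise but a smaller
    label, i.e. pairs that contradict every positive (monotone)
    discriminant function."""
    bad = []
    for (x, fx), (y, fy) in combinations(examples, 2):
        if dominates(x, y) and fx < fy:
            bad.append((x, y))
        elif dominates(y, x) and fy < fx:
            bad.append((y, x))
    return bad

# Toy usage: (1,1,0) dominates (1,0,0) yet has a smaller label.
data = [((1, 1, 0), 0), ((1, 0, 0), 1), ((0, 1, 1), 1)]
print(positivity_violations(data))
```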