
Keyword Search Result

[Keyword] interpretability (5 hits)

Hits 1-5 of 5
  • Alternative Ruleset Discovery to Support Black-Box Model Predictions

    Yoichi SASAKI  Yuzuru OKAJIMA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2023/03/09
      Vol:
    E106-D No:6
      Page(s):
    1130-1141

    The increasing attention to the interpretability of machine learning models has led to the development of methods that explain the behavior of black-box models in a post-hoc manner. However, such post-hoc approaches generate a new explanation for every new input, and these explanations cannot be checked by humans in advance. A method that selects decision rules from a finite ruleset as explanations for neural networks has been proposed, but it cannot be used for other models. In this paper, we propose a model-agnostic explanation method to find a pre-verifiable finite ruleset from which a decision rule is selected to support every prediction made by a given black-box model. First, we define an explanation model that selects, from a ruleset, the rule that gives the closest prediction; this rule works as an alternative explanation or supportive evidence for the prediction of the black-box model. The ruleset should have high coverage to give close predictions for future inputs, but it should also be small enough to be checkable by humans in advance. However, minimizing the ruleset while keeping high coverage leads to a computationally hard combinatorial problem. Hence, we show that this problem can be reduced to a weighted MaxSAT problem composed only of Horn clauses, which can be solved efficiently with modern solvers. Experimental results showed that our method finds small rulesets whose selected rules achieve higher accuracy on structured data than the existing method using rulesets of almost the same size. We also experimentally compared the proposed method with two purely rule-based models, CORELS and defragTrees. Furthermore, we examine rulesets constructed for real datasets and discuss the characteristics of the proposed method from different viewpoints, including interpretability, limitations, and possible use cases.
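
    A minimal, hypothetical sketch of the explanation model this abstract describes: from a finite ruleset, select the applicable rule whose prediction is closest to the black-box model's output, and return it as supportive evidence. The Rule class and explain function are illustrative assumptions, not the paper's actual code; the MaxSAT-based ruleset minimization is not shown.

    ```python
    # Hypothetical sketch: pick, from a finite ruleset, the applicable rule whose
    # prediction is closest to the black-box model's prediction for the same input.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Optional

    @dataclass
    class Rule:
        conditions: Dict[str, Callable[[float], bool]]  # feature name -> predicate
        prediction: float

        def applies(self, x: Dict[str, float]) -> bool:
            return all(pred(x[name]) for name, pred in self.conditions.items())

    def explain(x: Dict[str, float],
                ruleset: List[Rule],
                black_box: Callable[[Dict[str, float]], float]) -> Optional[Rule]:
        """Return the applicable rule whose prediction is closest to the model's."""
        y = black_box(x)
        candidates = [r for r in ruleset if r.applies(x)]
        if not candidates:
            return None
        return min(candidates, key=lambda r: abs(r.prediction - y))
    ```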

  • SeCAM: Tightly Accelerate the Image Explanation via Region-Based Segmentation

    Phong X. NGUYEN  Hung Q. CAO  Khang V. T. NGUYEN  Hung NGUYEN  Takehisa YAIRI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2022/05/11
      Vol:
    E105-D No:8
      Page(s):
    1401-1417

    In recent years, there has been an increasing trend of applying artificial intelligence in many different fields, with a profound and direct impact on human life. Consequently, this raises the need to understand how models make their predictions. Since most current high-precision models are black boxes, neither the AI scientist nor the end user deeply understands what is happening inside these models. Therefore, many algorithms have been studied to explain AI models, especially those for the image classification problem in computer vision, such as LIME, CAM, and GradCAM. However, these algorithms still have limitations, such as LIME's long execution time and CAM's interpretations lacking concreteness and clarity. Therefore, in this paper, we propose a new method called Segmentation - Class Activation Mapping (SeCAM). This method combines the advantages of the algorithms above while simultaneously overcoming their disadvantages. We tested this algorithm with various models, including ResNet50, InceptionV3, and VGG16, on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset. Outstanding results were achieved: the algorithm met all the requirements for a specific explanation in a remarkably short time.
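
    An illustrative sketch (not the authors' implementation) of the core idea this abstract describes: aggregate a class activation map over superpixel regions so that the explanation follows object boundaries. The use of SLIC segmentation and the parameter values are assumptions.

    ```python
    # Hypothetical sketch: average CAM scores within each superpixel region.
    import numpy as np
    from skimage.segmentation import slic

    def region_based_cam(image: np.ndarray, cam: np.ndarray, n_segments: int = 50) -> np.ndarray:
        """image: (H, W, 3) floats in [0, 1]; cam: (H, W) class activation map."""
        segments = slic(image, n_segments=n_segments)   # superpixel labels, shape (H, W)
        region_cam = np.zeros_like(cam)
        for label in np.unique(segments):
            mask = segments == label
            region_cam[mask] = cam[mask].mean()         # one score per region
        return region_cam
    ```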

  • Robustness of Deep Learning Models in Dermatological Evaluation: A Critical Assessment

    Sourav MISHRA  Subhajit CHAUDHURY  Hideaki IMAIZUMI  Toshihiko YAMASAKI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2020/12/22
      Vol:
    E104-D No:3
      Page(s):
    419-429

    Our paper critically assesses the robustness of deep learning methods in dermatological evaluation. Although deep learning is increasingly sought as a means to improve dermatological diagnostics, the performance of models and methods has rarely been investigated beyond studies done under ideal settings. We aim to look beyond results obtained on a curated and ideal data corpus by investigating resilience and performance on user-submitted data. Assessing via a few imitated conditions, we found that overall accuracy drops and individual predictions change significantly in many cases despite robust training.
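
    A minimal sketch of the kind of robustness check the abstract describes: re-evaluate a trained classifier on images perturbed to imitate user-submitted photos and compare the accuracy against the clean test set. The specific transforms below are illustrative assumptions, not the conditions used in the paper.

    ```python
    # Hypothetical robustness check: accuracy under imitated capture conditions.
    import torch
    from torchvision import transforms

    imitated_conditions = {
        "blur": transforms.GaussianBlur(kernel_size=9),
        "low_light": transforms.ColorJitter(brightness=(0.3, 0.5)),
        "low_resolution": transforms.Compose([transforms.Resize(64), transforms.Resize(224)]),
    }

    @torch.no_grad()
    def accuracy(model, loader, perturb=None):
        model.eval()
        correct = total = 0
        for images, labels in loader:          # images: (B, 3, H, W) tensors
            if perturb is not None:
                images = torch.stack([perturb(img) for img in images])
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total

    # Usage: compare accuracy(model, test_loader) with
    # accuracy(model, test_loader, imitated_conditions["blur"]), etc.
    ```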

  • Multi-Task Convolutional Neural Network Leading to High Performance and Interpretability via Attribute Estimation

    Keisuke MAEDA  Kazaha HORII  Takahiro OGAWA  Miki HASEYAMA  

     
    LETTER-Neural Networks and Bioengineering

      Vol:
    E103-A No:12
      Page(s):
    1609-1612

    A multi-task convolutional neural network leading to high performance and interpretability via attribute estimation is presented in this letter. Our method can provide an interpretation of a CNN's classification results by outputting, in a middle layer, attributes that explain elements of objects as the reason for the CNN's judgement. Furthermore, the proposed network uses the estimated attributes for the subsequent class prediction. Consequently, a novel multi-task CNN with improvements in both interpretability and classification performance is realized.
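
    A minimal sketch, not the authors' exact architecture, of a multi-task CNN in which a middle-layer attribute head both exposes an interpretable output and feeds the final class prediction. Layer sizes and names are illustrative assumptions.

    ```python
    # Hypothetical multi-task CNN: attribute estimation feeds class prediction.
    import torch
    import torch.nn as nn

    class AttributeAwareCNN(nn.Module):
        def __init__(self, n_attributes: int, n_classes: int):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.attribute_head = nn.Linear(32, n_attributes)          # interpretable middle output
            self.class_head = nn.Linear(32 + n_attributes, n_classes)  # uses features + attributes

        def forward(self, x):
            features = self.backbone(x)
            attributes = torch.sigmoid(self.attribute_head(features))
            logits = self.class_head(torch.cat([features, attributes], dim=1))
            return logits, attributes   # train with a classification loss plus an attribute loss
    ```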

  • Sensitivity-Characterised Activity Neurogram (SCAN) for Visualising and Understanding the Inner Workings of Deep Neural Network (Open Access)

    Khe Chai SIM  

     
    INVITED PAPER

      Publicized:
    2016/07/19
      Vol:
    E99-D No:10
      Page(s):
    2423-2430

    Deep Neural Network (DNN) is a powerful machine learning model that has been successfully applied to a wide range of pattern classification tasks. Due to the great ability of DNNs to learn complex mapping functions, it has been possible to train and deploy DNNs largely as black boxes, without the need for an in-depth understanding of the inner workings of the model. However, this often leads to solutions and systems that achieve great performance but offer very little insight into how and why they work. This paper introduces the Sensitivity-Characterised Activity Neurogram (SCAN), a novel approach for understanding the inner workings of a DNN by analysing and visualising the sensitivity patterns of the neuron activities. SCAN constructs a low-dimensional visualisation space for the neurons so that the neuron activities can be visualised in a meaningful and interpretable way. The embedding of the neurons within this visualisation space can be used to compare neurons, both within the same DNN and across different DNNs trained for the same task. This paper presents observations from using SCAN to analyse DNN acoustic models for automatic speech recognition.
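
    A hedged sketch of the general idea summarised above: characterise each hidden neuron by how its activation responds to different input classes, then embed those sensitivity patterns in a low-dimensional space for visualisation. The per-class mean activations and PCA used here are illustrative assumptions, not the paper's exact procedure.

    ```python
    # Hypothetical sketch: embed neurons by their per-class activation profiles.
    import numpy as np
    from sklearn.decomposition import PCA

    def neuron_sensitivity_embedding(activations: np.ndarray,
                                     labels: np.ndarray,
                                     n_components: int = 2) -> np.ndarray:
        """activations: (n_samples, n_neurons); labels: (n_samples,) class ids.
        Returns an (n_neurons, n_components) embedding of the neurons."""
        classes = np.unique(labels)
        # Sensitivity pattern: mean activation of each neuron for each class.
        pattern = np.stack([activations[labels == c].mean(axis=0) for c in classes])
        pattern = (pattern - pattern.mean(axis=0)) / (pattern.std(axis=0) + 1e-8)
        # Each neuron becomes a point described by its per-class profile.
        return PCA(n_components=n_components).fit_transform(pattern.T)
    ```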