Explainable artificial intelligence (AI) technology enables us to quantitatively analyze the whole prediction logic of an AI model as a global explanation. However, unwanted relationships learned by the AI due to data sparsity, high dimensionality, and noise are also visualized in the explanation, which undermines confidence in the AI. Thus, methods for correcting such unwanted relationships in the explanation have been developed. However, since these methods are applicable only to differentiable machine learning (ML) models and not to non-differentiable models such as tree-based models, they cover only a limited range of ML technology. Moreover, since these methods require re-training the model to correct its explanation (i.e., they are in-processing methods), they cannot be applied to black-box models provided by third parties. Therefore, we propose a method called ensemble-based explanation correction (EBEC), a post-processing method that corrects the global explanation of a prediction model in a model-agnostic manner by using the Rashomon effect from statistics. We evaluated the performance of EBEC on three different tasks and analyzed its behavior in more detail. The evaluation results indicate that EBEC can correct the global explanation of a model so that the explanation aligns with the domain knowledge given by the user while maintaining the model's accuracy. Since EBEC is a post-processing correction method, it can be extended in various ways and combined with any other correction method to improve performance. Hence, EBEC should contribute to high-productivity ML modeling as a new type of explanation-correction method.
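The paper's algorithm itself is not reproduced on this page, but the core intuition it relies on, the Rashomon effect (many models of near-identical accuracy can carry very different explanations), can be sketched in code. The Python sketch below is a hypothetical illustration only, not the EBEC method from the paper: it trains several near-equally-accurate candidate models, scores each model's permutation-importance explanation against a user-supplied importance ranking standing in for domain knowledge, and keeps the best-aligned candidate within an accuracy tolerance. All names and thresholds (preferred_order, the 0.01 tolerance, the alignment score) are assumptions made for illustration.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic regression task standing in for the user's data.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hypothetical domain knowledge: feature indices ordered from most to
# least important, as the user believes they should be.
preferred_order = np.array([0, 1, 2, 3, 4])

def alignment(importances, order):
    # Crude agreement score between the model's importance ranking and
    # the user's preferred ranking (0 = perfect agreement).
    model_rank = np.argsort(np.argsort(-importances))  # rank of each feature
    user_rank = np.argsort(order)                      # inverse permutation
    return -np.mean(np.abs(model_rank - user_rank))

# Proxy for a Rashomon set: identically configured models, different seeds.
candidates = [RandomForestRegressor(n_estimators=100, random_state=s).fit(X_tr, y_tr)
              for s in range(10)]
best_score = max(m.score(X_te, y_te) for m in candidates)

best_model, best_align = None, -np.inf
for m in candidates:
    if m.score(X_te, y_te) < best_score - 0.01:  # accuracy tolerance (assumed)
        continue
    imp = permutation_importance(m, X_te, y_te, n_repeats=5,
                                 random_state=0).importances_mean
    a = alignment(imp, preferred_order)
    if a > best_align:
        best_model, best_align = m, a

print("alignment of selected model:", best_align)

This only demonstrates the selection-from-a-Rashomon-set intuition in runnable form; the ensemble construction and correction procedure in the paper itself are more principled.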
Masaki HAMAMOTO
Hitachi, Ltd.
Hiroyuki NAMBA
Hitachi, Ltd.
Masashi EGI
Hitachi, Ltd.
Masaki HAMAMOTO, Hiroyuki NAMBA, Masashi EGI, "Ensemble-Based Method for Correcting Global Explanation of Prediction Model" in IEICE TRANSACTIONS on Information, vol. E106-D, no. 2, pp. 218-228, February 2023, doi: 10.1587/transinf.2022EDP7095.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7095/_p
@ARTICLE{e106-d_2_218,
author={Masaki HAMAMOTO and Hiroyuki NAMBA and Masashi EGI},
journal={IEICE TRANSACTIONS on Information},
title={Ensemble-Based Method for Correcting Global Explanation of Prediction Model},
year={2023},
volume={E106-D},
number={2},
pages={218-228},
doi={10.1587/transinf.2022EDP7095},
ISSN={1745-1361},
month={February},}
TY - JOUR
TI - Ensemble-Based Method for Correcting Global Explanation of Prediction Model
T2 - IEICE TRANSACTIONS on Information
SP - 218
EP - 228
AU - Masaki HAMAMOTO
AU - Hiroyuki NAMBA
AU - Masashi EGI
PY - 2023
DO - 10.1587/transinf.2022EDP7095
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 2
JA - IEICE TRANSACTIONS on Information
Y1 - February 2023
ER -