The reliability of deep neural networks (DNNs) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automated driving. Transient errors in memory, such as radiation-induced soft errors, may propagate through the inference computation, resulting in unexpected outputs that can trigger catastrophic system failures. As a first step toward tackling this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in a DNN. We significantly reduce the number of bit locations for fault injection and develop a flow to incrementally collect the training data, i.e., the fault injection results, to improve VM accuracy. We enumerate key features (KF) that characterize the vulnerability of the parameters and use the KF and the collected training data to construct the VM. Experimental results show that the VM can estimate the vulnerabilities of all DNN model parameters with only 1/3490 of the computations required by traditional fault injection-based vulnerability estimation.
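The memory soft errors studied in the paper are single bit flips in stored model parameters. As a minimal illustrative sketch (not the paper's implementation; the function name and values are assumptions), flipping one bit of a float32 weight shows why vulnerability depends strongly on the bit location:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value, emulating a memory soft error."""
    # Reinterpret the float32 as its 32-bit integer pattern, XOR one bit,
    # and reinterpret the result back as a float32.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# A flip in an exponent bit changes the weight by orders of magnitude,
# while a low mantissa bit barely moves it -- which is why fault injection
# only needs to target a small subset of bit locations.
w = 0.5
print(flip_bit(w, 23))  # exponent LSB: 0.5 -> 1.0
print(flip_bit(w, 31))  # sign bit:     0.5 -> -0.5
print(flip_bit(w, 0))   # mantissa LSB: essentially unchanged
```

In an actual fault-injection campaign, a flipped parameter would be written back into the model and the change in inference output recorded as one training sample for the vulnerability model.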
Yangchao ZHANG
Osaka University
Hiroaki ITSUJI
Research & Development Group, Hitachi, Ltd.
Takumi UEZONO
Research & Development Group, Hitachi, Ltd.
Tadanobu TOBA
Research & Development Group, Hitachi, Ltd.
Masanori HASHIMOTO
Kyoto University
Yangchao ZHANG, Hiroaki ITSUJI, Takumi UEZONO, Tadanobu TOBA, Masanori HASHIMOTO, "Vulnerability Estimation of DNN Model Parameters with Few Fault Injections" in IEICE TRANSACTIONS on Fundamentals,
vol. E106-A, no. 3, pp. 523-531, March 2023, doi: 10.1587/transfun.2022VLP0004.
Abstract: The reliability of deep neural networks (DNNs) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automated driving. Transient errors in memory, such as radiation-induced soft errors, may propagate through the inference computation, resulting in unexpected outputs that can trigger catastrophic system failures. As a first step toward tackling this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in a DNN. We significantly reduce the number of bit locations for fault injection and develop a flow to incrementally collect the training data, i.e., the fault injection results, to improve VM accuracy. We enumerate key features (KF) that characterize the vulnerability of the parameters and use the KF and the collected training data to construct the VM. Experimental results show that the VM can estimate the vulnerabilities of all DNN model parameters with only 1/3490 of the computations required by traditional fault injection-based vulnerability estimation.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2022VLP0004/_p
@ARTICLE{e106-a_3_523,
author={Yangchao ZHANG and Hiroaki ITSUJI and Takumi UEZONO and Tadanobu TOBA and Masanori HASHIMOTO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Vulnerability Estimation of DNN Model Parameters with Few Fault Injections},
year={2023},
volume={E106-A},
number={3},
pages={523-531},
abstract={The reliability of deep neural networks (DNNs) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automated driving. Transient errors in memory, such as radiation-induced soft errors, may propagate through the inference computation, resulting in unexpected outputs that can trigger catastrophic system failures. As a first step toward tackling this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in a DNN. We significantly reduce the number of bit locations for fault injection and develop a flow to incrementally collect the training data, i.e., the fault injection results, to improve VM accuracy. We enumerate key features (KF) that characterize the vulnerability of the parameters and use the KF and the collected training data to construct the VM. Experimental results show that the VM can estimate the vulnerabilities of all DNN model parameters with only 1/3490 of the computations required by traditional fault injection-based vulnerability estimation.},
keywords={},
doi={10.1587/transfun.2022VLP0004},
ISSN={1745-1337},
month={March},
}
TY - JOUR
TI - Vulnerability Estimation of DNN Model Parameters with Few Fault Injections
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 523
EP - 531
AU - Yangchao ZHANG
AU - Hiroaki ITSUJI
AU - Takumi UEZONO
AU - Tadanobu TOBA
AU - Masanori HASHIMOTO
PY - 2023
DO - 10.1587/transfun.2022VLP0004
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E106-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - March 2023
AB - The reliability of deep neural networks (DNNs) against hardware errors is essential as DNNs are increasingly employed in safety-critical applications such as automated driving. Transient errors in memory, such as radiation-induced soft errors, may propagate through the inference computation, resulting in unexpected outputs that can trigger catastrophic system failures. As a first step toward tackling this problem, this paper proposes constructing a vulnerability model (VM) with a small number of fault injections to identify vulnerable model parameters in a DNN. We significantly reduce the number of bit locations for fault injection and develop a flow to incrementally collect the training data, i.e., the fault injection results, to improve VM accuracy. We enumerate key features (KF) that characterize the vulnerability of the parameters and use the KF and the collected training data to construct the VM. Experimental results show that the VM can estimate the vulnerabilities of all DNN model parameters with only 1/3490 of the computations required by traditional fault injection-based vulnerability estimation.
ER -