Use case modeling is a popular way to represent the functionality of a system to be developed, and it consists of two parts: a use case diagram and use case descriptions. Use case descriptions are structured text written in natural language, and the use of natural language can lead to poor descriptions, i.e., descriptions that are ambiguous, inconsistent, and/or incomplete. Poor descriptions lead to missing requirements, incorrectly elicited requirements, and a less comprehensive use case model. This paper proposes a technique to automate the detection of bad smells in use case descriptions, i.e., symptoms of poor descriptions. To clarify these bad smells, we first analyzed existing use case models to find concrete examples of poor descriptions and compiled them into a catalog of bad smells. Some of the bad smells can be refined into measures using the Goal-Question-Metric paradigm so that their detection can be automated. The main contributions of this paper are the catalog of bad smells and the automated detection of the smells it contains. We first implemented an automated detector for 22 bad smells and assessed its usefulness in an experiment; this first version of the tool achieved a precision of 0.591 and a recall of 0.981. While evaluating the catalog and the tool, we identified six additional bad smells and two additional metrics, and the final version of the tool achieved a precision of 0.596 and a recall of 1.000.
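The abstract only summarizes the approach; as a rough illustration of what rule-based smell detection over use case description steps can look like, the following is a minimal sketch in Python. The smell name ("vague wording"), the indicator word list, and all identifiers are hypothetical assumptions for illustration and are not taken from the paper's catalog or its GQM-derived metrics.

from dataclasses import dataclass

# Hypothetical indicator words for a "vague wording" smell (assumed for
# illustration, not taken from the paper's catalog).
VAGUE_WORDS = {"appropriately", "quickly", "some", "somehow", "etc", "properly"}

@dataclass
class SmellReport:
    step_index: int
    step_text: str
    smell: str

def detect_vague_wording(steps):
    """Flag description steps that contain any of the assumed vagueness indicators."""
    reports = []
    for i, step in enumerate(steps, start=1):
        tokens = {token.strip(".,;").lower() for token in step.split()}
        if tokens & VAGUE_WORDS:
            reports.append(SmellReport(step_index=i, step_text=step, smell="vague wording"))
    return reports

if __name__ == "__main__":
    main_flow = [
        "The user enters an account ID and a password.",
        "The system responds quickly and shows some results.",
    ]
    for report in detect_vague_wording(main_flow):
        print(f"step {report.step_index}: {report.smell}: {report.step_text}")

A detector built from rules of this kind can be evaluated with precision and recall figures like those quoted above, following the standard definitions TP/(TP+FP) and TP/(TP+FN).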
Yotaro SEKI
Tokyo Institute of Technology
Shinpei HAYASHI
Tokyo Institute of Technology
Motoshi SAEKI
Nanzan University
Yotaro SEKI, Shinpei HAYASHI, Motoshi SAEKI, "Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection" in IEICE TRANSACTIONS on Information, vol. E105-D, no. 5, pp. 849-863, May 2022, doi: 10.1587/transinf.2021KBP0008.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021KBP0008/_p
@ARTICLE{e105-d_5_849,
author={Yotaro SEKI and Shinpei HAYASHI and Motoshi SAEKI},
journal={IEICE TRANSACTIONS on Information},
title={Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection},
year={2022},
volume={E105-D},
number={5},
pages={849-863},
doi={10.1587/transinf.2021KBP0008},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection
T2 - IEICE TRANSACTIONS on Information
SP - 849
EP - 863
AU - Yotaro SEKI
AU - Shinpei HAYASHI
AU - Motoshi SAEKI
PY - 2022
DO - 10.1587/transinf.2021KBP0008
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2022
ER -