Numerous applications, such as autonomous driving, satellite imagery sensing, and biomedical imaging, use computer vision as an important tool for perception tasks. Intelligent Transportation Systems (ITS) require precise recognition and localization of scene elements in sensor data. Semantic segmentation is one of the computer vision methods intended to perform such tasks. However, existing semantic segmentation tasks label each pixel with only a single object class. Recognizing object attributes, e.g., pedestrian orientation, would be more informative and help achieve better scene understanding. Thus, we propose a method that performs semantic segmentation and pedestrian attribute recognition simultaneously. We introduce an attribute-aware loss function that can be applied to an arbitrary base model. Furthermore, a re-annotation of the existing Cityscapes dataset enriches the ground-truth labels with pedestrian orientation attributes. We implement the proposed method and compare its experimental results with those of other methods. The attribute-aware semantic segmentation outperforms baseline methods both in the traditional object segmentation task and in the expanded attribute detection task.
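The abstract describes an attribute-aware loss applicable to an arbitrary base segmentation model, but does not give its formulation. The following is only a minimal NumPy sketch of one plausible reading: a per-pixel cross-entropy over object classes, plus an attribute (e.g., orientation) cross-entropy evaluated on ground-truth pedestrian pixels. The function name, the weight `lam`, and the `pedestrian_id` argument are hypothetical, not taken from the paper.

```python
import numpy as np

def attribute_aware_loss(class_probs, attr_probs, class_gt, attr_gt,
                         pedestrian_id, lam=0.5):
    """Sketch: segmentation cross-entropy over all pixels, plus an
    attribute cross-entropy restricted to pedestrian pixels.

    class_probs : (H, W, C) softmax over object classes
    attr_probs  : (H, W, A) softmax over attribute classes
    class_gt    : (H, W) integer object-class labels
    attr_gt     : (H, W) integer attribute labels (used on pedestrians)
    """
    eps = 1e-12
    H, W = class_gt.shape
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    # standard per-pixel semantic segmentation loss
    seg_loss = -np.log(class_probs[rows, cols, class_gt] + eps).mean()
    # attribute loss only where the ground truth is a pedestrian
    mask = class_gt == pedestrian_id
    if mask.any():
        sel = attr_probs[mask]                               # (N, A)
        picked = sel[np.arange(sel.shape[0]), attr_gt[mask]]
        attr_loss = -np.log(picked + eps).mean()
    else:
        attr_loss = 0.0
    return seg_loss + lam * attr_loss
```

Because the attribute term is simply added to the base loss, a sketch like this could in principle wrap any segmentation network's output, which matches the abstract's claim that the loss applies to an arbitrary base model.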
Mahmud Dwi SULISTIYO
Nagoya University / Telkom University
Yasutomo KAWANISHI
Nagoya University
Daisuke DEGUCHI
Nagoya University
Ichiro IDE
Nagoya University
Takatsugu HIRAYAMA
Nagoya University
Jiang-Yu ZHENG
Indiana University-Purdue University Indianapolis
Hiroshi MURASE
Nagoya University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Mahmud Dwi SULISTIYO, Yasutomo KAWANISHI, Daisuke DEGUCHI, Ichiro IDE, Takatsugu HIRAYAMA, Jiang-Yu ZHENG, Hiroshi MURASE, "Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations" in IEICE TRANSACTIONS on Fundamentals,
vol. E103-A, no. 1, pp. 231-242, January 2020, doi: 10.1587/transfun.2019TSP0001.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2019TSP0001/_p
@ARTICLE{e103-a_1_231,
author={Mahmud Dwi SULISTIYO and Yasutomo KAWANISHI and Daisuke DEGUCHI and Ichiro IDE and Takatsugu HIRAYAMA and Jiang-Yu ZHENG and Hiroshi MURASE},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations},
year={2020},
volume={E103-A},
number={1},
pages={231-242},
keywords={},
doi={10.1587/transfun.2019TSP0001},
ISSN={1745-1337},
month={January},}
TY - JOUR
TI - Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 231
EP - 242
AU - Mahmud Dwi SULISTIYO
AU - Yasutomo KAWANISHI
AU - Daisuke DEGUCHI
AU - Ichiro IDE
AU - Takatsugu HIRAYAMA
AU - Jiang-Yu ZHENG
AU - Hiroshi MURASE
PY - 2020
DO - 10.1587/transfun.2019TSP0001
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E103-A
IS - 1
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - January 2020
ER -