Front video and sensor data captured by vehicle-mounted event recorders are used not only as traffic accident evidence but also, as near-miss traffic incident data, for safe-driving education. However, most event recorder (ER) data shows only regular driving events. To utilize near-miss data for safe-driving education, we need to be able to easily and rapidly locate the appropriate data within large amounts of ER data through labels attached to the scenes/events of interest. This paper proposes a method that can automatically identify near-misses with objects such as pedestrians and bicycles by processing the ER data. The proposed method extracts two deep feature representations that consider the car's status and the environment surrounding it. The first representation captures the temporal transitions of car status; the second captures the positional relationship between the car and surrounding objects by processing object detection results. Experiments on actual ER data demonstrate that the proposed method can accurately identify and tag near-miss events.
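The abstract describes a two-part feature decomposition: temporal transitions of car status from sensor data, and the car's positional relationship to detected objects from front video. A minimal hand-crafted sketch of that decomposition is below; it is not the paper's method (which learns deep representations), and every function name, statistic, and the bottom-center ego-reference heuristic is an assumption for illustration only.

```python
from statistics import mean, pstdev

def temporal_status_features(status_seq):
    """Summarize temporal transitions of car status channels (e.g. speed,
    acceleration): per-channel mean, std, and mean frame-to-frame change.
    A hand-crafted stand-in for the paper's learned temporal features."""
    channels = list(zip(*status_seq))                       # (C, T)
    feats = []
    for ch in channels:
        diffs = [b - a for a, b in zip(ch, ch[1:])]         # temporal transitions
        feats += [mean(ch), pstdev(ch), mean(diffs)]
    return feats

def positional_object_features(boxes, frame_w, frame_h):
    """Encode the positional relationship between the car and detected
    objects (x1, y1, x2, y2 boxes): normalized offset of each box center
    from the bottom-center of the frame (a rough ego reference) plus
    apparent size as a proximity proxy, mean-pooled over objects."""
    if not boxes:
        return [0.0, 0.0, 0.0]
    per_obj = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        dx = (cx - frame_w / 2.0) / frame_w                 # lateral offset
        dy = (cy - frame_h) / frame_h                       # vertical offset
        size = (x2 - x1) * (y2 - y1) / (frame_w * frame_h)  # proximity proxy
        per_obj.append([dx, dy, size])
    return [mean(col) for col in zip(*per_obj)]             # pool over objects

def near_miss_feature_vector(status_seq, boxes, frame_w=1280, frame_h=720):
    """Concatenate both representations into one classifier input vector."""
    return (temporal_status_features(status_seq)
            + positional_object_features(boxes, frame_w, frame_h))
```

A downstream classifier (the paper reports a deep model) would consume this vector per video clip; here, a two-channel status sequence yields six temporal features plus three positional ones.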
Shuhei YAMAMOTO
NTT Corporation
Takeshi KURASHIMA
NTT Corporation
Hiroyuki TODA
NTT Corporation
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Shuhei YAMAMOTO, Takeshi KURASHIMA, Hiroyuki TODA, "Classifying Near-Miss Traffic Incidents through Video, Sensor, and Object Features" in IEICE TRANSACTIONS on Information and Systems,
vol. E105-D, no. 2, pp. 377-386, February 2022, doi: 10.1587/transinf.2021EDP7017.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7017/_p
@ARTICLE{e105-d_2_377,
author={Shuhei YAMAMOTO and Takeshi KURASHIMA and Hiroyuki TODA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Classifying Near-Miss Traffic Incidents through Video, Sensor, and Object Features},
year={2022},
volume={E105-D},
number={2},
pages={377-386},
abstract={Front video and sensor data captured by vehicle-mounted event recorders are used for not only traffic accident evidence but also safe-driving education as near-miss traffic incident data. However, most event recorder (ER) data shows only regular driving events. To utilize near-miss data for safe-driving education, we need to be able to easily and rapidly locate the appropriate data from large amounts of ER data through labels attached to the scenes/events of interest. This paper proposes a method that can automatically identify near-misses with objects such as pedestrians and bicycles by processing the ER data. The proposed method extracts two deep feature representations that consider car status and the environment surrounding the car. The first feature representation is generated by considering the temporal transitions of car status. The second one can extract the positional relationship between the car and surrounding objects by processing object detection results. Experiments on actual ER data demonstrate that the proposed method can accurately identify and tag near-miss events.},
doi={10.1587/transinf.2021EDP7017},
ISSN={1745-1361},
month={February},}
TY - JOUR
TI - Classifying Near-Miss Traffic Incidents through Video, Sensor, and Object Features
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 377
EP - 386
AU - Shuhei YAMAMOTO
AU - Takeshi KURASHIMA
AU - Hiroyuki TODA
PY - 2022
DO - 10.1587/transinf.2021EDP7017
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 2
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - February 2022
AB - Front video and sensor data captured by vehicle-mounted event recorders are used for not only traffic accident evidence but also safe-driving education as near-miss traffic incident data. However, most event recorder (ER) data shows only regular driving events. To utilize near-miss data for safe-driving education, we need to be able to easily and rapidly locate the appropriate data from large amounts of ER data through labels attached to the scenes/events of interest. This paper proposes a method that can automatically identify near-misses with objects such as pedestrians and bicycles by processing the ER data. The proposed method extracts two deep feature representations that consider car status and the environment surrounding the car. The first feature representation is generated by considering the temporal transitions of car status. The second one can extract the positional relationship between the car and surrounding objects by processing object detection results. Experiments on actual ER data demonstrate that the proposed method can accurately identify and tag near-miss events.
ER -