Masahiro TADA Masayuki NISHIDA
In this study, we use a vision-based driver monitoring sensor to track drivers' visual scanning behavior, a key factor in preventing traffic accidents. Our system evaluates drivers' behavior by referencing the safety knowledge of professional driving instructors and provides real-time voice-guided safety advice to encourage safer driving. Our system's evaluation of safe driving behaviors matched the instructors' evaluations with over 80% accuracy.
Fuma SAWA Yoshinori KAMIZONO Wataru KOBAYASHI Ittetsu TANIGUCHI Hiroki NISHIKAWA Takao ONOYE
Advanced driver-assistance systems (ADAS) generally play an important role in supporting safe driving by detecting potential risk factors in advance and informing the driver of them. However, if too many ADAS services rely on visual technologies, the driver becomes increasingly burdened and fatigued, especially visually. Drivers should be relieved of monitoring tasks other than the most important ones so that their burden is alleviated as much as possible. In recent years, in-vehicle auditory signals have attracted attention as an alternative to visual cues for assisting safe driving. In this paper, we developed an evaluation platform for in-vehicle auditory signals on top of an existing driving simulator. Using in-vehicle auditory signals, the developed platform demonstrated the possibility of partially switching from purely visual tasks to a mix of visual and auditory tasks to alleviate the burden on drivers.
Toshihisa SATO Naohisa HASHIMOTO
Mobility as a Service (MaaS) is expected to spread globally and in Japan as a solution for social issues related to transportation. Researchers have conducted MaaS trials in several cities. However, only a few trials have reached full-scale practical use. Therefore, it is essential to clarify issues such as the business model and user acceptability and to seek solutions to social problems rather than simply conducting trials. This paper describes the introduction of a MaaS project supported by the Japanese government, known as the “Smart Mobility Challenge” project, conducted in 2020 and 2021. We derived five themes necessary for social implementation from the first trial of this MaaS project. As a consortium, we also promoted regional demonstrations by soliciting regional applications based on these five themes. In addition, we conducted fundamental research using data from the MaaS projects to clarify local transportation issues in detail, collect residents' mobility behavior data, and assess the project's effects on the participants' happiness. We employed the life-space assessment method to investigate the spread of the residents' behavioral life-space resulting from using mobility services. Comparing life-space mobility before and after the use of mobility services confirmed an expansion of the life-space attributable to specific services. Moreover, we conducted questionnaire surveys and clarified the relationships between life-space assessment, human characteristics, and subjective happiness using path analysis. We also conducted a persona-based approach in addition to objective data collection using GPS, wearable monitors, and a web-based questionnaire. We found differences between the actual participants and the participants assumed by local governments. We conducted interviews and developed tips for improving mobility services. We propose that qualitative data help clarify the image of mobility services that meet the residents' needs.
In recent years, drivers' visual attention has been actively studied for driving automation technology. However, few models provide an in-depth understanding of drivers' attention across various moments. Existing attention models process multi-level image representations with two-stream or multi-stream networks, which increases the computational cost due to the larger number of model parameters. Nevertheless, multi-level image representations such as optical flow play a vital role in tasks involving videos. Therefore, to reduce the computational cost of a two-stream network while still using multi-level image representations, this work proposes a single-stream driver visual attention model for critical situations. Experiments were conducted on a publicly available critical driving dataset named BDD-A. Qualitative results confirm the effectiveness of the proposed model, and quantitative results show that it outperforms state-of-the-art visual attention models in terms of CC and SIM. Extensive ablation studies verify the contribution of optical flow to the model, its position in the spatial network, the convolution layers that process it, and the computational cost compared to a two-stream model.
This research develops a new automatic path-following control method for a car model based on just-in-time modeling. The idea is to accumulate a large amount of basic driving data for various situations in a database and to realize automatic path following on unknown roads using only the data in that database. In particular, just-in-time modeling is applied repeatedly to follow the desired points on a given road. Numerical simulation results show that the proposed method makes the car follow the desired points on the given road with small error and with high computational efficiency.
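The core of the just-in-time (lazy) modeling step can be sketched as follows: for the current state, retrieve the nearest records from the driving database and fit a local affine model on the fly to predict the control input. This is an illustrative sketch, not the authors' implementation; the function and parameter names are assumptions.

```python
import numpy as np

def jit_predict(db_states, db_inputs, query, k=10):
    """Just-in-time modeling sketch: local affine regression on the
    k database records nearest to the query state (illustrative)."""
    # Find the k nearest neighbours of the query state in the database.
    dists = np.linalg.norm(db_states - query, axis=1)
    idx = np.argsort(dists)[:k]
    # Fit a local affine model u = [x, 1] @ theta on those neighbours.
    X = np.hstack([db_states[idx], np.ones((k, 1))])
    theta, *_ = np.linalg.lstsq(X, db_inputs[idx], rcond=None)
    return float(np.append(query, 1.0) @ theta)
```

A controller in this spirit would call `jit_predict` at every waypoint, so no global vehicle model is identified in advance; only locally relevant data shape each prediction.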
Autonomous driving technology is currently attracting much attention as a key technology for the next generation of mobility. For autonomous driving in urban areas, various kinds of information must be recognized; in particular, the recognition of traffic lights is important when crossing intersections. In this paper, traffic light recognition technology developed by the authors was evaluated using onboard sensor data recorded during autonomous driving in the Tokyo waterfront area. The results show that traffic lights could be recognized with an accuracy of approximately 99%, sufficient to carry out the decision making for approaching intersections. However, the evaluation also confirmed that traffic light recognition becomes difficult under occlusion by other objects, background assimilation, nighttime conditions, and backlighting by sunlight. These effects are mostly temporary and do not significantly affect the decision making for entering intersections when information from the multiple traffic lights installed at an intersection is utilized. On the other hand, recognition with current onboard cameras is expected to be technically difficult when not all traffic lights are visually recognizable due to backlighting or frontlighting by sunlight while stopped at the stop line of an intersection. This paper summarizes these results and argues for appropriate traffic light installation on the assumption of recognition by onboard cameras.
Ryota ISHIBASHI Takuma TSUBAKI Shingo OKADA Hiroshi YAMAMOTO Takeshi KUWAHARA Kenichi KAWAMURA Keisuke WAKAO Takatsune MORIYAMA Ricardo OSPINA Hiroshi OKAMOTO Noboru NOGUCHI
To sustain and expand the agricultural economy even as its workforce shrinks, the efficiency of farm operations must be improved. One key to efficiency improvement is completely unmanned operation of farm machines, which requires stable monitoring and control of machines from remote sites, a safety system that ensures safe autonomous driving even without manual operation, and precise positioning not only in small farm fields but also in wider areas. As possible solutions, we have developed technologies for wireless network quality prediction, an end-to-end overlay network, machine vision for safety and positioning, network-cooperated vehicle control, and autonomous tractor control, and conducted experiments in actual field environments. Experimental results show that: 1) remote monitoring and control can continue seamlessly even when the connection between the tractor and the remote site needs to be switched across different wireless networks during autonomous driving; 2) the safety of autonomous driving can be ensured automatically by detecting both the presence of people in front of the unmanned tractor and disturbances of network quality that affect remote monitoring; and 3) the unmanned tractor can continue precise autonomous driving even when precise positioning by satellite systems is unavailable.
Kei SAKAGUCHI Ryuichi FUKATSU Tao YU Eisuke FUKUDA Kim MAHLER Robert HEATH Takeo FUJII Kazuaki TAKAHASHI Alexey KHORYAEV Satoshi NAGATA Takayuki SHIMIZU
Millimeter wave provides high data rates for Vehicle-to-Everything (V2X) communications. This paper motivates the use of millimeter wave to support automated driving, beginning with an explanation of the V2X use cases that support automated driving, with references to several standardization bodies. The paper classifies the existing V2X standards, IEEE 802.11p and LTE V2X, and reviews the status of their commercial deployment. It then provides a detailed assessment of how millimeter wave V2X enables the use case of cooperative perception, with detailed rate calculations showing that millimeter wave is the only technology able to meet the requirements. Furthermore, specific challenges of millimeter wave for V2X are described, including coverage enhancement and beam alignment. The paper concludes with results from three studies: IEEE 802.11ad (WiGig)-based V2X, the extension of 5G NR (New Radio) toward mmWave V2X, and prototypes of an intelligent street with mmWave V2X.
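The kind of rate calculation behind the cooperative perception argument can be sketched with back-of-envelope arithmetic: sharing raw sensor data quickly exceeds what legacy V2X links offer. The figures below (points per scan, bytes per point, the ~27 Mbps ceiling often quoted for IEEE 802.11p) are generic illustrations, not the paper's numbers.

```python
def raw_lidar_rate_mbps(points_per_scan, bytes_per_point, scans_per_s):
    """Back-of-envelope bit rate for streaming raw LiDAR scans.
    All input values here are illustrative assumptions."""
    # bits per second -> megabits per second
    return points_per_scan * bytes_per_point * 8 * scans_per_s / 1e6

# Rough upper bound often quoted for IEEE 802.11p (DSRC) links.
DSRC_RATE_MBPS = 27.0

rate = raw_lidar_rate_mbps(100_000, 16, 10)  # a typical mid-range LiDAR
```

With these assumed figures the required rate is on the order of 100+ Mbps per sensor, far above the legacy-link ceiling, which is the qualitative point the paper's calculations make for millimeter wave.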
Rousslan F. J. DOSSA Xinyu LIAN Hirokazu NOMOTO Takashi MATSUBARA Kuniaki UEHARA
Reinforcement learning methods achieve performance superior to humans in a wide range of complex tasks and uncertain environments. However, high performance is not the sole metric for practical applications such as game AI or autonomous driving. A highly efficient agent behaves greedily and selfishly, which is inconvenient for surrounding users; hence there is a demand for human-like agents. Imitation learning reproduces the behavior of a human expert and builds a human-like agent, but its performance is limited to the expert's. In this study, we propose a training scheme that constructs a human-like and efficient agent by mixing reinforcement and imitation learning, for both discrete and continuous action-space problems. The proposed hybrid agent achieves higher performance than a strict imitation learning agent and exhibits more human-like behavior, as measured by a human sensitivity test.
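One simple way to mix the two objectives, shown here only as an illustration of the general idea (the abstract does not specify the authors' actual training scheme), is a convex combination of the reinforcement learning loss and the imitation loss, with a weight trading efficiency against human-likeness.

```python
def hybrid_objective(rl_loss, imitation_loss, alpha=0.5):
    """Convex mix of RL and imitation objectives (illustrative sketch).
    alpha = 1.0 recovers pure RL; alpha = 0.0 pure imitation."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * rl_loss + (1.0 - alpha) * imitation_loss
```

Sweeping `alpha` would trace out the spectrum between a strictly imitative agent and a purely reward-driven one.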
Keiichiro INAGAKI Tatsuya MARUNO Kota YAMAMOTO
The brain processes a great deal of information related to traffic scenes to achieve appropriate perception, judgment, and operation while driving. The strategy for perception, judgment, and operation differs between drivers, and this difference is thought to arise from driving experience. In the present work, we measure and analyze human brain activity (EEG: electroencephalogram) related to visual perception during vehicle driving to clarify the relationship between driving experience and brain activity. The results show that experts generate more α activity than beginners and that their β activity is reduced compared with beginners. These results indicate, for the first time, that driving experience is reflected in the activation patterns of the EEG.
IoT systems require data acquired by many sensors. However, since sensor operation depends on the actual environment, it is important to ensure sensor redundancy to improve system reliability. To evaluate the safety of such a system, it is important to estimate the achievement probability of each function based on the sensing probabilities. In this research, we propose a method that automatically generates a PRISM model from the sensor configuration of the target system and calculates and verifies the function achievement probability in the assumed environment. By designing and evaluating iteratively until the target achievement probability is reached, the reliability of the system can be estimated in the initial design phase. This method reduces the possibility that a lack of reliability is discovered only after implementation, with the accompanying redesign.
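The intuition behind redundancy-based achievement probability can be sketched as follows. Under the simplifying assumption of independent sensors (the PRISM analysis in the paper is more general), a function backed by redundant sensors succeeds if at least one of them senses the event.

```python
from math import prod

def achievement_probability(sensing_probs):
    """P(at least one redundant sensor senses the event), assuming
    independent sensors -- a simplification for illustration only."""
    return 1.0 - prod(1.0 - p for p in sensing_probs)
```

For example, duplicating a sensor with sensing probability 0.9 raises the achievement probability to 0.99; the design loop described above would repeat this kind of evaluation until the target probability is reached.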
Abraham MONRROY CANO Eijiro TAKEUCHI Shinpei KATO Masato EDAHIRO
We present an accurate and easy-to-use multi-sensor fusion toolbox for autonomous vehicles. It includes a ‘target-less’ multi-LiDAR (Light Detection and Ranging) and Camera-LiDAR calibration, sensor fusion, and a fast and accurate point cloud ground classifier. Our calibration methods do not require complex setup procedures, and once the sensors are calibrated, our framework eases the fusion of multiple point clouds and cameras. In addition, we present an original real-time ground-obstacle classifier that runs on the CPU and is designed to be used with any type and number of LiDARs. Evaluation results on the KITTI dataset confirm that our calibration method has accuracy comparable to other state-of-the-art contenders in the benchmark.
Yuta SAKAGAWA Kosuke NAKAJIMA Gosuke OHASHI
We propose a method that detects vehicles in images from an in-vehicle monocular camera captured during nighttime driving. Detecting vehicles from their shape is difficult at night, so many vehicle detection methods focusing on lights have been proposed. We detect bright spots by appropriate binarization based on characteristics of vehicle lights such as brightness and color. Since the detected bright spots include lights other than vehicle lights, the vehicle lights must be distinguished from the other bright spots. We therefore classify the bright spots with Random Forest, a multiclass machine-learning classifier. Our method effectively exploits the features of non-vehicle bright spots: vehicle detection is refined by weighting the Random Forest results according to the features of vehicle bright spots and of bright spots unrelated to vehicles. Applying the proposed method to nighttime images confirmed its effectiveness.
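The first stage, binarization into bright-spot candidates, can be sketched as below. The thresholds and the brightness/color rules are illustrative assumptions, not the paper's tuned values; the actual method additionally classifies the resulting spots with Random Forest.

```python
import numpy as np

def bright_spot_mask(img_rgb, v_thresh=200, red_ratio=1.2):
    """Toy binarization for vehicle-light candidates in a night image.
    Thresholds are illustrative, not the authors' values."""
    img = np.asarray(img_rgb, dtype=float)
    v = img.max(axis=2)                        # per-pixel brightness proxy
    reddish = img[..., 0] >= red_ratio * img[..., 2]
    # keep very bright spots (headlights) or bright reddish spots (tail lamps)
    return (v >= 250) | ((v >= v_thresh) & reddish)
```

Connected regions of the resulting mask would then be described by features (size, color, position) and fed to the Random Forest stage.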
The demand for and the scope of connected services have rapidly grown in many industries, such as electronic appliances, robotics, and industrial automation. In the automotive field, including connected vehicles, various types of connected services have become available; they provide convenience and comfort to users while yielding new business opportunities. With the advent of connected vehicles, however, the threat of cyber attacks has become a serious issue, and protection methods against these attacks are urgently needed to provide safe and secure connected services. Since 2017, attack methods have become more sophisticated, exploiting different attack surfaces attached to navigation systems and telematics modules, and security requirements to circumvent such attacks have begun to be established. Individual threats have been addressed previously; however, few reports provide an overview of cyber security related to connected vehicles. This paper gives our perspective on cyber security for connected vehicles based on a survey of recent studies of vehicle security. To introduce these studies, the environment surrounding connected vehicles is classified into three categories: inside the vehicle, communications between the back-end systems and vehicles, and the back-end systems. For each category, this paper introduces recent trends in cyber attacks and the protection requirements that should be developed for connected services. We show that security must be considered across all three categories, because the security of the vehicle is jeopardized if even one item in the categories is not covered. We believe that this paper will contribute to the development of all service systems related to connected vehicles, including autonomous vehicles, and to the investigation of cyber security against these attacks.
Dual-motor driving servo systems are widely used in many military and civil fields. Since backlash nonlinearity affects the dynamic performance and steady-state tracking accuracy of these systems, a control strategy that reduces its adverse effects is needed. We first establish the state-space model of the system and simplify it to facilitate the design of the controller. We then design an adaptive controller that combines a projection algorithm with dynamic surface control for a dual-motor driving servo system, which we believe to be the first such combination, and analyze its stability. Simulation results show that projection algorithm-based dynamic surface control achieves smaller tracking error, faster tracking, and better robustness and stability than dynamic surface control alone. Finally, experimental analysis validates the effectiveness of the proposed control algorithm.
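The role of the projection step in the adaptive law can be illustrated in miniature: after each gradient-style parameter update, the estimate is projected back into a known bound, which keeps the adaptation well behaved. This scalar box projection is a didactic stand-in, not the paper's controller.

```python
def projected_adaptive_step(theta, grad, lr, lower, upper):
    """One adaptive-law step with box projection (illustrative sketch).
    Projection keeps the parameter estimate inside its known bounds,
    which is what makes the adaptive law robust to drift."""
    theta_new = theta - lr * grad       # unconstrained update
    return min(max(theta_new, lower), upper)  # project onto [lower, upper]
```

In the full controller, such a projection operator would act on the vector of estimated parameters at every step of the dynamic surface control law.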
Hideaki NANBA Yukihito IKAMI Kenichiro IMAI Kenji KOBAYASHI Manabu SAWADA
When automated driving cars are in widespread use, they will share the roads with prioritized vehicles (e.g., ambulances, fire trucks, police vehicles). Automated driving cars are expected to be safer and less stressful than manually driven vehicles because passengers pay less attention to driving. However, many challenges remain for automated driving cars to get along with surrounding traffic participants. In particular, when an ambulance enters an intersection against a red traffic signal, the automated driving car must handle the situation differently from normal traffic. To continue driving safely, it must recognize the approach of the ambulance as early as possible. Possible means of recognizing ambulances include the siren sound, the rotating red lights, and vehicle-to-vehicle communication. Based on actual traffic data, the authors created a mathematical model of deceleration for giving way and considered what constitutes suitable behavior by automated driving cars. The authors then calculated the detection distance required to take suitable action. The results indicate that vehicle-to-vehicle communication has advantages for the detection of ambulances by automated driving cars.
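A detection-distance calculation of the kind described can be sketched with elementary kinematics: the car needs enough distance to react and then decelerate. This generic reaction-plus-braking formula and its example values are illustrative assumptions, not the authors' deceleration model fitted to traffic data.

```python
def detection_distance_m(speed_mps, reaction_s, decel_mps2):
    """Distance needed to react and then brake to a stop (kinematic
    sketch; parameter values below are generic assumptions)."""
    reaction_dist = speed_mps * reaction_s              # travel before braking
    braking_dist = speed_mps ** 2 / (2.0 * decel_mps2)  # v^2 / (2a)
    return reaction_dist + braking_dist
```

For instance, at 20 m/s with a 1 s reaction time and a comfortable 4 m/s² deceleration, the ambulance would have to be detected about 70 m away; earlier detection via vehicle-to-vehicle communication relaxes this requirement.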
Takahiro TANAKA Kazuhiro FUJIKAKE Takashi YONEKAWA Misako YAMAGISHI Makoto INAGAMI Fumiya KINOSHITA Hirofumi AOKI Hitoshi KANAMORI
In recent years, the number of traffic accidents caused by elderly drivers has increased in Japan. However, cars remain an important mode of transportation for the elderly. Therefore, to ensure safe driving, a system that assists elderly drivers is required. We propose a driver-agent system that supports elderly drivers during and after driving and encourages them to improve their driving. This paper describes the prototype system and our analyses of the teaching records of a human instructor, the impressions that the instructions made on subjects during driving, and subjective evaluations of the driver-agent system.
Yuki IMAEDA Takatsugu HIRAYAMA Yasutomo KAWANISHI Daisuke DEGUCHI Ichiro IDE Hiroshi MURASE
We propose a method for estimating pedestrian detectability that considers the driver's visual adaptation to drastic illumination changes, which has not been studied in previous works. We assume that the driver's visual characteristics change in proportion to the elapsed time after an illumination change. As a solution, we construct multiple estimators corresponding to different elapsed periods and estimate the detectability by switching between them according to the elapsed period. To evaluate the proposed method, we constructed an experimental setup that presents a participant with illumination changes and conducted a preliminary simulated experiment to measure and estimate pedestrian detectability as a function of the elapsed period. Results show that the proposed method can estimate the detectability accurately after a drastic illumination change.
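The switching mechanism can be sketched as a simple dispatch over elapsed-time bins: each bin owns an estimator trained for that adaptation stage. The bin boundaries and names below are illustrative assumptions, not the paper's settings.

```python
def select_estimator(elapsed_s, estimators):
    """Pick the detectability estimator whose elapsed-time bin contains
    elapsed_s. `estimators` maps half-open (t_lo, t_hi) intervals in
    seconds to estimators; the binning here is an assumption."""
    for (t_lo, t_hi), est in estimators.items():
        if t_lo <= elapsed_s < t_hi:
            return est
    raise ValueError("no estimator covers elapsed time %.1f s" % elapsed_s)
```

At inference time, the elapsed time since the last illumination change selects the estimator, so early post-change frames are handled by a model of the not-yet-adapted visual system.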
Kyeongmin JEONG Kwangyeon CHOI Donghwan KIM Byung Cheol SONG
An advanced driver assistance system (ADAS) can recognize traffic signals, vehicles, pedestrians, and so on around the vehicle. However, because ADAS relies on images taken in an outdoor environment, it is susceptible to weather conditions such as fog, so preprocessing such as de-fogging and de-hazing is required to prevent degradation of object recognition performance due to decreased visibility. However, if a fog removal technique is applied in an environment with little or no fog, visual quality may deteriorate due to excessive contrast enhancement, and in foggy road environments, typical fog removal algorithms suffer from color distortion. In this paper, we propose a temporal filter-based fog detection algorithm that applies de-fogging selectively, only in the presence of fog. We also propose a method that avoids color distortion by detecting the sky region and applying different methods to the sky and non-sky regions. Experimental results show that on real images the proposed algorithm achieves an average fog detection accuracy of more than 97% and improves the subjective image quality of existing de-fogging algorithms. In addition, the proposed algorithm runs very fast, taking less than 0.1 ms per frame.
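The gating idea behind a temporal-filter fog detector can be sketched as follows: smooth a per-frame fog score over time and enable de-fogging only when the smoothed score stays above a threshold, so single-frame flicker does not toggle the filter. The exponential moving average and its parameters are illustrative assumptions, not the paper's filter design.

```python
def fog_gate(frame_scores, alpha=0.2, threshold=0.5):
    """Temporally filter per-frame fog scores and decide, per frame,
    whether to run de-fogging (illustrative sketch; parameters are
    assumptions, not the paper's values)."""
    decisions, ema = [], None
    for score in frame_scores:
        # exponential moving average suppresses single-frame flicker
        ema = score if ema is None else alpha * score + (1.0 - alpha) * ema
        decisions.append(ema >= threshold)
    return decisions
```

In a full pipeline, frames gated as foggy would be passed to the de-fogging stage, with the sky/non-sky split applied inside that stage to avoid color distortion.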
Yutaro ONO Yuhei MORIMOTO Reiji HATTORI Masayuki WATANABE Nanae MICHIDA Kazuo NISHIKAWA
We present a smart steering wheel that detects the gripping position and area, as well as the distance to the driver's approaching hands, by measuring the resonant frequency and resistance of an LCR circuit formed by the floating capacitance between the gripping hand and the steering-wheel electrode, together with the body resistance. The resonant frequency measurement provides high sensitivity, which enables estimation of the distance to the approaching hand and of the gripping area of a gloved hand, and allows the steering surface to be covered with any type of insulating material. This system can be applied to drowsiness detection, driving technique improvement, and customization of driving settings.
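The sensing principle rests on the standard LC resonance relation f = 1/(2π√(LC)): as the hand approaches, the floating capacitance C grows and the resonant frequency shifts, so measuring f recovers C. The component values in this sketch are illustrative, not the paper's circuit parameters.

```python
from math import pi, sqrt

def resonant_frequency_hz(l_henry, c_farad):
    """f = 1 / (2*pi*sqrt(L*C)) for an ideal LC resonator."""
    return 1.0 / (2.0 * pi * sqrt(l_henry * c_farad))

def capacitance_from_resonance(f_hz, l_henry):
    """Invert the resonance formula to recover the hand-to-electrode
    capacitance from the measured frequency (illustrative values)."""
    return 1.0 / (l_henry * (2.0 * pi * f_hz) ** 2)
```

Mapping the recovered capacitance to hand distance or gripping area then requires a calibration model of the electrode geometry, which the paper obtains from the steering-wheel design.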