
Keyword Search Result

[Keyword] autonomous driving (8 hits)

Results 1-8 of 8
  • BiConvNet: Integrating Spatial Details and Deep Semantic Features in a Bilateral-Branch Image Segmentation Network Open Access

    Zhigang WU, Yaohui ZHU

    PAPER-Fundamentals of Information Systems
    Publicized: 2024/07/16 | Vol: E107-D No:11 | Page(s): 1385-1395

    This article focuses on improving the BiSeNet v2 bilateral-branch image segmentation network, enhancing its ability to learn spatial details and its overall segmentation accuracy. A modified network called “BiConvNet” is proposed. First, to extract shallow spatial details more effectively, a parallel concatenated strip and dilated (PCSD) convolution module is proposed and used in the detail branch to extract local features and surrounding contextual features. Next, the semantic branch is reconstructed using lightweight depthwise separable convolutions and high-performance ConvNet blocks, enabling more efficient learning of deep semantic features. Finally, the bilateral guided aggregation layer of BiSeNet v2 is fine-tuned to better fuse the feature maps output by the detail branch and the semantic branch. The experimental section discusses the contributions of strip convolution and dilated convolutions of different sizes to segmentation accuracy, and compares them with common convolutions such as Conv2d, CG convolution and CCA convolution. The experiments show that the proposed PCSD convolution module achieves the highest segmentation accuracy across all categories of the Cityscapes dataset compared with these common convolutions. BiConvNet achieves a 9.39% accuracy improvement over BiSeNet v2 with only a slight increase of 1.18M in model parameters, reaching a mIoU of 68.75% on the validation set. Furthermore, comparative experiments with autonomous-driving image segmentation algorithms commonly used in recent years show that BiConvNet is strongly competitive in segmentation accuracy on the Cityscapes and BDD100K datasets.
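
    As a rough illustration of the parallel strip-and-dilated idea, the PyTorch sketch below concatenates a horizontal strip branch, a vertical strip branch, and a dilated 3x3 branch; the branch widths, kernel sizes, and dilation rate are assumptions for illustration, not the exact PCSD configuration used in BiConvNet.

    ```python
    import torch
    import torch.nn as nn

    class PCSDBlock(nn.Module):
        """Sketch of a parallel concatenated strip-and-dilated convolution block.
        Branch layout and dilation rate are illustrative assumptions."""
        def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
            super().__init__()
            branch_ch = out_ch // 3
            # Horizontal strip convolution captures wide, flat local structure.
            self.strip_h = nn.Conv2d(in_ch, branch_ch, kernel_size=(1, 5), padding=(0, 2))
            # Vertical strip convolution captures tall, narrow local structure.
            self.strip_v = nn.Conv2d(in_ch, branch_ch, kernel_size=(5, 1), padding=(2, 0))
            # Dilated 3x3 convolution enlarges the receptive field for context.
            self.dilated = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, kernel_size=3,
                                     padding=dilation, dilation=dilation)
            self.fuse = nn.Sequential(
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                # 1x1 convolution mixes the concatenated branches.
                nn.Conv2d(out_ch, out_ch, kernel_size=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats = torch.cat([self.strip_h(x), self.strip_v(x), self.dilated(x)], dim=1)
            return self.fuse(feats)

    # Example: a Cityscapes-sized feature map passed through the block.
    if __name__ == "__main__":
        x = torch.randn(1, 64, 128, 256)
        print(PCSDBlock(64, 96)(x).shape)  # torch.Size([1, 96, 128, 256])
    ```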

  • Edge Device Verification Techniques for Updated Object Detection AI via Target Object Existence Open Access

    Akira KITAYAMA, Goichi ONO, Hiroaki ITO

    PAPER-Intelligent Transport System
    Publicized: 2023/12/20 | Vol: E107-A No:8 | Page(s): 1286-1295

    Edge devices with strict safety and reliability requirements, such as autonomous driving cars, industrial robots, and drones, require software verification on the device itself before operation. The human cost and time required for this analysis are a barrier in the cycle of software development and updating. In particular, the final verification on the edge device should at least strictly confirm that the updated software is not degraded relative to the currently operating version. Since the edge device does not have ground-truth data, a human must judge whether the difference between the updated software and the operating version is due to degradation or improvement, which makes this verification very costly. This paper proposes a novel automated method for efficient verification on edge devices of object detection AI, which has found practical use in various applications. In the proposed method, a target object existence detector (TOED), a simple binary classifier, judges whether an object of the recognition target class exists in the region where the operating and updated versions of the AI disagree. Using the TOED judgement and the prediction difference, an automated verification system for the updated AI was constructed. TOED was designed as a simple binary classifier with four convolutional layers, and the accuracy of its object existence judgement was evaluated on the differences between the predictions of the YOLOv5 L and X models on the Cityscapes dataset. The results showed judgement accuracy of more than 99.5% with 8.6% over-detection, indicating that a verification system adopting this method would be more efficient than simple analysis of the prediction differences.
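
    As a concrete illustration, the following PyTorch sketch shows one way such a four-convolutional-layer binary existence classifier could be structured; the channel widths, patch size, and pooling head are assumptions for illustration, not the authors' exact design.

    ```python
    import torch
    import torch.nn as nn

    class TOED(nn.Module):
        """Sketch of a target object existence detector: a small binary classifier
        applied to patches cropped around prediction-difference regions."""
        def __init__(self, in_ch: int = 3):
            super().__init__()
            chs = [in_ch, 16, 32, 64, 128]
            layers = []
            # Four convolutional layers with stride 2 progressively downsample the patch.
            for c_in, c_out in zip(chs[:-1], chs[1:]):
                layers += [
                    nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                    nn.BatchNorm2d(c_out),
                    nn.ReLU(inplace=True),
                ]
            self.features = nn.Sequential(*layers)
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(chs[-1], 1),  # single logit: target object exists or not
            )

        def forward(self, patch: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(patch))

    # Example: classify 64x64 patches cropped around prediction differences.
    if __name__ == "__main__":
        patches = torch.randn(8, 3, 64, 64)
        logits = TOED()(patches)
        exists = torch.sigmoid(logits) > 0.5
        print(logits.shape, exists.shape)  # torch.Size([8, 1]) torch.Size([8, 1])
    ```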

  • Current Status and Issues of Traffic Light Recognition Technology in Autonomous Driving System Open Access

    Naoki SUGANUMA, Keisuke YONEDA

    INVITED PAPER
    Publicized: 2021/10/12 | Vol: E105-A No:5 | Page(s): 763-769

    Autonomous driving technology is currently attracting attention as a key technology for the next generation of mobility. For autonomous driving in urban areas, various kinds of information must be recognized; in particular, the recognition of traffic lights is essential for crossing intersections. In this paper, traffic light recognition technology developed by the authors was evaluated using onboard sensor data recorded during autonomous driving in the Tokyo waterfront area. The results show that traffic lights could be recognized with an accuracy of approximately 99%, sufficient to support decision making when approaching intersections. However, the evaluation also confirmed that traffic light recognition becomes difficult under occlusion by other objects, background assimilation, nighttime conditions, and backlighting from sunlight. These effects are mostly temporary, and by utilizing information from the multiple traffic lights installed at an intersection they do not significantly affect the decision to enter the intersection. On the other hand, recognition with current onboard cameras is expected to become technically difficult when none of the traffic lights is visually recognizable, for example due to backlighting or front lighting from sunlight while stopped at the stop line of an intersection. This paper summarizes these results and discusses the need for appropriate traffic light installation that assumes recognition by onboard cameras.
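
    As an illustration of combining the recognitions of several lights at one intersection, the short Python sketch below applies a simple majority vote over the lights facing the ego lane; the voting rule and state labels are assumptions for illustration, not the authors' decision logic.

    ```python
    from collections import Counter
    from typing import Optional

    def fuse_traffic_light_states(states: list) -> Optional[str]:
        """Majority vote over recognized states of the traffic lights facing the
        ego lane. Each entry is 'red', 'yellow', 'green', or None when that light
        is occluded, backlit, or otherwise unrecognized in the current frame."""
        recognized = [s for s in states if s is not None]
        if not recognized:
            return None  # no light recognized this frame; defer the decision
        state, _ = Counter(recognized).most_common(1)[0]
        return state

    # Example: one of three lights is temporarily occluded by another vehicle.
    print(fuse_traffic_light_states(["green", None, "green"]))  # green
    ```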

  • Experiment of Integrated Technologies in Robotics, Network, and Computing for Smart Agriculture Open Access

    Ryota ISHIBASHI, Takuma TSUBAKI, Shingo OKADA, Hiroshi YAMAMOTO, Takeshi KUWAHARA, Kenichi KAWAMURA, Keisuke WAKAO, Takatsune MORIYAMA, Ricardo OSPINA, Hiroshi OKAMOTO, Noboru NOGUCHI

    INVITED PAPER
    Publicized: 2021/11/05 | Vol: E105-B No:4 | Page(s): 364-378

    To sustain and expand the agricultural economy even as its workforce shrinks, the efficiency of farm operations must be improved. One key to this improvement is completely unmanned operation of farm machines, which requires stable monitoring and control of the machines from remote sites, a safety system that ensures safe autonomous driving even without manual operations, and precise positioning not only in small farm fields but also over wider areas. As possible solutions for those issues, we have developed technologies for wireless network quality prediction, an end-to-end overlay network, machine vision for safety and positioning, network-cooperated vehicle control, and autonomous tractor control, and we conducted experiments in actual field environments. The experimental results show that: 1) remote monitoring and control can continue seamlessly even when the connection between the tractor and the remote site must be switched across different wireless networks during autonomous driving; 2) the safety of autonomous driving can be ensured automatically by detecting both the presence of people in front of the unmanned tractor and disturbances of network quality that affect remote monitoring; and 3) the unmanned tractor can continue precise autonomous driving even when precise positioning by satellite systems is unavailable.
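
    The Python sketch below illustrates the kind of safety gate described in point 2), stopping the tractor when a person is detected ahead or when network quality can no longer support remote monitoring; the function name and thresholds are illustrative assumptions, not values from the experiments.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SafetyInputs:
        person_detected: bool   # machine-vision result for the area ahead of the tractor
        network_rtt_ms: float   # measured round-trip time to the remote monitoring site
        packet_loss: float      # measured loss ratio on the monitoring link

    def should_stop(inputs: SafetyInputs,
                    rtt_limit_ms: float = 300.0,
                    loss_limit: float = 0.05) -> bool:
        """Stop when a person is detected ahead or when the network can no longer
        support remote monitoring. Thresholds are illustrative assumptions."""
        network_degraded = (inputs.network_rtt_ms > rtt_limit_ms
                            or inputs.packet_loss > loss_limit)
        return inputs.person_detected or network_degraded

    # Example: link quality drops below the level needed for remote monitoring.
    print(should_stop(SafetyInputs(False, 450.0, 0.01)))  # True
    ```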

  • Hybrid of Reinforcement and Imitation Learning for Human-Like Agents

    Rousslan F. J. DOSSA, Xinyu LIAN, Hirokazu NOMOTO, Takashi MATSUBARA, Kuniaki UEHARA

    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2020/06/15 | Vol: E103-D No:9 | Page(s): 1960-1970

    Reinforcement learning methods achieve performance superior to humans in a wide range of complex tasks and uncertain environments. However, high performance is not the sole metric for practical use, such as in game AI or autonomous driving. A highly efficient agent behaves greedily and selfishly, and is thus inconvenient for surrounding users; hence there is demand for human-like agents. Imitation learning reproduces the behavior of a human expert and builds a human-like agent, but its performance is limited to the expert's. In this study, we propose a training scheme that constructs a human-like and efficient agent by mixing reinforcement and imitation learning for discrete and continuous action space problems. The proposed hybrid agent achieves higher performance than a strict imitation learning agent and exhibits more human-like behavior, as measured via a human sensitivity test.
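
    For a discrete action space, mixing the two learning signals can be as simple as a weighted sum of a policy-gradient term and a behavioral-cloning term, as in the PyTorch sketch below; the specific loss terms and the fixed mixing weight are illustrative assumptions rather than the authors' formulation.

    ```python
    import torch
    import torch.nn.functional as F

    def hybrid_loss(policy_logits: torch.Tensor,    # [B, A] logits on own rollouts
                    actions: torch.Tensor,          # [B] actions taken by the agent
                    advantages: torch.Tensor,       # [B] advantage estimates
                    expert_logits: torch.Tensor,    # [B, A] logits on expert states
                    expert_actions: torch.Tensor,   # [B] expert actions
                    imitation_weight: float = 0.5) -> torch.Tensor:
        """Weighted sum of a REINFORCE-style policy-gradient term and a
        behavioral-cloning term; an illustrative mixing scheme."""
        # Policy-gradient term on self-collected transitions.
        log_probs = F.log_softmax(policy_logits, dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        rl_term = -(chosen * advantages).mean()
        # Behavioral-cloning term on expert state-action pairs.
        il_term = F.cross_entropy(expert_logits, expert_actions)
        return (1.0 - imitation_weight) * rl_term + imitation_weight * il_term

    # Example with random data: batch of 4 transitions, 3 discrete actions.
    B, A = 4, 3
    loss = hybrid_loss(torch.randn(B, A), torch.randint(0, A, (B,)),
                       torch.randn(B), torch.randn(B, A), torch.randint(0, A, (B,)))
    print(loss.item())
    ```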

  • Evaluation the Redundancy of the IoT System Based on Individual Sensing Probability

    Ryuichi TAKAHASHI

    PAPER-Formal Approaches
    Publicized: 2020/05/14 | Vol: E103-D No:8 | Page(s): 1783-1793

    In IoT systems, data acquired by many sensors are required. However, since sensor operation depends on the actual environment, it is important to ensure sensor redundancy to improve system reliability. To evaluate the safety of the system, it is important to estimate the achievement probability of each function based on the individual sensing probabilities. In this research, we propose a method that automatically generates a PRISM model from the sensor configuration of the target system and calculates and verifies the function achievement probability in the assumed environment. By designing and evaluating iteratively until the target achievement probability is reached, the reliability of the system can be estimated in the initial design phase. This method reduces the possibility that insufficient reliability is discovered only after implementation, forcing a redesign.
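
    Under an independence assumption, the achievement probability of a function backed by redundant sensors can be approximated in closed form, as in the Python sketch below; the paper instead generates and model-checks a PRISM model, so this is only a simplified illustration of the underlying calculation.

    ```python
    from math import prod

    def function_achievement_probability(sensor_groups: dict) -> dict:
        """Approximate the achievement probability of each function, assuming the
        function succeeds when at least one of its redundant sensors senses
        correctly and that sensors fail independently (an illustrative
        simplification of the PRISM-based analysis)."""
        return {func: 1.0 - prod(1.0 - p for p in probs)
                for func, probs in sensor_groups.items()}

    # Example: obstacle detection backed by two sensors, localization by one.
    groups = {"obstacle_detection": [0.95, 0.90], "localization": [0.99]}
    print(function_achievement_probability(groups))
    # obstacle_detection is roughly 0.995, localization 0.99
    ```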

  • An Open Multi-Sensor Fusion Toolbox for Autonomous Vehicles

    Abraham MONRROY CANO, Eijiro TAKEUCHI, Shinpei KATO, Masato EDAHIRO

    PAPER
    Vol: E103-A No:1 | Page(s): 252-264

    We present an accurate and easy-to-use multi-sensor fusion toolbox for autonomous vehicles. It includes ‘target-less’ multi-LiDAR (Light Detection and Ranging) and camera-LiDAR calibration, sensor fusion, and a fast and accurate point cloud ground classifier. Our calibration methods do not require complex setup procedures, and once the sensors are calibrated, our framework eases the fusion of multiple point clouds and cameras. In addition, we present an original real-time ground-obstacle classifier that runs on the CPU and is designed to be used with any type and number of LiDARs. Evaluation results on the KITTI dataset confirm that our calibration method has accuracy comparable to other state-of-the-art contenders in the benchmark.
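
    Once extrinsic calibration is available, fusing multiple LiDAR point clouds reduces to transforming each cloud into a common frame, as in the NumPy sketch below; the function and frame conventions are illustrative assumptions, not the toolbox's API.

    ```python
    import numpy as np

    def fuse_point_clouds(clouds, extrinsics):
        """Transform each point cloud (N_i x 3, in its own sensor frame) by its
        4x4 extrinsic matrix into a common vehicle frame and stack the results.
        Frame conventions are illustrative assumptions."""
        fused = []
        for cloud, T in zip(clouds, extrinsics):
            homo = np.hstack([cloud, np.ones((cloud.shape[0], 1))])  # N x 4
            fused.append((T @ homo.T).T[:, :3])                       # back to N x 3
        return np.vstack(fused)

    # Example: two single-point clouds; the second LiDAR is mounted 1 m to the
    # left (+y) of the vehicle frame.
    identity = np.eye(4)
    left = np.eye(4)
    left[1, 3] = 1.0
    cloud_a = np.array([[10.0, 0.0, 0.5]])
    cloud_b = np.array([[10.0, 0.0, 0.5]])
    print(fuse_point_clouds([cloud_a, cloud_b], [identity, left]))
    ```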

  • An Overview of Cyber Security for Connected Vehicles Open Access

    Junko TAKAHASHI

    INVITED PAPER
    Publicized: 2018/08/22 | Vol: E101-D No:11 | Page(s): 2561-2575

    The demand for and the scope of connected services have grown rapidly in many industries such as electronic appliances, robotics, and industrial automation. In the automotive field, including connected vehicles, different types of connected services have become available; they provide convenience and comfort to users while yielding new business opportunities. With the advent of connected vehicles, the threat of cyber attacks has become a serious issue, and protection methods against these attacks are urgently needed to provide safe and secure connected services. Since 2017, attack methods have become more sophisticated, exploiting different attack surfaces attached to navigation systems and telematics modules, and security requirements to circumvent such attacks have begun to be established. Individual threats have been addressed previously; however, few reports provide an overview of cyber security for connected vehicles. This paper gives our perspective on cyber security for connected vehicles based on a survey of recent studies on vehicle security. To introduce these studies, the environment surrounding connected vehicles is classified into three categories: inside the vehicle, communications between the back-end systems and vehicles, and the back-end systems. For each category, this paper introduces recent trends in cyber attacks and the protection requirements that should be developed for connected services. We show that overall security covering all three categories must be considered, because the security of the vehicle is jeopardized if even one category is not covered. We believe that this paper will further contribute to the development of all service systems related to connected vehicles, including autonomous vehicles, and to the investigation of cyber security against these attacks.