
Keyword Search Result

[Keyword] LiDAR (10 hits)

Hits 1-10 of 10
  • Shadow Detection Based on Luminance-LiDAR Intensity Uncorrelation

    Shogo SATO  Yasuhiro YAO  Taiga YOSHIDA  Shingo ANDO  Jun SHIMAMURA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2023/06/20
    Vol: E106-D No:9
    Page(s): 1556-1563

    In recent years, there has been a growing demand for urban digitization using cameras and light detection and ranging (LiDAR). Shadows are one of the conditions that affect such measurements the most, so shadow detection technology is essential. In this study, we propose shadow detection utilizing the LiDAR intensity, which depends on the surface properties of objects but not on irradiation from other light sources. Unlike conventional LiDAR-intensity-aided shadow detection methods, our method embeds the uncorrelation between luminance and LiDAR intensity at each position into the optimization. The energy defined by this uncorrelation is minimized by graph-cut segmentation to detect shadows. In evaluations on the KITTI and Waymo datasets, our shadow-detection method outperformed previous methods in terms of multiple evaluation indices.
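
    As a rough illustration of the idea in this abstract, the sketch below computes a per-pixel "uncorrelation" energy from the local windowed correlation between camera luminance and projected LiDAR intensity and then thresholds it. The window size, the threshold, and the use of a plain threshold instead of the paper's graph-cut segmentation are all illustrative assumptions, not the authors' implementation.

      # Hedged sketch only: local luminance/LiDAR-intensity uncorrelation energy.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def uncorrelation_energy(luminance, lidar_intensity, win=15, eps=1e-6):
          """Per-pixel 1 - |Pearson correlation| within a local window."""
          L = luminance.astype(np.float64)
          I = lidar_intensity.astype(np.float64)
          mean = lambda x: uniform_filter(x, size=win)
          mL, mI = mean(L), mean(I)
          cov = mean(L * I) - mL * mI
          varL = mean(L * L) - mL * mL
          varI = mean(I * I) - mI * mI
          corr = cov / np.sqrt(np.maximum(varL * varI, eps))
          return 1.0 - np.abs(corr)   # high where luminance and intensity disagree

      def detect_shadow(luminance, lidar_intensity, thresh=0.7):
          # The paper minimizes this kind of energy with graph-cut segmentation;
          # a plain threshold keeps this sketch short.
          return uncorrelation_energy(luminance, lidar_intensity) > thresh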

  • A Tutorial and Review of Automobile Direct ToF LiDAR SoCs: Evolution of Next-Generation LiDARs Open Access

    Kentaro YOSHIOKA  

     
    INVITED PAPER

    Publicized: 2022/04/11
    Vol: E105-C No:10
    Page(s): 534-543

    LiDAR is a distance sensor that plays a key role in the realization of advanced driver assistance systems (ADAS). In this paper, we present a tutorial and review of automotive direct time-of-flight (dToF) LiDAR from the aspect of circuit systems. We discuss the breakthroughs in ADAS LiDARs through comparison with first-generation LiDAR systems, which were high-cost and had immature performance. We define current high-performance, low-cost LiDARs as next-generation LiDAR systems, which have significantly improved cost and performance by integrating the photodetector, the readout circuit, and the signal processing unit into a single SoC. This paper targets readers who are new to ADAS LiDARs and covers the basic principles of LiDAR, also comparing dToF with other ranging methods. In addition, we discuss the development of this area through the latest research examples, such as the 2-chip approach, 2D SPAD arrays, and 3D-integrated LiDARs.
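
    For readers new to dToF, the snippet below illustrates only the basic direct time-of-flight principle mentioned in the abstract: photon arrival times are accumulated into a TDC histogram, the echo peak is located, and the round-trip time is converted to range. The bin width and the simple argmax peak finder are assumptions made for illustration.

      # Hedged sketch of the basic dToF ranging principle.
      import numpy as np

      C = 299_792_458.0          # speed of light [m/s]

      def dtof_range(timestamps_s, bin_width_s=1e-9, max_time_s=2e-6):
          """Estimate target range from photon arrival times of repeated laser shots."""
          bins = np.arange(0.0, max_time_s, bin_width_s)
          hist, edges = np.histogram(timestamps_s, bins=bins)
          t_peak = edges[np.argmax(hist)] + bin_width_s / 2   # echo round-trip time
          return C * t_peak / 2.0                             # one-way distance [m]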

  • Adversarial Scan Attack against Scan Matching Algorithm for Pose Estimation in LiDAR-Based SLAM Open Access

    Kota YOSHIDA  Masaya HOJO  Takeshi FUJINO  

     
    PAPER

    Publicized: 2021/10/26
    Vol: E105-A No:3
    Page(s): 326-335

    Autonomous robots are controlled using physical information acquired by various sensors. The sensors are susceptible to physical attacks, which tamper with the observed values and interfere with control of the autonomous robots. Recently, sensor spoofing attacks targeting the subsequent algorithms that use sensor data have become serious threats. In this paper, we introduce a new attack against the LiDAR-based simultaneous localization and mapping (SLAM) algorithm. The attack uses an adversarial LiDAR scan to fool the pose graph and the generated map. The adversary calculates a falsification amount that deceives pose estimation and physically injects the spoofed distances into the LiDAR. The falsification amount is calculated by a gradient method on the cost function of the scan-matching algorithm. The SLAM algorithm then generates a wrong map from the deceived movement path estimated by scan matching. We evaluated our attack on two typical scan-matching algorithms, iterative closest point (ICP) and the normal distributions transform (NDT). Our experimental results show that SLAM can be fooled by tampering with the scan, and that simple odometry sensor fusion is not a sufficient countermeasure. We argue that it is important to detect or prevent tampering with LiDAR scans and to notice inconsistencies in sensors caused by physical attacks.
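
    The following is a hedged sketch of the gradient-based falsification idea, not the authors' code: a point-to-point ICP-style cost is driven down at an attacker-chosen (wrong) pose by gradient steps on the scan points. The cost form, step size, and iteration count are assumptions; the paper attacks the actual ICP/NDT cost functions.

      # Hedged sketch: perturb a 2-D scan so scan matching prefers a wrong pose.
      import numpy as np
      from scipy.spatial import cKDTree

      def falsify_scan(scan, ref_map, adv_R, adv_t, step=0.05, iters=50):
          """scan: (N,2) points, ref_map: (M,2) points, adv_R/adv_t: adversarial pose."""
          scan = scan.copy()
          tree = cKDTree(ref_map)
          for _ in range(iters):
              moved = scan @ adv_R.T + adv_t          # scan placed at adversarial pose
              _, idx = tree.query(moved)              # nearest-neighbor correspondences
              residual = moved - ref_map[idx]         # d(cost)/d(moved) = 2 * residual
              grad = 2.0 * residual @ adv_R           # chain rule back to scan points
              scan -= step * grad                     # reduce cost at the wrong pose
          return scan                                 # spoofed ranges to inject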

  • Rapid Revolution Speed Control of the Brushless DC Motor for Automotive LIDAR Applications

    Hironobu AKITA  Tsunenobu KIMOTO  

     
    PAPER-Storage Technology

    Publicized: 2020/01/10
    Vol: E103-C No:6
    Page(s): 324-331

    Laser imaging detection and ranging (LIDAR) is one of the key sensors for autonomous driving. In order to improve the measurable distance, especially toward the front of the vehicle, this paper presents rapid revolution speed control of a brushless DC (BLDC) motor with a cyclostationary command signal. This increases the signal integration time for the designated direction and thus improves the signal-to-noise ratio (SNR), while maintaining the average revolution speed. We propose the use of pre-emphasis circuits to accelerate and decelerate the revolution speed of the motor rapidly, by modifying the command signal so as to enhance the speed transitions. Adaptive signal processing adjusts the coefficients of the pre-emphasis filter automatically so that it can compensate for the sluggish response of the motor and its controller. Experiments with a 20-W BLDC motor show that the proposed technique enables the actual revolution speed to track the designated speed profile, ranging from 600 to 1400 revolutions per minute (rpm), during one turn.
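
    A minimal sketch of the pre-emphasis idea applied to the speed command follows; the single-tap high-boost form and the gain value are illustrative assumptions, whereas the paper adapts the filter coefficients automatically to the motor and controller response.

      # Hedged sketch: boost speed-command transitions so the motor settles faster.
      import numpy as np

      def pre_emphasis(command_rpm, gain=3.0):
          """y[n] = x[n] + gain * (x[n] - x[n-1])."""
          x = np.asarray(command_rpm, dtype=float)
          dx = np.diff(x, prepend=x[0])
          return x + gain * dx

      # Example: a cyclostationary command alternating between 600 and 1400 rpm.
      cmd = np.repeat([600.0, 1400.0, 600.0, 1400.0], 250)
      emphasized = pre_emphasis(cmd)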

  • An Open Multi-Sensor Fusion Toolbox for Autonomous Vehicles

    Abraham MONRROY CANO  Eijiro TAKEUCHI  Shinpei KATO  Masato EDAHIRO  

     
    PAPER

    Vol: E103-A No:1
    Page(s): 252-264

    We present an accurate and easy-to-use multi-sensor fusion toolbox for autonomous vehicles. It includes ‘target-less’ multi-LiDAR (light detection and ranging) and camera-LiDAR calibration, sensor fusion, and a fast and accurate point-cloud ground classifier. Our calibration methods do not require complex setup procedures, and once the sensors are calibrated, our framework eases the fusion of multiple point clouds and cameras. In addition, we present an original real-time ground-obstacle classifier, which runs on the CPU and is designed to be used with any type and number of LiDARs. Evaluation results on the KITTI dataset confirm that our calibration method has accuracy comparable with other state-of-the-art contenders in the benchmark.
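
    As a generic illustration of the camera-LiDAR fusion step once the extrinsics have been calibrated, the sketch below projects 3-D LiDAR points into the image with a pinhole model. The matrix names and shapes are standard assumptions, not the toolbox's actual API.

      # Hedged sketch: project calibrated LiDAR points into a camera image.
      import numpy as np

      def project_lidar_to_image(points_xyz, K, R, t):
          """points_xyz: (N,3) LiDAR points, K: (3,3) intrinsics, R,t: extrinsics."""
          cam = points_xyz @ R.T + t                 # LiDAR frame -> camera frame
          in_front = cam[:, 2] > 0.0                 # keep points ahead of the camera
          cam = cam[in_front]
          uvw = cam @ K.T                            # pinhole projection
          uv = uvw[:, :2] / uvw[:, 2:3]              # pixel coordinates
          return uv, cam[:, 2]                       # pixels and their depths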

  • Robustly Tracking People with LIDARs in a Crowded Museum for Behavioral Analysis

    Md. Golam RASHED  Ryota SUZUKI  Takuya YONEZAWA  Antony LAM  Yoshinori KOBAYASHI  Yoshinori KUNO  

     
    PAPER-Vision

    Vol: E100-A No:11
    Page(s): 2458-2469

    This paper introduces a method that uses LIDAR to identify people and track their positions, body orientations, and movement trajectories in public spaces in order to read their various behavioral responses to the surroundings. We use a network of LIDAR poles installed at the shoulder level of typical adults to reduce potential occlusion between persons and objects, even in large-scale social environments. With this arrangement, we propose a simple but effective human tracking method that combines the data of multiple sensors so that large-scale areas can be covered. The effectiveness of this method is evaluated in the art gallery of a real museum. The results revealed good tracking performance and provided valuable behavioral information related to the art gallery.
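
    A rough sketch of how data from several LIDAR poles could be combined for person detection and tracking is given below; the common-frame transform, DBSCAN clustering, and greedy track association are assumptions for illustration and may differ from the paper's tracker.

      # Hedged sketch: fuse scans from several LIDARs and track person candidates.
      import numpy as np
      from sklearn.cluster import DBSCAN

      def fuse_and_detect(scans, poses, eps=0.4, min_samples=8):
          """scans: list of (N_i,2) points per LIDAR, poses: list of (R, t) per LIDAR."""
          world = np.vstack([pts @ R.T + t for pts, (R, t) in zip(scans, poses)])
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(world)
          return np.array([world[labels == k].mean(axis=0)
                           for k in set(labels) if k != -1])   # person candidates

      def associate(tracks, detections, max_dist=0.6):
          """Greedy nearest-neighbor update of track positions."""
          for i, pos in enumerate(tracks):
              if len(detections) == 0:
                  break
              d = np.linalg.norm(detections - pos, axis=1)
              j = int(np.argmin(d))
              if d[j] < max_dist:
                  tracks[i] = detections[j]
          return tracks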

  • Parametric Wind Velocity Vector Estimation Method for Single Doppler LIDAR Model

    Takayuki MASUO  Fang SHANG  Shouhei KIDERA  Tetsuo KIRIMOTO  Hiroshi SAKAMAKI  Nobuhiro SUZUKI  

     
    PAPER-Sensing

    Publicized: 2016/10/12
    Vol: E100-B No:3
    Page(s): 465-472

    Doppler lidar (light detection and ranging) can provide accurate wind velocity vector estimates by processing the time delay and Doppler spectrum of received signals. This system is essential for real-time wind monitoring to assist aircraft during takeoff and landing. Considering the difficulty of calibration and the cost, a single-Doppler-lidar model is more attractive and practical than a multiple-lidar model. In general, it is impossible to estimate two- or three-dimensional wind vectors from a single-lidar model without any prior information, because lidar directly observes only the one-dimensional (radial) velocity component of the wind. Although the conventional VAD (velocity azimuth display) and VVP (velocity volume processing) methods have been developed for the single-lidar model, both are inaccurate in the presence of local air turbulence. This paper proposes an accurate wind velocity estimation method based on a parametric approach using typical turbulence models such as tornadoes, microbursts, and gust fronts. The results of numerical simulations demonstrate that the proposed method remarkably enhances the accuracy of wind velocity estimation in the assumed turbulence cases, compared with the VAD and other conventional methods.
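
    For reference, the conventional VAD retrieval mentioned above can be written as a small least-squares fit of the radial velocity model v_r = u·sin(az)·cos(el) + v·cos(az)·cos(el) + w·sin(el). This is the standard VAD formulation, not the paper's parametric method; variable names are generic.

      # Hedged sketch of the conventional VAD wind retrieval.
      import numpy as np

      def vad_wind(azimuth_rad, elevation_rad, v_radial):
          """Least-squares fit of the mean wind (u, v, w) from radial velocities."""
          az, el = np.asarray(azimuth_rad), np.asarray(elevation_rad)
          A = np.column_stack([np.sin(az) * np.cos(el),
                               np.cos(az) * np.cos(el),
                               np.sin(el)])
          wind, *_ = np.linalg.lstsq(A, np.asarray(v_radial), rcond=None)
          return wind                                # (u, v, w) in m/s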

  • Sensor Fusion and Registration of Lidar and Stereo Camera without Calibration Objects

    Vijay JOHN  Qian LONG  Yuquan XU  Zheng LIU  Seiichi MITA  

     
    PAPER

    Vol: E100-A No:2
    Page(s): 499-509

    Environment perception is an important task for intelligent vehicle applications. Typically, multiple sensors with different characteristics are employed to perceive the environment, and to perceive it robustly, the information from the different sensors is often integrated or fused. In this article, we propose to perform the sensor fusion and registration of a LIDAR and a stereo camera using the particle swarm optimization algorithm, without the aid of any external calibration objects. The proposed algorithm automatically calibrates the sensors and registers the LIDAR range image with the stereo depth image. The registered LIDAR range image functions as the disparity map for the stereo disparity estimation and results in an effective sensor fusion mechanism. Additionally, we perform image denoising using a modified non-local means filter on the input image during stereo disparity estimation to improve robustness, especially at nighttime. To evaluate our proposed algorithm, the calibration and registration algorithm is compared with baseline algorithms on multiple datasets acquired with varying illumination. Compared to the baseline algorithms, our proposed algorithm demonstrates better accuracy. We also demonstrate that integrating the LIDAR range image within the stereo disparity estimation results in an improved disparity map with a significant reduction in computational complexity.
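
    A small sketch of how a registered LiDAR range image can serve as the disparity input for the stereo matcher, via d = f·B/Z, is shown below; the focal-length and baseline symbols are generic assumptions rather than the paper's notation.

      # Hedged sketch: convert a (sparse) LiDAR depth map to a disparity prior.
      import numpy as np

      def depth_to_disparity(depth_m, focal_px, baseline_m):
          """depth_m: depth map with 0 where there is no LiDAR return."""
          disparity = np.zeros_like(depth_m, dtype=float)
          valid = depth_m > 0.0
          disparity[valid] = focal_px * baseline_m / depth_m[valid]
          return disparity, valid          # disparity prior and its validity mask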

  • Snowfall Characteristics Observed by Weather Radars, an Optical Lidar and a Video Camera

    Henri SERVOMAA  Ken-ichiro MURAMOTO  Toru SHIINA  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E85-D No:8
    Page(s): 1314-1324

    This paper introduces an automatic, multi-instrument snowfall observation system and proposes techniques that can be used to estimate snowfall characteristics. The instruments used in this study include two microwave radars, an optical lidar, a CCD-camera-based imaging system, and high-accuracy electronic balances for reference data. The emphasis has been on obtaining good temporal resolution and synchronization accuracy for the separate datasets. In most research done so far, this has not been a principal point, either because only very long snowfall events were measured, because wide-area estimates were desired, or due to limitations of manual sampling methods and other technical issues. The measurements were also confined to a small area to ensure that all instruments recorded data from the same target. One radar and the optical lidar recorded an atmospheric profile up to 6000 m, while the other radar, the imaging system, and the two balances recorded snowfall at ground level. The combination of optical, microwave, and direct visual observations of snowfall shows that a change in cloud conditions can result in snowfall with different characteristics. The lidar backscatter was used as the main indicator of transitions in cloud conditions. A direct visual evaluation of the snowflake size distribution using a CCD camera shows that it is extremely helpful for interpreting radar data. The camera-observed velocity distribution showed no large variations between snowfall events; however, it could be useful for detecting graupel and hail, which have much faster terminal velocities. The paper concludes with a discussion on further elaborating the use of lidar and visual data to complement radar observations of snowfall.

  • Rayleigh and Rayleigh Doppler Lidars for the Observations of the Arctic Middle Atmosphere

    Kohei MIZUTANI  Toshikazu ITABE  Motoaki YASUI  Tetsuo AOKI  Yasuhiro MURAYAMA  Richard L. COLLINS  

     
    PAPER

    Vol: E83-B No:9
    Page(s): 2004-2009

    A Rayleigh lidar (laser radar) system was developed and is now working well for temperature observations of the middle atmosphere at the Poker Flat Research Range near Fairbanks, Alaska (65.1°N, 147.5°W). A comparison of lidar data and balloon-sonde data showed good agreement at overlapping altitudes. Atmospheric fluctuations are detected in the temperature profiles obtained by the Rayleigh lidar, and these are useful for the study of gravity waves. A Rayleigh Doppler lidar for wind measurements of the middle atmosphere is under development. The expected accuracy of horizontal-wind measurements up to an altitude of 60 km is better than 6 m/s for a 2-hour observation. The system will be operated at Poker Flat after the completion of development. The combination of these lidars and the radars installed at Poker Flat gives us the chance to simultaneously observe the structure and dynamics of the atmosphere over a broad range of altitudes. Here, we describe the Rayleigh lidar and the Rayleigh Doppler lidar for observations of the Arctic middle atmosphere at Poker Flat.