
Keyword Search Result

[Keyword] ground detection (10 hits)

Results 1-10 of 10
  • Reconfigurable Pedestrian Detection System Using Deep Learning for Video Surveillance

    M.K. JEEVARAJAN  P. NIRMAL KUMAR  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/06/09
    Vol: E106-D No:9
    Page(s): 1610-1614

    We present a reconfigurable deep learning pedestrian detection system for video surveillance that detects people, including those casting shadows, under varying lighting and heavily occluded conditions. This work proposes a region-based CNN, combined with CMOS and thermal cameras, to obtain human features even under poor lighting conditions. The main advantage of a reconfigurable system over processor-based systems is its high performance and parallelism when processing large amounts of data such as video frames. We discuss the details of the hardware implementation of the proposed real-time pedestrian detection algorithm on a Zynq FPGA. Simulation results show that the proposed integrated approach of the R-CNN architecture with both cameras provides better performance in terms of accuracy, precision, and F1-score. The Zynq FPGA implementation was compared with other works, showing that the proposed architecture offers a good trade-off in terms of quality, accuracy, speed, and resource utilization.
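
    As a rough illustration of the dual-camera input described above, the sketch below stacks a CMOS (RGB) frame and a thermal frame into one multi-channel array before handing it to a region-based CNN detector. The function names and the detector placeholder are assumptions for illustration only; the paper's actual pipeline runs on a Zynq FPGA.

```python
# Hedged sketch: stack a CMOS (RGB) frame and a thermal frame into one
# 4-channel array before handing it to a region-based CNN detector.
# `rcnn_detect` is a placeholder supplied by the caller, not the paper's model.
import numpy as np

def fuse_frames(rgb_frame: np.ndarray, thermal_frame: np.ndarray) -> np.ndarray:
    """rgb_frame: HxWx3 uint8, thermal_frame: HxW. Returns an HxWx4 float array."""
    thermal = thermal_frame.astype(np.float32)
    thermal = (thermal - thermal.min()) / (np.ptp(thermal) + 1e-6)  # normalize to [0, 1]
    rgb = rgb_frame.astype(np.float32) / 255.0
    return np.dstack([rgb, thermal])

def detect_pedestrians(rgb_frame, thermal_frame, rcnn_detect):
    fused = fuse_frames(rgb_frame, thermal_frame)
    return rcnn_detect(fused)  # caller-supplied detector returns bounding boxes
```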

  • An Autoencoder Based Background Subtraction for Public Surveillance

    Yue LI  Xiaosheng YU  Haijun CAO  Ming XU  

     
    LETTER-Image

    Publicized: 2021/04/08
    Vol: E104-A No:10
    Page(s): 1445-1449

    An autoencoder is trained to generate the background from a surveillance image by setting the training label to the shuffled input rather than the input itself, as a traditional autoencoder would. Multi-scale features are then extracted by a sparse autoencoder from the surveillance image and the corresponding background to detect the foreground.
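
    The abstract's "shuffled input" label is open to interpretation; one plausible reading, sketched below under that assumption, pairs each frame with a different, randomly chosen frame of the same sequence so the autoencoder is pushed toward reproducing what the frames share, i.e. the static background. The helper name and pairing rule are illustrative, not taken from the paper.

```python
# Sketch of one possible reading of the shuffled-label trick: pair each frame
# with a different, randomly chosen frame of the same sequence, so the
# autoencoder is pushed toward what the frames share, i.e. the static background.
# Names and the pairing rule are illustrative, not taken from the paper.
import numpy as np

def make_training_pairs(frames: np.ndarray, rng=None):
    """frames: (N, H, W) array. Returns (inputs, targets) where every target is
    a different frame of the same scene."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(frames))
    # avoid pairing a frame with itself
    idx = np.where(idx == np.arange(len(frames)), (idx + 1) % len(frames), idx)
    return frames, frames[idx]

# inputs, targets = make_training_pairs(frames)
# autoencoder.fit(inputs, targets)   # any autoencoder; the pairing is the key point
```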

  • An Open Multi-Sensor Fusion Toolbox for Autonomous Vehicles

    Abraham MONRROY CANO  Eijiro TAKEUCHI  Shinpei KATO  Masato EDAHIRO  

     
    PAPER

    Vol: E103-A No:1
    Page(s): 252-264

    We present an accurate and easy-to-use multi-sensor fusion toolbox for autonomous vehicles. It includes ‘target-less’ multi-LiDAR (Light Detection and Ranging) and Camera-LiDAR calibration, sensor fusion, and a fast and accurate point cloud ground classifier. Our calibration methods do not require complex setup procedures, and once the sensors are calibrated, our framework eases the fusion of multiple point clouds and camera images. In addition, we present an original real-time ground-obstacle classifier, which runs on the CPU and is designed to be used with any type and number of LiDARs. Evaluation results on the KITTI dataset confirm that our calibration method has accuracy comparable to other state-of-the-art contenders in the benchmark.
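
    For orientation only, the sketch below shows a bare-bones ground-obstacle split for a point cloud: fit a plane z = a·x + b·y + c by least squares and label points near that plane as ground. This is a generic illustration of the task, not the toolbox's classifier, and the threshold value is an arbitrary assumption.

```python
# Generic illustration of a ground-obstacle split on a LiDAR point cloud (NOT
# the toolbox's classifier): fit a plane z = a*x + b*y + c by least squares and
# label points close to that plane as ground. The 0.2 m threshold is arbitrary.
import numpy as np

def classify_ground(points: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """points: (N, 3) array of x, y, z. Returns a boolean mask, True = ground."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)  # solve z = a*x + b*y + c
    predicted_z = A @ coeffs
    return np.abs(points[:, 2] - predicted_z) < threshold

# ground_mask = classify_ground(cloud)   # cloud: (N, 3) numpy array
# obstacles = cloud[~ground_mask]
```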

  • Entropy Based Illumination-Invariant Foreground Detection

    Karthikeyan PANJAPPAGOUNDER RAJAMANICKAM  Sakthivel PERIYASAMY  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2019/04/18
    Vol: E102-D No:7
    Page(s): 1434-1437

    Background subtraction algorithms generate a background model of the monitored scene and compare the background model with the current video frame to detect foreground objects. In general, most background subtraction algorithms fail to detect foreground objects when the scene illumination changes. An entropy-based background subtraction algorithm is proposed to address this problem. The proposed method adapts to illumination changes by updating the background model according to the difference in entropy between the current frame and the previous frame. This entropy-based background modeling can efficiently handle both sudden and gradual illumination variations. The proposed algorithm is tested on six video sequences and compared with four algorithms to demonstrate its efficiency in terms of F-score, similarity, and frame rate.
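
    A minimal sketch of the update idea above follows: the background is refreshed faster when the entropy difference between consecutive frames is large (as in a sudden illumination change). The update rule, gains, and threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: the background model is refreshed faster when the
# entropy difference between consecutive frames is large (e.g. a sudden
# illumination change). Update rule, gains, and threshold are assumptions.
import numpy as np

def frame_entropy(gray: np.ndarray, bins: int = 256) -> float:
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def update_background(background, frame, prev_frame, base_rate=0.01, gain=0.1):
    delta_h = abs(frame_entropy(frame) - frame_entropy(prev_frame))
    rate = min(1.0, base_rate + gain * delta_h)   # bigger entropy change -> faster update
    return (1.0 - rate) * background + rate * frame.astype(np.float32)

def detect_foreground(background, frame, threshold=30):
    return np.abs(frame.astype(np.float32) - background) > threshold
```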

  • Adaptive Block-Propagative Background Subtraction Method for UHDTV Foreground Detection

    Axel BEAUGENDRE  Satoshi GOTO  

     
    PAPER-Image

    Vol: E98-A No:11
    Page(s): 2307-2314

    This paper presents an Adaptive Block-Propagative Background Subtraction (ABPBGS) method designed for Ultra High Definition Television (UHDTV) foreground detection. The main idea is to detect block after block along the objects in order to skip all areas of the image that contain no moving object. This is particularly interesting for UHDTV, where the objects of interest may represent less than 0.1% of the total area. Starting from a seed block determined in a previous iteration, the detection spreads along an object as long as a part of that object is detected. A block history map guarantees that each block is processed only once. Moreover, only small blocks are loaded and processed, thus saving computation time and memory. The processing of each block is independent enough to be easily parallelized. Compared to 9 state-of-the-art works, the ABPBGS achieved the best results, with an average global quality score of 0.57 (1 being the maximum) on a dataset of 4K and 8K UHDTV sequences developed for this work. None of the state-of-the-art methods could process 4K videos in reasonable time, while the ABPBGS achieved an average speed of 5.18 fps. In comparison, 5 of the 9 state-of-the-art methods performed slower on a 270p down-scaled version of the same videos. The experiments also showed that, when processing an 8K UHDTV video, the ABPBGS reduces the required memory by a factor of about 24, to a total of 450 MB.
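
    The sketch below illustrates the propagation idea described above: starting from seed blocks, detection spreads to neighbouring blocks only while foreground keeps being found, and a history map ensures each block is processed once. The per-block test is a placeholder supplied by the caller; this is a schematic reading of the abstract, not the paper's implementation.

```python
# Illustrative sketch of block propagation: starting from seed blocks, detection
# spreads to neighbouring blocks only while foreground keeps being found, and a
# history map ensures each block is processed once. The per-block test
# (`block_has_foreground`) is a placeholder supplied by the caller.
from collections import deque

def propagate_blocks(seed_blocks, grid_shape, block_has_foreground):
    """seed_blocks: iterable of (row, col) block indices.
    grid_shape: (rows, cols) of the block grid.
    Returns the set of blocks classified as foreground."""
    rows, cols = grid_shape
    visited = set()                      # block history map
    foreground = set()
    queue = deque(seed_blocks)
    while queue:
        r, c = queue.popleft()
        if (r, c) in visited or not (0 <= r < rows and 0 <= c < cols):
            continue
        visited.add((r, c))
        if block_has_foreground(r, c):   # only this small block is loaded and processed
            foreground.add((r, c))
            # spread the detection to the 4-connected neighbours
            queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return foreground
```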

  • Inequality-Constrained RPCA for Shadow Removal and Foreground Detection

    Hang LI  Yafei ZHANG  Jiabao WANG  Yulong XU  Yang LI  Zhisong PAN  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2015/03/02
    Vol: E98-D No:6
    Page(s): 1256-1259

    State-of-the-art background subtraction and foreground detection methods still face a variety of challenges, including illumination changes, camouflage, dynamic backgrounds, shadows, and intermittent object motion. Detection of foreground elements via the robust principal component analysis (RPCA) method and its extensions, based on low-rank and sparse structures, has achieved good performance in many scenes of datasets such as Changedetection.net (CDnet); however, the conventional RPCA method does not handle shadows well. To address this issue, we propose an approach that treats the observed video data as the sum of three parts, namely a low-rank background, sparse moving objects, and moving shadows. We then impose inequality constraints on the basic RPCA model and use an alternating direction method of multipliers framework combined with Rockafellar multipliers to derive a closed-form solution of the shadow-matrix sub-problem. Our experiments demonstrate that the method works effectively on challenging datasets that contain shadows.
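
    To make the three-part decomposition concrete, a schematic formulation consistent with the description above (and not necessarily the paper's exact model) is:

```latex
\min_{B,\,F,\,S} \; \|B\|_{*} + \lambda_{1}\|F\|_{1} + \lambda_{2}\|S\|_{1}
\quad \text{s.t.} \quad D = B + F + S, \qquad S \le 0,
```

    where D stacks the video frames as columns, B is the low-rank background, F the sparse moving objects, and S the moving shadows; the elementwise inequality S ≤ 0 encodes the assumption that shadows only darken the background. The regularizers and the exact form of the constraint are illustrative choices.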

  • Design of Real-Time Self-Frame-Rate-Control Foreground Detection for Multiple Camera Surveillance System

    Tsung-Han TSAI  Chung-Yuan LIN  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E94-D No:12
    Page(s): 2513-2522

    Emerging video surveillance technologies rely on foreground detection to achieve automatic event detection. Integrating foreground detection with a modern multi-camera surveillance system can significantly increase surveillance efficiency. However, foreground detection often leads to a high computational load and increases the cost of the surveillance system when a mass deployment of end cameras is needed. This paper proposes a DSP-based foreground detection algorithm. Our algorithm incorporates a temporal data correlation predictor (TDCP), which captures the correlation in the data and reduces computation based on this correlation. On top of the DSP-oriented foreground detection, an adaptive frame rate control is developed as a low-cost solution for multi-camera surveillance systems. The adaptive frame rate control automatically detects the computational load of foreground detection on multiple video sources and adaptively tunes the TDCP to meet the real-time specification. Therefore, no additional hardware cost is required when the number of deployed cameras increases. Our method has been validated on a demonstration platform. It achieves real-time CIF frame processing for a 16-camera surveillance system with a single DSP chip. Quantitative evaluation demonstrates that our solution provides a satisfactory detection rate while significantly reducing the hardware cost.
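
    The control loop below sketches the adaptive idea in a general-purpose setting: per-cycle processing time over all cameras is measured, and a skip parameter standing in for the TDCP tuning knob is tightened or relaxed to stay inside the real-time budget. The names, tuning rule, and constants are assumptions, not the paper's DSP implementation.

```python
# Sketch of an adaptive frame-rate control loop in the spirit of the description
# above: per-cycle processing time over all cameras is measured, and a skip
# parameter (standing in for the TDCP tuning knob) is tightened or relaxed to
# stay inside the real-time budget. Names, rule, and constants are assumptions.
import time

def surveillance_loop(cameras, detect_foreground, budget_s=1.0 / 30.0):
    skip_threshold = 0.0                  # stand-in for the TDCP tuning knob
    while True:                           # main loop, runs until externally stopped
        start = time.monotonic()
        for cam in cameras:
            frame = cam.read()            # caller-supplied camera object
            detect_foreground(frame, skip_threshold)
        elapsed = time.monotonic() - start
        if elapsed > budget_s:            # over budget: skip more correlated data
            skip_threshold = min(1.0, skip_threshold + 0.05)
        elif elapsed < 0.8 * budget_s:    # headroom: process more data again
            skip_threshold = max(0.0, skip_threshold - 0.05)
```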

  • Probabilistic BPRRC: Robust Change Detection against Illumination Changes and Background Movements

    Kentaro YOKOI  

     
    PAPER

    Vol: E93-D No:7
    Page(s): 1700-1707

    This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add robustness against background movements, including foreground invasions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in environments with illumination changes and background movements using three public datasets and one private dataset: ATON Highway data, the Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.

  • Automatic Data Processing Procedure for Ground Probing Radar

    Toru SATO  Kenya TAKADA  Toshio WAKAYAMA  Iwane KIMURA  Tomoyuki ABE  Tetsuya SHINBO  

     
    PAPER-Electronic and Radio Applications

    Vol: E77-B No:6
    Page(s): 831-837

    We developed an automatic data processing algorithm for ground-probing radar, which is essential when a large amount of data must be analyzed by a non-expert. Its aim is to obtain the optimum result that the conventional technique can give, without the assistance of an experienced operator. The algorithm is general except that it postulates the existence of at least one isolated target in the radar image. The raw images of underground objects are compressed in the vertical and horizontal directions using a pulse-compression filter and the aperture synthesis technique, respectively. The test function needed to configure the compression filter is automatically selected from the given image. The sensitivity of the compression filter is adjusted to minimize the magnitude of spurious responses. The propagation velocity needed to perform the aperture synthesis is determined by fitting a hyperbola to the selected echo trace. We verified the algorithm by applying it to data obtained at two test sites with different magnitudes of clutter echoes.
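
    As a worked illustration of the velocity-estimation step above: a point reflector at horizontal position x0 and depth d produces two-way travel times t(x) = (2/v)·sqrt((x − x0)² + d²) along the antenna track, so fitting that hyperbola to a picked echo trace yields the propagation velocity v. The sketch below uses a generic curve fit with illustrative names and units; it is not the paper's implementation.

```python
# Worked illustration of the velocity-estimation step: a point reflector at
# horizontal position x0 and depth d gives two-way travel times
#   t(x) = (2 / v) * sqrt((x - x0)**2 + d**2),
# so fitting this hyperbola to a picked echo trace yields the velocity v.
# Names, units, and the initial guess are illustrative, not the paper's code.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, v, x0, d):
    return 2.0 * np.sqrt((x - x0) ** 2 + d ** 2) / v

def estimate_velocity(antenna_positions, travel_times):
    """antenna_positions [m], travel_times [s]: 1-D arrays picked from the image."""
    p0 = [1.0e8, float(np.mean(antenna_positions)), 1.0]  # guess: v ~ 0.1 m/ns, 1 m depth
    (v, x0, d), _ = curve_fit(hyperbola, antenna_positions, travel_times, p0=p0)
    return v, x0, d
```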

  • High-Resolution Radar Image Reconstruction Using an Arbitrary Array

    Toshio WAKAYAMA  Toru SATO  Iwane KIMURA  

     
    PAPER-Subsurface Radar

    Vol: E76-B No:10
    Page(s): 1305-1312

    Radar imaging is one of the most powerful tools for underground detection. However, the performance of conventional methods is not sufficiently high when the observation direction or the aperture size is restricted. In the present paper, an image reconstruction method based on model fitting with nonlinear least squares has been developed that is applicable to arbitrarily arranged arrays. Reconstruction is carried out on the assumption that targets consist of discrete point scatterers embedded in a homogeneous medium. The model fitting is iterated, increasing the number of point targets in the assumed model, until the fitting residual stops decreasing or becomes small enough. A penalty function is used in the nonlinear least squares to keep the algorithm stable. Fundamental characteristics of the method, revealed by computer simulation, are described. This method produces a much sharper image than the conventional aperture synthesis technique.
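
    The loop below sketches that iterative fitting scheme: one point scatterer is added per iteration and the full parameter vector is refit with nonlinear least squares until the residual stops improving. The forward model mapping scatterer parameters to predicted radar samples is supplied by the caller; the names, the stopping rule, and the omission of the paper's penalty function are all simplifications.

```python
# Sketch of the iterative fitting loop: one point scatterer is added per
# iteration and the whole parameter vector is refit with nonlinear least squares
# until the residual stops improving. The forward model mapping scatterer
# parameters to predicted radar samples is supplied by the caller; the stopping
# rule is a simplification and the paper's penalty function is omitted.
import numpy as np
from scipy.optimize import least_squares

def fit_scatterers(data, forward_model, init_scatterer, max_targets=10, tol=1e-3):
    """data: measured samples (1-D array).
    forward_model(params): predicted samples for a flat parameter vector.
    init_scatterer(): initial guess (flat array) for one additional scatterer."""
    params = np.array([], dtype=float)
    best_residual = np.inf
    for _ in range(max_targets):
        trial = np.concatenate([params, init_scatterer()])  # add one more point target
        result = least_squares(lambda p: forward_model(p) - data, trial)
        residual = float(np.sum(result.fun ** 2))
        if best_residual - residual < tol * best_residual:  # no meaningful improvement
            break
        params, best_residual = result.x, residual
    return params
```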