Karthikeyan PANJAPPAGOUNDER RAJAMANICKAM Sakthivel PERIYASAMY
Background subtraction algorithms generate a background model of the monitored scene and compare it with the current video frame to detect foreground objects. In general, most background subtraction algorithms fail to detect foreground objects when the scene illumination changes. An entropy-based background subtraction algorithm is proposed to address this problem. The proposed method adapts to illumination changes by updating the background model according to the difference in entropy between the current frame and the previous frame. This entropy-based background modeling can efficiently handle both sudden and gradual illumination variations. The proposed algorithm is tested on six video sequences and compared with four existing algorithms to demonstrate its efficiency in terms of F-score, similarity, and frame rate.
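The entropy-driven update described above can be sketched as follows. The abstract does not give the paper's exact entropy measure, threshold, or blending rates, so `entropy_thresh`, `slow`, and `fast` below are illustrative assumptions: when the entropy difference between consecutive frames signals an illumination change, the background model is blended toward the current frame more aggressively.

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy of a grayscale frame's intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def update_background(background, frame, prev_frame,
                      entropy_thresh=0.5, slow=0.05, fast=0.5):
    """Blend the current frame into the background model; the blending
    rate is raised when the entropy difference between consecutive
    frames indicates an illumination change (parameters illustrative)."""
    delta = abs(frame_entropy(frame) - frame_entropy(prev_frame))
    alpha = fast if delta > entropy_thresh else slow
    return (1 - alpha) * background + alpha * frame
```

A frame-differencing step against the updated `background` would then yield the foreground mask.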
Yoichi TOMIOKA Hikaru MURAKAMI Hitoshi KITAZAWA
Recently, video surveillance systems have been widely introduced in various places, and protecting the privacy of objects in the scene has become as important as ensuring security. Masking each moving object with a background subtraction method is an effective technique for protecting its privacy. However, background subtraction is heavily affected by changes in sunlight, and redundant masking caused by over-extraction is inevitable. Such superfluous masking degrades the quality of video surveillance. In this paper, we propose a moving object masking method that combines background subtraction with machine learning based on Real AdaBoost. This method can reduce superfluous masking while maintaining the reliability of privacy protection. In the experiments, we demonstrate that the proposed method achieves about 78-94% accuracy in classifying superfluous masking regions and moving objects.
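As a rough illustration of the idea of filtering background-subtraction output with a boosted classifier, the sketch below scores each candidate mask region with a strong classifier of the form H(x) = Σ_t α_t h_t(x) and keeps only regions classified as genuine moving objects. The decision stumps and feature vectors here are hypothetical stand-ins; the paper's actual Real AdaBoost weak learners and region features are not specified in the abstract.

```python
def adaboost_score(feats, stumps):
    """Boosted strong classifier: weighted sum of decision-stump votes,
    H(x) = sum_t alpha_t * h_t(x), with h_t in {-1, +1}.
    Each stump is (feature_index, threshold, alpha)."""
    return sum(alpha * (1 if feats[dim] > thresh else -1)
               for dim, thresh, alpha in stumps)

def filter_masks(regions, score_fn, threshold=0.0):
    """Keep only candidate mask regions the classifier accepts.
    `regions` is a list of (bbox, feature_vector) pairs produced by
    background subtraction; rejected regions are treated as superfluous
    masking and left unmasked."""
    return [bbox for bbox, feats in regions if score_fn(feats) > threshold]
```

In this scheme a region rejected by the classifier is assumed to be over-extraction (e.g. a sunlight change) rather than a moving object, so it is not masked.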
Daisuke ABE Eigo SEGAWA Osafumi NAKAYAMA Morito SHIOHARA Shigeru SASAKI Nobuyuki SUGANO Hajime KANNO
In this paper, we present a robust small-object detection method, which we call "Frequency Pattern Emphasis Subtraction (FPES)", for wide-area surveillance such as that of harbors, rivers, and plant premises. To achieve robust detection under changes in environmental conditions, such as illuminance level, weather, and camera vibration, our method distinguishes target objects from background and noise based on the differences in frequency components between them. The evaluation results demonstrate that our method detected more than 95% of target objects in images of large surveillance areas ranging from 30 to 75 meters at their center.
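One way to realize the frequency-based separation described above is to compare a patch against the background in the spectral domain, discarding the lowest frequencies (slow illumination drift) and keeping a mid-frequency band where small objects live. This is a minimal sketch of that idea, not the paper's FPES algorithm; the band limits are illustrative assumptions.

```python
import numpy as np

def frequency_emphasis_score(patch, background_patch, band=(2, 8)):
    """Score how strongly a patch differs from the background within a
    mid-frequency band. The DC/low-frequency bins (global illumination
    changes) fall outside the band and are ignored, so a uniform
    brightness shift scores ~0 while a small object scores high."""
    diff = np.abs(np.fft.fft2(patch) - np.fft.fft2(background_patch))
    # Radial frequency index of each FFT bin
    fy = np.fft.fftfreq(patch.shape[0]) * patch.shape[0]
    fx = np.fft.fftfreq(patch.shape[1]) * patch.shape[1]
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = (r >= band[0]) & (r <= band[1])
    return diff[mask].sum()
```

Thresholding this score per patch would mark candidate object locations while suppressing illumination changes and broadband noise.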
Kenji IDE Ryusuke KAWAHARA Satoshi SHIMIZU Takayuki HAMAMOTO
We have investigated real-time object tracking using a wide-view imaging system. For this system, we have designed and fabricated a new smart image sensor with four functions effective for wide-view imaging, such as a random access function. In this system, eight smart sensors and an octagonal mirror are used, and each image obtained by the sensors corresponds to a partial image of the wide view. In addition, by using an FPGA for processing, the circuits in the system can be scaled down and a panoramic image can be obtained in real time. For object tracking with this system, an object-detection method based on background subtraction is used. When moving objects are detected in the panoramic image, they are constantly displayed on the monitor at higher resolution in real time. In this paper, we describe the random access image sensor and show some results obtained with it. In addition, we describe the wide-view imaging system using eight sensors. Furthermore, we explain the object tracking method used in this system and show the results of real-time multiple-object tracking.
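The background-subtraction detection step used for tracking can be sketched as below: pixels differing from the background model by more than a threshold are marked foreground, and their bounding box identifies the region to display at higher resolution. The threshold is an illustrative assumption; the paper's sensor-level implementation is not reproduced here.

```python
import numpy as np

def detect_foreground(frame, background, thresh=30):
    """Background subtraction: pixels whose absolute difference from
    the background exceeds `thresh` are foreground. Returns the
    bounding box (x_min, y_min, x_max, y_max) of the foreground
    region, or None if nothing is detected."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

In the system described above, such a bounding box would drive the sensor's random access function to read out only the detected region at full resolution.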