
Keyword Search Result

[Keyword] motion detection (20 hits)

Results 1-20 of 20
  • Transient Fault Tolerant State Assignment for Stochastic Computing Based on Linear Finite State Machines

    Hideyuki ICHIHARA  Motoi FUKUDA  Tsuyoshi IWAGAKI  Tomoo INOUE  

     
    PAPER

    Vol: E103-A No:12, Page(s): 1464-1471

    Stochastic computing (SC), which is approximate computation with probabilities, has attracted attention owing to its small area, low power consumption and high fault tolerance. In this paper, we focus on the transient fault tolerance of SC based on linear finite state machines (linear FSMs). We show that the state assignment of FSMs considerably affects the fault tolerance of linear FSM-based SC circuits, and present a Markov model for representing the impact of the state assignment on the behavior of faulty FSMs and estimating the expected error significance of faulty FSM-based SC circuits. Furthermore, we propose a heuristic algorithm for appropriate state assignment that can mitigate the influence of transient faults. Experimental analysis shows that the state assignment has an impact on the transient fault tolerance of linear FSM-based SC circuits and that the proposed state assignment algorithm can achieve a quasi-optimal state assignment in terms of fault tolerance.
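The state-assignment contribution above builds on the basic SC encoding, which is easy to demonstrate in software. The sketch below is not the authors' FSM method; it only illustrates, under an assumed stream length, how a value becomes a bitstream probability and how a single AND gate then multiplies two values.

```python
import numpy as np

def to_bitstream(p, length, rng):
    """Encode a value p in [0, 1] as a random bitstream whose
    fraction of 1s approximates p."""
    return (rng.random(length) < p).astype(np.uint8)

def sc_multiply(x, y):
    """In stochastic computing, a bitwise AND of two independent
    streams multiplies the probabilities they encode."""
    return x & y

rng = np.random.default_rng(0)
n = 100_000                      # assumed stream length
a, b = 0.6, 0.5
product = sc_multiply(to_bitstream(a, n, rng), to_bitstream(b, n, rng))
estimate = product.mean()        # close to a * b = 0.30
```

Because every bit carries equal weight, a transient flip of any single bit perturbs the decoded value by only 1/n, in contrast to a flipped most-significant bit in binary arithmetic; this is the fault-tolerance property the paper builds on.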

  • Entropy Based Illumination-Invariant Foreground Detection

    Karthikeyan PANJAPPAGOUNDER RAJAMANICKAM  Sakthivel PERIYASAMY  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2019/04/18, Vol: E102-D No:7, Page(s): 1434-1437

    Background subtraction algorithms generate a background model of the monitored scene and compare it with the current video frame to detect foreground objects. In general, most background subtraction algorithms fail to detect foreground objects when the scene illumination changes. An entropy based background subtraction algorithm is proposed to address this problem. The proposed method adapts to illumination changes by updating the background model according to the difference in entropy between the current frame and the previous frame. This entropy based background modeling can efficiently handle both sudden and gradual illumination variations. The proposed algorithm is tested on six video sequences and compared with four algorithms to demonstrate its efficiency in terms of F-score, similarity and frame rate.
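The update rule summarized above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the entropy-jump threshold and the two learning rates are assumed values.

```python
import numpy as np

def frame_entropy(frame, bins=256):
    """Shannon entropy (bits) of a grayscale frame's intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def update_background(bg, frame, prev_frame, alpha=0.05, boost=0.5):
    """Blend the current frame into the background model; learn faster
    when the entropy jump between consecutive frames signals an
    illumination change (threshold 0.5 bits is a hypothetical value)."""
    delta = abs(frame_entropy(frame) - frame_entropy(prev_frame))
    rate = boost if delta > 0.5 else alpha
    return (1 - rate) * bg + rate * frame.astype(np.float64)
```

Foreground pixels would then be those whose distance from `bg` exceeds a threshold, as in any background subtraction pipeline.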

  • Robust Ghost-Free High-Dynamic-Range Imaging by Visual Salience Based Bilateral Motion Detection and Stack Extension Based Exposure Fusion

    Zijie WANG  Qin LIU  Takeshi IKENAGA  

     
    PAPER-Image Processing

    Vol: E100-A No:11, Page(s): 2266-2274

    High-dynamic-range imaging (HDRI) technologies aim to extend the dynamic range of luminance beyond the limitations of camera sensors. Irradiance information of a scene can be reconstructed by fusing multiple low-dynamic-range (LDR) images with different exposures. The key issue is removing ghost artifacts caused by the motion of moving objects and handheld cameras. This paper proposes a robust ghost-free HDRI algorithm based on visual salience based bilateral motion detection and stack extension based exposure fusion. For ghost-area detection, visual salience is introduced to measure the differences between multiple images, and bilateral motion detection is employed to improve the accuracy of labeling motion areas. For exposure fusion, the proposed algorithm reduces the discontinuity of brightness by stack extension and rejects the information of ghost areas via fusion masks to avoid artifacts. Experimental results show that the proposed algorithm can remove ghost artifacts accurately for both static and handheld cameras, remains robust to scenes with complex motion, and keeps complexity lower than recent advances, including a rank minimization based method and a patch based method, with average time savings of 63.6% and 20.4%, respectively.

  • Robust Motion Detection Based on the Enhanced ViBe

    Zhihui FAN  Zhaoyang LU  Jing LI  Chao YAO  Wei JIANG  

     
    LETTER-Computer Graphics

    Publicized: 2015/06/10, Vol: E98-D No:9, Page(s): 1724-1726

    To eliminate the cast shadows of moving objects, which cause difficulties in vision applications, a novel method is proposed based on the visual background extractor (ViBe), altering its updating mechanism using relevant spatiotemporal information. An adaptive threshold and a spatial adjustment are also employed. Experiments on typical surveillance scenes validate this scheme.
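For context, the classification step of a ViBe-style background model (the component whose update mechanism the letter alters) can be sketched as follows; the matching radius and minimum match count are assumed parameter values, not the letter's.

```python
import numpy as np

def vibe_classify(frame, samples, radius=20, min_matches=2):
    """Label pixels as foreground (True) when fewer than `min_matches`
    of the stored background samples lie within `radius` of the
    current pixel value. `samples` has shape (N, H, W): N background
    samples kept per pixel."""
    matches = (np.abs(samples - frame) <= radius).sum(axis=0)
    return matches < min_matches
```

In full ViBe, a background-classified pixel then randomly replaces one of its own (and one neighbor's) stored samples, which is the update path the letter modifies to suppress cast shadows.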

  • Foreground Segmentation Using Morphological Operator and Histogram Analysis for Indoor Applications

    Kyounghoon JANG  Geun-Jun KIM  Hosang CHO  Bongsoon KANG  

     
    LETTER-Vision

    Vol: E98-A No:9, Page(s): 1998-2003

    This paper proposes a foreground segmentation method for indoor environments using depth images only. It uses a morphological operator and histogram analysis to segment the foreground. In order to compare the accuracy of foreground segmentation, we use the metrics of false positive rate (FPR), false negative rate (FNR), total error (TE), and a similarity measure (S). A series of experimental results using video sequences collected under various circumstances is discussed. The proposed system is also implemented on a field-programmable gate array (FPGA) with low hardware resource usage.

  • Motion Detection Algorithm for Unmanned Aerial Vehicle Nighttime Surveillance

    Huaxin XIAO  Yu LIU  Wei WANG  Maojun ZHANG  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2014/09/22, Vol: E97-D No:12, Page(s): 3248-3251

    In consideration of the image noise captured by photoelectric cameras at nighttime, a robust motion detection algorithm based on sparse representation is proposed in this study. A universal dictionary for arbitrary scenes is presented. Realistic and synthetic experiments demonstrate the robustness of the proposed approach.

  • A Motion Detection Model Inspired by the Neuronal Propagation in the Hippocampus

    Haichao LIANG  Takashi MORIE  

     
    PAPER-Vision

    Vol: E95-A No:2, Page(s): 576-585

    We propose a motion detection model, suitable for operation at higher than video rate, inspired by neuronal propagation in the hippocampus of the brain. The model detects the motion of edges, which are extracted from monocular image sequences, on specified 2D maps without image matching. We introduce gating units into a CA3-CA1 model, where CA3 and CA1 are the names of hippocampal regions. We use the gating units to reduce mismatching when applying our model in complicated situations. We also propose a map-division method to achieve accurate detection. We have evaluated the performance of the proposed model using artificial and real image sequences. The results show that the proposed model can run at up to 1.0 ms/frame using a 64×60-unit division of a 320×240-pixel image. The detection rate of moving edges reaches about 99% in a complicated situation. We have also verified that the proposed model can accurately detect approaching objects at a high frame rate (>100 fps), better than conventional models, provided we can obtain accurate positions of image features and filter out the sources of false positive results in post-processing.

  • Flicker Parameters Estimation in Old Film Sequences Containing Moving Objects

    Xiaoyong ZHANG  Masahide ABE  Masayuki KAWAMATA  

     
    PAPER-Digital Signal Processing

    Vol: E94-A No:12, Page(s): 2836-2844

    The aim of this study is to improve the accuracy of flicker parameter estimation in old film sequences in which moving objects are present. Conventional methods tend to fail in flicker parameter estimation because of the effects of moving objects. Our proposed method first utilizes an adaptive Gaussian mixture model (GMM)-based method to detect the moving objects in the film sequences, and combines the detected results with histogram-matched frames to generate reference frames for flicker parameter estimation. Then, on the basis of a linear flicker model, the proposed method uses an M-estimator with the reference frames to estimate the flicker parameters. Experimental results show that the proposed method can effectively improve the accuracy of flicker parameter estimation when moving objects are present in the film sequences.
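The linear flicker model mentioned above relates an observed frame to a reference frame by a gain and an offset. A minimal sketch of fitting those two parameters is shown below; it uses ordinary least squares for brevity, whereas the paper uses a robust M-estimator to resist outliers from residual motion.

```python
import numpy as np

def estimate_flicker(observed, reference):
    """Least-squares fit of gain a and offset b in the linear flicker
    model observed ≈ a * reference + b, over all pixels."""
    x = reference.ravel().astype(np.float64)
    y = observed.ravel().astype(np.float64)
    A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b
```

Replacing the quadratic loss with, e.g., a Huber loss solved by iteratively reweighted least squares would approximate the M-estimator behavior the paper relies on.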

  • Foreground-Adaptive Motion Detection in Broad Surveillance Environments

    Fan-Chieh CHENG  Shih-Chia HUANG  Shanq-Jang RUAN  

     
    LETTER-Image Processing

    Vol: E93-A No:11, Page(s): 2096-2097

    In this letter, we propose a novel motion detection method to accurately detect moving objects in automatic video surveillance systems. Based on the proposed background generation mechanism, the presence of either a moving object or background information is first checked in order to selectively update the high-quality adaptive background model, which facilitates the subsequent motion detection using the Laplacian distribution model. The results demonstrate that our proposed method attains a substantially higher degree of detection accuracy, outperforming the state-of-the-art methods by average Similarity accuracy rates of up to 56.64%, 27.78%, 50.04%, 43.33%, and 44.09%, respectively.

  • An Ego-Motion Detection System Employing Directional-Edge-Based Motion Field Representations

    Jia HAO  Tadashi SHIBATA  

     
    PAPER-Pattern Recognition

    Vol: E93-D No:1, Page(s): 94-106

    In this paper, a motion field representation algorithm based on directional edge information has been developed. This work aims at building an ego-motion detection system using dedicated VLSI chips developed for real-time motion field generation at low power. Directional edge maps are utilized instead of original gray-scale images to represent the local features of an image and to detect the local motion components in a moving image sequence. Motion detection by edge histogram matching has drastically reduced the computational cost of block matching, while achieving robust performance of the ego-motion detection system under dynamic illumination variation. Two kinds of feature vectors, the global motion vector and the component distribution vectors, are generated from a motion field at two different scales and perspectives. They are jointly utilized in a hierarchical classification scheme employing multiple-clue matching. As a result, the problems of motion ambiguity as well as motion field distortion caused by camera shake during video capture have been resolved. The performance of the ego-motion detection system was evaluated under various circumstances, and the effectiveness of this work has been verified.

  • Background Independent Moving Object Segmentation for Video Surveillance

    M. Ali Akber DEWAN  M. Julius HOSSAIN  Oksam CHAE  

     
    PAPER-Multimedia Systems for Communications

    Vol: E92-B No:2, Page(s): 585-598

    Background modeling is one of the most challenging and time-consuming tasks in motion detection from video sequences. This paper presents a background-independent moving object segmentation algorithm utilizing the spatio-temporal information of the last three frames. Existing three-frame based methods face challenges due to insignificant gradient information in the overlapping regions of difference images and edge localization errors. These methods extract scattered moving edges and suffer a poor detection rate, especially when objects with slow movement exist in the scene. Moreover, they are not well suited for moving object segmentation and tracking. The proposed method solves these problems by representing edges as segments and applying a novel segment based flexible edge matching algorithm that makes use of gradient accumulation through distance transformation. By working with the three most recent frames, the proposed method can adapt to changes in the environment. Segment based representation facilitates local geometric transformation and thus can make proper use of flexible matching to provide an effective solution for tracking. To segment the moving object region from the detected moving edges, we introduce a watershed based algorithm followed by an iterative background removal procedure. The watershed based segmentation algorithm helps to extract the moving object with a more accurate boundary, which eventually achieves higher coding efficiency in content based applications and ensures good visual quality even in limited bit-rate multimedia communication.
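As a point of reference, the plain three-frame differencing that such methods refine can be sketched as follows; the threshold is an assumed value.

```python
import numpy as np

def three_frame_motion(f0, f1, f2, thresh=15):
    """Classic three-frame difference: a pixel of the middle frame f1
    is labeled moving when it differs from both the previous frame f0
    and the next frame f2 by more than `thresh`."""
    d1 = np.abs(f1.astype(np.int16) - f0) > thresh
    d2 = np.abs(f2.astype(np.int16) - f1) > thresh
    return d1 & d2
```

The paper's segment-based edge matching replaces this per-pixel rule precisely because thresholded differences yield scattered, poorly localized moving edges.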

  • Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    Tsang-Long PAO  Yu-Te CHEN  Jun-Heng YEH  

     
    PAPER-Human-computer Interaction

    Vol: E91-D No:4, Page(s): 1074-1081

    It is said that technology comes out of humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers can perceive and respond to human emotion, human-computer interaction will become more natural. Several classifiers are adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compared several popular classification methods and evaluated their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results again show that the proposed WD-MKNN outperforms the others.

  • Moving Object Detection for Real Time Video Surveillance: An Edge Based Approach

    M. Julius HOSSAIN  M. Ali Akber DEWAN  Oksam CHAE  

     
    PAPER-Multimedia Systems for Communications

    Vol: E90-B No:12, Page(s): 3654-3664

    This paper presents an automatic edge segment based algorithm for the detection of moving objects that has been specially developed to deal with variations in the illumination and contents of the background. We investigated the suitability of the proposed edge segment based moving object detection algorithm in comparison with traditional intensity based as well as edge pixel based detection methods. In our method, edges are extracted from video frames and are represented as segments using an efficiently designed edge class. This representation helps to obtain the geometric information of edges for edge matching and shape retrieval, and creates an effective means to incorporate knowledge into edge segments during background modeling and motion tracking. An efficient approach for background edge generation and a robust method of edge matching are presented to effectively reduce the risk of false alarms due to illumination change and camera motion while maintaining high sensitivity to the presence of moving objects. The proposed method can be successfully applied in video surveillance applications in home networking environments as well as various monitoring systems. As the video coding standard MPEG-4 enables content based functionality, it can successfully utilize the shape information of the detected moving objects to achieve high coding efficiency. Experiments with real image sequences, along with comparisons with some other existing methods, are presented, illustrating the robustness of the proposed algorithm.

  • A Two-Dimensional Network of Analog Circuits for Motion Detection Based on the Frog Visual System

    Kimihiro NISHIO  Hiroo YONEZU  Yuzo FURUKAWA  

     
    PAPER

    Vol: E89-A No:2, Page(s): 428-438

    A two-dimensional network for motion detection constructed of simple analog circuits was proposed and designed based on the frog visual system. In the frog visual system, the two-dimensional motion of a moving object can be detected by performing simple information processing in the tectum and thalamus of the frog brain. The measured results of the test chip fabricated by a 1.2 µm complementary metal oxide semiconductor (CMOS) process confirmed the correct operation of the basic circuits in the network. The results obtained with the simulation program with integrated circuit emphasis (SPICE) showed that the proposed network can detect the motion direction and velocity of a moving object. Thus, a chip for two-dimensional motion detection was realized using the proposed network.

  • Motion Detecting Artificial Retina Model by Two-Dimensional Multi-Layered Analog Electronic Circuits

    Masashi KAWAGUCHI  Takashi JIMBO  Masayoshi UMENO  

     
    PAPER

    Vol: E86-A No:2, Page(s): 387-395

    We propose herein a motion detection artificial vision model which uses analog electronic circuits. The proposed model is comprised of four layers. The first layer is a differentiation circuit with a large CR coefficient, and the second layer is a differentiation circuit with a small CR coefficient; thus, the speed of the moving object is detected. The third layer is a difference circuit for detecting the movement direction, and the fourth layer is a multiplication circuit for detecting pure motion output. When the object moves from left to right the model outputs a positive signal, and when the object moves from right to left the model outputs a negative signal. We first designed a one-dimensional model, which we later enhanced to obtain a two-dimensional model. The model was shown to be capable of detecting a moving object in the image. Using analog electronic circuits, the number of connections decreases and real-time processing becomes feasible. In addition, the proposed model offers excellent fault tolerance. Moreover, the proposed model can be used to detect two or more objects, which is advantageous for detection in an environment in which several objects are moving in multiple directions simultaneously. Thus, the proposed model allows practical, cheap movement sensors to be realized for applications such as the measurement of road traffic volume or counting the number of pedestrians in an area. From a technological viewpoint, the proposed model facilitates clarification of the mechanism of the biological vision system, which should enable the design and simulation of analog electric circuits for detecting the movement and speed of objects.
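The four-layer circuit described above behaves like a correlation-type (Reichardt) motion detector: a delayed copy of each input is compared with its undelayed neighbor, and the sign of the output encodes direction. The following software analogue is a loose sketch, not the circuit itself; a discrete first-order low-pass stage stands in for the CR stages, and `tau` is an assumed time constant.

```python
import numpy as np

def reichardt_1d(frames, tau=0.7):
    """Correlation-type motion detector on a (time, position) array:
    a low-pass (i.e. delayed) copy of each input is multiplied with
    the undelayed signal of its neighbor, and the two mirror-symmetric
    subunits are subtracted to give a signed direction output."""
    T, N = frames.shape
    lp = np.zeros((T, N))
    for t in range(1, T):                     # first-order low-pass stage
        lp[t] = tau * lp[t - 1] + (1 - tau) * frames[t]
    # rightward subunit minus leftward subunit, summed over space and time
    out = lp[:, :-1] * frames[:, 1:] - frames[:, :-1] * lp[:, 1:]
    return float(out.sum())

T = 10
rightward = np.zeros((T, T))
rightward[np.arange(T), np.arange(T)] = 1.0   # bright spot moving left-to-right
direction = reichardt_1d(rightward)           # positive for left-to-right motion
```

A mirrored stimulus yields a negative output, matching the positive/negative signal convention of the analog model.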

  • Visual Stereo Image Generation from Video Data Using Phase Correlation Technique

    Xiaohua ZHANG  Masayuki NAKAJIMA  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E83-D No:6, Page(s): 1266-1273

    We propose a new method for generating visual stereo images from common two-dimensional images without 3D reconstruction. The major novel contributions of this report are two aspects. First, we address the detection of the dominant motion present in the given scenes; to do so, we borrow the phase shift theorem and calculate the inverse Fourier transform of the cross-power spectrum to find the maximum peak, whose position determines the motion parameters. Second, unlike most researchers who study stereo vision to recover 3D information for modeling and rendering, we address visual stereo image generation without 3D reconstruction by applying the computed motion parameters to select two of the given images to form a stereo pair for the left eye and right eye, respectively. The proposed approach can be employed for applications such as navigation in a virtual environment.
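The phase-correlation step described above can be sketched directly with FFTs: the normalized cross-power spectrum of two images is an all-phase signal whose inverse transform peaks at the dominant translation. The image size and shift below are illustrative.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation taking image a to image b from
    the peak of the inverse FFT of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep the phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                          # unwrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(2)
img = rng.random((32, 32))
shifted = np.roll(img, (3, -5), axis=(0, 1))
shift = phase_correlation(img, shifted)
```

Because only the phase is kept, the peak is sharp and largely insensitive to global brightness changes, which is why phase correlation is a robust dominant-motion estimator.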

  • Use of Multimodal Information in Facial Emotion Recognition

    Liyanage C. DE SILVA  Tsutomu MIYASATO  Ryohei NAKATSU  

     
    PAPER-Artificial Intelligence and Cognitive Science

    Vol: E81-D No:1, Page(s): 105-114

    Detection of facial emotions is mainly addressed by computer vision researchers based on facial display. Detection of vocal expressions of emotion is likewise found in research work done by acoustics researchers. Most of these research paradigms are devoted purely to visual or purely to auditory human emotion detection. However, we found it very interesting to consider both auditory and visual information together for processing, since we expect this kind of multimodal information processing to become the norm in the future multimedia era. Through several intensive subjective evaluation studies, we found that human beings recognize Anger, Happiness, Surprise and Dislike better by their visual appearance than by voice-only detection. When the audio track of each emotion clip was dubbed with a different type of auditory emotional expression, Anger, Happiness and Surprise remained video dominant. However, the Dislike emotion gave mixed responses for different speakers. In both studies we found that the Sadness and Fear emotions were audio dominant. As a conclusion to the paper, we propose a method of facial emotion detection using a hybrid approach, which uses multimodal information for facial emotion recognition.

  • Motion-Compensated Prediction Method Based on Perspective Transform for Coding of Moving Images

    Atsushi KOIKE  Satoshi KATSUNO  Yoshinori HATORI  

     
    PAPER

    Vol: E79-B No:10, Page(s): 1443-1451

    The hybrid image coding method is one of the most promising methods for efficient coding of moving images. The method jointly makes use of motion-compensated prediction and an orthogonal transform like the DCT. This type of coding scheme was adopted as a basic framework in several world standards such as H.261 and MPEG in ITU-T and ISO [1], [2]. Most of the work done in motion-compensated prediction has been based on block matching. However, when input moving images include complicated motion like rotation or enlargement, block matching often causes block distortion in decoded images, especially in the case of very low bit-rate image coding. Recently, as one way of solving this problem, motion-compensated prediction methods based on an affine transform or bilinear transform were developed [3]-[8]. These methods, however, cannot always express the appearance of motion in the image plane, which is a plane projected from 3-D space onto a 2-D plane, since a perspective transform is usually assumed. A motion-compensation method using a perspective transform was also discussed in Ref. [6]; since its motion detection method is defined as an extension of block matching, it cannot always detect motion parameters as accurately as gradient-based motion detection. In this paper, we propose a new motion-compensated prediction method for the coding of moving images, especially for very low bit-rate image coding such as below 64 kbit/s. The proposed method is based on a perspective transform and the constraint principle for the temporal and spatial gradients of pixel values, and complicated motion in the image plane, including rotation and enlargement due to camera zooming, can also be detected theoretically in addition to translational motion. A computer simulation was performed using moving test images, and the resulting predicted images were compared with conventional methods such as block matching using the criteria of SNR and entropy.
    The results showed that the SNR and entropy of the proposed method are better than those of conventional methods. The proposed method was also applied to very low bit-rate image coding at 16 kbit/s and compared with a conventional method, H.261. The resulting SNR and decoded images of the proposed method were better than those of H.261. We conclude that the proposed method is effective as a motion-compensated prediction method.
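For comparison, the conventional block matching baseline referred to above can be sketched as an exhaustive SAD search; the block size and search range below are assumed values.

```python
import numpy as np

def block_match(prev, curr, y, x, block=8, search=4):
    """Exhaustive-search block matching: find the displacement (dy, dx)
    into `prev` of the block at (y, x) in `curr` that minimizes the
    sum of absolute differences (SAD)."""
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue  # candidate block falls outside the frame
            cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(cand - ref).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

A single translation per block is exactly the limitation the paper's perspective-transform model addresses: rotation and zoom cannot be represented by one (dy, dx) pair.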

  • Interactive Model-Based Coding of Facial Image Sequence with a New Motion Detection Algorithm

    Kazuo OHZEKI  Takahiro SAITO  Masahide KANEKO  Hiroshi HARASHIMA  

     
    PAPER

    Vol: E79-B No:10, Page(s): 1474-1483

    To make model-based coding a practical method, new signal processing techniques other than fully-automatic image recognition should be studied. Even after model-based coding is realized, further signal processing techniques to improve its performance should be studied. Moreover, non-coding functions related to model-based coding can be embedded as additional features. The authors are studying interactive model-based coding in order to achieve its practical realization, improve its performance and extend related non-coding functions. We have already proposed the basic concept of interactive model-based coding and presented eyeglasses processing for a facial image with glasses, removing the frames to improve model-based coding performance. In this paper, we focus on the 3-D motion detection algorithm in interactive model-based coding. Previous works were mainly based on iterative methods for solving non-linear equations. A new motion detection algorithm is developed for interactive model-based coding. It is linear because the interactive operation generates more information and the environment of the applications limits the range of parameters. The depth parameter is first obtained from the fact that a line segment is invariant under 3-D space transformation; the relation of the distance between two points is utilized. The number of conditions is larger than the number of unknown variables, which allows the least-squares method to be used to obtain stable solutions in the environment of the applications. Experiments are carried out using the proposed motion detection method, and input noise problems are removed. The synthesized wireframe, modified by eight parameters, provides smooth and natural motion.

  • Very Low Bit-Rate Coding Based on Wavelet, Edge Detection, and Motion Interpolation/Extrapolation

    Zhixiong WU  Toshifumi KANAMARU  

     
    PAPER

    Vol: E79-B No:10, Page(s): 1434-1442

    For very low bit-rate video coding, such as under 64 kbps, it is unreasonable to encode and transmit all the information. Thus, it is very important to choose the "important" information and encode it efficiently. In this paper, we first propose an image separation-composition method to solve this problem. At the encoder, an image is separated into a low-frequency part and two (horizontal and vertical) edge parts, which are considered "important" information for human visualization. The low-frequency part is encoded using block DCT and linear quantization. The edges are selected by their values and encoded using chain coding to retain the most important parts for human visualization. At the decoder, the image is reconstructed by first generating the high-frequency parts from the horizontal and vertical edge parts, respectively, and then applying the inverse wavelet transform to the low-frequency and high-frequency parts. This composition algorithm has lower computational complexity than conventional analysis/synthesis algorithms because it is not based on an iterative approach. Moreover, to reduce the temporal redundancy efficiently, we propose a hierarchical motion detection and a motion interpolation/extrapolation algorithm. We detect motion vectors and motion regions between two reconstructed images and then predict the motion vectors of the current image from the previously detected motion vectors and motion regions by using interpolation/extrapolation both at the encoder and at the decoder. Therefore, it is unnecessary to transmit the motion vectors and motion regions. This algorithm reduces not only the temporal redundancy but also the bit-rate for coding side information. Furthermore, because the motion detection is completely syntax independent, any type of motion detection can be used. We show simulation results of the proposed video coding algorithm at coding bit-rates down to 24 kbps and 10 kbps.
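The extrapolation half of the interpolation/extrapolation idea above can be sketched in its simplest linear form; running the same prediction at encoder and decoder is what removes the need to transmit vectors. A two-frame history is assumed.

```python
def extrapolate_mv(mv_prev, mv_prev2):
    """Linearly extrapolate the current motion vector from the two
    previously detected ones: v_t ≈ v_{t-1} + (v_{t-1} - v_{t-2}).
    Run identically at the encoder and the decoder, this prediction
    makes transmitting motion vectors unnecessary."""
    return tuple(2 * a - b for a, b in zip(mv_prev, mv_prev2))
```

For constant-velocity motion the prediction is exact; for accelerating motion only the (small) prediction residual affects quality, not the bit-rate, since nothing extra is sent.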