Recognition of Moving Object in High Dynamic Scene for Visual Prosthesis

Fei GUO, Yuan YANG, Yang XIAO, Yong GAO, Ningmei YU


Summary:

Currently, the visual percepts generated by visual prostheses are of low resolution, with unruly color and restricted grayscale. This severely restricts the ability of prosthetic implant recipients to complete visual tasks in daily scenes. Some studies have explored existing image processing techniques to improve the perception of objects in prosthetic vision. However, most of them extract moving objects and optimize the visual percepts only in general dynamic scenes, so the application of visual prostheses in highly dynamic daily-life scenes remains greatly limited. Hence, in this study, a novel unsupervised moving object segmentation model is proposed to automatically extract moving objects in high dynamic scenes. In this model, foreground cues based on spatiotemporal edge features and background cues based on a boundary prior are exploited, and a moving object proximity map is generated in the dynamic scene according to a manifold ranking function. Moreover, the foreground and background cues are ranked simultaneously, and the moving objects are extracted by integrating the two ranking maps. The evaluation experiment indicates that, compared with other methods, the proposed method uniformly highlights the moving object and preserves its boundaries well in high dynamic scenes. Based on this model, two optimization strategies are proposed to improve the perception of moving objects under simulated prosthetic vision. Experimental results demonstrate that introducing optimization strategies based on the moving object segmentation model can efficiently segment and enhance moving objects in high dynamic scenes, and significantly improve the recognition performance of moving objects for the blind.
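The manifold ranking step described in the summary can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the standard closed-form graph-based manifold ranking solution f* = (D − αW)⁻¹y on a toy chain graph, where W is the affinity matrix, D its degree matrix, and y a query indicator vector (all variable names here are hypothetical):

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking: f* = (D - alpha*W)^{-1} y,
    where D is the degree matrix of the affinity matrix W."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# Toy graph: a chain of 6 nodes standing in for superpixels,
# with unit affinity between neighbours.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

# Query the first node, playing the role of a boundary-prior
# background seed: y marks the query nodes.
y = np.zeros(n)
y[0] = 1.0
f = manifold_ranking(W, y)

# Ranking scores decay with graph distance from the query, so
# inverting the normalized scores gives an object-proximity map:
# nodes far from the background seed score as more foreground-like.
saliency = 1.0 - f / f.max()
```

Integrating a foreground-seeded and a background-seeded ranking map, as the summary describes, would then amount to combining two such `saliency` vectors computed from the respective query sets.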

Publication
IEICE TRANSACTIONS on Information Vol.E102-D No.7 pp.1321-1331
Publication Date
2019/07/01
Publicized
2019/04/17
Online ISSN
1745-1361
DOI
10.1587/transinf.2018EDP7405
Type of Manuscript
PAPER
Category
Human-computer Interaction

Authors

Fei GUO
  Xi'an University of Technology
Yuan YANG
  Xi'an University of Technology
Yang XIAO
  Xi'an University of Technology
Yong GAO
  Xi'an University of Technology
Ningmei YU
  Xi'an University of Technology

Keyword