
Author Search Result

[Author] Hangyu LI (2 hits)

  • Cultivating Listening Skills for Academic English Based on Strategy Object Mashups Approach

    Hangyu LI  Hajime KIRA  Shinobu HASEGAWA  

     
    PAPER-Educational Technology

    Publicized: 2016/03/22
    Vol: E99-D No:6
    Page(s): 1615-1625

    This paper aims to support the cultivation of proper cognitive skills for academic English listening. First, we identify several listening strategies that past research has shown to be effective for cultivating listening skills and build the corresponding strategy models. Based on these models, we design and develop various functional units as strategy objects, together with a mashup environment in which these units can be assembled to serve as a personal learning environment. We also attach the relevant listening strategies and tactics to each object so that learners become aware of the strategies and tactics they apply while learning. Both short-term and mid-term case studies were carried out, and the collected data showed several positive results and some interesting indications.
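
    As a purely illustrative sketch (not the authors' system), the strategy-object mashup idea can be pictured as composable units that a learner assembles into one page; all class and method names below are hypothetical.

        from dataclasses import dataclass, field

        @dataclass
        class StrategyObject:
            """A functional unit wrapping one listening strategy (hypothetical)."""
            name: str          # e.g. "Prediction" or "Note-taking" (illustrative)
            tactic_note: str   # the strategy/tactic text attached to the object

            def render(self, material: str) -> str:
                # A real strategy object would return an embeddable widget; this
                # sketch just returns a text block for the given learning material.
                return f"[{self.name}] {material}: {self.tactic_note}"

        @dataclass
        class MashupEnvironment:
            """Assembles selected strategy objects into a personal learning page."""
            objects: list = field(default_factory=list)

            def add(self, obj: StrategyObject) -> None:
                self.objects.append(obj)

            def build_page(self, material: str) -> str:
                return "\n".join(o.render(material) for o in self.objects)

        # Example: a learner composes two strategy objects for one lecture recording.
        env = MashupEnvironment()
        env.add(StrategyObject("Prediction", "Guess the topic from the title before listening."))
        env.add(StrategyObject("Note-taking", "Write down keywords while listening."))
        print(env.build_page("lecture01.mp3"))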

  • Robust Segmentation of Highly Dynamic Scene with Missing Data

    Yinhui ZHANG  Zifen HE  Changyu LIU  

     
    LETTER-Pattern Recognition

    Publicized: 2014/09/29
    Vol: E98-D No:1
    Page(s): 201-205

    Segmenting foreground objects from highly dynamic scenes with missing data is very challenging. We present a novel unsupervised segmentation approach that copes with extensive scene dynamics as well as a substantial amount of missing data in the scene. To make this possible, we first apply convex total-variation optimization to images with missing data for which a depletion mask is available. Inpainting the depleted images with total variation facilitates the detection of ambiguous objects in highly dynamic images, because it tends to yield object regions with improved grayscale contrast. We then use a conditional random field adapted to integrate both appearance and motion knowledge of the foreground objects. In this way, our approach segments foreground object instances while inpainting the highly dynamic scene with varying amounts of missing data in a coupled manner. We demonstrate the method on a very challenging dataset from the UCSD Highly Dynamic Scene Benchmarks (HDSB), compare it with two state-of-the-art unsupervised image-sequence segmentation algorithms, and provide quantitative and qualitative performance comparisons.
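
    As a minimal sketch (not taken from the paper) of the total-variation inpainting step described above: assume a grayscale image with values in [0, 1] and a boolean depletion mask marking the missing pixels; the function name and parameters are illustrative.

        import numpy as np

        def tv_inpaint(image, missing_mask, n_iters=500, step=0.1, eps=1e-6):
            """Fill pixels where missing_mask is True via smoothed total-variation flow."""
            u = image.astype(float)
            u[missing_mask] = u[~missing_mask].mean()   # neutral initialization
            for _ in range(n_iters):
                # Forward differences with a replicated last row/column.
                ux = np.diff(u, axis=1, append=u[:, -1:])
                uy = np.diff(u, axis=0, append=u[-1:, :])
                mag = np.sqrt(ux**2 + uy**2 + eps)      # smoothed gradient magnitude
                px, py = ux / mag, uy / mag             # normalized gradient field
                # Divergence via backward differences (adjoint of the forward gradient).
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                # TV flow: update only the depleted pixels, keep observed data fixed.
                u[missing_mask] += step * div[missing_mask]
            return np.clip(u, 0.0, 1.0)

        # Toy usage: restore a missing block in a synthetic gradient image.
        img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
        mask = np.zeros_like(img, dtype=bool)
        mask[20:30, 20:30] = True
        img[mask] = 0.0                                 # simulate depletion
        restored = tv_inpaint(img, mask)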