Author Search Result

[Author] Shigeru KURIYAMA (4 hits)

Results 1-4 of 4
  • Generating Concise Rules for Human Motion Retrieval

    Tomohiko MUKAI  Ken-ichi WAKISAKA  Shigeru KURIYAMA  

     
    PAPER-Computer Graphics

    Vol: E93-D No:6    Page(s): 1636-1643

    This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts a motion clip into a clausal language that represents the geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming (ILP), which automatically discovers the essential rule in the same clausal form through a user-defined hypothesis-testing procedure. All motions are indexed with this clausal language, and the desired clips are retrieved by subsequence matching against the rule. Such rule-based retrieval offers reasonable performance, and the rule can be intuitively edited in the same language. Consequently, our method enables efficient and flexible search over a large dataset with a simple query language.
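
    A minimal sketch of the rule-based subsequence matching described above, under simplified assumptions (not the authors' implementation): each frame is indexed as a set of clausal predicates, and a retrieval rule is a sequence of predicate sets that must occur in order. The predicate names are hypothetical, and the ILP learning of the rule itself is not shown.

      # Hypothetical clausal index: one set of predicates per frame of a motion clip.
      def match_rule(rule, motion_index):
          """Return the start frames at which the rule's steps occur in order."""
          hits = []
          for start in range(len(motion_index)):
              pos = start
              for required in rule:
                  # advance until a frame satisfies every predicate of this step
                  while pos < len(motion_index) and not required <= motion_index[pos]:
                      pos += 1
                  if pos == len(motion_index):
                      break
                  pos += 1
              else:
                  hits.append(start)
          return hits

      # Toy usage with hypothetical predicates.
      clip = [frozenset({"hand_above_head(right)"}),
              frozenset({"hand_above_head(right)", "knee_bent(left)"}),
              frozenset({"knee_bent(left)"})]
      rule = [frozenset({"hand_above_head(right)"}), frozenset({"knee_bent(left)"})]
      print(match_rule(rule, clip))  # [0, 1]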

  • Mimetic Code Using Successive Additive Color Mixture

    Shigeyuki KOMURO  Shigeru KURIYAMA  Takao JINNO  

     
    LETTER

    Vol: E98-D No:1    Page(s): 98-102

    Multimedia content can be enriched by introducing navigation with image codes readable by camera-mounted mobile devices such as smartphones. Data-hiding technologies have been used to embed such codes inconspicuously, which reduces the esthetic damage to visual media. This article proposes a method of embedding two-dimensional codes into images based on successive additive color mixture in the blue-color channel. The technique makes the colors of the codes mimic those of the cover image while preserving their readability for current general-purpose image sensors.
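
    A generic blue-channel embedding sketch, for illustration only (it is not the paper's successive additive color mixture, and the block size and embedding strength are assumed): it shows how a two-dimensional code can modulate one color channel while leaving the cover image visually similar.

      import numpy as np

      def embed_code(image, code, cell=8, strength=6):
          """image: HxWx3 uint8 RGB array; code: 2-D array of 0/1 bits.
          Each code cell nudges the blue channel of one image block up or down."""
          out = image.astype(np.int16)
          for i in range(code.shape[0]):
              for j in range(code.shape[1]):
                  shift = strength if code[i, j] else -strength
                  out[i*cell:(i+1)*cell, j*cell:(j+1)*cell, 2] += shift  # blue channel
          return np.clip(out, 0, 255).astype(np.uint8)

      # Toy usage: embed a random 4x4 code into a gray 32x32 cover image.
      cover = np.full((32, 32, 3), 128, dtype=np.uint8)
      bits = np.random.randint(0, 2, size=(4, 4))
      stego = embed_code(cover, bits)

    A camera-side decoder would compare block statistics rather than the original image; the paper's contribution of making the embedded colors mimic the cover image is not attempted in this sketch.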

  • Extensible Task Simulation with Motion Archive

    Shigeru KURIYAMA  Tomohiko MUKAI  Yusuke IRINO  Kazuyuki ANDA  Toyohisa KANEKO  

     
    PAPER

    Vol: E88-D No:5    Page(s): 809-815

    This paper proposes a new framework for producing humanoid animations that simulate human tasks. Natural working movements are generated by managing motion-capture data with our simulation package. An extensible middleware controls reactive human behaviors, and all simulation processes in a cyber factory are controlled through XML documents describing motions, scene objects, and behaviors. The package displays the simulation using Web3D technology and the X3D specification, which supplies a common interface for customizing cyberworlds.
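
    A minimal sketch of driving a task simulation from an XML document; the element and attribute names below are hypothetical and do not reflect the package's actual schema.

      import xml.etree.ElementTree as ET

      TASK_XML = """
      <task name="assemble_part">
        <motion clip="reach_shelf.bvh"/>
        <behavior trigger="part_dropped" motion="pick_up.bvh"/>
        <motion clip="place_part.bvh"/>
      </task>
      """

      def run_task(xml_text, play):
          """Walk the task document in order and hand each step to a playback callback."""
          task = ET.fromstring(xml_text)
          for step in task:
              if step.tag == "motion":
                  play(step.get("clip"))
              elif step.tag == "behavior":
                  # a reactive behavior would be registered with the event system
                  print("on", step.get("trigger"), "play", step.get("motion"))

      run_task(TASK_XML, play=lambda clip: print("play", clip))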

  • Estimation of Multiple Illuminant Colors Using Color Line Features

    Quan XIU HO  Takao JINNO  Yusuke UCHIMI  Shigeru KURIYAMA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2022/06/23    Vol: E105-D No:10    Page(s): 1751-1758

    The colors of objects in natural images are affected by the color of the lighting, so accurately estimating an illuminant's color is indispensable for analyzing scenes lit by colored light sources. Recent lighting environments have become more colorful with the spread of light-emitting diode (LED) lights whose colors can be controlled flexibly across the full visible spectrum. However, existing color estimation methods mainly address a single illuminant within a normal color range; estimating multiple illuminants of unusual colors, such as high-chroma blue or red, has not yet been studied. New color estimation methods are therefore needed for multiple illuminants of various colors. In this article, we propose a color estimation method for LED lighting using Color Line features, which regard the color distribution in a local area as a straight line. This local estimate is well suited to estimating the various colors of multiple illuminants. The features are sampled at many small regions in an image and aggregated into a few global colors using supervised learning with a convolutional neural network. We demonstrate that our method is more accurate than existing ones in such colorful lighting environments by producing an image dataset lit by multiple LED lights over a full color range.
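
    A minimal sketch of a per-patch Color Line feature, assuming the line is fit as the principal-component direction of the patch's RGB scatter; the CNN that aggregates patch features into a few global illuminant colors, as in the paper, is not shown.

      import numpy as np

      def color_line_direction(patch):
          """patch: (H, W, 3) float RGB values in [0, 1]; returns a unit 3-vector
          giving the principal direction of the patch's RGB scatter."""
          pixels = patch.reshape(-1, 3)
          centered = pixels - pixels.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          direction = vt[0]
          # orient consistently so directions from different patches are comparable
          return direction if direction.sum() >= 0 else -direction

      # Toy usage: a patch lit by a reddish light yields a red-leaning direction.
      rng = np.random.default_rng(0)
      reflectance = rng.uniform(0.2, 0.8, size=(16, 16, 1))
      patch = reflectance * np.array([1.0, 0.6, 0.5])  # hypothetical illuminant color
      print(color_line_direction(patch))  # roughly proportional to (1.0, 0.6, 0.5)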