Author Search Result

[Author] Takao YAMANAKA (2 hits)
  • Fundamental Study of Odor Recorder Using Inkjet Devices for Low-Volatile Scents

    Takamichi NAKAMOTO  Hidehiko TAKIGAWA  Takao YAMANAKA  

    PAPER-Bioelectronic and Sensor
    Vol: E87-C  No: 12  Page(s): 2081-2086

    A smell reproduction technique is useful in the field of virtual reality. We have developed a system called an odor recorder, which reproduces a smell recorded with an odor-sensing technique. We propose a new type of odor recorder that uses inkjet devices together with a mesh heater. Droplets of tiny volume are forcibly evaporated to generate smells rapidly and reproducibly. Moreover, the mesh heater is directly connected to the sensors without plumbing tubes, and the sensors are placed away from the wall of the sensor cell. The recording time for an odor with high odor intensity became much shorter than that of the previous system. The recipe of a jasmine scent composed of benzyl acetate and ylang-ylang was then successfully determined using the proposed system.
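    The abstract describes determining a scent "recipe" (mixing ratios of odor components) from sensor responses. A minimal sketch of that idea, assuming a simple linear mixing model over a sensor array (the paper's actual recording algorithm is not specified here; the function and data below are illustrative assumptions):

    ```python
    import numpy as np

    def estimate_recipe(component_responses, target_response):
        """Estimate non-negative mixing ratios r so that A @ r approximates
        the target sensor response, under an assumed linear mixing model."""
        A = np.asarray(component_responses, dtype=float)  # (sensors, components)
        y = np.asarray(target_response, dtype=float)      # (sensors,)
        r, *_ = np.linalg.lstsq(A, y, rcond=None)         # least-squares fit
        return np.clip(r, 0.0, None)                      # ratios cannot be negative

    # Example: two components (e.g. benzyl acetate, ylang-ylang) on 4 sensors,
    # with a synthetic target generated from a known recipe [0.7, 0.3]
    A = np.array([[1.0, 0.2], [0.5, 0.8], [0.3, 0.6], [0.9, 0.1]])
    target = A @ np.array([0.7, 0.3])
    print(estimate_recipe(A, target))   # recovers approximately [0.7, 0.3]
    ```

    In the actual system the fit would be iterative, driven by live sensor feedback as droplets are evaporated, rather than a one-shot solve.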

  • Multi-Scale Estimation for Omni-Directional Saliency Maps Using Learnable Equator Bias

    Takao YAMANAKA  Tatsuya SUZUKI  Taiki NOBUTSUNE  Chenjunlin WU  

    PAPER-Image Recognition, Computer Vision
    Publicized: 2023/07/19  Vol: E106-D  No: 10  Page(s): 1723-1731

    Omni-directional images have been used in a wide range of applications, including virtual/augmented reality, self-driving cars, robotics simulators, and surveillance systems. For these applications, it would be useful to estimate saliency maps, which represent probability distributions of gazing points with a head-mounted display, to detect important regions in the omni-directional images. This paper proposes a novel saliency-map estimation model for omni-directional images that extracts overlapping 2-dimensional (2D) plane images from the omni-directional images at various directions and angles of view. While 2D saliency maps tend to have high probability at the center of the image (center bias), the high-probability region appears at horizontal directions in omni-directional saliency maps when a head-mounted display is used (equator bias). Therefore, a 2D saliency model with a center-bias layer was fine-tuned on an omni-directional dataset by replacing the center-bias layer with an equator-bias layer conditioned on the elevation angle used for the extraction of each 2D plane image. The limited availability of omni-directional images in saliency datasets can thus be compensated by the well-established 2D saliency model pretrained on a large number of training images with ground-truth 2D saliency maps. In addition, this paper proposes a multi-scale estimation method that extracts 2D images at multiple angles of view to detect objects of various sizes with variable receptive fields. The saliency maps estimated from the multiple angles of view are integrated using pixel-wise attention weights, calculated in an integration layer, which weight the optimal scale for each object. The proposed method was evaluated on a publicly available dataset with evaluation metrics for omni-directional saliency maps, and it was confirmed that the proposed method improves the accuracy of the estimated saliency maps.
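    The integration step above combines saliency maps from multiple angles of view with pixel-wise attention weights. A minimal sketch, assuming the weight logits come from a learned integration layer (here simply taken as given); the function, shapes, and values are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def integrate_saliency(maps, logits):
        """Combine per-scale saliency maps with pixel-wise softmax attention.

        maps, logits: arrays of shape (scales, H, W).
        Returns an (H, W) map in which each pixel is a convex combination of
        the scales, so the best-matching receptive field dominates per pixel."""
        maps = np.asarray(maps, dtype=float)
        logits = np.asarray(logits, dtype=float)
        w = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
        w /= w.sum(axis=0, keepdims=True)                       # over the scale axis
        return (w * maps).sum(axis=0)

    # Two scales on a 2x2 grid; equal logits reduce to a simple average
    maps = np.stack([np.full((2, 2), 0.2), np.full((2, 2), 0.6)])
    out = integrate_saliency(maps, np.zeros_like(maps))
    print(out)   # every pixel is 0.4
    ```

    In the full model the logits would depend on the image content at each pixel, so large objects draw weight from wide angles of view and small objects from narrow ones.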