
Author Search Result

[Author] Tomohiro MASHITA (4 hits)

  • Calibration Method for Misaligned Catadioptric Camera

    Tomohiro MASHITA  Yoshio IWAI  Masahiko YACHIDA  

     
    PAPER-Camera Calibration
    Vol: E89-D No:7   Page(s): 1984-1993

    This paper proposes a calibration method for catadioptric camera systems consisting of a mirror whose reflecting surface is a surface of revolution and a perspective camera, as typified by HyperOmni Vision. The proposed method is based on conventional camera calibration and mirror posture estimation. Many camera calibration methods have been proposed, and during the last decade methods for catadioptric camera calibration have also appeared. The main problem with existing catadioptric camera calibration is that either the degrees of freedom of the mirror posture are limited or the accuracy of the estimated parameters is inadequate because of nonlinear optimization. Our method, in contrast, estimates all five degrees of freedom of the mirror posture and is free from the volatility of nonlinear optimization. The mirror posture has five degrees of freedom because the mirror surface is a surface of revolution. Our method uses the mirror boundary and estimates up to four candidate mirror postures by applying an extrinsic parameter calibration method based on conic fitting. Because the estimated mirror posture is not unique, we also propose a selection method for finding the best candidate. Using the conic-based analytical method avoids the initial value problem that arises in nonlinear optimization. We conducted experiments on synthesized and real images to evaluate the performance of our method, and we discuss its accuracy.
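
    The conic-fitting step mentioned in this abstract can be illustrated with a short sketch: fitting a general conic to sampled points of the projected mirror boundary by linear least squares. This is only a minimal illustration of the idea, not the authors' implementation; the point format, the normalization, and the fit_conic name are assumptions.

```python
import numpy as np

def fit_conic(points):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2D boundary points by linear least squares (SVD).

    points: (N, 2) array of image coordinates of the mirror boundary.
    Returns the conic coefficients (a, b, c, d, e, f), normalized.
    """
    x, y = points[:, 0], points[:, 1]
    # Design matrix: each row is [x^2, xy, y^2, x, y, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The right singular vector for the smallest singular value minimizes
    # the algebraic fitting error subject to a unit-norm coefficient vector.
    _, _, Vt = np.linalg.svd(D)
    conic = Vt[-1]
    return conic / np.linalg.norm(conic)

# Example: noisy samples of an ellipse standing in for a mirror boundary.
theta = np.linspace(0, 2 * np.pi, 200)
boundary = np.column_stack([320 + 150 * np.cos(theta),
                            240 + 100 * np.sin(theta)])
boundary += np.random.normal(scale=0.5, size=boundary.shape)
print(fit_conic(boundary))
```

    In the full method, the fitted conic would then be combined with the known mirror profile and the camera intrinsics to recover the candidate mirror postures described in the abstract.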

  • Solving 3D Container Loading Problems Using Physics Simulation for Genetic Algorithm Evaluation

    Shuhei NISHIYAMA  Chonho LEE  Tomohiro MASHITA  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2021/08/06   Vol: E104-D No:11   Page(s): 1913-1922

    In this work, an optimization method for the 3D container loading problem with multiple constraints is proposed. The method consists of a genetic algorithm that generates cargo arrangements and a fitness evaluation that uses a physics simulation. The fitness function considers not only maximization of the container density but also several constraints such as the weight, stackability, fragility, and orientation of the cargo pieces. We employ a container shaking simulation in the fitness evaluation to include constraint effects during loading and transportation. We verified that the proposed method successfully provides an optimal cargo arrangement for small-scale problems with about 10 pieces of cargo.
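
    As a rough sketch of the pipeline described above (not the authors' code), a genetic algorithm can evolve loading orders while delegating constraint effects to a physics-style evaluation. The cargo representation, the genetic operators, and the placeholder simulate_shaking routine below are assumptions for illustration; a real implementation would run an actual rigid-body shaking simulation.

```python
import random

# Hypothetical cargo pieces: (width, depth, height, weight, fragile)
CARGO = [(random.randint(1, 4), random.randint(1, 4),
          random.randint(1, 4), random.uniform(1, 20),
          random.random() < 0.3) for _ in range(10)]
CONTAINER_VOLUME = 10 * 10 * 10

def simulate_shaking(order):
    """Placeholder for the physics simulation: in the paper a shaking
    simulation penalizes arrangements that violate weight, stackability,
    fragility, or orientation constraints. Here we only return a dummy
    penalty so the GA loop is runnable."""
    penalty = 0.0
    for position, idx in enumerate(order):
        weight = CARGO[idx][3]
        # Toy rule: heavy items loaded late (stacked high) are penalized.
        penalty += weight * position / len(order)
    return penalty

def fitness(order):
    # Toy density term (all pieces are assumed to fit in this sketch),
    # minus a penalty from the simulated shaking.
    packed_volume = sum(c[0] * c[1] * c[2] for c in CARGO)
    density = packed_volume / CONTAINER_VOLUME
    return density - 0.01 * simulate_shaking(order)

def crossover(a, b):
    """Order crossover keeping each cargo index exactly once."""
    cut = random.randint(1, len(a) - 1)
    head = a[:cut]
    return head + [g for g in b if g not in head]

def mutate(order, rate=0.1):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(CARGO)), len(CARGO)) for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(20)]
print("best loading order:", max(population, key=fitness))
```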

  • Feature Description with Feature Point Registration Error Using Local and Global Point Cloud Encoders

    Kenshiro TAMATA  Tomohiro MASHITA  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2021/10/11   Vol: E105-D No:1   Page(s): 134-140

    A typical approach to reconstructing a 3D environment model is to scan the environment with a depth sensor and fit the accumulated point cloud to 3D models. A general 3D environment reconstruction application of this kind assumes temporally continuous scanning. However, in some practical uses this assumption does not hold, so a point cloud matching method that stitches several non-continuous 3D scans is required. Point cloud matching often includes errors in feature point detection because a point cloud is a sparse sampling of the real environment and may contain non-negligible quantization errors. Moreover, depth sensors tend to produce errors that depend on the reflective properties of the observed surface. We therefore assume that feature point pairs between two point clouds include registration errors. In this work, we propose a feature description method that is robust to such feature point registration errors. To achieve this goal, we designed a deep-learning-based feature description model that combines a local feature description around each feature point with a global feature description of the entire point cloud. To make the description robust to registration error, we input feature point pairs with errors and train the model with metric learning. Experimental results show that our model correctly estimates whether a feature point pair is close enough to be considered a match even when the registration errors are large, and that it achieves higher accuracy than methods such as FPFH or 3DMatch. In addition, we conducted experiments on combinations of input point clouds (local only, global only, and both) and encoders.
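
    A minimal sketch of the local-plus-global descriptor idea trained with metric learning is given below, using PyTorch and simple PointNet-style encoders. The encoder sizes, the contrastive loss, and all tensor shapes are assumptions for illustration; the paper's actual architecture and training setup may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointNetEncoder(nn.Module):
    """Minimal PointNet-style encoder: a shared per-point MLP followed by
    max pooling over the points, yielding one fixed-size vector."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim))

    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # (B, out_dim)

class LocalGlobalDescriptor(nn.Module):
    """Concatenates a local patch encoding with a whole-cloud encoding and
    maps the result to the final descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.local_enc = PointNetEncoder(dim)
        self.global_enc = PointNetEncoder(dim)
        self.head = nn.Linear(2 * dim, dim)

    def forward(self, local_pts, global_pts):
        feat = torch.cat([self.local_enc(local_pts),
                          self.global_enc(global_pts)], dim=-1)
        return F.normalize(self.head(feat), dim=-1)

def contrastive_loss(desc_a, desc_b, is_match, margin=0.5):
    """Pull descriptors of matching pairs together; push non-matching
    pairs apart by at least `margin`."""
    d = (desc_a - desc_b).norm(dim=-1)
    return (is_match * d.pow(2) +
            (1 - is_match) * F.relu(margin - d).pow(2)).mean()

# One toy training step on random data (shapes are assumptions).
model = LocalGlobalDescriptor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
local_a, local_b = torch.randn(8, 256, 3), torch.randn(8, 256, 3)
cloud_a, cloud_b = torch.randn(8, 4096, 3), torch.randn(8, 4096, 3)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(model(local_a, cloud_a), model(local_b, cloud_b), labels)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```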

  • Improving Pointing Direction Estimation by Considering Hand- and Ocular-Dominance

    Tomohiro MASHITA  Koichi SHINTANI  Kiyoshi KIYOKAWA  

     
    PAPER-Human-computer Interaction
    Publicized: 2020/07/20   Vol: E103-D No:10   Page(s): 2168-2177

    This paper introduces a user study on the effects of hand and ocular dominance on pointing gestures. The results of this study are applicable to designing new gesture interfaces that are close to a user's cognition, intuitive, and easy to use. The user study investigates the relationship between the participants' dominances and their pointing gestures. Four participant groups (right-handed right-eye dominant, right-handed left-eye dominant, left-handed right-eye dominant, and left-handed left-eye dominant) were prepared, and participants were asked to point at targets on a screen with their left and right hands. The pointing errors of the different participant groups were calculated and compared. The results show that using the dominant eye produces better accuracy than using the non-dominant eye, and that accuracy increases when the target is located on the same side as the dominant eye. Based on these properties, a method to find the dominant eye for pointing gestures is proposed. This method can identify an individual's dominant eye with more than 90% accuracy.
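
    The eye-based pointing model underlying this study can be sketched as follows: cast a ray from each eye through the fingertip, intersect it with the screen, and report the eye whose ray lands closer to the targets on average as dominant. The coordinate setup, data layout, and function names below are assumptions for illustration, not the study's actual procedure.

```python
import numpy as np

def ray_screen_intersection(eye, fingertip, screen_z=0.0):
    """Intersect the ray from an eye through the fingertip with the screen
    plane z = screen_z, returning the 2D point on the screen."""
    direction = fingertip - eye
    t = (screen_z - eye[2]) / direction[2]
    hit = eye + t * direction
    return hit[:2]

def estimate_dominant_eye(trials):
    """Each trial is (left_eye, right_eye, fingertip, target_2d), all in a
    common coordinate system with the screen at z = 0 (an assumed setup).
    The eye whose eye-to-fingertip ray lands closer to the targets on
    average is reported as dominant, following the property observed in
    the study."""
    errors = {"left": [], "right": []}
    for left_eye, right_eye, fingertip, target in trials:
        errors["left"].append(np.linalg.norm(
            ray_screen_intersection(left_eye, fingertip) - target))
        errors["right"].append(np.linalg.norm(
            ray_screen_intersection(right_eye, fingertip) - target))
    return min(errors, key=lambda eye: np.mean(errors[eye]))

# Toy example with a synthetic right-eye-dominant participant.
rng = np.random.default_rng(0)
trials = []
for _ in range(20):
    target = rng.uniform(-0.3, 0.3, size=2)
    right_eye = np.array([0.03, 0.0, 0.6])
    left_eye = np.array([-0.03, 0.0, 0.6])
    # Fingertip placed on the right-eye/target line, plus a little noise.
    fingertip = right_eye + 0.5 * (np.append(target, 0.0) - right_eye)
    fingertip += rng.normal(scale=0.002, size=3)
    trials.append((left_eye, right_eye, fingertip, target))
print(estimate_dominant_eye(trials))  # expected: "right"
```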