
Author Search Result

[Author] Koji TSUDA (2 hits)

  • Cartesian Kernel: An Efficient Alternative to the Pairwise Kernel

    Hisashi KASHIMA, Satoshi OYAMA, Yoshihiro YAMANISHI, Koji TSUDA

     
    PAPER

    Vol: E93-D No:10   Page(s): 2672-2679

    Pairwise classification has many applications, including network prediction, entity resolution, and collaborative filtering. The pairwise kernel has been proposed for these purposes by several research groups independently and has been used successfully in several fields. In this paper, we propose an efficient alternative which we call the Cartesian kernel. While the existing pairwise kernel (which we refer to as the Kronecker kernel) can be interpreted as the weighted adjacency matrix of the Kronecker product graph of two graphs, the Cartesian kernel can be interpreted as that of the Cartesian product graph, which is much sparser than the Kronecker product graph. We discuss the generalization bounds of the two pairwise kernels using eigenvalue analysis of the kernel matrices, and we also consider the N-wise extensions of the two pairwise kernels. Experimental results show that the Cartesian kernel is much faster than the Kronecker kernel while remaining competitive in predictive performance.
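
A minimal sketch of the relationship described in the abstract, assuming the standard graph-product interpretation: given base kernel matrices K1 and K2, the Kronecker pairwise kernel matrix is their Kronecker product, while the Cartesian kernel matrix is K1 ⊗ I + I ⊗ K2, which is considerably sparser. The function names and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kronecker_pairwise_kernel(K1, K2):
    """Pairwise kernel over pairs (a, b): k1(a, a') * k2(b, b').
    Weighted adjacency matrix of the Kronecker product graph."""
    return np.kron(K1, K2)

def cartesian_pairwise_kernel(K1, K2):
    """Pairwise kernel over pairs (a, b):
    k1(a, a') * [b == b'] + [a == a'] * k2(b, b').
    Weighted adjacency matrix of the Cartesian product graph,
    which is much sparser than the Kronecker product graph."""
    n1, n2 = K1.shape[0], K2.shape[0]
    return np.kron(K1, np.eye(n2)) + np.kron(np.eye(n1), K2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy positive semidefinite base kernels (illustrative only).
    X1, X2 = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
    K1, K2 = X1 @ X1.T, X2 @ X2.T
    Kk = kronecker_pairwise_kernel(K1, K2)   # (20, 20), dense
    Kc = cartesian_pairwise_kernel(K1, K2)   # (20, 20), mostly zeros
    print("nonzeros:", np.count_nonzero(Kk), "vs", np.count_nonzero(Kc))
```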

  • Fast Iterative Mining Using Sparsity-Inducing Loss Functions

    Hiroto SAIGO, Hisashi KASHIMA, Koji TSUDA

     
    PAPER-Pattern Recognition

    Vol: E96-D No:8   Page(s): 1766-1773

    Apriori-based mining algorithms enumerate frequent patterns efficiently, but the resulting large number of patterns makes it difficult to directly apply subsequent learning tasks. Recently, efficient iterative methods have been proposed for mining discriminative patterns for classification and regression. These methods iteratively execute a discriminative pattern mining algorithm and update example weights to emphasize examples that received large errors in the previous iteration. In this paper, we study a family of loss functions that induce sparsity on example weights. Most of the resulting example weights become zero, so we can eliminate those examples from discriminative pattern mining, leading to a significant decrease in search space and time. In computational experiments, we compare and evaluate various loss functions in terms of the amount of sparsity induced and the resulting speed-up obtained.
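
The sparsity mechanism can be sketched as follows: if example weights are taken from the (sub)gradient of a hinge-type loss, every example whose margin already exceeds the threshold receives a weight of exactly zero and can be dropped from the next pattern-mining call. This is only an illustration with one sparsity-inducing loss; the loss family, function names, and data below are assumptions, not the paper's method.

```python
import numpy as np

def hinge_example_weights(margins):
    """Weights from the subgradient of the hinge loss max(0, 1 - margin):
    weight 1 where margin < 1, exactly 0 otherwise."""
    return (margins < 1.0).astype(float)

def active_examples(y_true, scores):
    """Indices of examples with nonzero weight; only these need to be
    passed to the discriminative pattern miner in the next iteration."""
    margins = y_true * scores
    w = hinge_example_weights(margins)
    return np.flatnonzero(w > 0), w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.choice([-1.0, 1.0], size=1000)
    scores = y * rng.normal(loc=1.5, scale=1.0, size=1000)  # mostly correct
    idx, w = active_examples(y, scores)
    # Most weights are zero, so the search space for the next
    # pattern-mining call shrinks accordingly.
    print(f"active examples: {len(idx)} / {len(y)}")
```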