
Author Search Result

[Author] Masakazu MURAMATSU (3 hits)

1-3 of 3 hits
  • Network Congestion Minimization Models Based on Robust Optimization

    Bimal CHANDRA DAS  Satoshi TAKAHASHI  Eiji OKI  Masakazu MURAMATSU  

     
    PAPER-Network

    Publicized: 2017/09/14
    Vol: E101-B No:3
    Page(s): 772-784

    This paper introduces robust optimization models for minimizing the network congestion ratio that can handle fluctuations in the traffic demands between nodes. The simplest and most widely used model for minimizing the congestion ratio, called the pipe model, is based on precisely specified traffic demands. In practice, however, network operators are often unable to estimate traffic demands exactly, as they fluctuate due to unpredictable factors. To overcome this weakness, we apply robust optimization to the problem of minimizing the network congestion ratio. First, we review existing models as robust counterparts of certain uncertainty sets. Then we consider robust optimization assuming ellipsoidal uncertainty sets and derive a tractable optimization problem in the form of a second-order cone programming (SOCP) problem. Furthermore, we take the uncertainty sets to be intersections of ellipsoidal and polyhedral sets and, by considering the mirror subproblems inherent in the models, again obtain tractable optimization problems in SOCP form. Compared with the previous model, which assumes an error interval on each coordinate, our models have the advantage of coping with the total amount of error by setting a parameter that determines the volume of the ellipsoid. We perform numerical experiments to compare our SOCP models with the existing models, which are formulated as linear programming problems. The results demonstrate the relevance of our models in terms of both congestion ratio and computation time.
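
    As a rough illustration of the robust counterpart described above (not the paper's exact formulation), the worst case of each link load over an ellipsoidal demand set d = d_nom + P u with ||u||_2 <= rho reduces to a second-order cone constraint that an off-the-shelf conic modeling tool can handle. The sketch below is written in Python with CVXPY on a hypothetical triangle network with two fixed candidate paths per demand; the topology, capacities, nominal demands, ellipsoid parameters, and package choice are all assumptions for illustration only.

      import cvxpy as cp
      import numpy as np

      # Toy triangle network (hypothetical): links 0:(0,1), 1:(1,2), 2:(0,2).
      capacity = np.array([10.0, 10.0, 10.0])
      d_nom = np.array([4.0, 3.0, 5.0])        # nominal demands for pairs (0,1), (1,2), (0,2)
      P = 0.5 * np.eye(3)                      # ellipsoid shape: d = d_nom + P u, ||u||_2 <= rho
      rho = 1.0                                # radius that controls the volume of the uncertainty set

      # paths[k][p] = link indices used by candidate path p of demand k
      paths = [[[0], [2, 1]],                  # demand (0,1): direct, or via node 2
               [[1], [2, 0]],                  # demand (1,2): direct, or via node 0
               [[2], [0, 1]]]                  # demand (0,2): direct, or via node 1

      y = cp.Variable((3, 2), nonneg=True)     # y[k, p]: fraction of demand k routed on path p
      r = cp.Variable()                        # congestion ratio to be minimized
      constraints = [cp.sum(y, axis=1) == 1]

      for l in range(3):
          # g_l[k]: fraction of demand k that crosses link l (linear in y)
          g_l = cp.hstack([sum(y[k, p] for p in range(2) if l in paths[k][p]) for k in range(3)])
          # Worst-case load over the ellipsoid yields a second-order cone constraint:
          #   g_l^T d_nom + rho * ||P^T g_l||_2 <= r * capacity[l]
          constraints.append(g_l @ d_nom + rho * cp.norm(P.T @ g_l, 2) <= r * capacity[l])

      prob = cp.Problem(cp.Minimize(r), constraints)
      prob.solve()
      print("worst-case congestion ratio:", r.value)

    At the optimum, r bounds the congestion ratio for every demand vector in the ellipsoid, and the single parameter rho plays the role of the volume-controlling parameter mentioned in the abstract.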

  • Shift Quality Classifier Using Deep Neural Networks on Small Data with Dropout and Semi-Supervised Learning

    Takefumi KAWAKAMI  Takanori IDE  Kunihito HOKI  Masakazu MURAMATSU  

     
    PAPER-Pattern Recognition

    Publicized: 2023/09/05
    Vol: E106-D No:12
    Page(s): 2078-2084

    In this paper, we apply two machine-learning techniques, dropout and semi-supervised learning, to a recently proposed method called CSQ-SDL, which uses deep neural networks to evaluate shift quality from time-series measurement data. When a new Automatic Transmission (AT) is developed, calibration takes place in which many parameters of the AT are adjusted to realize a pleasant driving experience in all situations that occur on all roads around the world. Calibration requires an expert to visually assess the shift quality from the time-series measurement data of the experiments each time the parameters are changed, which is iterative and time-consuming. CSQ-SDL was developed to shorten the time consumed by this visual assessment, but its effectiveness depends on acquiring a sufficient number of data points; in practice, the available data are often insufficient. The methods proposed here handle such cases. For cases in which only a small number of labeled data points is available, we propose a method that uses dropout. For cases in which the number of labeled data points is small but a sufficient number of unlabeled data points is available, we propose a method that uses semi-supervised learning. Experiments show that while the former gives a moderate improvement, the latter offers a significant performance improvement.
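
    The abstract does not specify which semi-supervised scheme is used, so the sketch below only illustrates the two ingredients named above in a generic way: a small classifier with dropout layers trained on a handful of labeled samples, followed by a simple pseudo-labeling pass over unlabeled samples. It is written in Python with PyTorch; the feature dimension, network sizes, confidence threshold, and random placeholder data are hypothetical and do not reflect the CSQ-SDL architecture or the paper's training procedure.

      import torch
      import torch.nn as nn

      NUM_FEATURES, NUM_CLASSES = 64, 3        # hypothetical feature dimension and number of quality classes

      class SmallClassifier(nn.Module):
          def __init__(self, p_drop=0.5):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(NUM_FEATURES, 128), nn.ReLU(), nn.Dropout(p_drop),
                  nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p_drop),
                  nn.Linear(64, NUM_CLASSES),
              )

          def forward(self, x):
              return self.net(x)

      def train_step(model, optimizer, x, y):
          model.train()                        # dropout is active only in training mode
          optimizer.zero_grad()
          loss = nn.functional.cross_entropy(model(x), y)
          loss.backward()
          optimizer.step()
          return loss.item()

      # Random placeholders for a small labeled set and a larger unlabeled set.
      x_lab = torch.randn(30, NUM_FEATURES)
      y_lab = torch.randint(0, NUM_CLASSES, (30,))
      x_unlab = torch.randn(300, NUM_FEATURES)

      model = SmallClassifier()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)

      # Phase 1: supervised training with dropout on the small labeled set.
      for _ in range(200):
          train_step(model, opt, x_lab, y_lab)

      # Phase 2: pseudo-labeling, one common semi-supervised scheme: keep confident
      # predictions on unlabeled data and retrain on the enlarged set.
      model.eval()
      with torch.no_grad():
          probs = torch.softmax(model(x_unlab), dim=1)
          conf, pseudo = probs.max(dim=1)
      keep = conf > 0.9
      x_aug = torch.cat([x_lab, x_unlab[keep]])
      y_aug = torch.cat([y_lab, pseudo[keep]])
      for _ in range(200):
          train_step(model, opt, x_aug, y_aug)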

  • Implementation Issues of Second-Order Cone Programming Approaches for Support Vector Machine Learning Problems

    Rameswar DEBNATH  Masakazu MURAMATSU  Haruhisa TAKAHASHI  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E92-A No:4
    Page(s): 1209-1222

    The core of the support vector machine (SVM) problem is a quadratic programming problem with a linear constraint and bounded variables. This problem can be transformed into second-order cone programming (SOCP) problems. An interior-point method (IPM) can be designed for the SOCP problems that is efficient in both storage requirements and computational complexity if the kernel matrix has low rank. If the kernel matrix does not have low rank, it can be approximated by a low-rank positive semi-definite matrix, which is then fed into the optimizer. In this paper we present two SOCP formulations for each of the SVM classification and regression problems. Several search direction methods exist for implementing SOCPs; our main goal is to find a better search direction for implementing the SOCP formulations of the SVM problems. Two popular search direction methods, HKM and AHO, are analyzed for the SVM problems and implemented efficiently. The computational cost per iteration of the HKM and AHO search directions is shown to be the same for the SVM problems, so the training time depends on the number of IPM iterations. Our experimental results show that the HKM method converges faster than the AHO method. We also compare our results with the method of Fine and Scheinberg (2001), which likewise exploits the low rank of the kernel matrix, and with the state-of-the-art SVM optimization software packages SVMTorch and SVMlight. The proposed methods are also compared with Joachims' 'Linear SVM' method on the linear kernel.
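
    As a minimal sketch of the general recipe described above (not the paper's two formulations or its HKM/AHO interior-point implementation), the SVM dual can be recast so that the quadratic term enters through a rotated second-order cone constraint once the kernel matrix is factored as K ≈ G G^T. The Python/CVXPY example below uses a toy linear-kernel data set, for which G = X is exact; all data and parameters are assumptions for illustration.

      import cvxpy as cp
      import numpy as np

      # Toy two-class data (hypothetical); with a linear kernel, K = X X^T factors exactly with G = X.
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(-1.0, 1.0, (20, 2)), rng.normal(1.0, 1.0, (20, 2))])
      y = np.hstack([-np.ones(20), np.ones(20)])
      G = X                                    # low-rank factor of the kernel matrix
      C = 1.0                                  # box bound on the dual variables
      n = X.shape[0]

      alpha = cp.Variable(n)                   # dual variables
      s = cp.Variable()                        # epigraph variable for the quadratic term

      # Dual SVM with the quadratic pushed into a rotated second-order cone constraint:
      #   minimize   s - sum(alpha)
      #   subject to ||G^T (y * alpha)||^2 <= 2 s,  0 <= alpha <= C,  y^T alpha = 0
      objective = cp.Minimize(s - cp.sum(alpha))
      constraints = [
          cp.sum_squares(G.T @ cp.multiply(y, alpha)) <= 2 * s,
          alpha >= 0, alpha <= C,
          y @ alpha == 0,
      ]
      prob = cp.Problem(objective, constraints)
      prob.solve()                             # CVXPY passes the SOCP to whichever conic solver is installed

      w = X.T @ (y * alpha.value)              # primal weight vector recovered for the linear kernel
      print("dual objective:", -prob.value, " w:", w)

    The generic interior-point solver invoked here stands in for the tailored low-rank IPM and the HKM/AHO search directions studied in the paper; only the SOCP reformulation itself is being illustrated.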