
Author Search Result

[Author] Toshihide IBARAKI (17 hits)

  • Solving the Single-Vehicle Scheduling Problems for All Home Locations under Depth-First Routing on a Tree

    Hiroshi NAGAMOCHI  Koji MOCHIZUKI  Toshihide IBARAKI  

     
    PAPER

      Vol:
    E84-A No:5
      Page(s):
    1135-1143

    We consider a single-vehicle scheduling problem on a tree T, where each vertex has a job with a release time and a processing time, and each edge has a travel time. A single vehicle starts from a start vertex s and reaches a goal vertex g after finishing all jobs. In particular, s is called a home location if s = g. The objective of the problem is to find a depth-first routing on T that minimizes the completion time. In this paper, we first show that the minimum completion times of the problem for all home locations s ∈ V can be computed simultaneously in O(n) time, once the problem with a specified home location s ∈ V has been solved, where n is the number of vertices. We also show that, given a specified start vertex s, the minimum completion times for all goal vertices g can be computed in O(n) time.
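    The paper's O(n) results concern optimizing over all depth-first routings; as a toy complement (not the paper's algorithm), the sketch below merely evaluates the completion time of one fixed depth-first routing, assuming each job is processed on the vehicle's first visit and the child order is given. All data are hypothetical.

    ```python
    # A minimal sketch: completion time of one fixed depth-first routing
    # on a tree with release times r, processing times p, travel times.
    def completion_time(tree, r, p, travel, v, t, parent=None):
        """Time after serving v's subtree depth-first and returning to v."""
        t = max(t, r[v]) + p[v]          # wait for release, then process v's job
        for u in tree[v]:
            if u == parent:
                continue
            t = completion_time(tree, r, p, travel, u, t + travel[v, u], v)
            t += travel[v, u]            # travel back up to v
        return t

    # Toy instance: a star with home location 0.
    tree = {0: [1, 2], 1: [0], 2: [0]}
    r = {0: 0, 1: 4, 2: 0}; p = {0: 1, 1: 2, 2: 3}
    travel = {(0, 1): 1, (1, 0): 1, (0, 2): 2, (2, 0): 2}
    print(completion_time(tree, r, p, travel, 0, 0))   # 14 for this instance
    ```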

  • Parallel-Machine Scheduling Problem with Unit Processing Time When Jobs Have Ready and Due Times

    Toshihide IBARAKI  Hiroshi KISE  Hisashi MINE  

     
    PAPER-Computers

      Vol:
    E59-E No:7
      Page(s):
    1-6

    Let J = {1, 2, …, n} be a set of jobs. Each job j can be processed on one of m identical machines in one unit of time, and has a ready time and a due time. It is shown that the problem of finding an optimal schedule minimizing the sum of costs associated with the jobs j, where each cost is a monotone function of the completion time of j, can be reduced to an assignment problem with O(n) vertices; hence it can be solved in O(n^3) steps. As important special cases, this cost includes the weighted number of late jobs, the weighted tardiness, and the weighted flow time. In addition, it is shown that the problem has an O(n log n) algorithm if the objective is to minimize the (unweighted) number of late jobs. This problem sits on the boundary of efficient solvability in the sense that generalizing it in various directions easily results in NP-complete problems. As an example, it is shown that it becomes NP-complete if precedence constraints are imposed.
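    A minimal sketch of the flavor of this reduction, using SciPy's rectangular assignment solver: rows are jobs, columns are (machine, unit time slot) pairs, and an entry is the job's cost when completed in that slot (infeasible slots get a large cost). The integer ready times and the weighted-tardiness objective are toy choices; the paper handles any cost monotone in completion time and reduces to an assignment problem with only O(n) vertices, whereas this sketch naively uses nm columns.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    ready = [0, 0, 1, 2]; due = [2, 1, 3, 3]; weight = [1, 3, 1, 2]
    n, m = len(ready), 2
    slots = [(mach, t) for t in range(1, n + 1) for mach in range(m)]
    BIG = 10 ** 6

    # cost[j][k]: weighted tardiness of job j if it occupies slot k
    # (completing at time t), or BIG if the slot starts before j's ready time.
    cost = np.array([[weight[j] * max(0, t - due[j]) if t - 1 >= ready[j] else BIG
                      for (mach, t) in slots] for j in range(n)])
    rows, cols = linear_sum_assignment(cost)
    for j, k in zip(rows, cols):
        mach, t = slots[k]
        print(f"job {j}: machine {mach}, completes at time {t}")
    print("total weighted tardiness:", cost[rows, cols].sum())
    ```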

  • Reachability Problems of Random Digraphs

    Yushi UNO  Toshihide IBARAKI  

     
    PAPER-Graphs and Networks

      Vol:
    E81-A No:12
      Page(s):
    2694-2702

    Consider a random digraph G = (V, A), where |V| = n and each arc (u, v) is present in A with probability p(n), independently of the other arcs. We discuss the expected number of vertices reachable from a vertex, the expected size of the transitive closure of G, and related topics based on the properties of reachability, where the reachability from a vertex s to a vertex t is defined as the probability that t is reachable from s. Let γ_{n,p(n)} denote the reachability from s to t (s ≠ t) in the above random digraph G. (The case s = t requires a separate definition.) We first present a method of computing the exact value of γ_{n,p(n)} for given n and p(n). Since this method requires O(n^3) time, we then derive simple upper and lower bounds γ^U_{n,p(n)} and γ^L_{n,p(n)} on γ_{n,p(n)}, and in addition give a further upper bound on γ^U_{n,p(n)} which is easier to analyze but is still rather accurate. We then discuss the asymptotic behavior of this last bound and show that, if p(n) = α/(n-1), it converges as n → ∞ to one of the solutions of the equation 1 - x - e^{-αx} = 0. Furthermore, for the upper bounds on the expected number of reachable vertices and the expected size of the transitive closure of G, it turns out that the former converges to α/(1-α) if p(n) = α/(n-1) for 0 < α < 1 (and otherwise to either 0 or ∞), while the latter converges to α if p(n) = α/(n-1)^2 for α ≥ 0 (and otherwise to either 0 or ∞).
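    A small numerical sketch of the asymptotic claim, assuming α = 2 and p(n) = α/(n-1): solve 1 - x - e^{-αx} = 0 for its nonzero root with brentq and compare it against a Monte Carlo estimate of the reachability. Since the quantity converging to the root is an upper bound on γ_{n,p(n)}, the simulated value should stay below the root.

    ```python
    import math, random
    from collections import deque
    from scipy.optimize import brentq

    alpha, n, trials = 2.0, 1000, 10
    root = brentq(lambda x: 1 - x - math.exp(-alpha * x), 1e-9, 1.0)

    def reachability(n, p):
        """Fraction of vertices t != 0 reachable from vertex 0, arcs sampled lazily."""
        seen, queue = {0}, deque([0])
        while queue:
            u = queue.popleft()
            for v in range(n):               # each arc (u, v) examined at most once
                if v not in seen and random.random() < p:
                    seen.add(v); queue.append(v)
        return (len(seen) - 1) / (n - 1)

    est = sum(reachability(n, alpha / (n - 1)) for _ in range(trials)) / trials
    # The root is an asymptotic upper bound, so est should not exceed it.
    print(f"nonzero root: {root:.3f}, simulated reachability: {est:.3f}")
    ```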

  • FOREWORD

    Toshihide IBARAKI  Masafumi YAMASHITA  

     
    FOREWORD

      Vol:
    E83-D No:3
      Page(s):
    319-321
  • A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns

    Kazuya HARAGUCHI  Mutsunori YAGIURA  Endre BOROS  Toshihide IBARAKI  

     
    PAPER-Algorithm Theory

      Vol:
    E91-D No:3
      Page(s):
    781-788

    We consider a data set in which each example is an n-dimensional Boolean vector labeled as true or false. A pattern is a co-occurrence of a particular value combination of a given subset of the variables. If a pattern appears frequently in the true examples and infrequently in the false examples, we consider it a good pattern. In this paper, we discuss the problem of determining the data size needed for removing "deceptive" good patterns; in a data set of small size, many good patterns may appear superficially, simply by chance, independently of the underlying structure. Our hypothesis is that, in order to remove such deceptive good patterns, the data set should contain more examples than the number at which a random data set ceases to contain good patterns. We justify this hypothesis by computational studies. We also derive a theoretical upper bound on the needed data size in view of our hypothesis.
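    A toy Monte Carlo in the spirit of this hypothesis: in purely random labeled data every good pattern is deceptive, and the count of such patterns should fall toward zero as the number of examples grows. The dimension, the degree-2 patterns, and the frequency thresholds below are arbitrary choices, not the paper's.

    ```python
    import itertools, random

    def count_good_patterns(N, n=10, freq=0.4, infreq=0.1):
        """Count degree-2 patterns frequent in true and infrequent in false examples."""
        data = [([random.randint(0, 1) for _ in range(n)], random.randint(0, 1))
                for _ in range(N)]
        true_ex = [x for x, y in data if y == 1]
        false_ex = [x for x, y in data if y == 0]
        good = 0
        for i, j in itertools.combinations(range(n), 2):
            for a, b in itertools.product((0, 1), repeat=2):
                t = sum(x[i] == a and x[j] == b for x in true_ex)
                f = sum(x[i] == a and x[j] == b for x in false_ex)
                if t >= freq * len(true_ex) and f <= infreq * len(false_ex):
                    good += 1
        return good

    for N in (20, 100, 500, 2500):   # deceptive patterns vanish as N grows
        print(N, count_good_patterns(N))
    ```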

  • NP-Complete Diagnosis Problems on System Graphs

    Toshihide IBARAKI  Tsunehiko KAMEDA  Shunichi TOIDA  

     
    PAPER-Computers

      Vol:
    E62-E No:2
      Page(s):
    81-88

    Various minimization problems associated with the diagnosis of systems represented by directed graphs are shown to be NP-complete. These problems include (i) finding the minimum number of test points, test connections, and blocking gates, respectively, on a SEC graph (single entry single exit connected acyclic graph) to make it distinguishable, and (ii) finding a test set with the minimum number of tests to locate a faulty vertex on a SEC graph with test points, test connections, or blocking gates attached, respectively. The NP-completeness results indicate that these problems are all intractable in the sense that it is most unlikely that any algorithm can solve them in time polynomial in the problem size.

  • Collision Probability in an In-Line Equipment Model under Erlang Distribution

    Eishi CHIBA  Hiroshi FUJIWARA  Yoshiyuki SEKIGUCHI  Toshihide IBARAKI  

     
    PAPER

      Vol:
    E96-D No:3
      Page(s):
    400-407

    Flat Panel Displays (FPDs) are manufactured using many pieces of different processing equipment arranged sequentially in a line. Although the constant inter-arrival time (i.e., the tact time) of glass substrates in the line should be kept as short as possible, the collision probability between glass substrates increases as the tact time decreases. Since glass substrates are expensive and fragile, collisions should be avoided. In this paper, we derive a closed-form formula for the approximate collision probability of a model in which the processing time on each piece of equipment is assumed to follow an Erlang distribution. We also compare numerical results of the closed-form formula with computer simulation results of the collision probability.
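    A toy sketch, not the paper's closed form: for a single piece of equipment with no buffer, suppose a collision occurs exactly when a processing time exceeds the tact time. The Erlang tail probability then has a simple closed form, checked here against simulation; the shape, rate, and tact values are arbitrary.

    ```python
    import math, random

    k, lam, tact = 3, 2.0, 2.5          # Erlang(k, rate lam), tact time (toy values)

    # P(X > tact) for Erlang: e^{-lam*t} * sum_{i=0}^{k-1} (lam*t)^i / i!
    closed = math.exp(-lam * tact) * sum((lam * tact) ** i / math.factorial(i)
                                         for i in range(k))
    sim = sum(random.gammavariate(k, 1 / lam) > tact     # Erlang = Gamma(k, 1/lam)
              for _ in range(100000)) / 100000
    print(f"closed form: {closed:.4f}, simulated: {sim:.4f}")
    ```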

  • The Computational Complexity of the m-Center Problems on the Plane

    Shigeru MASUYAMA  Toshihide IBARAKI  Toshiharu HASEGAWA  

     
    PAPER-Miscellaneous

      Vol:
    E64-E No:2
      Page(s):
    57-64

    The m-center problem asks to place m objects on the plane so that the distance from each client (a point) to its closest object is at most a given number r. This problem is often encountered in locating facilities or resources of a geographically distributed system. This paper shows that the problem is NP-complete. The NP-completeness indicates its computational intractability, i.e., it is most unlikely that any algorithm can solve it in time polynomial in the problem size.

  • Approximability of the Minimum Maximal Matching Problem in Planar Graphs

    Hiroshi NAGAMOCHI  Yukihiro NISHIDA  Toshihide IBARAKI  

     
    PAPER-Graphs and Networks

      Vol:
    E86-A No:12
      Page(s):
    3251-3258

    Given an edge-weighted graph G, the minimum maximal matching problem asks to find a maximal matching of minimum weight. The problem is known to be NP-hard even if the graph is planar and unweighted. In this paper, we consider the problem in planar graphs. First, we prove a strong inapproximability result for the problem in weighted planar graphs. Second, in contrast with the first result, we show that a polynomial time approximation scheme (PTAS) for the problem in unweighted planar graphs can be obtained by a divide-and-conquer method based on the planar separator theorem. For a given ε > 0, our scheme delivers, in time polynomial in the number n of vertices of G for each fixed ε, a solution with size at most (1 + ε) times the optimal value.

  • A 7/3-Approximation for the Minimum Weight 3-Connected Spanning Subgraph Problem

    Hiroshi NAGAMOCHI  Katsuhiro SEKI  Toshihide IBARAKI  

     
    PAPER

      Vol:
    E83-A No:4
      Page(s):
    687-691

    We consider the problem of finding a minimum weight k-connected spanning subgraph of a given edge-weighted graph G for k=3. The problem is known to be NP-hard for k ≥ 2, and there are an O(n^2 m) time 3-approximation algorithm due to Nutov and Penn and an O(n^8) time 2-approximation algorithm due to Dinitz and Nutov, where n and m are the numbers of vertices and edges in G, respectively. In this paper, we present a 7/3-approximation algorithm which runs in O(n^2 m) time.

  • A Simple Proof of a Minimum Cut Algorithm and Its Applications

    Hiroshi NAGAMOCHI  Toshimasa ISHII  Toshihide IBARAKI  

     
    PAPER-Algorithms and Data Structures

      Vol:
    E82-A No:10
      Page(s):
    2231-2236

    For the correctness of the minimum cut algorithm proposed in [H. Nagamochi and T. Ibaraki, Computing edge-connectivity of multigraphs and capacitated graphs, SIAM J. Discrete Mathematics, 5, 1992, pp. 54-66], several simple proofs have been presented so far. This paper gives yet another simple proof. As a byproduct, it yields an O(m log n) time algorithm that outputs a maximum flow between the pair of vertices s and t selected by the algorithm, where n and m are the numbers of vertices and edges, respectively. This algorithm can be used to speed up the algorithm that computes the DAG D(s,t) representing all minimum cuts separating vertices s and t in a graph G, and the algorithm that computes the cactus Γ(G) representing all minimum cuts in G.
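    The cited algorithm rests on maximum-adjacency (MA) orderings; below is a compact O(n^3) sketch of that framework in the well-known Stoer-Wagner formulation, which shares the key lemma that the last two vertices of an MA ordering form a minimum cut for that pair. The paper's data structures achieve far better running times than this illustration.

    ```python
    def min_cut(n, w):
        """w: {frozenset({u, v}): weight}. Returns (cut value, one side of a min cut)."""
        vertices = list(range(n))
        merged = {v: {v} for v in vertices}      # which originals each vertex stands for
        best, best_side = float("inf"), None
        while len(vertices) > 1:
            weights = {v: 0 for v in vertices}
            order, rest = [], set(vertices)
            while rest:                          # build a maximum-adjacency ordering
                u = max(rest, key=lambda v: weights[v])
                rest.discard(u); order.append(u)
                for v in rest:
                    weights[v] += w.get(frozenset({u, v}), 0)
            s, t = order[-2], order[-1]
            if weights[t] < best:                # cut-of-the-phase: t versus the rest
                best, best_side = weights[t], set(merged[t])
            for v in vertices:                   # contract t into s
                if v not in (s, t) and frozenset({t, v}) in w:
                    key = frozenset({s, v})
                    w[key] = w.get(key, 0) + w.pop(frozenset({t, v}))
            w.pop(frozenset({s, t}), None)
            merged[s] |= merged.pop(t)
            vertices.remove(t)
        return best, best_side

    edges = {frozenset({0, 1}): 2, frozenset({1, 2}): 3,
             frozenset({2, 3}): 4, frozenset({3, 0}): 1, frozenset({0, 2}): 1}
    print(min_cut(4, edges))   # value 4, e.g. separating vertex 0 from the rest
    ```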

  • Data Analysis by Positive Decision Trees

    Kazuhisa MAKINO  Takashi SUDA  Hirotaka ONO  Toshihide IBARAKI  

     
    PAPER-Theoretical Aspects

      Vol:
    E82-D No:1
      Page(s):
    76-88

    Decision trees are used as a convenient means to explain given positive examples and negative examples, which is a form of data mining and knowledge discovery. Standard methods such as ID3 may provide non-monotonic decision trees, in the sense that data with larger values in all attributes are sometimes classified into a class with a smaller output value. (In the case of binary data, this is equivalent to saying that the discriminant Boolean function that the decision tree represents is not positive.) A motivation of this study comes from the observation that real-world data are often positive, and in such cases it is natural to build decision trees which represent positive (i.e., monotone) discriminant functions. For this, we propose how to modify existing procedures such as ID3 so that the resulting decision tree represents a positive discriminant function. In this procedure, we add some new data to recover the positivity of the data, which the original data had but was lost in the process of decomposing data sets by methods such as ID3. To compare the performance of our method with existing methods, we test (1) positive data, which are randomly generated from a hidden positive Boolean function after adding dummy attributes, and (2) breast cancer data as an example of real-world data. The experimental results on (1) show that, although positive decision trees are somewhat larger than those built without the positivity assumption, they exhibit higher accuracy and tend to choose correct attributes, on which the hidden positive Boolean function is defined. For the breast cancer data set, we observe a similar tendency; i.e., positive decision trees are larger but give higher accuracy.
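    The positivity property discussed above admits a direct check on data: a labeled binary data set is positive exactly when x ≤ y componentwise implies label(x) ≤ label(y). The sketch below lists violating pairs; it illustrates the property, not the paper's tree-building procedure.

    ```python
    from itertools import combinations

    def leq(x, y):
        """Componentwise x <= y for binary vectors."""
        return all(a <= b for a, b in zip(x, y))

    def positivity_violations(data):
        """data: list of (vector, label). Return pairs breaking monotonicity."""
        return [(x, y) for (x, lx), (y, ly) in combinations(data, 2)
                if (leq(x, y) and lx > ly) or (leq(y, x) and ly > lx)]

    data = [((0, 0, 1), 1), ((0, 1, 1), 0), ((1, 0, 0), 0), ((1, 1, 0), 1)]
    # (0,0,1) <= (0,1,1) but their labels are 1 > 0: a positivity violation.
    print(positivity_violations(data))
    ```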

  • A Note on Approximating the Survivable Network Design Problem in Hypergraphs

    Liang ZHAO  Hiroshi NAGAMOCHI  Toshihide IBARAKI  

     
    PAPER

      Vol:
    E85-D No:2
      Page(s):
    322-326

    We consider designing approximation algorithms for the survivable network design problem in hypergraphs (SNDPHG) based on algorithms developed for the survivable network design problem in graphs (SNDP) or the element connectivity problem in graphs (ECP). Given an instance of the SNDPHG, by replacing each hyperedge e = {v_1, …, v_k} with a new vertex w_e and k edges {w_e, v_1}, …, {w_e, v_k}, we define an SNDP or ECP in the resulting graph. We show that by approximately solving the SNDP or ECP defined in this way, several approximation algorithms for the SNDPHG can be obtained. One of our results is a d_max^+ -approximation algorithm for the SNDPHG with d_max ≤ 3, where d_max (resp. d_max^+) is the maximum degree of hyperedges (resp. hyperedges with positive cost). Another is a d_max^+ H(r_max)-approximation algorithm for the SNDPHG, where H(i) = Σ_{j=1}^{i} (1/j) is the harmonic function and r_max is the maximum connectivity requirement.
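    A minimal sketch of the hyperedge replacement described above. How the cost of e is distributed over the star's edges is my assumption (all on one edge, so that buying the full star costs exactly the hyperedge's cost); the paper defines the derived SNDP/ECP instances, costs, and requirements precisely.

    ```python
    def hypergraph_to_graph(n, hyperedges):
        """hyperedges: list of (vertex_set, cost). Return (num_vertices, edge list)."""
        edges, next_id = [], n
        for members, cost in hyperedges:
            w_e = next_id; next_id += 1          # new vertex w_e for hyperedge e
            members = sorted(members)
            edges.append((w_e, members[0], cost))        # cost on one star edge
            edges += [(w_e, v, 0) for v in members[1:]]  # zero cost on the rest
        return next_id, edges

    num_v, edges = hypergraph_to_graph(4, [({0, 1, 2}, 5), ({2, 3}, 2)])
    print(num_v, edges)
    ```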

  • FOREWORD

    Toshihide IBARAKI  Osamu WATANABE  

     
    FOREWORD

      Vol:
    E75-D No:1
      Page(s):
    3-4
  • Construction of Classifiers by Iterative Compositions of Features with Partial Knowledge

    Kazuya HARAGUCHI  Toshihide IBARAKI  

     
    PAPER

      Vol:
    E89-A No:5
      Page(s):
    1284-1291

    We consider the classification problem of constructing a classifier c: {0,1}^n → {0,1} from a given set of examples (the training set), which (approximately) realizes the hidden oracle y: {0,1}^n → {0,1} describing the phenomenon under consideration. For this problem, a number of approaches are already known in computational learning theory; e.g., decision trees, support vector machines (SVM), and iteratively composed features (ICF). The last one, ICF, was proposed in our previous work (Haraguchi et al., 2004). A feature, composed of a nonempty subset S of other features (including the original data attributes), is a Boolean function f_S: {0,1}^S → {0,1} and is constructed according to the proposed rule. The ICF algorithm iterates generation and selection processes of features, and finally adopts one of the generated features as the classifier, where the generation process may be considered as embodying the idea of boosting, since new features are generated from the available features. In this paper, we generalize a feature to an extended Boolean function f_S: {0,1,*}^S → {0,1,*} to allow partial knowledge, where * denotes the state of uncertainty. We then propose the algorithm ICF* to generate such generalized features. The selection process of ICF* also differs from that of ICF, in that features are selected so as to cover the entire training set. Our computational experiments indicate that ICF* is better than ICF in terms of both classification performance and computation time. It is also competitive with other representative learning algorithms such as decision trees and SVM.
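    The paper's composition rule for features is its own; as a generic illustration of how an extended feature f_S: {0,1,*}^S → {0,1,*} can propagate partial knowledge, here is a Kleene-style three-valued conjunction, where a definite 0 decides the output and * means unknown.

    ```python
    def and3(values):
        """Three-valued AND over an iterable of 0, 1, and '*' (unknown)."""
        if any(v == 0 for v in values):
            return 0                  # a definite 0 decides the output
        if any(v == "*" for v in values):
            return "*"                # otherwise any unknown keeps it unknown
        return 1

    print(and3([1, 1, 1]), and3([1, 0, "*"]), and3([1, "*", 1]))  # 1 0 *
    ```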

  • Complexity of the Optimum Join Order Problem in Relational Databases

    Yushi UNO  Toshihide IBARAKI  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E74-D No:7
      Page(s):
    2067-2075

    Optimizing the computing process of relational databases is important in order to increase their applicability. The process consists of operations involving many relational tables. Among the basic operations, joins are the most important because they consume most of the computational time. In this paper, we consider executing such joins on many relational tables by the merge-scan method, and try to find the optimum join order that minimizes the total size of intermediate tables (including the final answer table). This cost is important in its own right, as it represents the memory space requirement of the entire computation. It can also be viewed as an approximate measure of computational time. However, it turns out that the problem is solvable in polynomial time only for very restricted special cases, and is NP-hard in general.
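    Since the general problem is NP-hard, a small instance allows only exhaustive search; the toy below enumerates join orders and sums the intermediate table sizes. The size model (a fixed selectivity per table pair) is my simplification purely for illustration, not the paper's merge-scan cost model.

    ```python
    from itertools import permutations

    sizes = {"A": 100, "B": 50, "C": 200}
    sel = {frozenset("AB"): 0.01, frozenset("BC"): 0.05, frozenset("AC"): 0.02}

    def join_size(done, size, nxt):
        """Estimated size of joining the accumulated tables `done` with `nxt`."""
        s = size * sizes[nxt]
        for t in done:
            s *= sel[frozenset((t, nxt))]
        return s

    best = (float("inf"), None)
    for order in permutations(sizes):
        done, size, total = [order[0]], sizes[order[0]], 0
        for nxt in order[1:]:
            size = join_size(done, size, nxt)
            total += size             # sum of intermediate (and final) table sizes
            done.append(nxt)
        best = min(best, (total, order))
    print(best)                       # minimum total size and a best join order
    ```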

  • An LP-Based Local Search to the One Dimensional Cutting Stock Problem Using a Given Number of Cutting Patterns

    Shunji UMETANI  Mutsunori YAGIURA  Toshihide IBARAKI  

     
    PAPER

      Vol:
    E86-A No:5
      Page(s):
    1093-1102

    The one dimensional cutting stock problem (1D-CSP) is one of the representative combinatorial optimization problems arising in many industries. As the setup costs of cutting patterns have become more dominant in the cutting industry, we consider a variant of 1D-CSP in which the total number of applications of cutting patterns is minimized under the constraint that the number of different cutting patterns is specified in advance. We propose a local search algorithm that uses the neighborhood obtained by perturbing one cutting pattern in the current set of patterns, where the perturbations are guided by the dual solution of an auxiliary linear programming problem (LP). Since this process solves a large number of LPs, we start the criss-cross variation of the simplex algorithm from the optimal simplex tableau of the previous solution, instead of starting it from scratch. Our computational experiments show that the proposed algorithm obtains a wide variety of good solutions which are comparable to those of existing heuristic approaches.
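    A minimal sketch of the LP evaluated inside the local search, assuming a toy instance: for a fixed set of cutting patterns, minimize the total number of pattern applications subject to demand coverage, and read off the dual prices that guide the pattern perturbation. The paper additionally warm-starts a criss-cross simplex from the previous tableau instead of solving each LP from scratch, which this sketch does not do.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # patterns[j][i]: how many pieces of item i pattern j cuts from one stock roll
    patterns = np.array([[2, 1, 0],
                         [0, 2, 1],
                         [1, 0, 2]])
    demand = np.array([40, 60, 30])

    res = linprog(c=np.ones(len(patterns)),       # minimize total applications
                  A_ub=-patterns.T, b_ub=-demand, # coverage: patterns^T x >= demand
                  bounds=[(0, None)] * len(patterns),
                  method="highs")
    print("applications per pattern:", res.x)
    print("dual prices of the demands:", -res.ineqlin.marginals)
    ```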