
Keyword Search Result

[Keyword] parallelizing compilers (4 hits)

1-4 of 4 hits
  • Efficient Loop Partitioning for Parallel Codes of Irregular Scientific Computations

    Minyi GUO  

     
    PAPER-Software Systems

    Vol: E86-D No:9  Page(s): 1825-1834

    In most distributed-memory computations, node programs are executed on processors according to the owner-computes rule. However, the owner-computes rule is not well suited to irregular application codes: the use of indirection in accessing the left-hand-side array makes it difficult to partition the loop iterations, and because indirection is also used in accessing right-hand-side elements, total communication may be reduced by heuristics other than the owner-computes rule. In this paper, we propose a communication-reducing computes rule for irregular loop partitioning, called the least-communication computes rule: each loop iteration is assigned to the processor on which executing that iteration incurs the minimal communication cost. After all iterations have been partitioned among the processors, we give a global-to-local data transformation rule, indirection-array remapping, and communication optimization methods. The experimental results show that, in most cases, our approach achieves better performance than other loop-partitioning rules.
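
    As a minimal sketch of the least-communication idea (not the paper's actual algorithm), the fragment below assigns each iteration of an indirection-driven loop to the processor that owns the most of the elements it touches, i.e. the one needing the fewest remote accesses. The block distribution, unit transfer cost, and the names owner/assign_iterations are illustrative assumptions.

        # Hypothetical sketch: pick, per iteration, the processor needing the
        # fewest remote elements. Block distribution and unit cost are assumed.

        def owner(index, n_elems, n_procs):
            """Owner of element `index` under a block distribution."""
            block = (n_elems + n_procs - 1) // n_procs
            return index // block

        def assign_iterations(lhs_idx, rhs_idx, n_elems, n_procs):
            """For iteration i accessing x[lhs_idx[i]] and y[j] for j in
            rhs_idx[i], choose the processor with minimal communication."""
            placement = []
            for i in range(len(lhs_idx)):
                touched = [lhs_idx[i]] + list(rhs_idx[i])
                owners = [owner(e, n_elems, n_procs) for e in touched]
                # Cost on p = number of touched elements that p does not own.
                best = min(range(n_procs),
                           key=lambda p: sum(o != p for o in owners))
                placement.append(best)
            return placement

        # Example: 10 elements on 2 processors (0-4 on p0, 5-9 on p1).
        print(assign_iterations([0, 5, 9, 2], [[1, 8], [4, 6], [9, 3], [2]],
                                n_elems=10, n_procs=2))   # -> [0, 1, 1, 0]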

  • Parallel Molecular Dynamics in a Parallelizing SML Compiler

    Norman SCAIFE  Ryoko HAYASHI  Susumu HORIGUCHI  

     
    PAPER-Software Systems and Technologies

    Vol: E86-D No:9  Page(s): 1569-1576

    We have constructed a parallelizing compiler for Standard ML (SML) based upon algorithmic skeletons. We present an implementation of a Parallel Molecular Dynamics (PMD) simulation in order to compare our functional approach with a traditional imperative approach. Although we present performance data, the principal benefits of our approach are the modularity of the code and the ease of programming. Existing FORTRAN90 code for an O(N²) algorithm is translated, first into imperative SML and then into purely functional SML, which is then parallelized. The ease of programming and the performance of the FORTRAN90 and SML code are compared. The parallel SML obtains modest parallel speedup, but with much slower sequential execution than the FORTRAN90. We then improve the implementation with a ring-topology version whose performance is much closer to that of the FORTRAN90 implementation.
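
    The ring-topology pattern for O(N²) pairwise interactions can be sketched independently of SML. In the toy Python model below (a sequential simulation, assuming block-distributed particles and an inverse-distance "interaction" rather than a physical force law; pairwise_ring is an illustrative name), each worker keeps a resident block while packets holding copies of all blocks circulate around the ring, so every pair of particles meets once per sweep.

        def pairwise_ring(positions, n_workers):
            """Toy ring sweep: per-particle sums of inverse distances."""
            n = len(positions)
            block = (n + n_workers - 1) // n_workers
            offsets = [w * block for w in range(n_workers)]
            resident = [positions[off:off + block] for off in offsets]
            # Each travelling packet carries a block copy and its offset.
            travelling = [(resident[w][:], offsets[w]) for w in range(n_workers)]
            acc = [0.0] * n
            for _ in range(n_workers):
                for w in range(n_workers):
                    blk, off = travelling[w]
                    for i, xi in enumerate(resident[w]):
                        for j, xj in enumerate(blk):
                            gi, gj = offsets[w] + i, off + j
                            if gi != gj:
                                acc[gi] += 1.0 / abs(xi - xj)
                # One ring step: each worker hands its packet to its successor.
                travelling = [travelling[-1]] + travelling[:-1]
            return acc

        print(pairwise_ring([0.0, 1.0, 2.5, 4.0, 6.0, 9.0], n_workers=3))

    In a real parallel run, only the travelling packets move between neighbours, so per-step communication stays constant as the worker count grows.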

  • Enhanced Look-Ahead Scheduling Technique to Overlap Communication with Computation

    Dingchao LI  Yuji IWAHORI  Tatsuya HAYASHI  Naohiro ISHII  

     
    PAPER-Software Systems

    Vol: E81-D No:11  Page(s): 1205-1212

    Reducing communication overhead is a key goal of program optimization for current scalable multiprocessors. A well-known approach to achieving this is to map tasks (indivisible units of computation) to processors so that communication and computation overlap as much as possible. In earlier work, we developed a look-ahead scheduling heuristic that efficiently reduces communication overhead with the aim of decreasing the completion time of a given parallel program. In this paper, we report an extension of that algorithm which fills the idle time slots created by interprocessor communication without increasing the algorithm's time complexity. The experimental results emphasize the importance of optimally filling processors' idle time slots.
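
    A minimal illustration of the slot-filling idea (an assumed gap-insertion helper, not the authors' algorithm): before appending a task at the end of a processor's timeline, the scheduler first tries to place it in an idle interval left by a communication wait. insert_task and the interval representation are hypothetical.

        def insert_task(slots, ready_time, duration):
            """slots: sorted (start, end) busy intervals on one processor.
            Return the earliest start >= ready_time where `duration` fits,
            preferring an idle gap between intervals over the end."""
            t = ready_time
            for start, end in slots:
                if t + duration <= start:   # fits in the gap before `start`
                    break
                t = max(t, end)             # otherwise skip past this interval
            slots.append((t, t + duration))
            slots.sort()
            return t

        busy = [(0, 2), (5, 9)]             # idle slot [2, 5) from a comm. wait
        print(insert_task(busy, ready_time=2, duration=3))  # -> 2 (fills gap)
        print(insert_task(busy, ready_time=0, duration=4))  # -> 9 (no gap fits)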

  • Data-Localization Scheduling inside Processor-Cluster for Multigrain Parallel Processing

    Akimasa YOSHIDA  Ken'ichi KOSHIZUKA  Wataru OGATA  Hironori KASAHARA  

     
    PAPER

    Vol: E80-D No:4  Page(s): 473-479

    This paper proposes a data-localization scheduling scheme inside a processor-cluster for multigrain parallel processing, which hierarchically exploits parallelism among coarse-grain tasks such as loops, medium-grain tasks such as loop iterations, and near-fine-grain tasks such as statements. The proposed scheme assigns the near-fine-grain or medium-grain tasks inside coarse-grain tasks to processors within a processor-cluster so that maximum parallelism is exploited and inter-processor data transfer is minimized after data-localization of coarse-grain tasks across processor-clusters. Performance evaluation on the OSCAR multiprocessor system shows that multigrain parallel processing with the proposed data-localization scheduling reduces execution time for application programs by 10% compared with multigrain parallel processing without data-localization.
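
    As a toy sketch of the data-localization idea (the block partitioning and task model are assumptions, not OSCAR's actual scheduler): iterations of consecutive coarse-grain loops that touch the same array block are pinned to the same processor of a cluster, so the block stays in that processor's local memory between loops.

        def localize(loops, n_procs, n_elems):
            """loops: one list of (iteration_id, element_index) pairs per loop.
            Map each iteration to the processor owning the block it touches,
            so consecutive loops reuse data already local to that processor."""
            block = (n_elems + n_procs - 1) // n_procs
            return [{it: idx // block for it, idx in loop} for loop in loops]

        # Two aligned loops over a[0..7]: e.g. "a[i] = ..." then "b[i] = f(a[i])".
        loop1 = [(i, i) for i in range(8)]
        loop2 = [(i, i) for i in range(8)]
        for mapping in localize([loop1, loop2], n_procs=2, n_elems=8):
            print(mapping)   # iterations 0-3 -> proc 0, 4-7 -> proc 1, both loops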