IEICE TRANSACTIONS on Information

Offline Permutation on the CUDA-enabled GPU

Akihiko KASAGI, Koji NAKANO, Yasuaki ITO

Summary:

The Hierarchical Memory Machine (HMM) is a theoretical parallel computing model that captures the essence of computation on CUDA-enabled GPUs. The offline permutation is the task of copying the numbers stored in an array a of size n to an array b of the same size according to a permutation P given in advance. A conventional algorithm can complete the offline permutation by executing b[p[i]] ← a[i] for all i in parallel, where an array p stores the permutation P. We first show that the conventional algorithm runs in $D_w(P)+2\frac{n}{w}+3L-3$ time units using n threads on the HMM with width w and latency L, where $D_w(P)$ is the distribution of P. We next show that important regular permutations, including the transpose, shuffle, and bit-reversal permutations, can be performed in $2\frac{n}{w}+2\frac{n}{kw}+2L-2$ time units on the HMM with k DMMs. We have implemented permutation algorithms for these regular permutations on a GeForce GTX 680 GPU, and the experimental results show that they run much faster than the conventional algorithm. We also present an offline permutation algorithm for an arbitrary permutation that runs in $16\frac{n}{w}+16\frac{n}{kw}+16L-16$ time units on the HMM with k DMMs. Quite surprisingly, this algorithm on the GPU achieves better performance than the conventional algorithm for random permutations, even though its running time has a large constant factor. The experimental results thus provide a good example of GPU computation in which a complicated but ingenious implementation with a larger constant factor in computing time outperforms a much simpler conventional algorithm.
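The conventional scatter algorithm b[p[i]] ← a[i] described in the summary can be sketched as follows. This is a minimal sequential Python illustration, not the authors' CUDA implementation; on the HMM/GPU each loop iteration would be carried out by one of n threads in parallel. The bit-reversal permutation is included as one of the regular permutations the paper discusses; the function names are illustrative, not from the paper.

```python
def offline_permutation(a, p):
    """Conventional offline permutation: b[p[i]] = a[i] for all i.

    On the GPU each iteration would be one thread; here the loop is
    sequential for clarity.
    """
    n = len(a)
    b = [None] * n
    for i in range(n):
        b[p[i]] = a[i]  # scatter a[i] to position p[i]
    return b

def bit_reversal(n_bits):
    """Bit-reversal permutation of size n = 2**n_bits: index i maps to
    the index obtained by reversing the binary representation of i."""
    n = 1 << n_bits
    return [int(format(i, f"0{n_bits}b")[::-1], 2) for i in range(n)]

p = bit_reversal(3)  # [0, 4, 2, 6, 1, 5, 3, 7]
b = offline_permutation(list(range(8)), p)
```

On a real GPU the performance of this scatter depends heavily on how the writes b[p[i]] map to memory banks and coalesced accesses, which is exactly what the distribution term $D_w(P)$ in the running-time bound captures for the HMM model.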

Publication
IEICE TRANSACTIONS on Information Vol.E97-D No.12 pp.3052-3062
Publication Date
2014/12/01
Publicized
Online ISSN
1745-1361
DOI
10.1587/transinf.2014PAP0010
Type of Manuscript
Special Section PAPER (Special Section on Parallel and Distributed Computing and Networking)
Category
GPU

Authors

Akihiko KASAGI
  Hiroshima University
Koji NAKANO
  Hiroshima University
Yasuaki ITO
  Hiroshima University

Keyword