Keyword Search Results

[Keyword] GPU (89 hits)

Showing results 41-60 of 89

  • BLM-Rank: A Bayesian Linear Method for Learning to Rank and Its GPU Implementation

    Huifeng GUO  Dianhui CHU  Yunming YE  Xutao LI  Xixian FAN  

    PAPER
    Publicized: 2016/01/14  Vol: E99-D No:4  Page(s): 896-905

    Ranking is an important task in information systems, with applications such as document/webpage retrieval, collaborative filtering, and advertising. The last decade has witnessed growing interest in the study of learning to rank as a means to leverage training information in a system. In this paper, we propose a new learning-to-rank method, BLM-Rank, which uses a linear function to score samples and models the pairwise preference of samples based on their scores under a Bayesian framework. A stochastic gradient approach is adopted to maximize the posterior probability in BLM-Rank. For industrial practice, we have also implemented the proposed algorithm on the Graphics Processing Unit (GPU). Experimental results on LETOR demonstrate that BLM-Rank outperforms state-of-the-art methods, including RankSVM-Struct, RankBoost, AdaRank-NDCG, AdaRank-MAP and ListNet. Moreover, the results show that the GPU implementation of BLM-Rank is ten-to-eleven times faster than its CPU counterpart in the training phase, and one-to-four times faster in the testing phase.
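
    To make the scoring step concrete, the sketch below shows the per-sample linear scoring a pairwise ranker of this kind needs, one CUDA thread per sample. It is an illustration under stated assumptions (the feature count and all names are hypothetical), not the authors' implementation.

        #include <cuda_runtime.h>

        #define NUM_FEATURES 46   // a LETOR-style feature count (assumption)

        // One thread per sample computes s_i = w . x_i; the pairwise
        // preference of samples (i, j) is then modeled from s_i - s_j.
        __global__ void scoreKernel(const float *x,  // n x NUM_FEATURES
                                    const float *w,  // model weights
                                    float *s, int n) // output scores
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            float acc = 0.0f;
            for (int f = 0; f < NUM_FEATURES; ++f)
                acc += w[f] * x[i * NUM_FEATURES + f];
            s[i] = acc;
        }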

  • Dynamic Rendering Quality Scaling Based on Resolution Changes

    MinKyu KIM  SunHo KI  YoungDuke SEO  JinHong PARK  ChuShik JHON  

    LETTER-Computer Graphics
    Publicized: 2015/09/17  Vol: E98-D No:12  Page(s): 2353-2357

    Recently, the mobile graphics industry has come to demand ultra-realistic visual quality at 60 fps within a limited GPU power budget. For graphics-heavy applications that run at 30 fps, very noticeable flickering artifacts are easily observed. Furthermore, the workload imposed by high resolutions at high frame rates directly shortens battery life. Unlike recent frame-rate up-sampling algorithms, which remedy the flickering but incur significant overhead to reconstruct intermediate frames, we propose dynamic rendering quality scaling (DRQS), which combines dynamic rendering based on resolution changes with quality scaling to increase the frame rate at negligible overhead using a transform matrix. Furthermore, DRQS reduces the workload by up to 32% for graphics-light applications without humanly perceptible visual changes.

  • Implementation of Viterbi Decoder toward GPU-Based SDR Receiver

    Kosuke TOMITA  Masahide HATANAKA  Takao ONOYE  

    PAPER
    Vol: E98-A No:11  Page(s): 2246-2253

    Viterbi decoding is commonly used in several protocols, but its computational cost is quite high, so an efficient implementation is essential. This paper describes a GPU implementation of a Viterbi decoder that utilizes the three-point Viterbi decoding algorithm (TVDA), in which the received bits are divided into multiple chunks and several chunks are decoded simultaneously. Coalesced memory access and Warp Shuffle, a newly introduced instruction, are also utilized to improve decoder performance. In addition, iterative execution of parallel chunk decoding reduces the latency of the proposed Viterbi decoder so that it can serve as part of a GPU-based SDR transceiver. As a result, the throughput of the proposed Viterbi decoder is improved by 23.1%.
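
    As a flavor of the Warp Shuffle instruction mentioned above, the device function below performs a warp-wide minimum over 32 path metrics without touching shared memory, the kind of reduction an add-compare-select stage can use. This is a hedged sketch (current CUDA spells the instruction __shfl_down_sync), not the paper's decoder.

        #include <cuda_runtime.h>

        // Warp-wide minimum of 32 path metrics via register exchange;
        // after the loop, lane 0 holds the minimum for the whole warp.
        __device__ float warpMinMetric(float metric)
        {
            for (int offset = 16; offset > 0; offset >>= 1) {
                float other = __shfl_down_sync(0xffffffffu, metric, offset);
                metric = fminf(metric, other);
            }
            return metric;
        }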

  • 3D Objects Tracking by MapReduce GPGPU-Enhanced Particle Filter

    Jieyun ZHOU  Xiaofeng LI  Haitao CHEN  Rutong CHEN  Masayuki NUMAO  

    PAPER
    Publicized: 2015/01/21  Vol: E98-D No:5  Page(s): 1035-1044

    Object tracking methods have been widely used in video surveillance, motion monitoring, robotics, and so on. The particle filter is one of the promising methods, but it is difficult to apply to real-time object tracking because of its high computation cost. In order to reduce the processing cost without sacrificing tracking quality, this paper proposes a new method for real-time 3D object tracking that parallelizes the particle filter algorithm with a MapReduce architecture running on a GPGPU. Our methods are as follows. First, we use a Kinect to obtain 3D information about the objects. Unlike conventional 2D object tracking, 3D object tracking adds depth information: it can track not only along the x and y axes but also along the z axis, and the depth information can correct some errors of 2D tracking. Second, to address the high computation cost, we use the MapReduce architecture on the GPGPU to parallelize the particle filter algorithm. We implement the particle filter algorithms on the GPU and evaluate the performance by running the program on CUDA 5.5.
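
    The MapReduce split described above can be sketched as two kernels: a map that gives every particle a weight and a reduce that accumulates the normalization constant. The likelihood below is a placeholder (a real one would compare the particle's predicted state with the Kinect depth observation); all names are hypothetical.

        #include <cuda_runtime.h>

        // Placeholder Gaussian-style likelihood of a particle given one
        // 3D observation; the real model comes from the depth data.
        __device__ float likelihood(float3 p, float3 o)
        {
            float dx = p.x - o.x, dy = p.y - o.y, dz = p.z - o.z;
            return expf(-(dx * dx + dy * dy + dz * dz));
        }

        // Map: one thread per particle computes an unnormalized weight.
        __global__ void mapWeights(const float3 *particles, float3 obs,
                                   float *weights, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) weights[i] = likelihood(particles[i], obs);
        }

        // Reduce: accumulate the total weight for normalization.
        __global__ void reduceWeights(const float *weights, float *total,
                                      int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) atomicAdd(total, weights[i]);
        }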

  • An Optimal Implementation of the Approximate String Matching on the Hierarchical Memory Machine, with Performance Evaluation on the GPU

    Duhu MAN  Koji NAKANO  Yasuaki ITO  

    PAPER-GPU
    Vol: E97-D No:12  Page(s): 3063-3071

    The Hierarchical Memory Machine (HMM) is a theoretical parallel computing model that captures the essence of computing on CUDA-enabled GPUs. The approximate string matching (ASM) problem for two strings X and Y of length m and n is the task of finding a substring of Y most similar to X. The main contribution of this paper is to show an optimal parallel algorithm for approximate string matching on the HMM and to implement it on a GeForce GTX 580 GPU. Our algorithm runs in $O(\frac{n}{w}+\frac{mn}{dw}+\frac{nL}{p}+\frac{mnl}{p})$ time units on the HMM with p threads, d streaming processors, memory bandwidth w, global memory access latency L, and shared memory access latency l. We also show that the lower bound of the computing time is $\Omega(\frac{n}{w}+\frac{mn}{dw}+\frac{nL}{p}+\frac{mnl}{p})$ time units. Thus, our algorithm for approximate string matching is time optimal. Further, we implemented our algorithm on the GeForce GTX 580 GPU and evaluated its performance. The experimental results show that the ASM of two strings of 1024 and 4M (=2^22) characters can be done in 419.6 ms, while the sequential algorithm takes 27720 ms. Thus, our implementation on the GPU attains a speedup factor of 66.1 over the single-CPU implementation.
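
    For reference, the sequential O(mn) dynamic program that the 27720 ms baseline corresponds to fits in a few lines of host code. This is a sketch with unit edit costs (an assumption), not the paper's HMM algorithm; row 0 is all zeros so a match may start anywhere in Y.

        #include <algorithm>
        #include <string>
        #include <vector>

        // d[i][j] = best edit distance of X[0..i) against any substring
        // of Y ending at position j; the answer is the minimum of row m.
        int asmBestScore(const std::string &X, const std::string &Y)
        {
            size_t m = X.size(), n = Y.size();
            std::vector<int> prev(n + 1, 0), cur(n + 1, 0);
            for (size_t i = 1; i <= m; ++i) {
                cur[0] = (int)i;
                for (size_t j = 1; j <= n; ++j)
                    cur[j] = std::min({prev[j - 1] + (X[i-1] != Y[j-1]),
                                       prev[j] + 1, cur[j - 1] + 1});
                std::swap(prev, cur);
            }
            return *std::min_element(prev.begin(), prev.end());
        }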

  • Offline Permutation on the CUDA-enabled GPU

    Akihiko KASAGI  Koji NAKANO  Yasuaki ITO  

    PAPER-GPU
    Vol: E97-D No:12  Page(s): 3052-3062

    The Hierarchical Memory Machine (HMM) is a theoretical parallel computing model that captures the essence of computation on CUDA-enabled GPUs. Offline permutation is the task of copying numbers stored in an array a of size n to an array b of the same size along a permutation P given in advance. A conventional algorithm can complete the offline permutation by executing b[p[i]] ← a[i] for all i in parallel, where an array p stores the permutation P. We first show that the conventional algorithm runs in $D_w(P)+2\frac{n}{w}+3L-3$ time units using n threads on the HMM with width w and latency L, where $D_w(P)$ is the distribution of P. We next show that important regular permutations, including the transpose, shuffle, and bit-reversal permutations, run in $2\frac{n}{w}+2\frac{n}{kw}+2L-2$ time units on the HMM with k DMMs. We have implemented permutation algorithms for these regular permutations on a GeForce GTX 680 GPU. The experimental results show that these algorithms run much faster than the conventional algorithm. We also present an offline permutation algorithm for any permutation running in $16\frac{n}{w}+16\frac{n}{kw}+16L-16$ time units on the HMM with k DMMs. Quite surprisingly, our offline permutation algorithm on the GPU achieves better performance than the conventional algorithm for random permutations, although its running time has a large constant factor. The experimental results thus provide a good example of GPU computation in which a complicated but ingenious implementation with a larger constant factor in computing time can outperform a much simpler conventional algorithm.
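
    The conventional algorithm from the abstract is literally a one-line kernel; its cost on the HMM is governed by how the scattered writes distribute across the memory, which is what $D_w(P)$ measures.

        #include <cuda_runtime.h>

        // Conventional offline permutation: thread i executes b[p[i]] <- a[i].
        __global__ void conventionalPermute(const int *a, int *b,
                                            const int *p, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) b[p[i]] = a[i];
        }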

  • Parallelization of Dynamic Time Warping on a Heterogeneous Platform

    Yao ZHENG  Limin XIAO  Wenqi TANG  Lihong SHANG  Guangchao YAO  Li RUAN  

    LETTER-Algorithms and Data Structures
    Vol: E97-A No:11  Page(s): 2258-2262

    The dynamic time warping (DTW) algorithm is widely used in time-series similarity search. As DTW has quadratic time complexity, the time taken for similarity search is the bottleneck of virtually all time-series data mining algorithms. In this paper, we present a parallel approach for DTW on a heterogeneous platform with a graphics processing unit (GPU). In order to exploit fine-grained data-level parallelism, we propose a specific parallel decomposition of DTW. Furthermore, we introduce an optimization technique called diamond tiling to improve thread utilization. Results show that our approach substantially reduces computation time.
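
    One common way to expose the fine-grained parallelism mentioned above is an anti-diagonal sweep: cells of the DTW cost matrix on the same anti-diagonal are independent. The kernel below is a hedged sketch of that idea (one launch per diagonal), not the authors' diamond-tiled version.

        #include <cuda_runtime.h>

        // Compute all cells (i, j) with i + j == d of the DTW matrix D,
        // sized (m+1) x (n+1) row-major with D[0][0] = 0 and the first
        // row/column preset to INFINITY. Host loops d from 2 to m + n.
        __global__ void dtwDiagonal(const float *x, const float *y,
                                    float *D, int m, int n, int d)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x + 1;
            int j = d - i;
            if (i > m || j < 1 || j > n) return;
            float cost = fabsf(x[i - 1] - y[j - 1]);
            float best = fminf(D[(i - 1) * (n + 1) + (j - 1)],
                        fminf(D[(i - 1) * (n + 1) + j],
                              D[i * (n + 1) + (j - 1)]));
            D[i * (n + 1) + j] = cost + best;
        }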

  • Toward Concurrent Lock-Free Queues on GPUs

    Xiangyu ZHANG  Yangdong DENG  Shuai MU  

    LETTER-Fundamentals of Information Systems
    Vol: E97-D No:7  Page(s): 1901-1904

    General-purpose computing on GPUs (GPGPU) has become a popular computing model for high-performance, data-intensive applications. Accordingly, there is a strong need for highly efficient data structures to ease the development of GPGPU applications. In this work, we propose an efficient concurrent queue data structure for GPU computing. The GPU-based, provably correct, lock-free FIFO queue allows a massive number of concurrent producers and consumers. Warp-centric enqueue and dequeue procedures are introduced to better match the underlying Single-Instruction, Multiple-Thread execution model of modern GPUs. The queue outperforms the best previous GPU queues by up to 40 fold. The correctness of the proposed queue operations is formally validated by linearizability criteria.
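
    A full lock-free FIFO is subtle, but the warp-centric idea can be sketched: one leader lane reserves a warp's worth of slots with a single atomic, then every lane fills its own slot. This is an illustration only (a real queue also needs bounds checks and ready flags for consumers), not the paper's validated design.

        #include <cuda_runtime.h>

        // Warp-centric enqueue sketch: lane 0 claims 32 contiguous slots
        // with one atomicAdd, then broadcasts the base index to the warp.
        // Assumes n is a multiple of 32 so warps exit uniformly.
        __global__ void warpEnqueue(int *buf, unsigned *tail,
                                    const int *items, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            unsigned lane = threadIdx.x & 31u;
            unsigned base;
            if (lane == 0) base = atomicAdd(tail, 32u);
            base = __shfl_sync(0xffffffffu, base, 0);
            buf[base + lane] = items[i];
        }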

  • Efficient Parallel Interference Cancellation MIMO Detector for Software Defined Radio on GPUs

    Rongchun LI  Yong DOU  Jie ZHOU  Chen CHEN  

    PAPER-Digital Signal Processing
    Vol: E97-A No:6  Page(s): 1388-1395

    The parallel interference cancellation (PIC) multiple input multiple output (MIMO) detection algorithm has bit error ratio (BER) performance comparable to that of the maximum likelihood (ML) algorithm but with complexity close to that of simple linear detection algorithms such as zero forcing (ZF), minimum mean squared error (MMSE), and successive interference cancellation (SIC). However, the throughput of a PIC MIMO detector on a central processing unit (CPU) cannot meet the requirements of wireless protocols. In order to reach the throughput required by the standards, this paper exploits the graphics processing unit (GPU) as the modem processor to accelerate the PIC MIMO detector. The parallelism of the PIC algorithm is analyzed, and a two-stage PIC detection is carefully developed to efficiently match the many-core architecture of the GPU. Several optimization methods are employed to enhance throughput, such as memory optimization and asynchronous data transfer. Experiments show that our MIMO detector has excellent BER performance and a peak throughput of 337.84 megabits per second (Mbps), about 7x to 16x faster than a CPU implementation with SSE2 optimizations. The implemented MIMO detector achieves higher computing throughput than recent GPU-based implementations.

  • Throughput and Power Efficiency Evaluation of Block Ciphers on Kepler and GCN GPUs Using Micro-Benchmark Analysis

    Naoki NISHIKAWA  Keisuke IWAI  Hidema TANAKA  Takakazu KUROKAWA  

    PAPER-Fundamentals of Information Systems
    Vol: E97-D No:6  Page(s): 1506-1515

    Computer systems with GPUs are expected to become a strong platform for high-speed encryption processing, while power consumption remains a primary concern for such processing on devices of all sizes. GPU vendors have announced their roadmaps of GPU architecture development: NVIDIA Corp. promotes the Kepler architecture and AMD Corp. the GCN architecture. We therefore evaluated the throughput and power efficiency of three 128-bit block ciphers on GPUs with the recent NVIDIA Kepler and AMD GCN architectures. In our experiments, the throughput and per-watt throughput of AES-128 on a Radeon HD 7970 (2048 cores) with the GCN architecture are 205.0 Gbps and 1.3 Gbps/W respectively, whereas those on a GeForce GTX 680 (1536 cores) with the Kepler architecture are 63.9 Gbps and 0.43 Gbps/W; an approximately 3.2-fold throughput difference thus exists between AES-128 on the two GPUs. We then investigate the reasons for this difference using our micro-benchmark suites. According to the results, we speculate that for Kepler GPUs to serve better as block-cipher co-processors, their arithmetic and logical instructions must be improved in terms of both software and hardware.
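
    Throughput figures of this kind are typically obtained with event-based timing. The harness below is a generic sketch (not the authors' micro-benchmark suite); the XOR kernel is a stand-in to be replaced by a real cipher kernel.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Placeholder "cipher": XOR every 32-bit word with a key.
        __global__ void xorKernel(unsigned *data, unsigned key, size_t nWords)
        {
            size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
            if (i < nWords) data[i] ^= key;
        }

        int main()
        {
            const size_t bytes = 256u << 20;          // 256 MiB workload
            unsigned *d;
            cudaMalloc(&d, bytes);
            size_t nWords = bytes / sizeof(unsigned);
            cudaEvent_t t0, t1;
            cudaEventCreate(&t0); cudaEventCreate(&t1);
            cudaEventRecord(t0);
            xorKernel<<<(unsigned)((nWords + 255) / 256), 256>>>(d, 0x5au, nWords);
            cudaEventRecord(t1);
            cudaEventSynchronize(t1);
            float ms = 0.0f;
            cudaEventElapsedTime(&ms, t0, t1);
            std::printf("throughput: %.1f Gbps\n", bytes * 8 / (ms * 1e6));
            cudaFree(d);
            return 0;
        }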

  • An Efficient Parallel SOVA-Based Turbo Decoder for Software Defined Radio on GPU

    Rongchun LI  Yong DOU  Jiaqing XU  Xin NIU  Shice NI  

    PAPER-Digital Signal Processing
    Vol: E97-A No:5  Page(s): 1027-1036

    In this paper, we propose a fully parallel Turbo decoder for Software-Defined Radio (SDR) on the Graphics Processing Unit (GPU) platform. The Soft Output Viterbi Algorithm (SOVA) is chosen for its low complexity and high throughput. The parallelism of SOVA is fully analyzed, and the whole codeword is divided into multiple sub-codewords whose turbo-pass decoding procedures are performed in parallel by independent sub-decoders. In each sub-decoder, an efficient initialization method is exploited to preserve the bit error ratio (BER) performance. The sub-decoders are mapped to numerous blocks on the GPU. Several optimization methods are employed to enhance the throughput, such as memory optimization, a codeword packing scheme, and asynchronous data transfer. Experiments show that our decoder has BER performance close to Max-Log-MAP, and its peak throughput of 127.84 Mbps is about two orders of magnitude faster than a central processing unit (CPU) implementation and comparable to application-specific integrated circuit (ASIC) solutions. The presented decoder achieves higher throughput than the existing fastest GPU-based implementation.

  • Accelerating Extended Hamming Code Decoders on Graphic Processing Units for High Speed Communication

    Md Shohidul ISLAM  Jong-Myon KIM  

    PAPER-Wireless Communication Technologies
    Vol: E97-B No:5  Page(s): 1050-1058

    Emerging networks characterized by growing speed and data intensity demand faster and more scalable error handling. Prevalent decoders are based on dedicated hardware, offering considerable processing speed but limited flexibility, programmability, and scalability. This paper proposes an efficient approach to accelerate the extended-Hamming-code decoder using a graphics processing unit (GPU), chosen for its low cost and extremely high-throughput parallel-computing capability. The paper compares the performance of the GPU-based approach with equivalent sequential approaches running on a central processing unit (CPU) and a Texas Instruments TMS320C6742 digital signal processor (DSP) with varying packet sizes and error tolerances. Experimental results demonstrate that the proposed GPU-based approach outperforms the sequential approaches in terms of execution time and energy consumption.
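
    The decoder parallelizes naturally with one thread per codeword. Below is a hedged sketch of the per-codeword step for an extended Hamming (SECDED) layout chosen for illustration: bits 1..7 of a byte form a Hamming(7,4) word (parity at positions 1, 2, 4) and bit 0 is an overall even-parity bit. It shows the kernel shape, not the paper's exact code.

        #include <cuda_runtime.h>

        // status: 0 = clean, 1 = single error corrected, 2 = double error.
        __global__ void secdedDecode(unsigned char *words, int *status, int n)
        {
            int t = blockIdx.x * blockDim.x + threadIdx.x;
            if (t >= n) return;
            unsigned c = words[t];
            int s = 0;                            // Hamming syndrome
            for (int pos = 1; pos <= 7; ++pos)
                if ((c >> pos) & 1u) s ^= pos;    // XOR of set-bit positions
            int parity = __popc(c & 0xffu) & 1;   // 0 means even parity
            if (s == 0 && parity == 0) { status[t] = 0; return; }
            if (parity == 1) {                    // single error: fix it
                words[t] = (unsigned char)(c ^ (1u << s)); // s==0: parity bit
                status[t] = 1;
            } else {
                status[t] = 2;                    // double error: detect only
            }
        }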

  • Probabilistic Frequent Itemset Mining on a GPU Cluster Open Access

    Yusuke KOZAWA  Toshiyuki AMAGASA  Hiroyuki KITAGAWA  

    PAPER
    Vol: E97-D No:4  Page(s): 779-789

    Probabilistic frequent itemset mining, which discovers frequent itemsets from uncertain data, has attracted much attention due to the inherent uncertainty of the real world. Many algorithms have been proposed to tackle this problem, but their performance is unsatisfactory because handling uncertainty incurs high processing cost. To accelerate the computation, we utilize GPUs (Graphics Processing Units). Our previous work accelerated an existing algorithm with a single GPU. In this paper, we extend that work to employ multiple GPUs. The proposed methods minimize the amount of data that needs to be communicated among GPUs and achieve load balancing as well. Based on these methods, we also present algorithms for a GPU cluster. Experiments show that the single-node methods realize near-linear speedups, and the methods on a GPU cluster of eight nodes achieve up to a 7.1-fold speedup.
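
    The core primitive in probabilistic frequent itemset mining is the expected support of an itemset X, $\sum_t \prod_{i \in X} p(t,i)$. A single-GPU sketch of that primitive follows (not the paper's multi-GPU algorithm; the layout and names are assumptions).

        #include <cuda_runtime.h>

        // probs is an n x k row-major table: probs[t*k + j] holds the
        // existence probability of the itemset's j-th item in transaction
        // t (0 if absent). One thread per transaction; atomicAdd sums.
        __global__ void expectedSupport(const float *probs, int n, int k,
                                        float *support)
        {
            int t = blockIdx.x * blockDim.x + threadIdx.x;
            if (t >= n) return;
            float prod = 1.0f;
            for (int j = 0; j < k; ++j)
                prod *= probs[t * k + j];
            atomicAdd(support, prod);
        }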

  • Asynchronous Memory Machine Models with Barrier Synchronization

    Koji NAKANO  

    PAPER-Parallel and Distributed Computing
    Vol: E97-D No:3  Page(s): 431-441

    The Discrete Memory Machine (DMM) and the Unified Memory Machine (UMM) are theoretical parallel computing models that capture the essence of the shared memory and the global memory of GPUs. Warps (groups of threads) on the DMM and the UMM are assumed to work synchronously in a round-robin manner. However, warps work asynchronously on real GPUs, in the sense that they are randomly (or arbitrarily) dispatched for execution. The first contribution of this paper is to introduce asynchronous versions of these models in which warps are arbitrarily dispatched. In addition, we assume that threads can execute the "syncthreads" instruction for barrier synchronization. Since the barrier synchronization operation may be costly, we should evaluate and minimize the number of barrier synchronization steps executed by parallel algorithms. The second contribution of this paper is a parallel algorithm that computes the sum of n numbers in optimal computing time with few barrier synchronization steps. Our parallel algorithm computes the sum of n numbers in $O(\frac{n}{w}+l\log n)$ time units and $O(\frac{\log l}{\log w}+\log\log w)$ barrier synchronization steps using wl threads on the asynchronous UMM with width w and latency l. Since the computation of the sum takes at least $\Omega(\frac{n}{w}+l\log n)$ time units, this algorithm is time optimal. Finally, we show that the prefix-sums of n numbers can also be computed in $O(\frac{n}{w}+l\log n)$ time units and $O(\frac{\log l}{\log w}+\log\log w)$ barrier synchronization steps using wl threads.
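
    The familiar CUDA analogue of counting barrier steps is a block-level tree sum, where every halving round costs one "syncthreads" barrier, about log2(blockDim.x) in total. The sketch below illustrates the cost being minimized; it is not the paper's $O(\frac{\log l}{\log w}+\log\log w)$ construction.

        #include <cuda_runtime.h>

        // Block-level sum; launch with blockDim.x a power of two and
        // blockDim.x * sizeof(float) bytes of dynamic shared memory.
        __global__ void blockSum(const float *in, float *out, int n)
        {
            extern __shared__ float buf[];
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            buf[threadIdx.x] = (i < n) ? in[i] : 0.0f;
            __syncthreads();
            for (int half = blockDim.x / 2; half > 0; half >>= 1) {
                if (threadIdx.x < half)
                    buf[threadIdx.x] += buf[threadIdx.x + half];
                __syncthreads();          // one barrier per halving round
            }
            if (threadIdx.x == 0) out[blockIdx.x] = buf[0];
        }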

  • A GPU Implementation of Dynamic Programming for the Optimal Polygon Triangulation

    Yasuaki ITO  Koji NAKANO  

    PAPER
    Vol: E96-D No:12  Page(s): 2596-2603

    This paper presents a GPU (Graphics Processing Units) implementation of dynamic programming for the optimal polygon triangulation. GPUs can now be used for general-purpose parallel computation: users can develop parallel programs running on GPUs using the programming architecture CUDA (Compute Unified Device Architecture) provided by NVIDIA. The optimal polygon triangulation problem for a convex polygon is an optimization problem of finding a triangulation with minimum total weight. It is known that this problem for a convex n-gon can be solved using the dynamic programming technique in $O(n^3)$ time using a work space of size $O(n^2)$. In this paper, we propose an efficient parallel implementation of this $O(n^3)$-time algorithm on the GPU. Our implementation uses two new ideas to accelerate the dynamic programming. The first idea (adaptive granularity) is to partition the dynamic programming algorithm into many sequential kernel calls of CUDA, and to select the best parameters for the size and the number of blocks for each kernel call. The second idea (sliding and mirroring arrangements) is to arrange the working data for coalesced access to the global memory of the GPU to minimize memory access overhead. Our implementation using these two ideas solves the optimal polygon triangulation problem for a convex 8192-gon in 5.57 seconds on the NVIDIA GeForce GTX 680, while a conventional CPU implementation runs in 1939.02 seconds. Thus, our GPU implementation attains a speedup factor of 348.02.
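
    The $O(n^3)$ recurrence being parallelized is compact: with m[i][j] the minimum weight of triangulating vertices i..j and m[i][i+1] = 0, m[i][j] = min over i < k < j of m[i][k] + m[k][j] + w(i,k,j). A sequential host-code sketch for reference (triangle perimeter as the weight is an assumption, one common choice):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Pt { double x, y; };

        static double dist(const Pt &a, const Pt &b)
        {
            return std::hypot(a.x - b.x, a.y - b.y);
        }

        // Returns the minimum total weight over all triangulations of a
        // convex polygon given by vertices v[0..n-1] in order.
        double optimalTriangulation(const std::vector<Pt> &v)
        {
            int n = (int)v.size();
            std::vector<std::vector<double>> m(n, std::vector<double>(n, 0.0));
            for (int gap = 2; gap < n; ++gap)         // chain length
                for (int i = 0; i + gap < n; ++i) {
                    int j = i + gap;
                    m[i][j] = 1e300;
                    for (int k = i + 1; k < j; ++k) { // split vertex
                        double w = dist(v[i], v[k]) + dist(v[k], v[j])
                                 + dist(v[i], v[j]);
                        m[i][j] = std::min(m[i][j], m[i][k] + m[k][j] + w);
                    }
                }
            return m[0][n - 1];
        }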

  • HiCrypt: A Specialized Translator for Symmetric Block Cipher and GPGPU

    Keisuke IWAI  Naoki NISHIKAWA  Takakazu KUROKAWA  

    PAPER
    Vol: E96-D No:12  Page(s): 2575-2586

    Many-core computer systems with GPUs are coming into mainstream use, from high-end computing, including supercomputers, to embedded processors. Consequently, the implementation of cryptographic methods on GPGPU is also becoming popular because of such systems' performance. However, many factors affect the performance of GPUs. To cope with this problem, we developed a new translator, HiCrypt, which can generate an optimized GPGPU program, written in either CUDA or OpenCL, from a cipher program written in standard C language with directives. Users must annotate only variables and the encoding/decoding functions, which are characteristic of cipher programs, with directives. To evaluate the translator, five representative cipher programs were translated into CUDA and OpenCL programs. The generated programs achieve throughput almost identical to hand-optimized programs for all five ciphers. HiCrypt will contribute to the development and evaluation of new and varied symmetric block ciphers using GPGPU.

  • Optimal Parallel Algorithms for Computing the Sum, the Prefix-Sums, and the Summed Area Table on the Memory Machine Models

    Koji NAKANO  

    PAPER
    Vol: E96-D No:12  Page(s): 2626-2634

    The main contribution of this paper is to show optimal parallel algorithms to compute the sum, the prefix-sums, and the summed area table on two memory machine models, the Discrete Memory Machine (DMM) and the Unified Memory Machine (UMM). The DMM and the UMM are theoretical parallel computing models that capture the essence of the shared memory and the global memory of GPUs. These models have three parameters: the number p of threads, the width w of the memory, and the memory access latency l. We first show that the sum of n numbers can be computed in $O(\frac{n}{w}+\frac{nl}{p}+l\log n)$ time units on the DMM and the UMM. We then go on to show that $\Omega(\frac{n}{w}+\frac{nl}{p}+l\log n)$ time units are necessary to compute the sum. We also present a parallel algorithm that computes the prefix-sums of n numbers in $O(\frac{n}{w}+\frac{nl}{p}+l\log n)$ time units on the DMM and the UMM. Finally, we show that the summed area table of size $\sqrt{n}\times\sqrt{n}$ can be computed in $O(\frac{n}{w}+\frac{nl}{p}+l\log n)$ time units on the DMM and the UMM. Since computing the prefix-sums and the summed area table is at least as hard as computing the sum, these parallel algorithms are also optimal.
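
    For contrast with the optimal memory-machine algorithm, a standard single-block inclusive scan (prefix sums) on the GPU looks as follows; this Hillis-Steele sketch is a common baseline, not the paper's construction.

        #include <cuda_runtime.h>

        // Inclusive scan of up to blockDim.x elements in one block; launch
        // with blockDim.x a power of two and blockDim.x * sizeof(float)
        // bytes of dynamic shared memory.
        __global__ void inclusiveScan(const float *in, float *out, int n)
        {
            extern __shared__ float buf[];
            int t = threadIdx.x;
            buf[t] = (t < n) ? in[t] : 0.0f;
            __syncthreads();
            for (int offset = 1; offset < blockDim.x; offset <<= 1) {
                float add = (t >= offset) ? buf[t - offset] : 0.0f;
                __syncthreads();      // finish reads before writing
                buf[t] += add;
                __syncthreads();
            }
            if (t < n) out[t] = buf[t];
        }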

  • Window Memory Layout Scheme for Alternate Row-Wise/Column-Wise Matrix Access

    Lei GUO  Yuhua TANG  Yong DOU  Yuanwu LEI  Meng MA  Jie ZHOU  

    PAPER-Computer System
    Vol: E96-D No:12  Page(s): 2765-2775

    The effective bandwidth of dynamic random-access memory (DRAM) under the alternate row-wise/column-wise matrix access (AR/CMA) mode, a basic access pattern in scientific and engineering applications, is very low. We therefore propose the window memory layout scheme (WMLS), a matrix layout scheme for AR/CMA applications that does not require transposition. This scheme maps one row of a logical matrix into a rectangular memory window of the DRAM to balance the bandwidth of row- and column-wise matrix access and to increase the DRAM I/O bandwidth. The optimal window configuration is theoretically analyzed to minimize the total number of no-data-visit operations of the DRAM. Different WMLS implementations are presented according to the memory structures of the field-programmable gate array (FPGA), CPU, and GPU platforms. Experimental results show that the proposed WMLS can significantly improve DRAM bandwidth for AR/CMA applications: speedup factors of 1.6x and 2.0x are achieved for the general-purpose CPU and GPU platforms, respectively. For the FPGA platform, with a custom WMLS DRAM controller, the maximum bandwidth for the AR/CMA mode reaches 5.94 GB/s, a 73.6% improvement over the traditional row-wise access mode. Finally, we apply the WMLS scheme to a Chirp Scaling SAR application; compared with the traditional access approach, maximum speedup factors of 4.73x, 1.33x, and 1.56x are achieved for the FPGA, CPU, and GPU platforms, respectively.

  • Accelerating Range Query Processing on R-Tree Using Graphics Processing Units

    Boseon YU  Hyunduk KIM  Wonik CHOI  Dongseop KWON  

    PAPER-Data Engineering, Web Information Systems
    Vol: E96-D No:12  Page(s): 2776-2785

    Recently, various research efforts have been conducted to develop strategies for accelerating multi-dimensional query processing using graphics processing units (GPUs). However, well-known multi-dimensional access methods such as the R-tree, the B-tree, and their variants are hardly applicable to GPUs in practice, mainly due to the characteristics of their hierarchical index structures. More specifically, the hierarchical structure not only causes frequent transfers of small volumes of data but also provides limited opportunity to exploit the advanced data parallelism of GPUs. To address these problems, we propose an approach that uses the GPU as a buffer. The main idea is that object entries in recently visited leaf nodes are buffered in the global memory of the GPU and processed by its massively parallel threads. Through extensive performance studies, we observed that the proposed approach achieves query performance up to five times higher than that of the original R-tree.
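
    Once leaf entries sit in GPU global memory, the buffered scan is a brute-force filter, one thread per entry; a hedged sketch of that step (not the authors' full system):

        #include <cuda_runtime.h>

        struct Rect { float xlo, ylo, xhi, yhi; };

        __device__ bool overlaps(const Rect &a, const Rect &b)
        {
            return a.xlo <= b.xhi && b.xlo <= a.xhi &&
                   a.ylo <= b.yhi && b.ylo <= a.yhi;
        }

        // Each thread tests one buffered entry's MBR against the query
        // rectangle; hits are compacted via an atomic counter.
        __global__ void rangeQuery(const Rect *entries, int n, Rect q,
                                   int *hits, int *hitCount)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n && overlaps(entries[i], q))
                hits[atomicAdd(hitCount, 1)] = i;
        }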

  • Offline Permutation Algorithms on the Discrete Memory Machine with Performance Evaluation on the GPU

    Akihiko KASAGI  Koji NAKANO  Yasuaki ITO  

    PAPER
    Vol: E96-D No:12  Page(s): 2617-2625

    The Discrete Memory Machine (DMM) is a theoretical parallel computing model that captures the essence of the shared memory access of GPUs. Bank conflicts should be avoided to maximize the bandwidth of shared memory access. Offline permutation of an array is the task of copying all elements in array a into array b along a permutation given in advance. The main contribution of this paper is to implement a conflict-free permutation algorithm on the DMM on a GPU. We have also implemented straightforward permutation algorithms on the GPU. The experimental results for 1024 double (64-bit) numbers on an NVIDIA GeForce GTX-680 show that the straightforward permutation algorithm takes 247.8 ns for a random permutation and 1684 ns for the worst permutation, which involves the maximum number of bank conflicts. Our conflict-free permutation algorithm runs in 167 ns for any permutation, including the random permutation and the worst permutation, although it performs more memory accesses. It follows that our conflict-free permutation is 1.48 times faster for the random permutation and 10.0 times faster for the worst permutation.
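
    The bank conflicts in question are the classic shared-memory kind. The standard padding trick below conveys the flavor (a 32-wide tile padded to 33 columns so a column walk hits 32 distinct banks); it illustrates conflict avoidance in a matrix transpose and is not the paper's conflict-free permutation algorithm.

        #include <cuda_runtime.h>

        #define TILE 32

        // Transpose an n x n matrix (n a multiple of 32); launch with
        // blockDim = (32, 32) and gridDim = (n/32, n/32). The +1 column
        // of padding removes shared-memory bank conflicts.
        __global__ void transposeTile(const float *in, float *out, int n)
        {
            __shared__ float tile[TILE][TILE + 1];
            int x = blockIdx.x * TILE + threadIdx.x;
            int y = blockIdx.y * TILE + threadIdx.y;
            tile[threadIdx.y][threadIdx.x] = in[y * n + x];
            __syncthreads();
            x = blockIdx.y * TILE + threadIdx.x;   // transposed origin
            y = blockIdx.x * TILE + threadIdx.y;
            out[y * n + x] = tile[threadIdx.x][threadIdx.y];
        }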
