GPGPUs

What are GPGPUs?

GPGPUs are general-purpose graphics processing units. This means using graphics cards for general-purpose computing. Because of the demands of modern graphics-intensive programs, graphics cards have become very powerful computers in their own right.

GPGPUs are particularly good at matrix multiplication, random number generation, FFTs, and other numerically intensive and repetitive mathematical operations. With careful programming, they can deliver a 5–10 times speed-up for many codes.

For more examples of applications that are well-suited to CUDA, a language that enables use of GPUs, see NVIDIA’s CUDA pages at http://www.nvidia.com/object/cuda_home.html.

Submitting Batch Jobs

To use a GPU, you must request one in your PBS script. To do so, add a node attribute to your #PBS -l line. Here is an example that requests one GPU.

#PBS -l nodes=1:gpus=1,mem=2gb,walltime=1:00:00,qos=flux

Note that you must use nodes=1 and not procs=1 or the job will not run.
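
Putting it together, a complete batch script might look like the following minimal sketch. The job name, the joined-output option, and the executable name gpu_example are placeholders for your own choices; the cuda module referred to in the script is described in the next section.

#PBS -N gpu_example
#PBS -l nodes=1:gpus=1,mem=2gb,walltime=1:00:00,qos=flux
#PBS -j oe

# Run from the directory the job was submitted from and make sure
# the CUDA libraries can be found at run time.
cd $PBS_O_WORKDIR
module load cuda
./gpu_example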

Programming for GPGPUs

The GPGPUs on Flux are NVIDIA graphics processors and use NVIDIA’s CUDA programming language. This is a very C-like language (that can be linked with Fortran codes) that makes programming for GPGPUs straightforward. For more information on CUDA programming, see the documentation at http://www.nvidia.com/object/cuda_develop.html.
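
As a small illustration of what CUDA code looks like, here is a minimal sketch of a vector-addition kernel and the host code that launches it. The file and variable names are arbitrary, and error checking is omitted for brevity.

#include <stdio.h>
#include <cuda_runtime.h>

/* Each thread adds one element of a to b and stores the result in c. */
__global__ void vecadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    /* Allocate and fill host arrays. */
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Allocate device memory and copy the inputs to the GPU. */
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecadd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    /* Copy the result back and check one element. */
    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}

Save this in a file with a .cu extension (for example, vecadd.cu) and compile it with nvcc, as described below.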

NVIDIA also makes special libraries available that make using the GPGPUs even easier. Two of these libraries are CUBLAS and CUFFT.

CUBLAS is a single-precision BLAS library that uses the GPGPU for matrix and vector operations. For more information on the BLAS routines implemented by CUBLAS, see the documentation at http://www.nvidia.com/object/cuda_develop.html.
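
As a sketch of how the library is used, the following program scales and adds two vectors on the GPU with the single-precision SAXPY routine. It assumes the legacy cublas.h interface and omits error checking; the variable names are arbitrary.

#include <stdio.h>
#include <cublas.h>

int main(void)
{
    const int n = 1024;
    float x[1024], y[1024];
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Initialize CUBLAS and allocate device vectors. */
    cublasInit();
    float *d_x, *d_y;
    cublasAlloc(n, sizeof(float), (void **)&d_x);
    cublasAlloc(n, sizeof(float), (void **)&d_y);

    /* Copy the vectors to the GPU, compute y = 3*x + y, copy y back. */
    cublasSetVector(n, sizeof(float), x, 1, d_x, 1);
    cublasSetVector(n, sizeof(float), y, 1, d_y, 1);
    cublasSaxpy(n, 3.0f, d_x, 1, d_y, 1);
    cublasGetVector(n, sizeof(float), d_y, 1, y, 1);
    printf("y[0] = %f\n", y[0]);   /* expect 5.0 */

    cublasFree(d_x);
    cublasFree(d_y);
    cublasShutdown();
    return 0;
}

Compile it with nvcc and link against libcublas, as described below.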

CUFFT is a set of FFT routines that use the GPGPU for their calculations. For more information on the FFT routines implemented by CUFFT, see the documentation at http://www.nvidia.com/object/cuda_develop.html.
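
Similarly, here is a minimal sketch of a forward complex-to-complex transform with CUFFT; error checking is omitted and the input signal is a trivial constant.

#include <stdio.h>
#include <cuda_runtime.h>
#include <cufft.h>

int main(void)
{
    const int n = 256;
    cufftComplex signal[256];
    for (int i = 0; i < n; i++) { signal[i].x = 1.0f; signal[i].y = 0.0f; }

    /* Copy the signal to the GPU. */
    cufftComplex *d_signal;
    cudaMalloc((void **)&d_signal, n * sizeof(cufftComplex));
    cudaMemcpy(d_signal, signal, n * sizeof(cufftComplex), cudaMemcpyHostToDevice);

    /* Plan and execute a 1D forward transform in place. */
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);

    /* Copy the result back; the DC bin should hold the sum of the inputs. */
    cudaMemcpy(signal, d_signal, n * sizeof(cufftComplex), cudaMemcpyDeviceToHost);
    printf("signal[0] = (%f, %f)\n", signal[0].x, signal[0].y);

    cufftDestroy(plan);
    cudaFree(d_signal);
    return 0;
}

This, too, is compiled with nvcc and linked against libcufft.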

To use the CUDA compiler (nvcc) or to link your code against one of the CUDA-enabled libraries, load the cuda module by typing:

module load cuda

This will give you access to the nvcc compiler and will set the environment variable CUDA_INSTALL_PATH, which can be used to link against libcublas, libcufft, and the other CUDA libraries in ${CUDA_INSTALL_PATH}/lib.
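
For example, a command along the following lines would compile and link the CUBLAS sketch above (the file names are placeholders, and on some 64-bit installations the libraries are in ${CUDA_INSTALL_PATH}/lib64 rather than lib):

nvcc -o saxpy_example saxpy_example.cu -L${CUDA_INSTALL_PATH}/lib -lcublas

Add -lcufft in the same way to link the CUFFT example.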

CUDA-based applications can be compiled on the login nodes, but cannot be run there, since the login nodes do not have GPGPUs.

To install the sample code, load the cuda module and type NVIDIA_CUDA_SDK_3.0_linux.run. Answer the questions about where you want the SDK installed, then change into that directory and type make to compile the sample code. Note that sample codes that require an X Windows interface will not work on Flux.
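
In other words, the steps look roughly like this, where the directory in the cd command stands for whatever install location you chose:

module load cuda
NVIDIA_CUDA_SDK_3.0_linux.run
cd <the directory you chose during the install>
make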