Oak Ridge National Laboratory (ORNL) is using CAPS' HMPP compiler to get the best computing power from a GPU-based hybrid cluster
ORNL is preparing for next-generation petascale computing with machines that will combine general-purpose central processing unit (CPU) cores with hundreds of graphics processing unit (GPU) cores, a combination that can deliver far greater floating-point performance than CPUs alone. In this context, ORNL has selected HMPP to enable the programming of high-performance parallel hybrid GPU/CPU applications.
HMPP is a directive-based, source-to-source compiler for C and Fortran that gives developers a high level of abstraction for programming and tuning GPU-accelerated scientific applications. Its directives support an incremental approach, letting developers start with minimal GPU expertise and progressively apply more advanced, expert-level optimisations. HMPP works with standard compilers and hardware vendor tools to build the application binary.
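To illustrate the directive-based style, the sketch below offloads a simple vector operation to a GPU by marking a function as an HMPP codelet and annotating its call site. The function name, data sizes and exact directive clauses are illustrative assumptions based on published HMPP examples, not a definitive reference; spellings vary between HMPP releases, and without the HMPP preprocessor the pragmas are simply ignored and the code runs on the CPU.

    #include <stdio.h>

    /* Declare a codelet: HMPP generates a CUDA version of this function.
       The target and io clauses shown here are illustrative. */
    #pragma hmpp saxpy codelet, target=CUDA, args[y].io=inout
    void saxpy(int n, float alpha, float x[n], float y[n])
    {
        for (int i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }

    int main(void)
    {
        enum { N = 1 << 20 };
        static float x[N], y[N];

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Mark the call site: HMPP offloads this call to the GPU
           and manages the host/device data transfers. */
        #pragma hmpp saxpy callsite
        saxpy(N, 3.0f, x, y);

        printf("y[0] = %f\n", y[0]);  /* expected 5.0 with the inputs above */
        return 0;
    }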
'We like the way HMPP addresses manycore programming: in addition to GPU programming directives that efficiently define and optimise computations offloaded in GPUs, the tuning directives give us control over the fine-tuning of the GPU-accelerated kernels,' said Richard Graham, group leader for the Applications Performance Tools group within ORNL’s Computer Science and Mathematics Division. 'Also, by letting us use our standard compilers, HMPP really seamlessly integrates in our development environment.'
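The distinction Graham draws between offload directives and tuning directives can be sketched as follows: once a loop nest sits inside a codelet, additional directives placed on the loops steer how the generated GPU kernel is shaped. The hmppcg spellings below follow published HMPP examples but should be read as an assumption about one HMPP release rather than as the definitive syntax.

    /* Inside an HMPP codelet: tuning directives shape the generated kernel.
       Clause names (gridify, unroll) are illustrative and release-dependent. */
    void mat_scale(int m, int n, float alpha, float a[m][n])
    {
        /* Map the outer loop onto the GPU thread grid. */
        #pragma hmppcg gridify(i)
        for (int i = 0; i < m; i++) {
            /* Unroll the inner loop to expose more work per thread. */
            #pragma hmppcg unroll(4)
            for (int j = 0; j < n; j++)
                a[i][j] *= alpha;
        }
    }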