SPEC Unveils New Standard Benchmark Suite for Supercomputer GPU Accelerator Devices

First Posted: Apr 07, 2014 04:08 PM EDT

When evaluating computers and comparing performance, consumers, popular magazines, and websites rely on benchmarks to measure how powerful computer processors are. One well-known, standardized test is the Standard Performance Evaluation Corporation (SPEC) CPU2006 benchmark. Hardware engineers, developers, and vendors have been using SPEC benchmarks since 1988, when a small number of workstation vendors realized that the marketplace was in desperate need of realistic, standardized performance tests.

This month, SPEC released a new benchmark suite, SPEC ACCEL V1.0, that measures the performance of systems using hardware accelerator devices and supporting software. The new suite comprises 19 application benchmarks running under OpenCL and 15 under OpenACC. Both sets evolved from tests developed by well-respected research groups and derived from high-performance computing applications.
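OpenACC, the programming model behind 15 of the benchmarks, uses compiler directives to offload loops to an accelerator. The C sketch below is not taken from the SPEC ACCEL suite; it is a minimal, hypothetical illustration of the directive style such benchmarks rely on:

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    const float a = 2.0f;

    /* Initialize input data on the host. */
    for (int i = 0; i < N; ++i) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* saxpy (y = a*x + y): the directive asks the compiler to run
     * the loop on an attached accelerator, copying x in and y both
     * in and out. On a system without an accelerator, a compiler
     * can target the host CPU instead. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; ++i) {
        y[i] = a * x[i] + y[i];
    }

    printf("y[42] = %f\n", y[42]);
    return 0;
}
```

One attraction of this style is graceful degradation: with GCC, for instance, the same file builds with or without `-fopenacc`; when the flag is absent the directive is ignored and the loop simply runs sequentially on the host.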

A number of previous, uncoordinated efforts have set out to develop a benchmark suite relevant to running scientific codes that use a GPU's computing capabilities rather than its rendering capabilities. SPEC ACCEL, however, is the first such suite endorsed by a major standards organization. "We believe that this type of standardized benchmark, based on real-world, computationally intensive parallel applications, will be a valuable asset to a wide range of organizations in the high-performance computing community," says Kalyan Kumaran, SPEC High-Performance Group (HPG) chair.

While SPEC HPG has initiated attempts to lower or eliminate the benchmark cost, the current price is $2,000 for non-SPEC members and $800 for qualified non-profit and not-for-profit organizations. Some would argue the results are well worth the cost. “The benchmark puts the numbers in context,” says Robert Henschel, SPEC HPG secretary and manager of scientific applications and performance tuning at Indiana University in Bloomington, US.

“What you get in return is a professional benchmark that validates the results, comes in a standard harness, can be reproduced at will with the required hardware, and ensures confidence in the numbers. If you use SPEC ACCEL benchmark results to market a product, then those results must be published. This also gives the purchaser confidence that the numbers are correct.”

“Prior to publication, each result is peer-reviewed by SPEC HPG, further enhancing the usefulness of the published results,” notes Henschel. To test the performance of the underlying accelerators and software (compilers in particular), Huian Li, principal systems analyst at Indiana University, ran the suite on a variety of platforms, contributing significantly to the benchmark suite. “We were able to reproduce others’ results easily using the published SPEC ACCEL information.”

High-Performance Linpack, the other well-known benchmark widely used throughout the high-performance computing community, may include the math kernel library, for example, but concrete details about how it is compiled and linked are nonexistent. Overall, there is not enough context to reproduce the results. In contrast, SPEC ACCEL output is fairly robust, enabling users to understand the system build; the required version numbers; the configuration of the compute system; and the exact software, user environment, storage, operating system, and network topology.

GPUs are currently the premier way of accelerating systems, but the new SPEC ACCEL benchmark will also run on Intel MIC (Many Integrated Core) coprocessors and other hardware platforms that support OpenACC or OpenCL. This provides a unique opportunity to compare GPUs and MICs side by side.

“SPEC ACCEL was designed particularly so that it is not tied to any vendor-specific development,” says Henschel. “It will even run on a non-accelerated system; you can treat your CPU as an accelerator if you like. Given the accelerator battle taking place in the high-performance computing community, SPEC HPG has made every effort to be as neutral as possible. AMD, Intel, and NVIDIA have all contributed to the development of the benchmark.”
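To see why the same benchmark can target GPUs, MIC coprocessors, and plain CPUs, consider how an OpenCL host program discovers its hardware. This short C sketch (not part of SPEC ACCEL, and assuming an OpenCL SDK is installed) simply enumerates every platform and device the runtime exposes:

```c
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>   /* link with -lOpenCL */

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof pname, pname, NULL);
        printf("Platform: %s\n", pname);

        /* CL_DEVICE_TYPE_ALL picks up GPUs, CPUs, and other
         * accelerators alike, which is what lets a portable suite
         * treat the CPU as "the accelerator". */
        cl_device_id devices[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       16, devices, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof dname, dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof type, &type, NULL);
            printf("  Device: %s (%s)\n", dname,
                   type & CL_DEVICE_TYPE_GPU ? "GPU" :
                   type & CL_DEVICE_TYPE_CPU ? "CPU" : "other");
        }
    }
    return 0;
}
```

Because a CPU can appear in this list as an ordinary OpenCL device, running the OpenCL half of the suite on a non-accelerated system requires no special handling, just as Henschel describes.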

SPEC takes special pride in its application-level benchmarks, and the suites produced by SPEC HPG are no exception. The benchmarks consist of applications focused on scientific and technical computing, coded using standard parallel programming interfaces. SPEC ACCEL is designed to objectively compare not only the performance of accelerator hardware systems, but also accelerator programming models and accelerator-enabled compilers.

SPEC ACCEL includes 34 applications that do everything many scientific applications do, such as reading an input file, performing computations, and writing output. This makes it more representative of how real-world applications perform on a given platform than compute-kernel or synthetic benchmarks are. “It’s really hard to understand how a machine will perform at a certain workload just by looking at its specifications. You get a rough idea, but there’s nothing like truly running a representative workload,” Henschel says.

Together with Mathew Colgrove of NVIDIA and Guido Juckeland of Technische Universität Dresden, Germany, Henschel delivered a talk on the new SPEC ACCEL suite at the GPU Technology Conference in San Jose, California, US, on March 26, 2014. Their presentation featured an in-depth look at what benchmarks reveal about the performance characteristics of various accelerators. -- by Amber Harmon, © iSGTW
