The recent news that China took the top spot in a global supercomputer speed competition has put supercomputers back on the minds of IT. The Chinese Tianhe-2 supercomputer claimed first place in the semiannual TOP500 ranking of supercomputers, with a score of 33.86 petaflops on the Linpack benchmark for sustained performance.
That kind of computing power is overkill for most corporations. But increasingly, even companies that are not globe-spanning colossi are adding supercomputing to their list of IT assets, using it for everything from complex big data analytics to simulating, in minute detail, how dish soap behaves in a sink or how new detergent-container lids can be designed not to spill in use.
"HPC technical servers, especially supercomputers, have been closely linked not only to scientific advances but also to industrial innovation and economic competitiveness," Earl Joseph, head of technical computing at IDC, said in a prepared statement. Sales of supercomputers rose 65% between 2009 and 2010 and another 29% between 2011 and 2012, according to Joseph.
Supercomputer sales--machines IDC defines as HPC servers that sell for more than $500,000--have been rising because companies like Procter & Gamble use them for tasks like simulating the aerodynamics of Pringles chips in danger of being blown off the manufacturing line. Boutique car and truck manufacturers also use them to model the aerodynamic impact of new parts or changes to old ones, according to American Public Media's Marketplace Tech.
At the very highest end of computing, competition for these massive systems is fierce. The former TOP500 title holder, a Cray XK7 system named Titan, managed a Linpack rating of only 17.59 petaflops, according to TOP500. It's now ranked No. 2.
Not that 17.59 thousand trillion floating-point operations per second is anything to sneeze at, even for a supercomputer almost a year old in its current configuration. But that's long enough in supercomputer time to go from cutting edge to ho-hum.
Titan, which was installed at the Department of Energy's Oak Ridge National Laboratory last October, runs on 18,688 16-core AMD Opteron processors and 261,632 NVIDIA Kepler K20x GPU accelerator cores. Titan also carries 600 TB of memory and stores its thoughts on the Spider file system at 240 GB per second.
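A quick, back-of-the-envelope check shows how those processor counts roll up (the 560,640-core total is the figure TOP500 lists for Titan; treating the NVIDIA figure as accelerator cores rather than whole GPUs is what makes the arithmetic work):

    # Rough check of how Titan's published processor counts roll up.
    opteron_cpus = 18_688
    cores_per_opteron = 16
    k20x_accelerator_cores = 261_632   # counted as GPU accelerator cores, not whole GPUs

    cpu_cores = opteron_cpus * cores_per_opteron      # 299,008 Opteron cores
    total_cores = cpu_cores + k20x_accelerator_cores  # 560,640 -- the total TOP500 lists
    print(cpu_cores, total_cores)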
The No. 3 system is Sequoia, an IBM BlueGene/Q system installed at DOE's Lawrence Livermore National Laboratory, which hits 17.17 petaflops, followed by Fujitsu's K computer, installed in Kobe, Japan, which came in at a comparatively pokey 10.51 petaflops.
Tianhe-2 was originally expected to run as high as 100 petaflops, and still has a theoretical peak performance rating of between 53 and 55 petaflops.
The Linpack benchmark, which measures sustained performance on a dense linear-algebra workload (solving a large system of linear equations), is considered a more realistic metric for supercomputers than theoretical peak capacity.
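For readers who want a concrete sense of what a Linpack number represents, here is a minimal Python/NumPy sketch of the idea: time the solution of a dense linear system and convert the elapsed time into floating-point operations per second using the standard (2/3)n^3 + 2n^2 operation count. The matrix size and the peak-rate comparison below are hypothetical illustrations; the real benchmark, HPL, is a heavily tuned distributed-memory code.

    import time
    import numpy as np

    def linpack_style_gflops(n=4096, seed=0):
        """Time a dense solve of A x = b and estimate sustained GFLOP/s.

        Uses the conventional Linpack/HPL operation count of (2/3)*n**3 + 2*n**2,
        so the result reflects sustained throughput, not theoretical peak.
        """
        rng = np.random.default_rng(seed)
        a = rng.standard_normal((n, n))
        b = rng.standard_normal(n)

        start = time.perf_counter()
        np.linalg.solve(a, b)            # LU factorization plus triangular solves
        elapsed = time.perf_counter() - start

        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
        return flops / elapsed / 1e9

    if __name__ == "__main__":
        sustained = linpack_style_gflops()
        # Theoretical peak for comparison: sockets x cores x GHz x flops per cycle.
        # These figures describe a hypothetical dual-socket server, not any TOP500 system.
        peak = 2 * 12 * 2.2 * 8
        print(f"sustained ~{sustained:.1f} GFLOP/s vs. theoretical peak ~{peak:.0f} GFLOP/s")

The gap between the two numbers is the whole point of the benchmark: peak capacity is what the silicon could do in theory, while Linpack reports what the machine actually sustains on real work.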
China last held the top spot in the TOP500 rankings with Tianhe-2's older brother, the Tianhe-1A, which has a base architecture similar to that of the Tianhe-2.
Tianhe-2 has 16,000 nodes, each running two Intel Xeon Ivy Bridge processors and three Intel Xeon Phi coprocessors, for a total of 3,120,000 computing cores.
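Those figures hang together if you assume the commonly reported chip configurations for the system: 12-core Ivy Bridge Xeons and 57-core Xeon Phi parts. The quick arithmetic below (the per-chip core counts are my assumption, not part of the TOP500 listing) reproduces the 3,120,000 total.

    # Back-of-the-envelope check of Tianhe-2's published core count.
    # Per-chip core counts are assumptions: 12 cores per Ivy Bridge Xeon
    # and 57 cores per Xeon Phi coprocessor, as commonly reported.
    nodes = 16_000
    cores_per_node = 2 * 12 + 3 * 57   # 24 CPU cores + 171 coprocessor cores = 195
    total_cores = nodes * cores_per_node
    print(total_cores)                 # 3,120,000 -- matches the reported figure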
It includes 1.404 petabytes of memory and 12.4 petabytes of storage, and uses a high-performance interconnect called TH Express-2 along with 4,096 Galaxy FT-1500 CPUs--16-core processors developed at the National University of Defense Technology in Hunan province.
While this kind of power may be more than most companies need, do you see a place for supercomputing in your data center? Is your company embarking on projects that require supercomputing? I'd like your input; use the comments section to share your feedback.