Supercomputing ‘arms race’ could prove costly: Obama

China’s rise to the top of the Top500 supercomputing list in November got the attention of President Barack Obama, who’s using the news to underscore concerns about America’s standing as a global competitive power.

Obama has twice mentioned China’s move to the top of the supercomputer list in recent days, once in a speech and again at a press conference. Other administration officials have also noted China’s displacement of the U.S. atop the list.

The increasing attention to high performance computing rankings as an indicator of global leadership raises a broader question: how is supercomputer performance best measured?

The answer is not so simple.

Rankings on the Top500 list are determined by how a system performs on the Linpack test, which measures floating point computing power. But that test is only one metric, and in a report issued last week, the President’s Council of Advisors on Science and Technology expressed concern that a focus on building systems to do well on Linpack may divert money from research that could lead to new breakthroughs in supercomputing.
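For context, Linpack’s core task is solving a large, dense system of linear equations, with the result reported as a floating-point rate. The sketch below illustrates that idea in Python; it is a rough illustration rather than the actual HPL benchmark code, and the problem size shown is far smaller than anything a ranked system would run.

```python
# Minimal sketch of the idea behind the Linpack benchmark: time the
# solution of a dense linear system Ax = b, then convert the elapsed
# time into a floating-point rate. Real Top500 runs use the highly
# tuned HPL implementation and vastly larger problems.
import time
import numpy as np

n = 2000                        # illustrative problem size
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)       # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3      # standard operation count for an n x n dense solve
print(f"~{flops / elapsed / 1e9:.2f} Gflop/s")
```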

“The goal of our investment in HPC should be to solve computational problems that address our current national priorities, and this one-dimensional benchmark measures only one of the capabilities relevant to those priorities,” said the report.

Despite China’s new ranking atop the list, the U.S. remains dominant in high performance computing. Of the 500 systems on the Top500 list, 280 are in the U.S., and American companies such as IBM and Intel supply many of the systems and underlying technologies. The U.S. currently holds five of the top 10 positions on the list, though that does represent a decline from recent years.

The White House advisory council isn’t suggesting that the U.S. retreat from its dominant position in high performance computing. It argues, instead, that focusing on a single metric for global rankings could divert resources from research more likely to produce new breakthroughs.

“While it would be imprudent to allow ourselves to fall significantly behind our peers with respect to scientific performance benchmarks that have demonstrable practical significance, a single-minded focus on maintaining clear superiority in terms of flops count is probably not in our national interest,” the report said.

“Engaging in such an ‘arms race’ could be very costly, and could divert resources away from basic research aimed at developing the fundamentally new approaches to HPC that could ultimately allow us to ‘leapfrog’ other nations, maintaining the position of unrivaled leadership that America has historically enjoyed in high performance computing,” it added.

The White House view could help accelerate acceptance of other metrics for measuring system performance.

Alternatives to the Linpack test are emerging. For instance, an international team led by Sandia National Laboratories released a new rating system, the Graph 500, in November.

Richard Murphy, a researcher at Sandia who helped develop Graph 500, said the benchmark is complementary to the one used to compile the Top500 list.

“Top500 ranks compute intensive applications, and the point of Graph 500 is to rank data intensive applications, which are a major new scientific and engineering research challenge,” Murphy said in an e-mail response to questions from Computerworld.

Business areas that the Graph 500 metric could benefit include cybersecurity, data enrichment, medical informatics and social networks, said Murphy.

The Graph 500 benchmark drew nine submissions for its first list, released in November, “which is about what we were hoping for,” Murphy said.

“There were probably another 8-10 groups that inquired about submitting but didn’t do so because they didn’t know how to baseline the performance numbers they got. Basically, because nobody had put a performance stake in the ground, they didn’t know if they performed well or not and didn’t want to risk submitting,” he added.

Graph 500 measures both performance and problem size, which means a smaller HPC system could outrank a larger one, though in many cases only by solving a smaller problem.
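To make the contrast concrete: where Linpack reports a floating-point rate on a dense solve, Graph 500’s kernel is a breadth-first search over a large graph, reported as traversed edges per second (TEPS) at a stated problem scale. The toy sketch below shows that style of measurement in Python; the scale of 14 and the simple random graph are illustrative stand-ins, since the official benchmark specifies synthetic Kronecker graphs at much larger scales.

```python
# Toy sketch of a Graph 500-style measurement: run a breadth-first
# search and report traversed edges per second (TEPS) together with
# the problem scale. The official benchmark specifies Kronecker graph
# generation and a validation step; this only illustrates the
# performance-plus-problem-size idea.
import random
import time
from collections import deque

scale = 14                      # Graph 500 states size as 2**scale vertices
n = 2 ** scale
edge_factor = 16                # default edges-per-vertex in the official spec
adjacency = [[] for _ in range(n)]
for _ in range(n * edge_factor):
    u, v = random.randrange(n), random.randrange(n)
    adjacency[u].append(v)
    adjacency[v].append(u)

start = time.perf_counter()
visited = [False] * n
visited[0] = True
queue = deque([0])
traversed = 0
while queue:
    u = queue.popleft()
    for v in adjacency[u]:
        traversed += 1           # count every edge examined
        if not visited[v]:
            visited[v] = True
            queue.append(v)
elapsed = time.perf_counter() - start

print(f"scale {scale}: {traversed / elapsed / 1e6:.2f} MTEPS")
```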

Jack Dongarra, a professor of computer science at the University of Tennessee, a distinguished research staff member at Oak Ridge National Laboratory and one of the researchers who maintain the Top500 list, said he agrees with the White House report. A benchmark he helped develop, the High Performance Computing Challenge (HPCC), tries to address the problem by combining seven performance tests that measure multiple aspects of a system. HPCC was originally developed for DARPA.

“The benchmark that goes into generating the Top500 is only one number measuring only one aspect of a system,” said Dongarra in response to e-mail questions. “The Graph 500 will have the same problem. HPCC attempts to provide many more numbers and test points,” he said.
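Dongarra’s point, that a single number can’t characterize a machine, is easy to demonstrate. The sketch below is loosely inspired by two of HPCC’s components rather than taken from the suite itself: it times a compute-bound dense multiply and a memory-bound array copy on the same machine, and the two rates it prints typically tell very different stories.

```python
# Toy illustration of why one benchmark number is not enough: the same
# machine earns one score on a compute-bound kernel and a very
# different one on a memory-bound kernel. Loosely inspired by HPCC's
# DGEMM and STREAM components, but not the actual benchmark code.
import time
import numpy as np

n = 1500
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Compute-bound: dense matrix multiply, roughly 2*n**3 operations.
start = time.perf_counter()
C = A @ B
gflops = 2.0 * n**3 / (time.perf_counter() - start) / 1e9

# Memory-bound: STREAM-style copy, limited by memory bandwidth.
big = np.random.rand(50_000_000)    # about 400 MB of float64
start = time.perf_counter()
copied = big.copy()
gbytes = 2 * big.nbytes / (time.perf_counter() - start) / 1e9  # read + write

print(f"dense multiply: ~{gflops:.1f} Gflop/s")
print(f"array copy:     ~{gbytes:.1f} GB/s")
```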
