
Tracking tornadoes, botnets and the solar system

TORONTO—Exascale computing, or one million trillion calculations per second, is the next barrier to beat in the realm of high-performance computing (HPC). But one expert said reaching that target will be of little value if the enormous amount of data generated is not put to full use.
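
To put those figures side by side (a quick arithmetic sketch based on the definitions above, not numbers from the talks), the jump from petascale to exascale is three orders of magnitude:

    # Standard prefixes behind the figures cited in the article.
    PETAFLOPS = 10**15   # one thousand trillion calculations per second
    EXAFLOPS = 10**18    # one million trillion calculations per second
    print(EXAFLOPS // PETAFLOPS)   # 1000 -> an exascale system is 1,000 times a petascale one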


Eng Lim Goh, chief technology officer with Fremont, Calif.-based HPC vendor SGI (Silicon Graphics International Corp.), said that even at the current state of petascale HPC, or one thousand trillion calculations per second, much of the data produced goes unused. One organization he knows leaves 95 per cent of the data it generates “on the floor, untouched because they can’t deal with the data produced,” said Goh.


“We’re still not dealing with it today even with petascale,” Goh told the audience during the five-day HPCS 2010 conference.


HPC is used to run massive and intensive computations for tasks such as simulating the path of a tornado and where it will touch down, in an effort to save lives.


But complicating the matter is that data is generated not just by HPC systems but also by myriad sensors, such as satellites and cameras. Goh noted that the NASA Solar Dynamics Observatory alone generates 1.4 terabytes of data daily.
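
To put that single instrument in perspective, a rough back-of-the-envelope calculation using the 1.4-terabyte-a-day figure above shows how quickly an archive fills up:

    # Rough yearly volume from the Solar Dynamics Observatory alone, based on
    # the 1.4 TB/day figure cited above.
    TB_PER_DAY = 1.4
    print(TB_PER_DAY * 365)          # ~511 TB a year from a single instrument
    print(TB_PER_DAY * 365 / 1000)   # roughly half a petabyte a year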


With so much data generated, Goh said storage systems quickly fill up and, often, data analysts will give the data a mere cursory glance before throwing it out to avoid the cost of storing it.


Goh said the data analysts themselves often work with an abstract goal and aren’t even sure what they are looking to discover.


Another speaker, Alan Gara, an architect of IBM’s Blue Gene supercomputer, said there are challenges for both the builders and users of an exascale HPC platform. Ideally, said Gara, such a system would have to deliver sustained performance per dollar, sustained performance per watt, and reliability and ease of use.


“All of these dimensions represent a large challenge especially when all areas are taken at the same time,” Gara told the audience.


For one, chipmakers cannot continue to build chips the way they do today if exascale computing is to be achieved, said Gara. The current model, in which memory is a constraint, limits performance growth, he added. “Otherwise, we’re going to be building systems that could have potentially extremely high compute potential but very little memory bandwidth to support that,” said Gara.
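
As a minimal, hypothetical sketch of the imbalance Gara describes (the figures below are illustrative assumptions, not his), the standard roofline view says a node’s attainable performance is the lesser of its peak compute and its memory bandwidth multiplied by how many operations the code performs per byte it moves:

    # Roofline-style estimate with made-up figures, only to illustrate how limited
    # memory bandwidth caps the compute that can actually be used.
    PEAK_FLOPS = 10e12       # assumed peak compute of one node: 10 teraflops
    MEM_BANDWIDTH = 200e9    # assumed memory bandwidth: 200 GB/s

    def attainable_flops(ops_per_byte):
        # Attainable performance = min(peak compute, bandwidth x arithmetic intensity).
        return min(PEAK_FLOPS, MEM_BANDWIDTH * ops_per_byte)

    # A bandwidth-bound kernel doing 0.25 operations per byte reaches only a
    # small fraction of the chip's peak:
    print(attainable_flops(0.25) / PEAK_FLOPS)   # 0.005 -> half of one per cent of peak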


That said, Gara believes the industry will see how the future of HPC plays out in the 2011-2012 time frame, when large computing machines will already have the architecture needed to overcome these memory challenges.


In 2009, Sandia National Laboratories in Livermore, Calif., emulated a mini-Web on an HPC platform to investigate the behaviour of botnets. Don Rudish, a researcher with Sandia, told the audience the project will help develop counter-measures against botnet attacks.


But it’s not just about botnets. Rudish said valuable lessons learned in building an HPC system comprising one million nodes will contribute to future exascale computing. For instance, the team had to learn how to deal with challenges such as monitoring, rebooting, visualizing all the nodes and storing the generated data.


The HPCS 2010 conference concluded Wednesday.


Follow Kathleen Lau on Twitter: @KathleenLau
