Big flops wanted: high performance computing on the rise

Last summer the Department of Energy’s US$120-million IBM Roadrunner supercomputer at Los Alamos National Laboratory was declared the fastest computer in the world, churning out an incredible 1.026 petaflops and becoming the first system ever to break the petaflops barrier.

The word “flops” is an acronym for Floating Point Operations Per Second, and 1 petaflops represents 1,000 trillion calculations per second. The Roadrunner accomplishment is all the more remarkable given that the fastest machine the year before, another IBM supercomputer (BlueGene/L at Lawrence Livermore National Laboratory), could only manage 280 teraflops.

Two other supercomputers, both from Cray (and housed at Oak Ridge and Sandia National Laboratories), barely made it past 100 teraflops. In fact, Roadrunner’s raw computational power is so staggering that it exceeds the combined performance of the top 10 systems from the June 2007 contest.

HPC in the enterprise

The market for High Performance Computing (HPC) has been growing steadily over the last 15 years, with system performance increasing a thousandfold in the last decade alone thanks to vertical (more CPUs and cores) and horizontal (more nodes) scaling.

On the hardware side, HPC is benefiting from advances in chip technology from Intel and AMD (with their multi- and many-core CPUs). Roadrunner uses about 6,000 dual-core AMD Opteron CPUs, in addition to 12,000 of IBM’s proprietary PowerXCell 8i chips.
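
As a rough back-of-the-envelope sketch of what those chip counts add up to, the aggregate peak can be estimated by simple multiplication. The per-chip GFLOPS figures below are illustrative assumptions, not IBM’s published specifications, and sustained benchmark results are always lower than theoretical peak:

    // Back-of-the-envelope estimate of aggregate peak throughput for a
    // hybrid cluster like Roadrunner. Per-chip GFLOPS figures are
    // illustrative assumptions, not published specifications.
    public class PeakFlopsEstimate {
        public static void main(String[] args) {
            double opterons = 6000;      // dual-core AMD Opteron CPUs
            double opteronGflops = 10.0; // assumed peak GFLOPS per Opteron
            double cells = 12000;        // PowerXCell 8i chips
            double cellGflops = 100.0;   // assumed peak GFLOPS per Cell

            double totalGflops = opterons * opteronGflops + cells * cellGflops;
            // 1 petaflops = 1,000,000 gigaflops
            System.out.printf("Estimated peak: %.2f petaflops%n", totalGflops / 1.0e6);
        }
    }

Even with these deliberately round numbers, the estimate lands in petaflops territory, with the Cell accelerators contributing the overwhelming majority of the throughput.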

Supercomputers are becoming less expensive, greener, more open and more clustered, making them suitable for markets beyond the traditional government labs, universities and energy (oil and gas) companies. Now, industries such as aerospace (and manufacturing as a whole) and finance (where analysis is heavily dependent on processor-intensive algorithms) are being drawn to HPC.

HPC, for example, can be used for enterprise solutions such as eliminating physical prototyping for autos and airplanes, simulating climate control and other features in the design of luxury cars, analyzing customer behavior, identifying business trends to manage global supply chains, data mining with neural networks, and so on.

Almost all hardware vendors are readying HPC offerings, including mainstream server vendors such as Dell, Sun and HP, supercomputing leaders such as IBM, and specialized vendors like Cray (which is partnering with Microsoft), Bull, Fujitsu, SGI, Siemens, Unisys and others.

Other key members of the HPC ecosystem include hardware vendors that specialize in storage, video and communication equipment, such as NVIDIA, Seagate and Cisco, and enablers such as Intel and AMD. In addition, hardware appliance vendors like Azul Systems are addressing niche HPC application areas, such as memory-resident eXtreme Transaction Processing (XTP) for financial markets.

It is likely that HPC clusters will be the central building blocks of many on-premises and cloud computing infrastructures, helping established enterprises and start-ups leverage the platform’s throughput to accelerate their go-to-market programs.

Platform

Linux can easily be branded the de facto operating system for HPC clusters. Other variants of UNIX, such as Solaris and AIX, also have a reasonable presence. Windows has recently entered the fray with the revised Windows HPC Server 2008 (HPCS), which replaces Windows Compute Cluster Server 2003. Windows HPC Server 2008 is also meant to provide a platform for service-oriented architecture-based applications.

Whether it is UNIX/Linux or Windows, the HPC platform would include virtualization software from VMware, Citrix (Xen) and others as an integral component.

Applications

While specialized languages like Parallel Fortran are used to develop high-performance scientific applications for HPC environments, they are complemented by Java and .Net.

Portability, network-centricity and security are perhaps Java’s strongest features in the context of HPC application development. Efforts around high-performance programming in Java (sometimes under the heading of ‘HP Java’) have been underway for quite a while, addressing topics such as threading and concurrency in the new world of multi-core architectures. Likewise on the .Net front, following the release of Windows HPCS in September 2008, Microsoft released developer tools for parallel computing, concurrency and coordination in a clustered environment. In the June 2008 contest, a build of Windows HPC Server 2008 (with 1,000+ nodes and 9,000+ cores) was ranked No. 23.
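
As a minimal sketch of that threading-and-concurrency style of programming, the following uses Java’s standard java.util.concurrent package to spread a numerical workload (a toy midpoint-rule estimate of pi, chosen purely for illustration) across all available cores:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Estimates pi by integrating 4/(1+x^2) over [0,1], splitting the
    // interval into one chunk per core and summing the partial results.
    public class ParallelPi {
        public static void main(String[] args) throws Exception {
            final int cores = Runtime.getRuntime().availableProcessors();
            final int steps = 10000000;
            final double width = 1.0 / steps;

            ExecutorService pool = Executors.newFixedThreadPool(cores);
            List<Future<Double>> parts = new ArrayList<Future<Double>>();
            for (int c = 0; c < cores; c++) {
                final int start = c * (steps / cores);
                final int end = (c == cores - 1) ? steps : start + steps / cores;
                parts.add(pool.submit(new Callable<Double>() {
                    public Double call() {
                        double sum = 0.0;
                        for (int i = start; i < end; i++) {
                            double x = (i + 0.5) * width;
                            sum += 4.0 / (1.0 + x * x);
                        }
                        return sum * width; // this core's slice of the integral
                    }
                }));
            }

            double pi = 0.0;
            for (Future<Double> part : parts) pi += part.get();
            pool.shutdown();
            System.out.println("pi ~= " + pi);
        }
    }

Each core integrates its own slice of the interval independently; this divide-and-aggregate shape is the same one that scales out across the nodes of an HPC cluster.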

In the realm of data warehousing, HPC is a much-needed platform for complex data mining involving multi-dimensional datasets (also called “cuboids”). Parallel computing can be exploited to split the work into independent tracks such as mapping and reducing, cleansing and analyzing, querying and mining, and so on (the first of these is sketched below). Procter & Gamble is one of the marquee accounts leading the way, with a Windows-based HPC program being rolled out to support a variety of applications, including engineering, scientific research, financial analysis and marketing.
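
The mapping-and-reducing track can be illustrated in miniature. The toy example below (the sales records and field layout are hypothetical, and the code stands in for no particular product’s API) maps raw records to key/value pairs and then reduces them into per-key totals using Java’s parallel streams:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Toy map/reduce: map "region,amount" records to (key, value) pairs,
    // then reduce by summing the amounts per region. Real HPC deployments
    // distribute both phases across many nodes.
    public class MiniMapReduce {
        public static void main(String[] args) {
            List<String> records = Arrays.asList(
                    "east,120", "west,80", "east,45", "north,200", "west,15");

            Map<String, Integer> totals = records.parallelStream()
                    .map(r -> r.split(","))                 // map phase
                    .collect(Collectors.groupingBy(         // reduce phase
                            f -> f[0],
                            Collectors.summingInt(f -> Integer.parseInt(f[1]))));

            System.out.println(totals); // e.g. {west=95, east=165, north=200}
        }
    }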

Future

Even though the petaflops barrier was broken only last June, this past February nuclear physicists announced plans to build, with IBM, a 20-petaflops machine nicknamed Sequoia, due in 2012. It will have 1.6 million cores, 1.6PB of memory and around 98,000 nodes.
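
Simple division on those published figures gives a feel for the machine’s shape: roughly 16 cores per node and about 1 GB of memory per core:

    // Derives per-node and per-core figures from the rounded Sequoia
    // numbers quoted above (1.6 million cores, 1.6PB memory, ~98,000 nodes).
    public class SequoiaShape {
        public static void main(String[] args) {
            double cores = 1.6e6;
            double nodes = 98000;
            double memoryGB = 1.6e6; // 1.6 petabytes expressed in gigabytes

            System.out.printf("~%.0f cores per node%n", cores / nodes);
            System.out.printf("~%.0f GB of memory per node%n", memoryGB / nodes);
            System.out.printf("~%.1f GB of memory per core%n", memoryGB / cores);
        }
    }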

The machine is expected to have the footprint of a 3,400-square-foot home and to be quite ‘green’, consuming only 6 megawatts of power. Initially, Sequoia will be used to implement elements of the Comprehensive Nuclear Test-Ban Treaty, such as developing and testing nuclear weapons through simulation rather than actual detonation.

If Sequoia were put to civilian use, such as weather science, it is said that with its astounding power of 20 petaflops it would be possible to predict local weather conditions down to a 100-meter range, allowing for timely evacuations in the event of a twister or a hurricane.
