One of technology’s most interesting battles is occurring among server chip makers like IBM Corp., AMD Inc. and Intel Corp. These vendors are continually looking to one-up each other — pushing out eight-, 12- and 16-core processors — and bringing unprecedented levels of high-performance computing power to enterprise IT shops.
But while the focus is often on the number of cores these new chips offer, enterprises must also pay attention to the servers, systems and applications used to take advantage of this new processing power.
Neil Bunn, a technology architect for deep computing at IBM Canada Ltd., said one of the biggest technology challenges of the last couple of years has been the dramatic increase in the number of processor cores available to applications. The software industry, he said, has lagged in developing applications that correctly parallelize workloads to take advantage of multiple cores.
“Being able to run a single application or a single job against an extremely large system is definitely a very large issue in HPC today,” Bunn said. “In fact, it’s an issue where there are a lot of perspectives on how we’re going to solve it, but no clearly defined path on which one is going to win out.”
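The parallelization Bunn describes — restructuring a job so its independent pieces can run on separate cores at once — can be sketched in a few lines of Python. This is a hypothetical illustration, not IBM's approach; `simulate` and `run_both` are stand-in names, and the key assumption is that each piece of work is independent of the others:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(cell):
    """Stand-in for one independent piece of a larger job (hypothetical)."""
    return cell * cell

def run_both(cells):
    """Run the same workload serially and in parallel; results must match."""
    # Serial version: one core walks through every piece of work.
    serial = [simulate(c) for c in cells]
    # Parallel version: the same independent pieces spread across
    # worker processes (one per core by default).
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(simulate, cells))
    return serial, parallel

if __name__ == "__main__":
    serial, parallel = run_both(range(8))
    print(serial == parallel)  # the parallel split must not change the answer
```

The easy case above works only because each call to `simulate` touches no shared state. The hard problem Bunn points to is restructuring real enterprise workloads — where pieces share data and depend on each other's results — so that this independence holds.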
Carl Claunch, a vice-president and analyst covering servers and storage for Gartner Research Inc., agreed, adding that the problem will only get more dramatic as core counts and cluster sizes increase.
“It’s kind of easy to hack at a program and say, ‘I can take this part that’s doing something and this other part that’s doing something else and separate them into two pieces,’” he said. “But when you’re trying to do that over 64 or 128 cores, the bigger the number gets, the harder it is to divide evenly.”
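The difficulty Claunch describes — carving one workload into 64 or 128 even pieces — can be seen in even the simplest splitting scheme. The sketch below is a hypothetical illustration (the function name `split_work` is invented for this example): with a handful of cores the chunks come out equal, but at higher core counts a remainder forces some cores to carry more than others.

```python
def split_work(tasks, n_cores):
    """Divide a list of tasks into n_cores chunks as evenly as possible."""
    base, extra = divmod(len(tasks), n_cores)
    chunks, start = [], 0
    for i in range(n_cores):
        # Spread the leftover tasks one at a time across the first chunks.
        size = base + (1 if i < extra else 0)
        chunks.append(tasks[start:start + size])
        start += size
    return chunks

tasks = list(range(100))
for n in (2, 64):
    sizes = [len(c) for c in split_work(tasks, n)]
    print(n, "cores -> chunk sizes from", min(sizes), "to", max(sizes))
```

At 2 cores every chunk holds 50 tasks; at 64 cores some chunks hold 2 tasks and others only 1 — and that assumes every task costs the same, which real workloads rarely do. Uneven task sizes make the balancing problem harder still.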
Claunch added that even though almost every major tech vendor has pumped serious resources into developing a general solution for this problem, no quick fix yet exists.