Discover 2014: The next generation of servers could have memory pools measured in exabytes. But is it science fiction?

LAS VEGAS — In the movies, there’s no shortage of imagination for what computers with unlimited power can do — run time machines and starships, or let people virtually walk through data.

Hewlett-Packard Labs thinks some of that is “a couple of years away” if a new server architecture it’s working on can be commercialized.

For some, that’s a big if.

At a briefing for reporters Wednesday, officials gave a peek at what they dub “the machine,” which, if it comes to pass, would be a computer the size of a credit card combining a system on a chip, what HP calls memristors for memory and silicon photonics for internally transferring data by light.

Put tens of thousands of them together and you’d get a supercomputer that can store exabytes of data in memory for ultra-fast processing, but using tremendously less power than current technology.

“There’s no more opening and closing of files, worrying about clusters,” Kirk Bresniker, HP Labs chief architect of systems research, told reporters. “A data scientist no longer has to worry about how to break up a problem into manageable chunks.” Instead, the data can be directly queried.

The concept also envisions systems on a chip built for specific workloads using the architecture.


What “the machine” will also need is a new operating system — HP hopes it will be provided by the open source community — new analytics software and new data visualization software “so we can literally live inside the information,” he said.

In addition to being able to query huge amounts of data faster, he said, this new type of computing could be put to practical uses where machines spit out (or potentially spit out) vast amounts of data that could be processed immediately. For example, he said, it could be installed in a cellular network to instantly decide how to route traffic yet still maintain quality of service. Or airlines could use it to send and process more real-time data from aircraft in the air.

Is this a real vision? John Sontag, vice-president and director of HP systems research, says these computers are “a few years away.” HP is already discussing the memory part of the concept with SK Hynix on memristors, and with others on photonics.

There’s no doubt that the road we’re on in computing has to end at some point. Using existing technology, only so much density can be squeezed into a rack.

But microprocessor analyst Nathan Brookwood of Insight 64 thinks this HP concept will end up like another piece of fiction.
“There’s a lot to be said for placing more memory close to the CPUs in a system, and there are thousands of engineers in memory companies and CPU companies working on that problem already,” he said in an email. “Eventually one of those engineers will hit on a scheme that works in practice and can be scaled to large manufacturing processes. That engineer might work for HP or IBM or Intel or Oracle or some small startup you’ve never heard of.

“I seriously doubt that whenever the industry comes up with a key technology that allows higher density memories to be located closer to a CPU complex, that it will necessitate starting all over with new CPU architectures or operating systems. There’s just too much code out there running on the current hardware infrastructure to allow a totally different approach to gain ground. Instead, the innovations will be applied incrementally, and in a backward-compatible manner.”

To be fair, HP officials said in the press briefing that their concept of the computer board would include a small amount of DRAM for older applications.

But Brookwood pointed out that others have noted HP has been touting memristors for years, “and has little besides research papers at conferences to show for its efforts.

“The ‘Machine’ has a catchier name,” he said, “but I’d be very surprised if it ever amounts to much.”

In theory, HP said, a supercomputer built around its vision would whip existing behemoths. It believes a supercomputer based on its concept with 122,000 nodes would beat the current GUPS (giga updates per second, a measure of how frequently a computer can issue updates to randomly generated RAM locations) record, held by a Fujitsu K supercomputer with 73,000 SPARC nodes. The HP design would be six times faster at one-eightieth of the energy.
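To make the GUPS metric concrete, the sketch below is a toy, single-threaded version of the random-access update loop the benchmark measures; real runs (such as the HPC Challenge RandomAccess test) use a table sized to a large fraction of system memory and run across every node, so the sizes and structure here are purely illustrative assumptions.

```python
import random
import time

def gups_benchmark(table_size=1 << 20, num_updates=1 << 20):
    """Toy GUPS measurement: issue XOR updates at pseudo-random
    table locations and report the achieved rate in updates/sec.
    Random indices defeat caches, which is the point of the test."""
    table = list(range(table_size))
    rng = random.Random(1)
    start = time.perf_counter()
    for _ in range(num_updates):
        idx = rng.randrange(table_size)  # random RAM location
        table[idx] ^= idx                # the "update"
    elapsed = time.perf_counter() - start
    return num_updates / max(elapsed, 1e-9)  # updates per second

if __name__ == "__main__":
    rate = gups_benchmark()
    print(f"{rate / 1e9:.6f} GUPS")  # giga-updates per second
```

On ordinary hardware this loop is limited by memory latency rather than compute, which is exactly the bottleneck HP claims a large shared memory pool would ease.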

Or, HP said, in theory going against the current champ in TEPS (traversed edges per second, a measure of graph-analytics performance), held by IBM’s Blue Gene/Q supercomputer with 64,000 nodes, an HP design with 122,000 nodes would perform slightly better but use 400 kW of power to do the crunching versus Blue Gene’s 7,900 kW.
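The TEPS metric comes from graph benchmarks such as Graph500, which time a breadth-first search over a huge graph and count edges traversed per second. The sketch below is a minimal, assumed illustration of that measurement on a tiny hand-built graph, not the actual benchmark kernel used on Blue Gene/Q or HP's design.

```python
import time
from collections import deque

def bfs_teps(adj, source=0):
    """Toy TEPS measurement: run a breadth-first search over an
    adjacency-list graph and report traversed edges per second.
    Every edge examined during the search counts as traversed."""
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            edges_traversed += 1
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    elapsed = time.perf_counter() - start
    return edges_traversed / max(elapsed, 1e-9)  # edges per second

# Small ring graph with chords, just to exercise the traversal.
n = 1000
adj = {i: [(i + 1) % n, (i + 7) % n] for i in range(n)}
```

BFS is dominated by irregular pointer-chasing through memory, so like GUPS it rewards low-latency access to a large memory pool rather than raw arithmetic speed.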

But even HP admits there are at least three obstacles, including the operating system. It is now revealing its until-now hidden work to interest others in the industry in helping “in co-design, pretty much from the business process all the way down to the box.”

Sontag says there have been “simulations and emulations and studies so we have fairly high confidence that we know what the bounds look like, and we’ve got a lot of hard work to do.”
