Sun redirects server technology

Sun Microsystems Inc. is pushing a processor technology that it says will let single chips within servers handle multiple tasks at the same time. The company says this will boost system performance to as much as 30 times that of today’s boxes.

The idea, called throughput computing, is propelling Sun’s microprocessor strategy, which the company recently modified to give greater attention to its multithreading efforts. Earlier this month, Sun announced it was scrapping two “conventional” microprocessor projects it had in the works for years to focus on its throughput computing technology.

Gone are the UltraSparc V, code-named Millennium, a new RISC design for midrange and high-end servers, and Gemini, a dual-core chip aimed at low-end systems. Sun concedes that tough economic times — the company recently laid off 3,300 employees and says it expects to lose as much as US$810 million in the upcoming quarter — have pressured it to make the shift.

But David Yen, executive vice-president of processor and network products at Sun, also says that the company made the change to direct its resources to the throughput computing technology.

“You talk with CIOs throughout the world and more and more the challenge to them is not the speed of getting one job done. Instead, it’s how to maintain the ability to deal with the mass volume of requests,” he says. Technology such as 3G mobile data transmission and radio frequency identification, combined with the move toward utility computing, will increase exponentially the number of networked devices and the workload servers handle, Yen says.

Conventional chip architectures have been too focused on optimizing for one job, Yen says. Analysts note that while processor speeds are increasing, memory limitations can slow throughput. Sun’s approach is to let multiple cores on one piece of silicon handle multiple simultaneous threads, software-based instructions that must be processed.
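The idea behind this can be sketched in software. The toy Python example below (an illustration of the general principle, not Sun's hardware design) simulates 32 independent requests that each spend most of their time stalled waiting for data: run one at a time, total time is the sum of the stalls; run on overlapping threads, the stalls hide behind one another and throughput rises even though no single request finishes any faster.

```python
# Illustrative sketch only -- a software analogy for chip multithreading,
# not Sun's implementation. Each "request" mostly waits, so overlapping
# many of them raises throughput without speeding up any one of them.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    """Simulate a request that mostly stalls (e.g., waiting on memory)."""
    time.sleep(0.05)          # stand-in for a stall waiting on data
    return f"done-{req_id}"

requests = range(32)

# Sequential: total time is roughly 32 * 0.05s
start = time.perf_counter()
sequential = [handle_request(r) for r in requests]
seq_elapsed = time.perf_counter() - start

# Threaded: 32 threads overlap their waits; total time is roughly 0.05s
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as pool:
    threaded = list(pool.map(handle_request, requests))
thr_elapsed = time.perf_counter() - start

print(f"sequential: {seq_elapsed:.2f}s, threaded: {thr_elapsed:.2f}s")
```

The same results come back in both cases; only the elapsed time differs, which is the distinction Yen draws between the speed of one job and the volume of jobs handled.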

Analysts seem to like the idea.

“Today, servers can be idle up to 75 per cent of the time while their processors stall waiting for data from memory, leaving considerable room for improvement in processor design,” IDC analysts wrote in a Sun-sponsored white paper on the company’s throughput computing strategy. “By focusing on increased application workload throughput instead of clock speed, Sun’s (chip multithreading) processors could deliver significant increases in application performance.”

But the white paper also notes that challenges remain, such as competitive threats from companies such as IBM Corp., which says it will include multithreading technology in the upcoming Power5, and alternative architectures such as clusters, blades and grids, which aim to tie together smaller systems to improve throughput.

Sun took the first step in its throughput computing effort with the introduction of the UltraSparc IV earlier this year, which contains two cores and a dual-threaded architecture. However, customers can expect their first real glimpse of throughput computing in 2006, with the advent of systems based on Niagara, a chip design that includes technology from Afara Websystems, which Sun acquired in 2002.

The first generation of Niagara, which will be optimized for network-facing workloads such as security processing, will have eight cores, each capable of handling four threads.

“So one chip could run 32 threads of execution simultaneously,” Yen said.

Later generations of Niagara and a multithreaded chip for midrange and high-end systems, code-named Rock, are expected to debut in 2007. Rock is intended to run applications such as large databases or data warehouses and so will have multithreaded capabilities, but it also will be optimized for single-threaded workloads.

The decision to scrap the Millennium and Gemini chips “fundamentally simplifies Sun’s road map,” says Nathan Brookwood, an analyst at Insight 64. “There were just a lot of products, and even if Sun had met all its schedules, these things would be coming on at relatively short intervals. And the systems guys would have had a hard time keeping up, and so would the end users who would have to make choices between the systems.”
