Q&A: A chat with Intel

Gordon Graylish, vice-president of sales and marketing and general manager for enterprise solutions sales with Santa Clara, Calif.-based Intel Corp., was recently in Toronto, where he spoke with ComputerWorld Canada about data centre sustainability and how Intel is innovating to keep up with Moore’s Law.

ComputerWorld Canada: How has Intel kept up with co-founder Gordon Moore’s prediction, Moore’s Law?

Gordon Graylish: Intel as a company is based on Moore’s Law. Our entire raison d’être is Moore’s Law, which says every 18 months you can double the capabilities of silicon or you can cut the costs equivalently. What you’ve seen in the last year or so has really been a transformation: we’ve got to the point where we can afford not only to tackle some of the world’s great problems, but you’re also starting to see the proliferation of intelligent devices around the world. We’re moving from hundreds of millions of PCs, smartphones, servers and the like to intelligent signs, automotive intelligence and intelligent machines, and really seeing the number of endpoints grow to the billions.

CWC: But is there a limit to how small cores can get?

GG: We constantly have been looking at that. For years, we’ve looked out and said six, seven years from now, we can see that far. We actually think we can see about 10 years now. And we go down through 22 nanometres, 11 nanometres, and it keeps going. We’re already at single-atomic layers between lines, and obviously these will require novel techniques, and a lot of those novel techniques are being worked on in our labs today. But we’re highly confident that we can keep Moore’s Law going for a significant amount of time.


CWC: With servers, it’s not only about the processor but the capabilities of the entire system, like virtualization, memory and I/O. What’s Intel’s perspective on offering a holistic platform of capabilities?

GG: It’s never just been about the processor, because clearly you can’t do anything with just the processor. So we’ve always taken the approach of looking at the problem, asking what problem people are trying to solve, and then working back from that to the chipsets, the memory, the interconnect, networking, etc., required to solve it. So we’ve had an enormous focus in the past several years on virtualization, making it secure, making it fast, on improving the memory access and on improving the interconnect. In moving the industry up to 10-gigabit networking, for example, the point is that we’re constantly shifting the bottleneck, with the end result being that we can solve problems that frankly weren’t imaginable even a couple of years ago.

CWC: We’re seeing chip makers cramming as many cores as they can into a processor. While aiming for maximum performance, what’s Intel doing to help customers manage data centre operations given the cooling and powering requirements of those servers?

GG: For the last six years or so, we’ve really focused on the total solution. For example, with our latest processor technology, the Xeon 5500 and 7500, you can achieve 15 to 20 times consolidation from where we were even four years ago. The effect of that is I can either reduce my power consumption for the same amount of work by 95 per cent, or I can do 15 times as much work and still have a lower power budget. So the net result is, it’s not just about how many cores you have; it’s a matter of how much work I can get done, not just for the space but also for the number of watts consumed.
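Graylish’s consolidation arithmetic can be sketched with some illustrative numbers. The 15:1 ratio and the roughly 95-per-cent claim come from the interview; the fleet size and per-server wattages below are assumptions for illustration, not Intel figures:

```python
# Back-of-envelope consolidation math. Wattages and fleet size are
# assumed for illustration; only the 15:1 ratio is from the interview.
old_servers, old_watts = 300, 450      # assumed legacy fleet
ratio, new_watts = 15, 400             # 15:1 consolidation, assumed new draw

new_servers = old_servers // ratio     # 300 servers shrink to 20
old_power = old_servers * old_watts    # 135,000 W
new_power = new_servers * new_watts    # 8,000 W
savings = 1 - new_power / old_power    # fraction of power saved
print(new_servers, f"{savings:.0%}")   # 20 servers, ~94% less power
```

With these assumed wattages the same work draws roughly 94 per cent less power, in line with the 95-per-cent figure quoted above; the exact number depends entirely on the old and new servers being compared.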

CWC: Chip-makers used to build chips for maximum-performance users in high-performance computing environments. But now there’s a new group of users who are more concerned with a balance of power efficiency and performance. How is Intel looking at that?

GG: If you think back to even the last year or so, there have been a number of studies that have looked at the utilization of computers, and typically within a data centre people are running high single-digit utilization. In other words, more than 90 per cent of the processor capacity is idle. What we’ve focused on with virtualization and high-performance computing has been really ramping up the utilization. Whether you’re trying to solve the world’s great problems in medicine or weather forecasting, or whether you just want to host more mail users on the smallest number of systems possible, the same logic applies.
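The utilization argument reduces to simple arithmetic. The high-single-digit starting point is from the interview; the post-virtualization target below is an assumed example:

```python
# Fixed workload, rising utilization: how the server count shrinks.
# 8% stands in for the "high single digit" utilization cited in the
# interview; the 60% virtualized target is an assumed example.
fleet = 100
workload = fleet * 0.08        # total work, in "fully busy server" units
after = workload / 0.60        # servers needed at 60% utilization
print(fleet, round(after, 1))  # 100 servers shrink to about 13
```

The same workload that kept 100 lightly loaded servers running fits on about 13 well-utilized ones, which is the consolidation story told throughout this interview.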

CWC: Some industry observers say the true value of a processor can only be reaped with a rip and replace of the data centre. Do you agree?

GG: There have been huge advances in the last five years in data centre design. Some of these things are really very simple, like putting a vinyl curtain between the hot and cold aisles in my data centre. Others require a real architectural view of the data centre: where I put my storage, processing, etc. In that environment, there’s clearly a balance here, because in many countries, and in parts of Canada, there’s a limit on how much energy you can consume, so we have to be very efficient with the data centres we have. We find that people who actually replace their existing four-year-old inventory with new products can get a payoff in five, six months, which is pretty amazing. That’s been verified hundreds of times with different companies.
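The five-to-six-month payoff claim can be sanity-checked with a rough model. Every input below (server price, wattages, electricity price, cooling overhead) is an assumption for illustration; real refresh paybacks also fold in maintenance, licensing and space savings, which is partly why quoted figures can be shorter than power savings alone suggest:

```python
# Rough refresh-payback model on power alone. All inputs are assumed.
old_boxes, old_watts = 15, 400   # 15 old servers replaced by one new one
new_watts = 600                  # assumed draw of the replacement server
new_cost = 6000.0                # assumed price of the new server, USD
kwh_price = 0.10                 # assumed electricity price, USD per kWh
pue = 2.0                        # assumed: each IT watt costs ~2x with cooling

saved_w = old_boxes * old_watts - new_watts           # 5,400 W saved
monthly = saved_w / 1000 * 24 * 30 * kwh_price * pue  # USD saved per month
payback_months = new_cost / monthly
print(round(monthly, 2), round(payback_months, 1))
```

Under these assumptions, power and cooling savings alone repay the new server in under eight months; add the retired hardware’s maintenance contracts and the payback moves toward the five-to-six-month range cited above.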

CWC: Given the recession has shrunk IT budgets, how is this affecting server sales and refresh cycles for customers?

GG: We’ve seen pretty consistent growth, and the reason is this huge difference in performance. The fact that you have a 15:1 consolidation says it’s pretty easy to justify how I can do more and have it cost me less. For years, people have asked, ‘Won’t virtualization result in fewer servers?’ To be honest, it hasn’t. We’ve seen growth in the server market, and at the same time, particularly in the last 12 months, we’ve seen an absolute acceleration of virtualization, of people utilizing their computers better, which is what we want. We would rather people spent the money on innovating than on new buildings and expanding the real estate required to do the compute load. We’re actually working very closely with the industry on how to optimize the working environment for higher levels of virtualization.

CWC: What can enterprises expect from Intel next year? I know there is talk of Westmere-EX coming out sometime next year.

GG: We’re on a tick-tock model. One year we shrink the device; we’ve now moved to 32 nanometres, so that’s ramping into the millions. The following year we go through a microarchitectural change to continue to get more done per instruction. So you can expect us to continue to do that for the next several years. There’s no end in sight that we can see.

CWC: Some in the industry have described the rivalry between Intel and AMD Inc. as a “leapfrog rivalry,” where one company is the front-runner for a period of time only until the other launches a product that upstages it. Do you agree?

GG: First of all you have to look at this and say all companies in this industry are going to try and come out with the best products that they can. We’ve done that, we’ve been very successful I think and had a sustainable lead in terms of our process technology that has enabled us to come up with new processors and new innovative ways of solving compute problems. This is not an industry where we can say “Great, we had a hit. Now let’s just sit back and relax.” Instead, it’s “I had a hit, let me get working. I’ve got to move to the next one.” 

Follow Kathleen Lau on Twitter: @KathleenLau
