SAN FRANCISCO – Today’s computer industry is “way too complex,” but Sun Microsystems Inc.’s partnership with Oracle Corp. will help customers alleviate that problem, according to Santa Clara, Calif.-based Sun’s CEO Scott McNealy.
During his Tuesday morning keynote at this year’s OracleWorld conference, McNealy said Sun’s and Oracle’s strategies complement each other in key areas such as enterprise grid computing, security, high availability and clustering.
McNealy said the companies are teaming up to help grid computing evolve from purely scientific research into commercial enterprise deployments. Redwood Shores, Calif.-based Oracle’s 10g database product takes advantage of the data centre environment Sun is building with its N1 strategy, McNealy said.
According to Sun, its N1 architecture comprises foundation resources, virtualization, provisioning, policy and automation, and monitoring. By making a data centre work like a single system, N1 turns previously siloed resources into a pool of virtual resources. Services can be mapped across this resource pool and customers can create policy-driven services and assign priority to critical services.
Sun and Oracle plan to increase overall system performance by utilizing new functionality in Oracle’s 10g database, McNealy explained.
He lamented the complexity of data centres, which require so many employees just to keep things running so that organizations can deliver their services every day.
He likened the problem to someone deciding to fly from one city to another after first hand-crafting a “jalopy airplane” – buying all the parts and custom-building the aircraft. Today’s data centres are built like that, he said. “No two are alike – they’re not even close…they’re like different species,” he said.
In the case of airplanes, while it might be cheaper to buy all the parts separately, the total cost of delivery, by the time they are assembled and tested, would be higher than buying ready-built machines, McNealy said, adding that the same goes for data centres.
Sun’s solution is to offer Intel Corp.-based servers running Linux or Solaris, which, according to the firm, are ideal for clustering together in a customer-ready setup that could be used for grid computing purposes. Those servers compete with offerings from Dell Computer Corp., which peddles its own Intel-based servers running Linux or Windows as another low-cost hardware option that fits the grid model.
Going the customer-ready Sun route – running system software on the firm’s x86 servers – will result in a better deal for customers than going with a components-focused Dell solution, McNealy suggested. “We’re not going to (provide the same pricing) on Dell equipment,” he said, adding that Sun will still sell its server software on the Dell servers, but “it’ll be a little more expensive.”
Frank Lauritzen, manager of database operations for the Downsview, Ont.-based Meteorological Service of Canada, an Environment Canada operation, said Sun’s is one offering that seems to be “much cheaper…than what the mainframe used to offer.” He added that there are always organizations that would benefit from buying a complete customer-ready server software and hardware package – it’s just a matter of deciding when that choice is appropriate for one’s organization.
“It’s a tough call – I’ve made calls both ways. Sometimes we’ve done things ourselves and then we’ve looked at how things have gone and said we should have left it up to someone else – other times it’s been the other way around,” he said. In general, Lauritzen said he’s been seeing a trend toward complete services, which “at times is valid.”
Oracle wants to help organizations address one of the major challenges that comes with grid setups: creating the illusion that all these machines are one machine. That was one of the messages in Oracle CEO Larry Ellison’s keynote address Tuesday afternoon.
Ellison said data centre employees working for companies that have switched to the grid model have a lot more work on their hands managing all these servers. For example, they now have to install software and patches on 100 to 200 two-processor machines, whereas before they only had to worry about five or six larger servers.
This management problem has the potential to defeat the purpose of the grid. “If there are no management or provisioning tools, whatever savings we get in hardware will be lost in labour,” Ellison said.
Oracle’s Grid Control software was designed to help customers monitor and manage entire Oracle grid infrastructures, from databases and applications to storage, within a single console, Ellison said.
The grid management product will provide users with advice on how to plan for capacity, availability and performance needs within a grid. The software can compare different database servers in the grid and reveal how they differ, Ellison said. It can also tell the system to automatically load balance and tune itself, adapting resource usage to workload patterns.
The software features a “control repository” that contains performance, availability and configuration data about the enterprise, as well as a set of centralized management capabilities that transform that data into valuable information, the firm said.
Using Grid Control, administrators can reduce the complexity of managing multiple servers in a cluster and automate the management of computing resources.
Grid computing is the biggest wave to hit the IT industry since the introduction of IBM’s 360 mainframe in 1964, according to Ellison, who reminisced about the quest many hardware companies have embarked on over the last 40 years: building bigger servers to get more computing capacity.
There’s a problem with that approach, Ellison said: “Once you have the largest server there is, you’re done; there’s no place to go (to get more capacity) if you have a single-machine architecture.” Many organizations are now at a point where their applications have outgrown their machines, he said.
Ellison reiterated the downsides to large-server setups, including the idea that they are “very expensive” and that once capacity in one large server is maxed out, the customer has to “throw it out and spend millions” again on the next biggest machine that comes out.
Perhaps the “worst of the Achilles’ heels” of large server environments is the problem of a single point of failure, he added – that when one machine goes down, the users go down with it.
Grid computing addresses both of these issues by using low-cost two- or four-processor machines that users can simply plug in to increase capacity as they need it. The fault tolerance and load-balancing capabilities of grids mean that if one small server in the grid goes down, “users don’t see any interruptions in their services at all,” he said.