Analyzing Oracle’s database clustering innovation

The clustering feature that’s part of Oracle Corp.’s upcoming 9i database release could prove to be a breakthrough in creating large transaction databases to power e-commerce Web sites. But a complete evaluation won’t be possible until Oracle delivers the technology outside its own labs, most likely later this year.

The “cache fusion” architecture, revealed earlier this month at Oracle OpenWorld, will let companies tie together an array of computers, adding more processors, memory and storage as transaction loads and the number of end users grow.

Roger Bamford, Oracle’s principal database architect, says the new technology will give customers “infinite headroom” when it comes to building database clusters. He also claims it will make building clusters less expensive than alternative technologies.

Oracle has 27 patents either issued or pending for cache fusion, which has been in development for about five years.

Bamford claims existing methods for scaling databases have significant shortcomings. For example, CPUs, memory, I/O capacity and disk storage can be added to multiprocessor servers. But such systems are constrained by the computer’s memory bus, which shuttles data around internally.

Another approach links computers to a group of shared disk drives. Oracle’s Parallel Server is a “shared disk” version of the Oracle database for this purpose. This approach can work fine for applications that mainly read data, such as decision-support programs. But for transaction-oriented applications, the interactions between memory and disk can choke performance as traffic and users soar, Bamford says.
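
A rough Python sketch, using made-up costs and structures rather than anything from Oracle, illustrates that bottleneck: when two nodes in a shared-disk cluster keep updating the same hot block, every handoff forces a disk write on one machine and a disk read on the other.

    disk_ios = 0

    def update_block_on(node_caches, node, block_id, value):
        """Update a block on one node; count a disk 'ping' if another node holds it."""
        global disk_ios
        for other, cache in node_caches.items():
            if other != node and block_id in cache:
                disk_ios += 2            # one disk write on the old node, one disk read here
                del cache[block_id]
        node_caches[node][block_id] = value

    caches = {"A": {}, "B": {}}
    for i in range(1000):                # two nodes taking turns updating one hot block
        update_block_on(caches, "A" if i % 2 == 0 else "B", "block-42", i)

    print(f"disk I/Os for 1,000 alternating updates: {disk_ios}")   # nearly 2 per update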

A third option is to use separate computers, each with its own storage. However, this approach involves complex partitioning of the database, which becomes more difficult as the database grows and as transactions need data from different parts of it.
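
To make that coordination cost concrete, here is a brief Python sketch with an invented hash scheme, not any real Oracle partitioning: once rows are spread across machines by key, any transaction touching two partitions needs cross-node coordination, such as a two-phase commit.

    NODES = 4

    def owner(key: int) -> int:
        """Pick the node that owns a row, by hashing its key."""
        return key % NODES

    def transfer(from_key: int, to_key: int) -> str:
        a, b = owner(from_key), owner(to_key)
        if a == b:
            return f"local transaction on node {a}"
        # The rows land on different nodes, so the update must be
        # coordinated across machines -- the complexity that grows
        # with the size of the database.
        return f"distributed transaction across nodes {a} and {b}"

    print(transfer(8, 12))   # both rows on node 0: cheap
    print(transfer(8, 13))   # rows on nodes 0 and 1: costly coordination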

Simple solution

Oracle’s solution is deceptively simple. Third-party, high-speed computer interconnects tie together a group of separate computers. The cache fusion code then lets the computers share data by moving it from one machine’s memory, known as a buffer cache, to another’s. This eliminates the need for one computer to write the data to disk and for another to then read it back, both time-consuming operations.
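
Here is a similarly hypothetical sketch of the new path. Compared with the disk round trip in the earlier example, the block moves memory-to-memory; the send function and data structures are illustrative stand-ins, not Oracle’s interfaces.

    disk = {}                            # shared storage (untouched on this path)
    buffer_cache = {"A": {}, "B": {}}    # each node's in-memory buffer cache

    def send_over_interconnect(src, dst, block_id):
        """Ship a block straight between buffer caches; no disk I/O occurs."""
        buffer_cache[dst][block_id] = buffer_cache[src][block_id]

    buffer_cache["A"]["block-42"] = "updated row data"   # node A dirties a block
    send_over_interconnect("A", "B", "block-42")         # node B gets it directly
    print(buffer_cache["B"]["block-42"])                 # -> updated row data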

Users can increase performance by adding computers to the cluster.

One of the biggest challenges was creating the complicated algorithms that let cache fusion handle the loss of one or more computers in the cluster without losing data, Bamford says. “It took us years,” he says.

Being able to scale the database to handle updates is critical to e-commerce sites, says Richard Winter, president of Winter Corp., a Waltham, Mass., consulting firm specializing in large database design. “With today’s e-commerce sites, you have updates [to the database] by very large numbers of users,” he says.

Running the database on a single large multiprocessor system has its limits. And while Web and application servers can simply be replicated, a database handling intensive update traffic cannot be copied the same way, which makes the database the hard part of the site to scale, Winter says.

The 9i version of the database is due for general release in March 2001. Beta testing is expected to begin late this year or early next year.
