
The LP2000r packs a punch

Hewlett-Packard Co.’s LP2000r packs a lot of power into a 2U space, with six drives and two 933MHz Pentium III processors. The unit, which began shipping in April, scored high enough in our tests to be a strong contender in the battle at the top of the two-processor rack-optimized server space.

The LP2000r scored well in our file, CPU database and network I/O tests. Because we used new benchmarks for this review, there are no results from previously tested servers to compare against directly.

However, compared with the Dell Computer Corp. 6400 with four 700MHz Pentium III Xeon processors that we reviewed last November, the two-way HP delivers 93 per cent of the Dell's four-way performance in our CPU tests. Under Windows 2000, the LP2000r delivers 67 per cent of the Dell's performance, which is not bad for a two-processor machine.

Our LP2000r came with two 933MHz Pentium III processors, 1GB of RAM, six 18GB hard drives, two embedded Intel Corp. Pro100+ Ethernet network interface cards and one Intel Pro1000 Gigabit Ethernet PCI NIC.

The six hard drives are plugged into two three-slot drive cages. The drive carriers allowed the hard drives to be swapped without a hitch.

The two hard drive cages can be connected to different SCSI controllers or plugged into the same one. The unit we tested came configured with the two drive cages connected to the two ports of the NetRAID controller.

The RAID controller configured the first drive in the left drive cage as the operating system partition. The remaining five drives were configured into two RAID 0 stripe sets for the data partitions.

The LP2000r has two Symbios SCSI controllers on the motherboard. Neither onboard controller was used for the internal drives, although one is wired to an external SCSI port on the back of the server. We would have liked to have seen the addition of hot-swap PCI slots and key locks on the chassis.

Availability

The availability features of the server are adequate. The server supports two redundant, load-balancing power supplies that can be removed without opening the chassis. Our LP2000r came with one power supply. The NICs can be configured in a redundant failover arrangement to make the server tolerant of a NIC failure, and the hot-swappable hard drives make it possible to replace a failed drive without taking the server down. Again, we would have liked to have seen some hot-plug PCI slots.

While the layout of the internal parts is good, upgrading or fixing the server could be difficult. The major components of the server can be accessed via the top cover without using tools, but internal cabling and the fit between components can make removing and adding components a challenge.

For example, we could remove the PCI cage, get to the PCI cards and reinstall the PCI cage, but not without a few problems.

To add, remove or swap a PCI card, we first had to remove a piece of sheet metal with a large fan mounted on it that cools the system RAM and processors; only then could we get to the PCI cage. The first time we removed the PCI cage, it took a few minutes to figure out how to unplug it from the motherboard, and we had to juggle the parts we had removed to get at it. Reinstalling the PCI cage was also a challenge: it took a few attempts to line up the metal guides to slide the unit back in place.

Racking the server, on the other hand, is much easier. All LP2000rs ship with hardware to mount the server in a two-post or four-post rack. This can be an advantage when the type of rack the server will be mounted in isn’t known. As a bonus, the rack rails mount to a four-post rack without tools.

In addition to the ease of installation into a rack, the LP2000r has a relatively small chassis. Rack dimensions define the height and width of 2U servers, but depth is not as clearly defined. The LP2000r is only 24 inches deep, about four inches shorter than some other 2U servers we have seen. A short chassis depth can make it easier to service the unit in the rack.

Manageability

The LP2000r offers good manageability, as it ships with HP's Top Tools software for managing the server. Top Tools includes hooks into the major management platforms such as Tivoli, CA Unicenter and OpenView.

The LP2000r has hardware on the motherboard that allows remote reboot over the LAN. HP also includes ManageX software to manage system alerts; HP Instant Support to provide automated troubleshooting; pcAnywhere for accessing the server console remotely; and HP Navigator, an operating system installation aid with hardware diagnostic utilities.

HP Navigator is somewhat cumbersome to use. Its implementation of the NetWare installation aid is no more than a driver-disk creation wizard. It doesn’t automatically coordinate the installation of the operating system and the hardware drivers. The management platform and utilities are good, but a more unified approach would be an improvement.

The HP LP2000r is a fine, rack-optimized, performance-oriented server with good features and manageability. The LP2000r would be a good choice for a Web server or file server in an enterprise network.

Our test bed consists of 13 clients, each with a minimum configuration of two 400MHz Pentium II processors and 128MB of RAM. Each client has one 100Mbps Ethernet network interface card connected to a Cisco 2948G Ethernet switch. The server under test is connected to a Gigabit Ethernet port on the switch.

We used Quest Software Inc.'s Benchmark Factory software to coordinate test development, client load generation, result gathering and archiving for all the tests. We ran a series of file, network and database tests against Windows NT 4.0, Windows 2000 and Novell NetWare 5.1.

Using the Benchmark Factory software, we defined several tests to look at the performance of the file subsystem.

For the small file transfer tests, we used a three-dimensional test matrix of transfer direction (read/write), block size (1KB/8KB) and transaction type (random/sequential), which resulted in eight tests. We separated all combinations into individual tests to see how each server would react in each situation. The small file transfer tests used a file size mix of 80 per cent 1KB files, 10 per cent 10KB files and 10 per cent 50KB files.
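As a rough illustration, the small-file test matrix could be enumerated as in the sketch below. The names and structure are ours, not Benchmark Factory's; only the axis values and file-size weights come from the description above.

```python
from itertools import product

# Illustrative reconstruction of the small-file test matrix; the axis values and
# file-size weights restate the article, everything else is assumed.
DIRECTIONS = ("read", "write")
BLOCK_SIZES_KB = (1, 8)
ACCESS_PATTERNS = ("random", "sequential")

# Every combination of the three axes -> eight individual tests.
SMALL_FILE_TESTS = [
    {"direction": d, "block_kb": b, "access": a}
    for d, b, a in product(DIRECTIONS, BLOCK_SIZES_KB, ACCESS_PATTERNS)
]

# File-size mix shared by all eight small-file tests (size in KB -> share of files).
SMALL_FILE_SIZE_MIX = {1: 0.80, 10: 0.10, 50: 0.10}

assert len(SMALL_FILE_TESTS) == 8
```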

For the large file transfer tests, we combined reads and writes in the same tests in order to emulate large file service behaviour, such as FTP and home space services. We then created a set of tests covering all combinations of transfer type (random/sequential) and block size (1KB/8KB), which resulted in four tests. Operations were distributed as 90 per cent reads and 10 per cent writes, and the file size distribution was 80 per cent 500KB files and 20 per cent 1MB files, with the 90/10 read/write split applied to each file size.
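The large-file workload can be sketched the same way. The Python below only illustrates the proportions described above (four access/block-size combinations, a 90/10 read/write split and an 80/20 mix of 500KB and 1MB files); it is not Benchmark Factory's implementation.

```python
import random
from itertools import product

# Four large-file tests: access pattern x block size, with reads and writes combined.
LARGE_FILE_TESTS = [
    {"access": a, "block_kb": b}
    for a, b in product(("random", "sequential"), (1, 8))
]

def next_large_file_operation(rng: random.Random) -> dict:
    """Draw one simulated operation: 90% reads / 10% writes over 80% 500KB / 20% 1MB files."""
    return {
        "op": rng.choices(["read", "write"], weights=[90, 10])[0],
        "file_kb": rng.choices([500, 1024], weights=[80, 20])[0],
    }
```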

Each of the files needed for each virtual user was created at the beginning of each test. Each test ran five iterations of increasing load. The number of virtual users started on each client controlled the load. The number of virtual users for each step was determined by running each of the tests against each network operating system to find where the knee of the performance curve lay. From there, we determined a standard set of load parameters to run the tests.
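For illustration only, the load stepping for a file test might look like the sketch below. The article does not give the actual per-step user counts, so the numbers here are placeholders.

```python
# Hypothetical load ramp for one file test: five steps of increasing virtual users
# spread across the 13 test-bed clients. The per-step counts are placeholders; the
# real values were chosen from where the knee of the performance curve lay.
CLIENTS = 13
USERS_PER_CLIENT_STEPS = [1, 2, 4, 6, 8]   # assumed, for illustration

for step, users_per_client in enumerate(USERS_PER_CLIENT_STEPS, start=1):
    total_users = users_per_client * CLIENTS
    print(f"step {step}: {users_per_client} virtual users per client, {total_users} total")
```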

The CPU database test used Microsoft SQL Server 7.0 under NT and Windows 2000. We increased the number of virtual users from two to 30. The number of virtual users in no way implies a limitation of the database server; each virtual user in our test is atypical of a real-world user in that it generates a much larger load. Each virtual user calls a computationally heavy SQL statement, which reduces the number of network transactions per unit of time and concentrates the load on the processors, and then waits nine seconds before executing the SQL statement again.
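A single virtual user in the CPU test can be pictured roughly as the loop below. The actual SQL statement was not published, so the procedure name is a placeholder, and the database driver is abstracted behind an `execute` callback rather than tied to a specific API.

```python
import time

THINK_TIME_SECONDS = 9                 # pause between statements, as described above
HEAVY_SQL = "EXEC cpu_heavy_proc"      # placeholder; the real statement isn't published

def cpu_test_virtual_user(execute, iterations: int) -> None:
    """One CPU-test virtual user: run a computationally heavy statement, then think.

    `execute` stands in for whatever database call the test harness makes; keeping the
    statement heavy and the call count low concentrates the load on the processors.
    """
    for _ in range(iterations):
        execute(HEAVY_SQL)             # few network round trips, lots of server-side work
        time.sleep(THINK_TIME_SECONDS)
```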

We used another database test, employing a much less intensive transaction, to test the network I/O performance of the server. This test generated a large number of transactions against the server. As in the CPU database test, we increased the number of virtual users from two to 30 to obtain a performance curve.
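Both database tests step the user count the same way. The sketch below shows how such a performance curve might be collected; the step size and the `run_test_step` helper are assumptions standing in for the harness, not part of the original test suite.

```python
# Illustrative ramp for the database tests: virtual users stepped from 2 to 30,
# recording throughput at each step to build a performance curve.
def collect_performance_curve(run_test_step, start=2, stop=30, step=2):
    curve = []
    for users in range(start, stop + 1, step):
        throughput = run_test_step(users)   # e.g. transactions per second at this load
        curve.append((users, throughput))
    return curve
```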

To test this server’s performance in a NetWare environment, we ran similar CPU and network tests against an Oracle8i database running on NetWare 5.1.
