Blade servers are evolving at a feverish pace, and Dell Inc.’s PowerEdge 1855 system shows just how far the technology has come.
Previous blade systems suffered from severe restrictions, mainly in expandability, connectivity and ease of use. Dell does a respectable job addressing those shortcomings in its second-generation blade platform, particularly in network communications.
The Xeon-driven PowerEdge 1855, which replaces the Pentium III-based PowerEdge 1655MC, has a 7U chassis designed to fit into a standard rack. You can slide as many as 10 dual-processor servers into the front of each chassis. In the back are field-replaceable power supplies, fans and networking connections.
The blades and the network infrastructure are linked by a passive midplane that generally doesn’t require servicing. A KVM switch and a management processor are also built into each chassis.
A key consideration for many blade systems is density, and here Dell comes up short. The PowerEdge 1855 system fits as many as six of its 7U chassis, or 60 servers, into a standard 42U rack.
This density is noticeably lower than other dual-processor systems: RLX Technologies Inc.’s 600ex system handles 70 servers per rack, for instance.
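The density gap follows directly from the chassis arithmetic. As a quick illustration (a sketch using only the figures cited above, not any Dell sizing tool), the servers-per-rack numbers work out as follows:

```python
# Blade density per standard 42U rack, using the figures in the review.
RACK_U = 42

def servers_per_rack(chassis_u: int, blades_per_chassis: int) -> int:
    """Whole chassis that fit in the rack, times blades per chassis."""
    return (RACK_U // chassis_u) * blades_per_chassis

# Dell PowerEdge 1855: 7U chassis holding 10 dual-processor blades.
dell = servers_per_rack(7, 10)
print(dell)       # 6 chassis x 10 blades = 60 servers

# RLX's 600ex reaches 70 servers in the same rack, per the review,
# leaving Dell 10 servers per rack behind.
print(70 - dell)  # 10
```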
Along with their higher densities, many manufacturers offer the option of installing double-width quad-processor blades. Dell doesn’t, and according to a Dell product manager, the PowerEdge 1855 platform won’t support that option in the future. Processor choice is also disappointing: Dell offers only Intel Xeon chips in the PowerEdge 1855.
Where Dell does present choice is in the back of the chassis. There are four bays for hot-swap power supplies, each with integrated fans, plus two separate cooling modules. There are also four bays for communications modules; two bays are dedicated to each group of five servers. Each server group uses one communications bay for a Gigabit Ethernet pass-through or switch. The second bay serves the optional daughter cards installed in the servers and can hold either a switch or a pass-through.
There’s also a removable management module that contains the KVM switch’s connectors and an Ethernet management jack; management modules can be installed in pairs for redundancy.
The system I tested had three servers installed, each of which had a Fibre Channel (FC) daughter card. The second communications bay had an FC pass-through (you can also place a second Gigabit Ethernet switch in that bay).
The server blades are well-designed. Each of the three I received contained dual 3.6GHz Xeon processors, 1GB RAM, one onboard Gigabit Ethernet interface, and dual 73GB Ultra320 SCSI hard drives. All three servers were running Windows Server 2003, but Dell supports several Linux distributions.
The drives are hot-swappable, and an onboard RAID controller ties them together in a mirrored (RAID 1) configuration. I’d like to see a less-expensive SATA option, but according to a Dell product manager, there has been little customer demand.
Fast I/O expandability is the PowerEdge 1855’s best design feature. The two daughter card connectors inside each blade can be populated by PCI-X or PCI Express cards. Dell preinstalled a QLogic FC host-bus adapter in one bay of the review system. Gigabit Ethernet is also available, and InfiniBand will ship in March, according to Dell. There’s also a management processor chip on each blade, accessible via the LAN.
Overall, the PowerEdge 1855 blade system is a tremendous improvement over the first-generation PowerEdge 1655MC, and it competes well with other dual-processor Xeon systems.
If you can handle the lower density, Dell’s system is a strong contender.