How to build solid, reliable networks

SAN FRANCISCO – While almost every part of a modern data centre can be considered mission-critical, the network is the absolute foundation of all communications.

That’s why it must be designed and built right the first time. After all, the best servers and storage in the world can’t do anything without a solid network.

To that end, here are a variety of design points and best practices to help shore up that foundation.

The term “network” applies to everything from LAN to SAN to WAN. All these variations require a network core, so let’s start there.

The size of the organization will determine the size and capacity of the core. In most infrastructures, the data centre core is constructed differently from the LAN core. Take a hypothetical network that has to serve a few hundred to a thousand users in a single building, with a data centre in the middle: it’s common to find big switches in the middle and aggregation switches at the edges.

Ideally, the core is composed of two modular switching platforms that carry data from the edge over gigabit fiber, located in the same room as the server and storage infrastructure. Two gigabit fiber links to a closet of, say, 100 switch ports are sufficient for most business purposes. In the event that they’re not, you’re likely better off bonding multiple 1Gbit links rather than upgrading those closets to 10G. As 10G drops in price, this will change, but for now it’s far cheaper to bond several 1Gbit ports than to add 10G capability to both the core and the edge.
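
For a rough sense of that trade-off, here is a minimal Python sketch comparing bonded 1Gbit uplinks against a 10G upgrade. The port prices are placeholder assumptions, not vendor quotes, and real numbers will vary.

# Back-of-the-envelope comparison of closet uplink options.
# Prices below are hypothetical placeholders -- substitute real vendor quotes.

COST_PER_1G_PORT = 150    # assumed cost of one 1Gbit fiber port
COST_PER_10G_PORT = 1200  # assumed cost of one 10G port

def bonded_1g(links):
    """Aggregate bandwidth and cost of bonding several 1Gbit uplinks."""
    # Each uplink consumes one port at the closet and one at the core.
    return links * 1, links * 2 * COST_PER_1G_PORT

def single_10g():
    """Bandwidth and cost of adding 10G capability at both ends."""
    return 10, 2 * COST_PER_10G_PORT

for n in (2, 4):
    gbps, cost = bonded_1g(n)
    print(f"{n} x 1Gbit bonded: {gbps} Gbps for roughly ${cost}")

gbps, cost = single_10g()
print(f"1 x 10G uplink:    {gbps} Gbps for roughly ${cost}")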

In the likely event that VoIP will be deployed, it may be beneficial to implement small modular switches at the edge as well, allowing PoE (Power over Ethernet) modules to be installed in the same switch as the non-PoE ports. Alternatively, you can deploy trunked PoE ports to each user, allowing a single port to handle both VoIP and desktop access.
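
As an illustration of that combined data-and-voice port, the following Python sketch emits a hypothetical Cisco-IOS-style access-port stanza. The interface names and VLAN numbers are arbitrary examples, and other vendors use different syntax.

# Emit a hypothetical edge-port configuration that carries desktop data
# and VoIP traffic on one PoE port. IOS-style syntax; VLAN numbers
# (10 for data, 20 for voice) are arbitrary examples.

DATA_VLAN = 10
VOICE_VLAN = 20

def access_port_config(interface):
    """Return one port stanza as a string."""
    return "\n".join([
        f"interface {interface}",
        " switchport mode access",
        f" switchport access vlan {DATA_VLAN}",
        f" switchport voice vlan {VOICE_VLAN}",
        " spanning-tree portfast",
    ])

# Generate the same stanza for a block of user-facing ports.
for port in range(1, 5):
    print(access_port_config(f"GigabitEthernet1/0/{port}"))
    print()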

In the familiar hub-and-spoke model, the core connects to the edge aggregation switches with at least two links, and it connects to the server infrastructure either with direct copper runs or through server aggregation switches in each rack. That decision must be made site by site, due to the distance limitations of copper cabling.

Either way, it’s cleaner to deploy server aggregation switches in each rack and run only a few fiber links back to the core than to try to shoehorn everything into a few huge switches. In addition, server aggregation switches allow redundant connections to redundant cores, so server communications can survive the failure of a core switch. If you can afford it and your layout permits it, use server aggregation switches.
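
To see why the aggregation approach is cleaner, a quick sketch (with assumed rack, server, and uplink counts) tallies the cable runs that terminate at the core under each layout.

# Compare cabling back to the core: direct copper runs per server versus
# top-of-rack server aggregation switches with a few fiber uplinks each.
# All counts below are assumptions for illustration.

RACKS = 10
SERVERS_PER_RACK = 20
NICS_PER_SERVER = 2          # redundant connections per server
UPLINKS_PER_RACK_SWITCH = 4  # fiber links from each rack switch to the cores

direct_runs = RACKS * SERVERS_PER_RACK * NICS_PER_SERVER
aggregated_runs = RACKS * UPLINKS_PER_RACK_SWITCH

print(f"Direct copper to the core: {direct_runs} runs")
print(f"Via rack aggregation switches: {aggregated_runs} fiber uplinks")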

Regardless of the physical layout method, the core switches need to be redundant in every possible way: redundant power, redundant interconnections, and redundant routing protocols. Ideally, they should have redundant control modules as well, but you can make do without them if you can’t afford them.

Core switches will be responsible for switching nearly every packet in the infrastructure, so they need to be sized accordingly. It’s a good idea to make ample use of HSRP (Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol). These allow two discrete switches to effectively share a single IP and MAC address, which serves as the default gateway for a VLAN. In the event that one core fails, those VLANs will still be reachable.
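
To make the idea concrete, here is a minimal sketch that renders a keepalived-style VRRP stanza for one VLAN’s shared gateway. The interface name, router ID, and addresses are arbitrary examples; dedicated core switches would use the vendor’s own HSRP or VRRP commands instead.

# Render a minimal keepalived-style VRRP stanza for one VLAN's shared
# gateway address. Names, router IDs, and addresses are examples only.

def vrrp_instance(name, interface, router_id, priority, virtual_ip):
    """Return one vrrp_instance block as a string."""
    return "\n".join([
        f"vrrp_instance {name} {{",
        f"    state {'MASTER' if priority >= 150 else 'BACKUP'}",
        f"    interface {interface}",
        f"    virtual_router_id {router_id}",
        f"    priority {priority}",
        "    advert_int 1",
        "    virtual_ipaddress {",
        f"        {virtual_ip}",
        "    }",
        "}",
    ])

# Core A advertises a higher priority; core B stands by for VLAN 10.
print(vrrp_instance("VLAN10", "eth0.10", 10, 150, "10.0.10.1/24"))
print(vrrp_instance("VLAN10", "eth0.10", 10, 100, "10.0.10.1/24"))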

Finally, correct use of STP (Spanning Tree Protocol) is essential to stable network operation. A full discussion of STP and first-hop redundancy protocols such as HSRP and VRRP is beyond the scope of this guide, but configuring them correctly will have a significant effect on the resiliency and proper operation of any Layer-3 switched network.
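
As one small example of the kind of configuration that matters, this sketch prints IOS-style commands that pin the spanning-tree root to the core pair, so a misbehaving edge switch can never take over. The VLAN list is an arbitrary example, and other vendors use different syntax.

# Pin the spanning-tree root to the primary core switch, with the
# secondary core as backup root. IOS-style syntax; example VLAN list.

VLANS = "10,20,30"

core_a = [
    "spanning-tree mode rapid-pvst",
    f"spanning-tree vlan {VLANS} root primary",
]
core_b = [
    "spanning-tree mode rapid-pvst",
    f"spanning-tree vlan {VLANS} root secondary",
]

print("\n".join(core_a))
print("\n".join(core_b))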


Once the core has been built, you can take on storage networking. Although other technologies are available, when you link servers to storage arrays, your practical choice will probably boil down to a familiar one: Fibre Channel or iSCSI?

Fibre Channel is generally faster and delivers lower latency than iSCSI, but it’s not truly necessary for most applications. Fibre Channel requires specific FC switches and costly FC HBAs in each server — ideally two for redundancy — while iSCSI can perform quite well with standard gigabit copper ports. Unless you have extremely transaction-oriented applications, such as large databases with thousands of users, you can probably choose iSCSI without affecting performance and save a bundle.

Fibre Channel networks are separate from the rest of the network. They exist all on their own, linked to the main network only via management links that do not carry any transactional traffic. iSCSI networks can be built using the same Ethernet switches that handle normal network traffic — although iSCSI traffic should be confined to its own VLAN at the very least, and possibly carried on a dedicated set of Ethernet switches that separate this traffic for performance reasons.

Make sure to choose the switches used for an iSCSI storage network carefully. Some vendors sell switches that perform well with a normal network load but bog down with iSCSI traffic due to the internal structure of the switch itself. Generally, if a switch claims to be “enhanced for iSCSI,” it will perform well with an iSCSI load.
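
Once suitable switches are in place, attaching a host to the array is straightforward. The sketch below drives open-iscsi’s iscsiadm tool from Python; the portal address is an example, and the traffic should ride the dedicated iSCSI VLAN.

# Discover and log in to an iSCSI target using open-iscsi's iscsiadm.
# The portal address is an example only.

import subprocess

PORTAL = "10.0.50.10"  # example address of the array on the storage VLAN

# Ask the array which targets it exposes.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Log in to the discovered targets so their LUNs appear as local disks.
subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)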

Either way, your storage network should mirror the main network and be as redundant as possible: redundant switches and redundant links from the servers (whether FC HBAs, standard Ethernet ports, or iSCSI accelerators). Servers do not appreciate having their storage suddenly disappear, so redundancy here is at least as important as it is for the network at large.


Speaking of storage networking, you’re going to need some form of it if you plan on running enterprise-level virtualization. The ability for virtualization hosts to migrate virtual servers across a virtualization farm absolutely requires stable and fast central storage. This can be FC, iSCSI, or even NFS in most cases, but the key is that all the host servers can access a reliable central storage network.

Networking virtualization hosts isn’t like networking a normal server, however. While a server might have a front-end and a back-end link, a virtualization host might have six or more Ethernet interfaces. One reason is performance: a virtualization host pushes more traffic than a normal server for the simple reason that dozens of virtual machines may be running on a single host. The other reason is redundancy: with so many VMs on one physical machine, you don’t want one failed NIC to take a whole bunch of virtual servers offline at once.

To combat this problem, virtualization hosts should be constructed with at least two dedicated front-end links, two back-end links, and, ideally, a single management link. If this infrastructure will service virtual servers that live in semi-secure networks (such as a DMZ), it may be reasonable to add physical links for those networks as well, unless you’re comfortable passing semi-trusted packets through the core as a VLAN. Physical separation is still the safest bet and less prone to human error. If you can physically separate that traffic by adding interfaces to the virtualization hosts, then do so.

Each pair of interfaces should be bonded using some form of link aggregation, either LACP (Link Aggregation Control Protocol), the dynamic negotiation method defined in the 802.3ad standard, or static aggregation. Either should suffice, though your switch may support only one form or the other. Bonding these links provides load-balancing as well as failover protection at the link level and is an absolute requirement, especially since you’d be hard-pressed to find a switch that doesn’t support it.
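
On a Linux virtualization host, the bond might be created along these lines. This is a minimal sketch using NetworkManager’s nmcli with example interface and connection names; the 802.3ad mode selects LACP, and the switch ports on the other end must be configured to match.

# Create an LACP (802.3ad) bond from the two front-end links of a
# virtualization host using nmcli. Interface names are examples.

import subprocess

def run(*args):
    """Print and execute one nmcli command."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# The bond itself, negotiated with LACP and link-monitored every 100 ms.
run("nmcli", "con", "add", "type", "bond", "con-name", "bond-front",
    "ifname", "bond0", "bond.options", "mode=802.3ad,miimon=100")

# Enslave the two physical front-end ports (example interface names).
for nic in ("eno1", "eno2"):
    run("nmcli", "con", "add", "type", "bond-slave",
        "ifname", nic, "master", "bond0")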

In addition to bonding these links, the front-end bundle should be trunked with 802.1q. This allows multiple VLANs to exist on a single logical interface and makes deploying and managing virtualization farms significantly simpler. You can then deploy virtual servers on any VLAN or mix of VLANs on any host without worrying about virtual interface configuration, and you don’t need to add physical interfaces to the hosts just to connect to a different VLAN.
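
Continuing the hedged sketch above, the trunking side adds 802.1q VLAN interfaces on top of the front-end bond (bond0 in this example). The VLAN IDs are arbitrary, and the matching switch ports must be configured as a trunk carrying the same VLANs.

# Layer 802.1q VLAN interfaces on a bonded front-end link so virtual
# machines can attach to any VLAN without extra physical NICs.

import subprocess

def run(*args):
    """Print and execute one nmcli command."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# One 802.1q sub-interface per VLAN, all riding on bond0.
for vlan_id in (10, 20, 30):
    run("nmcli", "con", "add", "type", "vlan",
        "con-name", f"bond0.{vlan_id}", "ifname", f"bond0.{vlan_id}",
        "dev", "bond0", "id", str(vlan_id))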

The virtualization host storage links don’t necessarily need to be either bonded or trunked unless your virtual servers will be communicating with a variety of back-end storage arrays. In most cases, a single storage array will be used, and bonding these interfaces will not necessarily improve performance on a per-server basis.

However, if you require significant back-end server-to-server communication, such as between front-end Web servers and back-end database servers, it’s advisable to dedicate that traffic to a specific set of bonded links. Those links will likely not need to be trunked, but bonding them will again provide load-balancing and redundancy on a host-by-host basis.

While a dedicated management interface isn’t truly a requirement, it can certainly make managing virtualization hosts far simpler, especially when modifying network parameters. Modifying links that also carry the management traffic can easily result in a loss of communication to the virtualization host.

So if you’re keeping count, you can see how you might have seven or more interfaces in a busy virtualization host. Obviously, this increases the number of switchports required for a virtualization implementation, so plan accordingly. The increasing popularity of 10G networking — and the dropping cost of 10G interfaces — may enable you to drastically reduce the cabling requirements so that you can simply use a pair of trunked and bonded 10G interfaces per host with a management interface. If you can afford it, do it.
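
For planning purposes, a quick sketch (with an assumed host count) tallies the switchports needed under the classic multi-NIC layout versus the consolidated 10G layout.

# Tally switchports for a virtualization farm under the two cabling
# models described above. Host and per-host NIC counts are assumptions.

HOSTS = 16

# Classic 1Gbit layout: 2 front-end + 2 back-end + 2 storage + 1 management.
ONE_GIG_PORTS_PER_HOST = 7

# Consolidated layout: a pair of trunked, bonded 10G links + 1 management port.
TEN_GIG_PORTS_PER_HOST = 2
MGMT_PORTS_PER_HOST = 1

print(f"1Gbit design: {HOSTS * ONE_GIG_PORTS_PER_HOST} switchports")
print(f"10G design:   {HOSTS * TEN_GIG_PORTS_PER_HOST} 10G ports "
      f"+ {HOSTS * MGMT_PORTS_PER_HOST} management ports")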

(From InfoWorld) http://www.infoworld.com/node/129872

 
