Ever since the mid-’90s, when switched and Fast Ethernet started to appear on campus networks, we’ve witnessed a bandwidth boom the likes of which we’d not seen before. This bandwidth glut played a significant role in suppressing interest in implementing LAN-based quality-of-service schemes. In a somewhat bizarre twist, though, the move to wireless may well cause this situation to reverse itself.
For all that wireless brings – and I’m a strong proponent of it – it is not a panacea. Like most technologies, there are trade-offs. Gone is the world where your desktop or notebook commands a dedicated 100 Mbps connection to a LAN switch that, by the way, can offer similar connectivity to dozens, if not hundreds, of your colleagues.
In its place is the access point. Linked “upstream” to a single switch port via Fast Ethernet, it becomes the “hub” (figuratively and literally) of your wireless LAN. And if you are thinking hub as in the old-fashioned 10 Mbps Ethernet hubs that used to be bottlenecks in your network, you’ve got a good handle on the situation.
While not identical, the comparison works well. With wireless, you and your neighbours must share a very limited amount of bandwidth. Given that virtually all of the wireless LANs installed today are based on the 802.11b standard, that “pool” of bandwidth is but 11 Mbps.
Thus, we find ourselves in much the same situation we were in during the pre-switching, pre-Fast Ethernet age, only worse. With today’s faster PCs, streaming applications and monster PowerPoint files, a single application could consume every bit of that bandwidth.
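The arithmetic behind that concern is worth making explicit. A minimal sketch, with the caveat that the ~6 Mbps effective-throughput figure is a rough rule of thumb for 802.11b after protocol overhead, not a measured value:

```python
# Illustrative arithmetic only: 802.11b signals at 11 Mbps, but CSMA/CA
# and framing overhead typically leave roughly half that for user data.
NOMINAL_MBPS = 11.0
EFFECTIVE_MBPS = 6.0  # rule-of-thumb usable throughput

for stations in (1, 5, 10, 25):
    share = EFFECTIVE_MBPS / stations
    print(f"{stations:2d} active stations -> ~{share:.2f} Mbps each")
```

With 25 active users on one access point, each is back to roughly quarter-megabit territory, which is why a single streaming session or monster file transfer can starve everyone else.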
Network executives need to do something – but what?
Ironically, most of the fancy QoS we’ve got built into our LAN switches won’t help us much. Those switches typically implement class-based queuing. This sorts individual packets into different priority queues but doesn’t kick in until the LAN port is saturated – something that is never going to happen when an access point with 11 Mbps of downstream capacity is wired into a 100 Mbps Fast Ethernet switch port.
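The reason is simple queuing logic: a backlog forms at a switch port only when the offered load exceeds the port’s line rate, and priority queuing only matters while that backlog exists. A toy sketch of the idea (the numbers are illustrative):

```python
def switch_port_backlog(offered_mbps, line_rate_mbps, seconds):
    """Bits queued at a switch port after `seconds` of constant load.
    Class-based priority queuing only comes into play while this > 0."""
    excess_mbps = max(0.0, offered_mbps - line_rate_mbps)
    return excess_mbps * 1_000_000 * seconds

# Traffic bound for the access point is throttled by its 11 Mbps radio,
# so the 100 Mbps port feeding it never backs up:
print(switch_port_backlog(11, 100, 10))   # → 0.0
```

The congestion point is the radio inside the access point, not the wired port, so the switch’s priority queues sit idle while wireless users fight over airtime.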
Enter the “bandwidth manager.” Long used at the edge of the campus, where the Fast Ethernet/Gigabit Ethernet superhighway typically becomes a T-1 country road, bandwidth-management products excel at doling out precious bits of bandwidth.
Unlike the typical LAN switch, most bandwidth managers implement flow-based queuing algorithms. This lets them exercise precise control over each traffic session that traverses the device. Because most are sensitive to application-level characteristics, one can easily configure them to put a governor on, say, FTP or HTTP download streams. Some are sophisticated enough to allow “full throttle” when there are no other stations vying for the bandwidth.
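One common building block for this kind of per-flow governor is a token bucket. The sketch below is a generic illustration of the technique, not the mechanism of any particular product, and the 2 Mbps cap on a hypothetical FTP flow is an invented example:

```python
import time

class TokenBucket:
    """Minimal per-flow rate limiter. One instance would be kept per
    flow, e.g. keyed by (source, destination, port)."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # sustained rate, bits per second
        self.capacity = burst_bits  # maximum burst, bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # forward the packet
        return False      # delay (or drop) the packet

# Cap a hypothetical FTP download at 2 Mbps of the 11 Mbps pool:
ftp_flow = TokenBucket(rate_bps=2_000_000, burst_bits=30_000)
print(ftp_flow.allow(12_000))   # 1,500-byte packet → True
```

The “full throttle when idle” behaviour the more sophisticated products offer amounts to letting a flow borrow unused tokens from other flows’ buckets instead of holding every session to its configured cap.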
So it would seem that a bandwidth manager is in order. But, as always, there are trade-offs.
For starters, bandwidth-management devices are not free. While the devices are typically easy to manage, their added cost will affect your capital budget.
The biggest concern will be with network design. Today, many network managers simply plug in wireless segments where needed. Given that traffic from every one of those segments would need to traverse the bandwidth manager, this presents a problem. There’s no way you’d want to buy a dedicated box for each segment.
So some network redesign might be required – at least until true bandwidth management gets bundled inside access points.
Tolly is president of The Tolly Group, a strategic consulting and independent testing company in Manasquan, N.J. He can be reached at firstname.lastname@example.org.