

“But experts cautioned that the test lacked some real-world conditions….”

Testing is at the core of what we do at The Tolly Group, so a newspaper story containing these words was certain to attract my attention. It might seem to some that any test lacking real-world conditions is an incomplete test and, by extension, without value. I don't agree.

Interestingly, the quote above is not from a trade publication but from a recent front-page story in The Washington Post titled “Target Intercepted in Anti-Missile Test.” In this case, experts bemoaned that the test attack was not a surprise, did not involve multiple incoming missiles and did not involve an enemy trying to thwart the tracking system. The implication was that the Pentagon test was a pointless waste of time (and, to be sure, it cost a lot of money).

So, who is right? From reading about all of the flaws in the test, one might concur with the experts. Again, I would disagree.

When you think about it, a test that tries to do too much often accomplishes nothing. Imagine if the missile test had included all of the real-world conditions listed above and had failed. The first thing anyone would want to know is why. With so many variables in play, the likely answer would be: who knows? We tried to do too much.

The essence of testing, whether in IT or elsewhere, is to isolate certain elements to establish a baseline of performance or functionality. Subsequent tests can build upon the base knowledge and be used to exercise more sophisticated features.

We need to build our testing, as the Pentagon did, by validating core functions first and then increasing complexity. It doesn't make sense to conduct, say, a test trying to establish the maximum throughput of a wireless LAN (WLAN) in an environment that you know is loaded with interference and physical obstructions. What would be the point?

Just as athletes optimize their results by wearing performance clothing at track and swim meets, it makes sense to determine the best-case performance of a technology before adding other elements. As important as real-world elements are, they are often meaningless without baseline numbers for comparison.

In the case of WLANs, for example, it didn't take us long to realize that the best throughput, even under optimal conditions, was about half of the rated speed: we learned to expect no more than about 20Mbps to 22Mbps from 802.11g LANs rated at 54Mbps.

If we hadn't established this under ideal laboratory conditions, one might have concluded that interference was driving down throughput dramatically. (As it is, there are architectural reasons for this number: per-frame overhead such as preambles, acknowledgments and interframe gaps consumes a large share of the raw bit rate.)
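To illustrate the point, here is a rough back-of-envelope sketch, not The Tolly Group's methodology, of where the airtime goes on an idle, interference-free 802.11g link. The timing constants are standard 802.11g values; the 1,500-byte payload, pure-g network (no 802.11b protection frames), single sender and absence of collisions or TCP acknowledgment traffic are all my simplifying assumptions.

```python
# Back-of-envelope estimate of 802.11g MAC-layer throughput on an idle,
# interference-free channel. Standard 802.11g (pure-g) timing constants;
# the 1,500-byte payload is an assumed typical Ethernet-sized frame.

SLOT = 9e-6             # slot time (s)
SIFS = 10e-6            # short interframe space (s)
DIFS = SIFS + 2 * SLOT  # distributed interframe space (s)
PLCP = 20e-6            # OFDM preamble + PLCP header (s)
SIGNAL_EXT = 6e-6       # 802.11g signal extension (s)
SYMBOL = 4e-6           # OFDM symbol duration (s)

def ofdm_airtime(mac_bytes: int, rate_mbps: float) -> float:
    """Airtime for one OFDM frame: preamble + data symbols + extension."""
    bits = 16 + 8 * mac_bytes + 6              # SERVICE + payload + tail bits
    bits_per_symbol = rate_mbps * SYMBOL * 1e6
    symbols = -(-bits // bits_per_symbol)      # ceiling division
    return PLCP + symbols * SYMBOL + SIGNAL_EXT

PAYLOAD = 1500                # assumed MSDU size (bytes)
MAC_OVERHEAD = 28 + 8         # MAC header + FCS, plus LLC/SNAP (bytes)
data_time = ofdm_airtime(PAYLOAD + MAC_OVERHEAD, 54)  # data frame at 54Mbps
ack_time = ofdm_airtime(14, 24)                       # ACK at 24Mbps
backoff = (15 / 2) * SLOT     # mean backoff: CWmin = 15, no collisions

per_frame = DIFS + backoff + data_time + SIFS + ack_time
print(f"Throughput ~ {PAYLOAD * 8 / per_frame / 1e6:.1f} Mbps")
# Prints roughly 30 Mbps: already well under the 54Mbps rating before TCP
# acknowledgments and 802.11b protection frames push real-world numbers
# down toward the 20Mbps-22Mbps range.
```

Even this idealized arithmetic shows the rated speed is unreachable, which is exactly the kind of baseline that keeps later field results from being misread.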

So, yes, let's do WLAN tests in environments with obstructions and interference, but let's remember the essential step of first establishing controlled results against which to interpret them. Let's recognize that a test piling on complexity is not inherently better than a straightforward test of a single aspect of a product conducted under controlled conditions.

A test must be repeatable to carry much meaning. Given the nature of some technologies, reproducing a test environment isn’t always possible. Still, similar results should be expected for similar conditions.

Finally, and most importantly, we need to remember that numbers without analysis often tell us nothing. Look beyond the numbers; look for meaning.

