How much do bugs cost us? We were recently given an estimate. The U.S. government’s National Institute of Standards and Technology (NIST) released a study that says software errors cost U.S. users US$59.5 billion each year. That’s more than Bill Gates is worth these days (US$52.8 billion, according to Forbes), but it’s still only about US$200 per American.
Of course, most corporate IT shops won’t care.
Then again, the NIST estimate is very conservative – it specifically excludes catastrophic software errors that might shut down a business or cost big money in some other way. That US$59.5 billion is just the cost of routine work-arounds and corrections by users, along with the added cost of buggy software that had to be fixed late in the development process. The real cost of bugs is much higher.
Most IT shops still won’t care.
Oh yeah, the NIST study also calculates that US$22.2 billion of that US$59.5 billion, or 37 per cent, could be saved through “feasible improvements” in software testing. Not pie-in-the-sky, exterminate-all-bugs campaigns, but practical approaches to catching and fixing bugs sooner in the development process.
Hear that yawn? That’s the sound of most of us still not caring.
And why don’t we? We should. We’re always hearing from analysts and the boss how important it is to get a return on investment from IT, and how IT’s No. 1 goal these days is cutting costs. But we know it’s hard to quantify the ROI of having fewer bugs. It’s tough to tote up the savings from time not wasted by employees using the applications we create.
Now, for once, we’ve got credible numbers that say we can get a quantifiable return if we invest in better ways of writing software. And we don’t have to become perfect to get that return. We just have to get better.
And the improved security from fewer bugs? Better customer retention on the Web site because of fewer bugs? Reduced downtime and lower risk of lost data or catastrophic failure due to fewer bugs? We’d get all that too, along with our share of that quantifiable US$22.2 billion in low-hanging fruit.
So why don’t we do it? It’s not that we really don’t care. It’s that, at a gut level, we don’t believe that it’s possible, or that it’s necessary, or that it matters. We indulge ourselves with the idea that all software has bugs, so trying harder to get rid of them is pointless perfectionism.
Intellectually, we should know better. Our ways of building applications are outdated. They come from a time when users were grateful to have any real-time access to data at all, when customers were insulated from all our systems by at least one layer of employees, when our systems were important but not truly mission-critical: if worst came to worst, employees could keep the business going using pencils and paper.
But now our users just plain can’t do their jobs if our systems fail. And we have real, revenue-generating customers using our Web applications and experiencing every error and flaw. And bugs aren’t just an annoyance anymore; they’re a real threat to productivity and profitability.
Software is ever more critical and ever more complex. And with so much riding on it, we can’t afford the bug-ridden way most of us are still writing it. We’ve got to prevent the bugs we can and catch a lot more of the rest long before the code moves on to QA.
There are lots of ways of doing it: programming in pairs, unit testing early and often, radically increasing use of automated testing tools. There are straitjacket methodologies and lightweight best practices, approaches that will cost a bundle, and techniques that just require a little training and a lot of commitment on the part of programmers.
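To make the “unit testing early and often” practice concrete, here is a minimal sketch in Python using the standard-library unittest module. The `apply_discount` function and its rules are hypothetical, not from the column; the point is that each test pins down one expected behaviour so a bug surfaces the moment it is written, not months later in QA.

```python
import unittest

def apply_discount(price, discount_pct):
    """Hypothetical business function: apply a percentage discount to a price."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_discount_is_rejected(self):
        # Catching bad input at the unit level is far cheaper
        # than discovering it in production.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

A few minutes writing tests like these is exactly the kind of “feasible improvement” the NIST study has in mind: cheap, routine, and aimed at catching bugs while they are still easy to fix.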
But none of them will make a difference until we really do start believing that bugs matter and that our software has far too many of them and that they’re costing us billions.
Hayes, Computerworld (U.S.) senior columnist, has covered IT for more than 20 years. Contact him at firstname.lastname@example.org.