I was having lunch the other day with a CIO whom I’ve known for a long time and we started talking about some of the organization’s major projects. This is a fairly departmentalized organization, so sometimes projects come up that are created by specific departments more or less independently, and the enterprise CIO has only review responsibility. A couple of these projects were running well into the tens of millions of dollars. One of them, still in an early stage, had a significant eight-digit price tag.
“How does anybody know early on that a project like that will cost x million?” I asked.
“These days, everything seems to cost x million,” my friend responded.
“Doesn’t it seem to you that somewhere in the mid-1990s, the price of everything — particularly large software projects — seemed to jump an order of magnitude?” I asked.
My friend agreed that things were certainly far more expensive than they were a decade ago. “But if you put one of these projects out for bid, all the quotes come in about the same.”
At that point, I did what I often do in times like this: I picked up the phone and called one of my other CIO friends who runs a very big IT shop and asked the same question: “Why has software gotten so expensive?”
This CIO’s response was quick and decisive. “It’s because things have become so complex and so interrelated. Think about three-tier applications, the Internet, and security — just to name three really big factors. More and more packages that we’re buying reach into more and more parts of our business. You can’t do much of anything for $20 million anymore. Some enterprise projects escalate into the hundreds of millions before you know it.”
This got me thinking. What if software (and a lot of other things) had gotten a lot more expensive not just because of the complexity or the sophistication of the system, but rather because of the complexity of the software process or the technology platform? What if we had recalibrated our expectations, and that recalibration itself was part of the problem? Inflation is an amazing thing. When I was a kid, I remember my parents talking about the Depression and about how, in the 1930s and 1940s, you could buy bread for a nickel and a quart of milk for a dime.
Of course, I listened to all this in a disbelieving fashion — I knew how much things really cost, didn’t I? Today, I find myself talking to my kids about a time when you could buy a new car for $5,000 and a good house for $40,000, and they look as disbelieving now as I did then. I wonder whether part of the problem with the extreme cost of software isn’t a certain mindset wrought in part by our decision to stop building software ourselves — our decision to outsource the tough stuff.
The FBI recently announced that it was going to abandon something called the “Virtual Case File System,” in which it had invested something around $110-$120 million. The Bureau concluded that the system didn’t meet its needs and wasn’t what was required technologically. In a number of articles that discussed this problem, industry experts commented that the FBI’s big mistake was trying to have a contractor build a law enforcement case system from scratch rather than buying one (good point). But one of these experts implied that you couldn’t expect to build a very good case management system for “only” $100 million.
I still think that $10 million is a lot of money, and I think that $100 million is a whole lot of money. Unfortunately, I have come to think that a $100-million project is too big for most people in the software business to manage successfully. I think we need to start breaking some of these really big projects into much smaller components, and we need to spend a lot more time on the overall architecture before we hire one of the three biggest ISVs in the world for tens or hundreds of millions of dollars.
— Ken Orr, Fellow, Cutter Business Technology Council