Looking back over the last few months, I see that I’ve ended more than one column with “More on this later…” or “Coming up, the question of x…”
Except that I’ve been negligent in providing either the more at a later date or that which I’ve said is coming up.
As a small amount of compensation for this oversight, I do want to turn back to a subject we didn’t quite finish with: the critical need to clearly define the end of a project (when we’re “done”), and the objective measures that tell us clearly and publicly that we’ve been successful, or that we’ve “won.”
There’s a lot of power in having everyone agree on the conditions (usually represented by a pre-agreed set of deliverables) that mean we’re done, and on the metrics for success. Try this one on for size next time you’re trying to get a project approved: “Before we leave this meeting we will all agree in advance that if these objective conditions are met at the time we finish the project, there is no way that anyone can say (please sign here, senior management) that we haven’t won – hearty and public congratulations, promotions and bonuses will be expected accordingly.”
So far so good. A problem I’ve seen arise on projects is when the “done” and the “won” don’t line up – trouble usually turns up when the measures that tell us we’ve “won” occur way the hell and gone after all the “dones” are really done.
That was crystal clear, wasn’t it?
How about an example: Imagine we’re building the systems to improve a call-handling centre. You know the drill – the current mishmash of systems makes call handling and client support on the phone awkward, inconsistent, and too slow, therefore expensive.
After consulting with the project stakeholders, we’ve got agreement that we’re “done” when:
- We’ve had the system operational in production (define these words carefully) for 90 days, and…
- The help desk staff have been completely trained (another word that needs careful definition) on the new applications, and…
- We’ve seen a call volume of at least 500 calls per day at least five times during the 90 days.
Note that they’re all “ands.”
Fair enough – these are the “dones.” We recognize that we could still do all these things and fail (i.e. not meet the “won” criteria), but it does give us an agreed-to and objective point at which to declare the project closed.
Now to the second question: at (or even before) the point we’re done, how will we know we’ve won? I’m looking for the performance metrics here.
After much twisting of arms, gnashing of teeth and rending of garments, we get our business partners to agree on (write down, sign off, publish widely) three objective measures of success – we’ve won on this project if:
- The average call handling time after the installation of the new system is reduced from 170 to 70 seconds, and…
- The call centre staff is reduced by 30 per cent (this makes sense – faster calls require fewer people), and…
- Our customer satisfaction level increases from 60 per cent to over 80 per cent (since I’m a stickler, by this I mean that at least 80 per cent of our customers give us a rating of four out of five or five out of five on our customer satisfaction survey).
So far so good. We’ve got a clean project end point, and clear metrics for success.
But wait a minute – we’re not finished until we’ve ensured that the “dones” and “wons” line up.
It’s pretty clear when the CEO says: “My success criteria for this project is substantial cost savings for the next five years” that you’ve got a project vs. ongoing operations definition bust. And of course we wouldn’t be caught out by this one – we would respond by saying “With all due respect sir/ma’am, unless you want me to manage this system as a project for the next five years we’re going to need some metrics for success well before that – success metrics at or before the point we finish the project and move the thing into a state of ongoing operations.” How smart we are.
It’s when the dones and wons are just a little off that problems arise. Back to our example and a critical review:
In the 90 days that the system is in operation, will we get enough transactions to give evidence that average call handling times are reduced from 170 to 70 seconds? If yes, we’re OK with this one.
In the 90 days that the system is in operation, will we be able to effect a staff reduction by 30 per cent? If yes, we’re good to go with this one too.
In the 90 days etc. etc., will we be able to move our customer sat. numbers up from 60 to over 80 per cent? Hold the phone on this one: how often do we do customer satisfaction surveys? The answer comes back: once every six months.
We have a bust here, ladies and gentlemen: a disconnect between the done and the won. We’re done after 90 days in production, but we can’t possibly know that we’ve won until at least six months after the system has gone live.
We have a choice here – extend the end of the project (expand the done) until we’ve been in production for at least six months, or alternatively, take the third won measure off the table. One or the other, but these dones and wons can’t live side by side in the same project without causing trouble.
It comes down to a simple rule of thumb: make sure your wons all occur at or before the point the project is done. If they do, you’re in good shape, if not, your end point is going to drift – and how do you schedule for drift?
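The rule of thumb lends itself to a simple sanity check at planning time. Here’s a minimal sketch of one way to do it – all the metric names and dates below are invented for the call-centre example, not taken from a real project:

```python
from datetime import date, timedelta

# Hypothetical go-live date for the call-centre system.
go_live = date(2024, 1, 1)

# "Done" point: 90 days in production, per the agreed criteria.
done_date = go_live + timedelta(days=90)

# Earliest date each "won" metric can actually be measured.
won_metrics = {
    "call handling time 170s -> 70s": go_live + timedelta(days=90),
    "staff reduced by 30 per cent": go_live + timedelta(days=90),
    # The satisfaction survey only runs every six months, so the
    # first post-launch reading lands well after the 90-day mark.
    "customer satisfaction 60% -> 80%": go_live + timedelta(days=182),
}

# The rule of thumb: every "won" must be measurable at or before "done".
drifters = [name for name, measurable_on in won_metrics.items()
            if measurable_on > done_date]

for name in drifters:
    print(f"Disconnect: '{name}' can't be measured until after we're done")
```

Anything that shows up in `drifters` is exactly the kind of done/won disconnect described above: either push out the done date or drop that measure from the project.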
Hanley is an IS professional in Calgary. He can be reached at firstname.lastname@example.org.