
Case Study: Red light, green light

With hundreds of new IT initiatives rolled out in an average year, General Motors Corp. North America is up to its hubcaps in technology projects, but the venerable car manufacturer is wary of turning project tracking into a project unto itself. The solution: an easy-to-use, easy-to-interpret project “dashboard” that uses three signals instantly familiar to anyone who’s ever sat behind the wheel: green light, yellow light, red light.

In 1999, feeling that GM North America lacked a formal, common way to report metrics among workgroups and to management, several members of the IT leadership team developed a handful of instruments designed to track project status. Useful as they were, however, those reports were still too detailed for senior managers, who wanted only to keep an eye on the progress of projects many levels below them.

“When you get higher in management, your questions are more about the overall health of a project, not the details,” explains David S. Clarke, director of IT operations and infrastructure for GM North America in Detroit. “As we were reporting on projects to upper management, we found that we were giving them a view that was too detailed and not consolidated enough.”

So a group of leaders from the CIO’s project management office set to work developing a dashboard that colour-codes the status of all IT projects: green when a project is progressing as planned, yellow when at least one key target has been missed, red when the project is significantly, even if just temporarily, behind. “The dashboard is a signaling method; it’s a way to send a message fast,” Clarke says.

From inception, every project is tracked and rated monthly on four dashboard criteria: performance to budget, performance to schedule, delivery of business results and risk. The individual measures and triggers used to track the status of those four criteria are determined at the outset of the project by the project manager, a planning manager and other relevant executives.

Each of the four categories is then assigned a colour status each month by the project manager after a review of that category’s relevant measures. Finally, the overall project is assigned a colour for the month. By design, the technology itself is as simple to use as possible: an Excel spreadsheet and a PowerPoint presentation template.
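To make the mechanics concrete, here is a minimal sketch of how such a monthly rating could be modelled in code. GM’s actual tool was an Excel spreadsheet and a PowerPoint template, so the names and the worst-colour roll-up rule below are illustrative assumptions, not GM’s implementation:

```python
# Minimal sketch of the monthly dashboard rating; the names and the
# worst-colour roll-up rule are assumptions, not GM's implementation.
from enum import IntEnum

class Status(IntEnum):
    GREEN = 0   # progressing as planned
    YELLOW = 1  # at least one key target has been missed
    RED = 2     # significantly, even if temporarily, behind

CRITERIA = ("budget", "schedule", "business_results", "risk")

def overall_status(month: dict) -> Status:
    """Roll the four criterion colours up into one project colour.

    A simple worst-colour rule; as described under "True colours"
    below, GM's project managers had some discretion in this call.
    """
    return max(month[c] for c in CRITERIA)

# Example month for a hypothetical project:
april = {"budget": Status.GREEN, "schedule": Status.YELLOW,
         "business_results": Status.GREEN, "risk": Status.GREEN}
print(overall_status(april).name)  # YELLOW
```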

Four-way intersection

A few years ago, GM North America’s midsize and luxury car unit, or Mid/Lux unit, upgraded from Windows 3.1 to Windows 95. For that particular project, the four dashboard criteria were measured as follows.

– Performance to budget was determined by comparing the budgeted per-seat cost of the rollout against actual costs, which included the contractors’ charges to physically install the software, the per-seat cost of the software itself and the cost of training. GM North America had determined previously that Windows 95 would be cheaper to maintain and support than Windows 3.1 had been, which meant the cost of the rollout was expected to drop each month as the savings from new installations accrued. That dividend was built into the budget, so to gauge its performance to budget, the Mid/Lux unit needed to track how many seats were deployed each month. If the cost per seat was on or under budget for the month, the rating was green; a predetermined percentage over budget automatically warranted either a yellow or red rating (a short sketch following this list illustrates the threshold logic).

– Performance to schedule was determined by counting the number of seats deployed per month and comparing that to the rollout schedule established at the project’s inception. Parameters were ahead of schedule, on schedule or behind schedule.

– Delivery of business results was gauged in this case by looking at two variables: whether the rollout was on target to achieve the anticipated cost savings of moving to Windows 95, and whether the team was migrating users successfully. The former was measured, as in the performance-to-budget criterion, by counting the savings generated by the number of seats deployed per month. The latter (in essence, customer satisfaction) was measured by user surveys and help desk activity: whether printer mappings were installed correctly, whether shared-drive access was maintained after the upgrade and whether new Windows 95 users could do, at minimum, everything they had been able to do previously. While user satisfaction is difficult to measure quantitatively, Clarke says, planning managers tried to stick to hard-and-fast numbers where possible; a certain number of negative responses in the user surveys would automatically require a yellow or red rating, for example.

– The dashboard’s risk-mitigation measure isn’t meant to take the place of a full-blown weighted risk-assessment methodology, which is calculated separately for larger projects. Rather, it serves simply as a way to track potential hazards to a project, explains Clarke, who was chief information officer for the Mid/Lux unit at the time. Project managers identify potential risks at the outset, establish plans to mitigate those risks, then measure each month how well the risk is being controlled. For the Windows 95 rollout in the Mid/Lux unit, the risk was that the core build of the new OS wouldn’t be available on time. Parameters were determined by measuring whether Microsoft Corp. was meeting its promised delivery targets. Of course, risks can and do surface suddenly mid-project. For example, Clarke discovered in the early months of the Windows 95 rollout that the software his team had installed wasn’t fully Y2K compliant, which meant that they had to go back and modify the workstations they had already upgraded.
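The bullet on performance to budget above mentions predetermined trigger percentages. Here is a short sketch of how that threshold logic might look; the 5 per cent and 15 per cent cut-offs are invented for illustration, since GM set the actual triggers per project at the outset:

```python
# Hypothetical threshold logic for the performance-to-budget criterion.
# GM's planning managers set the real trigger percentages per project;
# the 5% and 15% cut-offs below are invented for illustration.

def budget_colour(actual_per_seat: float, budgeted_per_seat: float,
                  yellow_over: float = 0.05, red_over: float = 0.15) -> str:
    """Colour one month's performance to budget from per-seat costs."""
    overrun = (actual_per_seat - budgeted_per_seat) / budgeted_per_seat
    if overrun >= red_over:
        return "red"
    if overrun >= yellow_over:
        return "yellow"
    return "green"  # on or under budget, or beneath the yellow trigger

# A hypothetical month: $1,080 actual against a $1,000 budget is 8% over.
print(budget_colour(1080.0, 1000.0))  # yellow
```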

True colours

Generally, a green score overall indicates that all four criteria underneath are green as well, though there is some latitude in that judgment. Similarly, if there’s red anywhere in the dashboard, it’s usually red “on top,” or overall, as well. Yellows and greens are a bit harder to call. “It’s discretionary. People will get cute and put in notes saying, ‘This is orange,’” Clarke says.

What happens when a project goes red? Dashboard proponents at GM North America have worked hard to educate project managers and team members that red doesn’t mean “bad”; it means “help.” “Red means, I need more money or more people or better business buy-in or a business champion or help with a vendor,” Clarke explains. The dashboard serves as an early-warning system that allows IT managers to identify and correct problems before they become big enough to derail a project.

It’s not unusual for projects to show a red month or two, and the dashboard includes a text area where mitigating factors can be noted. But three red months in a row is normally the outer limit before intervention is necessary, and on critical projects a single red can be enough to bring out the troops.

At one point, the Mid/Lux Windows 95 rollout was red simultaneously in three different areas: budget, schedule and risk, Clarke says. The group needed additional funding, additional resources to perform necessary software and hardware upgrades, and additional help with training and migration from a different group within Information Systems & Services (IS&S).

“Presenting that status in not just one meeting but several helped everybody understand the difference between the program we planned and the work that was happening on the front lines,” Clarke says. “That red status was a neutral way to communicate that our two groups needed to work more closely together.”

Frequently, red status triggers a corrective-action meeting between the relevant executives on the project and a CIO from the business-unit level or higher. Mark Thompson, director of planning for GM North America Information Technology, recalls one case in which a project hit a yellow light because a supplier wasn’t meeting its delivery schedule. Because the project was considered strategically crucial, GM went into immediate alert mode: the supplier and other project executives were called into a meeting with the high-ranking CIO for GM North America. “That’s one meeting everyone aspires never to be in,” says Thompson. “It can be very painful. Let’s just say there were some immediate behaviour adjustments all the way around.”

Give them what they need

Thompson and Clarke emphasize that the project dashboard succeeds because it’s supported by a host of more in-depth reporting mechanisms in the background. In effect, the dashboard is the tip of the iceberg for those times when managers need to see only the tip. If and when they want to see more, they most often turn next to the “4-up report,” a one-page, four-quadrant report that gives a detailed synopsis of a project’s status across financials, deliverables, milestones and risk activities. And large, complex projects like Y2K are often subject to an additional earned-value evaluation, in which points are assigned to each task and a numeric value, obtained at certain milestones, objectively indicates how well the project is performing.
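The article doesn’t spell out GM’s point scheme, but a standard earned-value calculation conveys the flavour of that kind of evaluation. The sketch below is a generic illustration under the assumption that every listed task was scheduled to be complete by the reporting date:

```python
# Generic earned-value arithmetic in the spirit of the evaluation the
# article describes; GM's exact point scheme isn't specified, so the
# standard earned-value quantities are used here as an assumption.

def earned_value_report(tasks):
    """tasks: (points_budgeted, fraction_complete, points_spent) triples,
    assuming all listed tasks were scheduled to be done by now."""
    planned = sum(p for p, _, _ in tasks)           # budgeted work scheduled
    earned = sum(p * done for p, done, _ in tasks)  # budgeted work performed
    actual = sum(spent for _, _, spent in tasks)    # actual cost of work
    return {
        "schedule_performance": earned / planned,  # < 1.0 means behind schedule
        "cost_performance": earned / actual,       # < 1.0 means over budget
    }

report = earned_value_report([(10, 1.0, 12), (20, 0.5, 9), (5, 0.0, 0)])
print(report)  # schedule_performance ~0.57, cost_performance ~0.95
```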

Additionally, a project-approval process that Thompson calls “grueling” weeds out weak project proposals at the outset. And once approved, major initiatives are subjected to additional scrutiny in the form of a project-development methodology borrowed from systems development that defines six “toll gates” (planning, definition, design, build, deploy and close) during which performance is gauged and analyzed, according to Thompson. Projects with major flaws would falter at these gates long before their dashboards turned red.

When it comes time to distribute project-tracking assessments obtained with its project dashboard, GM North America’s tool is as flexible as it is simple. An individual project manager might distribute to her immediate supervisors the entire dashboard, with status colours showing for all four categories it measures. By contrast, a report to a business-unit CIO or regional CIO summarizing the status of all the projects of a particular business unit would most likely show just a single colour per project, with a small text comment when appropriate.

In addition, GM North America’s Process, Integration and Quality Assurance group, which administers the dashboard, assigns point values to colours (green, 2; yellow, -1; red, -3). That helps maintain rolling, 12-month views of all projects and all operational metrics, such as network downtime and help desk response times, for a given division or business unit. Those reports allow IT to obtain a comparative picture of overall performance over time. “The scores out of context mean nothing. But over the course of a year, it helps us gauge stability,” Clarke says.
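Using the point values quoted above (green, 2; yellow, -1; red, -3), a rolling 12-month score, plus the three-consecutive-reds rule of thumb mentioned earlier, could be computed along these lines. The helper names and sample history are invented for illustration:

```python
# Rolling 12-month scoring with the point values the article quotes
# (green 2, yellow -1, red -3); the intervention check reflects the
# "three red months in a row" rule of thumb described earlier.

POINTS = {"green": 2, "yellow": -1, "red": -3}

def rolling_score(monthly_colours):
    """Sum the most recent 12 monthly colours into one comparative score."""
    return sum(POINTS[c] for c in monthly_colours[-12:])

def needs_intervention(monthly_colours, limit=3):
    """True once a project has been red for `limit` consecutive months."""
    run = 0
    for colour in monthly_colours:
        run = run + 1 if colour == "red" else 0
        if run >= limit:
            return True
    return False

history = ["green"] * 8 + ["yellow", "red", "red", "red"]
print(rolling_score(history))       # 6  (16 - 1 - 9)
print(needs_intervention(history))  # True
```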

GM North America’s IS&S department likes the red-yellow-green dashboard metaphor so much that it uses it to track operations as well. A separate but identical dashboard reports on network performance, Notes availability, help desk satisfaction, delivery of new user PCs and IDs, availability of the company’s GM online Intranet, lost hours of productivity and so on.

“We started with a separate way to do this, but the conclusion we came to is that we can use the dashboard method to portray both projects and operations,” Clarke says. “Whether we’re sustaining operations or starting something new, we still need to send a status message, and we all understand the meaning of red-yellow-green.”

Have a value methodology you’d like to share and have analyzed? Contact us at casefiles@cio.com. Tracy Mayor is a writer who specializes in business, technology and parenting.

Valuation flow chart

GM North America’s dashboard rating process

– Rate all IT projects on four criteria (performance to budget, performance to schedule, delivery of business results and risk) using predetermined metrics.

– Assign a colour code to each criterion for each project.

– Assign a colour code to each project for overall status that month.

– Report to upper management.

– Assign point values to colour codes for rolling, 12-month views.

Expert Analysis: Give regression a try

By Douglas Hubbard

GM North America’s dashboard tool takes an intuitive approach to summarizing large amounts of project status data with a color scheme that everyone understands. I’m sure it goes a long way toward helping upper management keep an eye on projects underway.

The one thing I’d caution is that GM should make sure the right data is being summarized and represented. Project tracking is, after all, a forecasting problem: the goal is to reliably point out projects that need help (that is, projects whose outlook is poor without it) without bringing undue attention to projects that are getting along just fine. I’m not certain that the four metrics included in GM’s tool forecast problems reliably. Evaluations of business results and risk, in particular, are made fairly subjectively. As a test of the metrics’ predictive power, GM North America might look at historical data for all projects that were consistently coded green in the risk category, for example, and see whether more of them were actually successful implementations than those where risk was coded red. If not, the data being summarized isn’t of much use for managing project success.
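That back-test is straightforward to sketch. The records below are entirely hypothetical; the point is the comparison itself: if consistently green projects succeed at roughly the same rate as projects that went red, the risk colour carries little predictive information:

```python
# Sketch of the back-test Hubbard suggests: compare success rates for
# projects consistently green in the risk category against those that
# went red. All records below are invented for illustration.

def success_rate(projects):
    return sum(p["succeeded"] for p in projects) / len(projects)

history = [
    {"risk_colours": ["green"] * 12, "succeeded": True},
    {"risk_colours": ["green"] * 12, "succeeded": True},
    {"risk_colours": ["green", "red", "red", "green"], "succeeded": False},
    {"risk_colours": ["green", "yellow", "red"], "succeeded": True},
]

always_green = [p for p in history if set(p["risk_colours"]) == {"green"}]
went_red = [p for p in history if "red" in p["risk_colours"]]

# Similar rates would mean the risk colour adds little forecasting value.
print(success_rate(always_green), success_rate(went_red))  # 1.0 0.5
```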

An “actuarial” analysis of GM’s extensive project history might also be useful. A formula could be derived that considers only objective criteria and computes probabilities of potential disasters. Factors such as changes in project management, skill levels of staff, level of sponsorship and number of business units serviced could all be put into a regression model to see if they are correlated to future undesirable outcomes, such as cancellation of the project. This is a practical analysis that has already been implemented at other companies.
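A hedged sketch of that kind of regression, using scikit-learn’s logistic regression on invented project records (the feature set follows Hubbard’s examples; none of this is GM data):

```python
# Sketch of the "actuarial" regression Hubbard describes, fit with
# scikit-learn's logistic regression on invented project records.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: management changes, staff skill level (1-5),
# sponsorship level (1-5), number of business units served.
X = np.array([
    [0, 4, 5, 1], [2, 2, 2, 4], [1, 3, 4, 2], [3, 2, 1, 5],
    [0, 5, 4, 1], [2, 3, 2, 3], [1, 4, 3, 2], [4, 1, 2, 6],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = cancelled or otherwise failed

model = LogisticRegression().fit(X, y)

# Estimated probability that a new project (one management change,
# skilled staff, strong sponsorship, two business units) ends badly:
print(model.predict_proba([[1, 4, 4, 2]])[0, 1])
```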

Furthermore, with an objective model of risks, GM’s Process group wouldn’t need its system of assigning arbitrary point values to the colours for its 12-month views. The colour codes are already somewhat arbitrary and subjective, but assigning numbers to them would, in the field of decision theory, be considered an “information-destroying” step: the forecasting ability of the numerical score is probably even worse than that of the colour assignments themselves. A regression method would certainly produce better forecasts. I’m also certain that the size of GM North America’s project portfolio would easily justify the effort to create a more statistically sound approach to project tracking.

Douglas Hubbard is president of Hubbard Decision Research in Glen Ellyn, Ill., and inventor of the applied information economics method for valuing IT projects. He can be reached at dwhubbard@hubbardresearch.com.
