What really went wrong with the project? Here is the history, according to the technical lead:
"We pitched the client an economy car, and they came back and asked us to build a stealth fighter in the same time frame. I said it was impossible, but my manager said to try anyway. And now here we are, with about 10 percent of the stealth fighter completed instead of 100 percent of the economy car."
That eyewitness account is a lot more useful than a color code, right? Based on this, it looks like the project collapsed in the first month, during the design phase.
Yet, the project plan showed the design phase as successfully completed, and the earned value system had the project green for months afterward.
It sounds like the project manager should have called the design phase a failure and sent everyone back to the drawing board.
But before we point the finger, let's think about what would have happened.
The project would have turned red barely three weeks out of the gate. It would have been the quickest trip to the woodshed in project management history.
This raises a very interesting possibility. We've already established that the project manager cheated the earned value system to make his project look better.
But did the earned value system actually encourage the PM to make bad decisions?
Could a simple scoring system designed to track project performance actually contribute to a major project failure?
Speed vs. Quality
Let's say that we have two project managers, Jack and Jill. They are both experienced, but Jack has a tendency to cut corners on tasks and ignore potential problems, while Jill is extremely cautious and is very careful to check work quality and avoid risk. Who is the better project manager?
According to our earned value management scoring system, Jack is better.
He completes his tasks more quickly and at a lower cost. The system ignores the fact that Jill's team consistently delivers higher quality work.
It's not that earned value promotes poor quality; the system is simply blind to quality.
From the perspective of earned value, quality on all tasks and all projects is equal and absolute.
When a task is marked complete, the system assumes it will meet or exceed the required level of quality for the project.
This assumption is necessary in order for the earned value metrics to be used as common yardsticks.
However, it is obvious that there's a relationship between cost, schedule and quality. We've all seen co-workers rush through tasks and cut corners in order to meet a deadline.
As a rule, rushed work is sloppy work. The problem is that earned value rewards the rush.
As project managers start to understand the system, they soon realize that they don't get any points for "extra" quality.
They get credit for racing through tasks as quickly as possible, meeting only the minimum accepted level of quality. Any extra time spent on any single task threatens their earned-value metrics.
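The bias shows up directly in the two standard earned-value ratios: the schedule performance index (SPI = EV / PV) and the cost performance index (CPI = EV / AC). Neither formula has a term for quality. A minimal sketch, with invented numbers for Jack and Jill, makes the point:

```python
def evm_scores(earned_value, planned_value, actual_cost):
    """Return (SPI, CPI): the schedule and cost performance indices."""
    return earned_value / planned_value, earned_value / actual_cost

# Hypothetical figures. Jack rushes: he claims $50k of earned value in a
# period where $40k was planned, spending $45k. Jill works carefully:
# $40k earned exactly as planned, at a cost of $42k.
jack = evm_scores(earned_value=50_000, planned_value=40_000, actual_cost=45_000)
jill = evm_scores(earned_value=40_000, planned_value=40_000, actual_cost=42_000)

print(jack)  # SPI = 1.25, CPI ~ 1.11: "ahead of schedule, under budget"
print(jill)  # SPI = 1.00, CPI ~ 0.95: looks worse, despite better work
```

On these numbers Jack beats Jill on both indices, and nothing in either formula can ever record that his "completed" tasks are of lower quality.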
The conflict between our technical lead and the project manager now seems predictable.
The project manager's evaluation depended on speed, but the technical lead was evaluated on design quality.
And because the project manager was the one with the power to decide when the task was complete and to move on, speed won over quality.
How this conflict is resolved in other organizations that rely on earned value depends upon the quality control procedures the organization has in place.
In the above case, an independent design review could have saved the project by sending it back to the drawing board right away.