Ben Compton posts what he calls a "real life scenario" and closes with the
following:
>I am open to other views on this scenario but it will take a lot to
>convince me that employee ranking couldn't have done something to drive
>performance to new levels instead of thrusting it in a constant decline.
I think the scenario Ben provides is indeed worth discussing and I'll
happily offer up my thoughts.
Ben indicates that "quality support" at Novell was measured by:
>1- Number of calls resolved within five minutes
>2- Number of calls resolved within one half hour
>3- Number of calls resolved within one hour
>4- Number of calls resolved within four hours
>5- Number of calls resolved within 24 hours
These did not fully assess what was meant by "quality support," which
appears to have been defined as "To provide the customer with a
responsive, error-free experience without wasting company resources." Nor
did the set of measures above address what Ben views as other critical
factors, for instance:
- the accuracy of the answer
- if the answer actually solved the problem
- how many times a customer had to call to get their problem solved
Ben also says, "Another factor absent from our organization were clearly
defined values. This left people confused when they confronted novel or
disturbing situations. Furthermore it left people confused about how they
should actually interact with each other at work. Expectations varied,
which led to constant and energy-sapping conflicts within the department."
Ben also points out that "there were other measurements that engineers
took into account that were never measured:"
> - How many times does a tech have to ask the same question before he/she
>"gets" it?
> - How many calls does a tech solve on their own? How much of other peoples
>time do they consume?
> - How quickly is a tech able to isolate a problem within a complex system?
> - Is a tech able to explain why what they did fixed the problem?
Ben then observes that both the formal and informal measurements affected
perceptions of who was and who wasn't competent, that management was
afraid to make these distinctions, and that when the layoffs came the
competents got the boot. He then relates a recent incident with Novell
and concludes by asking, "How could employee ranking have changed the
outcome? Is the current state of Novell's tech support due to the fact
that the competents have left? Why have they left?"
I'll make it clear from the outset that I have no knowledge of Novell's
inner workings and will not come at this from the perspective of what
Novell did or didn't do.
I'll also say that I think Ben's example makes the case that I have argued
here and elsewhere: performance appraisal systems are destructive, they
cost more than they're worth, and they should be scrapped.
The "official" measures listed above are all rate measures; that is, they
count calls resolved against units of time. They are the kinds of measures
commonly found in customer support operations and, while they drive calls
per unit of time up (and costs per call down), they typically destroy
customer service because the customer and service are conspicuously absent
from those measures.
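To make the point concrete, here is a minimal sketch (all field names and
sample data are my own assumptions, not anything from Novell) of how such
time-bucket rate measures get computed. Notice that whether the customer's
problem was actually solved never enters the calculation:

```python
# Hypothetical sketch of time-bucket "rate measures" like the five
# categories Ben lists. The data and field names are invented for
# illustration only.
from collections import Counter

# Each record: (minutes_to_resolve, problem_actually_solved)
calls = [
    (3, True), (12, False), (45, True), (200, False), (900, True),
]

BUCKETS = [(5, "within 5 min"), (30, "within 30 min"),
           (60, "within 1 hr"), (240, "within 4 hrs"),
           (1440, "within 24 hrs")]

def bucket(minutes):
    """Return the first time bucket a call's resolution time falls into."""
    for limit, label in BUCKETS:
        if minutes <= limit:
            return label
    return "over 24 hrs"

counts = Counter(bucket(minutes) for minutes, _solved in calls)
# The _solved flag -- did the fix actually work? -- is never consulted,
# which is exactly what makes these measures blind to service quality.
```

A rep who closes a call in four minutes without solving anything scores
identically to one who solves it in four minutes; the measure cannot tell
them apart.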
Most customer service reps are bright enough to know this. Most customer
service reps I know also want to be of help to the customer. They know
that the company has saddled them with a set of measures that are targeted
more on their individual production and the costs of calls than on being
of service to the customer. This is not a case of missing values, it is a
case of conflict between the values of the management (probably making
money) and the values of the customer service rep (wanting to be of help
to the customer). Such an environment is hardly conducive to learning or
to looking at better ways of supporting the tech reps. Instead, the
engineers got out of sorts because the tech reps kept pestering them for
answers to questions they'd already answered (perhaps on several
occasions). You couldn't ask for a better example of Deming's notion that
the system is the problem, not the people.
What we have in Ben's case study is a set of lousy measures used to drive
lousy performance. I can't tell whether the people who got the boot were the
ones who did well on the informal measures while the ones who were kept did
well on the formal measures but, in the last analysis, that's just icing
on the cake.
Thanks, Ben, for a truly wonderful example of just how destructive such
systems can be.
There will be some, I am sure, who will say, "But if they'd done it right,
it would have worked." My final point is this: There is no evidence
indicating to me that we can get it right often enough to make the attempt
worthwhile. We keep shooting ourselves in the foot with performance
appraisal systems and it's time to stop that lunacy.
--Regards,
Fred Nickols
The Distance Consulting Company
nickols@worldnet.att.net
http://home.att.net/~nickols/distance.htm
Learning-org -- Hosted by Rick Karash <rkarash@karash.com>
Public Dialog on Learning Organizations -- <http://www.learning-org.com>