Richard Goodale wrote:
> If an individual's level of contribution to an organization is determined
> by how far his performance is ahead of his co-workers' performances, then
> a mediocre performer surrounded by total slackers is more valuable than a
> high achiever surrounded by other high achievers. This, I propose, is
> total nonsense. Ranking cannot, by itself, identify "good" or "bad"
> performers because it doesn't compare performance to criteria.
Many people probably won't believe this, but the scenario Richard describes
actually became embodied in U.S. Air Force policy for a period of time (and
it was DEFINITELY TOTAL NONSENSE!!).
About 20 years ago (I'm not going to pin down the actual years), the Air
Force personnel system adopted a ranking policy keyed to the annual
performance review (called the Officer Effectiveness Report -- OER). The
Air Force had always had a ranking system, but it took things to new
heights by instituting a forced-choice-with-quotas ranking system. It was
a simple system, yet amazingly destructive in its impact. Those writing an
OER on a subordinate were forced to assign an overall ranking of 1, 2, or
3 to the report. Within an organization, the quota system worked as follows:
Only 20 percent could get a 1, 30 percent could get a 2, and the remaining
50 percent would get a 3 or lower (there were actually 4s and 5s, but who
cared at that point!). It doesn't take a rocket scientist to figure out
some of the nasty fallout from such a system -- especially since the Air
Force uses an up-or-out system (i.e. get promoted to the next rank at the
right interval, or you are asked to leave).
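As a side note, the destructive arithmetic is easy to see in a toy model.
The little Python sketch below is purely hypothetical (the function, names,
and numbers are mine, not anything the Air Force actually used); it simply
shows that a quota-based forced ranking looks only at relative order within
the group, so a uniformly excellent unit still has half its people stamped
with a 3 or worse.

  # Hypothetical illustration (not the actual OER process): forced-choice
  # ranking with quotas assigns ratings purely by relative rank, so even a
  # uniformly excellent group gets half its people marked 3 or worse.

  def forced_rank(scores, quotas=(0.20, 0.30, 0.50)):
      """Assign 1/2/3 by quota over relative rank, ignoring absolute merit.

      scores -- dict of name -> absolute performance measure (higher = better)
      quotas -- fraction of people allowed each rating, best to worst
      """
      ordered = sorted(scores, key=scores.get, reverse=True)
      n = len(ordered)
      cutoff_1 = round(quotas[0] * n)             # top 20% get a 1
      cutoff_2 = cutoff_1 + round(quotas[1] * n)  # next 30% get a 2
      ratings = {}
      for i, name in enumerate(ordered):
          ratings[name] = 1 if i < cutoff_1 else 2 if i < cutoff_2 else 3
      return ratings

  # An elite unit where everyone performs at 90-95 on some absolute scale:
  elite = {"A": 95, "B": 94, "C": 93, "D": 92, "E": 91,
           "F": 90, "G": 90, "H": 90, "I": 90, "J": 90}
  print(forced_rank(elite))
  # Half of these high achievers still receive a 3, purely by quota.

Run on that "elite" group of ten, the sketch hands out two 1s, three 2s, and
five 3s -- no matter that everyone is performing near the top of the
absolute scale.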
This ranking system only survived a couple (maybe three) years. It
collapsed under the chaos it created and the severe morale damage it
caused. One of the ridiculous scenarios the system created relates
directly to Richard's quote. The Air Force, like any large, complex
organization, tends to concentrate talent and expertise where and when it
is needed. There were (and are) many organizations, from small to large,
deliberately staffed with the BEST people available in their fields to get
high-priority or difficult goals accomplished effectively, efficiently,
and quickly. Do you see the problem? You can bet the people in these
organizations did!! Can you imagine being selected for one of these jobs
and then being ranked in the lower half of the organization on your OER?!
I won't go into the bloody details; suffice it to say that this ranking
system died a swift death, though not before causing a lot of grief.
Years later, I am still amazed and appalled that a group of "experts"
(tongue firmly in cheek) put together such a system and actually got it
implemented as POLICY!
Let me say that on the overall issue of TO RATE or NOT TO RATE, I come
down on the side of rating. But I am not in favor of the traditional
annual-performance-rating type of system. I am a practitioner of
daily-feedback rating, where performance review is an ongoing, continuous,
and inseparable part of everyday business. I am also a proponent of a
rating system whose PURPOSE and FOCUS is personal and organizational
improvement, NOT pigeonholing, cataloging, or ranking (especially
forced-choice systems).
Regards
Doug Jones <djones@asheville.cc.nc.us>
Coordinator, Quality Programs