Employee Ranking Systems LO16802

Richard Goodale (fc45@dial.pipex.com)
Mon, 02 Feb 98 17:14:24 GMT

Replying to LO16751 and LO16734--

Rol

Thanks for supporting me with your comments, including the one that stated
"the really important point about ranking... is what to do with the superb
performers."

I agree, but.... I also think that the points in your original post
(about the value of "ranking" systems in identifying "sub-par" performers,
AND helping them to grow) are equally valuable. And, I also think that
ranking systems, if properly designed and employed, can also be very
valuable in improving the performance and personal satisfaction of the
"5-sigmas (+/-)" (those of us who inhabit the great belly (underbelly?) of
the infamous bell curve).

Let me elaborate by describing something of my most direct (if least time
proximate) experience in the area. I'm inspired to put down these
thoughts by Doc Holoway's comments in his post (LO 16734), to wit:

"Finally, I'm convinced that any and all attempts at ranking will always
be subjective and capricious at some point in the ranking process. In my
many years' experience working with promotion systems in one of the most
rank conscious organizations (the military), I found that the process of
ranking people within the top ranked 2 or 3 percentile (in order to choose
the top .5 percentile who would be promoted) was simply reliant on human
judgment. Human, subjective, judgment--which is the best that we can rely
on . . . and that's the crux of the problem. Mental models, cultural
filters, biases and preferences all play havoc with selecting the right
people--those factors and the point that we rank past behavior/performance
as indicators of future behavior/performance. Sort of like the vaunted
Heisman Trophy being an indicator of how well the recipient will do in
professional football (I think that their success rate is around 50%,
isn't it)."

As I've mentioned/implied before, in 1969-1970 I managed the personnel
data base for 350-400 US Army officers (Majors and Lt. Colonels of a
specific branch). I continue with this anecdote largely for my own
purposes (I've never written it down before), but also in the hope that
others may find some value in it and/or may sharpen my
recollection/interpretation through their comments. Those of you who wish
to tune out the arcana of one's nearly forgotten experiences, please
feel free to do so.

The primary performance evaluation system in those days was the Officer
Efficiency Report (OER). It consisted of a detailed set of questions
relating to what the Army thought was important to command efficiency,
ranging from the obvious (i.e. "Does X inspire respect from his
subordinates?") to the superficially trivial (i.e. "Does X dress
smartly?"). Each question on the OER could be answered per a 5 point
scale, ranging from Outstanding to Unsatisfactory. (Note: All these
recollections are approximate). I do not recall the exact number of
questions asked, but I do think that some had more weighting than others.
I do know that the maximum score was 240, and tenths of a point were
possible. So, scores on the OER's could range from 0.0 to 240.0,
nominally a 2400 point scale. (I can feel the hackles rising from some
readers already!).
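As an aside, the mechanics of such a scale are easy to reconstruct. The sketch below is purely illustrative: the post does not recall the number of questions or the exact weights, so the ten questions and the double-weighting of two of them are invented assumptions; only the 5-point answers, the differential weighting, the 240.0 maximum, and the tenths-of-a-point precision come from the description above.

```python
# Minimal sketch of a weighted OER-style score, assuming a 0-4 rating
# per question (4 = Outstanding, 0 = Unsatisfactory). The question
# count and weights are hypothetical; only the 240-point maximum and
# tenths-of-a-point precision are taken from the post.

def oer_score(ratings, weights, max_score=240.0):
    """Scale weighted ratings so a perfect report equals max_score,
    reported to tenths of a point."""
    raw = sum(r * w for r, w in zip(ratings, weights))
    perfect = sum(4 * w for w in weights)
    return round(max_score * raw / perfect, 1)

# Hypothetical report: ten questions, the first two double-weighted.
weights = [2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
ratings = [4, 4, 4, 4, 4, 4, 4, 3, 4, 4]  # one mark below Outstanding
print(oer_score(ratings, weights))  # a single slip already costs points
```

Note how compressed such a scale becomes in practice: with nearly every answer at the top mark, one sub-Outstanding rating drops the score well below 240, which is consistent with the narrow 236-239 band described below.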

In reality, scores submitted tended to range from 236-239. (The average of
the group of officers for whose records I was responsible was 237.6).
Those whose reports fell below that range did not get promoted. Those
whose scores fell above that range got early promotion (5-10% of the
population going from Major to LTC, 1-2% of those going from LTC to full
Colonel).

All OER's were supported by relatively detailed verbal descriptions from
superiors and their superiors, each of which was reviewed with the officer
involved (180 degree feedback).

Some (all?) of this may seem silly, but......

1. It did efficiently/ruthlessly weed out the people that did not meet
the values/objectives/visions of the organisation.

2. It did identify the areas needed for "improvement" from those in the
great belly of the bell curve, and did so in a relatively non-threatening
manner (i.e. if a LTC got an OER with a score of 238, his CO could both
congratulate him/her (for getting 238 out of 240) and find ways to
work with him/her on the areas where "minor" deficiencies were noted).

3. It did, very effectively, identify the top performers. 240's were
given out very rarely, and only after lots of scrutiny in the 180 degree
feedback loop. (Of course, whether the standards used to identify what
constituted "top" performance were correct or not, is another, important
question.)

I'm not sure what Doc's experiences were with this system, but from my
perspective, long, long ago, as it may have been, it seemed to work, and
my experiences continue to give me insights which are valuable, at least
to me.

Cheers from Caledonia

Richard Goodale
The Dornoch Partnership
"Discover, Creativity, Leadership"

PS--Doc, much as I am wary of venturing again on this list into the
morass of sports analogies, I would point out that the Heisman trophy
was not created for nor has it ever been considered to be an
indicator of professional football prowess. Check out the
correlation between first round draft picks and NFL success and
you'll see a much better fit--but not a perfect one, of
course--that's why any ranking system MUST be
tempered/ameliorated/supplemented by human judgement.

R

-- 

Richard Goodale <fc45@dial.pipex.com>

Learning-org -- Hosted by Rick Karash <rkarash@karash.com> Public Dialog on Learning Organizations -- <http://www.learning-org.com>