Laurence Smith, who previously (LO21853) introduced himself as "an
Englishman living in Washington DC and Consulting to the Knowledge and
Learning Council at the World Bank" subsequently (LO21867) posted what he
terms "a work in progress" that deals with knowledge management metrics.
(The subject line he used -- "A Learning Process - Knowledge Management
Metrics" -- has been shortened to the one in the subject line of this
message.)
In general, Laurence inquires after meaningful metrics for knowledge
management (KM) initiatives. He points out, rightly, that "web metrics"
(e.g., page hits) possess some utility but that business impact is of
far more interest. He also indicates that his model
"proposes that the most important and useful metrics are those that
directly inform the improvement of business performance..." Along the
way, he touches on a Before-During-After evaluation schema, citing the BP
Amoco framework that involves "Peer Assist," "Learn During" (which
Laurence claims is a version of the Army's "After Action Review"), and
"Retrospect" (which seems to be BP Amoco's term for what would otherwise
be called a "post-mortem").
What follows are some reactions to Laurence's posting.
First off, the basic problem Laurence raises is one of linking ends and
means.
For a given intervention (e.g., one or more KM initiatives), what effects
or impact does it produce? Conversely -- and much more important from the
perspective of an executive contemplating an investment of scarce
resources in any initiative, KM or otherwise -- for a desired effect or
impact, what actions will produce it?
I know of two methodologies that directly link ends and means.
One is a technique called "measurement-based analysis (MBA)," which is a
way of analyzing various measures of business performance (financial and
operational) so as to identify the underlying activities and processes
that drive those measures. In this way, operational aspects can be
targeted for improvement and the business impact can be reliably estimated
instead of simply confirmed or disconfirmed through subsequent
evaluations. (Interested parties will find a paper on measurement-based
analysis at my articles web site:
http://home.att.net/~nickols/articles.htm)
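For readers who think in code, here is a minimal, hypothetical sketch of
the direction of analysis MBA takes -- from a business measure back to
the operational measures that drive it. The metric names and numbers
below are invented for illustration; the actual method is described in
the paper cited above.

# Sketch: rank candidate operational drivers of a business measure by
# correlation, then estimate the impact of improving the top driver
# with a simple least-squares slope. Illustrative only.

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented quarterly data: one financial result, three operational measures.
revenue_per_order = [310, 335, 342, 360, 355, 378]
drivers = {
    "first_call_resolution": [0.62, 0.66, 0.68, 0.72, 0.71, 0.75],
    "avg_handle_time_min":   [9.1, 8.8, 8.9, 8.2, 8.4, 7.9],
    "training_hours":        [4, 6, 5, 7, 6, 8],
}

# Rank drivers by strength of association with the result. (Correlation
# only nominates candidates; establishing the causal link is the
# analyst's job.)
ranked = sorted(drivers.items(),
                key=lambda kv: abs(pearson(kv[1], revenue_per_order)),
                reverse=True)
for name, series in ranked:
    print(f"{name:24s} r = {pearson(series, revenue_per_order):+.2f}")

# Crude impact estimate for the top driver: slope of a least-squares
# line, i.e., expected change in the result per unit change in the driver.
name, xs = ranked[0]
mx = sum(xs) / len(xs)
my = sum(revenue_per_order) / len(revenue_per_order)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, revenue_per_order))
         / sum((x - mx) ** 2 for x in xs))
print(f"Estimated effect per unit gain in {name}: {slope:+.1f}")

The point of the sketch is the order of operations: start from the
business measure and work back to operational candidates, so that the
expected impact can be estimated before the intervention rather than
merely confirmed (or disconfirmed) afterward.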
The second methodology for linking ends and means (i.e., tying
interventions to results) is an emerging one developed by Geary Rummler,
formerly of Rummler-Brache. That method is known as "Performance Logic
(PL)." Based on Geary's presentation of PL at the most recent ISPI
conference, I would say that PL has a lot in common with MBA, especially
at the start of an analysis. The main differences I see at this point
have to do with level of detail and the ways in which the underlying
operational factors are determined and subsequent interventions framed.
(Geary and I will be having some discussions on this matter.)
Both methods have a lot to do with being able to engineer solutions to
business problems, but I won't go into that here.
The Before-During-After framework to which Laurence refers is a useful
one. It is worth noting, however, that this kind of evaluation, whether
prospective, in situ, or retrospective, relies on reflection by the
participants and is therefore limited by their knowledge base. In short,
we know what we know and no more (and even that is extremely unreliable
-- which makes the question of who "we" are a very important one).
Although he provides an example or two, Laurence does not comment on the
value or utility of anecdotal evidence. For what it's worth, anecdotal
evidence and unsolicited testimonials are among the most important forms
of information/data for influencing high-level decisions. Just yesterday
I spoke with a fellow from NCR who is intimately involved in that
company's KM initiatives, and in his eyes, too, the value of anecdotal
evidence and unsolicited testimonials is almost beyond question. Such
evidence might not be "scientific," but the value of having a good story
to tell is well known.
Laurence cites "Nine Symptoms of A Knowledge Problem," which he
attributes to David Smith at Unilever. Although I appreciate the ideas
in those nine items, my personal reaction was essentially, "So what?
Who doesn't have these problems or concerns?" The lone exception was
item #9, which dealt with how to price for service -- to which my
reaction was, "Who does know how to do this?"
From my perspective, the "bottom line" here is that KM metrics are the
wrong starting point. The proper place to begin is with desired business
results -- with the business metrics one wishes to affect. The next point
in the value chain is to identify the operational aspects of the business
that affect those desired results. The third point concerns the ways in
which people (who are, after all, where knowledge is applied) affect the
operational aspects of the business and, through those, the desired
business results.
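To make that sequence concrete, here is a small, hypothetical sketch (all
names invented) of the chain just described: a business result, then the
operational drivers that affect it, then the people-level behaviors --
the point where knowledge is actually applied -- that move those drivers.

# Hypothetical value chain, recorded from the business result backward.
# Each business metric names the operational measures that affect it;
# each operational measure names the people-level behaviors behind it.

value_chain = {
    "customer_retention_rate": {           # desired business result
        "time_to_resolve_complaints": [    # operational aspect
            "reps reuse documented fixes",         # people-level behavior
            "reps escalate novel cases promptly",
        ],
        "order_accuracy": [
            "clerks verify specs against the catalog",
        ],
    },
}

def trace(chain):
    """Print the chain in analysis order: result -> operations -> people."""
    for result, ops in chain.items():
        print(f"Business result: {result}")
        for op, behaviors in ops.items():
            print(f"  Operational driver: {op}")
            for b in behaviors:
                print(f"    People-level behavior: {b}")

trace(value_chain)

A KM initiative then earns its metrics at the bottom of this chain: it is
justified by the behaviors it changes, which are justified in turn by the
operational measures those behaviors move.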
How does all this tie to the LO list and to Learning Organizations? Well,
I think it has something to do with recognizing that even if we accept
that learning is situated in group interactions, what the parties to that
learning learn is highly individualized (you and I, for instance, come
away from a given project with some things in common but many that are
unique to each of us). We can package information until we're blue in the
face, but until someone applies that information, it remains information,
not knowledge. It requires a human being to convert information into
knowledge. Only people know. With that as a given, the linkages between
business results and KM initiatives eventually reduce to the linkages
between individual and organizational performance. These in turn tie to
the relationships between the operational and financial aspects of the
business.
The "net-net" (a term a kindly Xerox executive once impressed upon me) is
that interventions (KM or otherwise) should be guided by an "architecture
of results" -- a framework that depicts the "structure" of any particular
result. That structure consists of the elements, connections and
relationships that are measured and those that are manipulable -- and that
must be manipulated -- to produce this or that result. In this way,
interventions are targeted instead of simply implemented with fond hopes
for good effects.
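I have no formal notation for an "architecture of results," but the core
idea -- elements flagged as measured and/or manipulable, linked by the
relationships that connect them to a result -- can be sketched in code.
The example below is entirely invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Element:
    """One node in a hypothetical architecture of results."""
    name: str
    measured: bool = False       # do we have a metric for it?
    manipulable: bool = False    # can an intervention act on it directly?
    affected_by: list = field(default_factory=list)  # upstream elements

# Invented example: a desired result and the chain upstream of it.
win_rate = Element("bid win rate", measured=True)         # the result
cycle_time = Element("proposal cycle time", measured=True)
sharing = Element("knowledge sharing behavior",
                  measured=True, manipulable=True)
win_rate.affected_by.append(cycle_time)
cycle_time.affected_by.append(sharing)

def intervention_targets(result, path=()):
    """Walk upstream from a desired result and return the manipulable
    elements, each with the chain that connects it to the result."""
    hits = []
    for upstream in result.affected_by:
        chain = path + (result,)
        if upstream.manipulable:
            hits.append(upstream.name + " -> " +
                        " -> ".join(e.name for e in reversed(chain)))
        hits.extend(intervention_targets(upstream, chain))
    return hits

for target in intervention_targets(win_rate):
    print("Target:", target)
# -> Target: knowledge sharing behavior -> proposal cycle time ->
#    bid win rate

Interventions aim at the manipulable elements; the measured elements
along the chain then tell you whether the intended effect is actually
propagating toward the result.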
--Regards,

Fred Nickols
Distance Consulting
"Assistance at A Distance"
http://home.att.net/~nickols/distance.htm
nickols@worldnet.att.net
(609) 490-0095