A Learning Process - Knowledge Management Metrics LO21867

lsmith@worldbank.org
Tue, 08 Jun 1999 12:41:09 -0400

For all the interest and money spent on KM, there seem to be relatively
few attempts to actually quantify the impact and results in business
terms. Most metrics I've seen are more 'web metrics' than KM metrics, in
that they count page hits etc. but have no way to assess the actual
impact of the use of the knowledge.

Anybody got any ideas?

Enclosed below is a short piece I wrote to clarify my ideas. It is a
work in progress, but I'd love feedback and dialogue.

Knowledge Management Metrics: A Learning Process

Many existing efforts to quantify the impact and value of
organisations' huge investments in Knowledge Management fall afoul of an
over-focus on technology metrics rather than business improvements; web
site hits rather than business performance. This model proposes that the
most important and useful metrics are those that directly inform the
improvement of business performance, and that these can best be
considered within the context of a learning process that embeds the
metrics within the work process.

The learning process in question is the one used by BP Amoco as a
central part of their KM strategy: 'Learn Before, Learn During, Learn
After.' Essentially, BP embeds Knowledge Management within the everyday
work process by making it a normal part of doing business. At the
beginning of any project they conduct a 'Peer Assist' (alternatively known
as 'Prior Art'), where they bring knowledgeable colleagues together to
consider all that BP Amoco knows about the particular subject. 'Learn
During' involves a version of the US Army's well-known 'After Action
Review' (AAR). BP use the AAR after each 'identifiable event' rather than
at the end of a project, so it becomes a 'live' learning process that
constantly informs the direction of the project. The third part is what BP
call a 'Retrospect': a team meeting designed to identify 'what went
well,' 'what could have gone better' and 'lessons for the future.'

By ensuring that time is made available within the actual project, and
that this learning process does not become extra work, BP have managed to
make it a normal part of doing business. The results have been real,
tangible business benefits, visible in dollar terms, that have turned
around critics: "the Schiehallion oil field, a North Sea field considered
too expensive to develop until a team spent six months pestering
colleagues to share cost-saving tips. They were called wimps for not
rushing out to 'make hole' - but the learn-before-doing approach saved so
much time on the platform (at $100,000 to $200,000 a day, not counting
drilling costs) that they brought the field into production for $80
million less than anyone thought possible." Indeed, Tom Stewart recently
said of BP's CKO that "Greenes is, as best I can figure, knowledge
management's top moneymaker" (1).

This learning cycle then becomes the facilitating infrastructure for
developing a process of KM metrics which allows the identification of real
business value in each aspect of a KM investment.

Given the assumption that the Knowledge Management initiative is
being driven by a real business need, then as the key areas are identified
where knowledge can have an impact, the first step must be to examine who
has done this before. How? Who knows about this? Who is currently working
on something similar? In other words, how do we avoid re-inventing the
wheel and duplicating effort? How do we learn from those who have gone
before? This is clearly the role played by the 'Peer Assist.'

The Peer Assist enables identification of 'what BP knows' about
previous projects of this type. By identifying the state of the art and
best practices, it becomes possible to quantify the expected savings in
terms of time, money or quality. Quite literally: for a project of this
type, given these lessons learned, what are the expected savings or
performance improvements from the application of knowledge to this
project?

This form of 'state of the art' review, combined with identification
of the anticipated value of the knowledge investment, is a necessary first
step in valuing and justifying KM. It is also worthwhile to quantify the
time and money saved by conducting the 'Peer Assist' in terms of effort
not duplicated, wheels not re-invented and mistakes not repeated. A useful
checklist at this stage is the 'Nine Symptoms of a Knowledge Problem'
developed by David Smith, Head of Knowledge Management and Development at
Unilever:
1. You repeat mistakes
2. You duplicate work
3. You have poor customer relations
4. Good ideas don't transfer between departments, units or countries
5. You're competing on price
6. You can't compete with market leaders
7. You're dependent on key individuals
8. You're slow to launch new products or enter new markets
9. You don't know how to price for service (2).

Quantifying the cost of these nine symptoms, and the impact of lessons
learned and knowledge shared and applied, will help establish the
anticipated savings and ROI from the KM project.
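
To make the arithmetic concrete, here is a minimal sketch in Python.
All figures, symptom choices and the reduction fraction are invented for
illustration; in practice the numbers would come from the Peer Assist and
the nine-symptom review.

    # Minimal sketch of the anticipated-savings arithmetic.
    # All figures are invented for illustration only.

    symptom_costs = {            # estimated annual cost of each symptom, $
        "repeated mistakes": 250_000,
        "duplicated work": 400_000,
        "slow product launches": 600_000,
    }

    km_investment = 500_000      # assumed cost of the KM project, $
    expected_reduction = 0.50    # assumed fraction of symptom cost avoided

    anticipated_savings = expected_reduction * sum(symptom_costs.values())
    roi = (anticipated_savings - km_investment) / km_investment

    print(f"Anticipated savings: ${anticipated_savings:,.0f}")  # $625,000
    print(f"Simple ROI: {roi:.0%}")                             # 25%

Even a back-of-the-envelope calculation like this forces the question of
what each symptom actually costs, which is itself a useful discipline.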

The next stage, and this is the focus of most existing KM metrics
initiatives, is to begin to measure the 'web metrics.' Counting page hits,
file downloads, unique visitors etc. does reveal a useful pattern of usage
and indicate historical trends, as well as enabling a quantitative
analysis of the usage of knowledge resources. Much has been written
elsewhere on this; however, it is worth remembering that it is not the
usage of knowledge resources alone that is important, but rather the value
contribution derived from that usage. So whilst it is important to track
such 'web metrics' of usage, they are merely one step on the path to
demonstrating the value and impact of knowledge.

Some of the measures in this area include: the most-downloaded
files, the pages most often accessed, weekend usage, use of knowledge
objects by country or region, the page first accessed and the path of
navigation. Some organizations have begun to incentivise the collection
and codification of knowledge assets and lessons learned; for example,
American Management Systems awards prizes for the most frequently re-used
knowledge assets and makes contribution to knowledge networks a condition
of membership in important internal networks.
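
As a rough illustration of how a few of these 'web metrics' might be
computed, here is a minimal Python sketch. It assumes a server log in
Common Log Format under the name "access.log"; the file name, field
positions and file extensions are all assumptions, not a description of
any particular organisation's setup.

    from collections import Counter

    hits = Counter()       # page hits per URL
    downloads = Counter()  # downloads of document files
    visitors = set()       # unique visitor addresses

    with open("access.log") as log:   # assumed Common Log Format
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue              # skip malformed lines
            ip, url = fields[0], fields[6]
            visitors.add(ip)
            hits[url] += 1
            if url.endswith((".pdf", ".doc", ".zip")):
                downloads[url] += 1

    print("Unique visitors:", len(visitors))
    print("Most-accessed pages:", hits.most_common(5))
    print("Most-downloaded files:", downloads.most_common(5))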

After each identifiable event in the project, an After Action Review
should be conducted. The AAR asks four key questions:
- What was supposed to happen?
- What actually happened?
- Why were there differences?
- What can we learn?

These questions should be informed by the best practices identified
in the Peer Assist, and should identify the actual value delivered in each
area. Given the lessons learned and the anticipated value of each aspect
of the KM project, what was the actual impact on business performance?

The KM impact should be assessed in each of the areas of time,
quality, re-use of knowledge and transferability of knowledge. As a whole,
what impact have improvements in these areas had on management
effectiveness? What decisions were made better or faster? What is the
increase in management time dedicated to value-added activities as a
result of the application of this knowledge?
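
One hedged sketch, in Python, of how AAR findings might be recorded
against these impact areas so they can be rolled up across events. The
structure, field names and sample figures are my own invention for
illustration, not a BP artifact.

    from dataclasses import dataclass, field

    @dataclass
    class AARRecord:
        event: str          # the 'identifiable event' reviewed
        expected: str       # what was supposed to happen
        actual: str         # what actually happened
        lessons: list = field(default_factory=list)
        impact: dict = field(default_factory=dict)  # area -> measured gain

    record = AARRecord(
        event="first platform section drilled",
        expected="12 days of rig time",
        actual="9 days of rig time",
        lessons=["re-used bit selection from an earlier field"],
        impact={"time (days saved)": 3, "re-use (assets applied)": 1},
    )
    print(record.impact)

Keeping the impact figures per event, rather than only per project, is
what makes the 'learn during' cycle measurable as it runs.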

At the end of the project, a Retrospect should be conducted to
examine 'what went well,' 'what could have gone better' and 'lessons for
the future.' This is an examination of the contribution actually made by
each aspect of the KM project, and an identification of the lessons
learned and new best practices. These should be captured and incorporated
into the next learning cycle so that there is a steady evolution in best
practice. Indeed, over time another metric becomes the speed of evolution
of the 'state of the art': quite literally, does each new location break
even more quickly? Is each new product launched faster? Is there
continuous growth in customer satisfaction?

Indeed, an interesting metric would be to measure the volatility or
speed of knowledge. At what speed do best practices become outdated? What
is the 'freshness' of knowledge? How is this changing?
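
One simple way to operationalise 'freshness' might be the median age of
the best-practice documents currently in use. A minimal Python sketch
follows; the document names and revision dates are invented, and in
practice they would come from the knowledge base itself.

    from datetime import date
    from statistics import median

    # Assumed data: last-revision dates of best-practice documents in use.
    last_updated = {
        "drilling checklist": date(1999, 3, 1),
        "peer assist guide": date(1998, 11, 15),
        "well-test procedure": date(1999, 5, 20),
    }

    today = date(1999, 6, 8)
    ages_in_days = [(today - d).days for d in last_updated.values()]
    print("Median knowledge age (days):", median(ages_in_days))

Tracking this number over time would show whether best practices are
being refreshed faster or quietly going stale.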

Some projects will produce true innovations: not incremental
improvements in existing best practices, but radically new concepts, ideas
or products. These in turn will be fed back into the beginning of the
business cycle as new business drivers or imperatives, which in their turn
will demand a whole new cycle of KM support, with different points of
value and new metrics.

References

1. Tom Stewart, 'Telling Tales at BP Amoco,' Fortune, June 7, 1999.
2. Tom Stewart, 'Why Dumb Things Happen to Smart Companies,' Fortune,
June 23, 1997.

-- 

lsmith@worldbank.org

Learning-org -- Hosted by Rick Karash <rkarash@karash.com> Public Dialog on Learning Organizations -- <http://www.learning-org.com>