Rick wrote:
> Or, a related idea, suppose that on the web site readers could vote on the
> quality and relevance of each message. Give each posting a 1-10 rating...
> - Authors could get quant feedback.
> - And, later readers, if they wished, could select what to read based on
> quality ratings.
> - Or, for each msg, the average quality scores of that author's past msgs
> could be displayed as a guide.
> - Or (...back to ranking) we could post the sorted list of authors'
> quality ratings.
>
> This is all quite practical, just a little recreational programming
> required... would it be valuable?
Thanks, Rick. I think your suggestion offers an interesting approach to
ranking.
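For concreteness, here is a minimal sketch in Python of the mechanism
Rick describes: per-message 1-10 scores, an author's running average,
and a sorted author ranking. All the names here (rate, author_average
and so on) are my own invention for illustration, not anything from the
actual list software.

    from collections import defaultdict
    from statistics import mean

    # ratings[message_id] holds the 1-10 scores readers gave that message.
    ratings = defaultdict(list)
    # author_of[message_id] maps each message to its author (hypothetical ids).
    author_of = {}

    def rate(message_id, score):
        """Record one reader's 1-10 quality vote for a message."""
        if not 1 <= score <= 10:
            raise ValueError("score must be between 1 and 10")
        ratings[message_id].append(score)

    def message_average(message_id):
        """Average quality score of a single message."""
        return mean(ratings[message_id])

    def author_average(author):
        """Average score over all of an author's rated messages."""
        scores = [s for m, a in author_of.items() if a == author
                  for s in ratings[m]]
        return mean(scores)

    def ranked_authors():
        """Authors sorted from highest to lowest average score."""
        return sorted(set(author_of.values()),
                      key=author_average, reverse=True)

For example, after author_of['msg-1'] = 'Alice' and rate('msg-1', 8),
ranked_authors() would put Alice at the top of the (one-entry) list.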
Ranking done by readers appears to be quite sound. But like every
ranking system, it might turn authors' attention toward writing messages
with an anticipated ranking result in mind; this may happen either
consciously or unconsciously. That's what ranking is for, isn't it?
Thus the question would be: do we want such a rather hidden quality
agenda?
A next step would be an analysis of people's ranking behaviour. From
this, an outline for quality messages could be developed (a quality
message includes: 1. blabla, 2. blabla, 3. ...).
Such a procedure can make sense, depending on what an organisation wants
to achieve, e.g. production of a standardised quality product.
However, IMO more regulation means less creativity. As creativity is
not necessarily a blessing, some regulation usually makes sense. Any
organisation must decide how much creativity it wants and how much it
can bear.
Rick does regulate this list. In principle, we trust him to keep
everything on the list that relates to learning, which is quite a lot. A
quality ranking could indeed simplify the learning effort, but it could
streamline it as well. The list could lose some of its own quality in
terms of the diversity it offers and, consequently, its potential for
creativity.
In some way we rank what we perceive anyway. I, at least, tend to read
some authors more often than others. But this is solely my choice. My
ranking is done out of interest, knowledge of the subject or the lack of
it, and so on.
An all-readers ranking could influence my personal ranking and push
authors to write according to anticipated ranking measures, thus leading
to a loss of creativity.
On the other hand, a quality measure could speed up the learning. There
is probably no real trade-off between creativity and learning speed. In
the end it is trial and error. The problem is to realise when the
ranking system is not working: favouring some, hurting others,
suppressing creativity. It is important to develop the next trial out of
the last error.
All the best
--Thomas Struck <t.struck@bham.ac.uk>
Learning-org -- Hosted by Rick Karash <rkarash@karash.com> Public Dialog on Learning Organizations -- <http://www.learning-org.com>