I am looking at models that are light, portable, and environmentally
friendly (congruent with what we are), and that tend to stay out of the
way of what we want to get done.
For example, it has been said, as a zero-order approximation, that we
are machines.
And, as a first-order approximation, we are intelligent machines. We are
programmed with parameters, thought to have something to do with the
game, with signs and relative weights unknown and unspecified. (The last
is from a book on AI, about a checkers-playing machine, circa 1964, which
I long ago consigned to St. Vinnies.) Presumably, we adjust the
weights of those parameters after each game. And, presumably, we can
generate new parameters on our own. (I make no pretension of knowing
how, or of caring how, to code such a machine.)
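To make the picture concrete, here is a minimal sketch of what such a
machine might look like, my own illustration rather than anything from
the 1964 book: a position is scored as a weighted sum of features, and
the weights are nudged after each game according to the outcome. The
feature values and learning rate are made-up numbers for illustration.

```python
def evaluate(features, weights):
    """Score a position as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features))

def adjust_after_game(weights, game_features, won, rate=0.1):
    """Nudge each weight toward (or away from) the features seen in
    the game, depending on whether the game was won or lost."""
    sign = 1.0 if won else -1.0
    return [w + sign * rate * f for w, f in zip(weights, game_features)]

# Signs and relative weights start "unknown and unspecified": all zero.
weights = [0.0, 0.0, 0.0]

# Hypothetical average feature values observed over one game.
game = [1.0, -0.5, 2.0]

# After a win, the weights shift toward the features that were present.
weights = adjust_after_game(weights, game, won=True)
print(weights)
```

Generating genuinely new parameters, rather than just reweighting the
given ones, is the part this sketch leaves out, as did I.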
As a second-order approximation, we are intelligent machines that have
difficulty in adjusting to a new game (ignoring for now the difficulty of
slicing a game out of the continuum of our experience). For example, take
a chess-playing machine that has been raised on a diet of games opening
with P-K4. The first time someone opens P-Q4 against the machine, the
machine will take a terrible drubbing. But it will adjust and eventually
figure out the P-Q4 game.
We, the people, on the other hand, seem to have difficulty in adjusting to
a new game, choosing instead to see it as another instance of the same
old game, getting drubbed every time out, and declaring victory
nonetheless. I guess I will leave it at that point with the question:
Why?
I know the foregoing is simplistic. And I do want it simple. What's out
there?
Jim Battell
--Jim Battell <jbattell@mediaone.net>
Learning-org -- Hosted by Rick Karash <rkarash@karash.com> Public Dialog on Learning Organizations -- <http://www.learning-org.com>