Hailed as a pioneer of artificial intelligence, Marvin Minsky died at 88 on January 24. Combining “a scientist's thirst for knowledge with a philosopher's quest for truth,” Minsky was the co-founder of the M.I.T. Artificial Intelligence Project (later the Artificial Intelligence Laboratory).
His contributions to the field of cognitive science are many, and his ideas on human intelligence and thinking remain relevant.
In a 1994 article titled “Negative Expertise,” Minsky puts forth a thesis on knowledge and learning:
We tend to think of knowledge in positive terms -- and of experts as people who know what to do. But a 'negative' way to seem competent is, simply, never to make mistakes. How much of what we learn to do -- and learn to think -- is of this other variety? It is hard to tell, experimentally, because knowledge about what not to do never appears in behavior. And it is also difficult to assess, psychologically, because many of the judgments that we traditionally regard as positive -- such as beauty, humor, pleasure, and decisiveness -- may actually reflect the workings of unconscious double negatives.
We tend to shy away from all things negative, or potentially so: feedback, reviews, even conversations. Yet it is through these vehicles that we learn the most. Some highlights below.
Why negative knowledge is important
Steven Johnson discusses several examples from nature in Where Good Ideas Come From to explain how good ideas are born and get done. The role of error is intertwined with serendipity and many of the other patterns that facilitate good ideas. Minsky's first argument also draws from nature:
The world is a dangerous place for life. For example, biologists tell us that most mutations are deleterious. This is because each animal is already near a sort of local optimum (with regard to its local environment) in the space of mutational variants. And near the top of any hill, most steps go down.
But why is each animal close to a local peak? Simply because evolution itself is a learning machine that is engineered to climb hills. All existing animals had ancestors that avoided enough accidents to have descendants, and those ancestors were just the ones that acquired machinery that enabled them to learn to avoid poisons, diseases, predators, competitors, and other dangerous situations. Of course we also evolved to learn positive goals and ways to achieve them; still, to the extent that our world offers more perils than opportunities, our topmost goal must be -- don't get killed!
There are many ways to avoid dangers. You can escape your enemies by destroying, controlling, or evading them. Perhaps our societies, cultures, and governments themselves originated in negative goals, namely, for protection against the most common causes of accidents.
The evolution of intelligence brought great new opportunities -- but also gave us great new ways to fail. As soon as we were capable of reasoning, we became susceptible to fallacies. As we extended the range of our plans, we fell prone to more intricate kinds of mistakes. As the arts of speech evolved, this increased the risk of infection by more bad ideas from other minds. The mental, as well as the physical world may also contain more bad than good. Of course, communication can also transmit ideas that give immunities to other, good and bad, ideas.
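Minsky's hill-climbing picture of evolution can be made concrete with a small sketch. This is a toy illustration, not anything from the article: the fitness landscape, starting point, and step sizes are all invented. An accept-only-improvements climber settles on the nearest peak, and once there, most random mutations are deleterious, just as he describes.

```python
import random

random.seed(0)

def fitness(x):
    # An invented landscape: a local peak at x = 2 and a higher peak at x = 8.
    return -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2

def hill_climb(x, steps=1000, step_size=0.1):
    """Accept a random small mutation only when it improves fitness."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# Starting at 0, the climber reaches the local peak at 2 and stays there:
# every path toward the higher peak at 8 first goes downhill.
x = hill_climb(0.0)

# Near the top of the hill, most steps go down: count how many random
# mutations around the settled point would lower fitness.
worse = sum(fitness(x + random.uniform(-0.1, 0.1)) < fitness(x)
            for _ in range(1000))
print(round(x, 2), worse)
```

The point of the sketch is the second count: once the climber sits near a peak, the overwhelming majority of mutations are harmful, which is why a learning machine "engineered to climb hills" leaves its descendants surrounded by mostly-bad moves.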
We evolved to stretch our boundaries, including beyond our areas of competence. Minsky's ideas on the configuration of the brain are still viewed with skepticism. In The Society of Mind, a book published in 1986, as in this article, he says “that intelligence is not the product of any singular mechanism but comes from the managed interaction of a diverse variety of resourceful agents.”
That means there is no essential difference between humans and machines -- a defining statement for advancing computing. During my years learning about neurological development, we referred to the brain as the wet system, as opposed to the dry system of machines.
How much of human knowledge is negative?
We spend our lives at learning things, yet always find exceptions and mistakes. Certainty seems always out of reach. Except in worlds we invent for ourselves (such as formal systems of logic and mathematics) we can never be sure our assumptions are right, and must expect eventually to make mistakes and entertain inconsistencies. To keep from being paralyzed, we have to take some risks. But we can reduce the chances of accidents by accumulating two complementary types of knowledge:
- We search for 'islands of consistency' within which commonsense reasoning seems safe.
- We also work to find and mark the unsafe boundaries of those islands.
Both as cultures and as individuals, we learn to avoid patterns of thought reputed to yield poor results.
Philosophers come to the rescue, reporting on their paradox-discoveries. They teach us what we shouldn't think: liars who admit to lying, barbers who shave all those, and only those, who do not shave themselves, and so on. Minsky's observation was that jokes are a way to communicate these difficulties: “when we look closely, we find that most jokes are concerned with taboos, injuries, and other ways of coming to harm,” he says. Negative knowledge is thus emotional.
But we have forgotten to ask the experts how they manage to “define and defend the margins, their islands of consistency.”
On creative machines
It annoys me how frequently people suggest that the 'secret' of making creative machines might lie in providing some sort of random or chaotic kind of search generator. Nonsense! Certainly, there must be a source of variation -- but that can be supplied by all sorts of algorithmic generators. What distinguishes the performance of a 'smart' or 'creative' artist or problem-solver is not how many trials precede a success, but how few. So the secret lies not in disorderly search, but in pre-shaping the search space so as to reduce the numbers of useless attempts.
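The contrast Minsky draws between disorderly search and a pre-shaped search space can be illustrated with a toy problem (the target number and ranges here are arbitrary choices for the example): both searches factor the same number, but the shaped generator never wastes a trial on a pair that couldn't possibly work.

```python
import random

random.seed(1)
TARGET = 377  # 13 * 29; an arbitrary target for illustration

def random_search():
    """Disorderly search: test random pairs until the product matches."""
    trials = 0
    while True:
        trials += 1
        x, y = random.randint(2, 400), random.randint(2, 400)
        if x * y == TARGET:
            return trials

def shaped_search():
    """Pre-shaped space: pick only the first factor and derive the second,
    so each trial stands in for a whole row of candidate pairs."""
    trials = 0
    for x in range(2, 400):
        trials += 1
        if TARGET % x == 0:
            return trials  # the factors are x and TARGET // x
    return trials

r, s = random_search(), shaped_search()
print(r, s)  # the shaped search succeeds in far fewer trials
```

Both generators are algorithmic sources of variation; what separates them is exactly what Minsky names: not how many trials precede success, but how few, achieved by shaping the space before searching it.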
On understanding negative knowledge
We take for granted that when we learn what to do, we are experiencing positive emotions. When confronted with beauty, we suspend our ability to articulate how something makes us feel; thus we “stop evaluating, selecting, and criticizing.” Likewise, we consider humor positive when it may not be.
We normally think of beauty, humor, pleasure, and decisiveness as positive; is it then paradoxical to claim the opposite? No, not at all -- because we're dealing with things complex enough to constitute 'double negatives'.
Putting something in a folder labeled 'negative' can't keep it there, because we then can re-enclose it in a second sign-changing shell!
Thus pleasure can seem positive to the agency now in control -- no matter that your other agencies are suffering under its yoke. Thus, enjoying something very much can mean that you've engaged machinery that (i) makes you think even more about that something and (ii) keeps you from thinking of other things.
He concludes with many questions still unanswered:
We tend to think of knowledge in positive terms -- and of experts as people who know what to do. But a 'negative' way to seem competent is, simply, never to make mistakes.
How much of what we learn to do -- and learn to think -- is of this other variety? How much of human competence is knowing methods for solving problems, and how much of it is knowing how to intercept and interdict unproductive lines of thought?
It is hard to assess the importance of these, experimentally, because knowledge about what not to do never appears in behavior. And it is also difficult to assess them psychologically, because many of the feelings and judgments that we traditionally regard as positive may result from forms of censorship of other ideas, inhibition of competing activities, or suppression of more ambitious goals.
What are the roles of biases, assumptions, and omissions in masking what is actually happening? We are immersed in a culture of positivity, and we “discount the value of experiencing frustrations, failures, and disappointments,” real or apparent. Yet it is from them that we learn the most.