For every hard question we encounter there seems to be a simple answer: common sense. We are seduced by the idea that what looks obvious must be so. Yet common sense fails us. Since right and wrong are not absolutes, as Asimov put it, what seems so right is often closer to wrong.
Duncan Watts has spent the better part of his career debunking the myth of common sense, starting as a physicist, then moving through engineering and math to sociology, and eventually computer science.
In Six Degrees: The Science of a Connected Age, Watts argues that to understand social reality we need to look not only at how people are connected but also at how they behave. A Columbia University sociology professor, Watts set out in Six Degrees to combine his own research in network theory with summaries of the work of others who, he says, are "collectively solving problems which cannot be solved by any single individual or even any single discipline."
What we experience as coincidence is often the "small world" phenomenon at work: the effect of weak ties with others, the kind we discover casually when bumping into people at parties. But if it seems easy to guess what produced these encounters, understanding where influence comes from and how it spreads is considerably more complex.
The "small world" claim is that any two of us are connected, on average, by only six degrees of separation. The book shows how properties of networks shape daily life.
The questions it takes up demonstrate the complexity: Why do some viruses, computer and biological, spread widely while others come to a quick halt? Why do airline hub-and-spoke networks often break down? How do computer searches work, and what makes them effective?
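The small-world claim is concrete enough to demonstrate. Below is a minimal, self-contained sketch (not Watts's own code) of the Watts-Strogatz idea: start from a regular ring lattice, rewire a small fraction of its edges at random, and the average shortest-path length collapses while the network stays mostly local. The parameters (600 nodes, 6 neighbors each, rewiring probability 0.05) are illustrative assumptions, not values from the book.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors (k/2 per side)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k // 2 + 1):
            j = (i + step) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def rewire(adj, p, rng):
    """Rewire each edge once, with probability p, to a random new endpoint."""
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):            # snapshot; each edge seen at its lower node
            if j > i and rng.random() < p:
                candidates = [t for t in range(n) if t != i and t not in adj[i]]
                if candidates:
                    t = rng.choice(candidates)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(t); adj[t].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path hops over all reachable pairs, via BFS from each node."""
    total, pairs = 0, 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
lattice = ring_lattice(600, 6)                          # purely local ties: long paths
small_world = rewire(ring_lattice(600, 6), 0.05, rng)   # a few random shortcuts
print(avg_path_length(lattice), avg_path_length(small_world))
```

Rewiring only 5% of the edges cuts the average separation by a large factor, which is the heart of the six-degrees result: a handful of long-range ties is enough to make a big, mostly local network feel small.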
The section on gossip, which was fodder for my solo talk at SXSW a few years back, continues to be a source of interesting linguistic insights when we conduct qualitative analysis of social conversations. Gossip is different from rumor and from secrets.
When we pay attention to the social cues we see the differences, and how each has its own purpose in human grooming. Everything Is Obvious* (once you know the answer), says Duncan Watts in the work that followed.
A good complement to the work of psychologist Toru Sato, who set out to understand the universal language of human relationships, the book discusses thinking about thinking, tackles the wisdom (and madness) of crowds, explains why there are no special people in networking chains, only logical circumstances (geographical proximity, similar occupation, a willingness to pass information along), shows how history is a fickle teacher, and much more.
We have a frame problem
But we mistake it for an information problem. Watts says:
The frame problem isn't just a problem for artificial intelligence; it's a problem for human intelligence as well. As the psychologist Daniel Gilbert describes in Stumbling on Happiness, when we imagine ourselves, or someone else, confronting a particular situation, our brains do not generate a long list of questions about all the possible details that might be relevant.
Rather, just as an industrious assistant might use stock footage to flesh out a drab PowerPoint presentation, our “mental simulation” of the event or the individual in question simply plumbs our extensive database of memories, images, experiences, cultural norms, and imagined outcomes, and seamlessly inserts whatever details are necessary in order to complete the picture.
Survey respondents leaving restaurants, for example, readily described the outfits of the waiters inside, even in cases where the waitstaff had been entirely female.
In general, people systematically overestimate both the pain they will experience as a consequence of anticipated losses and the joy they will garner from anticipated gains.
The "filling in" process is so instant and effortless that we are rarely aware it is happening at all. We don't know that something is missing because it doesn't reach our attention threshold.
The frame problem should warn us that when we do this, we are bound to make mistakes. And we do, all the time. But unlike the creations of the AI researchers, humans do not surprise us in ways that force us to rewrite our whole mental model of how we think.
In hindsight, once we know the outcome of something, “we can almost always identify previously overlooked aspects of the situation that then seem relevant.” We have a frame problem, and not an information problem.
No matter how many times we fail to predict someone's behavior correctly, we can always explain away our mistakes in terms of things that we didn't know at the time. In this way, we manage to sweep the frame problem under the carpet, always convincing ourselves that this time we are going to get it right, without ever learning what it is that we are doing wrong.
This pattern of thinking is most evident, and most difficult to expunge, in the relationship between financial rewards and incentives. As numerous studies show, it's complicated.
It's obvious that we care about benefiting financially from our success, but it is equally obvious that we also care about the intrinsic rewards of our work. Recognition, a feeling of advancement in our careers, a sense of purpose, and the pure enjoyment of our pursuits all contribute to performance.
When a task is multifaceted or hard to measure, for example, workers tend to focus only on those aspects of their jobs that are actively measured, thereby overlooking other important aspects of the job, like teachers emphasizing the material that will be covered in standardized tests at the expense of overall learning.
What do we do once we realize a particular scheme didn't work? We go ahead and conclude that we got the incentives wrong. Thinking we know the answer prevents us from taking a second look at the question.
Like rational choice theory, common sense insists that people have reasons for doing what they do. Maybe, but predicting what they will do, and why, is anything but simple. We can only do that in hindsight; it's then that we rationalize behavior into a neat story about what worked and why.
Herein lies the dilemma, says Watts:
- the frame problem tells us we can never know everything that could be relevant to a situation
- a huge psychological literature tells us that what could be relevant is beyond the reach of our conscious minds
We can attribute many of our failures in common sense reasoning to the difference between making sense of behavior and predicting it. Everything would be highly logical, if it weren't for the people. But it is the people that make it super interesting.
[image courtesy wikimedia]