Duncan Watts is supremely qualified to ask the question. He has worked in a number of the sciences -- starting as a physicist, then moving on to engineering and math, then sociology, and eventually computer science (image above).
When we say, “it's not rocket science,” we typically mean that it's not that hard, or at least not as hard as rocket science. Yet “rocket science is not that hard,” says Duncan Watts; we have figured it out.
So what is it that makes social sciences harder than rocket science?
Why does rocket science seem hard and social science seem like common sense?
That has to do with the nature of common sense.
What is common sense?
That is a question with (almost) as many answers as there are people. Watts' answer is:
Common sense is the kind of intelligence we rely on to navigate concrete, everyday situations.
For example, how to dress appropriately for the situation -- beachwear for the beach, a suit for work. We don't think about these rules every morning, because it's just common sense. Or speaking from a podium vs. speaking 1:1 -- they are different forms of interaction.
Other examples range from the way we face the door in elevators rather than facing other people, to standing close to someone on a crowded train vs. leaving room when it's not crowded.
This also applies to abstract things, like how to balance fairness and reciprocity in financial transactions, even personal transactions, says Watts. All of these unspoken rules are common sense; we don't have to think about them.
The problem is when we try to use common sense -- otherwise useful to deal with all kinds of life situations -- to reason about situations that are not concrete, everyday situations.
The economy, marketing programs, etc. -- these are all kinds of situations that involve many people -- hundreds, thousands -- who are very different from each other and interact with each other in widely varying contexts, over extended periods of time.
When we use common sense in these kinds of situations, it can mislead us. Watts says:
It turns out this is a problem sociologists have worried about for a long time. Back in 1949, American sociologist Paul Lazarsfeld, who also taught at Columbia, described a series of findings from a recent study, “The American Soldier.”
It was a study commissioned by the U.S. War Department during WWII: sociologists went into the U.S. Army, interviewed about 600,000 soldiers, and asked them what life in the Army was like.
Lazarsfeld, reviewing the publication of this report, gives the reader a sense of what it contains by describing some of the results. The second result he describes is that “men from rural backgrounds fared better than men from cities.”
He steps back from the results and imagines the readers' response -- oh, that's obvious: men from rural backgrounds are used to sleeping outside, they're used to physical labor, they're used to getting up with the sun, etc. Why did we need such a vast and expensive study to tell us what we could have figured out on our own, just using our common sense?
Lazarsfeld had actually given the opposite of the real answer: it was city men who did better in the Army than rural men. Had we been given this answer rather than the other one, we would have reconciled it just as easily by finding evidence of its veracity -- city men are used to wearing a uniform, to a strict chain of command, etc.
This is exactly the problem. According to Lazarsfeld, when every conclusion and its opposite appear equally obvious (once you know the answer), there is something wrong with the concept of obviousness.
When we think about things that are happening now, they seem deeply complex and ambiguous, but if we look back -- for example, at the last financial crisis -- we see a very different picture.
We can sift through the mix of everything that happened and construct the narrative to suit the outcome. It's kind of like when Steve Jobs said, “we connect the dots only in retrospect.”
Watts wants to make two points:
- we can always do this, no matter what the outcome is -- so many things could be relevant that we can go through the box and pick the ones that fit
- we can only do this once we know the outcome -- this is a more subtle point, we cannot do it at the time something is happening
The things that we call “explanations” post hoc, after something has happened, are really stories that tell us what happened -- the sequence of events -- not why it happened.
This is not always bad. There are many reasons we use stories effectively for inspiration, motivation, etc. The problem is that stories are so powerful, says Watts, that “we tend to generalize from them to make predictions.”
If we had Groundhog Day the way Bill Murray experienced it, we could learn from those situations by living them over and over -- which is why Groundhog Day and customer service have many things in common.
For example, when we say that history repeats itself unless we learn from it, we forget that we are dealing with a complex system -- things are rarely the same as they were in the past -- so when we try to use the narrative approach to history to make predictions about the future, we fail.
Other examples in business are predictions that a marketing campaign will go viral because it touches on a hot topic -- along with the intended consequences, we hit the wall of unintended ones.
Once we realize our intuition cannot help us predict certain kinds of situations, we can focus on the things we can do -- we can use science. Policy, marketing, and strategy can all benefit from a more systematic, data-driven approach.
“The reason why some of these problems have not been solved yet is because they involve large numbers of people who interact in complex ways,” says Watts:
Historically -- even ten years ago -- it was impossible to observe these interactions. And it is very hard to do science when you cannot observe things, very hard to do science when you cannot measure the things you are interested in.
The Internet has made the invisible visible. It has given social scientists the ability to measure the interactions of hundreds of thousands and millions of people -- in real time and over extended periods of time.
Watts and colleagues use Twitter, Facebook, Amazon Mechanical Turk, search results, email, and everything people do online to mine interactions to understand the world and what is happening on a very large scale. We don't really know where this is going to lead us, he says.
Everything would go according to plan if it weren't for people -- in mergers and acquisitions, consolidations, marketing programs, etc., people inject plenty of interestingness and unpredictability into the equation. Which is why it's a good idea to think about people -- customers and employees -- when writing a piece of communication, for example.
Social science will likely never resemble physics, with its beautiful universal laws, or even medical science. The social world is just messier than that.
Nevertheless, this revolution in data is already transforming social science. The hope is that this revolution will help with problems in business and policy.
In Everything Is Obvious* (*Once You Know the Answer), Duncan Watts says:
Determining which features are relevant about a situation requires us to associate it with some comparable situations. Yet determining which situations are comparable depends on knowing which features are relevant.
This inherent circularity poses what philosophers and cognitive scientists call the frame problem, and they have been beating their heads against it for decades.
The frame problem was first noticed in the field of artificial intelligence, when researchers started trying to program computers and robots to solve relatively simple everyday tasks like, say, cleaning a messy room.
An example of the frame problem is that we don't think the way we think we think -- not a typo. Watts says:
Rather than questioning our ability to make predictions about our future happiness (for example, winning the lottery), therefore, we simply conclude that we missed something important -- a mistake we surely won't make again. And yet we do make the mistake again.
In fact, no matter how many times we fail to predict someone's behavior correctly (for example relatives demanding a cut of the winnings), we can always explain away our mistakes in terms of things we didn't know at the time. In this way, we manage to sweep the frame problem under the carpet -- always convincing ourselves that this time we are going to get it right, without ever learning what it is that we are doing wrong.
This pattern is most difficult to correct in the relationship between financial incentives and performance, where the results are actually quite complicated. Here's my review of the book.
Scott Adams has a different take on the use of data. He says:
“You can defend an entirely different view of the world using the same data that's used to defend the standard model. So whenever I can do that, I'm so there. Because as soon as you realize that the model you've been looking at maybe isn't so firm as you thought... Then you're free.”
Watch the full video of Duncan Watts' talk below.