A while ago, a story about how big data allows us to make decisions in real time caught my eye, and it led me to say no, big data should not replace our thinking. The story contained another assumption, this one about strategy: that it is fixed once and for all. That impression is understandable, because strategy involves choices based on assumptions. Once we make something a noun, it turns into a thing and ceases to be dynamic and changing.
The problem is not in our thinking, it is in our words—they've not kept up with our imagining.
When we look at the types of actions we take, they are all dynamic—for example, we keep an eye on the emergence of patterns meaningful to the business, understand how to develop, enact, and amplify influence, and navigate the transformation of the environment through growth. Hence strategizing, which involves choice based on time-dependent information.
“When it comes to understanding human behavior,” said the article, “the capabilities of our own brain far outstrip any algorithmically driven model ever created—yet we still get it wrong all the time.” Things are rarely clean-cut, especially when it comes to decision-making.
The problem is that we get things wrong enough times to make it noticeable when we try to predict a future action. Predicting is hard, because we're engineered to be in the here and now. And the beauty of it is that when we make that realization, we can and do excel in our ability to understand what is going on, make sense of it and use that information to improve our odds.
But sometimes we become spotty in our decision-making. We know we should balance doing enough research with deciding at some point, but we have a hard time calculating when that time comes.
The answer lies in bringing together different disciplines, including the study of human cognition. The writer Brian Christian and Tom Griffiths, a professor of psychology and cognitive science at UC Berkeley, have figured it out for us. As they explain in Algorithms to Live By, for optimal stopping problems—such as finding a decent apartment or a parking place with minimal regret—the answer is 37%:
If you want the best odds of getting the best apartment, spend 37% of your apartment hunt (eleven days, if you’ve given yourself a month for the search) noncommittally exploring options. Leave the checkbook at home; you’re just calibrating. But after that point, be prepared to immediately commit—deposit and all—to the very first place you see that beats whatever you’ve already seen. This is not merely an intuitively satisfying compromise between looking and leaping. It is the provably optimal solution.
We know this because finding an apartment belongs to a class of mathematical problems known as “optimal stopping” problems. The 37% rule defines a simple series of steps—what computer scientists call an “algorithm”—for solving these problems. And as it turns out, apartment hunting is just one of the ways that optimal stopping rears its head in daily life. Committing to or forgoing a succession of options is a structure that appears in life again and again, in slightly different incarnations. How many times to circle the block before pulling into a parking space? How far to push your luck with a risky business venture before cashing out? How long to hold out for a better offer on that house or car?
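The look-then-leap rule is easy to check empirically. Below is a quick Monte Carlo sketch (my own illustration, not code from the book) that looks at the first 37% of randomly ordered candidates without committing, then takes the first later candidate that beats everything seen so far—and counts how often that picks the single best option:

```python
import random

def simulate(n_candidates=100, look_fraction=0.37, trials=20_000):
    """Estimate how often the look-then-leap rule finds the single best option."""
    cutoff = int(n_candidates * look_fraction)
    wins = 0
    for _ in range(trials):
        # Distinct scores in random order: we only learn each one as we see it.
        scores = random.sample(range(n_candidates * 10), n_candidates)
        best_seen = max(scores[:cutoff])
        # Leap phase: commit to the first option that beats the calibration set.
        # If the overall best was in the first 37%, we're stuck with the last one.
        chosen = next((s for s in scores[cutoff:] if s > best_seen), scores[-1])
        wins += chosen == max(scores)
    return wins / trials

print(round(simulate(), 2))  # hovers near 0.37, the theoretical optimum
```

Pleasingly, the number 37% appears twice: it is both the optimal fraction of options to spend calibrating and the probability that the rule lands on the very best option.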
“Simple algorithms,” they say, offer solutions to “all such situations in life where we confront the question of optimal stopping.” This class of issues we need to figure out every day has a mathematical solution. They say:
There is a particular set of problems that all people face, problems that are a direct result of the fact that our lives are carried out in finite space and time. What should we do, and leave undone, in a day or in a decade? What degree of mess should we embrace—and how much order is excessive? What balance between new experiences and favored ones makes for the most fulfilling life?
These might seem like problems unique to humans; they’re not. For more than half a century, computer scientists have been grappling with, and in many cases solving, the equivalents of these everyday dilemmas. How should a processor allocate its “attention” to perform all that the user asks of it, with the minimum overhead and in the least amount of time? When should it switch between different tasks, and how many tasks should it take on in the first place? What is the best way for it to use its limited memory resources? Should it collect more data, or take an action based on the data it already has? Seizing the day might be a challenge for humans, but computers all around us are seizing milliseconds with ease. And there’s much we can learn from how they do it.
What we forget is that algorithms are mathematical expressions that predate computers: “a four-thousand-year-old Sumerian clay tablet found near Baghdad describes a scheme for long division.” And they are not confined to mathematics—when we cook using a recipe, we follow an algorithm; we do the same when we knit using a pattern.
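Long division itself makes the point: it is a finite, repeatable sequence of steps. A sketch of the schoolbook procedure, written out explicitly (my illustration), shows the digit-by-digit recipe we all once followed by hand:

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division, the same procedure taught by hand."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)      # "bring down" the next digit
        quotient_digits.append(remainder // divisor)  # how many times does it fit?
        remainder %= divisor                          # carry what's left over
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(1071, 17))  # (63, 0)
```

The same steps work whether executed by a scribe on a clay tablet, a student on paper, or a processor in silicon—which is exactly why the word "algorithm" outlives any particular machine.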
Algorithms to Live By is an exploration of human algorithm design. In it Christian and Griffiths apply the lens of computer science to everyday life to offer concrete suggestions on how to solve problems:
Optimal stopping tells us when to look and when to leap. The explore/exploit tradeoff tells us how to find the balance between trying new things and enjoying our favorites. Sorting theory tells us how (and whether) to arrange our offices. Caching theory tells us how to fill our closets. Scheduling theory tells us how to fill our time.
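Caching is perhaps the most tangible of these: a least-recently-used (LRU) cache keeps whatever you reached for most recently and evicts what you have left untouched the longest—the closet rule. A minimal sketch of the idea (my own, not the authors' code):

```python
from collections import OrderedDict

class LRUCache:
    """Keep the most recently used items; evict the least recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # touching an item makes it "recent" again
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # drop the least recently used item

closet = LRUCache(capacity=2)
closet.put("coat", "front rail")
closet.put("boots", "floor")
closet.get("coat")              # wearing the coat keeps it handy
closet.put("hat", "top shelf")  # over capacity: the untouched boots go to storage
print(list(closet.items))       # prints ['coat', 'hat']
```

The same eviction rule that decides which memory pages a computer keeps close at hand is, the book argues, a defensible way to decide what stays in the closet.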
Computer science today is working on challenges like conversing with people, fixing corrupted files, or winning a game of Go. “Tackling real-world tasks requires being comfortable with chance, trading off time with accuracy, and using approximations.” Yet on this class of problems—the ones involving uncertainty and taking chances—four-year-olds still outperform sophisticated computers at a host of cognitive tasks, including vision, language, and causal reasoning.
But the process of trying to tackle these problems helps us see that many of them are simply hard:
And the mistakes made by people often say more about the intrinsic difficulties of the problem than about the fallibility of human brains. Thinking algorithmically about the world, learning about the fundamental structures of the problems we face and about the properties of their solutions, can help us see how good we actually are, and better understand the errors that we make.
In fact, human beings turn out to consistently confront some of the hardest cases of the problems studied by computer scientists. Often, people need to make decisions while dealing with uncertainty, time constraints, partial information, and a rapidly changing world. In some of those cases, even cutting-edge computer science has not yet come up with efficient, always-right algorithms. For certain situations it appears that such algorithms might not exist at all.
The insights computer scientists have gathered so far, say Christian and Griffiths, are surprising even to the people working on them:
Don’t always consider all your options. Don’t necessarily go for the outcome that seems best every time. Make a mess on occasion. Travel light. Let things wait. Trust your instincts and don’t think too long. Relax. Toss a coin. Forgive, but don’t forget. To thine own self be true.
Anyone who faces a new venture learns fairly quickly how real this is. Serial entrepreneurs have these insights encoded in how they operate. The book got my attention because it focuses on the interaction, or conversation, between brain and computer—both pattern-seeking entities.
Life is a game of probability, and yet we are still called to make decisions with imperfect and incomplete information. It is a miracle how many times our brains—wet systems inside a hard skull—fire synapses to deliver the right answer through deduction, incredibly efficiently and quite fast.
One simple example: Pretend I'm showing you a set of flash cards, each with a name written on it. The task is to guess the grouping or category. Ready?
Okay, try and scroll down slowly:
Richard
George
Joseph
Claude
Franz
Johann
Giuseppe
Igor
Ludwig
Wolfgang
When did you know? At which name? How many reams of data points did you have to sift through to get there?