“Out beyond ideas of wrongdoing and rightdoing, there is a field. I'll meet you there.”
[13th-century Sufi poet]
When we talked about uses of Artificial Intelligence (AI) in mainstream conversations ten years ago, we didn't know as much as we do today. But that doesn't mean we got it all wrong. It's the degree of wrong that matters when we work with systems: a small tweak in one direction or another can make a big difference.
Which is why big ideas and ambitious scenarios like Minority Report need tempering with a healthy dose of reality. This calls for focusing on the right things in order to do things right, to paraphrase Peter Drucker. It's not about who's wrong or right in the debate; it's about gaining a proper appreciation and understanding of what AI actually is, does, and could do.
Anything becomes more useful when properly understood, including our own role in making things work for us rather than letting them get in the way of improving our lot. We constantly need to balance talking about things, so we can advance our knowledge, with a proper comprehension of what we're talking about, so we can be on point.
But being on point doesn't mean we must be extreme, because there are costs at each end of the spectrum. The reality usually sits somewhere in the middle. When it comes to any argument, it's in our interest to figure out the middle ground rather than trying to be fair to all parties.
Alexis de Tocqueville understood that fairness based on fear (in this debate, the side treating an ‘intelligence explosion’ as a serious risk) makes the world a smaller place. That's why he put his faith in education and in self-interest properly understood. This principle applies to information as much as it does to technology.
To achieve better outcomes, we should become more strategic about the data we put into all systems, including our brains. Because, “the most basic concern of any strategy is to increase our ability to survive on our own terms in a threatening and confusing world,” says Chet Richards in Certain to Win. He's not just talking about combat, and neither are we.
Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also Director of the Digital Ethics Lab at the Oxford Internet Institute (OII). His research focuses on Information and Computer Ethics (aka Digital Ethics), the Philosophy of Information, and the Philosophy of Technology.
Admittedly, philosophy tends to be binary in its view of the world, he says, and right now this is not helping us gain the nuanced appreciation of AI that we need (emphasis mine):
True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work.
This means that we should not lose sleep over the possible appearance of some ultraintelligence.
What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
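That last point is worth making concrete. Below is a minimal sketch, in Python, of the classic halting-problem argument behind those undecidability results; it is my illustration, not Floridi's, and the halts() oracle is hypothetical, since the whole point is that no correct version of it can exist.

```python
# Sketch of Turing's halting-problem argument. The halts() oracle below is
# hypothetical: the argument shows that no correct, always-terminating
# implementation of it can exist.

def halts(program, argument):
    """Pretend oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no such algorithm can exist")

def troublemaker(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:       # loop forever if the oracle says "it halts"
            pass
    return "halted"       # halt if the oracle says "it loops forever"

# Ask troublemaker about itself: whatever halts(troublemaker, troublemaker)
# answered would be wrong, so a general yes-or-no halting test cannot be built.
```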
For a helpful definition of intelligence, we should look to the military world, where strategy is not only praised but deemed critical. In this usage, intelligence is a:
discipline that uses information collection and analysis approaches to provide guidance and direction to commanders in support of their decisions. This is achieved by providing an assessment of data from a range of sources, directed towards the mission requirements or responding to questions as part of operational or campaign planning.
And so we should base our thinking about and appreciation of AI on a range of sources, and our direction in developing these tools on the mission requirements. Floridi and many others who work with applications in the middle ground say those requirements come down to responding to our challenges with what these technologies can do better than us, and to refocusing from our fear of a double-edged sword toward how AI is in fact “like a two-knife system that can sharpen itself.”
It's this kind of thinking that will produce the advances we need to tackle our challenges. The real challenge is how we manage ourselves as we work on building out the capabilities that can extend the usefulness and value of AI. Floridi has some suggestions:
- Make AI more environment-friendly — to help us address the complexity of the systems that are causing crises and disasters for the planet
- Make AI more human-friendly — that is, treat people as the recipients of its benefits rather than the means sacrificed to get there
- Make AI's stupidity work for human intelligence — that is, let it do the heavy lifting where it does better, integrated with people where our brains do better. This should benefit everyone, with the costs amortized across society
- Make AI’s predictive power work for freedom and autonomy — whether in marketing, in influencing behavior and nudging choices, or in gathering intelligence for protection, we can use this application to benefit people without taking away their dignity or privacy
- Make AI make us more human — which was the premise of Kevin Kelly's What Technology Wants and of my talk on uploading humanism at Ignite Austin. The risk here is that even limited power may go to heads primed by fear and special interests rather than enlightened by reason and evidence
The sciences and the humanities are meant to work together, not at odds with each other. To improve a system, we need to work at the system level and address the relationships within that system. As John Muir said, “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.”
Thinking at the system level means looking beyond simple cause and effect to include dynamic feedback loops. It also means understanding that what a system can process is defined by its boundaries, and that systems can be open or closed to their environment.
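As a toy illustration (mine, not from the sources above), here is a short Python sketch of a dynamic feedback loop bounded by a capacity: the current level of a stock feeds back into its own rate of change, so its behavior can't be read off a single cause-and-effect arrow.

```python
# Illustrative feedback loop: the stock's level feeds back into its own rate
# of change, and the capacity acts as the system's boundary.

def simulate(steps=10, stock=100.0, growth=0.05, capacity=500.0):
    history = [round(stock, 1)]
    for _ in range(steps):
        # balancing feedback: growth slows as the stock approaches capacity
        stock += growth * stock * (1 - stock / capacity)
        history.append(round(stock, 1))
    return history

print(simulate())  # growth tapers off instead of running away
```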
To figure out how to make the relationships within a system constructive, we need deeper knowledge and a more nuanced appreciation of its parts and their interactions. Which is why it matters that Dr. Anne Foerst, a researcher at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology and the director of M.I.T.'s God and Computers project, is a theologian.
In an interview for The New York Times, Claudia Dreifus asks Foerst why a theologian works in a lab that builds robots. Says Foerst:
Two reasons. The first is when you build machines in analogy to humans, you make assumptions about humans. Theologians explore the cultural and spiritual dimensions of that very question, What does it mean to be human? The idea is that as these robots are built, we can use the wisdom of religious studies to engage our understanding of humans, and thus what you build into the humanoid machines.
The other reason is that when we build social interactive robots that force people to treat them as if they were persons, tricky moral questions come up. For instance, Who are we really? Or is there a dimension to social interaction that goes beyond that? What are ethics here? Why should I treat someone else like a human, with dignity, when it is just a mechanistic thing?
For instance, one question we discuss quite frequently is, What would be the threshold when the robots are developed to a certain point that you couldn't switch them off anymore? The question really is, When does a creature deserve to be treated as intrinsically valuable?
We should start asking those questions about people; that would be a good starting point for understanding. Foerst says that point for robots is 50 years down the road. But we do have opportunities to treat humans with dignity today, every day, starting with the debates on whether an intelligence explosion is a serious risk.
Floridi suggests a mix of tolerance and steadfastness, following Virgil's advice to Dante in the Inferno: ‘Speak not of them, but look, and pass them by.’ Technology has opened the floodgates to information; it's also retraining us and rewiring our brains. It has brought about what Floridi calls The Fourth Revolution, changing the answer to the fundamental human questions of who we are and how we relate to each other.
But before we dive headlong into the answers, we'd better know how we're asking the questions. Self-interest properly understood helps us learn how to get it less wrong.
+
Learn more about the value of strategy, and the importance of practicing thinking.