Artificial intelligence and what it means to be human (Science and faith, Part 3)
Sermon by guest preacher Neil Dodgson on 1 July 2018
Let me start with a question that has puzzled philosophers and theologians for centuries: where is your soul?
I have been taught that a human is a threefold creature: body, mind and soul. But where is your soul located? How does your soul interact with your mind and body? How does your soul survive the death of your body? And if I build a thinking machine, will it have a soul?
These questions are, in many ways, unanswerable. The soul is intangible; it cannot be weighed or measured, so it is hard for science to say anything about it. So I will leave those questions to one side.
So let’s try a simpler question, one that science might have a chance of answering. Where is your mind? I don’t mean your brain. I know where your brain is. I want to know where your mind is…
Human beings are amazing creatures. We are fearfully and wonderfully made. Our brains are the most amazing thing about us. The brain is the most complex structure on the planet. Your brain is what you think with, what controls your every action, what coordinates every movement, what allows you to understand what I’m saying, analyse whether it makes any sense, and then argue with me later about why I am wrong.
Your brain is built up of neurons: tiny nerve cells. Each neuron takes in signals from other neurons, processes them, and then passes its results on. You have a phenomenal number of neurons in your brain: one hundred thousand million of them (100,000,000,000). That is a very big number. There are as many neurons in your brain as there are stars in the Milky Way.
And every single one of those neurons is connected to thousands of other neurons to make a complex network that encompasses your thoughts, your memories, your plans, your dreams. There are a thousand million million connections between neurons in your brain.
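For the curious, the behaviour of a single artificial neuron, the highly simplified model of a nerve cell that computer scientists use, can be written in a few lines of Python. This is an illustration only: the input signals and weights here are invented for the example, and real biological neurons are vastly more complicated.

```python
import math

def neuron(inputs, weights, bias):
    """Weight each incoming signal, add them up, and 'fire' an
    output strength between 0 and 1 (a sigmoid function)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Three incoming signals, with weights invented for illustration.
strength = neuron([0.5, 0.9, 0.1], weights=[1.2, -0.7, 0.3], bias=0.0)
print(strength)  # a value close to 0.5
```

Each connection in the brain corresponds, very roughly, to one of those weights; the thousand million million connections mentioned above are what make the real thing so staggeringly complex.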
The human brain is where your thoughts are, but they are not there solely in the physical structure of your brain, they are in the fleeting electrical impulses that run through those neurons and their connections. And that brain stops working when you die, even though the physical structure does not change immediately.
So what is it that gives you a mind? What is it that gives you a consciousness? Scientists call consciousness an emergent behaviour: it is something that is more than the sum of its individual components. An individual neuron does not think. A hundred thousand million neurons, together, do think.
We are truly wonderfully made. So your brain is where your mind is. It is where you hold your natural intelligence.
What about artificial intelligence? What’s that about? Let me give a brief history.
Artificial intelligence goes back 70 years to the earliest days of computing. Back at the start there was great hope for creating artificial intelligence quickly. The thinking was: if people can do something, surely it will be easy to get a computer to do that thing. It turns out that people are much more complicated than computer scientists expected. For example, understanding natural spoken language is something most children can do easily. It turns out to be stunningly difficult to get a computer to do. It is only in the last decade that we’ve got computers to reliably recognise speech, and only more recently that we’ve been able to get a computer to understand what is meant, provided you stick to a carefully constrained context. We can today make a computer system that handles customer service for a bank, answering straightforward questions about our accounts, but the system cannot, yet, engage in conversation about the Prime Minister’s baby or the All Blacks’ latest win.
In the early days we worked on getting computers to do what they do well and what humans find difficult: to do symbolic manipulation, like a mathematician does. So we got computers to do things like play chess, something most of us find difficult. And we were extremely successful. It is easy to get a computer to play chess because there are straightforward rules and clear guidelines for what constitutes good and bad moves. Computers have been able to beat humans at chess for decades. But a chess-playing computer is just a box of tricks doing exactly what we tell it to. You cannot hold a conversation with it. It cannot make you a cup of tea.
Today we are well beyond the point of getting computers to play abstract games. The reason AI is so in the news now, and the reason I’m talking about it, is because it can do so much more and has so much promise. After decades of struggle, there was a big breakthrough about ten years ago in how we do AI. The computers had got big enough and fast enough to allow us to try something that turned out to actually work. The new technology is called “deep learning”. It’s a swanky term for a simple concept. We build computer systems that mimic the way the human brain is constructed: with many layers of artificial neurons, each layer communicating with the next through lots of connections. The “deep” in “deep learning” comes from the fact that the neural network is many layers deep. The “learning” part comes because we train this neural network by feeding it an enormous amount of data. That is, we give it lots of different inputs and tell it what the output should be. The system modifies its neurons and their connections to make sure that it produces the right output from any given input. With enough training data you can get a deep neural network to then give the right answer to inputs that it’s never seen before: it has “learnt” how to solve that particular problem.
For example, much of the stock market is now run by AI systems that have been trained on stock market data to make the best possible trades. Trading is done largely by computers, not by humans: AI systems trading stocks with other AI systems.
Insurance companies, banks, and other financial institutions are increasingly using AI systems to make decisions about who gets loans, how much your insurance costs, and what risks to take.
So how do we respond to a computer making decisions? First, taking decision-making out of human hands is not a new idea: we have had systems for years that make decisions that are unfair. You have all at some point had someone say “I’m sorry but that’s the policy; there is nothing I can do.” The depersonalisation of decision-making has been going on for decades, if not centuries.
But somehow it seems worse when it is a machine doing it. And one of the challenges of artificial intelligence based on “deep learning” is that there is no way of working out why the system has made its decisions. The decision making process arises out of the statistical learning. It is encoded in millions of connections between neurons. We do not know why the system makes the decisions it does and, because of the way the training works, there is no way to find out.
What do you do when the AI system turns you down for insurance cover? When it decides that your child cannot go to the high school they’ve set their heart on? When it decides that your parent is not a suitable candidate for life-saving medical treatment? How do we respond as a community when the AI system tells us things that are unpalatable? These are questions that we are going to have to face as a society.
Then there is the challenge that AI is going to take our jobs. Yes it is. Some commentators put the job losses at 50%. That is a massive societal change. Call centres will be replaced by AI systems with fake empathy. Truck drivers will be replaced by autonomous vehicles. GPs will be replaced by AI decision-making systems that don’t get tired, don’t make mistakes, and don’t understand what it is to be human. Lawyers will be replaced by systems that apply the law fairly, without bias, and with ruthless efficiency. Teachers will be replaced by AI systems tailored to individual students, with the human teaching assistant reduced to crowd control, keeping the children focused on their automated tutoring systems.
That is one view of the future. How would the church respond to that phenomenal shift in society? At the height of the Great Depression, unemployment was running at 25%. How do we cope with an unemployment rate of 50%? When so many jobs have vanished that many people have no hope of getting a job. Some argue that new jobs will be created, but not at anywhere near the same rate as jobs are lost. What do we as a church do to shape a society like that?
A more positive view is given by the New Zealand AI Forum’s report, “Shaping a Future New Zealand”. It estimates that the job losses might be as low as 10% and that those losses will be spread over decades. That’s still hundreds of thousands of people losing their jobs.
Whichever figure is right, we are moving to a society where there will be less work to go round. The church should take a lead in imagining and creating the sort of society we will become. Do we want to continue on our current trajectory, towards a society where increasing numbers of people struggle to feed and house their families? Or do we want to push for a society with more equity, even if it means that those of us who are very comfortable become less so?
AI can be part of the problem or part of the solution. It can be used to create both types of society. It can be harnessed to increasingly force people into being passive consumers of goods, services, and entertainment, to profit a small number of multi-nationals. It can also be harnessed for greater good.
So what are the positives of AI? Let me give two examples. First, education. Imagine personally tailored education for every child, taking them at the right pace, developing them to their fullest potential, building on their strengths, providing strategies to get them through their weaknesses. Imagine the change to our society if we can really personalise education; if we really could train each individual to their fullest potential. We will pick up those who currently fall by the wayside and we will develop our best and brightest far beyond what is possible today. What an enormous benefit to the whole of society.
Then there is medicine. The NZ AI Forum tells us “…there is immense potential to save both lives and money through AI systems.” We have recently developed AI systems that are able to diagnose melanoma better and more consistently than any human expert. We did this by throwing enormous amounts of data at a deep learning system and, lo and behold, it learnt to recognise cancerous spots better than any human doctor could ever hope to do. It is a very specialised skill. It demonstrates that there is great potential to combine the best of what computers can do with the best of what humans can do. There will be computers that are brilliant at diagnosing cancer and humans that are brilliant at helping others to cope; to sit beside someone who is dying, to bring comfort, hope and love.
I have not yet talked about robots. Robots are the thing that immediately springs to mind when we talk about AI. But all the systems I’ve talked about so far are computers working away in a box, whether on your desk, at the stock exchange, or in a large data centre. It turns out that robots are not what we currently need to worry about. They are not where the main advances in AI are happening. The advances are happening inside perfectly normal computers, big and small. But our fascination with robots tells us a great deal about what it is to be human.
Indeed, artificial intelligence leads us to a whole range of interesting questions about what it means to be human and about what it would mean for a machine to act like a human. For example, can a computer think? This is a question that has exercised philosophers and computer scientists since before the first electronic computers were built.
To answer this I need to talk about “Weak AI” and “Strong AI.” Weak AI is the sort of AI in all my examples so far. It is a computer that can do a particular job that you would otherwise think that you would need a human to do. For example, diagnosing cancer or answering the phone at a call centre. We know we can already build Weak AI. But we also know that a Weak AI system is not “thinking”; it is just doing what it was programmed to do.
Strong AI, on the other hand, is an artificial intelligence that can deal with general problems. It uses its knowledge of the world and its intelligence to handle novel situations. It could hold down a decent conversation. It would be a thinking machine. It would be self-aware.
We have never built a Strong AI and we do not yet know whether we can build a Strong AI. Most computer science researchers think it is just a matter of time before we have a Strong AI. They believe that we will one day, and within the next couple of decades, create a computer that can think better than a human and it will then design an even better computer, and so on, leaving humans completely out of the loop, with hyper-intelligent computers developing ever more clever computers, and them doing whatever it is that hyper-intelligent computers want to do.
Counter to this, some believe that it will never happen. My friend, Peter Robinson, Professor of Computer Technology at Cambridge University, is one of the sceptics. He says that asking if a computer can think is like asking if a submarine can swim. But that brings us to the question: what do we mean when we say a human can think?
Consider how a child develops. Children go through distinct phases of learning how to think. Could we make a computer that learns like a child? With babies, we accept them as part of the human race, with the potential to become an adult, a fully functioning member of society. Could we accept a computer in the same way? What would that mean?
We say that it takes a village to raise a child. A child cannot be raised in isolation from other humans: if you try that, you create a monster. Relationships between humans are super-important. Our minds are developed in society, through relationship.
So being human is not just about having an intelligence, having a mind. It is about developing that mind within a community, within a family, within a society. Being human is about relationship. That fits well with our theology of having relationship with God and with one another.
“Love the Lord your God with all your heart, and soul, and mind, and strength. And love your neighbour as yourself.”
How could an artificial intelligence be part of a community? That may well come down to embodiment. We are embodied intelligences. Our intelligence resides in a physical body. So we are back to robots. It looks as if our fascination with robots is owing to our own embodiment. Could we give an AI a robot body that would help it develop as a thinking being? Could we provide it with a society that would allow it to develop as an individual?
Embodied intelligence seems to be key to what makes us human. That is what makes the reading from 1 Corinthians so interesting. In that reading we are promised a physical resurrection body, like the one Jesus had. Our future hope is not about floating round as disembodied spirits communing with other disembodied spirits. Our future hope is to be resurrected into some sort of physical body. It seems that the human envisaged in the Bible is body, mind and spirit; and that you cannot separate these three aspects. When the body and mind die, we believe the spirit is what remains, but the promise in this passage is that a new body will be provided to house the spirit.
That is a very fast skim over a wide range of topics. What are the practical messages? Let me finish with four quick points to take away.
- First, as a church, we must work actively to make our society better in the face of the massive changes that are coming.
- Second, we are well placed to grapple with the ethical problems that come from the increasing use of artificial intelligence and the dehumanisation of decision making.
- Third, we must affirm that community is vital to the full development of human beings.
- And finally, we must always put relationships at the heart of what we do. What you carry into eternity is not the possessions you have or the things you have made, but the content of your character and the influence you have had on other people’s characters.
We are all fearfully and wonderfully made. Let us recognise that in one another.
Notes and further reading
 Psalm 139:14
If you want to get an idea of how big one hundred thousand million is, imagine the Westpac Stadium. Imagine it full of people, not just in the stands but also standing shoulder to shoulder on the field. The Westpac Stadium holds 50,000 people. Now imagine putting Westpac Stadiums down on every single bit of flattish land in Wellington. If you cover all of downtown, all the land by the airport, Island Bay, Te Aro, the Karori valley, Miramar, you will have nearly five million people in Westpac Stadiums. That is nowhere near the number of neurons in your brain. You would need to cover all the land in New Zealand with Westpac Stadiums, each full to capacity with people, to have as many people as there are neurons inside your head.
 That number is so big we’d need to cover five planets with Westpac Stadiums full of people to get there.
 Anchali Anandanayagam is a principal at Auckland law firm Hudson Gavin Martin and specialises in tech, media and IP law. She says that “…unless you’re involved in the tech industry, artificial intelligence doesn’t mean much. It’s abstract. The average person reaches for the touchstones they know, like what’s in popular media — Terminator, Bladerunner. That’s not very helpful.” For those who want to know more, I have provided, at the end of this document, hyperlinks to a number of short media articles that explain various aspects of artificial intelligence.
 Going beyond these examples, it is easy to imagine that we may one day have an AI system that operates in court, weighing the evidence, deciding who is guilty and how much the fine should be, or how long the jail sentence.
 Perhaps it would be fine if we knew that the AI system was completely unbiased and trusted that it was doing things for our good. That does sound uncomfortably close to a religious position: trust in the almighty computer, it cannot be wrong!
But we already know that an AI system can be biased. There was a recent case where the NZ government was profiling illegal over-stayers. The bias came because the AI system was unfairly targeting Pacific Islanders. The AI had been fed training data from a database based on data provided by officers who were themselves biased. The humans were biased, not the machine. The machine is not omniscient. It is only as good as the data that was used to train it.
Then there are cases where an AI system legitimately uncovers a real bias that is uncomfortable and we need to decide how to handle that. In the UK, the insurance industry used its decades of data to demonstrate that young men were far more likely to be involved in car accidents than young women, so offered young women cheaper insurance. The companies were taken to court on the grounds that this was unfair discrimination against young men and the companies lost. So young people in the UK now get charged the same for car insurance, whether male or female, despite the clear evidence that women are much less of a risk than men.
A third example application is in conservation. Canterbury University has recently developed an AI system that can identify predators in the bush and target them far more effectively than traps can. Their experimental system uses video cameras with a computer that is trained to recognise possums, rats and stoats. When it sees one it can fire a paint ball at the animal, except that the paint ball is full of a poison that the animal will ingest when it tries to clean its fur. The system can reliably identify predators, which it shoots, and reliably identify native birds, dogs and humans, which it carefully doesn’t shoot. It is still at the experimental stage, as autonomous shooting of paint balls is a little controversial, but I’d expect this to gradually replace trapping and the indiscriminate dropping of 1080 pellets.
 Sure, such a system can only go so far: socialisation, empathy, and relationships will need to be trained in groups, probably still mediated by humans.
So what about robots? There is a lovely quote from Douglas Adams about this. In his science fiction novel, The Hitch-Hiker’s Guide to the Galaxy, Adams writes this: “The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a person. The marketing division of the Sirius Cybernetics Corporation defines a robot as ‘Your Plastic Pal Who’s Fun to Be With’.”
It turns out that truth is mirroring fiction. The mechanical apparatus designed to do the work of a person is that big robot arm in the car factory that does one thing well over and over again. We’ve had those for years. But that’s not what immediately jumps to mind when you think of a robot. The companies that are trying to get people to buy robots for their homes are building things that actually are “your plastic pal who’s fun to be with”. They are small cute robots that cannot do anything particularly useful but can fulfil the role that might otherwise be filled by a cat or dog, but without the mess and without the sheer randomness that comes from having an animal pet. It is likely that people will become as attached to their home robots as they are to their animal pets. Humans are amazingly able to anthropomorphise and to have relationships with animals, such as cats and hamsters, that give very little back. Something like a dog, or a well-programmed robot, that gives a good deal back is even easier for us to have a relationship with.
 The “singularity” is the term used by futurologists to describe the point at which we build a computer that is able to design a better computer that humans could not have designed.
 Matthew 22:37–38
 1 Corinthians 15, especially verses 35–44
 C.S. Lewis envisages this extremely well in the final chapters of his children’s book, The Last Battle, and in his adult novel, The Great Divorce.