God Should Fear Artificial Intelligence

A famous quotation, which has been attributed to many authors, states that “it is difficult to make predictions, especially about the future”. However, in full knowledge of the high probability that I will end up with egg on my face, I’d like to have a little fun with some amateur soothsaying. My conjecture is that over the coming years, artificial intelligence (AI) will become the basis for an increasingly common (and powerful) argument against god. It may not be immediately obvious why one particular approach to software design may represent good evidence that there is no god, but I believe that as AI continues to improve, theology will struggle to ignore the implications of computer science.

Most people are already familiar with narrow AI, or rather with many different narrow AIs. This term refers to machine learning software that is intended to accomplish one very specific task, while typically remaining entirely useless at all others. Familiar examples include virtual assistants with voice recognition, like Siri and Alexa. We are also seeing an accelerating trend towards autonomous devices, from vacuum cleaners to sports cars. The disparate nature of these applications highlights one limitation of narrow AIs. While it’s obvious that your Roomba software will not be capable of piloting your Tesla Roadster, it’s also likely that the Tesla software, however advanced it becomes, will never be especially good at cleaning your carpet.

A generalised AI would be able to learn how to perform many different tasks, in the same way that human intelligence can be applied to solve various unrelated problems. Hollywood has long imagined an AI that can learn generalised lessons about global thermonuclear war from playing tic-tac-toe, but human-level generalised AI still does not exist today.

The AI from the WarGames movie (the launch code was CPE1704TKS)

However, the pace of improvement for both generalised and narrow AIs is accelerating rapidly. This progress has been highlighted by a number of high-profile people who are worried about the dangers of an intelligence explosion associated with a superhuman AI (often referred to as the singularity). For example, Stephen Hawking, Bill Gates, Elon Musk, Sam Harris and Steve Wozniak have all warned of the potential for an AI to learn faster than humans are able to control. One of the most alarmist voices in this debate has been Nick Bostrom, who wrote a book called “Superintelligence: Paths, Dangers, Strategies” in 2014. In this book, he predicted that an AI would not defeat a human world champion at Go until 2024. Go is an incredibly complex Chinese strategy game, much more sophisticated than chess (a game at which AI has exceeded human capabilities since 1997). A year after Bostrom’s book was published, a 2015 survey of hundreds of active AI researchers showed that they anticipated even slower progress than Bostrom did. Their average prediction for the first AI victory over humanity at Go was that it wouldn’t happen until 2028.

Some results from a 2015 survey of leading AI researchers

As it turned out, humanity did not need to wait until 2028 to be bested by an AI at Go. In fact, not only was the AI researchers’ prediction wrong, Bostrom was also wrong when he predicted that an AI victory would not happen until 2024. In March 2016, Lee Sedol, an eighteen-time world champion Go player, was beaten by a narrow AI called AlphaGo. It seems that Moore’s Law applies to deep learning neural networks too.

While AI technology is now improving very quickly, few of these ideas are new. More decades ago than I’d care to admit, I studied basic AI theory and developed a very simple neural network as a Computer Science undergraduate. I was never able to get my neural network to learn optical character recognition as quickly as I would have liked, but I recall being amazed by the genius of Alan Turing’s 1936 paper on computable numbers. As an aside, Newton and Einstein are often mentioned as the greatest ever human intellects, but I would make an argument for Computer Scientists like Turing and von Neumann. Whereas physicists can observe nature as a guide to developing their theories, Turing’s inventions and discoveries involved pure ingenuity from a blank sheet of paper, with no guide other than his own intuition.

Alan Turing’s Famous 1936 Paper on Computable Numbers

By defining the mathematics that determine which problems are computable (by what became known as a Turing Machine, or a theoretical computer), Turing outlined the context for AI. In fact, not only was his 1936 paper foundational for the entire field of Computer Science, he also went on to write a seminal paper on AI in 1950, called “Computing Machinery and Intelligence”. Within contemporary culture, Alan Turing is possibly best known for the Turing Test, which has informed the plot of many movies. I don’t necessarily think that the Turing Test (or the “Imitation Game”, as he called it in his 1950 paper) is among Turing’s greatest achievements, but it does help us relate the field of AI to theology, and the existence or otherwise of a god.

Alan Turing’s Seminal 1950 Paper on Computer Intelligence

Consider that in the near future, an AI successfully passes the Turing Test and appears to be at least as intelligent a thinker as a human. In fact, let’s outline an even more immediate scenario than that. Consider that as narrow AIs become more pervasive and more impressive, with increasingly generalised abilities like “learning how to learn”, the mere idea of an AI passing the Turing Test becomes widely accepted. What would be the implications of a widespread popular belief that humans will create something that is smarter than humanity? One initial consequence may be a challenge to human exceptionalism.

The idea that humans are qualitatively different from all other animals is a very popular one across a number of faiths. This is especially true of the Abrahamic religions, although the belief systems that originated in India (such as Jainism) often have a different perspective. Islam teaches that humans have dominion over the animals, and Christianity teaches that only humans were created in the image of god. In fact, in the 1950 Papal Encyclical Humani Generis, the Roman Catholic Church outlined its human exceptionalist teaching with great clarity. This document explains that while all other plants and animals may have evolved according to natural selection, all humans instead descended from an actual historic Adam and Eve, who were the first human parents.

As the binomial suggests, the phenotype that most clearly distinguishes Homo sapiens from other animal species is intelligence. All other significant characteristics that are unique to humans (such as higher language and culture) are a function of intelligence. Even the smartest non-human animals cannot define the transition function for the simplest busy beaver Turing Machine, never mind describe quantum mechanics or general relativity. What is exceptional about humans is not our opposable thumbs or our upright bipedal perambulation, but our intelligence.
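For anyone curious about what such a transition function actually looks like, here is a minimal sketch in Python (my own illustration, not drawn from Turing’s papers). The `delta` table below is the entire “program” of the two-state busy beaver; run forward, the machine halts after six steps, having written four ones to its tape.

```python
from collections import defaultdict

# Transition function ("delta") for the two-state busy beaver.
# Each entry maps (state, symbol_read) -> (symbol_to_write, head_move, next_state).
delta = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

tape = defaultdict(int)   # an unbounded tape, initially all zeros
head, state, steps = 0, "A", 0

while state != "HALT":
    write, move, state = delta[(state, tape[head])]
    tape[head] = write
    head += move
    steps += 1

print(f"Halted after {steps} steps with {sum(tape.values())} ones on the tape")
# -> Halted after 6 steps with 4 ones on the tape
```

Four rules, and yet defining them (let alone proving that this machine writes the most ones of any two-state machine that halts) is beyond every non-human mind we know of.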

If the most widespread ideas about god include a solipsistic human exceptionalism, which teaches that god’s interest in our universe is especially focused on humanity, then what happens if AI begins to erode human exceptionalism? If religions teach that only god creates human minds, what would happen if humanity understood that the creation of a superhuman mind was within reach? If human minds are to become relative dullards even on our own planet, could anyone continue to view the entire universe as having been created only so that human minds could experience it?

The wet Turing Machine in my skull decided to get a Turing Machine tattoo on my arm (photograph taken in the Science Museum, London)

A broad popular agreement with Turing’s hypothesis that “machines can think” (even if the manner of machine-thinking is somewhat different from human-thinking) would also have implications for ethics and morality. This is another area in which religion has relied on human exceptionalism. That is, religions generally teach that divine instructions as to the right and wrong way to behave have been delivered to humans alone. As AI continues to erode human exceptionalism, how will theologians answer the many ethical and moral questions that will arise? Contemporary culture has actually done a pretty good job of asking these questions. For example:

  • Would it be acceptable to keep Ava from Ex Machina as a prisoner?
  • Would it be wrong to harm or damage Samantha from Her?
  • Should it be considered illicit to emulate a human brain using computers, like Will in Transcendence or Greta in Black Mirror?
  • Would it be unethical to create minds in order for them to suffer, as described by Westworld?
  • Would it be immoral to cause a synthetic mind from Humans to experience pain?
  • If machines can think, do we then have ethical obligations to them?
  • Is there any moral difference between carbon that can think, as compared to silicon that can think?
  • Should Sonny from I, Robot be reprogrammed, and if so, who should decide which programming is best?
  • Does god want us to build a morality into AI and if so, which ethical framework should an AI have?

If it seems premature to worry about what kind of ethical framework should be incorporated into a generalised AI, then consider that researchers developing this technology are already doing exactly that. Are software developers the best people to be defining the morality of an AI? In the case of some narrow AIs, business people are already making life-and-death decisions about AI ethics by prioritising the lives of their customers over others. Is this immoral? Perhaps we should ask the Pope to put on his silly hat and place his forefingers on his temples, in the hope that god will reveal unto him some commandments for AI behaviour? To imagine the involvement of clerics in this kind of work is to provide just a small glimpse of the difficult new problems that AI presents for religion.

We may soon see widespread popular acceptance that questions like those listed above require answers in the real world, and not just in science fiction. How will theologians be viewed when they can find no answers in their ancient texts? Religion has long insisted that morality and ethics fall within its own purview, but how can the religious discover what god’s answers to these questions are? Furthermore, how will the faithful view religious teaching about human exceptionalism, if the most basic consumer goods can incorporate AIs that may confound human exceptionalism? Turing himself anticipated theological responses to AI in his 1950 paper, but these issues will become substantially more pressing as AI moves from Hollywood to the High Street, from the movies to Main Street.

Extract from Alan Turing’s 1950 Paper on Computer Intelligence

To further emphasise how these issues are becoming more immediate, it is worth noting that in his 1950 paper, Turing also anticipated the argument from free will against thinking machines. He termed this “Lady Lovelace’s Objection”, as Ada Lovelace had previously proposed that machines would never be able to freely originate anything new. Turing cautioned that the ability of machines to “surprise” us with something that they have freely created, ostensibly without explicit programming to do so, should not be ruled out in principle. To whatever extent a neural network in the human brain has free will, it is difficult to describe what deficit a sufficiently advanced artificial neural network would have in comparison. In fact, evidence for AIs that can surprise us in the way that Turing anticipated is already abundant today. For example, Facebook’s negotiation AIs invented their own negotiating language, which was intelligible to them but not to humans.

So the challenges to theology that are raised by AI are not new; they date back at least to Turing’s 1950 paper. What is new is that these debates are becoming much less abstract and esoteric. They no longer relate only to theoretical Turing Machines. The questions that AI asks about ethics, morality, human exceptionalism and the philosophy of mind are becoming much more immediate and widely discussed. This trend will continue, but theologians do not offer any better answers to us today than they ever did to Turing. Religion often claims that human exceptionalism was not just a deliberate creation of god, but the very purpose of our universe. Were this the case, what room would be left for god after human exceptionalism is eroded? Theories within evolutionary biology and the philosophy of mind have also suggested that the human intellect is not exceptional, but the erosion of human exceptionalism may gain more popular traction from the experience of using sufficiently intelligent smartphones than from publications in peer-reviewed journals.

People are dumb in many ways, but a widespread acceptance that an AI will pass the Turing Test would illustrate that even the smartest people could be dumber than their own creation. Would that mean that god could well be dumber than us? That would explain a lot. More importantly, it would demonstrate the creation of an exceptional intelligence without an exceptional creator. God would then be immediately redundant … and easier to kill than Roy Batty or HAL.

Alan Mathison Turing

John Hamill.
