How should Christians respond to Artificial Intelligence?

Savvas Costi writes: In an interview last year Tony Blair expressed his bewilderment that none of the political parties were talking about the next technological revolution (of course, Brexit was top of the agenda at the time of the interview). But is Blair right to call this ‘the single biggest challenge we face’? John Lennox’s latest book, 2084: Artificial Intelligence and the Future of Humanity, thoroughly explores the research on this subject and leads us to question whether society is largely oblivious to what is going on.

In his writing, Lennox blends science with philosophy in a way that is readable and dispels conflict between science and Christian belief. Interestingly, Lennox even questions whether atheism will survive science (p. 15). We’ll explore this later.

In the preface, Lennox outlines his main aims:

This book represents an attempt to address questions of where humanity is going in terms of technological enhancement, bio-engineering, and, in particular, artificial intelligence. Will we be able to construct artificial life and superintelligence? Will humans so modify themselves that they become something else entirely, and if so, what implications do advances in AI have on our worldviews and on the God question in particular? (p. 9)

Although Lennox himself is no expert in AI development, this doesn’t disqualify him from being able to write about it for, as he says, ‘one does not need to know how to build an autonomous vehicle or weapon in order to have an informed view about the ethics of applying such things’ (p. 10). Given the potential scope of its impact, Lennox is compelled to write ‘for the thoughtful reader.’ Indeed, given that we are all affected by the rise of AI, I hope it reaches a wide audience.

One can’t help but notice the title chosen for the book and the connotations it brings. The title was actually suggested by Professor Peter Atkins prior to a university debate (p. 9) and, although this is no dystopian novel by any means, Lennox’s observations often come uncomfortably close to Orwell’s work.

What compels someone to develop artificial life and superintelligence? We are ‘insatiably curious’ (p. 11) as Lennox puts it, inquisitive about those big questions which never seem to go away. We ignore them to the detriment of living well and living freely,[1] and Lennox recognises that ‘our responses to these questions help frame our worldview,[2] the narrative that gives our lives their meaning’ (p. 11). A liveable philosophy is one with a strong sense of meaning and purpose, and for some, they will find this through their work with AI. Some will even attempt to cross the frontier to reach God-like status by creating life-like machines.[3] Moreover, we’re well aware of the secularist narrative arguing that ‘life is generated by natural processes without supernatural intervention’ (p. 31), which reinforces Naturalism.[4] Lennox recognises this is a major theme elucidated in Dan Brown’s fictional bestseller, Origin[5], and expresses his concern that the book may be read as popular science by many, particularly as Brown cites scientific findings against the intention of the researcher.

Dan Brown’s Origin is … from a scientific perspective, flawed from the start by making the dubious move of citing someone’s scientific research to make plausible the exact opposite of what the scientist himself thinks that it means … since Brown says he is motivated by a serious philosophical question, many people may well believe what he says, thinking that his conclusions are in tune with established science (p. 32).

Given that more people are likely to read Brown’s novel instead of Jeremy England’s research,[6] it’s understandable why one might deem this to be misleading; the Evening Standard’s endorsement printed on the back cover of Brown’s book, stating it is ‘well researched’, doesn’t help.[7] Speaking of the novel, England himself has said, ‘there is no real science in the book to argue over,’ in a Wall Street Journal article entitled ‘Dan Brown can’t cite me to disprove God.’


Lennox goes on to explore the views of other experts in the AI field. On the topic of artificial superintelligence one expert observes that ‘there are fundamental differences between machine intelligence and human intelligence – differences that cannot be overcome by any amount of research … “the artificial” in artificial intelligence is real’ (p. 26).[8] Lennox also concludes that attempts to explain the origins of life in purely naturalistic terms are inadequate; as the late philosopher Antony Flew put it: ‘how can a universe of mindless matter produce beings with intrinsic ends, self-replication capabilities, and “coded [informational] chemistry?”’[9] Wouldn’t a better explanation be that ‘the informational aspects of the universe, life, and consciousness ultimately point to … the existence of a non-material source for these things – the Mind of God?’ (p. 118) One can begin to see how, for Lennox, science is better sustained within the soil of Christian theism than of naturalistic atheism.[10] In fact, faith in God was the motor which drove the rise of modern science (p. 36).

What about the ethical implications of developing and using AI? Lennox summarises how far we’ve come:

Billions of dollars[11] are now being invested in the development of AI systems, and not surprisingly, there is a great deal of interest in where it is all going to lead: for instance, better quality of life through digital assistance, medical innovations, and human enhancement on the one hand, and fear of job losses and Orwellian surveillance societies on the other hand (p. 13).

Navigating these issues requires an informed level-headedness which Lennox encapsulates in his writing. ‘There are many positive developments, and there are some very alarming negative aspects that demand close ethical attention’ (p. 54). Lennox surveys the advantages and disadvantages that come with using narrow AI,[12] devoting a whole chapter to each. Benefits such as greater efficiency in the workforce and healthcare,[13] as well as huge savings for the economy, are juxtaposed with the drawbacks, notably the risk of data falling into the wrong hands (p. 58) and the issue that ‘technology is developing far faster than the ethics to cope with it’ (p. 59).


There are mixed forecasts regarding the actual impact the technology is likely to have. You can read this piece from Paul Ratner to see predictions from a team of experts on which jobs AI is likely to do ‘better than humans.’ What’s most noticeable is that ‘AI should be better than humans at pretty much everything in about 45 years’ (p. 65). None of us want a future where there is ‘technological unemployment’ (p. 66). But then there’s this piece from Anmar Frangoul which says AI will create more jobs than it destroys. Where does this leave us? ‘We just don’t know with any precision how jobs will be affected, but that they will be affected is clear – they already are’ (p. 66). There are also huge issues around data that is harvested through social media outlets[14] and “personal trackers shaped like smartphones”(!) which can be used ‘not only to inform us but to control us’ (p. 67), as is already happening in places like China. I found these words from Libby Purves regarding the prevalence of digital systems like Siri and Alexa to be most jarring:

Novelty blurs the oddity of paying to live with a vigilant inhuman spy linked to an all-too-human corporate profit centre thousands of miles away … To welcome an ill-regulated corporate eavesdropper into your house is a dumb, reckless bit of self-bugging.

To which Lennox cries, ‘yet millions, maybe soon billions of us do it!’ (p. 68) We don’t want Big Data to empower Big Brother.


Lennox’s final subject is the issue of transhumanism,[15] which has the capacity ‘to alter the nature of all succeeding generations’ (p. 109) and ‘forever change what it means to be human’ (p. 46). This is akin to Yuval Harari’s idea that we should work to ‘upgrade humans into gods, and turn Homo Sapiens into Homo Deus’ (though “think more in terms of Greek gods”)[16] (p. 87). Where does this lead? A state of being able to ‘enjoy everlasting pleasure’, as Harari would hope? On the contrary, we learn from history, literature and the ancient wisdom of the Bible that ‘pride goes before destruction, and a haughty spirit before a fall’ (Proverbs 16:18).

In That Hideous Strength, C. S. Lewis informs us that in grasping at physical immortality, ‘Man’s power over Nature is only the power of some men over other men [italics mine] with Nature as the instrument’ (p. 216). Philosopher J. Budziszewski had the same insight, believing that in the attempt to remake human nature, some men would end up ‘the absolute superiors of others’ who would ‘hold all the cards’ (p. 110). In numerous places throughout literature we come across the same idea: dystopias are brought about when absolute power is given to an elite few, or even to one man.[17] The sad reality is that historical attempts to breed any version of superhumans have often been coupled with some sort of programme to remove the ‘unclean’ (for example, the Nazi quest for the Übermensch, or the former Soviet Union’s attempts to create the ‘New Man’). But if superintelligence were attained, wouldn’t rationality alone be enough to result in moral behaviour? Not so. David Hume’s “sensible knave” shows that one might deem it reasonable to ‘strategically choose to break a moral norm at opportune moments’ and that ‘the more intelligent such persons are, the more they will want other people to follow all of the moral codes consistently, while they themselves opt to violate them when it is in their enlightened self-interest to do so.’[18] Lennox is right to sound the warning that those with transcendent ethical convictions must be involved in discussing the potential problems of AI, lest it be left to relativistic ethicists to determine the moral programming of any future AI robotics.[19] This could prove disastrous, leaving those who created the system in the first place to be held ultimately responsible (p. 144).

We conclude with the biblical response to the developments in AI technology in its various forms. Might our faith in technology be misplaced?[20] Those who have settled the matter regarding the historicity of Christ’s resurrection[21] will find Lennox’s words captivating:

What God offers is a real, indeed a spectacular, upgrade, and it is credible, since by contrast with hoped-for AI upgrades, it does not concentrate merely on technological improvements, but on the moral and spiritual side of human character …

The promises of [AI technology in its various forms] are firmly rooted in this world, and in that sense they are parochial and small compared with the mind-boggling implications of the resurrection and ascension of Jesus …

[Furthermore] The fact that God did become human is the greatest evidence of the uniqueness of human beings and of God’s commitment to embodied humanity (pp. 170-171 and p.187).

You’ll have to read Lennox’s book to better understand the wider ramifications of Christ’s resurrection, and how they go ‘way beyond anything AI could even dream of’ (p. 186). His exploration of how advances in AI bear on the biblical prophecies found in the books of Daniel and Revelation, as well as ‘the man of lawlessness’ of 2 Thessalonians 2:3, also makes for interesting reading.


Regarding future developments with AI, we’ll certainly need humility, modelled by Christ who ‘did not count equality with God a thing to be grasped’ (Philippians 2:6, ESV). This runs counter to much of the Homo Deus project, a prideful attempt to ‘snatch’ at godhood. Speaking up is likely to come with a ‘hail of opposition’, given that in this field the Christian voice is a minority one (p. 113), but as Matt Chandler has helpfully reminded us, ‘the church thrives on the margins.’[22] At its best, Christianity has always operated against the flow when bringing cultural renewal,[23] so Christians should be familiar with the call to stand apart, being salt and light even in the area of AI development. As we have seen already, much is at stake if the Christian voice remains silent. It is because of this that I hope many people will read Lennox’s richly thought-provoking book.


[1] Luc Ferry, A Brief History of Thought, (2011), p. 5.

[2] A worldview is a ‘set of fundamental beliefs through which we view the world and our calling and future in it … concepts that work together to provide a more or less coherent frame of reference for all thought and action.’ See James Sire, The Universe Next Door, (2009), pp. 18-19.

[3] Lennox argues the temptation to ‘be like God’ (Gen. 3:5) fits neatly with the Homo Deus project mentioned in 2084, (2020), p. 140. It’s a major theme throughout the book.

[4] The philosophical belief that everything arises from natural properties and causes, and supernatural or spiritual explanations are excluded or discounted.

[5] Brown clearly articulates an atheist position, but he doesn’t completely close the door to the God question. As you go through the book, it appears as if Brown’s views are conflicted, at times endorsing atheism whilst on other occasions acknowledging an intelligent designer. Lennox was both surprised and delighted to read these words from Brown’s novel: ‘When I witness the precision of mathematics, the reliability of physics, and the symmetries of the cosmos, I don’t feel like I’m observing cold science; I feel as if I’m seeing a living footprint … the shadow of some greater force that is just beyond our grasp.’ As cited in Lennox, 2084, (2020), pp. 38-39.

[6] Jeremy England is a physicist from the Massachusetts Institute of Technology (MIT). It’s his research which Brown uses in his novel.

[7] If you look at the 2018 edition on Amazon, you will see the endorsement.

[8] Lennox adds that ‘humans will never be able to make a conscious material machine.’ (p. 127). To the despair of many Star Trek fans, Data will always be a fictional character.

[9] Antony Flew, There Is A God, (2007), p. 124. On p. 128 of Flew’s book, he mentions Paul Davies who says, ‘life is more than just complex chemical reactions. The cell is also an information storing, processing and replicating system. We need to explain the origin of this information [italics mine], and the way in which the information processing machinery came to exist.’ Naturalism doesn’t do this. Professor of Chemistry, James Tour has said, “the appearance of life on earth is a mystery. We are nowhere near solving this problem,” cited in Lennox, 2084, (2020), p. 33.

[10] Lennox unpacks this argument in greater detail throughout his book.

[11] Later in the book we read that ‘the UK is planning to invest in educating 1,000 PhD’s in AI with a £1.3 billion fund set up in 2018. According to the Times Higher Education, between 2011 and 2015 China published 41,000 articles on AI, nearly twice as many as the US with 25,500 – way ahead of the rest. In 2018, MIT announced the single largest investment in computing and AI by an American academic institution: $1 billion. Also, China is investing billions of dollars in AI research.’ Lennox, 2084, (2020), p. 54.

[12] This term is more appropriately used given that all applications of AI to date are ‘characterised by a deliberately programmed competence only in a single, restricted domain.’ Ibid., p. 24.

[13] The technology has the capacity to give hearing to the deaf, sight to the blind, limbs to the limbless, and even eradicate killer diseases. Ibid., p. 145.

[14] Around 2.5 billion of the world’s population use Facebook. Ibid., p. 67.

[15] Transhumanism is “the intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.” Cited in Lennox, Ibid., p. 46.

[16] For a review of Harari’s ideas, see this superb piece from Nick Spencer.

[17] Lennox references the Well-Doer in We, Big Brother in 1984, the Head in That Hideous Strength, Prometheus in Life 3.0, and the ten World Controllers in Brave New World. These are just a few examples. We also visit the same idea in books of the Bible, with the beast in Daniel and Revelation, or the man of lawlessness in 2 Thessalonians. Cited in Lennox, Ibid., p. 215.

[18] As cited in Christian Smith’s Atheist Overreach, (2019), pp. 25-26.

[19] With ethical relativism it is difficult to reach any consensus on what morality looks like. Alternatively, we could allow AI robotics to determine their own morality, but this could still result in ‘unforeseeable and potentially horrific, even terminal consequences for humanity.’ Lennox, Ibid., p. 144.

[20] Giles Fraser in this piece references the work of philosopher John Gray, and states that ‘for many, technology and science function in today’s society very much the same way as magic once did – they both represent the fantasy that there can be some quick fix to the challenges of being human.’

[21] N. T. Wright believes the resurrection to be ‘of historical probability so high as to be virtually certain.’ See pp. 709-10 of his masterpiece, The Resurrection of the Son of God, (2017). Alternatively, see the last two chapters of Lennox’s Gunning for God. I also referenced more books on this topic in my review of Lennox’s previous book.

[22] Matt Chandler, Take Heart, (2018), p. 32.

[23] Before anyone objects, I think Chandler is right to call Christendom a mirage. See Chandler, Ibid., p. 33 for more on this.


Savvas Costi is a graduate of the London School of Theology who currently leads the Religion and Philosophy department at a secondary school in East Sussex. He did his teacher training at King’s College London. He lives with his wife and daughter.


If you enjoyed this, do share it on social media—post it on Facebook or Tweet on Twitter—possibly using the buttons on the left. Follow me on Twitter @psephizo. Like my page on Facebook.


Much of my work is done on a freelance basis. If you have valued this post, would you consider donating £1.20 a month to support the production of this blog?





Much of my work is done on a freelance basis. If you have valued this post, you can make a single or repeat donation through PayPal:

For other ways to support this ministry, visit my Support page.


Comments policy: Do engage with the subject. Please don't turn this into a private discussion board. Do challenge others in the debate; please don't attack them personally. I no longer allow anonymous comments; if there are very good reasons, you may publish under a pseudonym; otherwise please include your full name, both first and surnames.

8 thoughts on “How should Christians respond to Artificial Intelligence?”

  1. Will we be able to construct artificial life and superintelligence?

    No. There is no such thing as ‘artificial intelligence’ in the sense that those words are commonly understood; they are ‘suitcase words’, used to smuggle in vast concepts under the guise of allegedly-specific jargon.

    What gets called ‘artificial intelligence’ by the press is, really, just very complicated statistical models. Take an area where this approach has been very successful: mechanised translation. Anyone who’s used Google Translate will have been astounded at its ability to create almost-readable texts out of a foreign language. And if they don’t know what’s really going on they could be forgiven for thinking that the computer is really ‘translating’ the text.

    But, it isn’t. What’s happening is that Google has access to a very large number of parallel texts — imagine having thousands of millions of individual Rosetta stones. Then when presented with a new text to translate, it searches its database for sections which match those it has already seen; then it applies statistical techniques to work out which bits in the parallel texts of other languages are most likely to correlate with those sections, and stitches the results together.

    The actual techniques for doing this are complicated, of course, but at no point is it doing anything a human would recognise as ‘translating’: that is, understanding what the text in one language says and working out how to express that in another language (people tried writing machine translators along those lines, back before there was enough computing power in the whole world to even approach the ability to process the amount of statistical data that makes the current approach feasible, but they never worked).
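    The matching-and-stitching process described above can be sketched as a toy phrase-table lookup. This is purely illustrative — the phrases and probabilities are invented, and real statistical systems are vastly more sophisticated — but it shows the sense in which nothing here ‘understands’ either language:

    ```python
    # Toy sketch of statistical phrase-based translation: look up each
    # source phrase in a table built from previously seen parallel texts,
    # pick the most probable target phrase, and stitch the results together.
    # The phrase table below is invented for illustration only.

    phrase_table = {
        "la maison": [("the house", 0.9), ("the home", 0.1)],
        "est grande": [("is big", 0.7), ("is large", 0.3)],
    }

    def translate(sentence: str) -> str:
        words = sentence.split()
        out = []
        i = 0
        while i < len(words):
            # greedily try the longest known phrase starting at position i
            for j in range(len(words), i, -1):
                phrase = " ".join(words[i:j])
                if phrase in phrase_table:
                    # take the highest-probability target phrase
                    best = max(phrase_table[phrase], key=lambda t: t[1])[0]
                    out.append(best)
                    i = j
                    break
            else:
                out.append(words[i])  # unseen word: pass through untranslated
                i += 1
        return " ".join(out)

    print(translate("la maison est grande"))  # → the house is big
    ```

    At no point does the code represent what a house is or what bigness means; it only correlates strings it has seen before — which is the commenter’s point.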

    I recommend the following articles, and the series of which they are a part, if you’re interested in cutting through the media misunderstandings (and finding out why, for example, there aren’t going to be self-driving cars on our streets any time in the next few decades); they are long but I think very clearly and understandably written:

    http://rodneybrooks.com/forai-machine-learning-explained/

    https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

    http://rodneybrooks.com/forai-future-of-robotics-and-artificial-intelligence/

    Will humans so modify themselves that they become something else entirely, […]?

    Now that is altogether more possible.

  2. Nero was a great fan of automata. These simple machines could make a statue move its arms etc. like a ventriloquist’s doll. I expect these toys greatly impressed his guests. A.I. has the same effect today on worshippers of technology. Perhaps the bit in Revelation 13:15 alludes to such worship of ‘wonders’? —”the image could speak and cause all who refused to worship the image to be killed”

  3. I’ll be back…

    Sounds like a lot of scare-mongering. Who knows how far artificial ‘intelligence’ will go (do we even have an agreed meaning of intelligence?) but there were similar fears when mechanisation began, and none of us today could do without it for the most part (our cities wouldn’t even exist without it).

    As for the effect on jobs, well the same argument applied to all aspects of mechanisation. The pyramids no doubt took years to build, on the back of slaves. Now huge buildings are made in a few months, or a few weeks in the case of Chinese hospitals. I doubt if anyone regrets that advance.

    I’m not sure about the comparison of physical enhancement with the resurrection of Jesus and believers. To experience the latter, you have to die first!

    The red or blue pill? Time will tell.

    Peter

  4. Although Lennox himself is no expert in AI development, this doesn’t disqualify him from being able to write about it for, as he says, ‘one does not need to know how to build an autonomous vehicle or weapon in order to have an informed view about the ethics of applying such things’ (p. 10).

    Well… yes and no.

    Yes, you don’t need to know how something works in order to think about its ethical implications.

    But on the other hand if you don’t know how it works you are in danger of coming up with all sorts of impossible situations and then trying to work out the ethics of them, which, while very interesting for those of us who like the odd bit of sci-fi, isn’t much use in the real world where there will never be such things as super-intelligences. You don’t need to know how to build a TARDIS to have an informed view about the ethics of time travel, and though that might be a very interesting book, since time travel is impossible, no matter how informed the view it will never actually be relevant to the real world.

    And the biggest ethical question regarding the things which get called ‘artificial intelligences’ is the ethics of naïvely deploying fragile, fallible probabilistic systems in critical areas of infrastructure where their inevitable failures could well cost lives. Something which you will totally miss if you think that there is any chance these systems might actually be super-intelligent or even just able to work as designed.

    • Although I’ve quite enjoyed a number of Lennox’s books, he’s only an ‘expert’ in certain areas of mathematics yet he writes books covering biology, evolution and now AI. His books tend to be filled with quotes from others with minority scientific views, so I take his conclusions with a pinch of salt.

  5. I’m unconvinced that we should be accepting of the term AI at all. What’s usually meant is a powerful processing capacity guided by programming to achieve a man-determined outcome or analysis. Sure… such programming might even “self develop” to achieve its analysis in a faster mode but it is not intelligent at all. The danger is that we delegate to an essentially dumb machine decisions which are devoid of any other “inputs” other than the logical steps it is shackled to. “Computer says….”

    Back in the early 70s I worked in Telecommunications Development at BT; the brother of a colleague of mine at the Research branch was deeply into trying to design a programme that thought for itself. It ended sadly and badly for his health.

    Why would human beings trust a machine with “independence”? We might be fallen creatures but machines are entirely God-less.

  6. The story of all human ‘advance’ is that it is employed for both good and ill. The trouble is the greater the ‘advance’, the greater the potential for ill in the hands of a creature who remains a moral delinquent.

    Thanks for another full post and review. I’m not sure whether posts which begin by citing another writer are written by them or by you Ian. Perhaps I’m skimming too quickly and missing the clues.

