Savvas Costi writes: In an interview last year Tony Blair expressed his bewilderment that none of the political parties were talking about the next technological revolution (of course, Brexit was top of the agenda at the time of the interview). But is Blair right to call this “the single biggest challenge we face”? John Lennox’s latest book, 2084: Artificial Intelligence and the Future of Humanity, thoroughly explores the research on this subject and leads us to question whether society is largely oblivious to what is going on.
In his writing, Lennox blends science with philosophy in a way that is readable and dispels conflict between science and Christian belief. Interestingly, Lennox even questions whether atheism will survive science (p. 15). We’ll explore this later.
In the preface, Lennox outlines his main aims:
This book represents an attempt to address questions of where humanity is going in terms of technological enhancement, bio-engineering, and, in particular, artificial intelligence. Will we be able to construct artificial life and superintelligence? Will humans so modify themselves that they become something else entirely, and if so, what implications do advances in AI have on our worldviews and on the God question in particular? (p. 9)
Although Lennox himself is no expert in AI development, this doesn’t disqualify him from being able to write about it for, as he says, ‘one does not need to know how to build an autonomous vehicle or weapon in order to have an informed view about the ethics of applying such things’ (p. 10). Given the potential scope of its impact, Lennox is compelled to write ‘for the thoughtful reader.’ Indeed, given that we are all affected by the rise of AI, I hope it reaches a wide audience.
One can’t help but notice the title chosen for the book and the connotations it brings. The title was actually suggested by Professor Peter Atkins prior to a university debate (p. 9) and, although this is no dystopian novel by any means, Lennox’s observations often come uncomfortably close to Orwell’s work.
What compels someone to develop artificial life and superintelligence? We are ‘insatiably curious’ (p. 11) as Lennox puts it, inquisitive about those big questions which never seem to go away. We ignore them to the detriment of living well and living freely, and Lennox recognises that ‘our responses to these questions help frame our worldview, the narrative that gives our lives their meaning’ (p. 11). A liveable philosophy is one with a strong sense of meaning and purpose, and some will find this through their work with AI. Some will even attempt to cross the frontier to reach God-like status by creating life-like machines. Moreover, we’re well aware of the secularist narrative arguing that ‘life is generated by natural processes without supernatural intervention’ (p. 31), which reinforces naturalism. Lennox recognises this as a major theme elucidated in Dan Brown’s fictional bestseller, Origin, and expresses his concern that the book may be read as popular science by many, particularly as Brown cites scientific findings against the intention of the researcher.
Dan Brown’s Origin is … from a scientific perspective, flawed from the start by making the dubious move of citing someone’s scientific research to make plausible the exact opposite of what the scientist himself thinks that it means … since Brown says he is motivated by a serious philosophical question, many people may well believe what he says, thinking that his conclusions are in tune with established science (p. 32).
Given that more people are likely to read Brown’s novel instead of Jeremy England’s research, it’s understandable why one might deem this to be misleading; the Evening Standard’s endorsement printed on the back cover of Brown’s book stating it is ‘well researched’ doesn’t help. Speaking of the novel, England himself has said, ‘there is no real science in the book to argue over,’ in a Wall Street Journal article entitled, ‘Dan Brown can’t cite me to disprove God.’
Lennox goes on to explore the views of other experts in the AI field. On the topic of artificial superintelligence one expert observes that ‘there are fundamental differences between machine intelligence and human intelligence – differences that cannot be overcome by any amount of research … “the artificial” in artificial intelligence is real’ (p. 26). Lennox also concludes that attempts to explain the origins of life in purely naturalistic terms are inadequate; as the late philosopher Antony Flew put it: ‘how can a universe of mindless matter produce beings with intrinsic ends, self-replication capabilities, and “coded [informational] chemistry?”’ Wouldn’t a better explanation be that, ‘the informational aspects of the universe, life, and consciousness ultimately point to … the existence of a non-material source for these things – the Mind of God?’ (p. 118) One can begin to see how, for Lennox, science is better sustained within the soil of Christian theism than that of naturalistic atheism. In fact, faith in God was the motor which drove the rise of modern science (p. 36).
What about the ethical implications of developing and using AI? Lennox summarises how far we’ve come:
Billions of dollars are now being invested in the development of AI systems, and not surprisingly, there is a great deal of interest in where it is all going to lead: for instance, better quality of life through digital assistance, medical innovations, and human enhancement on the one hand, and fear of job losses and Orwellian surveillance societies on the other hand (p. 13).
Navigating through these issues requires an informed level-headedness which Lennox encapsulates in his writing. ‘There are many positive developments, and there are some very alarming negative aspects that demand close ethical attention’ (p. 54). Lennox surveys the advantages and disadvantages that come with using narrow AI, devoting a whole chapter to each. Benefits such as maximising efficiency in the workforce and healthcare, as well as huge savings for the economy, are juxtaposed with the drawbacks, notably the danger should data fall into the wrong hands (p. 58) and the issue that ‘technology is developing far faster than the ethics to cope with it’ (p. 59).
There are mixed forecasts regarding the actual impact the technology is likely to have. You can read this piece from Paul Ratner to see predictions from a team of experts on which jobs AI is likely to do ‘better than humans.’ What’s most noticeable is that ‘AI should be better than humans at pretty much everything in about 45 years’ (p. 65). None of us want a future where there is ‘technological unemployment’ (p. 66). But then there’s this piece from Anmar Frangoul which says AI will create more jobs than it destroys. Where does this leave us? ‘We just don’t know with any precision how jobs will be affected, but that they will be affected is clear – they already are’ (p. 66). There are also huge issues around data that is harvested through social media outlets and “personal trackers shaped like smartphones”(!) which can be used ‘not only to inform us but to control us’ (p. 67), as is already happening in places like China. I found these words from Libby Purves regarding the prevalence of digital systems like Siri and Alexa to be most jarring:
Novelty blurs the oddity of paying to live with a vigilant inhuman spy linked to an all-too-human corporate profit centre thousands of miles away … To welcome an ill-regulated corporate eavesdropper into your house is a dumb, reckless bit of self-bugging.
To which Lennox cries, ‘yet millions, maybe soon billions of us do it!’ (p. 68) We don’t want Big Data to empower Big Brother.
Lennox’s final subject is the issue of transhumanism, which has the capacity ‘to alter the nature of all succeeding generations’ (p. 109) and ‘forever change what it means to be human’ (p. 46). This is akin to Yuval Harari’s idea that we should work to ‘upgrade humans into gods, and turn Homo Sapiens into Homo Deus’ (though we should ‘think more in terms of Greek gods’) (p. 87). Where does this lead? A state of being able to ‘enjoy everlasting pleasure’, as Harari would hope? On the contrary, we learn from history, literature and the ancient wisdom of the Bible that ‘pride goes before destruction, and a haughty spirit before a fall’ (Proverbs 16:18).
In That Hideous Strength, C. S. Lewis informs us that in grasping at physical immortality, ‘Man’s power over Nature is only the power of some men over other men [italics mine] with Nature as the instrument’ (p. 216). Philosopher J. Budziszewski had the same insight, believing that in the attempt to remake human nature, some men would end up ‘the absolute superiors of others’ who would ‘hold all the cards’ (p. 110). In numerous places throughout literature we come across the same idea: dystopias are brought about when absolute power is given to an elite few, or even to one man. The sad reality is that historical attempts to breed any version of superhumans have often been coupled with some sort of program to remove the ‘unclean’ (for example, the Nazi quest for the Übermensch, or the former Soviet Union’s attempts to create the ‘New Man’). But if superintelligence were attained, wouldn’t rationality alone be enough to produce moral behaviour? Not so: David Hume’s “sensible knave” shows that one might deem it reasonable to ‘strategically choose to break a moral norm at opportune moments’ and that ‘the more intelligent such persons are, the more they will want other people to follow all of the moral codes consistently, while they themselves opt to violate them when it is in their enlightened self-interest to do so.’ Lennox is right to sound the warning that those with transcendent ethical convictions must be involved in discussing the potential problems of AI, lest it be left to relativistic ethicists to determine the moral programming of any future AI robotics. This could prove disastrous, leaving those who created the system in the first place to be held ultimately responsible (p. 144).
We conclude with the biblical response to the developments in AI technology in its various forms. Might our faith in technology be misplaced? Those who have settled the matter regarding the historicity of Christ’s resurrection will find Lennox’s words captivating:
What God offers is a real, indeed a spectacular, upgrade, and it is credible, since by contrast with hoped-for AI upgrades, it does not concentrate merely on technological improvements, but on the moral and spiritual side of human character …
The promises of [AI technology in its various forms] are firmly rooted in this world, and in that sense they are parochial and small compared with the mind-boggling implications of the resurrection and ascension of Jesus …
[Furthermore] The fact that God did become human is the greatest evidence of the uniqueness of human beings and of God’s commitment to embodied humanity (pp. 170-171 and p.187).
You’ll have to read Lennox’s book to better understand the wider ramifications of Christ’s resurrection, and how they go ‘way beyond anything AI could even dream of’ (p. 186). His exploration of how advances in AI bear on biblical prophecies found in the books of Daniel and Revelation, as well as ‘the man of lawlessness’ of 2 Thessalonians 2:3, also makes for interesting reading.
Regarding future developments with AI, we’ll certainly need humility, modelled by Christ who ‘did not count equality with God a thing to be grasped’ (Philippians 2:6, ESV). This runs counter to much of the Homo Deus project, which is a prideful attempt to ‘snatch’ at godhood. Standing against it is likely to come with a ‘hail of opposition’, given that in this field the Christian voice is a minority one (p. 113), but as Matt Chandler has helpfully reminded us, ‘the church thrives on the margins.’ At its best, Christianity has always operated against the flow when bringing cultural renewal, so Christians should be familiar with the call to stand apart, being salt and light even in the area of AI development. As we have seen already, much is at stake if the Christian voice remains silent. It is because of this that I hope many people will read Lennox’s richly thought-provoking book.
 Luc Ferry, A Brief History of Thought, (2011), p. 5.
 A worldview is a ‘set of fundamental beliefs through which we view the world and our calling and future in it … concepts that work together to provide a more or less coherent frame of reference for all thought and action.’ See James Sire, The Universe Next Door, (2009), pp. 18-19.
 Brown clearly articulates an atheist position, but he doesn’t completely close the door to the God question. As you go through the book, it appears as if Brown’s views are conflicted, at times endorsing atheism whilst on other occasions acknowledging an intelligent designer. Lennox was both surprised and delighted to read these words from Brown’s novel: ‘When I witness the precision of mathematics, the reliability of physics, and the symmetries of the cosmos, I don’t feel like I’m observing cold science; I feel as if I’m seeing a living footprint … the shadow of some greater force that is just beyond our grasp.’ As cited in Lennox, 2084, (2020), pp. 38-39.
 If you look at the 2018 edition on Amazon, you will see the endorsement.
 Antony Flew, There Is A God, (2007), p. 124. On p. 128 of Flew’s book, he mentions Paul Davies who says, ‘life is more than just complex chemical reactions. The cell is also an information storing, processing and replicating system. We need to explain the origin of this information [italics mine], and the way in which the information processing machinery came to exist.’ Naturalism doesn’t do this. Professor of Chemistry, James Tour has said, “the appearance of life on earth is a mystery. We are nowhere near solving this problem,” cited in Lennox, 2084, (2020), p. 33.
 Later in the book we read that ‘the UK is planning to invest in educating 1,000 PhD’s in AI with a £1.3 billion fund set up in 2018. According to the Times Higher Education, between 2011 and 2015 China published 41,000 articles on AI, nearly twice as many as the US with 25,500 – way ahead of the rest. In 2018, MIT announced the single largest investment in computing and AI by an American academic institution: $1 billion. Also, China is investing billions of dollars in AI research.’ Lennox, 2084, (2020), p. 54.
 Transhumanism is “the intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.” Cited in Lennox, Ibid., p. 46.
 For a review of Harari’s ideas, see this superb piece from Nick Spencer.
 Lennox references the Well-Doer in We, Big Brother in 1984, the Head in That Hideous Strength, Prometheus in Life 3.0, and the ten World Controllers in Brave New World. These are just a few examples. We also visit the same idea in books of the Bible with the beast in Daniel and Revelation, or the man of lawlessness in 2 Thessalonians. Cited in Lennox, Ibid., p. 215.
 As cited in Christian Smith’s Atheist Overreach, (2019), pp. 25-26.
 With ethical relativism it is difficult to reach any consensus on what morality looks like. Alternatively, we could allow AI robotics to determine their own morality, but this could still result in ‘unforeseeable and potentially horrific, even terminal consequences for humanity.’ Lennox, Ibid., p. 144.
 Giles Fraser in this piece references the work of philosopher John Gray, and states that ‘for many, technology and science function in today’s society very much the same way as magic once did – they both represent the fantasy that there can be some quick fix to the challenges of being human.’
 N. T. Wright believes the resurrection to be ‘of historical probability so high as to be virtually certain.’ See pp. 709-10 of his masterpiece, The Resurrection of the Son of God, (2017). Alternatively, see the last two chapters of Lennox’s Gunning for God book. I also referenced more books on this topic in my review of Lennox’s previous book.
 Matt Chandler, Take Heart, (2018), p. 32.
Savvas Costi is a graduate from the London School of Theology who currently leads the Religion and Philosophy department at a secondary school in East Sussex. He did his teacher training at King’s College London. He lives with his wife and daughter.
Much of my work is done on a freelance basis. If you have valued this post, would you consider donating £1.20 a month to support the production of this blog?