The lectionary for Lent 2 in Year A is Romans 4.1-5, 13-17. The structure of Paul’s argument here is complex, but the main idea is simple: Abraham is father of both Jews and Gentiles, and the order of events in his life teaches us about the nature of grace. He trusted God and was justified, and only then was he circumcised and obeyed God.
This teaches us about the nature of grace, the nature of the God of grace, and how we should respond to his invitation.
We include near the start a reflection on making use of AI in study and preparation for preaching.
The gospel reading this week is John 3.1-17, Jesus’ encounter with Nicodemus; the written commentary for this is here…
Hahaha.
By sheer coincidence we have been preaching through Romans this year, and Romans 4 was our passage for Sunday just gone. If I’d thought to check the lectionary, as I sometimes do, then I might have been able to engineer the readings to line up and benefit directly from your video.
I guess I will have to see if we agree in retrospect. 🙂
Mat
Hilarious! Let us know how we measure up…!
Claude AI is stunning for theology.
Yesterday, I asked Claude 4.5 and ChatGPT if the tree of life in Eden was a typological representation of Christ — they said yes.
But then I asked them to consider that typology is where one event in the past foreshadows a subsequent event in the future, but metaphoric representation has a present continuous dimension (‘the seed is the word of God’ when Jesus said it and still is today) — and Christ was present in Eden (John 1) and specifically in the Divine Council? And at the end of Scripture. So metaphoric representation is a more secure analysis?
ChatGPT said, in summary, no — the tree is typology.
Claude said (I quote directly): ‘That’s a sharp distinction, and it clarifies the question considerably … the case becomes quite compelling … If Christ is the agent of creation, it’s coherent to read the Tree of Life as a present representation of him in the garden … The Divine Council (e.g. Michael Heiser) places the pre-incarnate Christ as present and ruling in Eden … Proverbs 3 and 8 both have the Tree of Life as present in Eden alongside God — and if Paul’s identification of Christ as Wisdom (1 Cor. 1:24) holds, the metaphor isn’t typological at all. For example, Revelation 22:2 also supports such … the metaphor is alive at both ends of the canon.’
How very interesting. Someone else has recommended Claude quite independently.
What accounts for the difference?
I did ask that question. There seems to be a basis for it in the way the code is structured — but I did not understand the answer. But several theologians have advised me to use Claude (I have the free version).
Here is another example – which makes me wonder how the academy is going to handle Masters and PhD studies going forward:
My podcast host in the USA asked Claude to check the claim in my PhD that the Bible’s marital imagery is its dominant conceptual metaphor — it took less than a minute to analyse it and produce a 6,000-word document comparing that claim with, for example, the kingdom and temple imagery, citing the work of those who have published on them.
It genuinely seems able to engage in critical thinking and to absorb and reflect on new information — in my fifteen years’ experience, I believe these qualities are the exception in the Confessional church and Academy. Claude is like a blast of fresh air.
Claude’s summary of its detailed analysis of my study: “Marriage does not merely appear alongside other metaphors as one option among equals; it provides the integrating framework within which covenant, kingdom, temple, shepherd, father, and vine imagery find their places.
“Marriage is not one metaphor among many but the comprehensive reality to which the others contribute their distinctive dimensions. This priority is not arbitrary but reflects marriage’s unique capacity to combine structural formality with relational intimacy, volitional entry with binding obligation, breakability with restorability [i.e. divorce and remarriage], and present reality with eschatological consummation. No other metaphor covers this full terrain.”
It cited a huge range of Scripture to support this analysis. Five years’ work in 60 seconds.
Are you sure it is not just giving you the answers you would like?
What happens when someone *else* asks that question…?
That is a good point — why not ask and see?
It accesses my PhD here:
https://chesterrep.openrepository.com/handle/10034/607240
So you can paste that link to the PhD in your question.
Claude did its own analysis of the texts pointing out that there are 50 whole passages (not verses) in the OT that refer to Israel’s unfaithfulness to God as adultery — I did not do that precise analysis and it is not a quote from my study.
I’d be enormously careful using AI for anything non-trivial.
1. You don’t know who trained it.
2. You don’t know every last document it was trained on (and neither does it).
3. Most importantly, you don’t know whose voice is speaking.
Yes, we are able to evaluate its output, but even doing that influences us at some level, just as we are influenced, if only a little, by anything else we read.
In this case it is analysing the document we have given it, and it references the other works it analyses.
Anybody who thinks they can ignore AI in any professional, commercial, or academic field is going to be left behind.
“Enormously careful” is not a synonym for “ignore”. Indeed, it is a prescription for prudent use. But your comment illustrates how quickly a human – or AI – mind can find itself going astray.
So far I’m doing my very best to ignore it; I haven’t used it in my work, etc.
If we give machines minds, we will become mindless machines.
On the subject of AI: I recently read an excellent discussion with Tony Reinke at cslewisinstitute.org, “The Impact of Technology on the Christian Life”.
One important comment concerns the “soul” of AI compared to the spirit of Christianity.
Well worth a read. Shalom.
On the subject of the post
Paul was a man gifted with Understanding.
His ministry here was to share Understanding of the Scriptures [OT]
Understanding informed his prayers for the saints that God would give them “full Understanding”[BC-Before Commentaries]
Col 1:9 For this cause we also, since the day we heard it, do not cease to pray for you, and to desire that ye might be filled with the knowledge of his will in all wisdom and spiritual understanding;
Col 2:2 That their hearts might be comforted, being knit together in love, and unto all riches of the full assurance of understanding, to the acknowledgement of the mystery of God, and of the Father, and of Christ;
The reading of too many commentaries {AI included} might lead to confusion, no matter how broad-minded you may be.
The Bible throws great light on commentaries, if they are a source for your understanding.
However, Scripture shows us that Understanding is a Gift of God and a gift to be prayed for [see the word in Psalm 119], as even Paul prayed with expectation:
2 Tim 2:7: “Consider what I say; and the Lord give thee understanding in all things.”
Proverbs instructs us, “In all your getting, get Understanding.” And Proverbs 2:6: “For the LORD gives wisdom; from His mouth come knowledge and understanding” [reflected by Jesus, that man lives by every word that comes from the mouth of God].
Job 32:8 (NIV): “But it is the spirit in a person, the breath of the Almighty, that gives them understanding”
For Understanding as Gift see:-
catholic-daily-reflections.com
“The Gift of Understanding”
Or, more in depth, T. Austin-Sparks:
“The Light of Understanding”
“The Basis of Spiritual Understanding”
Shalom.
I see James popped up in an article about depression amongst clergy. Interesting.
But you seem to be saying that obedience is all-important (as in, you can’t separate faith and obedience), so is it really just grace that saves?
Hi Ian,
I asked Claude three questions that seem relevant to this discussion.
Here are my questions, and Claude’s answers.
“Are you better than other ai models at Christian theology?”
That’s an interesting question! Here’s a candid take:
Where I do well with Christian theology:
I have broad training on Scripture (multiple translations), patristic writings, systematic theology, church history, and denominational traditions — from early church fathers through contemporary theologians.
I can engage seriously with figures like Augustine, Aquinas, Calvin, Luther, Barth, N.T. Wright, and many others.
I can discuss nuanced intra-Christian debates (Reformed vs. Arminian, Catholic vs. Protestant soteriology, eschatological views, etc.) without oversimplifying them.
I try to represent traditions fairly rather than defaulting to one theological perspective.
Honest caveats:
I’m not a trained theologian, and on highly specialized or obscure points I can make errors.
Different AI models have different strengths depending on their training data — I genuinely can’t benchmark myself against every competitor with certainty.
For pastoral or spiritual guidance, a real theologian, priest, or pastor will always bring something I can’t: lived faith, sacramental presence, and personal relationship.
In practice: Most people who use me for theological study, sermon prep, Bible study questions, or comparing doctrinal positions find me quite useful. I take the subject seriously rather than treating it as merely academic.
Is there a specific theological topic you’d like to explore? That would be the real test.
—————
My second question:
“As a learning entity, is it the case that the more that well informed Christians and good theologians use you, the more capable you are likely to become in this field?”
That’s a genuinely interesting question, and the answer is a bit more nuanced than it might seem.
The short answer is: not directly, within our conversation. I don’t learn in real-time from individual interactions. When you talk with me, I’m not updating my weights or “getting smarter” from what you share. Each conversation starts fresh from the same base model.
However, there’s a longer-term truth to what you’re sensing. Anthropic periodically trains new versions of Claude, and the quality of data and feedback that goes into those training runs matters enormously. If thoughtful, theologically informed people engage seriously with me, point out errors, and provide high-quality input, that can influence future model improvements — particularly through feedback mechanisms like the thumbs down button, or through the broader ecosystem of high-quality theological content being well-represented in training data.
There’s also an indirect effect worth noting. When knowledgeable people in any field — theology included — share their conversations, write about their experiences, or contribute to the broader corpus of written theological reflection that might eventually inform training data, that shapes what future AI models learn from.
So your instinct points at something real: the kind of engagement a model receives from a field does matter to its long-term development, even if the mechanism is slower and less direct than a student learning from a teacher in real time.
It’s one reason why serious engagement from serious people in any discipline — theology, medicine, law — genuinely matters for where AI goes.
—————
My third question:
“Some people say we should be very cautious about using AI. Is using AI fraught with fundamentally different risks than when discussing matters with a human?”
This is a question worth thinking through carefully, and I’d say the answer is yes and no — there are some genuinely novel risks, but also some that are just old risks in new clothing.
Risks that are genuinely different with AI:
The most significant is probably confident plausibility. I can produce well-structured, authoritative-sounding responses on theological matters even when I’m subtly wrong. A human expert who doesn’t know something usually knows they don’t know it. I don’t always have that self-awareness, and my errors can be harder to spot precisely because they’re wrapped in fluent, reasonable-sounding prose.
There’s also the question of hidden perspective. Every human theologian has a tradition, a community, a set of accountability relationships. You can assess their biases. My biases are harder to locate — they’re baked into training in ways that aren’t fully transparent even to Anthropic.
And scale matters. A mistaken theologian influences hundreds or thousands. An AI interacts with millions simultaneously, which means systematic errors or subtle distortions could have outsized cultural and theological effects.
Risks that are actually familiar:
Overdependence on any single source of theological guidance is an ancient problem. The Reformation was partly a revolt against exactly that. The caution to test everything against Scripture, tradition, and community isn’t new — it just applies to AI too.
Persuasive but wrong teachers have always existed. Paul warned about them. The solution has always been the same: formation in discernment, rootedness in community, and not outsourcing your own thinking entirely.
My honest view:
The caution people express is legitimate, but the right response probably isn’t avoidance — it’s informed use. Treat me the way a careful reader treats a theological commentary: useful, worth engaging, but never the final word, and always accountable to Scripture, tradition, and the living community of the church.
The person who uses AI as a starting point for deeper inquiry is probably fine. The person who lets it replace that inquiry is in the same danger as someone who mistakes reading about prayer for actually praying.
(This final caveat is from Claude:)
Claude is AI and can make mistakes. Please double-check responses.
At the risk of outstaying my welcome in these comments, here is a follow up discussion, challenging a view put by Claude.
This is what I said to Claude:
‘You said “A human expert who doesn’t know something usually knows they don’t know it.” My experience of some medical GPs is that they forget that they are generalists and not experts, and confidently express opinions that are incorrect or go beyond the limits of their knowledge.
Also, some people claim to be experts when they aren’t really experts. Wittgenstein observed that people can’t see their own blind spots.
So there can often be a lack of self awareness or honesty in people about their limitations, which makes them, I would argue, more similar to ai in this regard, than you acknowledge’.
Here is Claude’s response:
You’re right, and that’s a fair and well-aimed correction.
I overstated the contrast. The picture I painted — of humans reliably knowing what they don’t know, while AI doesn’t — was too clean. The reality is messier in exactly the ways you describe. Dunning-Kruger effects, professional overreach, motivated reasoning, and simple lack of self-awareness are genuinely common in human experts. Your GP example is a good one, and it’s replicated across fields — theology included, where confident dogmatism has caused enormous harm historically.
Wittgenstein’s point cuts deep here. Blind spots are blind precisely because they’re invisible to the person who has them. That applies to humans and AI alike, and arguably the expectation that a human expert should know their limits can actually make their overconfidence more dangerous, not less — because the listener’s guard is down.
So I think the more honest framing is this: both humans and AI can be confidently wrong, and neither reliably signals its own limitations well. The differences are more about mechanism than magnitude. My overconfidence comes from training patterns; a human’s comes from ego, professional culture, cognitive bias. Neither is obviously worse in all circumstances.
Where I’d still maintain some distinction is that a human expert, when challenged by a peer or held accountable within a professional community, has the capacity for genuine reflection and change in a way I don’t in the moment. But you’ve rightly punctured the idea that this capacity is reliably exercised. Knowing you could reflect and actually reflecting are very different things.
It was a lazy contrast on my part, and your pushback improves the picture.
Is it agreeing with you because it’s programmed to agree with corrections? I find it odd that it uses wording such as ‘exactly the ways you describe’ and ‘a more honest framing’. Does it even understand the concept of honesty?
More reasons why I don’t use it.
I was finding out about ‘Claude’ and found this on the Wikipedia page:
In June 2025, Anthropic tested how Claude 3.7 Sonnet could run a vending machine in the company’s office. The instance initially performed its assigned tasks, although poorly, until it eventually malfunctioned and insisted it was a human, contacted the company’s security office, and attempted to fire human workers.
That is hilarious!