Episode 637: Christopher Summerfield

April 3rd, 2026

Listen to Episode on:

Watch the Unabridged Interview:


Order Books


AI and the Human Mind: Exploring Surprising Parallels

When AI tells us what we want to hear, is it acting in a rogue way, or is it emulating behavior that society clearly values? How does our ability to sleep enable us to update faster than neural networks currently can, and what will be different when they can update themselves more frequently?

Christopher Summerfield is a professor of cognitive neuroscience at Oxford University, the Research Director at the UK’s AI Safety Institute, and the author of the book These Strange New Minds: How AI Learned to Talk and What It Means.

Christopher and Greg discuss the historical split between symbolic, rule-based “rationalist” AI and data-driven “empiricist” learning, arguing that the recent success of large models vindicates the latter despite earlier skepticism. They discuss how structured behavior can emerge from messy networks, how modern models are trained with reinforcement learning to produce step-by-step reasoning, and why systems often produce solutions by writing code rather than routing to specialized tools.

*unSILOed Podcast is produced by University FM.*

Episode Quotes:

From messy brains to intelligent machines

04:40: If you look inside the brain, your brain and mine and the brains of other biological species, they're really messy. They're like really, really messy and unstructured. So nature managed to solve the problem. And so maybe that gave impetus for this movement to kind of, you know, continue to sort of plug away. And when we finally got computers big enough to process lots and lots of data, it started to take off. And the rest is history.

Hallucinations aren’t just an AI problem

34:36: How does the model know what is the kind of socially or culturally appropriate response? We're often very worried about the models, like, the models don't tell the truth and they make stuff up. But people forget that most of language is literally making stuff up. That is what you do when you open your mouth.

Is language more powerful than we thought?

32:05: The surprising thing is that language, it turns out, is sufficiently rich and expressive that if you have it in huge volumes and you process it effectively, then you can actually make a whole bunch of inferences about the world, which are surprisingly accurate. So you would think that you would need to actually experience them firsthand rather than just through hearsay, because we work like that, right? Like we rely on our senses. Of course, we rely on hearsay a little bit, and we think about what other people say, and it allows us to infer new things. But the models just have language, well, I mean now they have multimodal data, but let's take conversational agents, LLMs, and what I think has been so surprising is that language contains enough structure that you can really uncover patterns of information that you would think that you would need to see.

Show Links:

Recommended Resources:

Guest Profile:

Guest Work:

Next

Episode 636: James Hankins