Related Readings
  • Answers for Aristotle: How Science and Philosophy Can Lead Us to a More Meaningful Life
    by Massimo Pigliucci
  • Nonsense on Stilts: How to Tell Science from Bunk
    by Massimo Pigliucci
  • Denying Evolution: Creationism, Scientism, and the Nature of Science
    by Massimo Pigliucci

RS95 - Gerard O'Brien On the Computational Theory of Mind

Release date: October 27, 2013

Is the mind a kind of computer? This episode of Rationally Speaking features philosopher Gerard O'Brien from the University of Adelaide, who specializes in the philosophy of mind. Gerard, Julia, and Massimo discuss the computational theory of mind and what it implies about consciousness, intelligence, and the possibility of uploading people onto computers.

Gerard's pick: "Alan Turing: The Enigma (The Centenary Edition)"



Reader Comments (16)

Extremely interesting discussion. I would be most curious to explore the last part more clearly, i.e. what kind of physical "computers" might be capable of reifying a human-like consciousness (I don't think "simulate" is at all an appropriate term in this formulation). For instance, must the substrate in essence be neurobiological, with neurons and glial cells--in other words, must the "computer" be a brain in a jar? Or is it sufficient to maintain the internal chemistry of neurons, while conducting the nerve impulses electronically?

In any case, great episode!

October 27, 2013 | Unregistered CommenterBjörn Carlsten

There seems to be some confusion about the specific meaning of the term "analog" as it is used here, and unfortunately what exactly is meant by it was not explored in much detail.

In the beginning it was said that some aspects of biological systems can be considered digital at a higher level, but if you look closer they are analog, and it seems that this was meant to provide a contrast with digital computers. However, the reality is that at the lowest level, the "digital" computers that we use today are all analog too, or at least they are as analog as the biological systems (i.e. we are ignoring quantum mechanics in both cases). In digital computers, we assign certain (analog) voltages as thresholds between zeroes and ones and interpret everything at the higher levels in terms of digital bits, but physically it is all analog. So this kind of distinction between "digital" computers and those specific biological systems that can be seen as digital at a higher level does not make sense. Note: I'm not saying that all biology can be considered digital, only certain very specific (sub)systems.
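To make the thresholding point concrete, here is a toy sketch (purely illustrative; the 1.5 V threshold and the voltage readings are arbitrary assumptions, not anything from the episode):

```python
# Toy illustration: "digital" bits are just thresholded analog voltages.
# The 1.5 V threshold is an arbitrary engineering convention, not a physical fact.

def to_bit(voltage: float, threshold: float = 1.5) -> int:
    """Interpret a physically continuous voltage as a digital 0 or 1."""
    return 1 if voltage >= threshold else 0

# Physically these are all continuous values; only our interpretation is digital.
measured_voltages = [0.1, 0.4, 2.9, 3.2, 0.2, 3.1]
bits = [to_bit(v) for v in measured_voltages]
print(bits)  # -> [0, 0, 1, 1, 0, 1]
```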

Further, when we talk about the difference between "digital computers" and "analog computers" from an engineering perspective, the difference is solely in terms of efficiency. That is, in this context, everything that an analog computer can do can also be done by a digital computer. The big difference in this context is that in many cases the corresponding digital computer will be much more expensive (in terms of energy and other resources used), which is why analog computers can be more appropriate in those cases. However, in this discussion it is implied that the philosophical difference between digital and analog computers is more fundamental than the issue of cost-effectiveness.
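To illustrate what digital emulation of analog dynamics amounts to, a rough sketch (illustrative only; the leaky-integrator equation stands in for arbitrary analog dynamics, and the step size is an arbitrary choice):

```python
import math

# Illustrative sketch: a digital machine emulating "analog" dynamics
# dV/dt = (I - V) / tau by taking many small discrete steps (forward Euler).
# An analog circuit would realise this in continuous time; digitally we pay
# for the approximation with many explicit updates.

def digital_leaky_integrator(current: float, tau: float, t_end: float, dt: float) -> float:
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (current - v) / tau  # discrete update approximating the ODE
    return v

exact = 1.0 * (1.0 - math.exp(-2.0 / 0.5))                    # closed-form value at t = 2
approx = digital_leaky_integrator(1.0, 0.5, t_end=2.0, dt=0.001)
print(f"analog (exact): {exact:.6f}  digital approximation: {approx:.6f}")
```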

So, given that all physical systems being considered here are ultimately analog (again, ignoring quantum mechanics), and that cost is not the issue, then what exactly is it that an analog computer can do that a digital one cannot, in the context of the discussion here?

October 27, 2013 | Unregistered CommenterInfinitely Improbable

I enjoyed the discussion, but I would have liked to hear more about O'Brien's views, and less on O'Brien responding to Massimo's views.

As the preceding poster notes, it would have been good to hear a lot more about what O'Brien means by analog computation in the brain and how it relates to intelligence and consciousness (and why emulation of analog computing on a digital computer does not count).

Massimo has a blog where he can detail his views. For these podcasts, I think it would be better for him to be an interviewer whose goal is to get the guest to do most of the speaking.

October 27, 2013 | Unregistered Commenteranon

Gerard, did you get the PDF of my new book, sent to you and other colleagues of yours at Adelaide earlier this year? It solves your issues, and if you didn't open it or read it, you can access it for free at my site.

Computation is a fashionable term with little relevance to the mind. We have no understanding of how the brain creates the experience of awareness from neural flow, and so we look for analogies, and the obvious one is a computer. The obvious can often be a red herring.

The issue is much deeper, and very different from simple digital or analog function. It is a continual neural flow around the anatomy, in the course of which we must centralise our functional diversities (sight, sound, touch etc) in the brain for those specific qualities to be experienced along with an ethereal attendant - thought - that is their integrated level. Their commonalities are split and then unified into workable schemes of thought that encompass all functions. Thought directs attention to different combinations of those diverse functions according to their priorities in the moment, as functions change sequentially from moment to moment.

Where does computation fit into the automatic splitting and unification of diverse inputs (sight, sound, touch etc) as thought? It doesn't. It is a chemical flow of current to excite diverse capacities. It has meaning from its integration in the moment, and there are no symbols or computational language involved. The 'symbols' are real chemical neurons simply splitting and integrating their real capacities for finalization in the brain when current is sufficient. Then the flow continues to adaptive outputs.

Representation is a relevant term, but that is not 'symbolic'. It is real, from real neuronal capacities and events that represent anatomical functions in interface with the world (seeing a dog, hearing it, touching it, etc). We collect the relevant stimulation by receptors and merely order it by splitting into commonalities and unifying those commonalities as thought about the dog. We have a real representation of a dog, not a symbolic one. As the dog moves, so does our representation of it - this is not merely a manipulation of symbols, it is a manipulation of realities detected by receptors.

There is more to the story, so you will have to read my book to learn the correct perspective.

October 28, 2013 | Unregistered CommenterMarcus Morgan

Sorry, the correct site name is

I should add that Searle's Chinese Room is a good test for one's intelligence, as well as supposedly being a good model to sort out what intelligence might involve. Dennett along with every man and his dog has waffled about it, but it's pure symbol manipulation. How can an 'experiment' relying on nothing but symbol manipulation be a model for the mind? Are we nothing but symbol manipulators, or is symbol manipulation one small aspect of mind? The Searle room is an extreme and limited example of 'computation' and can simply be discarded.

We might have some kind of 'matching' along with other capacities, indeed 'matching' is such a common idea that it is hard to imagine there is not some kind of matching involved - but real neuronal signals and not 'symbols'. Our real neuronal signals might match with others in their commonalities across vision, sound, and touch of the same dog, to assemble an integrated thought about it. Pattern matching perhaps - but not 'symbolic' or computational.

Computation encompasses many basic ideas and functions such as matching different things - but confined to symbols. We are real. Forget about these vague conflations, inflations, and whatever philosophers substitute for reasoning, and get to the point of neural capacities in coordination.

October 28, 2013 | Unregistered CommenterMarcus Morgan

Quantum theory suggests that everything is digital at the most fundamental level.

October 29, 2013 | Unregistered CommenterJohn Moore

Gerard's assertion that the function of the brain cannot be entirely digital is almost certainly true. I must, however, take exception to his claim that only analog computers would be able to simulate the brain. As pointed out in previous posts, digital computers are very good at simulating analog systems with a high degree of accuracy. The mind appears to be a robust emergent property of the chemically noisy brain, so the continuous states in a mind simulation are unlikely to require particularly high resolution.
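A minimal sketch of that resolution point (purely illustrative; the decay-plus-noise model and the three-decimal rounding are arbitrary assumptions, not anything from the episode):

```python
import random

# Simulate a noisy continuous variable twice: once in full float precision and
# once with the state rounded to three decimal places after every step. With
# noise this large, the coarse version tracks the fine one to within the noise.

def noisy_decay(steps: int, quantize: bool, seed: int = 42) -> float:
    rng = random.Random(seed)
    x = 1.0
    for _ in range(steps):
        x += -0.01 * x + rng.gauss(0.0, 0.005)  # slow decay plus chemical-style noise
        if quantize:
            x = round(x, 3)                      # coarse, low-resolution state
    return x

print(noisy_decay(1000, quantize=False))
print(noisy_decay(1000, quantize=True))  # differs only down at the noise level
```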

Every general purpose digital computer ever made has been Turing complete. This means that if any digital computer can simulate the brain well enough to produce a mind, then they all can given sufficient memory and time.

November 2, 2013 | Unregistered CommenterAlan

I must be missing the point or something. We cannot currently define awareness in mechanical terms - we have no idea how the subjective experience is created as a signal finalization in the brain. Equating neural signals to computable variables is a pipe dream.

You can propose a weak hypothesis at the very limits of logic: that everything is digital or analog, or both, and therefore "computable", including the unknown mechanical process of neural finalizations, but why put the cart before the horse? We don't currently know the mechanism, and all you are saying is that you hope you can do it, because you hope that thinking about the mechanism at the most basic level of digitization might reveal what it is, as we assume digitization applies generally in nature.

Fine, then, start working on the neural mechanism for finalization at the digital or analog level. I would be intrigued to see your analysis of neurons as creators of awareness at that level. But, where is it? All I read here is about the weak hypothesis and whether "computation" explains mind. I see no-one actually doing the work here or making proposals about actual neurons on that level.

Why not get over the fact that the weak hypothesis exists - that there is some kind of mechanical digitization at the very basics of nature - and that it is an open possibility. It is as open as Laplace was open from Newton's mechanical view, and about as useful until someone actually does some work.

To me, it's just quite idle to chatter with no insight into: 1. the digital level of nature relevant to humans; 2. neural finalizations of awareness using that digital level; or 3. how on earth you are going to get a subjective experience from a computer unless you are Laplace's demon.

Remember, we only know awareness, and the use of intelligence within it, as a subjective ongoing experience. That adds to the difficulty of any explanation, although probably not beyond a Laplacian demon. Think about it and try to work on these issues rather than quite pointless argument about whether the weak hypothesis exists. Prove the hypothesis, or don't.

I think the best place to start is with the view I present in my free book at my site, and use it to prove your hypothesis.

November 3, 2013 | Unregistered CommenterMarcus Morgan

As others have suggested, since we really know so little about consciousness, making claims about what's required to produce it seems very premature.

November 3, 2013 | Unregistered CommenterGreg Esres


I agree that a digital computer can simulate a brain and, for example, predict whether a real brain would feel pain, but I doubt that the digital simulation itself, which amounts to a sequence of simple calculations, would actually feel pain. In the same way, a digital computer simulation of photosynthesis can predict the process of photosynthesis, but it wouldn't actually do photosynthesis, because photosynthesis is a physical process, as is the sensation of pain.

November 4, 2013 | Unregistered CommenterMax

Here's a summary of the blog post about this episode that I just linked to.

I enjoyed the podcast. A few points of disagreement however:

1. It's not productive to assume that AI has a single goal. Particularly not that it is merely to construct artificially intelligent systems. There's a broader (more scientifically interesting) research program to study the space of possible minds, including human minds.

2. Understanding understanding does not hinge on understanding “consciousness”. Carl Bereiter has persuasively argued that understanding is a relational concept: i.e., it has to do with the relation between a knower and knowledge. That's in Education and Mind in the Knowledge Age. (Also in my Cognitive Productivity book).

3. The hoary debates about digital vs. analogical computing, classical and connectionist AI felt long in the tooth soon after they started over 20 years ago. It's time to move on.

I expand upon these points in:

The blog post references a number of relevant articles and books.

November 5, 2013 | Unregistered CommenterLuc P. Beaudoin

Interesting level of confidence, Luc. Given that "understanding" only arises as a subjective experience by unknown neural processes, I'm intrigued at how you are going to obviate that process from the concept of "understanding". There are superficial analogies, but that's all. The crux is neural activity, which you do not appear to "understand".

I suppose there are ways of looking at understanding, as a relational concept between knower and knowledge - that's probably consistent with Chomsky and undeniably a knower constructs their own awareness by their own anatomy and so the knower must 'relate' directly to what the knower knows - but so what? That's logical, but goes nowhere. How does the knower construct his knowledge? By what neural process? Back to square 1.

Who knows whether the digital / analog debate will bear fruit. I don't, but I reckon that the starting place is with understanding neurons, which none of the above posts show. Then hopefully find parallels to digital or analog processes from that neural understanding. Otherwise it's a Laplacian pipe dream. I would not "move on" from digital / analog debates, except into neural debates, of which there are none in this thread!

November 6, 2013 | Unregistered CommenterMarcus Morgan

Thanks for your reply, Marcus.

> I'm intrigued at how you are going to obviate that process from the concept of "understanding".
> How does the knower construct his knowledge? By what neural process?

My comments on understanding were about the _concept_ of understanding, not the etiology of instances of understanding. Of course, to explain instances of understanding, one needs a computational theory. However, one does not need to understand how understanding arises in order to understand what it _means_ to understand something. Similarly, one does not need to understand a car's engine in order to know that it is rolling faster than another one, and in some sense that it is not capable of rolling faster than another. The concept of understanding that Bereiter and I use falls under the general category of the Intentional Stance. Cf. Dennett's book by that name.

As another example, consider the separation of concerns in the TCP/IP stack (or other stacked systems). You can fully understand one level of the stack, and define concepts at that level, without understanding lower levels. You're reading this message. You know what it means to send it. But unless you have studied network protocols, you probably don't understand what happens at a lower level.
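A rough sketch of that separation of concerns (a toy example with made-up function names, not real networking code):

```python
# The "application layer" here can be understood and used entirely on its own terms,
# without knowing how the "transport layer" beneath it actually moves bytes around.

def transport_send(payload: bytes) -> None:
    """Lower layer: how the bytes travel is irrelevant to callers above."""
    # ...could be TCP sockets, radio, carrier pigeons; callers never need to know...
    print(f"[transport] delivering {len(payload)} bytes")

def send_message(text: str) -> None:
    """Upper layer: defined purely in terms of 'sending a message'."""
    transport_send(text.encode("utf-8"))

send_message("You can reason at this level without reading the layer below.")
```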

I do believe there needs to be a design-based (computational) theory of contributors to understanding. Neuroscience can contribute to this.

> I reckon that the starting place is with understanding neurons,

There's no starting place in cognitive science. We need to tackle the problems from all angles. Cognitive science is a multi-disciplinary and multi-method approach.

My main problem, however, is with Massimo's claim that "My definition of understanding involves consciousness". It's not parsimonious to define or _essentially characterize_ a concept like understanding in terms of consciousness. Proof? Take a look at Bereiter's characterization in his book or the one I put forth in my book. No appeal to consciousness is required, yet Bereiter's concept does a lot of work. (The principle of parsimony is a basic one in science, of course.)

November 6, 2013 | Unregistered CommenterLuc P. Beaudoin

Before I get hammered for my car example, let me put it differently (because judging the capability of a car does in fact sometimes require knowing something about what happens under the hood).

You don't need to understand anything about how engines work in order to be able to apply the concept of speed to a car, or to use the concept "speed of this car" effectively.

So, we can make judgments of understanding of ourselves and of other people without reference to mental or neural mechanisms.

Incidentally, about the so-called neural layer: as Seth Grant (a neuroscientist) has argued, we're not just dealing with one layer at the neural "layer". Moreover, the mind itself can be thought of as a hierarchy of virtual machines. I explain what I mean by that in Ch. 1 of my book. Yes, science requires reduction, but the best explanation is not always the lower-level one.

The concept of layering is absolutely critical to understanding cognitive science/AI. But it's something that is all too often misunderstood. It helps to have studied TCP/IP and programming systems that have stacks in them to understand this. Computer science invented layering as we know it. But evolution invented layering as the mind uses it.

November 6, 2013 | Unregistered CommenterLuc P. Beaudoin

Thanks for the clarification, but you are reaching for a lower level of understanding what understanding means. Unfortunately, understanding is a personal construct by individuals, by their anatomical and neural processes. Clearly you can define understanding, and understand that definition and its general application to humans and nature, but that is again a human construct accepted or not by individuals. I would never delimit too much the varied types of understanding humans and other animals might have.

Unfortunately, about the only thing I agree with Massimo about is that understanding is a subjective construct by an individual using neural processes. He probably copied that from me because I hammered him with it. You would gain a lot by reading my free book at my site. I appreciate that you may have also written a book, but is it freely accessible?

If you read my book you will find a deeper and more consistent analysis of neurons - back to neurons - as facilitators of awareness, and what the capacity to understand amounts to. It is from comprehensive ordered anatomical capacities represented by neurons. Neurons simply facilitate awareness in the course of enablement of functions (seeing, hearing, touching etc).

Real progress is made in books like mine that take a novel approach and deconstruct using the realities of anatomy - not abstract computational theories. Perhaps neurons have digital and other properties, but no one here, or anywhere I have read, has simply stated how neurons achieve what they do. It's a pipe dream using secondary levels of understanding what understanding might be.

I haven't seen how you deconstruct neurons for their properties of understanding, but if you can do it rather than hope to do it, please provide some clue. My book provides just about everything else you need to know -- but the experience itself as an immediate formation from neural excitation awaits neuroscience breakthroughs, not abstract analogies with computation.

Nevertheless I wish you luck, reading my book will help, but unless you are a neuroscientist, you probably can't help me advance beyond what I have written in my book.


November 6, 2013 | Unregistered CommenterMarcus Morgan

I've just listened to this podcast, quite a while after it was recorded. I'd like to put my bit into the argument. I'm pretty convinced by Searle's arguments, taken as a whole – not just the Chinese Room (CR). The CR is usually the only argument of Searle's against computationalism that gets discussed in this context, and then he gets accused (e.g. by Dennett) of being a one-hit wonder.
I'd really suggest that anyone interested in this reads at least the whole of Searle's book "The Rediscovery of the Mind" to get a true picture of his argument.

I agree that the concepts of ‘intelligence’ and ‘consciousness’ often get confused. Searle was answering the computationalist claim that digital data processing was sufficient to give you the whole of the human mind, which must include both the kind of flexible problem solving and adaptability that we call intelligence and the first person irreducible qualitative phenomena that we call consciousness. And it is the latter that give rise to intentionality in the philosophical sense and to semantics as opposed to syntax.
And ultimately, you can take Searle's CR as refuting the Turing test. Now it's easy to get confused if you accept a broadly behaviouristic view that what matters is how things look from the outside; an argument that Gerard O'Brien seems to be propounding in rejecting Searle. Yet that's another part of Searle's whole perspective that isn't explicitly contained in the CR, and so often gets ignored: that the third-person view, based only on behaviour, is inadequate to deal with an intrinsically, ontologically first-person phenomenon, i.e. consciousness. I think if you read Turing's original work it's clear that he buys this behaviouristic view, not surprisingly given how dominant it was in philosophy and psychology when he was writing.
Julia argues in favour of the Systems reply by pointing out that the brain gives rise to consciousness, even though the parts of it are not at all conscious. True but irrelevant; this refutes an argument that Searle is not putting forward, namely that the CR is not conscious because it is not conscious in all its parts. His argument is that what is going on is the wrong kind of thing to be conscious. At its core, he sets out to show in the CR that syntax is not sufficient for semantics. That is, syntax is only an empty symbolic form that is given meaning (semantics) by the human consciousness.

Later, Searle said that this argument was much too kind to "classical" computationalism. Because, he argues, not only is syntax not enough to produce semantics, but syntax is not intrinsic within physics. In other words, both the syntax and the semantics of any language (any system of symbolic communication) only have meaning because we collectively, socially, culturally, attribute meaning to them. So the data in a computer system has no intrinsic meaning at all.

I was really taken aback when I first heard Searle say that computation is entirely an observer-dependent phenomenon. Since I've read more of Searle's books, such as his "Construction of Social Reality", I understand and agree with this. Crudely, there are two kinds of things in the world: those that exist because of the brute facts of physics, like rocks. These would exist even if there had never been any intelligent being in the universe. They are observer-independent.
Then there are things which only exist because intelligent observers agree that they do, like money, political power and language – all language. If no intelligent life ever arose, then none of these could exist.

Now, Searle asks, on which side of this divide does computation, in the sense of the algorithmic, rule-based processing of symbols, belong? The answer is clear, because symbolic rules (as opposed to regularities in nature) only exist when intelligent cultures agree that they do. Just as "cat" would be just a meaningless sound outside the community of speakers of English, so the data in computers is nothing more than empty, meaningless patterns of magnetism or voltage levels or holes in punch cards until intelligent beings agree to attribute meanings to these patterns.

In which case, since data processing requires consciousness and the derived intentionality (Searle's term) that comes from the intentional states of conscious beings outside the computer system, computers can't possibly give rise to consciousness, intentionality, semantics or even syntax themselves.

I was just thinking that you can always tell when someone has missed the point of Searle's comeback to the Systems reply, because they say "but Searle couldn't possibly memorise all the rulebooks and symbols" – and the next second, Gerard O'Brien said just that. This is a red herring; first, and more trivially, thought experiments don't have to be practicable; Einstein couldn't really ride on a beam of light, but so what. More importantly, he linked this to Dennett's suggestion that if Searle really could internalise the whole system, perhaps he really would understand Chinese. Well, as Searle might say, let's think about what that means.

I'd suggest that the main obstacle to Searle's internalising the whole system is the limitation of human memory. Suppose we could overcome that (as an aside, imagine that we supplement Searle's brain by discovering how to interface human memory to computer storage – only storage, mind, not processing power). O'Brien and Dennett are imagining that there would come a point when Searle's consciousness suddenly changes; one moment he's conscious of thinking through the algorithm and remembering the database like this: "OK, here is the Chinese character 20345b – let's find the rule for that – OK, now I find English symbols 203455+4234+2345…" – and then suddenly, when he manages to internalise the last bit of the rulebook and symbol books, he starts to experience lucid, meaningful, semantically charged Chinese? Is that in the slightest bit credible?

July 16, 2014 | Unregistered CommenterGraham Warner
