John Searle

UC Berkeley philosopher John Searle is no stranger to challenging authority. After all, he was the first tenured professor to join Berkeley’s Free Speech Movement. Today he is one of our leading philosophers, and in his new book “Seeing Things As They Are,” Searle takes on icons like René Descartes and Immanuel Kant and outlines his own theory of perception. Searle joins us to talk about why Western philosophy is all wrong about how we experience reality.

Philosopher John Searle on ‘Seeing Things As They Are’ (20 April 2015, Forum)

John Searle, Slusser professor of philosophy at UC Berkeley and author of books including "Seeing Things As They Are: A Theory of Perception"

  • Mason Gibb

    Wittgenstein made Searle and his philosophical project irrelevant. Anyone enamored with Searle should do themselves a favor and pick up Philosophical Investigations.

  • Robert Thomas

    It’s likely that “consciousness” is congruent with the successive states of an asynchronous modifiable and self-modifying multi-value state machine with input from variable sensory apparatus.

    Does Professor Searle find anything wrong with this assertion?

    • Whamadoodle

      Speaking of obscurantism, can you render that in English, please? Unless you’re trying to speak above the heads of everyone reading your post, in which case then katalavaino ti les (Greek for “I understand what you’re saying”).

      • Ron Gutman

        I think Robert is a computer scientist. That’s where words like “asynchronous” and “state machine” come from. I take Robert’s comment as meaning that computers are potentially as conscious as humans. That’s one reason for my asking for a definition of “conscious” – the definitions I gave would definitely allow machines to be conscious. So, if humans want to reserve “consciousness” for themselves and deny it to machines, then they must find a different definition.

        • Whamadoodle

          Thanks Ron! I assumed that he meant “asynchronous” in the same sense that computer manufacturers mean it as well. It is odd, though, and is almost guaranteed to be misunderstood, if you’re speaking with an expert in one field (philosophy, here) and blast off with a million terms from a completely different field (computer science), without prefacing it by announcing that you’re introducing jargon from a different discipline. For instance, you might say “to apply the language of computer science, is consciousness analogous to [I don’t think he really meant “congruent with”…?] asynchronous, modifiable, and self-modifying etc. …? That is, accessing information in a non-linear fashion, appropriate to the type of information needed in a particular situation?” if you didn’t want to be misunderstood.

          Robert is great, but I feel as though he has a tendency either to assume that every layperson is familiar with every term that he uses, whether highly technical or only very basic, or to purposefully ignore the fact that most people aren’t.

          This is a problem, because unless your intention is to draw a velvet rope around your conversation, and then congratulate yourself on being the only one behind it, you want to be understood. Anyone wanting to be MISunderstood can do so; simply take every one-syllable word in your writing, and substitute the longest words possible from your thesaurus, and voila! Watch readers’ eyes glaze over, and their comprehension fail. Congratulations, your writing is unintelligible.

          • Robert Thomas

            Unfair! As I wrote here elsewhere, the succinct definition above is one I’ve deliberately distilled down to symbols admittedly unfamiliar to many and also juxtaposed in such a way as to perhaps provoke a frown even from those familiar with them. But this was an attempt to get a smarty-pants (but real!) question to Searle – who I’m sure will have been exposed to these nouns and adjectives in their technical context – rather than for the further discussion here. This is why after offering the definition, I wrote “Does Professor Searle find…”, you see? Like I wrote here elsewhere, I’ll try to use clear language to expand the thing in a further post, for anyone interested.

            The use of the state machine model to depict and explore neurological and other metabolic processes (including those that Searle affirmed in this interview he believes include the phenomenon of consciousness) is pretty current activity, I’m led to believe.

            I can tell you that I chose the word “congruent” judiciously.

          • Whamadoodle

            Sorry to sting you, my friend. I used the “Robert is great” part judiciously as well.

          • Ted Crowell

            Searle’s thesis can be traced back to Thomas Reid.

          • Robert Thomas

            This makes sense inasmuch as I understand Searle to have reached – through a different path – conclusions similar to those of Peirce’s Pragmaticalistaclism, or whatever he called it, and I recall that the debate with Derrida began as a defense of Austin.

      • Robert Thomas

        I posted this with the hope of getting a response from Professor Searle. Since he’s been interviewed in the past and has written occasionally in a way suggesting he would understand this sort of thing, I thought I might get a bite. Win some, lose some.

        Your question is fair. When I first tried, twenty years ago, to write down what I thought might be meant by “consciousness”, it grew to an RT+ length of twenty pages or so. Now, it’s a lot shorter and maybe less adequate.

        I could describe what I mean by “multi-value state machine” and so on (by this I mean NOT merely a state machine capable of exhibiting more than a trivial state, but one with decision making based on something like Saul Kripke’s modal inferences along with analog-like “fuzzy” state evaluation). But the essential idea in this particular formulation is that the phenomenon of “consciousness” does NOT consist in a particular state of the machine, and does NOT consist in the design of the machine (by which I mean the instantaneous configuration of state – including a large but finite memory – as defined by a rule set or instantaneous information from external stimuli); rather, what “consciousness” IS, is the continued succession of states exhibited by such a machine.

        The Machine (your machine, my machine) is the substrate, which may at any time occupy one of a large but finite number of states (including the state of its large but finite memory) and be in reception of external sensation; consciousness is then its temporal progress from state to state.
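        A minimal sketch in Python (entirely my own toy example; the names and the rule set are invented) of how the succession of states, rather than any single state, is the object of interest here:

```python
# A minimal finite state machine: a rule set mapping (state, input) -> state.
# On the definition above, "consciousness" is NOT any single state and NOT
# the rule set (the "design"); it is the succession of states over time.

def run(transitions, start, inputs):
    """Step the machine once per input; return the succession of states."""
    state = start
    trace = [state]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# Toy rule set: two states, binary input.
rules = {
    ("A", 0): "A", ("A", 1): "B",
    ("B", 0): "A", ("B", 1): "B",
}

trace = run(rules, "A", [1, 1, 0, 1])
print(trace)  # ['A', 'B', 'B', 'A', 'B'] -- the succession, not a state
```

        Freeze the loop and the trace stops growing; on this definition, that is the point at which consciousness disappears.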

        • Whamadoodle

          Thanks Robert–I find Ron Gutman’s statement, below, to be intelligible, but I still find yours to be unintelligible. If you’ll indulge someone who is unfamiliar with your terms, could you please tell me what is meant (presuming that terms such as “instantaneous” or “state” mean different usages than those employed by ordinary Americans in everyday speech) by:

          – state machine
          – trivial state
          – analog-like “fuzzy” state
          – state evaluation
          – instantaneous
          – state

          ? TiA.

          • Robert Thomas

            Okay, I’ll give it a go. This is where my own frustration with others not providing definitions bites me in the posterior. I’ll post a separate response with content, after I think it up.

          • Whamadoodle

            Thanks, Robert!

          • Robert Thomas

            Part 3 of 3

            Now, consider our ninth grade introduction to nerve structure.

            We learned that neurons (mostly) gather signals through their various dendrites from the axons of other neurons, mediated by conditions present at the synapses of the dendritic terminals. These conditions are themselves caused and modified by the neurons on either side of the synapse. Draw a functional analogy between the electrochemical conditions at the synapse and the transition rules of a state machine. Draw a further analogy between 1) the level of neurotransmitter (GABA etc.) and neuro-inhibitor (cholinesterase etc.) released into the synapse by the cells and 2) the self-modifying actions of some state machines on their own sets of transitions.

            Most neurons “fire” at regular intervals, some at high rates and others more slowly. The rate at which a neuron fires may vary somewhat among the same types of neurons and from individual to individual but within a confined range. Individual neurons may fire at a pace distinct from others with nearby cell bodies or they may fire in synchrony with other members of a local bundle. Draw a functional analogy between the regular synchrony of an individual neuron or synchronous neuron bundle and the individual state machine in an asynchronous hierarchy of networked state machines.

            Other neurons are in continuous (“tonal”) conduction, varying the intensity of signal they conduct over time, which suggests a parallel with mediated transmitters of data between asynchronous state machines capable of apprehending multi-valued input.
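            To make the neuron-as-state-machine analogy concrete, here is a crude integrate-and-fire sketch (all numbers and names are mine and purely illustrative, not a claim about real neural parameters): the “state” is the membrane potential, and crossing a threshold is the transition that fires and resets it.

```python
# A crude "neuron as state machine": state = membrane potential; the
# transition rule leakily integrates input and fires/resets at threshold.

def neuron(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # leaky integration of the input signal
        if v >= threshold:
            spikes.append(1)      # "fire"
            v = 0.0               # reset to the rest state
        else:
            spikes.append(0)
    return spikes

print(neuron([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```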


            My use of “instantaneous” was not special but meant only to emphasize the idea that consciousness disappears if the central nervous system under observation is frozen in time. Consciousness in this conception is identical to the activity of regular progression of the machine from state to state. No activity, no consciousness.

            “Evaluation” is the phenomenon that occurs when an input is tested against a transition rule (e.g., “next transition is to state 522 if all inputs are active [i.e., ‘1’, ‘true’]; else, next transition is to state ZF2404A”).
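            In code, that example rule might be sketched like this (the state names echo the example above; everything else is my own illustration):

```python
# "Evaluation": the current inputs are tested against a transition rule.
def next_state(inputs):
    # Rule: next state is "522" if ALL inputs are active (1/true);
    # else, next state is "ZF2404A".
    return "522" if all(inputs) else "ZF2404A"

print(next_state([1, 1, 1]))  # 522
print(next_state([1, 0, 1]))  # ZF2404A
```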

            Finally, with respect to my choice of the word “congruent”…

            In the past, I’ve discussed this predicate adjective in conversation with others about this definition of consciousness. “Consciousness is [blah].” “Consciousness is like [blah].” “Consciousness is indistinguishable from [blah].” “Consciousness is similar to [blah].”

            I settled on congruent for two reasons. The first reason is that I feel that the asynchronous hierarchy of networked synchronous state machine model may be extended to encompass not only small regions of the central nervous system of an organism such as a human being but the entire scope of the nervous system, as complex as it is. This invokes “congruence” in the sense of identicality in size and extent with all aspects of the object under definition that have meaningful content. The second reason for using this word is its parallel implication of consonance and harmony between the things – in this case, between the object and the definition.

          • Robert Thomas

            Part 2 of 3

            Qualifiers peculiar to the definition of consciousness

            The qualifiers my definition places on the conventional definitions offered by, for example, the WP articles are worth discussing.


            State machines are “synchronous”. This means that all internal variables change at the same time, at the time of a trigger event presented at the inputs (the trigger may be the ticking of a clock; if so, the clock is said to be outside of the machine regardless of any implementation).

            By “asynchronous state machine” I invoke a network of separate state machines, each of which may take as any or all of its inputs the particular output(s) of another state machine to which it may be connected; these separate state machines that communicate in this way may recognize different trigger events. Hence they are asynchronous with one another. Among other things, the output of one machine that is changing just at the time another machine is sampling that output as its own input may lead to unpredictable results in the machine triggering later.
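            A toy sketch of the idea (the names “Toggler” and “Sampler” and the trigger rates are mine, purely illustrative): two machines recognizing different trigger events, one sampling the other’s output as its input.

```python
# Two machines with different trigger events, hence asynchronous with one
# another: A is triggered every tick, B only every third tick, and B takes
# A's output (its single internal variable) as its own input.

class Toggler:
    """One-bit state machine: flips its internal variable on each trigger."""
    def __init__(self):
        self.state = 0
    def trigger(self):
        self.state ^= 1

class Sampler:
    """Counts the 1s it sees on its input (another machine's output)."""
    def __init__(self, source):
        self.source = source
        self.count = 0
    def trigger(self):
        self.count += self.source.state

a = Toggler()
b = Sampler(a)
for t in range(12):
    a.trigger()            # A's trigger event: every tick
    if t % 3 == 0:
        b.trigger()        # B's trigger event: every third tick
print(b.count)  # 2 -- what B catches depends entirely on the relative phasing
```

            Shift either trigger schedule slightly and B’s tally changes, which is the unpredictability mentioned above.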

            Modifiability; self-modification

            Conventional stored program computers (not exactly the same thing as state machines) can be categorized in many ways. One way to do this is by how the instruction memory of the computer is juxtaposed with its data memory. In a “Harvard” memory architecture, instruction memory is stored in a way completely separate from data, and when instructions are “fetched” to be acted upon, they travel over wires and switches separate from those used to store and move data. In a “Von Neumann” memory architecture, instruction memory is distinguishable from data memory only due to the moral compass of the programmer and The Grace of God.

            Modern computers are composed of mixtures of these two schemes but, along with the better nature of the programmer, bells and whistles are set off if a VERY BAD THING occurs in a Von Neumann part of the system – that is, when the computer control program attempts to modify itself. Although it can be tempting to do this (and there are very tightly controlled instances where it is allowed), even if one is very careful, extremely unpredictable errors may result that are nearly impossible to understand and correct.

            In the case of state machines, it’s also possible that arrival at a particular new state under certain conditions can be made to modify the transition rules for the current state (or in fact, for any state). There is a substantial literature on the subject of these so-called self-modifying automata.
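            A toy instance (the states and the particular modification are my own invention): arrival at state “C” rewrites one of the machine’s own transition rules, so a later visit to “A” behaves differently.

```python
# A self-modifying automaton: entering state "C" rewrites a transition rule.

rules = {("A", 0): "B", ("B", 0): "C", ("C", 0): "A"}

def step(state, symbol):
    nxt = rules[(state, symbol)]
    if nxt == "C":
        # Self-modification: henceforth A transitions straight to C.
        rules[("A", 0)] = "C"
    return nxt

state, trace = "A", ["A"]
for symbol in [0, 0, 0, 0]:
    state = step(state, symbol)
    trace.append(state)
print(trace)  # ['A', 'B', 'C', 'A', 'C'] -- the second visit to A differs
```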

            Since central nervous system evolution has proceeded over hundreds of millions of years and many millions of generations of adjustments, tweaks and missteps, and utterly without regard to responsible design methodology or even good taste, I infer that the consciousness machine is likely to self-modify. Obviously, the machine may also be modified by infectious agent, trauma, geriatric deterioration or other insult.

            Multi-valued variables; variable sensory apparatus

            I wrote above that the range of values for inputs and internal variables is a binary digit, either “0” or “1”. In the 1960s an idea came out of a subject called “automatic controls” that came to be known as “Fuzzy Logic”. This is a whimsical name for the sober idea that the traditional predicate (symbolic) logic of Russell, Whitehead and Frege could sensibly and usefully be augmented to include – besides “1” and “0” – the values in between these limits, without abandoning the structure used to manipulate the symbols. The truth or falsehood of a symbol may in this context be neither true nor false but to some degree “fuzzy”. Since the language with which a state machine’s transitions are described is this very kind of predicate logic, there’s reason to suspect that the analog nature of our senses and so on might impinge upon the consciousness machine in the way of a continuum of intensity and so be best described in this way, also called “multi-valued logic”.
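            The connectives commonly used for this are the min/max/complement forms (a standard formulation, though not the only one; the example values below are mine):

```python
# Fuzzy logic: truth values lie anywhere in [0, 1], and the classical
# connectives are generalized without abandoning the symbolic structure.

def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# A strong sensation (0.7) conjoined with a faint one (0.2):
print(f_and(0.7, 0.2))  # 0.2
print(f_or(0.7, 0.2))   # 0.7
print(f_not(0.5))       # 0.5
```

            With inputs restricted to exactly 0 and 1, these reduce to the ordinary AND, OR and NOT, which is the sense in which the structure is augmented rather than abandoned.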

            Our senses provide a variety of information in a variety of situations with a variety of degrees of reliability. This implies that nervous systems will have evolved to accommodate continuous variable values from “no sensation present at all” (“0”) to “fully saturated sensation” (“1”).

            Rather than strictly adhering also to the “first order” predicates of Frege and Peirce (“is a philosopher”; “is not a bachelor” etc.) it is likely that inferences used by the patchwork consciousness machine will include “modal” verbs (“is not very likely a dog”; “could possibly be yellow” etc.) as described by Saul Kripke et al.

          • Robert Thomas

            Part 1 of 3

            Opening remarks

            Having usually spent effort pruning and whittling away at my definition of consciousness rather than fluffing it back up, I have to warn that my usual reluctance to post only Implacable Truths may waver. I may be seen to retract or redraw some element.

            First, I say that this brief discussion makes no attempt whatsoever to address the nature of the difference between a nervous system while conventionally understood to be “conscious” and while “unconscious”, or between periods of waking, sleeping or dreaming. I risk invoking concepts with which I’m supremely unfamiliar by saying that my notion is intended to describe – equally well or equally poorly – any of these variations of mind, excluding that of a person physically compromised to the point of being classified as vegetative; those in an “unresponsive” condition, on the other hand, may be included, as much as I understand the meaning of this.

            Though some of the conventions here are found in topics in “computer science”, I generally reject this phrase. Instead, I think of them purely as mathematical ideas. I reserve “science” to describe the activity of imperfectly observing the a posteriori natural world and the arrival of consequently synthetic, contingent conclusions about its nature. I’m far from alone in thinking this way; respected institutions agree. When I was a young person (and still, perhaps?), one might take an undergraduate degree at Cal in “computer engineering and mathematics” but generally not in “computer science”.


            Rather than try to offer a definition for “state” and “state machine” I am satisfied with the adequate WP articles “State (computer science)” and “Finite state machine”. Briefly,

            1) a finite state machine is an idealized machine (included in a larger group of such things called “automata”) which apprehends input and produces output and that is completely described by a finite list of discrete possible “states” which the machine is said to occupy (see [2] below), along with a fixed set of “transition rules”. At any time, the state machine occupies a single state (the “current” state).

            The fixed transition rules deterministically describe which new state the machine will occupy immediately after step-wise change from the current state, as initiated by an external “trigger event”. These rules may take into account EITHER the current state OR the values of external inputs, or BOTH of these.

            The output of a state machine may be considered to be the name of the state that results from the transition from the current state.

            2) a “state” in this context is understood to describe a finite set of internal variables, each of which may take on one of a finite set of discrete values (or a value from a finite range) together with the values presented at a finite number of inputs.

            Conventionally, the range of value that any internal variable or input may assume is limited to “0” or “1” (“true”, “false” etc.). Each variable may change only during a transition from the current state to the next, according to the transition rules; optionally, some internal variables may persist in value from one state to the next independent of the input values presented during the current state. Such internal variables may be called “memory”.

            By “trivial” state I mean the purely “combinatorial” state machine: one that changes according to the transition rules and ONLY the input values, and does not contain any internal variables.
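            The distinction can be sketched in a few lines (my own toy example): a purely combinatorial function of the inputs versus a machine whose one internal variable persists between triggers.

```python
def combinational(a, b):
    # The "trivial" case: output is a pure function of the current inputs;
    # the same inputs always yield the same output.
    return a ^ b

class WithMemory:
    """One internal variable ("memory") persists from state to state."""
    def __init__(self):
        self.mem = 0
    def trigger(self, a):
        out = a ^ self.mem
        self.mem = a        # persists to the next transition
        return out

m = WithMemory()
outs = [m.trigger(x) for x in [1, 1, 0, 1]]
print(combinational(1, 0), outs)  # 1 [1, 0, 1, 1] -- history-dependent
```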

  • Fay Nissenbaum

    “ontological solipsism”? Prof. Searle is using terms of art without defining them. Many of us have taken philosophy classes long ago or not at all. Don’t dumb it down, but please do fill in the blanks when using jargon, lest we not hear as he means…

    • Whamadoodle

      Yes, jargon is obscurantism, which is a sin too often found in philosophers! Our legal system moved (I think there is actually statute law dictating this) to using less jargon in their lawmaking, so that any ordinary reader could understand the law. Philosophers should too. Be clear!!

      • Mason Gibb

        This is highly amusing, as Searle typically levies the “obscurantist” charge against recent Continental thought. All depends on where you’re sitting, I guess.

        • Whamadoodle

          Yes, and well said if he does–if Derrida’s heirs taught us anything, it is that unending sentences should always be suspected of representing a Moebius loop connecting one’s face with one’s anus, transiting through one’s own intestines.

          I didn’t find his speech obscurantist during the show, really; but I do echo Fay’s caution that one should define unfamiliar terms for the layperson. Of course, Professor Searle was happy to do so (he defines “ontological,” etc. during the show), so no foul on his part after all.

          • Timothy Palmer

            “if Derrida’s heirs taught us anything, it is that unending sentences should always be suspected of representing a Moebius loop connecting one’s face with one’s anus, transiting through one’s own intestines”

            A colourful description, and one a professor of mine (who frequently uses the Moebius strip as a metaphor) would find much to be amused by.

            With this being said, I often find that the problem with many criticisms of the alleged (or otherwise) obscurantism of the continental tradition comes from empiricists who lack any theoretical training in continental philosophy whatsoever: to comprehend that strange beast with which one is grappling, one must first have read its canon. There’s no doubting that continental thought can be, at times, opaque (the less said about Hegel, or early Heidegger, in terms of clarity of expression the better: mind you, the logical-positivists are to blame for similar sins of language in the empiricist tradition), but if the language is complex or seemingly opaque it is because the ideas exist at the extremity of extant theoretico-philosophical knowledge, and the language used must, itself, attempt to navigate linguistically uncharted waters.

            Often, what results are linguistic neologisms and metaphors that are, at first, seemingly nonsensical. Sometimes, in the hands of lesser philosophers, what results is just gibberish, but most with a training in the continental tradition should be sufficiently well-versed to be able to distinguish what is what – and vice versa in the empirical tradition, too. As for plain speech philosophy, whilst the ideas of Strawson and Austin, for instance, are interesting, I think a large part of why their movement mostly ceased to exist from the 1970s and beyond is due to the limits of natural language philosophy to do as the continentals strive to do in pushing the theoretico-philosophical limits intrinsic to questions of ontology, phenomenology, etc., and the bases of such explorations in sensitive philological-etymological investigations explicit to constructions of meaning within language.

            However, yes, it is only right (and polite) for philosophers to explain the intellectual jargon that constitutes philosophical shorthand when engaging in public intellectual duties, so as to connect these difficult ideas at a level the layperson is able to comprehend [of course, this is not to disparage laypersons, nor demean or imply imputed intelligence]. Some, like Slavoj Žižek, for instance, are so good at it they’re often accused of being “pulp” or ersatz philosophers, but I think this misleading snobbery.

          • Whamadoodle

            “Some, like Slavoj Žižek, for instance, are so good at it they’re often
            accused of being “pulp” or ersatz philosophers, but I think this
            misleading snobbery.”

            I agree completely–Žižek was refreshingly clear. I think those making such accusations must be the sort who want to use obscurantism to create a gated community, to insulate themselves from criticisms of lack of depth, or of lack of logic, by (as the philosophers of the underground comix of the 60s called it) “baffling ’em with bulls—.”

            The problem I had with the post-modernists, Derrida’s followers, if not Derrida himself, was twofold: first, that they indulge in (per another great modern philosopher, Louis CK) this “insane deconstruction”–the incessant breaking down of the definitions of words themselves, ostensibly in an attempt to rethink their meaning, but really, seemingly in an attempt to make unending busywork (again, per Louis CK’s apprehension of his small daughter incessantly asking “why?”).

            Secondly, that like Libertarians, THEY NEVER GET AROUND TO SAYING WHAT ON EARTH THEY STAND FOR. Or if they do, and you take it to a second post-modernist and say “well post-modernists believe X, correct?” they claim “no no! Post-modernists don’t believe that at all!” Gaa.

          • Timothy Palmer

            “I think those making such accusations must be the sort who want to use obscurantism to create a gated community”

            An astute observation, and one I agree with, by and large. The issue for me is that Gramsci is right, and they are wrong, to the extent that “intellectuals” – as a specialist class (within or without capitalist modes/relations of “production” – your choice: I’m with Weber, not Marx) – are either part of the problem, being the production of ideological assumptions, or they are part of the solution, which is to say, the destruction of same; however, he was also careful to note the very real danger that this same “class” of people fail abjectly to critique their own ideas/philosophies ontologically and epistemically, and instead reify the very ideas they should be shining a light upon. Of course, in steps the post/post debate, which seems to me – even as a continentalist (possibly of the very worst sort!) – to define itself through its infinite capacity for differentiation without apparent recourse to Leibnizian reintegration between thesis and praxis.

            Derrida’s deconstruction seems altogether fine to me in terms of Derrida’s usage, but as with many of Foucault’s acolytes (who are really Heidegger’s acolytes, only less intellectually honest with themselves, given Heidegger’s Nazi associations), those who follow either misunderstood the intent, misrepresent it, or simply lack the talent of either. I haven’t had much opportunity to enjoy Louis CK’s humour, but it seems to me that the very best stand-up comics are some of the finest philosophers/thinkers we’ve got for the simple reason that they all-too-often are able to make the telling insight communicated cleverly and economically with the humour of its telling proving far more beneficial to ensuring the right message is being learned by those listening to the “act”: it’s a potent cocktail.

            I agree about the libertarians, but unlike the post-modernists, who I tend to think struggle to get around to the point of their various positions because the position itself is never solidified, the libertarians tend to derive from the empiricist tradition (but not always), and more than this, openly declare what they are opposed *to* – but, as you say, abjectly failing to specify what it is they are in support *of*; theirs is the bigger intellectual failing, I would wager.

  • Ron Gutman

    Define “conscious”.

    • Robert Thomas

      It’s something I’ve tried to do, now and again, for several years. See my attempt, posted here elsewhere.

      What’s your definition? Do you believe one exists?

      • Ron Gutman

        Two definitions that have come to mind are: 1 – a conscious experience is one that the experiencer can articulate in language. That’s a definition that is easy to apply. 2 – consciousness is analysis by the mind (brain) of itself.
        I get frustrated hearing people say that explaining consciousness is the biggest unsolved problem in science without giving a definition. Without a definition, there is no scientific problem at all. I look for your definition.

        • Robert Thomas

          I’m not so sure about (1) being easy to apply. Which organisms have language? What criteria are there for deciding this? We may not be able to determine which organisms have language, but obviously our consequent inability always to apply this rule to an organism doesn’t preclude the organism from meeting the standard, or (1) from being a sensible rule. And I’m not sure that (2) isn’t hopelessly analytic (in the sense that the predicate is just a recapitulation of the subject).

          I agree heartily that lazily neglecting to discuss definitions is a really frustrating failure of interlocutors in such discussions and in fact is a chief (not lone) cause of the frustrating and stupid antipathy between scientists and (analytic) philosophers.

          • Ron Gutman

            Regarding 1, yes the definition depends on the definition of “language”. BTW, these are definitions, not rules, so “sensible” is whatever can be agreed on. If you don’t like a definition because it excludes some creatures (e.g. your dog), then you should have some other definition already in mind.
            Regarding 2, I don’t know what “hopelessly analytic” means; it does imply that consciousness is a self-referential phenomenon and, as mathematicians and computer scientists know, self-referential things can be tricky. I think some people expect “consciousness” to be something complex so that it fits their expectations.

          • Whamadoodle

            It’s an interesting question, about language. I think this may be a natural thought when we attempt to define “consciousness,” because what do we mean when we say “I was unconscious”? We mean “I was unresponsive,” and we judge it by noticing that a person is not responding when we address them.

            However, it may narrow things down too much: a deaf person won’t ever respond to you if he or she doesn’t lip-read, but the deaf person may be as perfectly conscious as any of us are. However, at 3:00 a.m., also like any of us, they don’t respond to ANY stimulus unless it’s sufficient to revive their consciousness.

            Yet if we take someone who is “unconscious,” asleep and dreaming, and have our neighbor honk their car horn, the sleeping person may begin to dream of being in traffic. So are we ever truly, completely without consciousness? And are sea creatures that never sleep (there are some animals, like dolphins I think, who alternate shutting off one hemisphere, then the other, for “sleep,” and in that way, the other hemisphere is always “awake”) ever unconscious? Are they more or less conscious than we are?

            Further food for thought: if language is the yardstick, then is the dog who learns 100 words (as border collies can do, for instance) more conscious than the baby human who knows none yet, but who is learning with every moment?

          • Robert Thomas

            I only meant that a definition of “language” suggests agreement between the parties on the imposition of some rule… is a grunt language? is a moment of stern eye contact language? Is the release of a volatile aromatic compound language? And so forth.

            I say it’s okay to have a period of discussion in order to more or less agree on a definition of terms.

            My preference is to imagine a super-duper process whereby I just get to look at someone else’s brain very carefully. If I see that this person is a regular person exhibiting regular behavior that we would generally describe as “conscious behavior,” then “consciousness” is what I see going on with her brain. This way, I escape the self-observation problem entirely and invite what to me seems a simpler observation: is this a regular person, behaving normally? Do I see similar things going on in the brains of others behaving similarly?

          • Whamadoodle

            Reducing the definition of “consciousness” to mean only the functions that one’s brain exhibits seems to draw a conclusion that equates consciousness with brain activity. This seems odd: dogs, having brains tinier than that of a tiny human baby, will understand the meaning of far more words than that baby does yet, but we don’t describe the baby as unconscious.

            If “consciousness” is defined instead as “responsiveness,” whether verbal response in language, or a more expanded responsiveness including all stimuli, then that raises questions such as: does this mean only external stimuli? What if someone is deep in thought about a memory, and their eyes are open, but they don’t register a question from their spouse? Are they really not conscious beings? I’d say they are; they’re just focusing on something different.

            That focus seems to be one of the main things at issue.

  • Fay Nissenbaum

    Does Searle have any comment on how the free speech movement of old gave us such great oratorical moments as Mario Savio’s in 1964, yet the Occupy movement gave us not one memorable speech or speechmaker?
    A minute of Mario Savio:

    • Guest

      thanks, Michael!

    • DoNoHarm

      Sadly, I have no idea what the Occupy movement accomplished locally except to contribute to overtime pay for the police and increase the price of property insurance for small business owners in Oakland.

  • Chris Davidson

    Is Professor Searle making the assertion that consciousness emerges from the neural pathways of the brain? If so, please comment on the theory that consciousness exists independently of the material universe (the dimension of space-time, essentially).

    Thank you for having such a great guest!

    • AllisonRoad

      I am sure Searle would emphatically say that consciousness is an emergent property of living brains, and that the idea that consciousness exists independently of organic minds is a complete fantasy. So, not only is computer consciousness a contradiction in terms, but “independently of the material universe” is just fall-off-the-cliff absurd.

      But if you are asking about something that exists “outside nature” (i.e. the supernatural — God), then Searle, like any scientist, has no greater expertise to answer this question than anybody else. Science has its limits.

      • Bill_Woods

        Do “living brains” or “organic minds” have some supernatural quality? If not, they can be duplicated in other (material) media. “Uploading your consciousness” into another brain — in the sense of moving the one & only mind — looks like a fantasy, but uploading a copy isn’t. Silicon!Searle 1.0 may agree with Meat!Searle_2015 that it’s a similar-but-distinct version of the original. Or it may think it’s the Searle.

        By 2115, the difference may be academic. (But by then there’ll probably be millions of minds with an equally valid claim to being the Searle.)

        • AllisonRoad

          If you know Searle, then you know that “uploading” is nothing but symbol manipulation. Living brains do not work by symbol manipulation. Consciousness cannot be uploaded. The year of the upload makes no difference. Our understanding of consciousness will no doubt improve in the future, but not our ability to make silicon think. Ain’t gonna happen.

  • Ben Rawner

    Currently there is a growing group of people seeking immortality by uploading their consciousness onto a computer, what they call the singularity. If the brain creates consciousness, then uploading onto a computer would be impossible. What does Professor Searle think of this, and would he consider it real consciousness or some other form?

    • Whamadoodle

      “uploading onto a computer would be impossible”

      I agree with that statement, because there are two simple truths: 1) human beings are unaware of many of the functions of our own bodies, and therefore of our brains (for example, our bodies are often fighting disease before we are ever aware we’re not well); and therefore, 2) we are incapable of programming a machine that will take every single aspect of our consciousness into account. This holds true whether consciousness can exist independently of a physical body or is only a function of a physical body.

    • AllisonRoad

      I’m sure Searle would agree that the idea of uploading your consciousness is a fantasy no less than time travel, anti-gravity, UFOs, and other nonsense. And I completely agree with him. On the other hand, uploading of consciousness is one of the key plot devices in many science fiction stories, and is central to the current movie “Chappie”. It does make for great fun on the big screen.

      • Whamadoodle

        Some of your statements as to what Professor Searle “would” agree with seem to jibe with what he said on the program; however, I think he did in fact answer the statements you’ve replied to (or ones very like them) in the broadcast, so I urge you to listen to it. The link is under the dateline at the top of the page. It was an interesting show.

    • joeHollyweird

      Check out Dr. Bruce Greyson’s talk on YouTube, “Is Consciousness Produced by the Brain?” He cites numerous medical studies that, he says, prove the brain is not, and could not be, the only source of consciousness: people born with brains that shouldn’t support higher functions, people with atrophied brains who suddenly regain faculties prior to death, people declared dead who remember details, including blind patients who have a near-death experience and correctly identify colors on doctors’ clothing. The point is, if the brain is not the only creator of consciousness, it behooves us to examine the implications of that. Neuroscientists, psychiatrists, and Harvard and Yale researchers interviewed in “It’s a Wonderful Afterlife” discuss how quantum physics is leading the way into the study of consciousness. On this view, machines would be sentient only if we chose to incarnate in one, since the life-selection process appears to be based on free will and not dictated by any other factors; at least that’s what the data is said to show. First we’d have to accept that life as a robot would be a better, more interesting choice than life as a human. Anyone is welcome to choose it, but I imagine they’d be bored hanging out in a mundane processor when the human ecosystem is filled with far more entertaining choices.

  • Fay Nissenbaum

    Any application to the media? I am reminded that the late Tom Snyder said, “The only thing phonier than show biz is the news biz!” From tsk-tsking newscasters to edited video supposedly capturing reality, we see a packaged product, yet we talk about it as though we witnessed the truth.

  • Alex

    Could Professor Searle comment on how intentionality arises in conscious animals, when the parts (bodies) are not intentional?

  • Mrs. Eccentric

    “No one expects a computer simulation of weather to cause a rainstorm in San Francisco…” Hahah! The best comeback to this idiocy I’ve ever heard. Thanks!

    • dgodawood

      Yes, but, upload advocates intend to transfer consciousness to another REAL (though man-made) mind, and not into a computer simulation of one.

  • Mjhmjh

    In listening to this interesting broadcast, I have become unpleasantly aware of my woeful ignorance as far as philosophy is concerned! If I’d like to become more enlightened by reading some of Professor Searle’s books, which one should I read first?

  • Anastasi Mavrides

    Wittgenstein (referred to earlier) wrote, in his “Brown Book” I think, that if you look in a mirror and attempt to point to the subjective source of your visual experience, you have to, paradoxically, be pointing everywhere in your perceptual field (and not pointing to your physical eye). Does Prof. Searle have any thoughts about this exercise and its challenge to ordinary notions of subject–object perception? Thanks.

  • Sandip Chitale

    The full question I had sent:

    Can John comment on the separation of the subject and object of perception, and the truth about each? I always find that people confuse the two and think the truth of the subject is the same as the truth of the object. For example:

    When a person reports a spiritual experience there are two parts: the perception in their brain (the subject), which IMO is different from the object, the thing they experience, i.e. the spirit (say!). I think no one will or can deny the truth of the person’s inner experience, but that does not mean the spirit is real.

    Or when a person perceives a red pigment: the perception of the color is really a property of the brain of the person experiencing red. The emission or reflection of a particular frequency of light is a property of the pigment. So IMO, when people like Dan Dennett claim that colors are real (in “what is real?” debates, etc.), what they really mean is that the distinct perception of a particular frequency, which the person has learned to call red, is real, not the color red itself.

    It is easy to see how the exact same perception (in the scenarios above) can be induced inside brains using other mechanisms, and it happens all the time: with drugs, or by pressing on the eyeball, you sometimes see red.

    Michael only asked part of it, but I really wanted to ask about the separation of the truths of the subject and the object of perception.

  • Codger37

    Prof. Searle does much to bring reason out of mush and give us insights that validate our disdain for the manipulators. He does for philosophy what Feynman did for NASA and Menninger did for labels in mental illness.

    Thank you!

  • Old Geyser

    Since he likes Leibniz, think of Leibniz’s mill analogy, except with the gears and mechanisms made from Jell-O. Aha… it is the inner function of hardness that grinds the wheat, and likewise there are inner cellular biological functions that cause experience/consciousness/representation…

  • Old Geyser

    There are four forces of nature, but read any chemistry, biochemistry, or biology book and only valency and energy are taken into account, so what we get is a mechanical description of nature. Consciousness may simply require an understanding of all the forces when it comes to neurons. My suspicion is that neurons may be unifying metabolic functions across cells, forming supercells in the cortical areas, aggregating the specialized cortices across the neocortex, and setting up a control-feedback loop into the primitive brain structures via the thalamocortical loop and other structures.
