Responding to comments about Bakker’s “blind brain theory”

Discussion has continued beneath my last post about Bakker. Below are a few of my comments there:

rsbakker writes:

I advert to common idiom when discussing theoretical incompetence, but it certainly doesn’t turn on any commitment to representationalism – even less correspondence! The fact is, people regularly get things wrong in what appear to be systematically self-serving ways. You don’t need to subscribe to assertion conditions or truth conditions or anything speculative to commit to this.

People generally get things “wrong” in what respect? How are you defining “wrong” here? Upon what scientific criteria do we determine “right” from “wrong”? I assume you mean to speak of “falsity” and “truth,” rather than right and wrong? Even so, the scientific enterprise is not a Scantron test where we bubble in T or F after each experiment. Theories are always underdetermined by the experimental facts they frame, which means there is always some degree of extra-scientific hermeneutic, aesthetic, or intuitive selection going on in determining which theory is “best.” For example, even given all the empirically verified neuroscientific evidence to date of a brain-mind correlation, brain-based reductionist accounts of what we call “consciousness” represent only one possible causal explanation: it remains entirely possible that the brain functions more like a radio antenna and that the causes of “consciousness” are non-locally distributed beyond the skull (see my reflections on cognitive neuroscientist Michael Persinger and cognitive philosopher Andy Clark, for example). If the scientific enterprise were simply a matter of confirmation or falsification (either a theory is true or it is false), then there would be very few, if any, viable scientific theories. That most of our theories fail to account for all the evidence (or, if they do, fail to definitively disqualify competing theories that also account for the evidence) suggests either that humans are theoretically incompetent, or that nature/matter is more complex than our mechanistic models generally allow.

rsbakker writes:

The life sciences are mechanistic, so if subjective experience can be explained without some kind of ‘spooky emergence,’ as I fear it can, then all intentional philosophy, be it pragmatic or otherwise, is in for quite a bit of pain.

I’d dispute the statement that the life sciences are mechanistic, depending on what you mean by the machine metaphor. There are major unresolved controversies within the life sciences concerning the status of life, and whether mechanism can really account for the self-organizing, biosemiotic, and phenomenological dimensions of even a single living cell (see the neuroscientist Francisco Varela’s 2002 paper “Life After Kant,” or his colleague, cognitive scientist Evan Thompson’s book Mind in Life, for good rundowns of this controversy). Nor is there any reason to conceive of “emergence” as spooky; that way of thinking about the place of wholes in nature is terribly misleading. There is no need to make emergence seem supernatural now that science has the conceptual tools to deal with complexity, chaos, non-equilibrium systems, etc. (see Terry Deacon’s recent book Incomplete Nature for a cutting-edge attempt to account for intentionality in a non-reductive way).

Where I entirely agree with you is that classical philosophical “metacognition” is over-matched by the complexity of the experiential universe. I don’t put much stock in theories like supervenience, functionalism, or anomalous monism for this reason. They are too abstract and cogni-centric and pay too little attention to the complex textures of lived, embodied reality, textures that unfold at or below the threshold of what usually gets called “consciousness.” I turn instead to philosophers like James and Whitehead, who sought to correct for the rationalist biases of so much Western philosophy by turning philosophy’s attention to an investigation of feeling and bodily reference, pushing back against the pretensions of disembodied thought and transcendental deduction.


6 Comments

  1. noir-realism says:

    Ah, for the blessings of the ’empirical turn’: “I turn instead to philosophers like James and Whitehead who sought to correct for the rationalist biases of so much Western philosophy by turning philosophy’s attention to an investigation of feeling and bodily reference, pushing back against the pretensions of disembodied thought and transcendental deduction.”

    And, of course, that’s just it! Of course Kant tried to save both, but failed… Hume’s empirical turn and his skepticism of atemporal forms of thought situated outside what came from the senses alone (what you call the ‘pretensions of disembodied thought and transcendental deduction’; or, what I term the ‘outside time’ gang), led him to the first truly embodied philosophical stance. When Kant said that Hume awakened him from his dogmatic slumber, he spent the rest of his life struggling to save Plato and his traditions by internalizing Reason and creating the eternal categories of thought… that is where Kant truly went wrong, and where most of the past two hundred years of philosophy has followed.

    I tend to agree with where you’re going here, that we need to find pathways out of Idealism (these atemporal philosophies) and toward something else… my only problem with James and Whitehead is that they seem to have an intermixture of both threads, and yet they, too, pointed the way, as you say. I love Whitehead and James… I have written of Whitehead on my blog, and on other blogs have written of the pragmatist traditions.

    The other point you make, of mind being “non-locally distributed beyond the skull,” prompts the question: why did Kant and his tradition reduce Reason to the brain in the first place? Trying to escape the corrosive thought of Hume, Kant and the rest tried to build a bastion within the confines of the mind/brain… I think that, as you mention, some of these scientists are refusing this path, and are leading the way to a wider conception of mind than our current lot of philosophers would like to accept.

    I am, myself, moving toward a temporal critique of these traditions. Both process philosophy and OOO have taken one side of the coin of time: the one sticking with time as final causation – or seed to tree process, while the other deals with efficient causation – or relational time. What sort of ontological differences are there among the present, past and future? There are three competing theories. Presentists argue that necessarily only present objects and present experiences are real, and we conscious beings recognize this in the special “vividness” of our present experience. So, the dinosaurs have slipped out of reality. However, according to the growing-universe or growing-block theory, the past and present are both real, but the future is not real because the future is indeterminate or merely potential. Dinosaurs are real, but our death is not. The third theory is that there are no objective ontological differences among present, past, and future because the differences are merely subjective. This view is called “the block universe theory” or “eternalism.”

    That controversy raises the issue of tenseless versus tensed theories of time. The block universe theory implies a tenseless theory. The earliest version of this theory implied that tensed terminology can be replaced adequately with tenseless terminology. For example, the future-tensed sentence, “The Lakers will win the basketball game” might be analyzed as, “The Lakers do win at time t, and time t happens after the time of this utterance.” Notice that the future tense has been removed, and the new verb phrases “do win” and “happens after” are tenseless logically, although they are grammatically in the present tense. (Similarly, when we use the present-tense verb “is” in “Seven plus five is twelve” we are not making a claim just about the present.) Advocates of a tensed theory counter by saying that tenseless terminology is not semantically basic but should be analyzed in tensed terms, and that tensed facts are needed to make tensed statements be true. For example, a tensed theory might imply that no adequate account of the present tensed fact that it is now midnight can be given without irreducible tensed properties such as presentness or now-ness. So, the philosophical debate is over whether tensed concepts have semantical priority over untensed concepts, and whether tensed facts have ontological priority over untensed facts.

    sorry for being so long winded…

  2. Adam Robbert says:

    Great discussion. Another way to position what you’re saying is via developmental context (or, as you know, structural coupling). The common sense approach to developmental contexts is that there is a brain which undergoes certain developmental processes (Nature), and there is a context which lies on top of it making small adjustments here and there (Culture). The common sense approach needs to be abandoned, as does any approach that wants to claim supremacy for either side of the Nature-Culture split. In order to get a grip on what human minds are, we need to abandon the playing field upon which the terms of debate are set—no Nature and no Culture, but a radical, cosmological realism. In place of (exclusively) internal accounts of brain functioning, what we should have are descriptive analyses of the long chains of associations that form the conditions of reality for certain kinds of minds and behaviors (and vice versa). These chains extend far beyond the brain alone. What’s at stake is understanding that the series of ecological translations and mediations—or, more provocatively, what Bruno Latour calls “transubstantiations”—between heterogeneous groups of agencies are historical events that require as much ethnographic fieldwork and philosophical contemplation as they do neuroscientific research. In this chain of associations and mediations the brain—being one multiplicity of actors among many—is a necessary but not sufficient condition for understanding human minds.

  3. Mark Crosby says:

    The only “spooky emergence” is the fictional spooks of Bakker’s NEUROPATH.

    For an overview of the latest approaches to emergence, check out Plamen L Simeonov’s article “On Some Recent Insights in Integral Biomathics” at
    http://arxiv.org/abs/1306.2843. These approaches are applying to biology and immunology the more strictly mathematical techniques that Fernando Zalamea describes in SYNTHETIC PHILOSOPHY OF CONTEMPORARY MATHEMATICS.

    Example: “A ‘self-emergent object’ is modeled as a colimit … an evolutive system [is] a family of categories indexed by time with transition partial functors applying between them”. Simeonov’s “Wandering Logic Intelligence (WLI) theory” includes: “1. Multidimensional Feedback. 2. Pulsating Metamorphosis. 3. Self-Reference. 4. Dualistic Congruence”.
