Discussion has continued beneath my last post about Bakker. Below are a few of my comments there:

rsbakkar writes:

I advert to common idiom when discussing theoretical incompetence, but it certainly doesn’t turn on any commitment to representationalism – even less correspondence! The fact is, people regularly get things wrong in what appear to be systematically self-serving ways. You don’t need to subscribe to assertion conditions or truth conditions or anything speculative to commit to this.

People generally get things “wrong” in what respect? How are you defining “wrong” here? By what scientific criteria do we distinguish “right” from “wrong”? I assume you mean to speak of “falsity” and “truth,” rather than right and wrong? Even so, the scientific enterprise is not a Scantron test where we bubble in T or F after each experiment. Theories are always underdetermined by the experimental facts meant to support them, which means there is always some degree of extra-scientific hermeneutic, aesthetic, or intuitive selection at work in determining which theory is “best.” For example, even granting all the empirically verified neuroscientific evidence to date of a brain-mind correlation, brain-based reductionist accounts of what we call “consciousness” represent only one possible causal explanation: it remains entirely possible that the brain functions more like a radio antenna and that the causes of “consciousness” are non-locally distributed beyond the skull (see my reflections on the cognitive neuroscientist Michael Persinger and the cognitive philosopher Andy Clark, for example). If the scientific enterprise were simply a matter of confirmation or falsification (either a theory is true or it is false), then there would be very few, if any, viable scientific theories. That most of our theories fail to account for all the evidence (or, if they do, fail to definitively disqualify competing theories that also account for it) suggests either that humans are theoretically incompetent, or that nature/matter is more complex than our mechanistic models generally allow.

rsbakkar writes:

The life sciences are mechanistic, so if subjective experience can be explained without some kind of ‘spooky emergence,’ as I fear it can, then all intentional philosophy, be it pragmatic or otherwise, is in for quite a bit of pain.

I’d dispute the claim that the life sciences are mechanistic, depending on what you mean by the machine metaphor. There are major unresolved controversies within the life sciences concerning the status of life, and whether mechanism can really account for the self-organizing, biosemiotic, and phenomenological dimensions of even a single living cell (see the neuroscientist Francisco Varela’s 2002 paper “Life After Kant,” or his colleague the cognitive scientist Evan Thompson’s book Mind in Life, for good rundowns of this controversy). Nor is there any reason to conceive of “emergence” as spooky; that way of thinking about the place of wholes in nature is terribly misleading. Now that science has the conceptual tools to deal with complexity, chaos, non-equilibrium systems, etc., there is no reason to make emergence seem supernatural (see Terry Deacon’s recent book Incomplete Nature for a cutting-edge attempt to account for intentionality in a non-reductive way).

Where I entirely agree with you is that classical philosophical “metacognition” is overmatched by the complexity of the experiential universe. I don’t put much stock in theories like supervenience, functionalism, or anomalous monism for this reason: they are too abstract and cogni-centric, and they pay too little attention to the complex textures of lived, embodied reality, textures that unfold at or below the threshold of what usually gets called “consciousness.” I turn instead to philosophers like James and Whitehead, who sought to correct the rationalist biases of so much Western philosophy by turning its attention to an investigation of feeling and bodily reference, pushing back against the pretensions of disembodied thought and transcendental deduction.