Naturalizing Representation

I wrote this essay a few years ago for a philosophy of mental representation course. Looking back, I would rework a few ideas, but I would still defend the claim that reality is not describable from the 1st or 3rd person perspective alone. Both are aspects of a larger, ongoing whole.

Q: Why is misrepresentation supposed to be so difficult to explain naturalistically? Do you think it is possible to do so?

To properly address the question, we must first understand the context of naturalism in science and its relationship to a representational theory of mind. Naturalism refers to the philosophical stance that all phenomena are natural, physical processes, and therefore that “all supernatural phenomena are nonexistent, unknowable, or not inherently different from natural phenomena” (Wikipedia). When applied to cognitive science, naturalism provides the foundational dogma of the representational paradigm, requiring that consciousness be equated with the lawful, physical functioning of the brain. From this perspective, a theorist need not concern themselves with subjective experience itself, but ought to study only that small part of experience decreed measurable in objective, physical terms. Either, then, (a) consciousness—something we all experience directly every moment—has been deemed supernatural, and as such nonexistent, or (b) consciousness is simply not required for a naturalistic description of the universe.

Leaving the issue of such a negation of consciousness aside for a moment, let us lay out the problem of misrepresentation. Naturalistic representational theories suppose that the world exists preformed and ready-made for our brain, which is inserted into it as if from outside. Once inside, the brain retrieves basic information about this pre-existing world via the senses. From there, further processing leads to some form of cognitive representation. These mental representations mirror the world, acting as vehicles carrying their contents to an agent. There is, however, a tendency for the brain to incorrectly supply the vehicles with their content, thereby representing the world inaccurately. Any naturalistic explanation of misrepresentation, then, must reveal how a purely physical system such as the brain could mistakenly retrieve and represent information from the world. It must also shed light on what it means that a representation should represent what it does (i.e., what role can interpretation play in what is supposedly an objective world?).

Some theorists suggest that mental representations are only pragmatic mirrors of the world, meaning they are required to be merely good enough to get one by. They cite that evolution does not require perfection, only proficiency. An organism’s cognitive map need only recreate especially vital aspects of its environment correctly (i.e., it need only function properly most of the time). The glitches in the representational process are said to deal with aspects of the world largely unrelated to survival and procreation; they therefore are tolerable to the functioning of the system.

How such a pragmatic theory of representation deals with the notion of interpretation is unclear. To say that representations refer to the world correctly because natural selection has accidentally programmed them to do so is to forget the large role that interpretation plays in higher-level cognition. For instance, in the case of human cognition, one cannot use natural selection to explain why one person sees a glass of water as half full and another as half empty. Clearly, each chose* how to represent the glass, one feeling it should be half empty and the other feeling it should be half full. What also becomes clear in this example is that each person might change their mind in the future should they so choose (they can swap representations of the glass at will).

* In this case, “chose” refers to the interpretive process of the overall cognitive apparatus (predispositions and current emotional state) and not to some specific notion of a self who makes a distinct choice based on free will. The point of the example is merely to show that the way the world appears to each person depends on the subjective state of their perceptual mechanism, rather than on the state of some objective, external domain. Such a domain, could it be said to exist at all, is not one the human organism is privy to.

Of course, the theory of a pragmatic representational process may still hold for certain unconscious operations; but when the representations in question are brought to the level of awareness, they seem to gain their momentum from an aspect of reality deemed superfluous by naturalism (that aspect being one’s actual subjective experience). In other words, the role played by interpretation in selecting representations seems to call into question the premise of the problem of misrepresentation itself. For, if interpretation can play such a large role in how we represent the world, why is it that we suppose there is an objective world “out there” for us to represent in the first place? Once this metaphysical assumption is called into question, the issue of misrepresentation becomes moot. If there is no objective world to represent (at least not one we are capable of reaching), one needn’t worry about misrepresenting anything at all.

Before continuing too deeply into this question of the substantiality of the physical world, let us see from another route why it is that we must call it into question. Let us start with a description of what we know about the brain, given by Alan Wallace:

“…What neuroscientists actually know is that specific neural events (N) are correlated to specific mental events (M), such that if N occurs, M occurs; if M occurs, N occurs; if N doesn’t occur, M doesn’t occur; and if M doesn’t occur, N doesn’t occur. Such a correlation could imply that the occurrence of N has a causal role in the production of M, or vice versa; or it could imply that N and M are actually the same phenomenon viewed from different perspectives. There is not enough scientific knowledge at this point to determine which of these types of correlation is the correct one.”

So then, we see only a correlational relationship between brain states and mind states. We must then ask: how might someone who equates brain states with mind states (i.e., a naturalist) account for the intentionality of cognitive systems? Such a theorist would say that evolution has designed the brain (a) to directly represent its own body and (b) to indirectly represent whatever it interacts with. Neurons are then said to serve the specific evolutionary function of representing those interactions. “In short,” Wallace says, “[their] solution to this problem is that the brain has the capacity to represent other things because it was designed that way ‘by evolution.’ This ‘explanation’ obviously illuminates nothing other than the fact that [such a theorist has] great faith in the mysterious ways of evolution, which for the biologist here takes on the role theologians have long ascribed to God.” In other words, saying that a representation has content (that a neural state is about something) because of evolution does nothing to further our understanding of what is going on.

Regardless of this shortcoming, the role played by interpretation in developing the human cognitive system’s understanding of the world is under attack by most naturalist scientists. Their goal, based on the notion of a pre-existing external world, is to disambiguate our conceptual models so that they better fit the actual world in which we live. In other words, they suppose that there is an independent and objective world, and that within our head is an imperfect re-creation of that world based on sensory input. They then assign themselves the task of correctly aligning our inner representations with the true outer reality. The problem with this approach, as Wallace explains, is that:

“…Scientists have no body of external objective truth by which the alignment of scientific theories and the world outside our heads can be calibrated…The empirical data that we perceive, together with our scientific theories that account for them, all consists of mental representations ‘within our heads’…We have no objective yardstick with which to compare those representations with what we assume to be the ‘real world.’”

The basic idea is that, if our only view of the world comes through the filter of representation, we can never know what the world is in itself. This seems troublesome, but only because we feel so compelled to know the intrinsic qualities of things in themselves. Hilary Putnam refers to this grasping after things in themselves as “the deep systemic root” of the objectivist view of the world. He argues against the idea that intrinsic qualities exist at all, since all “objects” gain their meaning only in relation to the subject whose nervous system interprets them. He argues that we cannot understand what an object “is” if we suppose our representation refers to a mind-independent object. Instead, objects are said not to exist independently of our conventions and conceptual schemes. It is our interpretation of the world that breaks it up into objects. Putnam continues: “Since the ‘objects’ and their [representations] are alike internal to the scheme of description, it is possible to say what matches what.” So then, from this perspective, mental representations have content because they are contextually embedded in one’s chosen (or culturally adopted, or biologically predisposed) conceptual map of the world.

The problem of misrepresentation, then, seems impossible to explain naturalistically. Insisting on a naturalistic account of the mind using the representational model puts unnecessary constraints on a viable theory of cognition, requiring that one tip-toe quietly around an insoluble problem using over-complicated distractions and abstract logical gymnastics. No matter how many linguistic loops a philosopher tries to run around the issue, it remains steadfast in its implications: no physical entity, no matter how complex, can be about another. We know this because aboutness implies meaning, and there is no room for meaning in a causal world of automatic and objective physical processes. Meaning requires context to be understood, and context is relational and groundless. That is, no part of a contextual scheme’s meaning can be known objectively “in itself,” only in terms of what it is not (or how it relates to the whole). We derive meaning from the world experientially by being embedded in our own conceptual understanding of it, not by correctly representing its preformed state.

It seems easier to start again—dropping the metaphysical requirement of a separate, external world—beginning, instead, with what we know: that neural states seen in 3rd person and mental states experienced in 1st person are both valid perspectives of what must be one co-existing reality. As such, reality cannot yet be reduced to an objectively existing physical world of matter (or to an imagined creation of the subjective mind). Any theory of cognition that begins its reasoning from a naturalistic perspective therefore assumes more than empirical science has yet proven.

Sources

“The Embodied Mind: Cognitive Science and Human Experience” by Francisco Varela, Evan Thompson, and Eleanor Rosch. The MIT Press, 1991. (Excerpts from pages 232-233.)

“A Science of Consciousness: Buddhism (1), the Modern West (0)” by Alan Wallace. The Pacific World: Journal of the Institute of Buddhist Studies, Third Series, No. 4, 2002.
