“Electrons Don’t Think” by Sabine Hossenfelder

The following is a comment I posted on the physicist and blogger Sabine Hossenfelder’s blog Backreaction to a post titled “Electrons Don’t Think.”

https://backreaction.blogspot.com/2019/01/electrons-dont-think.html


Hi Sabine.

I discovered your blog last night after Googling “Carlo Rovelli and Alfred North Whitehead.” It brought me to Tam Hunt’s interview with Rovelli. I have been studying Rovelli’s popular works lately (I just finished The Order of Time) because I’d heard his loop quantum gravity might be a natural fit with Whitehead’s panexperiential process-relational ontology. I am a philosopher, not a physicist or a mathematician, so I struggle with many technical papers in physics journals (it is helpful when the author is kind enough to lay out the conceptual structure of the math). Luckily, I’ve noticed that popular books are the best place to look for a physicist’s natural philosophy and the best way to understand the metaphysical background of a physicist’s theories. I am looking forward to reading your book Lost in Math. It strikes me as another example of a larger trend in theoretical physics (also exemplified by Lee Smolin) that’s challenging the ascendancy of mathematical speculation over experimental evidence and empiricism.

As for your post “Electrons Don’t Think”, I don’t know what panpsychist philosophy you read, but either it was badly written or you misunderstood it. There are, of course, many varieties of panpsychism, just as there are many varieties of materialism and idealism, etc. Perhaps the variety you read has misled you. The panpsychism of, for example, the mathematician, physicist, and philosopher Alfred North Whitehead was constructed precisely in order to provide a new metaphysical interpretation of the latest scientific evidence (including relativity, quantum, evolutionary, and complexity theories), since the old mechanistic materialism could no longer do the job in a coherent way. Panpsychism is metaphysics, not physics. A metaphysical scheme should aid in our philosophical interpretation of the physical evidence, not contradict it. Any philosopher whose metaphysics contradict the physical evidence is doing bad philosophy.

I like to distinguish between two main species of panpsychism:

1) substance-property panpsychism (Aristotle, Spinoza, Leibniz, and contemporary philosophers Philip Goff, Galen Strawson, and David Chalmers seem to me to fall into this category)

2) process-relational panpsychism (Friedrich Schelling, William James, Henri Bergson, Gilles Deleuze, A. N. Whitehead)

I count myself in the latter category, and following the Whiteheadian philosopher David Ray Griffin, I prefer the term “panexperientialism” to panpsychism, since the idea is not that electrons have the full capacities of human psyches (reflective thinking, deliberate willing, artistic imagining, etc.) but that all self-organizing systems are possessed of at least some modicum of feeling, even if this feeling is faint and largely unconscious in the vast majority of systems. Human consciousness is an extremely rare and complex integration of the more primordial feelings of these self-organizing systems.

I unpack the differences between these species of panpsychism/panexperientialism at more length in this blog post. In short, the substance-property species of panpsychism has it that mind is an intrinsic property of all substance. This at least has the advantage over materialism that it avoids the hard problem of consciousness and provides a way out of the incoherence of dualism. But I think substance-property panpsychism is working with an overly abstract concept of consciousness. Consciousness is a relational process, not a quality inhering in a substance. Consciousness emerges between us, not in you or in me.

You write: panpsychism is “the idea that all matter – animate or inanimate – is conscious, we just happen to be somewhat more conscious than carrots. Panpsychism is the modern elan vital.”

I would say that panpsychism is the idea that all matter is animate. What is “matter,” anyway, other than activity, energy vectors, vibrations? Is there really such a thing as “inanimate” matter, that is, stuff that just sits there and doesn’t do anything? As for the “elan vital,” I suppose you are trying to compare panpsychism to vitalism? Vitalism is the idea that some spiritual agency exists separately from merely mechanical matter and drives it around; it is the idea, for example, that angels push the planets around in their orbits. The panexperientialist cosmology I articulate in my book Physics of the World-Soul explicitly denies this sort of dualism between spirit and matter. Panexperientialism is the idea that spirit and matter are not two, that mechanism is merely an appearance, a part mistaken for a self-existing whole, and that ultimately Nature is organic and animate from top to bottom.

 

Politics and Pluralism in the Anthropocene

Notes from a talk I gave at CIIS this past March titled “Politics and Pluralism in the Anthropocene”

Here’s the video of the whole panel:

https://youtu.be/sgoAZV4VVsc

Foucault on Hegel:

“[T]ruly to escape Hegel involves an exact appreciation of the price we have to pay to detach ourselves from him. It assumes that we are aware of the extent to which Hegel, insidiously perhaps, is close to us; it implies a knowledge, in that which permits us to think against Hegel, of that which remains Hegelian. We have to determine the extent to which our anti-Hegelianism is possibly one of his tricks directed against us, at the end of which he stands, motionless, waiting for us.” (Discourse on Language, Inaugural Lecture at the Collège de France, 1970-1971. tr. A. M. Sheridan Smith)

Begin with Hegel’s claim to have achieved absolute knowledge of Spirit, and at least to have foreseen this Spirit’s becoming concrete in the historical, social, and ethical life of the human community.

Marx read Hegel upside down, but still read Hegel. He was a materialist, but a dialectical materialist who recognized the potential of the human spirit, and this potential’s degradation and alienation from itself at the hands of capitalism. Marx tried to shake human beings awake, out of their slumber, out of the false consciousness that commodifies labor, life, and value.

It is not easy to do better than Hegel and Marx in terms of understanding, diagnosing, and prescribing action to overcome the contradictory situation in which humans find themselves as neoliberal capitalist subjects. But the dawning realization that we live in the time of the Anthropocene is fundamentally changing our situation. We can no longer talk about the nonhuman world, about what used to be called “Nature,” as though it was something separate from us, some kind of inert background or stage upon which human history progresses. As good dialecticians, Hegel and Marx fully recognized this entanglement of the human and the physical world, but they did so in a rather anthropocentric way that still presupposed and celebrated the idea of mastering nature. The Anthropocene signals, yes, the end of history, but also the beginning of (or at least the beginning of human recognition of) what Latour refers to as geostory. From Latour’s point of view, Hegel would never have expected our current situation, where Spirit, after its millennial march of dialectical progress, suddenly finds itself at risk of being suffocated, sublated, by carbon dioxide. As Latour describes it, the ecological crisis is pushing us into a profound mutation in our relation to the world. When the world as it has been known to the Western metaphysical project ends, we are left not with no world, but with many worlds. For Latour, politics is the composition of common worlds through the negotiation of differences. Political negotiation cannot be undertaken with the presupposition that unity has somehow already been achieved. If politics fails, we are left with a war of the worlds. A pluralistic politics asks us to forgo the desire for the premature unification of the world, to accept that “the world” has ended and diplomatic negotiation is the only viable way of “worlding.” Ours is always a world-in-process, and any unity we do achieve is fragile and must be continually re-affirmed and maintained.

Latour has been deeply influenced by William James. James positioned his ontological pluralism against Hegel and Marx’s dialectical monisms. William James was appreciative of Hegel, but he was certainly a counter-Hegelian thinker. As far as Marx goes, James was too American to ever fully reject at least the individualist spirit of capitalism, even if he was suspicious of capitalism’s larger cultural impact and its relation to American imperialism. In a letter to H. G. Wells in 1906, for example, James lamented “the moral flabbiness born of the exclusive worship of the bitch-goddess SUCCESS.” James thought worship of success, by which he meant money, was “our national disease.” James championed the individual, but an individual who is sympathetic to meeting and being transformed by novel differences, whose selfhood is leaky and perforated by human and nonhuman otherness, whose identity is always in-the-making and open to question and revision.

James on excess: “Every smallest state of consciousness, concretely taken, overflows its own definition. Only concepts are self-identical; only ‘reason’ deals with closed equations; nature is but a name for excess; every point in her opens out and runs into the more; and the only question, with reference to any point we may be considering, is how far into the rest of nature we may have to go in order to get entirely beyond its overflow. In the pulse of inner life immediately present now in each of us is a little past, a little future, a little awareness of our own body, of each other’s persons, of the sublimities we are trying to talk about, of earth’s geography and the direction of history, of truth and error, of good and bad, and of who knows how much more? Feeling, however dimly and subconsciously, all these things, your pulse of inner life is continuous with them, belongs to them and they to it. You can’t identify it with either one of them rather than with the others, for if you let it develop into no matter which of those directions, what it develops into will look back on it and say, ‘That was the original germ of me.’” (A Pluralistic Universe)

James leans strongly in the direction of particular, unique, once-occurrent individuals (even if he does not see individuals as autopoietic, but as sympoietic). In contrast, some historical performances of communism have leaned in the other direction, toward some abstract conception of communal will, and when individuals stood in the way of this abstract will, as we saw in Stalin’s Soviet Union and Mao’s China, they were crushed. Our capitalist society claims to prize the individual highest, but it is corporate individuality that we really cherish. We human individuals are mere cogs in the labor machine, and Earth is a store of raw materials and a garbage heap. Either way, then, under capitalism or communism, human and nonhuman communities and individuals are in trouble.

Our challenge today, in the Anthropocene, is to think individuality and community concretely, to think relation, difference, and particularity concretely. Normally thinking seeks out universals, essences, substances, and to the extent that the Western metaphysical project has sought out universals, essences, and substances that failed to align with the particular contours of the sensory, social worlds that we inhabit, it has done great violence to those worlds. As a result of the failure of our ideas and concepts to cohere with reality—that is, to sympoietically relate to the communities of actual organisms composing the living planet—these concepts have functioned to destroy them. Humans, whether we like it or not, are in community with these organisms, our worlds overlap and perforate one another; we touch interior to interior, my inside bleeding into your inside bleeding into all nonhuman insides. But our subjectivities do not just add up or sum to some seamless Globe-like Mind. Gaia, Latour is constantly reminding his readers, is not a Globe! To the extent that we are all internally related to one another, we form a network of entangled, overlapping perspectives, where each perspective is still unique and once-occurrent, novel; and yet each is also related to what has come before and will be related to what comes next. We are individuals-in-communion, communities whose wholeness subscends the individuals who compose them. Subscendence is a concept developed by Timothy Morton to refer to the way that wholes, like Gaia, are actually less than the sum of their parts. He calls this “implosive holism” and contrasts it with “explosive holism,” the sort of holism that led Stalin to murder millions of individuals for the sake of the Soviet Union, or that leads some environmentalists to emphasize saving species or even the whole planet without paying enough attention to individual organisms (a species doesn’t feel pain; only individual organisms feel pain, etc.).

So the question becomes: how do we think pluralism, difference, and diversity concretely, and not abstractly? When we think particular identities or individuals abstractly, we do violence to them; we try to universalize them in an overly abstract way without being sensitive to their unique contours. This is a form of reductionism. We can reduce individuals “up” to the whole, or reduce them “down” to their parts. Pluralism is trying to find a middle path between both forms of reductionism: It seeks a “strung-along” sort of holism (as James put it), not a global or continuous holism where each thing is connected to everything else in exactly the same way. Instead, as Donna Haraway puts it, “Nothing is connected to everything” even though “everything is connected to something.”

Thinking pluralism concretely means stepping out of a sense of exclusively human society, out of the self-enclosed social bubble that used to insulate us from any access whatsoever to something called Nature, or “the environment” standing in wait “over there” for science to objectify into knowledge or for the economy to commodify into money. Thinking pluralism concretely means stepping outside of the monetary monism of contemporary capitalism, where all value is reduced to exchange value in the human marketplace, to instead become part of a democracy of fellow creatures, as Whitehead puts it, where values pervade the biosphere, and “Nature” is no longer just a realm of inert, law-abiding facts but of creative, expressive agencies. Thinking pluralism concretely means walking out of the old Copernican universe, forgetting the mastery-seeking knowledge supplied by the monotheistic gaze of Science, in order to inhabit a new cosmos composed of infinitely many perspectives, more a pluriverse than a universe.

Whitehead’s Radically Empirical Theory of General Relativity

“The doctrine of relativity affects every branch of natural science, not excluding the biological sciences. . . . Relativity, in the form of novel formulae relating time and space, first developed in connection with electromagnetism. . . . Einstein then proceeded to show its bearing on the formulae for gravitation. It so happens therefore that owing to the circumstances of its origin a very general doctrine is linked with two special applications.”
–Whitehead (The Principle of Relativity, 3).

One of the biggest surprises for me upon reading Auxier and Herstein’s book The Quantum of Explanation was learning that Whitehead’s theory of extension (or “mereotopology” as it has come to be called) has been taken up by computer scientists working in the field of robotic vision (see for example the work of Ian Pratt-Hartmann).

“It is a widely acknowledged fact in this sub-discipline that Alfred North Whitehead’s work on extension is foundational for their enterprise. Our experience has been that Whitehead scholars are simply astounded to learn of this fact. Yet we should have expected and even predicted such a connection” (QE 90).

Guilty as charged. While I think I got things mostly right in section 3.2 of my dissertation (“From Geometric Conditions of Possibility to Genetic Conditions of Actuality”), the promising application of Whitehead’s topological scheme to robotic vision certainly brings this aspect of his project into sharper focus for me. As a radical empiricist, Whitehead was searching for a formal account of our concrete experience of projectively related extensa. We are finite creatures with limited sensory organs and processing capacity. We do not experience the world of spatial relations in terms of infinitesimal points or the geometrical schemes built up from such points. Rather, what we encounter in our immediate experiential field are the intuitive whole-part relational structures formalized by non-metrical projective geometry.
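The signature move of Whitehead’s theory of extension, as taken up in mereotopology and qualitative spatial reasoning, is that parthood is not primitive: it is derived from a single “connection” relation between regions, with no appeal to dimensionless points. A minimal toy sketch of this idea (my own illustration, not Whitehead’s formalism or Pratt-Hartmann’s), using one-dimensional regions made of integer cells:

```python
# Toy mereotopology: regions are sets of integer "cells" on a line.
# Two regions are connected if any of their cells touch (distance <= 1).
def connected(a, b):
    return any(abs(x - y) <= 1 for x in a for y in b)

# A small universe of candidate regions: every interval [i, j] on 0..9.
REGIONS = [frozenset(range(i, j + 1)) for i in range(10) for j in range(i, 10)]

# Whitehead-style definition: a is part of b iff every region
# connected to a is also connected to b. Parthood is thus defined
# purely in terms of connection, with no primitive points.
def part_of(a, b):
    return all(connected(z, b) for z in REGIONS if connected(z, a))

a = frozenset({3, 4})
b = frozenset({2, 3, 4, 5})
c = frozenset({7, 8})
# part_of(a, b) holds; part_of(b, a) and connected(a, c) do not.
```

The point of the toy is structural: whole-part relations fall out of the connection relation alone, which is why a point-free scheme like this suits both a radical empiricist account of perceived extension and a robot that only ever registers finite, extended regions.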

Following Einstein’s articulation of the special and general theories of relativity (in 1905 and 1916, respectively), and his problematic “mono-metric” identification of a 4-D geometrical model with physical space-time*, Whitehead pursued his theory of extension with renewed urgency. Somehow, the uniformity of spatial geometry had to be preserved, else scientific measurement would become impossible. Einstein did not appear to realize that allowing the contingent warping of space by massive objects undermined the fundamental logical requirements of measurement: that space have a necessary and universal structure (or, as Auxier and Herstein put it, “we must have a standard unit of spatial comparison for conjugacy…and standard(s) of spatial projection” so as to bring this unit into comparison with whatever we are trying to measure [QE 102]). By collapsing the difference between physical space and his favored geometrical scheme, Einstein made the structure of spatial geometry contingent upon randomly arrayed masses.

“We must know the complete distribution of matter and energy in the universe prior to knowing its geometry. But we must have a comprehensive grasp of this geometry in order to discover this distribution. As Whitehead pointed out, with General Relativity as our theory of space and gravity, we are saddled with a situation where we must first know everything before we can know anything” (QE 104).  

Einstein’s “mono-metric” model has been one of the most successful in the history of science. But because of the unexpected observations of the rotational velocity of galaxies and of cosmic expansion rates, its theoretical supremacy has begun to be seriously questioned. Some astrophysicists have attempted to save the theory by inventing “dark matter” and “dark energy” to explain the missing mass that would bring observations back into agreement with Einstein’s theory. Auxier and Herstein refer to these inventions as “an especially unhappy piece of nonsense” (QE 20). I’m sympathetic, but I wouldn’t go quite that far. To my mind, these invented entities are akin to the epicycles of Ptolemaic astronomy. In other words, these exotic and invisible forms of mass/energy (which supposedly compose ~96% of the universe) are postulated ad hoc in an attempt to “save the appearances” (as ancient astronomers used to say). Ancient astronomers were tasked by Plato with explaining the seemingly erratic motion of the planets in terms of a theoretical model composed only of uniform circular motions. When new planetary observations conflicted with the model, more circles were added (epicycles) to bring the model back into alignment with appearances. One view of science is that it is just about refining existing theoretical presuppositions to fit new observations, gradually approaching a perfect identity between model and reality. In this sense, the addition of epicycles to match observations could continue indefinitely. After all, Ptolemy’s geocentric model was more accurate than Copernicus’ heliocentric model (which itself still required epicycles until Kepler and Newton updated the math). The geocentric model is still accurate enough that modern planetarium projectors (invented in the 1920s by a company in Jena, Germany) continue to utilize it, reproducing Ptolemy’s deferents and epicycles with their internal gears and motors.

[Figure: internal layout of a Zeiss planetarium projector]
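There is a precise sense in which epicycles could be added indefinitely: a deferent-plus-epicycles construction is mathematically a complex Fourier series, so stacking enough circles can approximate any sufficiently smooth closed curve. A small illustrative sketch (my own, not from Auxier and Herstein): an origin-centered ellipse is exactly the sum of just two counter-rotating circles.

```python
import cmath
import math

# An ellipse with semi-axes A and B, centered at the origin, satisfies
#   z(t) = A cos t + iB sin t = ((A+B)/2) e^{it} + ((A-B)/2) e^{-it},
# i.e. one "deferent" circle plus one counter-rotating "epicycle."
A, B = 5.0, 3.0

def ellipse(t):
    return complex(A * math.cos(t), B * math.sin(t))

def two_circles(t):
    return ((A + B) / 2) * cmath.exp(1j * t) + ((A - B) / 2) * cmath.exp(-1j * t)

# The two parameterizations agree at every sampled point.
for k in range(100):
    t = 2 * math.pi * k / 100
    assert abs(ellipse(t) - two_circles(t)) < 1e-12
```

More complicated orbits simply require more terms (more epicycles), which is why a sufficiently patient Ptolemaic astronomer could always “save the appearances”: the model’s flexibility, not its truth, guarantees the fit.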

But as Karl Popper taught us, scientific theories must be subject to empirical falsification. The eternal circular orbits of Ptolemy’s model fall out of phase with the long-term evolution of planetary orbits, while the (updated) heliocentric model accommodates this evolution well. As Thomas Kuhn, another great philosopher of science, taught us, the history of science is not just about the gradual refinement of old theories to fit new observations in an asymptotic convergence of model to reality; rather, this history is also characterized by periods of revolutionary crisis as aging paradigms are supplanted by deeper, wider, more elegant and inclusive explanatory perspectives. Einstein’s genius was to bring the reigning Newtonian theory of gravity into alignment with Maxwell’s theory of electromagnetism. A deeper theory of space was born. But in a sense, despite many other successful observational predictions, empirical falsification is exactly what happened to Einstein’s gravitational theory when it failed to accurately predict the observed rotational velocity of galaxies. However, because this darling model had made a number of other accurate predictions, and because no widely accepted alternative paradigm was on hand, astrophysicists decided to fudge the numbers by inventing new free parameters, new epicycles, to bring the theory back into alignment with observations. Appearances were thereby saved, but at the cost of conjuring into existence an entire universe (or 96% of one, at least) of cold and dark, that is, unobservable, matter/energy.

Even though he did formulate a “bimetric” alternative in 1922 (QE 109), Whitehead’s problem is not with Einstein’s model. This isn’t a “scientists have been wrong before, so why should we trust them now?” argument. Science is about modeling. In some sense, scientific models are always wrong. That’s the name of the game, after all: build a model and throw it against reality until it breaks. Then study why it broke until you find a new model that doesn’t break as quickly. Gradually, more robust, inclusive models emerge. Rather, Whitehead’s problem is with the philosophically naive “model-centrism” that leads scientists to equate their favored model with reality in a dogmatically literalistic way. We should never assume the reigning physical models of the universe offer a final account of the way things are (especially when today’s two most successful models, relativity and quantum theory, remain irreconcilable). Science is not ontology: science is a method of inquiry involving the making and breaking of toy models.

The dogmatic equation of a favored geometrical model with physical reality not only undermined the logical basis of measurement, it led Einstein to dismiss our concrete experience of an irreversible flow of time as nothing more than a “stubbornly persistent illusion.” This is Whitehead’s famous “fallacy of misplaced concreteness” writ large. Einstein’s unquestioned commitment to the classical “spectator theory of knowledge” prevented him from grasping the profoundly relational implications of his new theory of space. He upheld the old Galilean-Cartesian view of a bifurcated Nature, construing our consciousness as somehow external to a cosmos that we can only ever confusedly experience. Whitehead offers an alternative, fully relational epistemology and ontology that re-embeds experience in the cosmos: we are creative participants in a cosmogenetic relational nexus.  

Instead of rushing to eliminate experience from our understanding of a relativistic (or relational) reality, Whitehead carefully examined the hidden epistemic presuppositions and metaphysical requirements of Einstein’s more specific application of relativity to the physics of light and gravitation. The result of his examination was eventually assembled in Process and Reality as the fourth category of explanation, a truly general principle of relativity: “it belongs to the nature of a ‘being’ that it is a potential for every ‘becoming'” (PR 22). Obviously, the importance of Whitehead’s fourth category of explanation (one of twenty-seven such categories in total) can only be understood within the total gestalt of his categoreal scheme (which includes the category of the ultimate: Creativity; eight categories of existence, among which the most important are eternal objects and actual occasions; and nine categories of obligation). Whitehead’s categoreal scheme is laid out in Part I of Process and Reality as something like an opening credit roll listing the conceptual dramatis personae who, in Part II, will take the stage to exemplify their adequacy. But I’m not going to run through the whole dress rehearsal right now (for a helpful exegesis of Whitehead’s first four categories of explanation, see pgs. 108-110 of QE). Suffice it to say that Whitehead’s principle of relativity expresses the truth that everything co-exists in a web of relatedness, whether actually or potentially.

Auxier and Herstein:

“This is the principle that Einstein and his devotees have abandoned: not the mathematical expression of their physical model; that model is itself only an application of what has become the standard dogma of orthodox cosmology, with its narrowly defined approach to the interpretation of a truncated representation of experience. Rather, physical cosmology has left behind the full principle of relativity and its unqualified commitment to the incurable relatedness of the real. That abandonment comes in the truncation of experience at the root of their largely unexpressed theory of experience [i.e., the theory of the bifurcation of Nature]. For one cannot have a universal principle of relativity—applicable to all that is real—unless one takes experience in its real, relational totality. Experience—both actual and potential—is exactly the kind of reality that falls under the principle of relativity. One cannot take the metaphysical principle of relativity seriously unless one is a radical empiricist” (QE 110).

In The Quantum of Explanation, Auxier and Herstein have brilliantly succeeded in elucidating the features of a radically empirical cosmology. As Whitehead reminds us early and often in Process and Reality, the purpose of philosophy is not to explain away the existence of the concrete by reduction to the abstract, but to explain the emergence of abstraction from concretion. The proper questions are: how does concrete fact participate in general form and how are general forms exemplified in concrete facts?

For a longer discussion of Whitehead’s radical empiricism a.k.a. relational realism, see my essay “Retrieving Realism: A Whiteheadian Wager.”


*It has been brought to my attention that the matter of whether Einstein thought the physics of gravitation is reducible to the geometry of space-time is not so clear cut. See for example: “Why Einstein did not believe that general relativity geometrizes gravity” by Lehmkuhl. The research continues… 

Lectures on Timothy Morton’s “Humankind: Solidarity with Nonhuman People”

Process and Difference in the Pluriverse
(opening lecture)

My Spring course at CIIS.edu finishes up this week with a set of modules on Timothy Morton’s book Humankind: Solidarity with Nonhuman People (2017). Earlier in the semester, we read works by Plato, William James, Catherine Keller, William Connolly, Bruno Latour, Anne Pomeroy, and Donna Haraway. Below, I am sharing a series of lecture fragments about Morton’s book, as well as a panel discussion formed around the course topics.

Searching for Stars: A Conversation with Alan Lightman

Process and Difference in the Pluriverse: Plato, William James, & W.E.B. Du Bois

I’m sharing the lecture from the first module of my course this semester at CIIS.edu, PARP 6135: Process and Difference in the Pluriverse. The lecture discusses Plato’s Republic, William James’ pluralism, and W.E.B. Du Bois’ critical inheritance of James’ philosophy.

Here’s a PDF transcript of the lecture