Politics and Pluralism in the Anthropocene

Notes from a talk I gave at CIIS this past March titled “Politics and Pluralism in the Anthropocene”

Here’s the video of the whole panel:

https://youtu.be/sgoAZV4VVsc

Foucault on Hegel:

“[T]ruly to escape Hegel involves an exact appreciation of the price we have to pay to detach ourselves from him. It assumes that we are aware of the extent to which Hegel, insidiously perhaps, is close to us; it implies a knowledge, in that which permits us to think against Hegel, of that which remains Hegelian. We have to determine the extent to which our anti-Hegelianism is possibly one of his tricks directed against us, at the end of which he stands, motionless, waiting for us.” (Discourse on Language, Inaugural Lecture at the Collège de France, 1970-1971. tr. A. M. Sheridan Smith)

Begin with Hegel’s claim to have achieved absolute knowledge of Spirit, and at least to have foreseen the becoming-concrete of this Spirit in the historical, social, and ethical life of the human community.

Marx read Hegel upside down, but still read Hegel. He was a materialist, but a dialectical materialist who recognized the potential of the human spirit, and this potential’s degradation and alienation from itself at the hands of capitalism. Marx tried to shake human beings awake, out of their slumber, out of the false consciousness that commodifies labor, life, and value.

It is not easy to do better than Hegel and Marx in terms of understanding, diagnosing, and prescribing action to overcome the contradictory situation in which humans find themselves as neoliberal capitalist subjects. But the dawning realization that we live in the time of the Anthropocene is fundamentally changing our situation. We can no longer talk about the nonhuman world, about what used to be called “Nature,” as though it was something separate from us, some kind of inert background or stage upon which human history progresses. As good dialecticians, Hegel and Marx fully recognized this entanglement of the human and the physical world, but they did so in a rather anthropocentric way that still presupposed and celebrated the idea of mastering nature. The Anthropocene signals, yes, the end of history, but also the beginning of (or at least the beginning of human recognition of) what Latour refers to as geostory. From Latour’s point of view, Hegel would never have expected our current situation, where Spirit, after its millennial march of dialectical progress, suddenly finds itself at risk of being suffocated, sublated, by carbon dioxide. As Latour describes it, the ecological crisis is pushing us into a profound mutation in our relation to the world. When the world as it has been known to the Western metaphysical project ends, we are left not with no world, but with many worlds. For Latour, politics is the composition of common worlds through the negotiation of differences. Political negotiation cannot be undertaken with the presupposition that unity has somehow already been achieved. If politics fails, we are left with a war of the worlds. A pluralistic politics asks us to forgo the desire for the premature unification of the world, to accept that “the world” has ended and diplomatic negotiation is the only viable way of “worlding.” Ours is always a world-in-process, and any unity we do achieve is fragile and must be continually re-affirmed and maintained.

Latour has been deeply influenced by William James. James positioned his ontological pluralism against Hegel and Marx’s dialectical monisms. William James was appreciative of Hegel, but he was certainly a counter-Hegelian thinker. As far as Marx goes, James was too American to ever fully reject at least the individualist spirit of capitalism, even if he was suspicious of capitalism’s larger cultural impact and its relation to American imperialism. In a letter to H. G. Wells in 1906, for example, James lamented “the moral flabbiness born of the exclusive worship of the bitch-goddess SUCCESS.” James thought worship of success, by which he meant money, was “our national disease.” James championed the individual, but an individual who is sympathetic to meeting and being transformed by novel differences, whose selfhood is leaky and perforated by human and nonhuman otherness, whose identity is always in-the-making and open to question and revision.

James on excess: “Every smallest state of consciousness, concretely taken, overflows its own definition. Only concepts are self-identical; only ‘reason’ deals with closed equations; nature is but a name for excess; every point in her opens out and runs into the more; and the only question, with reference to any point we may be considering, is how far into the rest of nature we may have to go in order to get entirely beyond its overflow. In the pulse of inner life immediately present now in each of us is a little past, a little future, a little awareness of our own body, of each other’s persons, of the sublimities we are trying to talk about, of earth’s geography and the direction of history, of truth and error, of good and bad, and of who knows how much more? Feeling, however dimly and subconsciously, all these things, your pulse of inner life is continuous with them, belongs to them and they to it. You can’t identify it with either one of them rather than with the others, for if you let it develop into no matter which of those directions, what it develops into will look back on it and say, ‘That was the original germ of me.’” (A Pluralistic Universe)

James leans strongly in the direction of particular, unique, once-occurrent individuals (even if he does not see individuals as autopoietic, but as sympoietic). In contrast, some historical performances of communism have leaned in the other direction, toward some abstract conception of communal will, and when individuals stood in the way of this abstract will, as we saw in Stalin’s Soviet Union and Mao’s China, they were crushed. Our capitalist society claims to prize the individual highest, but it is corporate individuality that we really cherish. We human individuals are mere cogs in the labor machine, and Earth is a store of raw materials and a garbage heap. So either way, in either situation, capitalism or communism, human and nonhuman communities and individuals are in trouble.

Our challenge today, in the Anthropocene, is to think individuality and community concretely, to think relation, difference, and particularity concretely. Thinking normally seeks out universals, essences, and substances, and to the extent that the Western metaphysical project has sought out universals, essences, and substances that failed to align with the particular contours of the sensory, social worlds that we inhabit, it has done great violence to those worlds. As a result of the failure of our ideas and concepts to cohere with reality—that is, to sympoietically relate to the communities of actual organisms composing the living planet—these concepts have functioned to destroy them. Humans, whether we like it or not, are in community with these organisms; our worlds overlap and perforate one another; we touch interior to interior, my inside bleeding into your inside bleeding into all nonhuman insides. But our subjectivities do not just add up or sum to some seamless Globe-like Mind. Gaia, Latour is constantly reminding his readers, is not a Globe! To the extent that we are all internally related to one another, we form a network of entangled, overlapping perspectives, where each perspective is still unique and once-occurrent, novel; and yet each is also related to what has come before and will be related to what comes next. We are individuals-in-communion, communities whose wholeness subscends the individuals who compose them. Subscendence is a concept developed by Timothy Morton to refer to the way that wholes, like Gaia, are actually less than the sum of their parts. He calls this “implosive holism” and contrasts it with “explosive holism,” the sort of holism that led Stalin to murder millions of individuals for the sake of the Soviet Union, or that leads some environmentalists to emphasize saving species or even the whole planet without paying enough attention to individual organisms (a species doesn’t feel pain; only individual organisms feel pain, etc.).

So the question becomes: how do we think pluralism, difference, and diversity concretely, and not abstractly? When we think particular identities or individuals abstractly, we do violence to them; we try to universalize them in an overly abstract way without being sensitive to their unique contours. This is a form of reductionism. We can reduce individuals “up” to the whole, or reduce them “down” to their parts. Pluralism tries to find a middle path between both forms of reductionism: it seeks a “strung-along” sort of holism (as James put it), not a global or continuous holism where each thing is connected to everything else in exactly the same way. Instead, as Donna Haraway puts it, “Nothing is connected to everything” even though “everything is connected to something.”

Thinking pluralism concretely means stepping out of a sense of exclusively human society, out of the self-enclosed social bubble that used to insulate us from any access whatsoever to something called Nature, or “the environment” standing in wait “over there” for science to objectify into knowledge or for the economy to commodify into money. Thinking pluralism concretely means stepping outside of the monetary monism of contemporary capitalism, where all value is reduced to exchange value in the human marketplace, to instead become part of a democracy of fellow creatures, as Whitehead puts it, where values pervade the biosphere, and “Nature” is no longer just a realm of inert, law-abiding facts but of creative, expressive agencies. Thinking pluralism concretely means walking out of the old Copernican universe, forgetting the mastery-seeking knowledge supplied by the monotheistic gaze of Science, in order to inhabit a new cosmos composed of infinitely many perspectives, more a pluriverse than a universe.

Whitehead’s Radically Empirical Theory of General Relativity

“The doctrine of relativity affects every branch of natural science, not excluding the biological sciences. . . . Relativity, in the form of novel formulae relating time and space, first developed in connection with electromagnetism. . . . Einstein then proceeded to show its bearing on the formulae for gravitation. It so happens therefore that owing to the circumstances of its origin a very general doctrine is linked with two special applications.”
–Whitehead (The Principle of Relativity, 3).

One of the biggest surprises for me upon reading Auxier and Herstein’s book The Quantum of Explanation was learning that Whitehead’s theory of extension (or “mereotopology” as it has come to be called) has been taken up by computer scientists working in the field of robotic vision (see for example the work of Ian Pratt-Hartmann).

“It is a widely acknowledged fact in this sub-discipline that Alfred North Whitehead’s work on extension is foundational for their enterprise. Our experience has been that Whitehead scholars are simply astounded to learn of this fact. Yet we should have expected and even predicted such a connection” (QE 90).

Guilty as charged. While I think I got things mostly right in section 3.2 of my dissertation (“From Geometric Conditions of Possibility to Genetic Conditions of Actuality”), the promising application of Whitehead’s topological scheme to robotic vision certainly brings this aspect of his project into sharper focus for me. As a radical empiricist, Whitehead was searching for a formal account of our concrete experience of projectively related extensa. We are finite creatures with limited sensory organs and processing capacity. We do not experience the world of spatial relations in terms of infinitesimal points or the geometrical schemes built up from such points. Rather, what we encounter in our immediate experiential field are the intuitive whole-part relational structures formalized by non-metrical projective geometry.
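For readers curious what such a point-free, whole-part scheme looks like when operationalized, here is a minimal sketch in Python. It is not Whitehead’s own formalism or Pratt-Hartmann’s; the representation of regions as finite sets of grid cells is an assumption made purely for illustration of the mereotopological primitives (connection, parthood, overlap) that qualitative spatial reasoning systems descended from Whitehead’s notion of extensive connection tend to build on.

```python
# A toy sketch (not Whitehead's or Pratt-Hartmann's actual formalism) of the
# mereotopological primitives qualitative spatial reasoning builds on.
# Regions are modeled here as finite sets of grid cells; this discrete
# representation is an assumption made purely for illustration.

from itertools import product

def cells_adjacent(a, b):
    """Two grid cells touch if they coincide or neighbor horizontally, vertically, or diagonally."""
    (ax, ay), (bx, by) = a, b
    return abs(ax - bx) <= 1 and abs(ay - by) <= 1

def connected(x, y):
    """C(x, y): regions are connected if any of their cells touch or coincide."""
    return any(cells_adjacent(a, b) for a, b in product(x, y))

def part_of(x, y):
    """P(x, y): in mereotopology, x is part of y iff everything connected to x is connected to y.
    With the finite-set model used here, this reduces to set inclusion."""
    return x <= y

def overlaps(x, y):
    """O(x, y): regions overlap if they share a common part."""
    return bool(x & y)

def externally_connected(x, y):
    """EC(x, y): connected but sharing no common part (they merely touch)."""
    return connected(x, y) and not overlaps(x, y)

# Whole-part relations are read off directly from extended regions.
room   = {(x, y) for x in range(5) for y in range(5)}
corner = {(0, 0), (0, 1), (1, 0), (1, 1)}
hall   = {(5, y) for y in range(5)}

print(part_of(corner, room))            # True
print(externally_connected(room, hall)) # True: they touch without overlapping
```

The design choice worth noticing is that nothing in the scheme appeals to dimensionless points: every predicate is defined over extended regions and their connections, which is precisely the kind of intuitive whole-part structure Whitehead wanted to formalize.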

Following Einstein’s articulation of the special and general theories of relativity (in 1905 and 1916, respectively), and his problematic “mono-metric” identification of a 4-D geometrical model with physical space-time*, Whitehead pursued his theory of extension with renewed urgency. Somehow, the uniformity of spatial geometry had to be preserved, else scientific measurement would become impossible. Einstein did not appear to realize that allowing the contingent warping of space by massive objects undermined the fundamental logical requirements of measurement: that space have a necessary and universal structure (or, as Auxier and Herstein put it, “we must have a standard unit of spatial comparison for conjugacy…and standard(s) of spatial projection” so as to bring this unit into comparison with whatever we are trying to measure [QE 102]). By collapsing the difference between physical space and his favored geometrical scheme, Einstein made the structure of spatial geometry contingent upon randomly arrayed masses.

“We must know the complete distribution of matter and energy in the universe prior to knowing its geometry. But we must have a comprehensive grasp of this geometry in order to discover this distribution. As Whitehead pointed out, with General Relativity as our theory of space and gravity, we are saddled with a situation where we must first know everything before we can know anything” (QE 104).  

Einstein’s “mono-metric” model has been one of the most successful in the history of science. But because of the unexpected observations of the rotational velocities of galaxies and of the accelerating expansion of the cosmos, its theoretical supremacy has begun to be seriously questioned. Some astrophysicists have attempted to save the theory by inventing “dark matter” and “dark energy” to supply the missing mass and energy needed to bring observations back into agreement with Einstein’s theory. Auxier and Herstein refer to these inventions as “an especially unhappy piece of nonsense” (QE 20). I’m sympathetic, but I wouldn’t go quite that far. To my mind, these invented entities are akin to the epicycles of Ptolemaic astronomy. In other words, these exotic and invisible forms of mass/energy (which supposedly compose ~96% of the universe) are postulated ad hoc in an attempt to “save the appearances” (as ancient astronomers used to say). Ancient astronomers were tasked by Plato with explaining the seemingly erratic motion of the planets in terms of a theoretical model composed only of uniform circular motions. When new planetary observations conflicted with the model, more circles were added (epicycles) to bring the model back into alignment with appearances. One view of science is that it is just about refining existing theoretical presuppositions to fit new observations, gradually approaching a perfect identity between model and reality. In this sense, the addition of epicycles to match observations could continue indefinitely. After all, Ptolemy’s geocentric model was more accurate than Copernicus’ heliocentric model (which itself still required epicycles until Kepler and Newton updated the math). The geocentric model is still accurate enough that modern planetarium projectors (invented in the 1920s by the Zeiss company in Jena, Germany) continue to utilize it, reproducing Ptolemy’s deferents and epicycles with their internal gears and motors.

[Image: layout of a Zeiss planetarium projector.]
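To make the “just add more circles” strategy concrete, here is a minimal numerical sketch, with radii and angular speeds invented purely for illustration (they are not Ptolemy’s parameters). A geocentric position modeled as a deferent plus epicycles is mathematically nothing more than a sum of rotating vectors, and since further terms can always be appended, the model can in principle be tuned to match almost any observed periodic motion, which is exactly why the appearances could be saved indefinitely.

```python
# Minimal sketch of the deferent-plus-epicycle idea. The radii and angular
# speeds below are invented for illustration; they are not Ptolemy's values.
# Each added epicycle is just another rotating vector in the sum, which is
# why, in principle, enough epicycles can "save" any periodic appearance.

import cmath

def geocentric_position(t, circles):
    """Sum of rotating vectors: circles is a list of (radius, angular_speed, phase)."""
    return sum(r * cmath.exp(1j * (w * t + p)) for r, w, p in circles)

# Deferent alone: uniform circular motion about the Earth.
deferent_only = [(10.0, 0.50, 0.0)]

# Deferent plus one epicycle: enough to reproduce apparent retrograde loops.
with_epicycle = [(10.0, 0.50, 0.0), (3.0, 2.75, 1.2)]

for t in [0.0, 1.0, 2.0, 3.0]:
    z = geocentric_position(t, with_epicycle)
    print(f"t={t:3.1f}  longitude={cmath.phase(z):+.3f} rad  distance={abs(z):.2f}")
```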

But as Karl Popper taught us, scientific theories must be subject to empirical falsification. The eternal circular orbits of Ptolemy’s model fall out of phase with the long-term evolution of planetary orbits, while the (updated) heliocentric model accommodates this evolution well. As Thomas Kuhn, another great philosopher of science, taught us, the history of science is not just about the gradual refinement of old theories to fit new observations in an asymptotic convergence of model to reality; rather, this history is also characterized by periods of revolutionary crisis as aging paradigms are supplanted by deeper, wider, more elegant and inclusive explanatory perspectives. Einstein’s genius was to bring the reigning Newtonian theory of gravity into alignment with Maxwell’s theory of electromagnetism. A deeper theory of space was born. But in a sense, despite many other successful observational predictions, empirical falsification is exactly what happened to Einstein’s gravitational theory when it failed to accurately predict the observed rotational velocity of galaxies. However, because this darling model had made a number of other accurate predictions, and because no widely accepted alternative paradigm was on hand, astrophysicists decided to fudge the numbers by inventing new free parameters, new epicycles, to bring the theory back into alignment with observations. Appearances were thereby saved, but at the cost of conjuring into existence an entire universe (or 96% of one, at least) of cold and dark, that is, unobservable, matter/energy.

Even though Whitehead did formulate a “bimetric” alternative in 1922 (QE 109), his problem is not with Einstein’s model as such. This isn’t a “scientists have been wrong before, so why should we trust them now?” argument. Science is about modeling. In some sense, scientific models are always wrong. That’s the name of the game, after all: build a model and throw it against reality until it breaks. Then study why it broke until you find a new model that doesn’t break as quickly. Gradually, more robust, inclusive models emerge. Rather, Whitehead’s problem is with the philosophically naive “model-centrism” that leads scientists to equate their favored model with reality in a dogmatically literalistic way. We should never assume the reigning physical models of the universe offer a final account of the way things are (especially when today’s two most successful models, relativity and quantum theory, remain irreconcilable). Science is not ontology: science is a method of inquiry involving the making and breaking of toy models.

The dogmatic equation of a favored geometrical model with physical reality not only undermined the logical basis of measurement, it led Einstein to dismiss our concrete experience of an irreversible flow of time as nothing more than a “stubbornly persistent illusion.” This is Whitehead’s famous “fallacy of misplaced concreteness” writ large. Einstein’s unquestioned commitment to the classical “spectator theory of knowledge” prevented him from grasping the profoundly relational implications of his new theory of space. He upheld the old Galilean-Cartesian view of a bifurcated Nature, construing our consciousness as somehow external to a cosmos that we can only ever confusedly experience. Whitehead offers an alternative, fully relational epistemology and ontology that re-embeds experience in the cosmos: we are creative participants in a cosmogenetic relational nexus.  

Instead of rushing to eliminate experience from our understanding of a relativistic (or relational) reality, Whitehead carefully examined the hidden epistemic presuppositions and metaphysical requirements of Einstein’s more specific application of relativity to the physics of light and gravitation. The result of his examination was eventually assembled in Process and Reality as the fourth category of explanation, a truly general principle of relativity: “it belongs to the nature of a ‘being’ that it is a potential for every ‘becoming’” (PR 22). Obviously, the importance of Whitehead’s fourth category of explanation (one of twenty-seven in all) can only be understood within the total gestalt of his categoreal scheme (which includes the category of the ultimate: Creativity; eight categories of existence, among which the most important are eternal objects and actual occasions; and nine categories of obligation). Whitehead’s categoreal scheme is laid out in Part I of Process and Reality as something like an opening credit roll listing the conceptual dramatis personae who, in Part II, will take the stage to exemplify their adequacy. But I’m not going to run through the whole dress rehearsal right now (for a helpful exegesis of Whitehead’s first four categories of explanation, see pgs. 108-110 of QE). Suffice it to say that Whitehead’s principle of relativity expresses the truth that everything co-exists in a web of relatedness, whether actually or potentially.

Auxier and Herstein:

“This is the principle that Einstein and his devotees have abandoned: not the mathematical expression of their physical model; that model is itself only an application of what has become the standard dogma of orthodox cosmology, with its narrowly defined approach to the interpretation of a truncated representation of experience. Rather, physical cosmology has left behind the full principle of relativity and its unqualified commitment to the incurable relatedness of the real. That abandonment comes in the truncation of experience at the root of their largely unexpressed theory of experience [i.e., the theory of the bifurcation of Nature]. For one cannot have a universal principle of relativity—applicable to all that is real—unless one takes experience in its real, relational totality. Experience—both actual and potential—is exactly the kind of reality that falls under the principle of relativity. One cannot take the metaphysical principle of relativity seriously unless one is a radical empiricist” (QE 110).

In The Quantum of Explanation, Auxier and Herstein have brilliantly succeeded in elucidating the features of a radically empirical cosmology. As Whitehead reminds us early and often in Process and Reality, the purpose of philosophy is not to explain away the existence of the concrete by reduction to the abstract, but to explain the emergence of abstraction from concretion. The proper questions are: how does concrete fact participate in general form and how are general forms exemplified in concrete facts?

For a longer discussion of Whitehead’s radical empiricism a.k.a. relational realism, see my essay “Retrieving Realism: A Whiteheadian Wager.”


*It has been brought to my attention that the matter of whether Einstein thought the physics of gravitation is reducible to the geometry of space-time is not so clear cut. See for example: “Why Einstein did not believe that general relativity geometrizes gravity” by Lehmkuhl. The research continues… 

Spring online course at CIIS.edu – Whitehead’s Adventure in Cosmology

Auditors are welcome, though space is limited. Email me at msegall@ciis.edu for more information.

One of our core texts in this course will be my Physics of the World-Soul (a new third edition is soon to be published). The course number is PARP 6133 01.

Back to teach at Schumacher College in April 2019: “The Evolution of Consciousness and the Cosmological Imagination”

I’ll be teaching another short course at Schumacher College in the UK the week of April 22nd-26th, 2019.

Here’s a link if you’re interested in registering:

https://www.schumachercollege.org.uk/courses/short-courses/re-enchanting-the-cosmos

Here’s what I’ll be teaching on:

“The Evolution of Consciousness and the Cosmological Imagination”

This week-long course will trace the evolution of consciousness in the West from ancient Greece through to the present. The goal is twofold: to understand the historical process whereby humanity severed itself from a meaningful universe and to re-ignite the cosmological imagination allowing us to reconnect to the soul of the world. The course begins by exploring Plato’s cosmology and theory of participation and moves on to consider the Scientific Revolution and the Romantic reaction to it. It concludes with a study of several contemporary efforts to re-enchant the cosmos by grounding human consciousness back in the more-than-human creative process responsible for generating it. In addition to Plato, the course draws upon the archetypal astronomy of Johannes Kepler, the Naturphilosophie of Goethe and Schelling, the nature poetry of Coleridge and Wordsworth, the esoteric philosophy of Rudolf Steiner and Owen Barfield, the process philosophy of Alfred North Whitehead, and the contemporary participatory theory of Jorge Ferrer.

 

*featured image above by Jakob Boehme

Pluto and the Underworld of Scientific Knowledge Production

A Slovak visual artist, András Cséfalvay, recently invited me to submit a video for inclusion in his upcoming exhibition in Prague focused on the cultural significance of Pluto (my video is embedded below). Back in 2006, Pluto was demoted from planetary status by the International Astronomical Union. Following the flyby of NASA’s New Horizons spacecraft in 2015, the scientific and popular controversy over Pluto’s classification was reignited, in part because Pluto proved to be more lively (i.e., geologically active) than astronomers had assumed.

Shortly after I accepted Cséfalvay’s invitation, a group of planetary scientists led by Philip Metzger (a physicist at my alma mater, the University of Central Florida) published a paper that wades right into the center of the conflict. According to Metzger, “The IAU definition would say that the fundamental object of planetary science, the planet, is supposed to be defined on the basis of a concept that nobody [no planetary scientist] uses in their research.”

Pluto finds itself caught in the middle of a clash of paradigms: many (not all*) astronomers stand on one side arguing that the defining characteristic of a planet is that it clears its own orbit of other objects (Pluto does not), while on the other side planetologists like Metzger classify planets based on their spherical shape.

Metzger explains: “It turns out [sphericality] is an important milestone in the evolution of a planetary body, because apparently when it happens, it initiates active geology in the body.”

Metzger goes on to say that the IAU definition is too sloppy: taken literally, it would leave no planets at all in our solar system, since none of the bodies orbiting our Sun fully clears its own orbit.
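To put the clash of classification rules in the starkest possible terms, here is a toy sketch in which the same body comes out a planet under one rule and a non-planet under the other. The rule functions and the Pluto record are drastically simplified placeholders of my own, not the IAU’s or Metzger’s actual quantitative criteria.

```python
# A toy illustration of the clash of classification rules described above.
# Both rule functions and the Pluto record are drastically simplified
# placeholders, not the IAU's or Metzger's actual quantitative criteria.

from dataclasses import dataclass

@dataclass
class Body:
    name: str
    orbits_star: bool
    is_spherical: bool       # massive enough to have pulled itself into hydrostatic equilibrium
    clears_its_orbit: bool   # does it dominate its orbital zone, or share it with comparable bodies?

def is_planet_iau_style(b: Body) -> bool:
    """Rough stand-in for the 2006 IAU definition: orbit-clearing is decisive."""
    return b.orbits_star and b.is_spherical and b.clears_its_orbit

def is_planet_geophysical_style(b: Body) -> bool:
    """Rough stand-in for the geophysical definition: spherical shape is decisive."""
    return b.is_spherical

pluto = Body("Pluto", orbits_star=True, is_spherical=True, clears_its_orbit=False)

print(is_planet_iau_style(pluto))          # False: a mere "dwarf planet"
print(is_planet_geophysical_style(pluto))  # True: a geologically active planet
```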

So what is Pluto? Scientifically speaking, I think the planetary scientists have come up with a better classificatory scheme. As a process thinker, I agree with them that the best way to understand the essence of a planet is in terms of its evolutionary history. But my interest in this debate is more philosophical. I think about this controversy in the context of an interplay between the ontologies of multiple paradigms. For astronomers, Pluto is a mere “dwarf planet”; for planetologists, Pluto is a geologically active planet; and for astrologers, Pluto is Hades, Lord of the Underworld, the archetypal power of death and rebirth.

Giovanni da Modena’s 1409 fresco “The Inferno” depicting Dante’s vision of Hell.

Having been influenced by the work of Bruno Latour (in this case, see especially An Inquiry into Modes of Existence), I see the philosopher’s role as akin to that of a diplomat. I ask: is it possible to translate between a plurality of paradigms and to avoid the need to collapse our view of Pluto into Newton’s single vision? Can Pluto be a telescopically-enhanced point of light in the sky, a geologically active planetary body, and King of Hell all at once?

I also think about this debate as it relates to the transcendental conditions of knowledge. For Kant, a table of twelve categories and our fixed intuitions of space and time delimit what we can know. The mind structures a priori everything we are capable of knowing about Nature. In 2006, the International Astronomical Union acted as a sort of institutionalized enforcer of transcendental limits, establishing the classificatory rules that the rest of the community of knowledge-producing scientists is supposed to obey. Archetypal astrologers transmute the transcendental approach even more radically, replacing Kant’s twelve categories with the ten planetary archetypes (the Sun and Moon are included along with Mercury through Pluto). These cosmically incarnate archetypal powers condition each individual knower, stamping each of us with a unique planetary signature at the moment of our emergence from the womb. The participatory epistemology underlying the archetypal cosmological paradigm implies new conditions of experiential access to reality. Our knowing is mediated not just by mental categories, but by archetypal powers inhabiting Nature as much as mind.

Metzger et al.’s recent scientific paper is titled “The Reclassification of Asteroids from Planets to Non-Planets.” Here’s the abstract:

It is often claimed that asteroids’ sharing of orbits is the reason they were re-classified from planets to non-planets. A critical review of the literature from the 19th Century to the present shows this is factually incorrect. The literature shows the term asteroid was broadly recognized as a subset of planet for 150 years. On-going discovery of asteroids resulted in a de facto stretching of the concept of planet to include the ever-smaller bodies. Scientists found utility in this taxonomic identification as it provided categories needed to argue for the leading hypothesis of planet formation, Laplace’s nebular hypothesis. In the 1950s, developments in planet formation theory found it no longer useful to maintain taxonomic identification between asteroids and planets, Ceres being the primary exception. At approximately the same time, there was a flood of publications on the geophysical nature of asteroids showing them to be geophysically different than the large planets. This is when the terminology in asteroid publications calling them planets abruptly plunged from a high level of usage where it had hovered during the period 1801 – 1957 to a low level that held constant thereafter. This marks the point where the community effectively formed consensus that asteroids should be taxonomically distinct from planets. The evidence demonstrates this consensus formed on the basis of geophysical differences between asteroids and planets, not the sharing of orbits. We suggest attempts to build consensus around planetary taxonomy not rely on the non-scientific process of voting, but rather through precedent set in scientific literature and discourse, by which perspectives evolve with additional observations and information, just as they did in the case of asteroids.

It struck me that this line of inquiry may have profound implications for the future of astrological theory and practice, specifically the way we understand the difference between the ten planetary archetypes and the indefinite number of asteroidal archetypes. Does the unique geophysical history underlying planet formation correlate with a uniquely potent and living archetypal signature (that of a planetary god or goddess), such that asteroids and dwarf planets (i.e., bodies denied full planetary status) must be treated more as underdeveloped demigods or shattered spirits? My limited exposure to astrologers who foreground asteroids suggests they would bristle at the idea that asteroids are less archetypally significant than planets.

Or, if Pluto is a dwarf planet or an asteroid, perhaps that says something profound about the evolutionary power of these chaotically orbiting fragments of rock and ice. They are reminders of the violent history of our solar system, of the fact that tremendous destruction (i.e., an entire eon composed of nothing but mega-collisions between orbiting bodies, appropriately referred to by geologists as the Hadean) prepares the way for the miraculous emergence of more or less orderly living worlds.

In any event, this whole dispute between astronomers and planetary scientists about the status of Pluto has me wondering what experts in a third and for too long marginalized paradigm, astrology, can contribute to the conversation.

Here’s the video I submitted to Cséfalvay for his Prague exhibition:

_____________________________

*For example, Harvard astronomer Owen Gingerich, who chaired the IAU’s Planet Definition Committee in 2006, disagreed with the orbit-clearing criterion that the General Assembly ultimately adopted when it voted to demote Pluto.

Participatory Spirituality in an Evolving Cosmos

Here’s my talk from the INTERSECT: Science & Spirituality conference in Telluride, CO, earlier this summer. It’s titled “Participatory Spirituality in an Evolving Cosmos.”