“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.”
–Alfred North Whitehead

“Philosophy in the Age of Technoscience: Why We Need the Humanities to Navigate AI and Consciousness”

I ended up giving a brief (10 minute) impromptu talk at Edge Esmeralda today, and this is a transcript of what came out: 

Hey, everyone, can you hear me okay if I speak at this level? Great. So, yeah, I’m Matt Segall and I am a philosopher. I teach in this wonderful graduate program in San Francisco at the California Institute of Integral Studies in the Philosophy, Cosmology and Consciousness Program. And what I wanted to share in the couple of minutes that I have with you is what it’s like to be a philosopher in an age of technoscience, an age of the ever-increasing domination of academia by STEM. And I want to try to pitch the value of the humanities, but not the humanities that are isolated from what’s happening in terms of technological and scientific advance, but actually, I would hope that we can overcome that divide.

I’m curious, so, how many of you have a degree or work primarily in some kind of STEM field or computer science? Does anyone here have a degree in literature or philosophy? A few, that’s great. We need to work together, and I think the way to do that is to recognize that there are many things that bring value and meaning to human life that can’t be automated, or that might be automatable, but maybe we don’t or shouldn’t want to automate them. And there are certain value decisions and judgments that we make as human beings that I don’t think the most sophisticated AI is actually capable of making. AI can augment and enhance our capacity to make judgments as human beings by using these tools. But I don’t think we should be rushing to offload the capacity to make judgments—whether ethical or aesthetic, political or legal—to our machines.

So I am kind of half in the STEM domain. I got a degree in Cognitive Science as an undergraduate, but I was lucky to study with some philosophers and phenomenologists who really brought home to me the importance of approaching the study of consciousness and cognition and intelligence from an experiential point of view. Brains are embodied, bodies are embedded in worlds, and consciousness is not just something that’s inside of our heads.

And as I continued to explore the philosophical options, I increasingly became convinced by a perspective called panpsychism, or panexperientialism. Are you all familiar with what that means? The basic idea there, for those of you who aren’t, is that what it is to exist as a physical process—whether you’re talking about electromagnetic radiation, or, you know, a bacterium propelling itself along a glucose gradient, or a multicellular mammalian human like us—is to be expressive of and to be capable of feeling something. And so even, you know, the sun and the photons that are radiating from the sun are enjoying their existence as themselves, for themselves. And when we bask under the sun, we’re getting a little sense of what it’s like to be the sun, right? And so experience is pervasive.

And when we try to understand the possibility of machine consciousness or artificial intelligence, having already accepted a kind of panpsychist view, I think it opens up a lot of interesting questions. From my point of view, what it is to be an electron is difficult to imagine, but I feel like, yeah, there is something it feels like to be an electron. It might be such a simple experience, but it’s probably nothing at all like what it’s like to be an organism, a highly evolved multicellular animal, like you and me, with billions of years of evolution on this planet under our belts.

And so, yes, I think it’s possible that as the architectures of microprocessors and different computing technologies develop, we will have something like conscious machines at some point. But we will have a very hard time understanding and relating to that consciousness because it will be so different from our own biological form of it. And I was reading some of Mike Johnson’s work here about how we may have religious wars in the future about how to treat these conscious machines, right? And that may be sooner than we expect, actually. We might have new religions forming around particular AI systems, with groups of people who cathect with them beginning to worship them. We’ll have different AI religions forming around different AI systems.

And so part of my pitch for the relevance of the humanities is to say, hey, we’re not done with religion here. We’re not even done with occultism and demonology. These more esoteric understandings of disembodied spirits and ritual performance are things that we might initially have suspected that technology and science would allow us to “grow out of,” but when I read about what computer engineers are dealing with when they try to understand the sorts of intelligences that they’re unleashing in these new systems, I see them turning to demonology to do that.

And so I would say we still have a lot to learn from religion, spirituality, and particularly these esoteric forms of spirituality that come out of the very—I would call it a very broad and deep—Neoplatonic tradition that runs through not only the West, but the Islamic world as well. And as a philosopher who is not a mathematician, is not a physicist, I try to make friends with mathematicians and technologists and biologists and physicists to learn from them about how to put all of these pieces together. We might dismiss ancient religions as overly anthropocentric or indeed anthropomorphic. But from my point of view, we need to recognize that before we rush to transcend the human, we have to understand what we are, and all of our sciences are themselves inevitably anthropocentric.

We have a human perspective on the universe, and so while we can imagine that we can step outside of what we’re studying as if to observe it from a distance and gain control and the capacity to predict what’s happening, I am not sure that is finally a realistic posture to take. I would like to see an approach to science that would be more participatory, where we recognize that we’re not outside the system we’re studying. Anything we do to that system, we’re doing to ourselves. Second-order cybernetics already, in some sense, invited us to this meta-perspective: whenever we’re studying a system, we’re studying our own interaction with that system. But I think we can go even further than that by bringing this experiential and qualitative dimension into the picture so that we recognize that what it is to be a human being is to be profoundly malleable and plastic and protean. We are the species that creates itself. And that’s a huge responsibility, because there’s no fixed essence that we can fall back on if we make a mistake. We’re on an evolutionary trajectory. Every decision that we make, every technology we enter into a relationship with, changes us forever, and we can’t undo that.

So the stakes are very high. And so, again, rather than being in a rush to transcend the human, rather than being in a rush to eliminate death, I think we should slow down, look at these spiritual and humanistic traditions that have been with us for thousands and thousands of years. Buddhism and Neoplatonism are only two of them, but all the world religions have something to teach us. I think, you know, Buddhism is wonderful, but there’s—well, I have about two minutes to say this, so I’ll say: There’s something called Buddhist modernism, which is this idea that Buddhism is actually more compatible with modern science than any of the other religions. But that’s a kind of—actually, that’s a construct, that’s a product of the colonial encounter.

Actually, it was a lot of the Buddhist lineage holders and practitioners coming out of Asia, when they first met Western anthropologists and Protestant missionaries, who repackaged Buddhism to make it sound more scientific so that Westerners would take it up in the modern Western world. And then in the ’60s here in California, that process continued. And so we have this idea that Buddhism, just as a kind of mindfulness practice or “mind science,” is neutral as regards any metaphysics, neutral to any cosmological or ritual context, when if you go and look at the history of Buddhism, it’s full of spirits and demons and reincarnation, and all of this rich metaphysical—I don’t want to say “baggage,” but from a scientific materialist point of view, you would say baggage. And so I would want to question Buddhist exceptionalism in terms of which of the religious traditions secular scientific people feel it is safe to draw upon. We should be wary of a kind of subtle colonial extraction of only those aspects of this rich tradition that we think are compatible with the materialistic metaphysics of brain-bound consciousness research.

I have a minute left. I’ll stop there and see if there are any questions or if you want to throw things at me. Either sounds great.

[Audience member]: Tell us about your religious practice.

Uh, I can’t do that in 45 seconds. Maybe we’ll talk later. Yeah. Great question for later.

[Audience member]: What kind of people do you like to connect with?

Anyone interested in the relevance of philosophy to the study of consciousness and AI, and anyone who’s open-minded and maybe has something that they feel like I’m missing, that they could teach me. I would love to learn something.

[Audience member]: Yeah. So, obviously there’s a bunch of religions currently—Buddhism and Christianity or whatever—but the way you’re talking right now sounds like there’s multiple interpretations of each of them as well. And so, in essence, with AI on top of that, there’s gonna be so many religions, right? Is that true?

I think so. Yeah, yeah. I think we’re in an age of the proliferation of religion rather than the growing out of it as we might have thought. Yeah.

[Audience member]: Awesome. Thank you.



One response to “Philosophy in the Age of Technoscience: Why We Need the Humanities to Navigate AI and Consciousness”

Grant Castillou:

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
