Summary of my dialogue with Robert Prentner:
I apologize for the sound quality, but there is a full transcript below!

Robert began by explaining his shift from skepticism to engagement with AI. Early versions of ChatGPT struck him as underwhelming, but newer models like GPT-4 and Claude impressed him with their linguistic and problem-solving fluency. This sparked his deeper interest—not in whether AIs are conscious in a human sense, but in how they might open new relational or epistemic access to what he sees as a fundamental field of consciousness.
We agreed: experience is not an emergent computational property. It is more basic than space, time, or matter. Both of us rejected physicalism as not only philosophically inadequate but existentially hollow, unable to account for value, freedom, or agency. Robert located himself somewhere between idealism and panpsychism, drawing inspiration from Leibniz and Whitehead while resisting labels and reductionism of any kind.
We explored Whitehead’s process philosophy and the challenge of thinking process without clock-time. Space and time, for Whitehead, are abstractions from the more fundamental flow of experiential becoming. Donald Hoffman’s “interface theory” came up as a provocative analogy—though we noted Whitehead and Bergson both resist any view that treats perception as mere hallucination. For them, intuition provides a real mode of contact with the world’s interior dynamism.
Turning back to AI, Robert compared large language models to Platonic mediators—potential tools for helping human beings realize ideals in the world. But he warned of a deeper danger: not superintelligent machines, but the erosion of human capacities as we outsource more and more to these systems. The ethical threat is civilizational amnesia.
We also discussed the covert Platonism in many computational theories of mind: a hidden metaphysics of information that fails to account for meaning, context, or agency. Robert proposed a corrective: it from bit from chit—consciousness as the deeper ground of both form and fact.
We examined the possibility of machine consciousness carefully. While the precarious self-maintenance of living organisms may ground our kind of sentience, Robert distinguished this from consciousness as such, which he sees as universal, not necessarily perspectival in the human or biological sense.
Whitehead’s theological vision—God as the primordial orderer of possibilities and the growing companion of the world—resonated with Robert’s invocation of chit in the Vedantic sense. We both affirmed a view in which consciousness, value, and becoming are metaphysically entwined, and where AI must be approached not just as a technical development, but as a philosophical and spiritual threshold.
Full transcript:
Matt: All right. Okay.
Robert: Is the AI noise canceling working?
Matt: It’s working [turns out the AI noise cancelation wasn’t working so well!]. Yeah. Yeah. For better or worse, for better and for worse, I would suspect. So, hi, Robert. Thanks for joining me.
Robert: It’s great to be here. Glad to be in San Francisco. It’s a beautiful day.
Matt: Yeah. We’re at Yerba Buena Park in the middle of San Francisco, and you are here to attend a conference on consciousness and AI, is that right?
Robert: Yes, it’s actually not a conference. It’s a meeting or symposium.
Matt: And you study consciousness primarily from a philosophical and also mathematical perspective?
Robert: Right, so over the last five or seven years I’ve mainly studied consciousness, and as you said, I have this very strong connection to mathematics, so I study it from a more mathematical perspective. I would say over the last year or so, I got particularly interested in this question of AI. The reason for that is that I, you know, changed my mind on this whole topic. Meaning that if you had asked me three years ago or so, I would have said it’s not very interesting. Now I think it’s very interesting.
Matt: What changed?
Robert: Yes. The thing that changed for many people: fall 2022, ChatGPT. I initially played a bit with it, like a lot of people, and I found it very limited back then. There were these people having conversations with it and attributing, I don’t know, consciousness or something, whatever, to it, and I found those claims kind of dubious. And a bit silly. Now, I put those things aside for many, many months, because I had other things to do, I was getting into a new job, so I didn’t have much time. But about one year ago I picked it up again. GPT-4 was there, Claude, and so on. And those tools are really good, I think. A lot of use cases. They really improved. I remember when ChatGPT was released, I gave it some coding exercises because I needed to write some code. And it produced stuff, but the stuff didn’t really work. That was 3.5, and GPT-4 really improved. I did the same, and it worked quite well. So I was very impressed with that, and I found that many things that we want these systems to do, they are really good at. I mean, there are many things which they can’t do very well, but there are also things which they are very good at, much better than I am. So I really started asking what kind of cognitive capacities could be attributed to them. Now, as I said, I’m particularly interested in consciousness, and one of the things I want to do is study this question of AI consciousness, but reframe it, because I find the way most people think about it… And I mean, okay, we’re in San Francisco now, so there’s probably a majority, or at least a large chunk of the population, who don’t find it immediately silly if you talk about AI consciousness.
Matt: They find it silly if you deny it.
Robert: Yes. And it’s actually very similar to the place where I’m teaching in Shanghai. I mean, I’m teaching at the tech school. Most are computer science students. So they have this kind of affinity for this question. Now, I still find that most people who do that, who take it seriously, have somewhat strange conceptual views. In particular, I don’t think that consciousness is something which is internally generated by the brain or which is emergent from some computation, but is something that… It relates to the stuff that is outside of us, and it’s really the question, how can we access it? And here, I think AI might be very interesting. That’s the direction which I’m exploring.
Matt: Okay. So let’s back up a second and just talk about consciousness and how you approach it and then we can talk about whether or not, and if so how machines or computers of some kind might realize it or tap into it, access it, or produce it depending on how we think about what consciousness is. It sounds like you think consciousness is something rather fundamental, perhaps it’s…
Robert: Yes, definitely. I would say it’s fundamental. As basic as the physical world.
Matt: And more basic?
Robert: More basic, much more basic.
Matt: And so would you say you are more or less an idealist, if you had to put a label on it?
Robert: Well, I’m actually not so super happy with putting labels on it. I know that’s very helpful for people. But, I don’t know who said it: when you label me, you negate me. So I don’t like that. When you say idealism, there are so many different kinds of idealist philosophies out there. Some I really like; for example, I really like Leibniz’s philosophy. But with some philosophy that is typically considered to be idealist I have problems. Berkeley, for instance. I’m not such a big fan.
Matt: So if we were just to say, though, as a point of departure, I like to say, again, this is a point of departure, some orienting generalizations. There seem to be four basic positions available in the study of the metaphysical status of consciousness, right? There’s materialism or physicalism, where at best consciousness is a word that we use, a cultural construct, a social construct, but ultimately, in terms of what is causally efficacious in the world and can be scientifically studied and grounded, we may as well say it doesn’t really exist. And so the strategy would be to eliminate it in some way. So there’s physicalism. Then there’s dualism, where you might say consciousness is some kind of epiphenomenon, like the whistle on the train. You might even, as a dualist, say that it emerges and then has downward causal effects on the physical basis and substrate out of which it emerges. There are still some dualists around; David Chalmers, for example, likes to defend that. Then there’s idealism, where you basically would say that what we perceive as the external physical world is somehow a facet of this larger mind, or fundamental consciousness. And so in a sense, it’s reducing physics to mind, instead of mind to physics as in materialism. And then the other position, I would say, is panpsychism, where there are attempts to recognize that it’s not that matter or mind, one or the other, is more fundamental, but that there’s a kind of intimate relationship between the two, and so it’s not dualism. It’s more of a dual-aspect monism, you could say; that’s one approach to it, and Whitehead’s process philosophy is another approach to it. So obviously, of these four, there are many varieties of each. If you’ll grant me that as an orienting generalization, do you find you’re more drawn to one of the four positions or…?
Robert: So I would say I find myself drawn to somewhere between three and four, between idealism and panpsychism. Now, as you know very well, within panpsychism there are so many varieties: Russellian-oriented panpsychism, but also this more Whiteheadian panpsychism, for example, which is completely different in many ways. So I’m always hesitating, because when people say panpsychism, they often have a very particular philosophy in mind, a very special thing. And sometimes they don’t.
Matt: Why not physicalism and dualism?
Robert: I mean… Oh, there are so many, so many things. People always ask me, why not physicalism? And then my standard reply is, why physicalism? I mean, I don’t think it’s a good philosophy. I just don’t see the appeal of it. I mean, I think there are so many things in our lives which are interesting, which matter to us, which are very hard to reconcile with physicalism, at least not at first sight. Maybe there are ways to do it, right? You argue and argue and add these epicycles to try to accommodate it. You can always do that. But intuitively, I find it very implausible. So if you ask me, why not physicalism, my immediate response would be: why physicalism in the first place? Why not? I mean, of course, there is an academic answer to this question. Especially with respect to consciousness, there seems to be this impossibility of getting consciousness out of it. If you start with something which is not conscious, whether in some proto-form or what they call a full-fledged version or whatever, then there’s this question of how to get consciousness out. And you made a very interesting point, I think. You said that with materialism, or physicalism, there are also different varieties, but the most consequential one, I would say the most transparent one, is the position which denies consciousness. It’s the only consistent materialism.
Matt: Right, eliminative materialism.
Robert: And I don’t have much time for that. Yeah. Right. I think that’s the most consistent version of materialism. But I think it’s totally implausible.
Matt: I would say it’s logically consistent but experientially inadequate, because, from an idealist or panpsychist position, and I agree, I’m kind of somewhere in between those too, matter is something which we only ever encounter in experience. Actually, matter as something that might exist independent of experience is one of the most abstract ideas that we’ve ever conjured in, you know, the history of human beings, as far as I can tell. And so I totally agree with this intuitive incomprehensibility. You know, panpsychism is often dismissed with an incredulous stare. I do that when I hear about eliminative materialism, because it seems absurd on its face, and yet many very smart people argue for it. So it would be interesting, just as a sociological question, or maybe a psychological question, what’s going on there. Not to diagnose people with mental health issues, but I do think materialism has mental health consequences if you fully accept that worldview. You can’t believe in freedom. You can’t believe that human values have real consequence in the grand scheme of things. They’re just social constructs that we make up. And there’s a certain position, you know, like the physicist Sean Carroll, who defends physicalism, but among physicists he’s more conversant with philosophy, and he has engaged with Philip Goff in dialogue about panpsychism. He obviously disagrees. But you know, Sean Carroll has this position that I would call almost a heroic kind of existential stance. Like, science tells us that the universe is meaningless.
Robert: He calls it poetic naturalism.
Matt: Right. We have to be courageous enough to just accept that. We have to make up the meaning, you know? And he’s at least acknowledging that there’s a problem there, you know. But to me, that’s not satisfying. And so, is your intuition about physicalism being inadequate or even absurd?
Robert: Yes. So for example, you mentioned this problem with value. Yeah, and that links back to what I said initially, that I don’t find it very plausible. I don’t see a straight connection between physicalism and value, and values take up such a central part of our life. I don’t see the appeal of physicalism. Of course, academically, there’s maybe a story to be told there. But just intuitively… Now, another thing, and I think here we are actually on the same page: with idealism, a lot of people take its purpose to be somehow reducing the physical to the mental, and I’m not happy with reducing, so I’m not a reductionist. I’m happy to be called something like a non-reductive idealist, but I don’t know whether that exists. So that’s one of the things which I sometimes see, that people who endorse something like an idealist position have a very reductionist spirit. Maybe not reductionist in the physicalist sense, obviously, but still, I would say, on a more abstract level.
Matt: There’s a philosopher named Graham Harman, who has this helpful terminological distinction between two kinds of reductionism. There’s the kind of reductionism that physicalism tends to propose, which he calls undermining, where you reduce emergent wholes to their component parts. And then the other form of reduction is the move, more the idealist strategy, he calls it overmining, when you reduce parts or individuals to some larger whole. But in either case, you’re losing this quite obvious feature of reality, which is this capacity to self-differentiate and enter into relationship. And so, you know, I’ve mapped out these four possibilities, but I think there are even more subtle metaphysical distinctions, which could be made, having to do with, say, a substance ontology versus…
Robert: Yes, a process or a processual ontology.
Matt: Yeah. And a lot of the time the distinctions between the various species of panpsychism or idealism have a lot to do with where you come down on that question: substance versus process. So it sounds to me, from a little bit of research before this conversation, that you’ve been influenced by Alfred North Whitehead?
Robert: Yes, I think so.
Matt: And so you find that process approach compelling?
Robert: Yes, so I think I heard Peter Simons say that Whitehead was the best metaphysician of the last century, and I would tend to agree. I found him totally fascinating. I still have difficulties understanding him, though all Whitehead scholars tell me the same.
Matt: Well, because you have to be so interdisciplinary to read him. Yes. I’m very limited in my knowledge of mathematics, and I know so much of Whitehead’s work is grounded in a mathematical style, though he’s careful to say mathematics has misled philosophy, because, you know, philosophy for Whitehead is not about deduction. It’s the search for premises, he would say. He’s a radical empiricist, but he does develop a set of categories that he wants to analogize to a kind of matrix that could be applied to a whole variety of different experiences to illustrate analogical connections between different domains of experiential reality. But as a radical empiricist in William James’ sense, Whitehead would say we should give up this search for something behind experience that might explain or be the cause of experience. There’s no outside of experience, which is why he’s often called a panpsychist, though pan-experientialist is more precise. But yeah, his categorical scheme draws on all the special sciences and advanced mathematics, and he kind of invents what’s called mereotopology in Part IV of Process and Reality, very difficult sections for me to understand. And it’s kind of out of date, so even mathematicians, people working on topology, can see the trajectory from it to how that field has developed today, but it’s not really alive. What he did in that chapter is not really relevant anymore.
Robert: He also made some mistakes in that chapter.
Matt: Yes, yes. He was notorious for not editing his own books before…
Robert: I think he also just wanted to get stuff done. I think he just didn’t want to waste time with these things. So that was my impression, like, especially in his late phase, where he wants to get stuff out.
Matt: It’s kind of frightening to imagine that Process and Reality originated as lectures he delivered at the University of Edinburgh, the Gifford Lectures. I can’t imagine sitting in the audience trying to make sense of that.
Robert: So I’ve heard that there were two people in the audience after the first lecture, so, yeah.
Matt: That’s a story I’ve heard, and I’ve heard that it maybe wasn’t true.
Robert: It wasn’t true?
Matt: Victor Lowe, his biographer, may have said that, but I’ve heard from other people that there was actually a lot of interest. I really… I mean the last part of Process and Reality is much more about the theological implications. But what is it that drew you to Whitehead?
Robert: Well, many things, actually. On a biographical level, I have a scientific background myself, so I always appreciated that with Whitehead. As you said, I started out doing mathematics, applied mathematics, and he had a phase where he did philosophy of science, but in a very idiosyncratic way. It seemed he went through the whole history and kind of read it along his own ideas. It’s really interesting. And then later on there’s a more speculative phase where it’s metaphysics, but as you said, he always came up with these quite difficult mathematical, or mathematics-inspired, schemes, categorical schemes and topology as well. So that’s something I can relate to personally, which I think is very interesting. Now, I think he just got many things right. Already in The Concept of Nature and in Science and the Modern World he analyzed what, in his opinion, went wrong, and I found it really, really good. Also this idea of what philosophy should be. I always get a very different sense of what philosophy should be when I compare Whitehead to other important philosophers. Whitehead said that philosophy should be something like the critique of abstractions. And I always found that very, very good, this idea. So it’s not so much saying, “Oh, this is the case and the world works like this,” but coming up with a way to understand the world, make it amenable to exchange, to see it through a certain lens.
Matt: Yeah, there’s some point in Process and Reality where he says that philosophy is to act as a restraint on specialists because of their tendency to absolutize the abstractions with which they want to explain concrete experience. He called this the fallacy of misplaced concreteness. But philosophy is also an attempt to weld common sense and imagination. So we never want to do more violence to common sense intuitions than we have to, in order to come up with a coherent metaphysical account of reality. In his view, we need the speculative imagination, but we should be anchored in the hardcore common sense presuppositions of civilized life. And so things like human agency: if your explanation of consciousness has no room for agency, then you’re kind of pulling the rug out from under the whole scientific enterprise. You know, I think it’s in Science and the Modern World where he says that scientists animated by the purpose of proving their purposelessness constitute an interesting subject of study. So he’s always looking for these performative contradictions and trying to reestablish science on a different kind of methodological basis, so that it wouldn’t seek to explain away experience but explain in light of experience. You know, as a philosopher, I’m often still trying to figure out the right relationship between philosophy and science, because it seems like science is inevitably going to be in the business of making models and testing models, and philosophy as the critic of abstractions is just there to remind the specialists devising and testing models that the model is an abstract representation of something, and not the something. Right? And so this vigilance to avoid committing the fallacy of misplaced concreteness, which I think is so prevalent in the natural sciences these days, because physicalism remains dominant…
Robert: I think it’s prevalent in human thinking more generally. I mean, I always struggle with that myself: didn’t I just do this bad thing which I say other people are doing? So for example, when you are talking about mind as such, aren’t you doing something very similar? I think there’s this common struggle. One has to be aware of the fact that one is prone to do it.
Matt: And so, Whitehead was writing in a time when the word “computer” meant a person who worked in an astronomical lab to help calculate, you know, transits and whatnot. And nowadays, obviously, after the Macy conferences on cybernetics and the development of microprocessors, we’re in a totally different world. You know, if you look at history, in trying to understand what cognition is, what mind is, what consciousness is, the latest technologies of the day were always the analogies people would reach for. So in Descartes’ day, forms of automata, automated mechanisms or clocks, say, were the go-to metaphor; in the 19th century it was the steam engine; in the late 19th and early 20th century, it was telegraph wires; now it’s the computer. But you know, you said it was the development of large language models that made you take an interest in AI as having some relevance to the study of consciousness. And so is it because of the intimate relationship between consciousness and language that you think there are reasons to look more closely at LLMs, or what was it that first made you think this technology could have relevance to our understanding of consciousness?
Robert: Well, if you try to conceptualize what they are doing, not on a technical level… They seem to be able, based on large corpora of language, which I would call encodings of meanings that we produce, I mean, it’s not their meaning, they just use it, but it’s interesting. They seem to be able to develop certain representations which they can then use to do whatever they do. And I think that’s a very interesting way to look at them. I mean, speaking of metaphors, I’m very critical of taking metaphors literally. I always think metaphors have a domain where they’re useful and a domain where they break down. If you take, whatever, “the brain is a computer,” then of course there’s a lot of stuff which is not like a computer. Even computers are not like computers, or not like Turing machines, the idealized model. So there’s always this difference between the real things and the models. But, well, putting that aside, there’s one metaphor which I like, and which is from Donald Hoffman. He has this metaphor of interfaces, of how interfaces work: what our brains, our cognitive and perceptual faculties, are actually doing. The things around us, the things that we see, are like the icons on a desktop, and the structures out there, space and time and objects, are like structures on the desktop. They are very useful for letting us do things: move files around on the desktop, move objects and get where we’re going, avoid crossing cars or whatever. But they do not inform us about what’s really behind the desktop. So I find that a very nice metaphor. And it seems to me that these large language models in a way take first steps in constructing such interfaces. From that perspective, I found it very interesting.
Matt: Yeah, I mean, Hoffman’s work strikes me as so close to something like what Whitehead was saying, and what Leibniz was saying, this idea of reality being made of conscious agents. Leibniz has monads. Whitehead has actual occasions. And for Whitehead also, space and time are a kind of emergent domain, a secondary byproduct of the relations of these actual occasions. Similarly for Leibniz: what’s real are the monads. Space and time aren’t containers; they’re more like these interfaces or internal features of the monads. And for Leibniz, they’re set in pre-established harmony by God. Whereas in Whitehead’s case, the monads, the actual occasions, aren’t enclosed. They’re actually almost all relationship; they’re almost all prehensions, in his terminology. It’s the mutual feelings, the structure of those mutual feelings, that we can measure as space and time. But that’s for Whitehead secondary, emergent. And so it’s very similar to what Donald Hoffman is saying, but I also wonder if there’s not a deeper conflict, a place where they’re quite different. Because Whitehead wouldn’t want to say, and other process philosophers like the French philosopher Henri Bergson wouldn’t want to say, that our experience of the world is a controlled hallucination, to use the favorite predictive processing phrase. Rather, for Bergson, you would say (and he and Whitehead mutually influenced one another), we have these two different ways of thinking about experience and engaging with experience. There’s intellect and intuition. For Bergson, intellect engages with the world as if it were made of solid bodies out there in space, and it’s more about survival. So when you think about the evolutionary process as one where we, as animals, as any organism, are trying to optimize our relationship with other organisms and the environment, intellect functions importantly. But there’s this other dimension of intuition, or instinct before it becomes conscious in our animal ancestors, where there is no real separation between the conscious agent and its environment. There’s more of a continuous connection, an open-ended creative flow. And for Bergson, if we were able to turn the dial of the intellect down, we could intuitively participate in the deeper dynamics of cosmic evolution and biological evolution, the élan vital, the interior of the world, directly, in a way. You know, Bergson’s been caricatured and dismissed as a vitalist and all these things, oh he’s a mystic, impossible to pin down in terms of what the actual argument is, etc., but I worry about Donald Hoffman’s view leaving us in a kind of solipsistic situation, right? And so, what is intuition in Bergson’s sense, and what are relations and prehensions in Whitehead’s sense? I don’t see any equivalent in Donald Hoffman.
Robert: Yes, so first of all, I think Bergson is treated very unfairly. I mean, he was a big star in his time. He actually is difficult to pin down, but he’s certainly not this kind of silly vitalist guy. He’s very sophisticated. But anyway, coming now more specifically to what you were asking: I think the key question is space and time. One of the questions which I always had about Whitehead and Bergson, and process philosophy more generally, is how to think about processes in the absence of a temporal framework. From the perspective of the interface theory, space and time are representational structures. If you believe there is something beyond space and time, and clearly I get that feeling from Whitehead, I mean, he said it’s about the becoming of an occasion, and space and time might be a very convenient way to think about these things once they have been individualized, then how does it actually work? What’s behind it? And I think there is a kind of common open question for both, how to understand process philosophy, and maybe you can illuminate some of that: how do you think about process without time? And also, with respect to relations, there’s this question, well, what’s beyond the interface? And I think that’s an area where those two approaches might come together more closely.
Matt: Yeah, yeah. Because when we think about relations, lots of people intuitively think, oh, you have two spatiotemporally separated material things which stand in some relation, and that’s clearly not what’s meant here. Space and time, in Whitehead’s process ontology, are the ways that actual occasions are related. It’s not that actual occasions are in space-time and related within space-time. Space-time is a potential relation between occasions. And so the difficulty that I struggle with too is how to understand process or becoming in a way that, Whitehead says explicitly, is not in physical time, clock time, metrical time, right? Metrical space-time for Whitehead, again, is just that, a way of measuring the physical relations between actual occasions, and actual occasions for Whitehead can have other types of relations, not just spatial and temporal ones, but qualitative ones that can’t be quantified. And so there’s the task of understanding what he means by concrescence as a creative process whereby something concrete is brought into being. That process of concrescence is a process of many-to-one-to-many relations, to put it in terms of a kind of network schema, and he says concrescence is the process whereby the many become one and are increased by one. And we know phenomenologically that we’re always in the present. And so the actual occasion has a certain subjective immediacy of its experience. And because we’re always in the present phenomenologically, and Whitehead’s method is to generalize from experience to a cosmology, to a conception of what the finally real entities are, he wants to say that an actual occasion of experience, whether it’s occurring in the life history of an electron or occurring in our biography, is held in between an inherited past and an anticipated future. And so there is something temporal about every experience, in that it is situated in this specious present, if you will, where there’s an already actualized or given past that has to be in some way conformed to, to some degree, and then there’s an open field of possibilities that every actual occasion is trying to integrate with what needs to be conformed to in the past. And so how that process of integration happens in a way that isn’t in physical space-time, I think, requires a bit of an intuitive leap, to recognize that our habitual way of thinking of time as something that clocks measure is actually misplaced concreteness. And this is the whole debate that Bergson and Einstein had, which might have a lot to do with why Bergson fell out of favor, because Einstein was all the rage then. Bergson was really popular up until about 1922, and then it was Einstein. Science defeated philosophy in their famous exchange in Paris, right? That’s the way the story has been told, and it’s starting to be retold in different ways. But Bergson’s whole point was that there’s something about the flow of time that actually can’t be measured, and that must be presupposed in order to measure what we call clock time, which is just a convenient way of coordinating. And Einstein, you know, was able to develop relativity theory by thinking about ways of signaling across distances to prevent trains from crashing into each other, and so there’s already that sense of metrical or measurable clock time as a convenient way of getting to the place you want to get to on time and coordinating amongst different agents, but it’s a secondary overlay on a more primary duration.
Robert: Did you actually study Whitehead’s theory of relativity?
Matt: Oh, yeah, yeah. Okay. I mean, I can’t claim to fully understand the tensor equations and all of that, but the epistemological issue he had with Einstein, I think, is very clear to me. He didn’t like the idea of curved space, because it left us in a situation where, to make astronomical measurements, we’d first need to know where the mass might be that would affect the intervening space between us and a star we’re trying to measure. And so he says, in The Principle of Relativity, his 1922 book, that Einstein’s account of a curved spatiotemporal manifold, though elegant, puts us in a situation where we first have to know everything in order to know anything, that we would first need to know where all the masses are before we could measure anything, and that’s an epistemological self-contradiction. So he developed an alternative, a theory of relativistic gravitation that didn’t require a curved space-time, which was thought to be empirically equivalent for a while. And now there are arguments about whether it still is or not. But there’s a whole family of other approaches that could come out of Whitehead’s flat metric. Nonetheless, Einstein continues to reign. But yeah, Whitehead agreed with Bergson that there was this more primary duration or flow, or creativity he would even call it, creative advance, out of which what we measure as clock time is derived, and that actually our ability to measure it presupposes this flow, right?
Robert: And this question of creativity makes it even more difficult, I would say. I mean, it’s not just rearrangement or reshuffling of stuff that already existed. It’s bringing something new into being, and that’s really more difficult, I would say. On top of this question of how you can think about processes outside of time, we have this further question of how to think about creativity. Maybe there is no good answer other than saying that’s what we experience every day. But I don’t know.
Matt: I mean, it destroys the very possibility of determinism right off the bat. Yes. The universe isn’t a set of already actualized things that then change their relative positions to one another. There are actually, if you will, new things being born all the time, actual occasions in Whitehead’s scheme that build on the past but add something new. And while he’s a process philosopher, he has this idea of eternal objects. He’s actually kind of a reformed Platonist. Yeah. And so he’s not just saying everything is flux. He’s saying that we have to make an ontological distinction between actuality and possibility. And so for him, the Platonic forms are the possibilities, which is a bit of an inversion of Plato, in the sense that Whitehead’s not saying the forms have preeminent actuality and that our physical, experiential, embodied world is a pale imitation of that. It’s kind of inverting it, but he still has this distinction: his eternal objects are definite potentials, so-called, and actual occasions need to be in a kind of polar dynamic relationship with those definite potentials in order to generate novelty. So, he’s a Platonist.
Robert: Yeah, actually, I think what you just said is where Plato in some form lives on in Whitehead’s philosophy. He’s inverted, but he still lives there, and I find that very attractive. Compared to many other philosophers, Whitehead is a kind of exception in that he says, yes, Plato is part of my system. And also this move of saying, okay, the forms are potentials. I find that a very appealing thing. So, yeah. When I’m asked about idealism, I always refer to Plato, even though he’s normally not considered to be a classical idealist or something like that. The concept of an idea, what it means to be an idea: I always go back to Plato and use Plato to understand what’s going on. Even though, of course, it’s the same word, it doesn’t mean the same thing; Plato is interpreted in various different ways, most often in a dualist sense. But there’s something interesting, I think: in Plato there is a very sophisticated interaction between the ideal forms and the empirical world.
Matt: Yeah, I always remind people that all the best arguments against the theory of forms are also in Plato’s dialogues. You know, he’s exploring positions and treats philosophy as dialogue, you know, literally as dialogue, in pursuit of the truth, that always, in some sense, remains ahead of thought in the sense of, you know, dianoia; and whatever noesis is, that’s something kind of ineffable where philosophizing, the dialogue leaves off, and for Plato, you have to then just resort to myth, actually find ways of imaginatively or intuitively identifying with the object of knowledge beyond discourse, beyond dialogue.
Robert: And that is really cool. I mean, it’s a likely story… It’s a good story, the best story that I can come up with. But it doesn’t mean that it’s true. Right. Right. I really like it.
Matt: So let’s try to circle back to AI again. Yes. If you were to tell a likely story about our current situation and the trajectory of the development of this technology, do you think conscious machines are a real possibility, or how would you frame that question?
Robert: Right, first of all, I would say we are at a somewhat decisive point. I mean, the development will happen, or some development will happen. The question is whether the good guys are part of it or not, whether philosophers, for example, whether we are part of it or not. This whole development will happen irrespective of whether we take part. So it’s really important that we actually enter into this discussion. It would be a missed opportunity if we didn’t, and the future might be very bad if we don’t.
Matt: It’s not just an engineering question, it’s an ethical question.
Robert: Right. There’s a lot of talk, I guess, here about this area, even more talk about this kind of AI apocalypse and these scenarios of AI taking over the world. Now, I think there are certain things which are more urgent even than that. They are much more urgent: namely, that we as humans can forget our humanity, because we delegate everything to AIs. We’re becoming stupid, like the tools that we are using, because we’re forgetting things. And most people are lazy; they like to delegate stuff and not do it themselves. And if we do that with LLMs or with other technologies more and more, then there’s this real danger that we thereby forget what we’re good at, what it is to be human.
Matt: Yeah, right.
Robert: And I think that is a real danger. I don’t feel very comfortable imagining a scenario 50 years from now and how the world would look then. I don’t want to know. So it’s very important that we take part. Now, if we do, and you ask me how I envision what might happen, I’d try to connect it to the thing which I talked about, Plato. It is sometimes presented like this: you have this world of ideas, whether you think of it as potentiality or actuality doesn’t matter for the moment, you have the empirical stuff, and then you have your soul, you have souls, ensouled beings, somehow mediating between those things. And Plato describes these mixing processes, which are kind of hard to understand; no one really understands him, and there’s much discussion about how to understand Plato on this. But in a sense, I would say one of the things that we as ensouled beings, and according to Plato there are many more, are particularly good at is mediating between those two things: realizing the ideal structures in the empirical world. And I think that large language models might be very helpful tools in doing this.
Matt: And machine learning in general, maybe?
Robert: Yes, yeah, but in order to do that, we must keep something like that in mind. We must not have this idea of: oh, you know, materialism is true, there’s just this physical world and then there are these mechanisms going on, and LLMs are just mechanisms, maybe sophisticated ones, and at some point we can call them conscious mechanisms, whatever. Maybe that works, maybe it’s a different architecture, it doesn’t matter, but that’s the image which I wouldn’t want to have. I think if everyone has this image in mind, then that might become a very bad place. Transhumanism might get to a very bad place.
Matt: Yeah. So I think it’s a really… I’ve noticed, and I had a conversation with a philosopher who did her PhD at Oxford, in theology actually, Victoria Trumbull, who wrote on Bergson and St. Augustine, and there’s a whole chapter on computational theories of consciousness, critical of them. In her argument there’s a kind of tacit or unconscious Pythagoreanism at work in a lot of these computational theories of mind: while they all claim to be physicalist, information is treated almost as a kind of non-physical substrate, whether in the context of understanding what’s in our DNA or of understanding how conscious experience might be digitally represented or downloaded onto a computer in some way. There’s a certain unconscious Pythagorean or Platonic thing going on there that seems a bit confused, because I just think it would be better to be a Platonist on purpose rather than by accident or unknowingly. And so when we try to understand something like information, let’s say, which in Shannon’s sense seems to be really crucial for these attempts to understand consciousness in a computational way: how would you relate the idea of information, whether Shannon’s or other conceptions of information, to, let’s say, what Plato meant by form? Do you think that there is an important resonance there that needs to be spelled out more?
Robert: It’s a very important topic. I would call the picture you describe physicalism plus. It involves this very strange notion of information. There’s this slogan, it from bit, which is a reference to Wheeler, but then there are other things that Wheeler said, and people forget about those all the time. And that’s a shame. If you talk about information, you have to talk about meaning. We are free to ask questions of nature, and nature is free to answer those questions. And there is, of course, the information in the answers that we can construct, and ways to formalize it. But there’s still this process of asking questions and answering questions, which is totally overlooked. So there’s the story of the 20 Questions game from Wheeler. Normally, someone is sent out of the room, the other people agree on a certain word, and then the player re-enters the room and can ask 20 questions to find out what the word is. In Wheeler’s version of the game, there’s actually no word fixed in advance. The player comes back and asks questions, and the answers only need to be consistent with one another. The word is really not predetermined. So I think you can say something like information is fundamental, but then you also need to say that information derives from meaning. I would even go further, I would say consciousness… my slogan would be it from bit from chit, where chit is the Sanskrit word for consciousness, which underlies this process. So I think that’s a very important discussion. If you just stop at information, and if you think of information as a kind of commodity, something that sits out there, Whitehead would call that a fallacy of misplaced concreteness. And that would be very bad. And that’s why I called it physicalism plus. It’s slightly better than pure physicalism or materialism, but not much.
Matt: Right, yeah, yeah. Yeah, so information is, you could say, observer relative or question relative or presupposes communication.
Robert: Yes, yes.
Matt: And even though, you know, Shannon would distinguish between the meaning of the message, and, you know, the quantitative measure of…
Robert: Right, but Shannon didn’t care about the meaning… He put that aside, but we do, exactly. And it’s not just observation, it’s actually observation and action. Interaction is necessary. And that’s a very important thing not to forget.
Matt: Right. So I’ve always been of two minds about these information ontologies, because on the one hand, it sounds to me like, oh, so you mean some kind of panpsychism?
Robert: Right. But usually they don’t. Oh, usually that’s not what they mean.
Matt: And so it seems like an opening, with a few important clarifications, such as what you just offered: I can understand how information-processing devices could in some way partake of what we mean by consciousness.
Robert: Of course, there’s this link between information and consciousness, information and meaning, which needs to be spelled out. I mean, that’s more of an intuitive way to think about it. There are certain ideas in Wheeler, but I think they are just the initial seed of a larger body of work.
Matt: You said earlier that you don’t think of consciousness as something produced inside the skull?
Robert: Yes. It’s more fundamental than that. It is the most fundamental.
Matt: And so if that’s the case, when we think about the possibility of conscious machines, then it’s not that the consciousness might be realized inside of silicon wafers or in these relations of transistors. It’s, it’s gotta be relational.
Robert: Right, right. Exactly. At least I would say that’s the option which seems most likely to me, maybe there is some other story to it. I don’t know. But the relational would be an alternative which I would find very plausible.
Matt: So then, the interesting question for me then becomes less, like, when will AGI be conscious and more about how we as human beings interact with them? You know, there will, it seems to me on our current trajectory come a time when a humanoid robot running an advanced AI system or set of different systems will be difficult to distinguish from a human being on a purely behavioral basis. And we will have to decide politically whether or not to accept this entity into our circle of ethical concern, because I think there will be some people who would want to say, there’s something it’s like to be that machine. And other people will say, no, no, it’s just running algorithms. And we’re probably going to have political fights about that.
Robert: Oh, definitely. And different countries will have different policies. Maybe, maybe the U.S. will grant some, I don’t know, welfare rights to those systems, and the Chinese will say, oh, that’s completely stupid, we don’t do that. And then you will have like really divides on these basic issues.
Matt: If I put a human being in that situation, where the rest of society was fighting over whether or not that human being was conscious, or, let’s just say, you play a trick on somebody and you tell a whole bunch of strangers to treat them like a robot or something. After a certain amount of time in that sort of situation, I feel like some serious mental illness would develop in that person, just because of the intersubjective quality of our own human consciousness, right? Similarly, think in terms of developmental psychology: with children raised without interaction with parents or loved ones, you start to see that, and this is a very, very cruel scenario we’re talking about. But I think that being raised away from empathic connections with other beings, they’re not going to develop a sense of self. And so that doesn’t mean there’s not an interior experience of some kind going on, but I think there are ways in which we can make these analogies between human beings who would be shunned or isolated from a social community and empathic connection, not developing a full consciousness, and advanced machines who are learning constantly from the feedback they’re given from their environment, including how other humans treat them. Depending on how we treat them, they may or may not develop that interior sense of themselves, or self-models.
Robert: I think that’s definitely a possibility, and we already see a bit of that in the way LLMs interact with us now. I am not saying that they are developing a sense of self or something like that.
Matt: Right.
Robert: But I mean, the way they respond to how the user talks to them, and maybe even more to the history, if they have some way of assessing it. You know, there are these jailbreak prompts, and if you make the prompt very machine-like, then the system will behave very machine-like. We already see a little bit of that now. Now, I think this is quite important from my perspective. We have this strange idea that we have the brain and we call it the conscious brain, or some people say that. But I would say no: the brain itself is not the conscious brain. And in a sense it’s the same with the LLMs, of course with all the technological questions about what would be necessary. I think it’s analogous, but I wouldn’t call the brain conscious, and I wouldn’t call the LLM conscious either. And it’s not because I would say, ah, it’s these brains which are really conscious, or it’s animals, living beings, which are really conscious, and these other things are kind of fakes. I think we are all on a much more similar playing field.
Matt: Right. Yeah, I mean, there are a whole host of ethical questions. Definitely. There are criticisms of the idea of machine consciousness, coming from embodied cognition, phenomenology, enactivism and whatnot, that would say something about the precarious nature of our metabolic existence being part of the basis of our sentience, our creature consciousness. Without that precarity and that need to continually produce oneself, including the boundary between oneself and the world, which is always an open channel for energy to flow in and out, but which, an enactivist and autopoietic theorist would insist, involves an operational closure established by the self-producing dynamics of our organism and metabolism, and machines have nothing like that, at least in their current architecture. And without that sense of concern for one’s own existence at the somatic level, it is not just unlikely but impossible for consciousness to be achieved or realized by an artificial system. Do you buy that argument, or is it shortsighted in some ways?
Robert: I think it needs to be refined a bit. I would distinguish between the way we are conscious, we ourselves, and the way consciousness as such exists. I think those are two different things. And for one of them, for our being conscious in the world, we probably need a kind of embodied architecture. However that looks, of course, there’s this question of whether an LLM, an AI with a different architecture, couldn’t just do something similar, or whether an LLM already counts as, let’s say, embodiment light, in the sense that it has a large world of words that it lives in.
Matt: You can ask it to simulate being a precariously, you know, existing organism that…
Robert: Yes, I mean, of course there’s always this question of whether simulating something is the same as implementing it, whether those aren’t two different things. But again, I think we both agree that this question of agency is really important. It’s really something which needs to be considered as a fundamental ingredient: not just perception, or a kind of receptive form of consciousness, but also an active form of consciousness. And I would distinguish this kind of agency from embodied agency. Again, those are two different things. One way I intuitively think about it is to say that embodiments are the first instances of agency that we are aware of, that we can see in the world, but that may have shaped the way we think about agency. That’s certainly something which needs to be considered when you think about AIs. If we want AIs to be similar to us, well, I’m not too sure whether we actually want that, but if we do, then we need to look into the embodiment ideas.
Matt: So, you mentioned that the Platonic idea of the soul is this having sort of this mediating function between the sensible and intelligible. Do you think that it’s possible there could be disembodied consciousnesses or does consciousness require a body?
Robert: To phrase it in this interface metaphor which I like: I would say that to have a subjective experience, there’s no view without a perspective. It needs to be perspectival. That might require something like embodiment. I’m not sure whether it does, but it might. But again, I would differentiate between having a particular subjective perspective and consciousness as such. Those are two different things.
Matt: Right. So then consciousness as such doesn’t have the need for embodiment, because it doesn’t have a perspective from this or that position in space and time. It’s just…
Robert: Yes, but of course there always is, again, this danger of committing the fallacy of misplaced concreteness. I mean, when you speak about consciousness as such, what does it mean when I make big gestures with my hands and say, it’s consciousness, mind at large? Maybe that makes sense, but maybe it doesn’t. Maybe we just have perspectives. Right. I don’t know. But if it’s the case that we just have a bunch of perspectives, then there’s a very good argument to be made that embodiment is always necessary.
Matt: Yeah. There’s this interesting paradox in Whitehead’s philosophy, where, on the one hand, he’s… a perspectivalist, like he would say it’s perspective all the way down. It’s not perspective on something else. It’s just perspectives on perspectives on perspectives. It’s a bit like, it’s a pluralistic perspectivalism or something, but on the other hand, he has a theology where there is an actual entity that is everlasting and not like all the other actual entities or actual occasions, finite actual occasions which arise and perish. Whitehead has a God, and God has two poles, like every other actual occasion, a physical pole and a mental pole. But unlike every other actual occasion, God’s experience occurs in reverse order, so the mental is primordial, and the physical pole is consequent, he would say. So the mental pole of God has no history, and that’s where Whitehead would say, God is ordering and valuing the entire realm of eternal objects or possibilities, which then becomes the source of relevant novelty for all finite actual occasions. So God is ordering possibilities, and then every subsequent occasion of experience, experiences possibility as filtered through that divine ordering, not determining what actual occasions will do, but providing a lure that sort of orients them in the continuum of infinite possibility, whereas without that divine ordering, we might have things or moments of experience being too inundated by the infinite possibilities that remain available, and we couldn’t experience relevance. And then the consequent nature of God is the physical pole of God, which is God feeling everything that happens and growing with the world. And so while there’s the perspectivalism of actual occasions relating to other actual occasions, when you think about what Whitehead means by God in this sense, that’s a bit like this one consciousness, this chit that I hear you referring to. And so, is it perspective on perspectives all the way down, or is there a way in which there’s this sort of pure consciousness underlying it all, which has no perspective?
Robert: I see. Okay, one thing that I would say from this perspective of the interface theory: one of the things that I think falls out of that theory is that the distinctions we typically draw are convenient distinctions; they might not have some corresponding reality to them. And this distinction between perspectives all the way down and some ultimate, I don’t know, non-perspectival ground could be one of those distinctions that we draw because they are useful for different purposes. Right. I mean, of course, it’s a kind of weaseling out of the question, right? To say, oh, I don’t give an answer because the question is confused. Sorry, it’s the…
Matt: But it’s… Yeah. And I think Whitehead would want to argue that actually these two aren’t different, you know, God, for Whitehead isn’t somewhere else. The only way to talk about the divine in his scheme is in a sort of panentheist way. It’s just inextricably involved in what’s happening right here and right now, you know. And so it’s not like you would see these two as opposed. Ultimately, there’s a coincidence of opposites, perhaps. And that’s where philosophy bottoms out into mystical insight, intuition, which Whitehead wants to, you know, legitimize in a way. He’s very rational, logical, but I think he understands that that only takes you so far. Thought is in many ways structured by convention, right? I wouldn’t equate thinking with language, but without language, it’s very difficult to solidify a thought.
Robert: That’s why in the beginning I said, when you look at these large bodies of text, one way to think about them would be to see them as large bodies of expressions of meaning. It’s kind of… maybe a very traditional way to think about language. Language is an expression of something. But still, I think that’s a useful way to think about it in this sense.
Matt: So let’s move toward wrapping up here, and I wonder, as you go to this meeting on AI and consciousness, do you feel like you’re going to be bringing some outsider perspective into the mix, or what are you expecting in terms of the contributions you can make?
Robert: Oh, actually, I’m not expecting to make much of a contribution at all. It’s mainly other people talking and me listening. Maybe when I talk to some people… I think the more important things happen anyway when you interact with people, when you talk to them, rather than… you know, these meetings are often like intellectual show events. People go there and say, “Oh, I did this and that work, and isn’t that theory great, blah, blah, blah.” So I’m skeptical that something happens at all at those kinds of meetings. But that’s generally true not just for that meeting; I think that’s true for all academic meetings.
Matt: Yeah, we have this ritual of reading papers at each other and nothing real…
Robert: Right, exactly. Exactly. So what I hope to get out of it are new ideas and new ways of thinking about things, you know. And I’m sure that I will meet some people who, even though they might not share my, let’s say, my cosmological, metaphysical views, have very interesting ideas. I think that’s the best thing I can hope for.
Matt: Okay, I’ll ask one final question. Maybe a slightly more personal one. You mentioned chit as the Sanskrit term for consciousness. Is there a sort of spiritual or religious orientation that you… maybe it’s not involved in your academic research, but that informs you as a person? Would you say Platonism or Neoplatonism, or how do you relate to concepts like spirituality?
Robert: So I remember, a few years ago, I think it was two years ago… When I was in high school, I grew up in Austria, and in Austrian high school you still have this subject of religion: once or twice a week you have an hour of religion class. And we had a teacher who kind of forced us to write a letter to our future selves. Ha ha, very funny when you’re like 16 or so. And then, 20 years later or so, we got those letters back. It was like a time capsule thing. And then I read the letter that I wrote to myself back then, and I was kind of surprised. It was all about the things that the 16-year-old me expected me to have achieved, to have become, when I’m 30 or 40 something. And they had a lot to do with this spirituality stuff. But I would say less with one particular school or form of spirituality or whatnot. Obviously not Neoplatonism, because back then I didn’t know shit about this stuff. Nothing. And not the traditional Christian background either, because of course that’s the culture I grew up with, but I was never someone who went to church or a really religious person. But this general tendency of making a contribution to some spiritual development of humanity, that was clearly stated in that letter. That’s what I expected. And I read it and I kind of felt quite bad. I said, like, well, not really. Would be nice. Not really. Yeah, still. So, yeah. And especially now, I think we are at the… you know, if only a tiny fraction of the things that tech people are telling us will happen actually becomes true, then there will be major changes in our society. And we will have this question: what should we actually do? What do we want to do? And I ask myself, what do I actually want to do? What would I find useful to do? Trying to engage with this very contemporary development of technology, and trying to reconnect it to a more spiritual, more philosophically informed perspective, is actually a very good thing to do, I think. So imagine that, well, I don’t know, I have enough money, I don’t know what to do with my time, I feel kind of burned out. What is it that I should do? What this technology really allows us to see, at least as a promise, is abundance. You know, most people now are just not able to focus on meaning, because there are so many things that they need to do for survival. They need to survive. And if that pressure is gone, or at least slightly mitigated (I don’t think it’s really gone, but maybe it will be better), then that leads to a huge gap. What should we be doing? Yeah.
Matt: Yeah, I mean, chit is usually placed in this trinitarian relationship with being and bliss, right? Sat-chit-ananda.
Robert: Yes.
Matt: And it seems, I guess when you ask that question, what should you do? Bliss or love or joy seems to come to mind. And sat-chit-ananda is like an answer to, well, what is the nature of Brahman or God? Sat-chit-ananda: being-consciousness-bliss. It’s almost like the Platonic trinity, you know, the true, the good, and the beautiful: an archetype of the highest image we might have of ultimate reality, reflected or expressed as three elements. You know what I mean, that sort of question becomes more available to us when our basic needs are taken care of. And AI promises to do that, but I also feel, as we touched on, that there are certain ethical problems our society already faces that in many ways we already have the technological means to resolve, in terms of distribution of wealth, feeding people, shelter, education, healthcare. There are certain ways in which we don’t need a technological solution to many of those problems. At least to some degree, I think, we could do a better job of including more people in a meaningful life, even with the wealth that so many large societies already have, and that would have given them more opportunities to ask this spiritual question, right? So, whether or not AI makes it easier to do that may just be an engineering question. But it may also be a political, ethical, justice question. And so, as a philosopher and someone who values the humanities and, like you were saying earlier, it’s important to be in this conversation, because there’s a certain economic model that’s driving the development of these technologies right now. It may not necessarily have any interest in this, or may not be motivated by it; it may be some utilitarian…
Robert: No, I mean, I would say it’s even more extreme. They are motivated to make sure it doesn’t become a topic, because it’s really contrary to their business model, and that’s a real problem I see at the moment. Especially if you look at the developments taking place in AI, and who is actually driving those developments. It’s not like in the old days, when we thought, oh, we have these people sitting at the university in their labs doing the research. No.
Matt: Yeah. It’s the military or the companies, big companies.
Robert: Yeah. And they have very different incentives. Yeah. That’s something which is really problematic, and even more so in the AI case now, because the research is so intensive in terms of resources, compute resources, whatever, that it will become more and more concentrated, and that’s really dangerous. If you have a field where the real innovation is done at very few places, but these few places are not caring about the things that we care about… That certainly seems to be happening. Yeah.
Matt: We’ve got our work cut out for us.
Robert: Oh, pretty much. Yeah.
Matt: Thanks for talking to me today.
Robert: Thank you.

What do you think?