“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.”
–Alfred North Whitehead

Taming the Technological Dragon, with Michael Levin

Selections from the transcript of this conversation (compiled by AI):

Matt Segall: I just finished your paper “Technological Approaches to Mind Everywhere” that came out last year and really enjoyed it. The more I read your stuff, the more I am shaken free of some philosophical commitments that I think I had arrived at because I didn’t realize technology had advanced to the point where there were experimental means of actually exploring the ways minds can operate on various platforms. The way you’re redefining classical philosophical categories is really fruitful for me in forcing me to stretch and change some of my priors. I’m grateful to you for that.

Michael Levin: Thank you. That’s the best outcome I could have hoped for with these papers.

MS: Yeah, I guess what precipitated this conversation was something I shared on Twitter about how the old definition of a machine has been superseded. Classically, the difference between a machine and an organism had something to do with the relationships among the components – Aristotle and Kant recognized this. In an organism there’s a way the parts reciprocally produce one another for the sake of the whole, so there’s a kind of purpose that tells the parts not only what to do to maintain a sense of wholeness, but also how to produce and relate to one another. But it seems now that technology has advanced to the point where that distinction no longer holds – there are machines where the parts produce one another and solve problems and pursue goals of their own.

So we need to rethink these categories. But I wonder what else might be at stake when we make a decision about which metaphor or word to use. Words have connotations that go beyond precise scientific definitions. For the broader public, “machine” and “organism” raise all kinds of ethical questions. So I wonder, why might we want to go with the term “machine” in this expanded sense rather than talking about synthetic organisms or artificial organisms? What are your thoughts on that question?

ML: I’m 100% with you that we need to understand and manage the consequences our paradigms have on how people relate to each other and the outside world.

Let me take a step back and think about what we want these words to do. Why do we have these categories and words? What’s the point? In particular, I come to it with the belief that as scientists and philosophers, rather than shoehorning things we find into colloquial usage, we should lead. Our job is to be the tip of the spear – at some point people will say “that’s not how most people use that word,” and we can say “Actually, this is how we should use the word, and someday everybody else will catch up.” Our job is to improve the situation and trust that people will catch up.

Both of those things are important – we need to do a better job providing categories so people can have healthy relationships with themselves, others, and the planet.

In doing all that, I think these words are supposed to tell the listener what kind of relationship is optimal with a given system. When you tell me something is a simple machine, a more complex machine, a learning agent, a human person, or some other kind of person – you’re telling me what kind of relationship I can and should have with it.

So when you tell me a system is this kind of machine or agent, you’re telling me how to interact with it using particular tools – and you can go wrong in either direction: wasting time trying to persuade a simple mechanism, or treating a system as simpler than it is, which leads to terrible moral lapses.

Importantly, I think this is a continuum; binary categories lead to pseudo-problems. And the right relationship is an empirical question, not just armchair philosophy – you have to find out through experiments. The question is: what relationship is possible and ethical? What can you expect in terms of physical rewiring versus persuasion? This spectrum runs through medicine as well.

The body has machine-like aspects that can be exploited, as well as complex competencies that go beyond these tools. We need to identify what those are. For example, even simple molecular circuits can learn – so amazing! We didn’t have to evolve that ability to learn; we got it for free as soon as we evolved components that could implement logic gates.

So my point is that these terms are relationship protocols – there’s nothing usefully binary about them. We need to improve ethical interactions with each other. I know not everyone is happy with this view – mechanists say I’m dragging agency in where it doesn’t belong, while organicists say I’m putting machines on the same spectrum as organisms. But the sooner people catch on the better – we need to give up binary categories. Pretty much any combination of living and engineered systems is going to be some kind of agent. We really need to give up sharp classifications based on origin stories or materials.

MS: Your approach is basically searching for functional invariants that unify cognitive behaviors across levels, all the way from physical principles of least action up to rational agents and decision-making. I see this also in something like the placebo effect – a profound example of top-down influence, where our cognitive intentions filter down to change molecular events in cells. So these levels are connected: each layer has its own capacities for problem solving and yet they can communicate across levels.

One of the classical distinctions between a machine and organism has to do with the locus of purpose – in an organism it’s intrinsic, whereas in a tool or machine it’s imposed externally by a designer. But now we can give machines their own goals, lessening this distinction. Yet in your “Technological Approaches” paper, you say “there’s nothing magical about evolution as a forge for cognition. Surely we can do at least as well using rational construction.” Doesn’t this imply a subtle dualism, as though our rationality can step outside the system that produced it?

ML: I wasn’t trying to make a duality there. I was arguing against those who say engineering can’t reproduce the products of evolution. If you asked me to choose between a random, blind evolutionary process versus rational engineers who can also use evolutionary techniques, I’d bet on the engineers – we can do at least as well as blind evolution itself. I’m not saying rational engineers are outside the evolutionary process, just that we aren’t limited or monopolized by it.

There is an interesting dualism in terms of rational thought emerging once you build a system capable of implementing logic gates and truth tables – now you get principles of mathematics and computation for free. So there are useful free lunches or inherent affordances that come into view once you build systems able to exploit them. Evolution is like a search that gives you pointers into this space of affordances. Andreas Wagner calls this the “arrival of the fittest” – where do good variants come from? So there’s a little bit of a dualism between physical space and this adjacent space of affordances.

MS: I see this in thinkers like Brian Goodwin and Stuart Kauffman – the notion of order for free, the “adjacent possible,” etc. Whitehead sees it as a polarity between actual occasions and eternal objects – possibilities that are related mathematically and qualitatively. So evolution seems to involve remembering what worked before while also searching out novel forms and possibilities. As possibilities are actualized, new possibilities open up – there is creativity advancing into an open future.

There’s a domain that isn’t fully actual but exerts an influence on the process of actualization as a kind of lure. As evolution advances, it exploits ordered possibilities. I think contemporary science is learning from biological notions of agency, and turning back to physics to understand things like self-organization in more generic terms. As Robert Rosen said, biology exhibits forms of self-organization more generic than physical mechanisms, so biology can help physics generalize. Your work seems to be pushing the envelope here.

There may be a need for a kind of dualism or polarity between an intensive, generative dimension implicit in physical processes and the extensive, measurable features science focuses on. Whitehead calls the former dimension the “subjective aim” – it’s necessary to grant this dimension for improved predictive power, as you show.

ML: I completely agree this is intensely practical. The cognitive light cone model has an interesting feature – it refers not to what you can think about, but to the largest goal you can actively work towards. For example, humans can hold the thought of mass suffering in our heads, but there’s a drop-off in how much we can actually care about – only a limited number of beings we can actively hold as a goal. This could potentially be expanded with certain practices.

I hope we aren’t the largest cognitive light cone out there! I hope there are beings somewhere who can care about tremendous numbers of others in a linear way. But our light cone is limited, especially for compassion.

MS: It’s interesting to think about levels of intelligence and what’s driving the evolution of these levels. Aristotle, for example, saw it as matter, life, mind – the vegetative, sentient, and intellectual souls. Becoming virtuous involves getting them to play nice together.

At our stage of civilizational development, our capacity for symbolic thought seems disconnected from our biophysical existence – consider our economic system versus ecological limits. We seem to be at an evolutionary bottleneck where our technological capacities have outrun our ethical capacities.

Your work empirically exploring agency is a game changer here. But there’s an ethical dimension to deciding how much agency to grant entities based on how well we can predict and control them. We’re entering a space where machine-organism hybrids are nearing human capacities for intelligence and agency. I asked Marvin Minsky about this once – will we need to pay them, give them rights? He seemed shocked I’d even ask; he said it’s a problem for politicians. But how do you grapple with the moral responsibilities here?

ML: I completely agree we always need to consider whether our work is ethical. Our ability to predict consequences is limited, but I don’t compare our options to some perfect utopia – right now, things are terrible for many beings. Doing nothing is as much a choice as doing something. What keeps me up at night is failing in our duty to make progress and improve this situation. I think we have an absolute moral duty to fix problems like aging, defects, susceptibility to disease, etc. Every birth is an “inauspicious birth” until we remedy this.

And you’re right, just focusing on prediction and control is problematic – the more general concept is relationships. With complex agents, we aren’t looking to control but to open up to their agency. What relationship do we actually want to have with new forms of intelligence?

MS: You’re searching out functional invariants across levels while also asking what drives the evolution toward greater complexity. If it’s just persistence, rocks do that better than us. There seems to be a striving toward aesthetic intensity and joy, glimpses of which we see in animal play, sexual selection, etc. What seem like transcendent human ideals – truth, beauty, goodness – have roots in our biology. Understanding this helps establish important lines of continuity.

There are lessons for human organization – as you show, intelligence is always collective. We wrongly treat human associations as governed by self-interest, but we can expand empathy and resonate interior to interior, transforming our sense of self and interests. Softening these boundaries would revolutionize our politics and economy. Your cellular work could vastly improve human systems.


My earlier conversation with Levin:
