“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.”
–Alfred North Whitehead

Thinking With Machines

A rough transcript of the video above, in which I clean dog poo out of my robot vacuum:

A few days ago, I was talking with Michael Levin, a developmental biologist, about how we think about and relate to our machines. I have tended to want to distinguish between organisms and machines because I think organisms have abilities and a nature that machines lack. Organisms are alive, for example, while machines are not. Organisms have an internal purposiveness, sentience, and capacity to make creative decisions from moment to moment that spring from some kind of interior sense-making capacity. Machines, on the other hand, are programmed from the outside. They don’t have their own purposes – the purposes of their designers are imposed upon the matter of which they are made. This is an important distinction to keep in mind.

Yet I agree with Michael Levin that machines are becoming increasingly intelligent. There is probably something like a spectrum or continuum of intelligence that includes machines at the bottom and human beings, and perhaps other intelligent alien species, above that. Who knows where this series of spheres of intelligence ultimately terminates? This leads to all sorts of interesting questions. However, even when we think about intelligent machines that run on AI, I think there are reasons to be skeptical of their abilities. Machines don’t really evolve – even if they use machine learning algorithms and have learned certain behaviors in the relatively short term. When you talk about an organism that has evolved, you’re talking about something that’s been embedded in a process of structural coupling with a whole historical series of environments on this planet over billions of years. A lot of tricks have been picked up along the way. A lot of memory of how to be in sync with the rhythms of this planet has been acquired over that long, multi-billion-year process.

Any machine, even if it begins to learn through neural nets about how to interact with specific environments, language, etc., is still going to lack that depth of memory and historical embedding in the Earth environment. This might give us reason to suspect that our machines, intelligent though they be, might be particularly blind to rather important features of the Earth environment that we as organisms are just more constitutively prepared for. We would expect even the most intelligent machines to make really stupid mistakes, because they just haven’t encountered certain things before. They haven’t been around long enough. They also make other mistakes that you would expect the designers to have accounted for…

In any event, we are being challenged with questions like whether there is an ultimate difference between an organism and a machine. I don’t think there is an ultimate difference, but I do think there is a difference in degree so great that it approaches something like a difference in kind, even if it never quite becomes one. For all intents and purposes, there is an important distinction to be made.

The broader question about our relationship to technology is also important to keep in mind, because technology shapes our very ability to think at all. When we try to think about technology, we are always in some sense blind to the material and formal ways in which it grants us that very capacity. It’s like an eye trying to see itself or a mind trying to think itself, because our minds have always been intimately bound up with our tools. From the moment we started fashioning stone implements, and certainly when we developed metallurgy, writing, the alphabet, the printing press, the telegraph, radio, television, and now the internet and artificial intelligence – we have always been co-evolving with these machines.

So when we ask questions like whether descendant AI systems will ever be conscious, it’s like – have we ever been conscious without being in some kind of symbiotic relationship with a technology or artifact of some kind? I don’t think so. What we think of as peculiar to human beings, at least that kind of consciousness, has never existed without technology. Certainly there are animal forms of consciousness or sentience that pre-exist our species. But the kind of self-conscious, self-reflective intelligence that can make machines in the first place does seem to distinguish our species, even if we’re not totally different in kind – we’re still animals. But we have this new degree of capacity to transform our environment and engage in symbolic reflection that makes us almost different in kind.

So how are we to imagine our way forward in such a situation, given that we can’t simply say yes or no to technology any more than we could say yes or no to our own nature as human beings? It’s more a matter of becoming more conscious of our relationships to these machines, admitting that we are already cyborgs. This isn’t a future we may one day face – it is our present reality. So how does that change our conception of what it means to be human, and what it means to be a machine? These are important questions we face.


My earlier conversation about machines and organisms with Levin:



2 responses to “Thinking With Machines”

  1. ptero9

    Curiously, as I was reading your post, a friend texted me to ask whether, like AI, we too might just be predicting the next word.

    Here’s my two cents:

    Hmm. My gut reaction says humans use language as a symbolic representation of the prelingual environment we’re immersed in.
    Even when it seems we are predicting the next word, the drive and necessity underlying our language usage stems from a different source than an AI machine’s.

    AI, perhaps, and hopefully, presents us with an opportunity to reflect on the ideas of intelligence and consciousness by mirroring back to us our ideas of AI. For example, is intelligence merely cognitive, or embodied to the extent that distinctions begin to blur? What does a cell know?

    When we say AI might be, or become conscious, aren’t we merely imagining a human mind inside a machine? … and by mind, again, we’re stuck in a cognitive mode of perception.

    Perhaps we might see the difference as top-down versus bottom-up. Organismic evolution being bottom-up, but without any discernible bottom.

    To imagine AI machines as the next leap in evolution, as some have suggested, entirely misses the crucial difference between whatever evolutionary source might underlie our existence and the limits of machines assembled by humans.

    It’s an interesting opportunity. I heard it said that we’re not trying to be God through our ability to create superintelligence, but trying to create God.

    I really enjoy your conversations, Matthew, especially with Michael Levin.
    Debra

    1. Matthew David Segall

      Agreeable thoughts here, thanks Debra!
