“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.”
–Alfred North Whitehead

Polycomputing and Process Philosophy

Tim and I were at it again this afternoon. I began by introducing some ideas from this preprint by Joshua Bongard and Michael Levin: “There’s Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-scale Machines.”

Here are some of the ideas we discussed in this video: 

  • Polycomputing in Biological Systems: Introduction of the concept that biological systems can perform multiple functions simultaneously on the same substrate.
  • Generalizing Computation Beyond Technology: Discussion on expanding the notion of computation to encompass biological systems, which not only perform but also produce new functions.
  • Unprestatable Nature of Biological Functions: Exploration of the idea that biological systems generate new, unpredictable functions, making them fundamentally uncomputable.
  • Theological Implications in Computational Models: Examination of the influence of theological ideas, specifically Whitehead’s dipolar process theism versus Bayes’ Calvinism, on the development of varying computational models.
  • Limitations of Existing Computational Theories: Critique of current computational theories when applied to the dynamic and creative aspects of biological organisms.
  • Need for New Conceptions of Computation and Information Processing: Suggestion that new models are needed to truly capture the generative and coactive nature of life, beyond prediction and control.
  • The Role of Observer in Computational Systems: Consideration of how observation and observer roles, including the aims and purposes of observers, might be integral to understanding computation in biological systems.
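The first bullet, the idea that one physical substrate can be seen as carrying out several computations at once depending on who is observing it, can be sketched in a toy example. Everything below (the trace values, the two “observer” functions) is invented purely for illustration and is not the paper’s formalism:

```python
# Toy sketch of "polycomputing": a single physical process (here, a list of
# hypothetical voltage samples) is read by two different observers as two
# different computations on the very same substrate.

substrate = [0.9, 0.1, 0.8, 0.2]  # one physical "trace"

def observer_logic(trace):
    # Observer A thresholds each sample, reading the trace as a bit string.
    return [1 if v > 0.5 else 0 for v in trace]

def observer_analog(trace):
    # Observer B reads the same trace as an analog quantity: its mean value.
    return sum(trace) / len(trace)

bits = observer_logic(substrate)   # -> [1, 0, 1, 0]
mean = observer_analog(substrate)  # -> 0.5

# One substrate, two simultaneous "functions": which computation is being
# performed depends on the observer's frame, not on the substrate alone.
```

The point of the sketch is only that “what is being computed” is partly fixed by the reader of the system, which is what the later bullet on the role of the observer gestures at.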


3 responses to “Polycomputing and Process Philosophy”

  1. rehabdoc

    I would caution against using the computer as a functional metaphor for a living organism. This is like the Emissary left hemisphere attempting to ‘hack’ the Master right hemisphere. Not really such a great idea… IMHO. And potentially highly problematic when you think it through. It is like imagining that a fabricated machine (which is what a computer is: it is NOT an autopoietic living organism and never could become one, because it is categorically different from a living organism) could have the functionality of a living organism.

    This is where it is important to look carefully at the work of James Filler (which John Vervaeke referred to near the end of his talk at the ‘Thinking with Iain McGilchrist’ conference) on ‘relationality as the ground of ontology’, and at the recent book by Adam Frank, Marcelo Gleiser, and Evan Thompson, ‘The Blind Spot: Why Science Cannot Ignore Human Experience.’ I would extend that beyond the human to the nonhuman living organism as well, i.e., to ‘Organismic Experience’ in general. And, even more significantly, at the relational biology of the late Robert Rosen, who showed, using the mathematical theory of categories, that living organisms and fabricated mechanisms like computers are categorically distinct.

    It is also probably a good idea to take the Incompleteness Theorems of Kurt Gödel into account. You cannot ‘totalize’ a living organism. Yes, you can understand some things about how it is constructed and how it functions, but that understanding will always be incomplete. Organisms resist totalization; they resist formalization. Mechanisms, by definition, comply with and are fully described by the mechanistic formalism, i.e., ‘Newtonian mechanics.’ Organisms do not, and, as a result, they are not computable. Mechanisms admit to an ‘ontology of states’; organisms do not. Mechanisms can be modeled using predicative mathematics; organisms require impredicative mathematics.

    Mechanisms can have their dynamics fully described by a Hamiltonian that has only REAL numbers. Organisms (and quantum systems) do not have such simple dynamics: the Hamiltonian for an organism requires the incorporation of imaginary/complex numbers. This is a feature that makes organisms, in Rosen’s work, ‘complex’ systems. Mechanisms, due to their limited entailment power and their strict predictability (i.e., they can be shown to be strictly deterministic), are, according to Rosen, ‘simple’ systems, no matter how elaborate their structure may be. Do you think the Emissary has a legitimate chance of hacking the Master? Or do we need a fundamentally different approach?

    1. Matthew David Segall

      I completely agree with everything you’ve so beautifully articulated here. My sense with Mike Levin, though, is that he believes the line between organism and machine is going to be made quite blurry as a result of various cyborganic hybridizations, so the old categories begin to become at least semi-permeable. In terms of the classical definitions of organisms and machines, though, I’m with you.

      1. rehabdoc

        Interestingly, just yesterday I initiated an email exchange with Mike about this exact same issue!

        You could think of this as the ‘Frankenstein’ game, or the ‘Game of Golem’, about which dire moral warnings were issued well over a century ago (in the case of the Golem, back to the 16th century and the lifetime of the ‘Maharal of Prague’). I think it is really important to keep ethical concerns close at hand when playing with this kind of fire.

        One wonders about the degree of ‘permeability’ between a machine, which is confined to being algorithmic, computable, simulable, and strictly deterministic because it is, by definition, restricted to the mechanistic formalism (the old familiar Newtonian mechanics that process philosophy critiques and raises questions about, e.g., with regard to the perilous ‘bifurcation of Nature’), and a living organism, which admits to none of these boundaries on its functionality.

        In a sense, for the human (and other mammals) the left hemisphere is an inherent ‘tool’ of the right hemisphere, and the whole problem with modernity, about which Iain McGilchrist warns, is that the ‘tool’ has been separated from and has taken control of the one that is supposed to be wielding it. What happens, then, if we open things up for the ‘tool’ to be empowered to dominate the one meant to wield it? Are we not accelerating the nominalistic deficiencies and potential hazards of modernity? What I would hope the overarching goal would be is the attainment of true integration. But that requires, I think, a totally different mindset corresponding to a Gebserian transformation, or ‘mutation’, in the structure of human consciousness.

        I am not objecting to what Mike is talking about in terms of exploring new opportunities for the treatment of human pathology (like artificial limb technology, or environmental control systems for quadriplegics and locked-in patients), but I think it needs to be explored with great care, putting ethical concerns ahead of the science.
