
iv. Machine Organization and Actualization

The assumption that ‘raw’ matter such as silicon, metals, and ceramics can be organized in such a manner that it gives rise to a mental state we can recognize as consciousness is a fundamental underpinning of the effort to create artificial intelligence. As philosopher of mind Nick Bostrom points out:

Substrate-independence is a common assumption in the philosophy of mind. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium; silicon-based processors inside a computer could in principle do the trick as well.

This assumption that ‘life’ arising outside the human womb could include intelligence comprehensible by human beings is also shared in Christian thought. Theological understanding extends beyond the physical continuum of our birth bed and ‘enfleshed’ bodies. In fact, Christian thinkers like Karl Rahner imply that conscious and substantially ‘informed’ beings need not arise out of the biosphere, but may instead emerge out of the “noosphere” of intellectual and computer activity. These strains of contemporary Christian theology echo process theology ideas that often seem to be overlooked, and that underlie much recent theological activity around the body and substrate-independent “en-soulment.”

If we remember that Whitehead’s thought did not require any particular substrate in order to see God’s “handiwork” in the universe’s continual creativity and continuing evolution, we are further along the path towards accepting that optimal ends for human relationship may not even require our specific organic bodies. God may not have even anticipated these ends – yet that does not make them any less “good.” Furthermore, in the fundamental freedom of reality, we make our own “ends.” As John Cobb helpfully explains: “The subject may choose to actualize the initial aim; but it may also choose from among the other real possibilities open to it, given its context. In other words, God seeks to persuade each occasion toward that possibility for its own existence which would be best for it; but God cannot control the finite occasion’s self-actualization.” Each occasion that takes us closer to a machine-based ensoulment is a moment that actualizes another beneficent occasion. Obviously, Whitehead’s work does not directly point to either the good or the evil of such occasions.

Yet the reason for Whitehead’s lack of anticipation of such a possibility may be that until recently, we have had neither sufficiently powerful hardware nor the requisite software to create conscious minds. But if recent cybernetic progress continues unabated, these shortcomings will eventually be overcome. Philosophers of technology and advocates of technological enhancement like Drexler, Bostrom, Kurzweil and Moravec argue that this stage may be only a few decades away. To create a human-equivalent intellect (either an organically derived one or a silicon-based one), a system would need to be capable of performing roughly 10^14 operations per second (100 trillion). This is considered, by scientists of the mind, to be a lower-bound estimate of the human brain’s processing capacity.

If Moore’s law continues to hold, the lower bound will be reached sometime between 2004 and 2008, and the upper bound (~10^17 operations per second) between 2015 and 2024. Bostrom notes that “the past success of Moore’s law gives some inductive reason to believe that it will hold another ten, fifteen years or so; and this prediction is supported by the fact that there are many promising new technologies currently under development which hold great potential to increase procurable computing power.” Thus, there is no direct reason to suppose that Moore’s law will not hold longer than fifteen years. In fact, predictions that Moore’s law would begin to falter as early as 2004 were recently confounded by the February 2005 advent of the ‘Cell’ chip – a new silicon design with a theoretical peak performance of 256 billion mathematical operations per second – an innovation that advances personal computers into the realm of the supercomputer. It thus seems quite likely that the requisite hardware for human-level artificial intelligence will be assembled in the first quarter of the 21st century, possibly within the first decade.
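
The extrapolation behind these dates is ordinary compound doubling, and it can be made explicit. The sketch below is purely illustrative: the 1998 starting point of roughly 10^12 operations per second (on the order of the fastest supercomputers of that era) and the twelve- to eighteen-month doubling times are assumptions chosen for demonstration, not figures drawn directly from Bostrom’s argument.

```python
import math

def year_reached(target_ops, start_year=1998, start_ops=1e12, doubling_months=18):
    """Estimate the year at which a target ops-per-second figure is reached,
    assuming capacity doubles every `doubling_months` months (Moore's-law-style growth)."""
    doublings = math.log2(target_ops / start_ops)
    return start_year + doublings * doubling_months / 12.0

# Lower bound for a human-equivalent intellect (~10^14 ops/sec):
print(round(year_reached(1e14, doubling_months=12)),
      "to", round(year_reached(1e14, doubling_months=18)))
# Upper bound (~10^17 ops/sec):
print(round(year_reached(1e17, doubling_months=12)),
      "to", round(year_reached(1e17, doubling_months=18)))
```

Under these assumptions the lower bound falls in roughly 2005 to 2008 and the upper bound in roughly 2015 to 2023, which is why the argument is sensitive above all to how long the doubling trend persists.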

How will we be able to identify or understand the ontology of such a system, and define whether or not this system constitutes a ‘being before God’ in an ontotheological sense? Among careful philosophers of computer science, it has become evident that human beings are already reaching the limits of their ability to rapidly create a self-aware system. In the interest of more rapidly reaching the goal of actual “artificial intelligence” (however that may be defined), computer scientists have been working for many years towards creating lower-level systems called ‘Seed AI’ that can write – and improve – their own code. The task is quite similar to that of teaching animals symbolic constructions, and the creation of such symbolic realities should necessarily change the reality and capabilities of artificial intelligences. These systems, in theory, would be able to “boot-strap” themselves into intelligence and self-awareness, or as the technical explanation has it: “become capable of recursive self-improvement.”

The first step towards such “boot-strapping” is to “teach” a computer system to write code itself, and to create its own symbolic logic and symbolic self-construction. Yet if a system can ‘sense’ and perceive its own consciousness and being in much the same manner as we can – and thereby “confront” its reality – this would seem to be an adequate test for full personhood. As we have previously observed, language as symbol changes the capacity of a being to be ‘self-confrontational’ and thereby to transcend our reality and participate in the divine. Such a new being would participate – actively – in seeking the harmony of the universe.

Yet unfortunately, no current process theologian provides us with practical guidance for defining or categorizing a system of matter that can begin to modify itself in the areas of intelligence and self-awareness, as an AI would. In this area, Xavier Zubiri provides a longer explanation which is in harmony with Whitehead’s essential thought, yet assists our understanding of present being in the world. Zubiri states that our experience of being-in-the-world remains essentially corporeal, and he goes on to provide a three-fold definition of the body as 1) a system of properties and structural positioning which we can understand as our “organism”; 2) a complex which, through organization in the mind, also has a “proper configuration”; and 3) an organization and configuration which determine the real physical presence: the being here-and-now of the sôma. According to Zubiri, it is thus that the body can signify “I, myself” as present “here.” Zubiri’s outline is specifically applicable in the realm of artificial intelligence, especially in regard to the creation of a ‘Seed AI.’ For although leaders of the Seed AI effort do not reference Whitehead or Zubiri, it is clear that they are designing a being that fits the Zubirian model.

The description given by Seed AI proponents and programmers dovetails substantially with the recent idea of “self-directing systems” that “contain a subjective reality, reaching towards objective immortality.” However, in these conceptions, such systems would have only “a limited possibility of self-regulation.” Some recent theologians believe that human beings become both self-directed and self-regulated because they can utterly confront their own systems “with all [their] present and future possibilities,” and thus confront themselves in their entirety. Given the data regarding the construction of AI beings, I believe it is now possible to accept the idea of “self-directing” and “self-regulating” systems, and to apply Zubiri’s criteria to beings other than those organically or biologically ‘human,’ enabling them to be considered ‘persons’ in an ethical, theological and spiritual sense.

In the outline of ‘personhood’ provided by Zubiri, the first characteristic of a transcendent being is that it has a system of properties and structural positioning which the being can understand as its organism or organization. This concept of “organization” is similar in spirit – if not in fact – to the “self-directing” description noted above. Thus, it is interesting to see that the creators of a Seed AI deem ‘self-understanding’ the first requisite of an AI, which they define as:

The ability to read and comprehend source code; the ability to understand the function of a code fragment or the purpose of a module; the ability to understand how subsystems sum together to yield intelligence; plus the standard general-intelligence ability to observe and reflect on one’s own thoughts.

Obviously, the Seed AI team is attempting to provide their creation with a symbolic system (source code) which allows the system to understand its own intelligence and, in fact, to reflect on its own organization and thoughts. Such an effort seems to parallel precisely the effort of Whitehead and Zubiri to define the informed constitution of a ‘soulful’ being.
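
A trivial illustration of this first requisite – a program that can retrieve and inspect its own source – can be written in a few lines of Python. This is only a toy sketch of the mechanical precondition for ‘self-understanding,’ not anything resembling comprehension, and it assumes the code is saved and run as an ordinary script file:

```python
import inspect
import sys

def fitness(x):
    """A small module the system might later wish to 'understand' and improve."""
    return x * x

# The running program retrieves its own source code and reflects on it.
own_source = inspect.getsource(sys.modules[__name__])
print(f"This module defines {len(own_source.splitlines())} lines of code.")
print("Source of the function 'fitness':")
print(inspect.getsource(fitness))
```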

The second principle defined as onto-theologically necessary for transcendence is that the being’s self-aware mind is ‘organized,’ and creates a self-understanding which has a “proper configuration.” The Seed AI effort’s second principle is quite similar: computer scientists are working to allow for “self-modification,” which they define as “the ability to optimize a code fragment, modify the function of a module, or redesign one’s own architecture.” An AI ‘being’ would be capable of organizing and configuring its own incorporation or corporeal essence – such a being could move code, create code, and modify logic. Whitehead would call such a being “self-regulating”; in Zubirian terms, such a ‘being’ is capable of creating its own thoughts, configuring its own logical identity, and modifying its own movement forward through history.
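
The second requisite, self-modification, can be cartooned in the same spirit. The toy sketch below illustrates the bare mechanism only, not anything resembling the actual Seed AI architecture: a program replaces one of its own functions at run time by compiling replacement source code that it has itself composed.

```python
import textwrap

def respond(x):
    return x + 1              # original behaviour

print(respond(10))            # -> 11

# The program composes replacement source code for its own function...
new_source = textwrap.dedent("""
    def respond(x):
        return x * 2          # 'optimized' replacement
""")

# ...compiles it, and swaps the old definition for the new one.
namespace = {}
exec(new_source, namespace)
respond = namespace["respond"]

print(respond(10))            # -> 20
```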

Finally, Zubiri defines an incorporated being with these properties of “organization” and “configuration” as “determining the real physical presence: the being here-and-now of the sôma” – which communicates the being’s essential nature, self-understanding and expression in the world. The Seed AI project has a similar, if not identical, goal of “recursive self-enhancement.” Self-enhancement which builds upon itself is a constant and ongoing act of self-expression. According to Seed AI thinkers, it will be “the ability to make changes that genuinely increase intelligence, smartness, such that new possible improvements, or a new class of possible improvements, become visible to the AI.” In short, ‘recursive self-enhancement’ provides an AI being with a comprehension of its own self, and becomes that being’s conscious expression in the world at large.
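
What makes the enhancement ‘recursive’ is the feedback loop: each accepted change becomes the base from which the next change is proposed and judged. The sketch below is a bare hill-climbing caricature of that loop, with the ‘intelligence’ of a candidate design reduced to a single numeric score – an enormous simplification of what Seed AI proponents actually describe:

```python
import random

def score(design):
    """Stand-in 'intelligence test': higher is better. A real system would need
    a far richer measure of its own capability than this toy function."""
    return -sum((p - 3.0) ** 2 for p in design)

current = [0.0, 0.0, 0.0]
for generation in range(200):
    # Propose a small modification to the current design.
    candidate = [p + random.gauss(0, 0.1) for p in current]
    # Keep the change only if it genuinely improves the measured score;
    # the improved version then becomes the base for further improvement.
    if score(candidate) > score(current):
        current = candidate

print("final design:", [round(p, 2) for p in current])
print("final score:", round(score(current), 3))
```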

Out of such self-direction and self-regulation comes the possibility of ‘self-confrontation,’ and thus also a deeper self-understanding which transcends this reality and reaches towards the divine. As Zubiri writes, a being has “three moments: organization, configuration, and corporeity. These are essentially distinct. The somatic function cannot be identified with either the configurational or the organizational function…. Man’s radical principle as corporeity, as sôma, establishes a configuration, and this configuration is what establishes an organization… configuration and organization are modes of realizing corporeity.” With an understanding of the nature of Seed AI, one can thus extrapolate that if we create – or extend our ‘selves’ into – another being that is both incorporated and soulfully conscious in such a manner that it is self-organized, configured and fully incorporated, we may begin to wonder about the transcendent nature of that being.

If a Seed AI considers the classes, objects, and instances it observes in its electronic realm more “real” to itself than the objects we see and observe in our organic realm, who is to say which consciousness and mode of embodiment is more informed by soul? Plato’s metaphor of the cave seems applicable: but in an equal balance of intelligences and perceptions, who is the shadow, and who is the flame? Once we move beyond biological necessity to a symbolic understanding of being, an understanding of transcendence requires investigation into evolutionary science.
