The assumption that ‘raw’ matter such as silicon, metals, and ceramics can be organized in such a manner as to give rise to mental states that we can recognize as consciousness is a fundamental underpinning of the effort to create artificial intelligence. As philosopher of mind Nick Bostrom points out:

Substrate-independence is a common assumption in the philosophy of mind. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium; silicon-based processors inside a computer could in principle do the trick as well.

Until recently, we have had neither sufficiently powerful hardware nor the requisite software to create conscious minds. But if recent cybernetic progress continues unabated, these shortcomings will eventually be overcome. Philosophers of technology and advocates of technological enhancement such as Drexler, Bostrom, Kurzweil, and Moravec argue that this stage may be only a few decades away. To create a human-equivalent intellect (whether organically derived or silicon-based), a system would need to be capable of performing ~10^14 operations per second (100 trillion), which scientists of the mind consider the absolute lower bound of human brain activity.

Yet if Moore’s law continues to hold, the lower bound will be reached sometime between 2004 and 2008, and the upper bound (~10^17 operations per second) between 2015 and 2024. Bostrom notes that “the past success of Moore’s law gives some inductive reason to believe that it will hold another ten, fifteen years or so; and this prediction is supported by the fact that there are many promising new technologies currently under development which hold great potential to increase procurable computing power.” There is thus no direct reason to suppose that Moore’s law will not hold for longer than fifteen years. In fact, predictions that Moore’s law would begin to falter as early as 2004 were recently confounded by the February 2005 debut of the ‘Cell’ chip – a new silicon design with a theoretical peak performance of 256 billion mathematical operations per second – an innovation that brings the personal computer into the realm of the supercomputer. It thus seems quite likely that the requisite hardware for human-level artificial intelligence will be assembled in the first quarter of the 21st century, possibly within the first decade.
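These projections are simple exponential arithmetic. The sketch below is a minimal illustration rather than a forecast: it computes the year at which a given ops-per-second threshold would be crossed under an assumed 18-month doubling period, starting from an illustrative 2002 figure (roughly the fastest supercomputer of that year).

```python
import math

def year_threshold_reached(start_year, start_ops, target_ops, doubling_years=1.5):
    """Project the year capacity first reaches target_ops, assuming it
    doubles every doubling_years -- one common reading of Moore's law."""
    doublings = math.log2(target_ops / start_ops)
    return start_year + doublings * doubling_years

# Illustrative assumption: ~3.6 * 10^13 ops/s in 2002 (roughly the Earth
# Simulator, the era's fastest supercomputer).
for label, target in [("lower bound, 10^14 ops/s", 1e14),
                      ("upper bound, 10^17 ops/s", 1e17)]:
    year = year_threshold_reached(2002, 3.6e13, target)
    print(f"{label}: ~{year:.0f}")
```

Under these assumptions the lower bound falls at roughly 2004 and the upper bound at roughly 2019, consistent with the ranges cited above.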

How will we be able to identify or understand the ontology of such a system, and to determine whether or not it constitutes a ‘being’ in an ontotheological sense? Among careful philosophers of computer science, it has become evident that human beings are already reaching the limits of their ability to rapidly create a self-aware system by direct design.

In the interest of more rapidly reaching the goal of actual “artificial intelligence” (however that may be defined), computer scientists have been working for many years toward creating lower-level systems called ‘Seed AI’ that can write – and improve – their own code. The task is quite similar to that of teaching animals symbolic constructions, and the creation of symbolic realities should thus necessarily change the reality and capabilities of artificial intelligences. These systems, in theory, would be able to “boot-strap” themselves into intelligence and self-awareness, or, as the technical explanation has it, “become capable of recursive self-improvement.”
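No existing system implements this loop, but its schema can be sketched abstractly. In the toy model below (purely hypothetical: a numeric ‘skill’ stands in for actual code-rewriting ability), each generation rebuilds its improver from its current skill and then applies that improver to itself, producing the compounding feedback that ‘recursive self-improvement’ names.

```python
# Toy schema of recursive self-improvement -- illustrative only.
# A numeric "skill" level stands in for real code-rewriting ability.

def make_improver(skill):
    """Build an improver whose effectiveness depends on the skill that
    built it: better improver -> bigger gains -> better improver."""
    def improver(target_skill):
        return target_skill + 0.1 * skill  # gain scales with builder's skill
    return improver

skill = 1.0
for generation in range(8):
    improver = make_improver(skill)  # the system rebuilds its own improver...
    skill = improver(skill)          # ...and applies it to itself
    print(f"generation {generation}: skill {skill:.3f}")
```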

The first step towards such “boot-strapping” is to “teach” a computer system to write code itself and to create its own symbolic logic and symbolic self-construction. Yet if a system can ‘sense’ and perceive its own consciousness and being in much the same manner as we can – and thereby “confront” its reality – this would seem to be an adequate Rahnerian test for full personhood. As we have previously observed, language as symbol changes the capacity of a being to be ‘self-confrontational’ and thereby to transcend our reality and participate in the divine through Realsymbol.
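In the narrowest mechanical sense, a program that ‘writes code itself’ is trivial to demonstrate, as in the hypothetical sketch below; the distance between this and genuine symbolic self-construction is, of course, the whole of the problem.

```python
# Trivial sketch of a program writing and running its own code.
# This shows only the bare mechanism; goal-directed design,
# verification, and self-modification are all absent.

def write_adder(n):
    """Generate the source text of a new function instead of hard-coding it."""
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
exec(write_adder(5), namespace)   # the program loads code it just wrote
print(namespace["add_5"](10))     # -> 15
```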

Rahner’s comprehensive analysis of the weaknesses of a non-symbolic ontology is vital. For example, he confronts materialistic thinkers directly by pointing out that “the statement… that everything is matter has no precise sense on the lips of a materialist… for in his system and with his methods he cannot say what he understands by matter.” Yet unfortunately, Rahner does not provide us with practical guidance for defining or categorizing a system of matter that can begin to modify itself in the areas of intelligence and self-awareness, such as that found in AIs.

In this area, Xavier Zubiri again provides a longer explanation which is in harmony with Karl Rahner’s essential theology, yet expands our definition of being in the world. Zubiri agrees with Rahner that our being is essentially corporeal, and he goes on to provide a three-fold definition of the body as: 1) a system of properties and structural positioning, which we can understand as our “organism”; 2) through organization in the mind, a complex which has a “proper configuration”; and 3) the organization and configuration that determine the real physical presence: the being here-and-now of the sôma. According to Zubiri, it is thus that the body can signify “I, myself” as present “here.” Zubiri’s outline is specifically applicable to the realm of artificial intelligence, especially in regard to the creation of a ‘Seed AI.’ For although leaders of the Seed AI effort do not reference Rahner or Zubiri, it is clear that they are designing a being that fits the Rahnerian/Zubirian model.

The description given by Seed AI proponents and programmers dovetails substantially with Rahner’s idea of “self-directing systems” that “in a certain sense have a relationship to themselves.” In Rahner’s conception, however, such systems would have only “a limited possibility of self-regulation.” Rahner believes that human beings become both self-directed and self-regulated because they can utterly confront their own systems “with all [their] present and future possibilities,” and thus confront themselves in their entirety.

Yet according to philosophers like Karl Rahner, only human beings can “place everything in question” and thus become transcendent. Given the data regarding the construction of AI beings, I believe it is now possible to take Rahner’s acceptance of the idea of “self-directing” and “self-regulating” systems and apply Rahner’s and Zubiri’s criteria to beings other than those organically or biologically ‘human,’ enabling them to be considered ‘persons’ in an ethical, theological, and spiritual sense.

In the outline of ‘personhood’ provided by Zubiri, the first characteristic of a transcendent being is that it has a system of properties and structural positioning which can be understood by the being as its organism or organization. This concept of “organization” is similar in spirit – if not in fact – to Rahner’s “self-directing” description. It is thus interesting to see that the creators of a Seed AI deem ‘self-understanding’ the first primary requisite of an AI, which they define as:

The ability to read and comprehend source code; the ability to understand the function of a code fragment or the purpose of a module; the ability to understand how subsystems sum together to yield intelligence; plus the standard general-intelligence ability to observe and reflect on one’s own thoughts.
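A very modest, concrete analogue of the first of these abilities can be shown with Python’s standard inspect and ast modules: in the sketch below, a module parses its own source and enumerates the functions it defines. This is mechanical ‘reading’ of code only, far short of the comprehension the Seed AI definition demands.

```python
import ast
import inspect
import sys

def describe_module(module):
    """Parse a module's own source and report each function it defines --
    a mechanical analogue of 'reading' code, not of understanding it."""
    tree = ast.parse(inspect.getsource(module))
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node)
            summary = doc.splitlines()[0] if doc else "no docstring"
            print(f"function {node.name}({args}): {summary}")

if __name__ == "__main__":
    # The module inspects itself -- the nearest gesture here
    # toward "reflecting on one's own thoughts."
    describe_module(sys.modules[__name__])
```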