
Artificial Intelligence, Philosophically Speaking (Pt. 1)



We recently wrote about how the meaning of being "human" might be framed in relation to AI (through the thought of Giordano Bruno, William of Ockham, and Thomas Hobbes). We have always had to talk about the existence of God (understood as metaphysics, that which goes beyond the physics we know or can ever rationally know) not because we are attached to God, but because without metaphysics there would be no discussion: AI would simply be a piece of nature. To be fair, a careful philosopher who brought in quantum physics might object that Heisenberg's uncertainty principle could suggest the existence of a world "with a metaphysics," yet without necessarily including a God.


The uncertainty principle, formulated by Werner Heisenberg in 1927, radically changed all human knowledge up to that point. It states that at the subatomic level particles do not "move" deterministically (as, according to classical physics, a tennis ball hit by Jannik Sinner at a specific point would), but probabilistically: each particle can be in multiple places at the same instant and have an undetermined momentum (the particle's mass multiplied by its velocity). Worse: the uncertainty principle shows that it is not possible to measure a particle's position and momentum simultaneously without altering one or the other, since the act of measurement itself changes the system (like when you ask your partner to go on vacation alone; once the question is asked... the relationship changes, regardless of what you actually do). Now, because of this principle, and only because of this principle, materialists could accept the concept of free will. If we remain within classical physics (where it is perfectly known where tennis balls will end up once hit), there would be no free will: the Big Bang plus the laws of chemistry and physics would determine our entire future 100%; there would be no human distinct from nature (as Giordano Bruno and Baruch Spinoza held); there would be no excess beyond an already written story.
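For reference, and in standard textbook notation rather than anything quoted from Heisenberg himself, the principle puts a lower bound on how precisely position and momentum can be known at once:

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where \Delta x and \Delta p are the uncertainties in position and momentum, and \hbar is the reduced Planck constant.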


So it is precisely thanks to the uncertainty principle that we could hypothesize a reality in which God does not exist and, at the same time, some form of free will exists. Setting this objection aside for a moment (we would reply, somewhat artificially and without any claim to settle the matter, that quantum physics is still physics), we remain, in this article, within a dualism: either God exists, and we philosophically ask what role AI has in relation to humans, or God does not exist, and the question loses its philosophical weight, while certainly retaining its economic, social, and ethical weight.

Raffaello Sanzio, The School of Athens (detail)

The first to wonder about artificial intelligence was Plato. But first, let's clarify a point: computer science does not mean "computers"; it means the manipulation of information. Computer science exists even without computers and even without electricity. Computer science is computable information: every time a finite sequence of steps is formal, repeatable, produces an effective, determinable result, and reaches that result in a finite time... we are dealing with computer science.
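To make that definition concrete, here is a minimal sketch of ours (not from the article) of such a procedure: Euclid's algorithm for the greatest common divisor, which people executed by hand for over two thousand years before electricity existed.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: formal, repeatable, and guaranteed to reach
    a determinate result in a finite number of steps."""
    while b != 0:
        a, b = b, a % b   # each step strictly shrinks the second number
    return a

print(gcd(252, 198))  # -> 18, computable with pen and paper just as well
```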

Jacques-Louis David, The Death of Socrates

Okay, okay... someone might object that quantum physics opens the door to quantum computers, which would violate part of what I just wrote, but let's move on; they're not yet commercially available to mere mortals! So, as I was saying, Plato, in the dialogue Euthyphro (set around 399 BC), reports that Socrates asked his friend Euthyphro how one could formally and universally define being pious or impious. The context is that both were queuing before the King Archon: Socrates to answer the charge of leading young people toward unconventional ideas and deities, Euthyphro to denounce his own father for murder (the father had left a servant to die of hunger who, in turn, had killed a household slave while drunk). Socrates reasons that Euthyphro must be truly pious to accuse his own father, so he asks him for a reliable criterion for recognizing when someone is certainly pious: "What exactly are the characteristics that make acting pious, so that I can use them as a standard when I judge other men?"



Socrates is asking his friend for what computer scientists would call "an executable procedure." For Plato, all knowledge should be representable by executable specifications: as Hubert Dreyfus points out in What Computers Can't Do: A Critique of Artificial Reason (1972), if in "know-how" the "know" does not become "how," we are not in the presence of knowledge but of mere "belief." Plato, however, partly contradicts himself, as does Descartes: the former never abandons the concept of "intuition," the latter that of "consciousness."
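Read this way, Socrates is asking for something like the hypothetical function below. It is a sketch of ours, not the article's, and the criteria are deliberately empty placeholders; the dialogue's whole point is that nobody manages to fill them in.

```python
# A hypothetical "executable procedure" for piety, in the spirit of Socrates'
# request to Euthyphro. Every criterion would have to be explicit and checkable;
# the names below are placeholders, not a real theory of piety.
def is_pious(action: dict) -> bool:
    explicit_criteria = (
        lambda a: a.get("pleases_the_gods", False),
        lambda a: a.get("is_just", False),
    )
    return all(criterion(action) for criterion in explicit_criteria)

print(is_pious({"pleases_the_gods": True, "is_just": True}))   # True
print(is_pious({"pleases_the_gods": True, "is_just": False}))  # False
```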


We have to wait for Hobbes to dispense with any excess beyond matter and to establish that everything is computable. For Hobbes, when a man "reasons," he is logically processing formal elements. The culmination that then leads to a complete theory of information (computer science) arrives with the invention of the binary system (0 and 1, thanks to the mathematician Leibniz in 1703 with the Explication de l'Arithmétique Binaire) and with George Boole who, following in Hobbes's footsteps, believed that the way humans reason could be formalized and thus "invented" an algebra of logic (a bit of gossip: Boole married the niece of Sir George Everest, after whom the mountain is named).
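As a quick illustration of what the binary system buys us (our example, not Leibniz's): any whole number, and by extension any piece of discrete information, can be rewritten using only the two symbols 0 and 1, and recovered from them without loss.

```python
n = 1703                       # the year of Leibniz's paper, used only as an example
print(bin(n))                  # '0b11010100111'
print(int("11010100111", 2))   # 1703: two symbols are enough to reconstruct it
```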

Boolean logic provides the following rules:

  • True AND False = False
  • True OR False = True
  • True AND True = True; True OR True = True
  • False AND False = False; False OR False = False
  • NOT True = False; NOT False = True

to which we add, for clarity: IF True THEN True, while IF False THEN "anything goes" (a false premise makes the implication true; see the small check in code below). The last step towards computation, and thus towards the possibility of seeing artificial intelligence proceed apparently autonomously and at adequate speed, arrived with Charles Babbage's conception of the first computer, the Analytical Engine (1835): although mechanical, it genuinely replicated the functioning of a modern digital computer, thanks to punched cards and to logical and mathematical functions "hardwired" by means of gears, like a complicated clock.
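For readers who want to poke at these rules, including the counterintuitive "false premise implies anything" case, they can be checked in a few lines of Python (a sketch of ours; the implies helper is our own shorthand, not a built-in):

```python
# Each print mirrors one of the Boolean rules listed above.
print(True and False)        # False
print(True or False)         # True
print(not True, not False)   # False True

# Material implication "IF p THEN q" can be rewritten as (not p) or q,
# so a false premise makes the whole implication true.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

print(implies(True, True))    # True
print(implies(False, True))   # True  ("anything goes")
print(implies(False, False))  # True  ("anything goes")
```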

An analog computer proceeds by varying physical quantities (such as the volume of a liquid or electrical voltages), while a digital computer proceeds on a purely formal basis, encoded in binary signals (BITs). Babbage's was a mechanical computer with the formal power of a digital one, therefore programmable, yet buildable without electrical components. Well before Dreyfus's writings of 1972, it is worth noting that in 1950, in the journal Mind, Alan Turing published an article titled "Computing Machinery and Intelligence," in which he described what would become known as the Turing test: virtually every subsequent article on artificial intelligence builds on it.

From this first historical dissertation, we can therefore understand that "artificial intelligence" is something that must be defined within what is formally computable, but at the same time its definition requires proof of real intelligence. To date, the intelligence of automatic systems has been schematized into various levels, which we summarize as follows:

  • Narrow AI, when the computer can do better than humans at specific tasks (like playing chess)

  • Generative AI, what we have today, which can write texts, create summaries, and identify sentiment

  • General AI, when AI will be able to solve problems at a human level

  • Super AI, when AI will be able to pose problems and solve them even where humans cannot reach

Stefano Tamburini, Ranxerox

To this list we should add Artificial Life (Alife), where the ethical and ontological questions force us to define what life is; Alife is the latest frontier for researchers, since AI is no longer just an oracle gently waiting to be consulted but becomes part of a living being, a totally digital one (though on that "totally," we shall see).

In the period between General AI and Super AI (also called strong AI), according to Makoto Kureha, professor of ethics at Waseda University (in Implications of Automating Science, 2023), AI will be able to generate scientific documents; the task of scientists will be to understand and "translate" the results of such research so that they are understandable and applicable.

An Idea, a Dream

What has brought us from Babbage, or even from Plato, to today? Our Plato example can be considered AI only retrospectively and for didactic purposes. Even in the 1970s, with supercomputers already available, the scientific community had many dreams but decidedly scarce and discouraging results; yet researchers pressed on undeterred, certain that the result would come.

It is again Hubert Dreyfus, in '72, who writes: "In spite of grave difficulties, workers in Cognitive Simulation and Artificial Intelligence are not discouraged. In fact, they are unqualifiedly optimistic." Underlying this "unqualified optimism" is one basic assumption: human reasoning proceeds discretely, we could say in clocked steps, exactly like a computer's. If nature, with this method, has produced human intelligence, then science, with the same method, will be able to produce an equally valid artificial intelligence. It is worth summarizing the assumptions underlying those researchers' work in that period; they are:

  • The biological assumption. As mentioned, the human brain proceeds in discrete, clocked steps, and neuronal connections communicate with signals that can be formalized as on/off.

  • The psychological (or philosophical) assumption. David Hume's empiricism holds that all knowledge arrives in the human mind by means of impressions, which we can compare to information stored in BITs. On the other side, Kant's idealism, with its categories of the intellect (among which we could also include Boolean logic), suggests that the human brain also has its own "basic software." Hume and Kant thus provide the model for computer scientists; the path was paved. We will return to this very interesting point soon.

  • The epistemological assumption. Human intelligence can be represented formally and therefore executed by an automaton such as a computer. This does not imply that humans reason like a computer, but that their reasoning is formalizable. To make it clear: thanks to differential equations we can model the motion of the planets, even though the planets "know" nothing about differential equations (a small numerical sketch of this follows the list).

  • The ontological assumption. All knowledge must be expressible in BITs, and thus it must be discrete (countable), explicit, and determined. As we said above, at this point the hypothesis of indeterminacy is not taken into consideration: everything must be representable clearly. This assumption implies that, when dealing with intelligence, the whole must be assembled from the parts; there is no room for knowledge that cannot be made explicit.
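To make the epistemological point concrete, here is a minimal numerical sketch of ours (not the article's) of the planetary example above: Newtonian gravity for a single planet around a fixed star, integrated with crude Euler steps. The planet "knows" nothing about the formula, yet the formula captures the motion.

```python
import math

# Toy model: a planet orbiting a fixed star under Newtonian gravity,
# integrated with explicit Euler steps. Units are arbitrary; the point is
# only that a formal rule reproduces the motion, not that planets "compute" it.
G_M = 1.0                 # gravitational parameter of the star (assumed value)
x, y = 1.0, 0.0           # initial position
vx, vy = 0.0, 1.0         # initial velocity (roughly a circular orbit)
dt = 0.001                # time step

for _ in range(10_000):
    r = math.hypot(x, y)
    ax, ay = -G_M * x / r**3, -G_M * y / r**3   # acceleration from gravity
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"after 10,000 steps the planet sits at ({x:.3f}, {y:.3f}), "
      f"still about {math.hypot(x, y):.3f} units from the star")
```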

These assumptions were formulated back in 1972, certainly at a moment very different from today, but their formulation, like others we will discuss, is a fundamental step in the development of AI.

 
 
 


