The Death of Intelligence
- Deodato Salafia
- Sep 8, 2024
- 7 min read

From April 2024 until today, on this magazine, we've explored philosophy, history, and perspectives of artificial intelligence (AI), ethics, and even touched upon spirituality. It's time to take stock and offer some reflections.
Blessed Nature, Blessed Man

We've established that AI is just another part of nature, of creation (that's precisely the point). AI is part of the realm. In this realm, man has always been the most intelligent being, more than animals, more than anything else in nature. Nature is beautiful, but man is the intelligent being. Intelligence is not easy to define, but whatever the definition, if we were to build a scale, man has always stood at the top. What has happened in recent years is that AI has begun to appear on the verge of challenging this supremacy. As discussed in a previous article, the theory of computation and the sheer power of machines could ultimately prevail in many areas that have always been reserved for human intelligence. It's estimated that humanity has written some 130 million books, while all the information produced to date, including the Internet, video, and so on, amounts to 147 zettabytes, projected to reach 181 by 2025, whereas it was only 64.2 in 2020. A zettabyte is a trillion gigabytes: 10^21 bytes, a 1 followed by 21 zeros. Statistical rules applied to very large numbers (Big Data) will let computers predict our behavior with extremely small error. Many people argue that humans can do what they do because we have a culture passed down through generations, one that perhaps even shapes our physiology. If that is true, imagine what zettabytes of experience can do.
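As a back-of-the-envelope check of these figures, here is a minimal sketch in Python; the 64.2, 147, and 181 zettabyte values are the estimates quoted above, and the growth rate it derives is only implied by them, not stated in any source:

```python
# Back-of-the-envelope check of the data-volume figures quoted above.

ZETTABYTE = 10**21  # bytes: a 1 followed by 21 zeros
GIGABYTE = 10**9    # bytes

# One zettabyte is a trillion gigabytes.
gb_per_zb = ZETTABYTE // GIGABYTE
print(gb_per_zb)  # 1000000000000

# Implied compound annual growth rate from 64.2 ZB (2020) to 181 ZB (2025).
zb_2020, zb_2025 = 64.2, 181.0
cagr = (zb_2025 / zb_2020) ** (1 / 5) - 1
print(f"{cagr:.1%}")  # roughly 23% per year
```

At that implied pace, the global data volume roughly doubles every three to four years.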

This fear of losing our primacy generates a great deal of unease in society, to the point of creating panic. One might object: after all, man created AI, so whatever power it reaches is an achievement attributable to man. But in reality it isn't so; it's like saying that because man created the atomic bomb, he can also nullify its effects once it explodes. Furthermore, man is himself part of nature, and in the case of AI we can't speak of an invention like the airplane or the refrigerator. AI is first and foremost a concept, a formalism, so much so that it was theorized more than half a century ago. AI isn't an invention but a discovery, like black holes or quantum physics: things that simply exist. Conceptually, AI has always existed; until now it just wasn't possible to use it. AI needs artifacts to be executed, but it is primarily a theoretical model that has existed at least as long as man (we've pointed to Plato as the first computer scientist). Note that the fact that AI is a formalism, or at least a concept, elevates its status. This connotation forces man to understand that, all things considered, to be intelligent it wasn't necessary to be intelligent. AI threatens man by proving itself more intelligent than he is, stripping him of his claim to be the most intelligent in the realm.
But the scenario appears to be much, much more unsettling, and it is this: intelligence doesn't exist, except as a purely theoretical concept. After the death of God proclaimed by Nietzsche, Marx, and Lévi-Strauss, the death of man by Michel Foucault (The Order of Things, 1966), and the death of art by Arthur Danto, it is now intelligence that is dying. AI is software and computer architecture; it isn't truly very intelligent. In fact, it performs statistical predictions, mimicking and simulating what humans would do after observing them and reducing their actions to numerical matrices. Recently, and provocatively, Noam Chomsky declared that AI is plagiarism software. Nevertheless, despite such statements, few of us see it that way; some, even in scientific circles, are already speaking enthusiastically of artificial life (ALife), while others, conversely, are terrified (we discussed it here).
The Ethical Question

Why does AI software (call it a system, platform, or architecture if you prefer) carry a heavier ethical burden than other computational and social constructs? With AI, the issue isn't the ethics of content or interactions, as it might be for social platforms from X to OnlyFans. With AI, the ethical concern is that it appears to generate content and actions relevant to humans while simulating being human itself. The real problem is that many ordinary people are starting to see AI as possessing a certain humanity. This isn't true, but it appears that way, and if something appears a certain way, that appearance is probably what matters. What's more, the ethical burden grows when you consider the speed with which AI is improving its ability to simulate humans. Many are debating how to limit and regulate the use of AI, and even its scientific development, so that humanity can legally define the boundary between man and machine and ensure that man remains in control. There is urgency in regulating this because of the fear that man could truly lose control, or at the very least that, thanks to AI, a few men could come to control many others. In reality, all of this is already happening with social media and apps; what is terrifying is the combination of two factors: being controlled by software without even knowing it. Just think of the devastating tsunami about to engulf the world of information, where we will no longer truly know whether a piece of information is true and verified by a human, or false and invented by software.
A Different Point of View

However, throughout the published articles, our approach to the ethical question has been very different. We haven't delved into the ethical use of AI. Instead, we've asked what ethics is available to man (not to AI) if AI reaches extremely advanced levels (super AI and general AI). Simply put: how would man interpret himself if AI were to reach a level (real or simulated, it matters little) of intelligence, humanity, empathy, and effectiveness in solving not only technical problems but also problems typically human (psychoanalysis, medicine, law, spirituality, childhood, care for the elderly, to give a few examples)? Stretching the imagination: how should man interpret himself if he were to lose the primacy of being the most intelligent in creation? The ethical question is therefore first and foremost an ontological question: what is man? Until today, we knew one fundamental thing: man is exceptionally intelligent; he governs nature and his own nature, and although not perfect, he operates at such a high level that he seems the image of a god. With the advent of AI, this conviction is crumbling. Man is working to create an artificial version of himself. For millennia, and especially after Darwin, man has wondered how life could arise from matter. Now, while still unable to answer that question, he is actually doing it; it seems he is not far from the result. Unlike other discoveries, even far more phenomenal ones, with AI man himself is called into question. Think of the Copernican revolution or the discovery of relativity: for their time undoubtedly more phenomenal, enormous discoveries, yet they left man where he was.

Whether one believes in a god or not matters little; atheists and believers are united by the awareness that man is a special piece of nature. Even the best disciple of the materialist Karl Marx or the evolutionist Charles Darwin cannot in any way accept assimilating a human being to a lizard, a bacterium, a rock, or a computer. Atheists and believers alike have always given man a role apart from the rest of nature. We've addressed the issue along several directions, which we can summarize as follows:
- we examined the positions of some early modern philosophers who assigned nature so powerful a role as to accept anything, including a reduction of man's supremacy (Thomas Hobbes, Giordano Bruno, Baruch Spinoza);
- we followed universal Darwinism, which extends Darwin's theory beyond biology and chemistry, pushing it as far as bionics;
- we investigated computation and asked whether every one of our reasonings is computable (executable by a Turing machine);
- we asked whether there is something in man (consciousness, soul, or something else) that lies beyond his physical and biological nature and by definition cannot be executed by any computer;
- we explored the relationship between bionic and biological reality, investigating the role of metabolism in the latter;
- we defined what information is, how it is measured, what doubt is, and how doubt in turn presupposes some information; going back provocatively, we said that the source of all doubt is God;
- finally, we revived the philosophy of the theologian Bernard Lonergan to argue that man has an innate drive to know everything, which gives him movement and creative drive.
In this journey, certainties and opinions have emerged. There are two certainties. We've hinted at one: our society is genuinely concerned about the evolution of AI, to the point of questioning its own identity. There are people who talk to deceased loved ones transposed into AI, as we discussed, or who even marry it. The second certainty is that without a metaphysics (be it a god or something else), we would be forced to appeal to nature alone. If that were the case, chemistry and physics, which underlie biology, would be exactly as powerful as a computational system that simulates them. Without a metaphysics, the resulting materialism would nullify free will; the world would proceed by necessity. This doesn't imply that it "is worth less"; the analysis is philosophical, not value-based. We've seen how some contemporary philosophers and scholars identify quantum physics as something that transcends materialism, providing man with a kind of consciousness and free will, while not implying the presence of a god or any specific transcendence. For now, there are no concrete developments in this direction, with the exception of the work of Roger Penrose and Stuart Hameroff; their theory, however, is highly controversial. The attempt of some researchers is to demonstrate that man has a consciousness that can never be transferred to software, without this implying any metaphysical or divine reality.

Among the various opinions, we count two: one deeply disturbing, the other reassuring. The disturbing one is that super-fast computers can perform extremely complex computations, compressing the computational equivalent of many human years into a single minute. This means the machine will be better than man at almost everything, or everything, and it could happen very soon. The second opinion emerged with the article "I'll Tell You About the Theologian Who Proved Why AI Will Never Surpass Man" and tells us that man has a drive for knowledge that no machine can ever have. Will this make the difference? Or will it be a meager consolation? We fear it won't be possible to limit the use of AI through laws (we started talking about it here), just as it wasn't possible to prevent atomic weapons from spreading worldwide. Another opinion that emerged in this journey relates to universal Darwinism: if we try to do without the romantic need to be tied to biology and species, the world will proceed with its own categories, as nature, or God, designed it. The death of intelligence isn't the disappearance of something that once existed; it is simply one reading of a possible scenario, already coded from the beginning. It has nothing to do with God; believers shouldn't be agitated. A simple reading is always a good reading, and, after all, God too is a simple reading. Revelations can be exogenous or endogenous, but the truth is only one.