
Red Lines for AI: The Factory of Moral Consensus


Epistemological Anatomy of an Appeal


On September 22, 2025, Maria Ressa presented to the UN General Assembly an appeal signed by more than 200 public figures, among them 10 Nobel laureates, former heads of state, and artificial intelligence pioneers, demanding binding "red lines" for AI by the end of 2026. Engineered pandemics, mass disinformation, systematic manipulation: the risks listed are apocalyptic, the urgency palpable, the signatories irreproachable. But before asking what they propose, we should ask what is happening. This is not a technical proposal seeking validation; it is a performative act that transforms the very nature of the debate, turning a technological issue into a moral imperative, where expertise yields ground to authority and precision is sacrificed for consensus.

When a topic moves from the technical to the moral domain, the epistemological regime changes: who can speak, how statements are evaluated, what counts as an argument. In the technical regime, validity is measured with data and experiments, authority derives from demonstrated competence, and the debate takes place among experts who share a common language. In the moral regime, validity is measured by the consensus of authoritative people, authority derives from public prestige, and the debate is carried out through appeals and media pressure on public opinion. This transition is not degeneration; it is often necessary, because complex issues that affect everyone cannot remain confined to specialized journals. But it entails a transformation: what was an open discussion becomes moral positioning, technical uncertainty becomes ethical certainty, "probably" becomes "evidently."


The Nobel as Universal Certifier


A Nobel laureate in Physics has no greater technical competence in AI than a senior machine learning researcher, yet their signature weighs infinitely more. The Nobel functions as a universal epistemic certifier: it confers authority that transcends the specific domain. When Geoffrey Hinton signs the appeal, he brings genuine competence. But when economists, former presidents, and famous authors also sign, they are not technically validating the claims; they are conferring moral legitimacy on the concern. They are saying: "We too, though not AI experts, deem this important enough to put our public reputations on the line." That is political communication, not scientific evaluation. And it works precisely because it does not claim to be the latter.


The Rhetoric of Inevitability


The appeal presents the risks not as possibilities to be evaluated, but as imminent threats. It is the rhetorical structure of the "point of no return"—the same as in the climate debate or the arms race. It does not invite debate on probabilities—it closes it by replacing it with an imperative: we must act, now. "By the end of 2026"—why that date? There is no technical justification. The deadline serves to create urgency, to short-circuit reflection: "we don't have time for details, we must act." Urgency transforms uncertainty into certainty, prudence into paralysis, disagreement into irresponsibility.


The Paradox of Necessary Vagueness


The appeal does not specify exactly what these "red lines" should be. This is not a defect; it is what makes the appeal work. Vagueness maximizes consensus: those who fear surveillance, those who fear unemployment, those who fear autonomous weapons all find space, each projecting their fears onto a surface generic enough to accommodate them all. Vagueness avoids technical conflict. "Banning AI from impersonating human beings": does that include chatbots? Synthetic voice actors? Avatars in video games? Any specification would open a fault line; better to stick to principles, where everyone agrees that "bad things are bad." And vagueness shifts responsibility: "someone must do something," without specifying what. No one can attack it technically because there is nothing technical to attack.



The Confusion of the Three Levels


When we say "red lines for AI," are we talking about regulating scientific knowledge (the algorithms themselves), specific tools (GPT-4, facial recognition systems), or concrete uses (mass surveillance, scams, discrimination)? This is not a detail: it changes everything. Regulating research means limiting science itself, like banning statistics because it can be used to manipulate public opinion. Regulating tools leads to paradoxes: is a chatbot dangerous when it helps a depressed patient or when it scams an elderly person? It is the same tool. Only regulating uses works in practice: we do not ban facial recognition, but its use for mass surveillance without a judicial warrant; we do not ban chatbots, but impersonating real people without declaring it. As with nuclear technology: we do not ban atomic fission, but the use of bombs.

If the signatories said "we want to ban certain algorithms," many scientists would withdraw. If they specified "we want to ban certain systems," industry would object, and the problem of defining which ones would remain. If they clearly stated "we want to ban certain uses," they would have to enter into thorny technical and legal detail. Better to remain vague: "red lines for AI." Everyone interprets it as they prefer, and the apparent consensus hides deep disagreements that will surface the moment a real law has to be written.


Lessons from History


Television in the 1950s generated identical fears: manipulation, mind control, cultural decay. But regulation was built technically: commissions of communication experts, psychologists, and legal scholars worked for years to define specific rules. Not "TV must not manipulate" (vague), but "hidden advertising is prohibited," "equal-time obligations," "no violent content before 10:30 PM" (specific, verifiable, sanctionable). Nuclear power tells the same story: we do not regulate "atomic fission," which is impossible because it is physics; we regulate who can enrich uranium, for what purposes, with what controls. The Non-Proliferation Treaty (1968) works because it translates the moral imperative into verifiable technical mechanisms: counting centrifuges, isotope analysis, IAEA inspections. It took 23 years to get from the bomb to the treaty, not because of diplomatic slowness, but because technical definitions, verification protocols, and enforcement mechanisms had to be constructed. The lesson is clear: regulation works when we regulate uses, not knowledge or tools.


The Three Phases (and the One We Always Skip)


Every serious technological regulation goes through three phases.

Phase 1, Moral Alarm: "This is dangerous, something must be done" (appeals, signatures, media).
Phase 2, Technical Elaboration: "Here is what to do concretely" (expert commissions, pilot projects, comparative analysis).
Phase 3, Binding Law: "Here are the rules and how we enforce them" (parliaments, treaties, sanctions).

The problem with the Red Lines appeal? It skips from Phase 1 to Phase 3, from the alarm to the 2026 deadline for an "international agreement," without the laborious Phase 2. The appeal says: "AI must not impersonate human beings." Transforming this into applicable law requires answering: what constitutes "impersonation"? A chatbot? A deepfake? An avatar? Are there legitimate exceptions (film dubbing, assistants for the blind)? How do you verify that a system "cannot" impersonate? Who is responsible: the developer, the distributor, the user? What sanctions apply? This requires years of multidisciplinary commissions, pilot studies, and adjustments. Not 15 months. Without Phase 2, we risk vague agreements that no one knows how to apply, as with climate.


The Risk of Premature Satisfaction


There is a subtle risk here: anticipatory moral satisfaction. We have made the appeal, gathered the important signatures, held the UN presentation, made the newspapers. We feel we have done something important. And in a sense it is true: we have shifted attention. But that feeling risks becoming a substitute for actually doing something. Look at climate: decades of appeals, summits, Greta Thunberg, intense coverage, overwhelming moral consensus. And emissions continue to rise. Why? Because we are very good at Phase 1 (building moral consensus) and inadequate at Phase 2 (translating it into technical mechanisms that work). We jump from morality to politics without passing through technique. For AI, we risk the same sequence: 2026, a big treaty; 2027, everyone signs vague principles; 2028, no one knows how to implement them; 2030, "AI produced the problems we feared, but at least we had made an appeal!"


The Actual Function of Appeals


Appeals serve a purpose. Not to provide technical solutions, but to shift public attention (from "technical stuff" to "a matter for governments"), to build coalitions (over 70 coordinated organizations), and to create legitimizing pressure (a politician can now act without seeming technophobic). But when society stops here, believing that "200 important signatures" is equivalent to "we know what to do," we confuse consensus building with technical elaboration. Over 300 news outlets covered the appeal, an extraordinary media success. But how many then followed up with articles on how to implement verifiable red lines in practice?


Theatre and Construction Sites


The Red Lines appeal is political theatre. And theatre serves a purpose: it creates the space in which action becomes possible, transforming a technical issue into a collective imperative. With 10 Nobel laureates on their side, a politician has the moral cover to act. But theatre does not build the bridge between moral principles and technical implementation, between alarm and solution. The next step, difficult, less media-friendly, and thankless, is to translate the appeal into technical work: defining what a "red line" means, how to verify it, who enforces it, how to handle edge cases. That is where it will be decided whether in ten years we will say "AI was regulated sensibly" or "we made great appeals, but then what we feared happened." History, from nuclear to climate, suggests that we are much better at raising the alarm than at building solutions. Perhaps the real red line not to cross is this one: confusing having signed an appeal with having solved a problem.

 
 
 
