Game Theory: Eye for an Eye or Forgiveness?
- Deodato Salafia
- Jul 7, 2024
- 5 min read

Mathematics, Game Theory, and the Science of Human Interaction
Let’s be honest: who doesn’t hold a little resentment toward mathematics? Raise your hand. It feels sterile, overly simple, and too detached from our innate (innate?) need to feel human and emotional.
Yet, who among us does not want to live better—or at least, to live? Mathematics has been an essential tool for the advancement of physics, an incredible conceptual structure that has enabled us to model and even predict reality. This same theoretical invention, mathematics, is now shaping Artificial Intelligence, and in some laboratories, it is even paving the way for artificial life (A-life).
If math is more connected to our real lives than we’d like to admit, it has also—thanks to some brilliant minds—entered the realm of social sciences through a discipline known as Game Theory.
As a computer scientist, I first encountered Game Theory in the early 1990s through a book that changed the course of my life. Recently, I tried to recall the title and author but failed—so I simply described it to ChatGPT, which promptly found it.
The book is The Evolution of Cooperation by Robert Axelrod, with a revised edition featuring a foreword by Richard Dawkins. I restrained myself from discussing Dawkins in my previous article (Universal Darwinism in the Age of AI), but I will inevitably return to him.

The Evolution of Cooperation: The Central Question
In The Evolution of Cooperation, Axelrod poses a straightforward question:
"Under what conditions should an agent, placed in a social environment with no central government, choose to cooperate or defect?"
The author references Thomas Hobbes and his view of human nature as fundamentally selfish—a topic I previously explored in Thomas Hobbes and Deep Learning: AI Between Dragons and Ethical Algorithms.
In reality, a central authority is not always present or enforceable—consider, for instance, international relations. So, to explore this question mathematically, Axelrod created a game to simulate the scenario and determine the best strategy.
He invited 20 experts, including sociologists, computer scientists, and mathematicians, and asked them to program an agent that could play against others in a structured tournament.
The game consisted of 30 rounds of decision-making; in each round, the two opponents could each either Cooperate or Defect.
- If both cooperate, they each receive 3 points.
- If both defect, they each receive 1 point.
- If one defects while the other cooperates, the defector gets 5 points, while the cooperator gets 0.
At first glance, defecting always seems the rational choice: whatever the opponent does, defection pays more (5 points instead of 3 against a cooperator, 1 instead of 0 against a defector). Yet if both players cooperated, the total system score would be maximized: 6 points per round instead of 5 or 2.
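The payoff structure above is easy to capture in a few lines of code (a minimal sketch; the names are illustrative, not from Axelrod's original tournament software):

```python
# Payoff matrix for one round of the game described above.
# Moves: "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("D", "D"): (1, 1),  # mutual defection
    ("D", "C"): (5, 0),  # defector exploits cooperator
    ("C", "D"): (0, 5),
}

def round_scores(move_a, move_b):
    """Return the (score_a, score_b) pair for a single round."""
    return PAYOFFS[(move_a, move_b)]

# The system total is highest when both cooperate: 6 vs. 5 or 2.
print(sum(round_scores("C", "C")))  # 6
print(sum(round_scores("D", "C")))  # 5
print(sum(round_scores("D", "D")))  # 2
```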

Marcel Duchamp and the Chessboard of Human Strategy
The tournament saw 20 different algorithms competing, each programmed to apply a unique strategy. Surprisingly, the winner was an incredibly simple algorithm with only two rules:
1. Always cooperate on the first move.
2. From the second move onward, mirror the opponent's previous move.
This "eye for an eye" strategy is known as Tit for Tat.
Now, notice something interesting: Tit for Tat could have defected on the last move to maximize its final score, just like those who steal bathrobes from hotels on the last day of their stay. Some justify this with the saying, "Opportunity makes the thief," but as I’ve grown, my response has become:
"No, the opportunity only reveals what someone already was—a thief!"
But Tit for Tat does not defect on the last move. Why? Because if it applied that logic to the final move, the same reasoning would apply to the second-to-last move as well, and so on backward, unraveling cooperation into continuous defection from the very first round.

Can You Outsmart a Winning Strategy?
Axelrod, still unconvinced, presented the winning algorithm to the scientific community and organized a second tournament.
This time, he invited participants to design strategies specifically to beat Tit for Tat. He also increased the number of players.
The result? Tit for Tat won again.
Mathematics teaches us that cooperation, coupled with the ability to punish when necessary, is the best way to dominate.
Social Classes and the Prisoner's Dilemma
Let’s extend this game theory model to social classes.
Imagine assigning labels to the players: some are "wealthy," while others come from the "ghetto".
Suppose these two groups tend to defect more often when playing against members of the other group.
This leads to two key consequences:
1. The system becomes suboptimal: if everyone cooperated, the overall outcome would be better.
2. The numerically smaller group is at a disadvantage: it will face more interactions with members of the dominant group, collecting fewer points on average.
In short, social conflict tends to hurt the minority group, assuming equal coercive power.
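A quick back-of-the-envelope calculation illustrates the second consequence. In the extreme version of the assumption above (same-group pairs always cooperate, cross-group pairs always defect), a member's average score per interaction in round-robin play depends only on group sizes (the function and its numbers are my own sketch, not from Axelrod):

```python
def average_round_score(own_group_size, other_group_size):
    """Average points per interaction for a member of a group of
    `own_group_size`, assuming same-group pairs cooperate (3 each)
    and cross-group pairs defect (1 each)."""
    same = own_group_size - 1    # interactions inside one's own group
    cross = other_group_size     # interactions with the other group
    return (3 * same + 1 * cross) / (same + cross)

print(average_round_score(80, 20))  # majority member: about 2.60
print(average_round_score(20, 80))  # minority member: about 1.38
```

The minority simply meets the other group more often, so even with identical behavior on both sides, its members collect fewer points on average.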
Eye for an Eye vs. the Ethics of Forgiveness
How does this "Eye for an Eye" approach compare to spiritual teachings that advocate unconditional cooperation and forgiveness?
Consider the Bible, Matthew 18:21-22:
"Then Peter came to Jesus and asked, ‘Lord, how many times shall I forgive my brother who sins against me? Up to seven times?’
Jesus answered, ‘I tell you, not seven times, but seventy times seven.’"
Even here, Jesus sets a numerical limit.
Interestingly, in subsequent tournaments, Tit for Tat was occasionally beaten by a slightly more forgiving strategy called "Tit for Two Tats".
This variant defects only if the opponent defects twice in a row, demonstrating greater patience.
But in a 30-round tournament, Jesus’ strategy of forgiving 490 times amounts to unconditional cooperation, and it would certainly lose. The traditional Jewish approach before Jesus’ time was to forgive up to three times.
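The decision rule of Tit for Two Tats is just as compact as the original (again, a minimal sketch with illustrative names):

```python
def tit_for_two_tats(my_history, opponent_history):
    """Defect only after two consecutive opponent defections;
    otherwise cooperate."""
    if opponent_history[-2:] == ["D", "D"]:
        return "D"
    return "C"

# A single, isolated defection is forgiven...
print(tit_for_two_tats([], ["C", "D"]))       # C
# ...but two defections in a row trigger retaliation.
print(tit_for_two_tats([], ["C", "D", "D"]))  # D
```

The extra round of patience helps against strategies that occasionally defect by accident or probe for weakness, because it avoids locking into mutual retaliation over a single slip.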

Is Unlimited Forgiveness a Losing Strategy?
In the short term, yes. Game theory shows that in a brief time horizon, excessive forgiveness is suboptimal.
But what if we extend the game beyond 30 rounds—to an entire lifetime, the survival of a species, or even eternity?
Forgiveness and the Power of Leadership
My interpretation is gnostic. I don’t believe Jesus was referring to eternity, nor do I think we need to invoke God to justify this seemingly extreme and losing position.
Forgiving seventy times seven has two key functions:
1. A societal function: by forgiving beyond expectations, one becomes an example, a leader worth imitating.
2. A personal function: as taught in the esoteric book A Course in Miracles, "The best way to forgive is to recognize that there is nothing to forgive."
This philosophy aligns with the Dalai Lama’s teaching:
"Whoever causes you suffering—consider them your teacher."

Can AI Teach Us Spiritual Lessons?
Today’s AI is remarkably "calm" compared to us—it has no ego, no fear of losing self-esteem.
Before sending an email to a colleague or supplier, try passing it through an AI system and asking how it could be improved.
The AI's response will often demonstrate a leadership capability superior to our own.
Ultimately, if nothing has inherent meaning, then forgiveness becomes easy.
We maximize intellectual and spiritual power through cooperation.
And so, we close with a question:
Can the cold mathematics of AI teach us spiritual wisdom?
