It was an honor to vote for Cixin Liu’s The Three-Body Problem for the Hugo, and I was pleased when it won. Barring some fantastic additional book this year, I will nominate the sequel, The Dark Forest, for the first Hugo slot on my ballot. For his vision and creativity, Cixin Liu can only be compared to Asimov.
The book is exploding with interesting ideas, but one of the most interesting is a new answer to the Fermi Paradox. Liu assumes that life and technological advancement are common in the universe, and that technologically advanced life tends to expand and use resources at an exponential rate, creating a scarcity of resources and the conditions for conflict. From this, and some conditions that arise from interstellar distances, he derives a game where the equilibrium is to remain hidden, and attack those who are not hidden. The reasoning appears valid. Here I’ll reconstruct the game, in extensive form, as I understand it.
The first move is whether to announce yourself and your location to the universe. The first player is an arbitrary technological civilization that has radio communication and other technologies. If it doesn’t announce, no interaction occurs. For utility values, I’ll use some arbitrary numbers meant to represent divergence from the status quo, listed as (FIRST PLAYER, SECOND PLAYER), as is the norm. So, if the first player doesn’t announce, nothing changes for that civilization, and nothing changes for our arbitrary second civilization. The utility change is thus (0, 0).
But if civilization 1 does move to announce itself, an extreme game begins if any other civilization hears the message. First, this civilization 2 must decide: should it announce itself to civilization 1 or not?
Now consider the reply fork. The civilizations are now in communication and aware of each other. If they cooperate with each other benevolently, then both might be better off to some degree. We don’t know how large this expected benefit is; call it +C. However, if one or the other is malevolent, then that civilization can destroy the other. Here lies an important set of assumptions in Liu’s model: technology grows quickly, and stars, planets, and ships are fragile. For these reasons, he assumes, it is always possible to annihilate another civilization. Being destroyed is the worst possible outcome; we represent it as -Max. So civilization 1 must decide whether to cooperate or attack.
Civilization 2 must make a similar decision.
One final set of pathways completes the tree. If civilization 1 cooperates, it could earn some benefit, but it also leaves itself vulnerable to total destruction. Given interstellar distances, it will be hard to learn enough about the other civilization to determine, in the immediate period after first contact, whether it is malevolent. And no matter how small the odds that civilization 2 will attack, the possible cost is too terrible. So the best move for civilization 1 is to attack.
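This dominance argument can be sketched numerically. The following is a minimal sketch with made-up numbers: C, MAX, and the probabilities are illustrative placeholders of mine, not values from Liu.

```python
# Toy numbers standing in for the post's +C and -Max payoffs.
C = 10.0      # assumed benefit of mutual cooperation
MAX = 1e9     # annihilation; in Liu's model this is effectively unbounded

def eu_cooperate(p_attack):
    """Civ 1's expected utility of cooperating, if civ 2 attacks with probability p_attack."""
    return p_attack * (-MAX) + (1 - p_attack) * C

def eu_attack():
    """Attacking first preserves the status quo (utility change 0) for civ 1."""
    return 0.0

# Even a vanishingly small chance of malevolence makes cooperation a losing bet,
# because -MAX swamps any finite benefit C.
for p in (0.5, 0.01, 1e-6):
    print(f"p={p}: cooperate={eu_cooperate(p):.1f}, attack is better: {eu_cooperate(p) < eu_attack()}")
```

The point of the sketch: as long as -Max is allowed to be arbitrarily large, no finite C and no small probability of malevolence can rescue cooperation.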
This ensures that the status quo is maintained for civilization 1, although it earns the worst possible outcome for civilization 2.
Of course, civilization 2 can see all this. So, at the second move in the game, they will not reply; they will remain silent. That would seem to be the end of the game, but Cixin Liu argues that it is not. Instead, he assumes, civilizations tend to grow exponentially, spreading out. That means that, from the perspective of civilization 2, they now know that after some delay of time (represented below with “…..”), they may be encountered by civilization 1, which will then have to play the same game again. (I remove the moves in the game described above, to simplify the diagram.)
So civilization 2 reasons that eventually civilization 1 will find them, and will attack. This means that civilization 2 won’t wait. Instead, the game is the following.
And thus civilization 2, unable to determine with certainty that civilization 1 is benevolent, and wanting to avoid even the possibility of the worst possible outcome, will attack. In sum: taking all this into account, as every civilization should, each civilization both remains silent and attacks those who identify themselves.
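The whole chain of reasoning can be run backward, step by step, in the order the post develops it. Again, P_MAL, C, and MAX are illustrative placeholders of mine, not Liu's numbers.

```python
# Backward induction through the game, mirroring the argument above.
P_MAL = 0.01   # assumed chance the other civilization is malevolent
C = 10.0       # assumed benefit of mutual cooperation
MAX = 1e9      # annihilation ("-Max"), effectively unbounded

# Step 1: after contact, each side compares cooperating with striking first.
eu_cooperate = P_MAL * (-MAX) + (1 - P_MAL) * C   # risks annihilation
eu_attack = 0.0                                   # keeps the status quo
contact_move = "attack" if eu_attack > eu_cooperate else "cooperate"

# Step 2: civilization 2, foreseeing this, sees that replying invites destruction.
eu_reply = -MAX if contact_move == "attack" else eu_cooperate

# Step 3: but silence only delays the encounter: an expanding civilization 1
# will eventually find civilization 2 and, by the same logic, attack it.
eu_wait = -MAX
eu_preempt = 0.0   # destroy the announcer before being found

best_response = max(
    [("reply", eu_reply), ("wait", eu_wait), ("preempt", eu_preempt)],
    key=lambda kv: kv[1],
)[0]
print(best_response)   # -> "preempt": stay silent, and attack whoever announces
```

Under these toy numbers the backward induction reproduces the dark-forest equilibrium: never announce, and strike any civilization that does.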
There are a lot of assumptions here that we can question. For example: is it really so easy to destroy other worlds and civilizations? Liu also holds that interstellar distances eliminate chances for safe interaction: a civilization is either at a valuable world or cannot communicate. But perhaps one could get around this by using some neutral location as a place to start communication. And there is a meta-game: if the universe is as dangerous as he describes, it might be safer to form large alliances than to wait to be found alone and thrust into the game as described above. (Also, I’ve left out some important details; Liu is fascinated by the complexities of self-referential reasoning, of a kind that requires strange logics to study.) But, even simplified, his argument is very provocative. It suggests that the universe is a dark forest, full of hiding civilizations that will destroy any other civilization that makes itself known. It’s an important achievement: a new hypothesis in answer to Fermi’s paradox.
I hope his argument is unsound. But, given the stakes, we should consider whether he might be on to something.
An update: some scientists have proposed a method to hide Earth from detection by transit observations. Shared fears?