In the 1960s, Frank Drake, then a professor of astronomy at Cornell, formulated the Drake equation. It’s a probabilistic estimate of the number of intelligent civilizations in the Milky Way galaxy. An intelligent civilization, as he defined it, is one advanced enough to communicate with other extraterrestrial civilizations.
Drake's equation was not an attempt to quantify intelligent civilizations precisely, but to provoke our curiosity about whether aliens exist. If you're still reading this, 60 years later, then Drake succeeded.
In this article, we discuss humanity's attempts to find aliens. We comment on the Fermi paradox, the apparent contradiction between the vastness of the universe and the lack of evidence of any alien life. We then explain game-theoretical models, taken from a class paper I co-authored, that attempt to resolve the paradox, and ask whether we should try to reach out to extraterrestrial life at all. So buckle up and get ready to wander the seas of social cosmology.
We live in a prodigious universe with more stars than there are grains of sand on Earth. Many of those stars have planets orbiting them; there's even a list of potentially habitable exoplanets maintained by the University of Puerto Rico at Arecibo. The Drake equation suggests a probability of alien civilizations too high to ignore. Ruling out the possibility that they are already here, we must ask ourselves why we haven't found them. We've certainly tried.
In the 1970s, we started actively trying to learn more about these civilizations. In 1972, we sent a spacecraft, Pioneer 10, to Jupiter. As you read this article, that spacecraft is heading towards the constellation Taurus, on its way out of our solar system, finally giving aliens a chance to capture it! In case they exist, and they do catch it, it carries a plaque (below) that depicts how we look and where in the solar system we live.
In 1974, we sent the Arecibo message, an interstellar FM radio message created by Frank Drake and the famous science communicator Carl Sagan. It was a demonstration of technological achievement rather than a serious attempt to contact aliens, but that won't stop us conspiracy theorists from reading into it.
Many assumptions are baked into such an interstellar FM radio message. FM stands for frequency modulation, a technology that might be too advanced for aliens, or not advanced enough. Who knows? Radio signals also attenuate the farther they travel, but we can't rule out that aliens have technology sensitive enough to spot every weak signal, can we?
That was by no means the last time we attempted to contact aliens and give away all our secrets! In 1977, Carl Sagan compiled natural sounds and images portraying the diversity of life and culture on Earth onto the Voyager spacecraft's Golden Record. In 2012, Voyager 1 left our solar system and entered interstellar space, the space between stars, finally giving aliens a chance to learn something about us. In July 2015, NASA uploaded the audio contents of the record to SoundCloud. About time!
At this point, you probably think it's a one-sided relationship: we keep reaching out, but we never get any calls back. In the summer of 1977, that seemed about to change. Scientists at Ohio State University's Big Ear radio telescope received the "Wow!" signal from the direction of the constellation Sagittarius. What's special about this signal is how powerful it was, as if we were meant to receive it. Maybe it was aliens saying hello in their own language?
The Fermi paradox
Enrico Fermi, the architect of the nuclear age and the 1938 Nobel laureate in physics, also dabbled in cosmology. In a paper on the origin of cosmic radiation, Fermi introduced a contradiction between the presumed probability of extraterrestrial life and the fact that we have not received any contact. This contradiction later came to be known as the Fermi paradox.
Now that we’ve introduced the Drake equation, the Fermi paradox, and our attempts to communicate with aliens, let’s explore a proposed explanation for why equally intelligent civilizations have averted communication.
Although there is a cornucopia of posited explanations for the Fermi paradox, we discuss the Dark Forest theory and model it as a sequential game with incomplete information. The Dark Forest theory is described below by Liu Cixin, a Chinese science fiction writer, in his trilogy "Remembrance of Earth's Past."
“The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful because everywhere in the forest are stealthy hunters like him. If he finds another life—another hunter, angel, or a demon, a delicate infant to tottering old man, a fairy or demigod—there’s only one thing he can do: open fire and eliminate them.”
The Dark Forest theory states that our galaxy does contain civilizations in the abundance described by the Drake equation. Yet these civilizations have intentionally forgone communicating with others out of fear that other civilizations might destroy them. The theory also states that civilizations that did not practice this caution have already been destroyed.
The SETI Institute (Search for Extraterrestrial Intelligence), a nonprofit located in Mountain View, California, whose work was once funded by NASA, postulates that the theory is not implausible. The official policy within the SETI community is only to collect information and not respond to any signals or evidence of extraterrestrial intelligence, out of fear that this could be the end of life on Earth.
Here we verify Liu's conclusion using informal incentive-based reasoning, starting with two axioms:
- Any given civilization’s goal is survival.
- Civilizations continuously grow and expand, but resources in the universe are finite.
Given these axioms, and given that stars are extremely distant from one another, communication between civilizations would take place drastically slowly, with delays of tens to hundreds of years, since the speed of light limits how fast signals travel. Liu describes a "chain of suspicion" that forms between any two civilizations because neither can confidently tell an honest intention from a potential threat. By the time a civilization has gathered enough information to consider another beyond negotiation, that other civilization could be well underway to destroying it.
Furthermore, leaving a less technologically advanced, and thus less threatening, civilization alone is not necessarily a safe option, because technological advancement can be exponential and unpredictable. Even if the weaker civilization's technological progress never outpaces the stronger one's, it could broadcast the stronger civilization's existence to third parties, who might themselves be more advanced and decide to destroy it.
A game-theoretical explanation of the Dark Forest Theory
We explain the Dark Forest theory using two scenarios, then generalize them into a game-theoretical model faithful to the theory.
The First Scenario
Two civilizations on two different planets already know the existence of one another. They are both advanced enough to destroy the other, and doing so would give them access to additional resources.
Mathematically speaking, the payoff of being destroyed is negative infinity, and the payoff of doing nothing is zero. The payoff of destroying another civilization is some number theta > 0, since some of the universe's finite resources have now become available; these newly freed resources allow the destroyer to expand, serving Liu's second axiom.
Thus, the first scenario is an extensive-form game with two rounds and the following properties:
- There are two civilizations (C1, C2) that are aware of one another.
- C1 takes its turn first, then C2 takes its turn.
- Each civilization has the same two possible actions: Destroy (the other civilization) or Do Nothing.
It is straightforward that the dominant strategy, and the subgame perfect equilibrium strategy, for C1 is to destroy C2. By choosing to destroy, C1 ensures a payoff of theta > 0. If C1 were to choose "Do Nothing," it would be left at the mercy of C2, whose own dominant strategy is to destroy. By backward induction, "Destroy" is the only safe option for C1.
Corollary: If a civilization can destroy another, it will.
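This backward-induction argument can be sketched in a few lines of Python. A minimal sketch: the value of theta is an arbitrary positive number, and the function names are hypothetical, not from the paper.

```python
import math

THETA = 1.0            # payoff for destroying the other civilization (any value > 0)
DESTROYED = -math.inf  # payoff for being destroyed
NOTHING = 0.0          # payoff for doing nothing

def c2_best_action():
    """C2 only gets to move if C1 did nothing; destroying yields THETA > 0."""
    payoffs = {"Destroy": THETA, "Do Nothing": NOTHING}
    return max(payoffs, key=payoffs.get)

def c1_best_action():
    """Backward induction: C1 evaluates each action against C2's best reply."""
    payoff_if_destroy = THETA  # the game ends; C2 never moves
    payoff_if_wait = DESTROYED if c2_best_action() == "Destroy" else NOTHING
    return "Destroy" if payoff_if_destroy > payoff_if_wait else "Do Nothing"

print(c1_best_action())  # -> Destroy
```

Any theta > 0 gives the same answer: against a negative-infinity downside, destroying first is the only choice that survives backward induction.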
The Second Scenario
A civilization could also broadcast its existence to other civilizations. The second scenario is an extensive-form game with two rounds and the following properties:
- There are two civilizations (C1, C2) that are not aware of one another.
- C1 takes its turn first, then C2 takes its turn.
- Each civilization has the same three possible actions:
- Destroy a civilization: This action can only target a civilization whose existence is known to the actor.
- Broadcast: Let the other civilization know of its existence.
- Do Nothing.
It is the dominant strategy, and the subgame perfect equilibrium strategy, for C1 to do nothing. Again, broadcasting puts C1 at C2's mercy. By backward induction, "Do Nothing" is the only safe option for C1.
Corollary: A civilization will never share information about its presence with a civilization that can destroy it.
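The second scenario admits the same kind of sketch (again a minimal illustration with hypothetical names). By the first corollary, C2 destroys C1 as soon as it learns of it, and "Destroy" is unavailable to C1 because it knows of no target:

```python
import math

DESTROYED = -math.inf  # payoff for being destroyed
NOTHING = 0.0          # payoff for doing nothing

def c2_best_action(knows_c1):
    """By the first corollary, C2 destroys C1 the moment it knows C1 exists."""
    return "Destroy" if knows_c1 else "Do Nothing"

def c1_payoff(action):
    """C1 starts unknown to C2, and 'Destroy' is unavailable to C1 since it
    has no known target. Broadcasting is the only way C2 learns of C1."""
    knows_c1 = (action == "Broadcast")
    return DESTROYED if c2_best_action(knows_c1) == "Destroy" else NOTHING

best = max(["Broadcast", "Do Nothing"], key=c1_payoff)
print(best)  # -> Do Nothing
```

Broadcasting maps to a payoff of negative infinity, doing nothing to zero, so silence dominates.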
The Dark Forest Theory
The Dark Forest Theory builds upon the previous scenarios with a few more generalizations:
- The games are infinitely repeated throughout time.
- There are many civilizations (more than two).
- Technology increases somewhat randomly through time.
In repeated games, Civilization A can't afford to let Civilization B live merely because B is currently harmless: B's technological level may increase, allowing it to destroy Civilization A in a future turn. This is closely related to the first scenario.
Civilizations can also broadcast another civilization's existence to much stronger civilizations, threatening the revealed civilization with destruction by anyone who learns of it. This gives no civilization any incentive to share knowledge of its existence with any other, be it weaker or more technologically advanced. This is closely related to the second scenario.
It becomes clear that it is both Pareto optimal and a Nash equilibrium to destroy any civilization one knows of, and to never share existence information, out of fear of being destroyed by a more potent civilization, or even a weaker one at a future turn of the game. We might also go further and say that civilizations that did share their existence have already been destroyed.
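To watch these incentives play out, here is a toy simulation. Every number in it (civilization count, growth rate, the initial mutually aware pair) is an arbitrary assumption for illustration: civilizations grow randomly in technology, destroy any known, weaker civilization, and never broadcast.

```python
import random

random.seed(0)
N_CIV, ROUNDS = 8, 50

# Each civilization has a technology level, an alive flag,
# and a set of other civilizations it knows about.
civs = [{"tech": random.random(), "alive": True, "knows": set()}
        for _ in range(N_CIV)]

# Hypothetical initial condition: exactly one pair is mutually aware.
civs[0]["knows"].add(1)
civs[1]["knows"].add(0)

for _ in range(ROUNDS):
    for c in civs:
        if not c["alive"]:
            continue
        c["tech"] += random.random() * 0.1  # random technological growth
        # First corollary: destroy any known civilization you can overpower.
        for j in c["knows"]:
            if civs[j]["alive"] and c["tech"] > civs[j]["tech"]:
                civs[j]["alive"] = False
        # Second corollary: never broadcast, so the 'knows' sets never grow.

survivors = sum(c["alive"] for c in civs)
print(f"{survivors} of {N_CIV} civilizations survive.")
```

Whatever the seed, the stronger member of the mutually aware pair eliminates the other in the first round, while the six silent civilizations are never discovered and never touched: silence is what keeps them alive.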
It might sound gloomy, and somewhat antisocial, not to make friends across the vast universe. However, with how little we know about other planets and systems, in the absence of a common language and understanding, and with the chain of suspicion in play, it makes sense to stay silent or face destruction!