There have been two big new developments in the Fermi Paradox in the last few years:
- A 2018 paper showing that the Drake Equation misled us into thinking technological civilization is more likely than it is. (The mistake was shockingly simple: everybody focused on the mean number of civilizations across all scenarios, rather than on the number of civilizations in the median scenario. Basic probability theory!)
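The mean-versus-median point is easy to see with a toy Monte Carlo sketch. The factor ranges and counts below are illustrative stand-ins, not the 2018 paper's actual parameters: when you multiply several Drake-style factors that are each uncertain over many orders of magnitude, a handful of wildly optimistic draws dominates the mean, while the median scenario contains far fewer civilizations.

```python
import random
import statistics

random.seed(0)

STAR_COUNT = 1e11  # rough order of magnitude for stars in the galaxy


def draw_civilizations() -> float:
    """One scenario: multiply four toy Drake-style factors,
    each drawn log-uniformly across eight orders of magnitude."""
    n = STAR_COUNT
    for _ in range(4):
        n *= 10 ** random.uniform(-6, 2)
    return n


samples = [draw_civilizations() for _ in range(100_000)]
mean = statistics.fmean(samples)
median = statistics.median(samples)

print(f"mean   ~ {mean:.3g}")    # dominated by rare optimistic scenarios
print(f"median ~ {median:.3g}")  # the typical scenario: far fewer civilizations
```

The mean lands many orders of magnitude above the median, so a point-estimate ("average") treatment of the Drake Equation makes civilizations look far more common than the typical scenario actually implies.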
- A new paper two days ago by Robin Hanson arguing that a hard-steps model of development places the nearest expansionist technological civilization roughly half a billion light-years away. (See videos here and here.)
So I need to get on the record with an idea that hit me a few years back, that I haven't seen elsewhere.
If you take seriously any or all of the Doomsday Argument, the Simulation Hypothesis, Modal Realism, and the Fermi Paradox, then the Copernican Principle should make you ask why we seem to be such an unlikely (and early) draw from the space of all possible intelligent observers.
A plausible answer is that we are in a category of simulation designed to explore the rise of artifactual intelligence, and that this would be one of the most frequently run kinds of simulation in a universe dominated, as ours likely will be, by artifactual intelligence.
By "artifactual intelligence" I mean not only the traditional AI of enthusiastic singularitarians, but more importantly the emulated intelligence popularized by Robin Hanson.
This theory answers several questions:
Q: Where is everybody? A: The universe is otherwise uninhabited because alien civilizations elsewhere are irrelevant to simulating the rise of AI here.
Q: Why are we in the kind of universe that seems to allow artifactual intelligence? A: In any base reality that allows AI, the vast majority of its universe simulations would also allow AI. (If the Simulation Argument feels somewhat circular here, throw in some Modal Realism and Anthropic reasoning to see that this answer biases the distribution of possible observers towards our kind.)
Q: Of the likely quadrillions of possible observers among our descendants, why has our sample observation been drawn from so early in the distribution? A: The dawn of AI is intensely interesting to AIs with resources for running simulations.
Tyler Cowen came close to this idea in 2012 when he wrote that "the Fermi Paradox raises the likelihood we are in a simulation." But he didn't point out that simulations would be run by AIs who would specifically be interested in simulating the dawn of AI.
A related idea is Roko's Basilisk, which has been called the world's most dangerous thought experiment, and so I won't describe it here.