Study their behaviors. Observe their territorial boundaries. Leave their habitat as you found it. Report any signs of intelligence.

Sunday, September 21, 2025

If ASI Suddenly Appears, Everyone Dies

IABIED (If Anyone Builds It, Everyone Dies) contains a gripping horror story that would make a riveting disaster movie (if only via GenAI). But as a plea to halt development of AGI, it has serious shortcomings. Its primary weakness is that it is uninformed by even a trace of economics: no growth theory, development economics, institutional analysis, organization theory, progress studies, or information economics. The authors have engaged those topics elsewhere, but apparently considered them a distraction for this book's audience.

Even so, IABIED commits the traditional sin of economists, epitomized by the joke "First, assume a can opener." The book should have been titled "If ASI Suddenly Arrives, Everyone Dies": it assumes that ASI will arrive suddenly, and takes zero notice of arguments to the contrary. It makes no mention of:

  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • Brooks's Law
  • scaling limits of global compute/power infrastructure
  • data walls
  • limits of self-play in non-deterministically-scored problem domains
  • limits of synthetic data

IABIED probably did not have space to dive into all of the above considerations, but it doesn't even mention them, or relegate them to one of its QR-code-linked online supplements. Below is a screenshot of all the text I could find in IABIED that describes how or why ASI might arrive suddenly. It's weak soup. In the authors' defense, I'm in the choir when they're preaching about misalignment. (I think alignment is impossible under a fast takeoff, and probably not needed under a slow one.) Perhaps they think misalignment is the crux for most people.

IABIED was an entertaining read, with educational discussions of Chernobyl, Mars probes, and Thomas Midgley Jr. (Beyond leaded gasoline and CFCs, the authors declined to dunk further on the poor guy by noting that he was strangled by another of his inventions: a rope-and-pulley harness for lifting himself out of his polio bed.)

IABIED is a skillfully written, passionate argument for the doomer position. Its purpose clearly isn't to be an irrefutable doomer manifesto, or even a comprehensive overview of the argument space. Be skeptical of anyone who treats it as such.




Sunday, February 09, 2025

It Is Low-IQ to Fantasize Super IQ

It's a mistake to use the human IQ scale as an intuition pump for the possibility of intelligence far beyond human.

IQ is defined by the distribution of intelligence in the human population: the mean is set at 100, every 15 points is defined as one standard deviation, and we can calculate the rarity of a given IQ from the cumulative distribution function of the normal distribution. An IQ of 190, for example, sits six standard deviations above the mean, an upper-tail probability of about 9.9 × 10^-10; across roughly 8 billion people, that's only about 8 living humans, and at an IQ of 200 the expected count drops below one. Even if we invoke Einstein or von Neumann, we don't have a rigorous notion of what a human IQ approaching 190 would be like.

IQ is simply meaningless when we use a number like 250 to describe the intelligence of a super-AI (or alien). A human IQ of 250 would correspond to one person in 10^23, on the order of the number of stars in the observable universe. An IQ of 1000 picks out one human in 10^784. Such IQ levels are literally meaningless for both human and non-human intelligences. When humans talk about IQs above 200, they might as well say "super duper duper duper smart". Their use of integer IQ numbers instead of "dupers" doesn't mark the described entity as smart. It just marks the description as dumb.
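
These rarity figures are easy to check. Here's a minimal sketch in Python (assuming scipy is available, and taking 8.1 billion as a rough world population); it uses norm.logsf rather than the raw survival function so the extreme upper tails don't underflow to zero:

    import math
    from scipy.stats import norm

    WORLD_POP = 8.1e9  # rough 2025 world population (assumption)

    def rarity_exponent(iq, mean=100.0, sd=15.0):
        """Return k such that an IQ this high picks out roughly 1 person
        in 10^k, assuming IQ ~ Normal(mean, sd). norm.logsf stays accurate
        far into the upper tail, where norm.sf would underflow to 0."""
        z = (iq - mean) / sd
        return -norm.logsf(z) / math.log(10)

    for iq in (145, 190, 200, 250, 1000):
        k = rarity_exponent(iq)
        # The expected head count underflows for huge k; guard the exponent.
        expected = WORLD_POP * 10 ** (-k) if k < 300 else 0.0
        print(f"IQ {iq}: 1 in 10^{k:.1f}; expected living humans ~ {expected:.2g}")

Run as is, this should show roughly 8 people at IQ 190, a fraction of a person at 200, one in about 10^23 at 250, and one in about 10^784 at 1000.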

There are plenty of intelligent things we can say on the topic of super-intelligence. But invoking IQs above 200 isn't one of them.