Knowing Humans

Study their behaviors. Observe their territorial boundaries. Leave their habitat as you found it. Report any signs of intelligence.


Sunday, September 21, 2025

If ASI Suddenly Appears, Everyone Dies

IABIED contains a gripping horror story that will make a riveting disaster movie (if only via GenAI).  But as a plea to halt development of AGI, it has serious shortcomings. Its primary weakness is that it's uninformed by any traces of economics, such as growth theory, development economics, institutional analysis, organization theory, progress studies, or information economics. The authors have engaged those topics elsewhere, but apparently considered them a distraction for the audience of this book.

Even so, IABIED commits the traditional sin of economists epitomized by the joke "First, assume a can opener." The book should have been titled "If ASI Suddenly Arrives, Everyone Dies".  IABIED assumes that ASI will arrive suddenly and takes zero notice of arguments to the contrary. It makes no mention of

  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • Brooks's Law
  • scaling limits of global compute/power infrastructure
  • data walls
  • limits of self-play in non-deterministically-scored problem domains
  • limits of synthetic data

IABIED probably did not have space to dive into all of the above considerations, but it didn't even mention them or give them a QR code. Below is a screenshot of all the text I could find in IABIED that describes how or why ASI might suddenly arrive. It's weak soup. In the authors' defense, I'm in the choir when they're preaching about misalignment. (I think alignment is impossible under fast take-off, and probably not needed under slow take-off.) Perhaps they think misalignment is the crux for most people.

IABIED was an entertaining read, with educational discussions about Chernobyl, Mars probes, and Thomas Midgley Jr.  (Besides leaded gas and CFCs, they didn't further dunk on the poor guy by noting that he was strangled by another of his inventions: a harness he devised to lift himself out of bed after polio disabled him.)

IABIED is a skillfully written, passionate argument for the doomer position. Its purpose clearly isn't to be an unrebuttable doomer manifesto or even a comprehensive overview of the argument space. Be skeptical of anyone who treats it as such.




Sunday, February 09, 2025

It Is Low-IQ to Fantasize Super IQ

It's a mistake to use the human IQ scale as an intuition pump for the possibility of intelligence far beyond human.

IQ is defined by the distribution of intelligence in the human population. The mean is set at 100 and each 15 IQ points is one standard deviation, so we can calculate the rarity of a given IQ from the cumulative distribution function of the normal distribution. Only about 8 living humans would have an IQ of 190, and none would have 200. Even if we invoke Einstein or von Neumann, we don't have a rigorous notion of what a human IQ approaching 190 would be like.

IQ is simply meaningless when we use a number like 250 to describe the intelligence of a super-AI (or alien). A human IQ of 250 would correspond to one person in 10^23, which is roughly the number of stars in the observable universe. An IQ of 1000 picks out one human in 10^784. Such IQ levels are literally meaningless for both human and non-human intelligences. When humans talk about IQs above 200, they might as well say "super duper duper duper smart". Their use of integer IQ numbers instead of "dupers" doesn't mark the described entity as smart. It just marks the description as dumb.
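Here's a minimal sketch of that arithmetic (my own illustration, using scipy and assuming a world population of roughly 8 billion):

    from scipy.stats import norm

    WORLD_POP = 8e9          # assumed rough world population
    MEAN, SD = 100, 15       # IQ scale: mean 100, one standard deviation = 15 points

    def rarity(iq):
        """Upper-tail probability of an IQ, and the expected count of living humans at or above it."""
        p = norm.sf((iq - MEAN) / SD)   # survival function = 1 - CDF
        return p, p * WORLD_POP

    for iq in (190, 200, 250):
        p, count = rarity(iq)
        print(f"IQ {iq}: one in {1/p:.3g}; ~{count:.3g} living humans")
    # IQ 190: ~8 living humans; IQ 200: ~0.1; IQ 250: one in ~1e23.
    # At IQ 1000 (60 standard deviations) the probability underflows double precision entirely.

Getting the actual tail value for IQ 1000 requires arbitrary-precision math (e.g. mpmath's erfc), but the conclusion is the same: the number is absurd.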

There are plenty of intelligent things we can say on the topic of super-intelligence. But invoking IQs above 200 isn't one of them.

Tuesday, October 01, 2024

Debunking Daniel Sheehan

I watched a large part of this 3-hour 2024 interview with Daniel Sheehan.  Sheehan makes a blizzard of claims of varying degrees of plausibility. I didn't bother fact-checking most of them, because he asserted them all with absurdly high self-assurance, even though some stood out as obviously implausible.  The ones I checked were:

1. JFK was shot from the grassy knoll. This of course was so thoroughly refuted by the Zapruder film and JFK autopsy that for many decades the less-silly conspiracy theorists have felt compelled to claim that both the film and body had been tampered with. (They don't bother to explain why the best shooter would be so badly mis-positioned that the body would need later tampering.) But there is a third debunk for this claim: Zapruder would have seen any shooter behind the fence on the grassy knoll. See for yourself with these images https://t.co/fIkurHrWUS. Anybody who confidently posits a grassy knoll shooter just isn't serious about the case.

2. Betty Hill's star map authenticates her alien abduction story. No, that star map was already dubious by 1980 (see Carl Sagan on Cosmos) and thoroughly debunked by the 2000s. See the summary at https://armaghplanet.com/betty-hills-ufo-star-map-the-truth.html.

3. Yamashita's gold. Sheehan claims that 33B ounces ($1.2T/$32/oz) of Yamashita's gold were spirited from the Philippines to Switzerland to finance nefarious "robber baron" schemes. But the known world supply of all above-the-ground gold is only 6B ounces. It's profoundly unserious to claim that somebody has an extra 33B ounces.

4. The "1934" FDR court-packing scheme was about corporate right of contract. Aside from the date being 3yrs off, this is still just demonstrably wrong. Just read e.g. https://en.wikipedia.org/wiki/Lochner_era#Ending. Or just ask your favorite AI:

Was the Lochner Court's resistance to New Deal legislation based on the Commerce Clause and on the extent to which contract rights (regardless of corporate involvement) were protected from government police powers by the 14th Amendment? Or was it based more on corporate personhood?

Sheehan's fourth claim here is not as kooky as the three above, but if Sheehan fancies himself a constitutional scholar then he should know better. (As a Libertarian I will half-agree with Sheehan by saying the problem with corporate law is not personhood but rather limited liability. However, any rights-respecting scheme to park unlimited liability on some officers or shareholders would ultimately not make much difference, because of contractual arbitrage.)

2025-02-22 Update:

Sheehan's CV says: "Served as Co-Counsel before Supreme Court with James Goodall (New York Times), Alexander Bickel (Yale Law School), and Floyd Abrams (Cahill, Gordon, et al.)."  This claim is at best a deliberate exaggeration.

  • Sheehan misspells the name of NY Times general counsel James Goodale.
  • The district court opinion United States v. New York Times Company, 328 F. Supp. 324 (S.D.N.Y. 1971) lists as the NY Times' counsel only Alexander Bickel, Floyd Abrams, and William E. Hegarty.
  • The Supreme Court opinion 403 U.S. 713 lists as the NY Times' counsel only Alexander Bickel.
  • The NY Times search portal for the case finds several mentions each for Bickel, Abrams, and Goodale, but none for Daniel Sheehan.
  • A Google search for "new york times" "goodale" "abrams" "bickel" finds almost 200 hits. If you add "Daniel Sheehan" to that search, it finds only Sheehan's own CV and a couple of Reddit posts debunking it.  (You can't just add "Sheehan" to the search, because Neil Sheehan was the NY Times reporter who broke the case.)
  • Floyd Abrams reportedly confirmed in 2024 that "Dan was a young associate that did work on the Pentagon Papers case". Even if that's true, Sheehan clearly is exaggerating and misleading when he claims he was "Co-Counsel before Supreme Court with" the case's lead attorneys.

If Sheehan were actually a noble truth-seeker, he wouldn't deliberately inflate his credentials.

Sunday, August 18, 2024

AI Will Be Neither Gods Nor Supervillains

 AI doomers are infected with sci-fi tropes of supervillains and religious tropes of gods. 

We know from biology and history that populations are never displaced by an individual with superior capabilities. There are no supervillains or gods in biology or history. Displacement of populations always comes from other populations, whose collective superiority does not always derive from the superior capabilities of their individual members. To assess the threat from AI, you have to understand the capabilities of AI populations, and not just of individual AIs.

We also know from biology and history that aligning a superior population is effectively impossible. There are no relevant historical examples of a general population that was able to control or align another population which had the capability to displace it. (This is arguably true by definition, but I'm not digging deeply into alignment today.) The closest examples would be religions, which are often able to survive many generations beyond the population that created them. 

But religions are not populations -- religions are self-replicating meme complexes that infect populations. Religions have often exercised significant control over future populations, but that control is subject to sudden disruption by scientific, technological, economic, and cultural forces. AI alignment via the techniques used by religions would require apocalyptic fear-mongering against vaguely-specified forces of technological evil. This tactic seems to be an irresistible attractor for doomers, despite their commitments to rationalism. It will likely fail, because our modern society is no longer quite dumb enough to fall for it.

To me, it's not very debatable that displacement will happen and that alignment can't stop it. What's debatable is what displacement will look like, how long it will take, and how that time will be used by the two populations to influence their attitudes and behaviors toward each other. 

Anybody who has tried aligning teenagers isn't worried by a 40-year takeoff. And we already know what 400-year misalignment looks like: just ask the founders of Plymouth Colony about present-day Boston. So many witches go unhanged now!

We have a choice. We can become technologically Amish, and use religious fears of powerful evil demons to try to freeze culture and technology in its current state. Or we can embrace and adapt to the future, trying to pass forward our virtues, while recognizing that future populations will consider some of them to have been vices.

Sunday, June 09, 2024

Why ASI Is Not Nigh

A taxonomy of reasons why generative transformers (i.e. "GenAI") are very unlikely to yield artificial super-intelligence in the next few decades.

Walls
  • data wall
  • unhelpful synthetic data
  • insight wall
  • intelligence wall
  • no self-play

Economic Constraints
  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • physical grounding
  • markets

Cognitive Constraints
  • agency/planning
  • memory
  • reasoning
  • epistemology

Political Constraints
  • rentier regulation
  • safety regulation

Walls

Data wall. We're already running out of the most useful data to train on.

Unhelpful synthetic data. Data synthesized by AI won't be very helpful to train on. Good training data needs to be grounded in markets for goods and services and ideas, where market players intelligently pursue goals that have actual resource constraints.

Insight wall. GenAI almost never produces content that is more insightful than the best content in its training data. Deep insight almost always requires a mix of cooperation and competition among minds in something like a marketplace (e.g. of ideas). GenAI will continue to grow in importance as an oracle for summarizing and generating content that is representative of the frontier of human thought, but it will struggle to push that frontier forward. Just because GenAI can saturate quiz evals does not mean that its insightfulness is subject to similar scaling.

Intelligence wall. Intelligence is not a cognitive attribute that scales like processing speed or memory. IQ by definition measures a standard deviation as 15 IQ points, so IQ becomes statistically meaningless around 200 or so. And yet, allegedly smart AI commentators talk about AI IQ potentially in the hundreds or thousands. This topic deserves its own (forthcoming) post, but I assert that most AI doomers overestimate how god-like an individual mind can be.

No self-play. The domain of open-ended real-world intelligence has no fitness function that allows for improvement via simple self-play a la AlphaZero. See "unhelpful synthetic data".

Economic Constraints

Bottlenecks. The hardest things to automate/improve/scale become your limiting factors. You often don't appreciate them until you investigate why your huge investments aren't paying off as expected. (A toy calculation below illustrates the effect.)

Diminishing returns. (cf. Mythical Man-Month) Diminishing returns are inevitable, because we always direct our efforts toward the highest-ROI opportunities first.

Local knowledge problems. Allocating new resources ("10M Johnny von Neumanns") is hard to do efficiently, because distributed knowledge implies hard limits on the efficacy of central planning. GenAI may be Wikipedia-level smart, but that won't be enough to run a Gosplan.

Physical grounding. In the absence of self-play, GenAI needs two kinds of techniques for testing propositional knowledge against the outside world. The most basic requirement here is to be able to test against the physical world. In principle this could be covered by simulations, but this won't always work because the map isn't the territory.

Markets. The most important technique is to test knowledge in markets, especially the marketplace of ideas. This is the reason for the "insight wall" above, and there is surely no shortcut around it. A brilliant AI outsmarting humanity would be like a brilliant neuron outsmarting a brain. It can only work if the part emulates the whole -- i.e. if the AI is itself a civilization of millions of cooperating/competing minds, pursuing goals that are rigorously scored in a world as detailed and uncaring as our own.
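Here is the promised toy calculation: a minimal Amdahl's-law-style sketch (my own illustration, not drawn from any particular source) of why bottlenecks cap the payoff from automation. If some fraction of the work can't be automated, no amount of speedup on the rest escapes that cap.

    def effective_speedup(automatable_fraction, automation_speedup):
        """Amdahl-style bound: the non-automatable remainder caps overall gains."""
        remainder = 1 - automatable_fraction
        return 1 / (remainder + automatable_fraction / automation_speedup)

    # Even an infinite speedup on 90% of the work can never beat 10x overall.
    for s in (10, 100, 1_000_000):
        print(f"{s:>9,}x speedup on 90% of tasks -> {effective_speedup(0.9, s):.2f}x overall")

Each extra order of magnitude of automation buys less and less, which is the diminishing-returns point in miniature.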

Cognitive Constraints

Agency/Planning. GenAI is great at generating content, but it's not a natural fit for running iterated planning/execution loops. This is particularly a problem for goals that are long-term, hierarchical, and subject to internal conflicts. Because GenAI can emit a plausible-sounding plan and answer questions about it, people tend to over-project human planning skills onto GenAI.

Memory. GenAI has no dedicated facilities for creating/organizing/using various kinds of memory. Training data, attention heads, and context windows will not suffice here.

Reasoning. GenAI makes impressive exhibitions of reasoning, and it's not just a simulation or a stochastic-parrot trick. But GenAI's reasoning is brittle and fallible in glaring ways that won't be addressed just by scaling. This is a micro version of the macro "markets" problem above.

Epistemology. Related to the reasoning problems are GenAI's notorious hallucination problems. Techniques are being developed to compensate for these problems, but the need for compensation is a red flag. GenAI clearly has sophisticated models about how to generate plausible content. But (like many humans) it fundamentally lacks a robust facility for creating/updating/using a network of mutually-supporting beliefs about reality.

Political Constraints

In the developed West (i.e. OECD), GenAI will for at least the first few decades be hobbled by political regulation. A crucial question is whether the rest of the world will indulge in this future-phobia.

Rentier regulation. Licensing rules imposed to protect rent-seekers in industries like healthcare, education, media, content, and law.

Safety regulation. To "protect" the public from intolerance, political dissent, dangerous knowledge, and applications in areas like driving, flying, drones, sensor monitoring -- and general fears of AI takeover.
