Sunday, June 09, 2024

Why ASI Is Not Nigh

A taxonomy of reasons why generative transformers (i.e. "GenAI") are very unlikely to yield artificial super-intelligence in the next few decades.

Walls
  • data wall
  • unhelpful synthetic data
  • insight wall
  • intelligence wall
  • no self-play

Economic Constraints
  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • physical grounding
  • markets

Cognitive Constraints
  • agency/planning
  • memory
  • reasoning
  • epistemology

Political Constraints
  • rentier regulation
  • safety regulation

Walls

Data wall. We're already running out of the most useful data to train on.

Unhelpful synthetic data. Data synthesized by AI won't be very helpful to train on. Good training data needs to be grounded in markets for goods and services and ideas, where market players intelligently pursue goals under actual resource constraints.
Insight wall. GenAI almost never produces content that is more insightful than the best content in its training data. Deep insight almost always requires a mix of cooperation and competition among minds in something like a marketplace (e.g. of ideas). GenAI will continue to grow in importance as an oracle for summarizing and generating content that is representative of the frontier of human thought, but it will struggle to push that frontier forward. Just because GenAI can saturate quiz evals does not mean that its insightfulness is subject to similar scaling.
Intelligence wall. Intelligence is not a cognitive attribute that scales like processing speed or memory. IQ is defined on a normal scale where one standard deviation is 15 points, so scores around 200 or beyond become statistically meaningless. And yet, allegedly smart AI commentators talk about AI IQ potentially in the hundreds or thousands. This topic deserves its own (forthcoming) post, but I assert that most AI doomers overestimate how god-like an individual mind can be.
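To make the norming problem concrete, here is a back-of-envelope sketch (assuming the standard mean-100, SD-15 normal model; the numbers are illustrative only):

    # Rarity implied by an IQ score under the normal model (mean 100, SD 15).
    from scipy.stats import norm

    for iq in (145, 160, 200):
        z = (iq - 100) / 15   # standard deviations above the mean
        p = norm.sf(z)        # upper-tail probability of scoring this high
        print(f"IQ {iq}: {z:+.2f} SD, roughly 1 in {1/p:,.0f}")

An IQ of 200 implies a rarity of roughly 1 in 76 billion -- nearly ten times the living human population -- so there is no population against which such a score could ever be normed.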
No self-play. The domain of open-ended real-world intelligence has no fitness function that allows for improvement via simple self-play à la AlphaZero. See "unhelpful synthetic data" above.
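For contrast, here is a toy improvement loop (a hill-climbing sketch of my own, not AlphaZero's method) that isolates the load-bearing ingredient of self-play-style learning: a cheap, objective fitness function you can actually call.

    import random

    TARGET = [1] * 20

    def fitness(genome):
        # The crux: a well-defined score. Go and chess have one (win/lose);
        # open-ended real-world intelligence does not.
        return sum(g == t for g, t in zip(genome, TARGET))

    genome = [random.randint(0, 1) for _ in range(20)]
    for _ in range(500):
        mutant = list(genome)
        i = random.randrange(len(mutant))
        mutant[i] ^= 1                          # flip one bit
        if fitness(mutant) >= fitness(genome):  # keep non-regressions
            genome = mutant

    print(f"fitness: {fitness(genome)} / {len(TARGET)}")

Delete the TARGET and the loop has nothing to climb toward; that, in miniature, is the open-ended-intelligence problem.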

Economic Constraints

Bottlenecks. The hardest things to automate/improve/scale become your limiting factors. You often don't appreciate them until you investigate why your huge investments aren't paying off as expected.
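One way to see this is the classic Amdahl's-law arithmetic, transplanted to economics -- a minimal sketch with assumed numbers:

    # Overall speedup when only a fraction of the work is accelerated:
    # the unaccelerated remainder caps the total gain.
    def overall_speedup(automated_fraction: float, factor: float) -> float:
        return 1.0 / ((1.0 - automated_fraction) + automated_fraction / factor)

    for factor in (10, 100, 1000, 10**6):
        print(f"95% automated, {factor}x faster -> "
              f"{overall_speedup(0.95, factor):.1f}x overall")

Even a million-fold speedup on 95% of the work yields barely a 20x overall gain; the stubborn 5% becomes the whole story.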
Diminishing returns. Diminishing returns are inevitable, because we always direct our efforts toward the highest-ROI opportunities first (cf. The Mythical Man-Month).
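A toy allocation model (with made-up ROI figures) shows why best-first ordering bakes this in:

    # Hypothetical ROI multiples for a portfolio of projects.
    projects = [3.0, 2.5, 1.8, 0.9, 0.5, 0.2]

    cumulative = 0.0
    for n, roi in enumerate(sorted(projects, reverse=True), start=1):
        cumulative += roi
        print(f"project {n}: marginal ROI {roi:.1f}x, cumulative {cumulative:.1f}x")

Each additional dollar chases a strictly worse opportunity than the last.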
Local knowledge problems. Allocating new resources ("10M Johnny von Neumanns") is hard to do efficiently, because distributed knowledge implies hard limits on the efficacy of central planning. GenAI may be Wikipedia-level smart, but that won't be enough to run a Gosplan.
Physical grounding. In the absence of self-play, GenAI needs two kinds of techniques for testing propositional knowledge against the outside world. The most basic requirement here is to be able to test against the physical world. In principle this could be covered by simulations, but this won't always work because the map isn't the territory.
Markets. The most important technique is to test knowledge in markets, especially the marketplace of ideas. This is the reason for the "insight wall" above, and there is surely no shortcut around it. A brilliant AI outsmarting humanity would be like a brilliant neuron outsmarting a brain. It can only work if the part emulates the whole -- i.e. if the AI is itself a civilization of millions of cooperating/competing minds, pursuing goals that are rigorously scored in a world as detailed and uncaring as our own.

Cognitive Constraints

Agency/Planning. GenAI is great at generating content, but it's not a natural fit for running iterated planning/execution loops. This is particularly a problem for goals that are long-term, hierarchical, and subject to internal conflicts. Because GenAI can emit a plausible-sounding plan and answer questions about it, people tend to over-project human planning skills onto GenAI.
Memory. GenAI has no dedicated facilities for creating/organizing/using various kinds of memory. Training data, attention heads, and context windows will not suffice here.
Reasoning. GenAI makes impressive exhibitions of reasoning, and it's not just a simulation or a stochastic-parrot trick. But GenAI's reasoning is brittle and fallible in glaring ways that won't be addressed just by scaling. This is a micro version of the macro "markets" problem above.
Epistemology. Related to reasoning problems are GenAI's notorious hallucination problems. Techniques are being developed to compensate for these problems, but the need for compensation is a red flag. GenAI clearly has sophisticated models about how to generate plausible content. But (like many humans) it fundamentally lacks a robust facility for creating/updating/using a network of mutually-supporting beliefs about reality.

Political Constraints

In the developed West (i.e. the OECD), GenAI will be hobbled by political regulation for at least the first few decades. A crucial question is whether the rest of the world will indulge in this future-phobia.
Rentier regulation. Licensing rules imposed to protect rent-seekers in industries like healthcare, education, media, content, and law.
Safety regulation. Rules imposed to "protect" the public from intolerance, political dissent, dangerous knowledge, and risky applications in areas like driving, flying, drones, and sensor monitoring -- and to calm general fears of AI takeover.
