Study their behaviors. Observe their territorial boundaries. Leave their habitat as you found it. Report any signs of intelligence.

Sunday, June 09, 2024

Why ASI Is Not Nigh

A taxonomy of reasons why generative transformers (i.e. "GenAI") are very unlikely to yield artificial super-intelligence in the next few decades.

Walls
  • data wall
  • unhelpful synthetic data
  • insight wall
  • intelligence wall
  • no self-play
Economic Constraints
  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • physical grounding
  • markets
Cognitive Constraints
  • agency/planning
  • memory
  • reasoning
  • epistemology
Political Constraints
  • rentier regulation
  • safety regulation

Walls

Data wall. We're already running out of the most useful data to train on.

Unhelpful synthetic data. Data synthesized by AI won't be very helpful to train on. Good training data needs to be grounded in markets for goods, services, and ideas, where market players intelligently pursue goals under actual resource constraints.
Insight wall. GenAI almost never produces content that is more insightful than the best content in its training data. Deep insight almost always requires a mix of cooperation and competition among minds in something like a marketplace (e.g. of ideas). GenAI will continue to grow in importance as a technology for summarizing and generating content that is representative of the frontier of human thought, but it will struggle to push that frontier forward.
Intelligence wall. Intelligence is not a cognitive attribute that scales like processing speed or memory. IQ is defined so that one standard deviation is 15 points, which means scores much past 200 are statistically meaningless. And yet allegedly smart AI commentators talk about AI IQ potentially in the hundreds or thousands. This topic deserves its own (forthcoming) post, but I assert that most AI doomers overestimate how god-like an individual mind can be.
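A quick back-of-the-envelope check of that claim (a sketch of mine, assuming IQ is normed to a mean of 100 and a standard deviation of 15):

```python
# Hypothetical back-of-the-envelope check, assuming IQ ~ Normal(100, 15).
from math import erfc, sqrt

mean, sd = 100.0, 15.0
iq = 200.0
z = (iq - mean) / sd            # about 6.7 standard deviations above the mean
p = 0.5 * erfc(z / sqrt(2))     # upper-tail probability of the normal model
print(f"z = {z:.2f}, P(IQ >= {iq:.0f}) ~= {p:.1e}, i.e. about 1 in {1/p:,.0f}")
```

Under that model an IQ of 200 is roughly a one-in-tens-of-billions event, several times the current world population, which is why talk of an AI with an IQ of 500 or 1,000 has no statistical meaning.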
No self-play. The domain of open-ended real-world intelligence has no fitness function that allows for improvement via simple self-play à la AlphaZero. See "unhelpful synthetic data".
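For contrast, here is a minimal runnable sketch (a toy of my own, not anything from AlphaZero) of why self-play-style improvement works when a cheap, exact scoring rule exists; the point of this wall is that open-ended real-world goals offer no such function to plug in:

```python
# Toy champion-vs-challenger loop (hypothetical illustration): two "policies"
# compete at guessing a hidden number, and an exact score drives improvement.
import random

TARGET = 0.737  # stands in for a domain with a cheap, exact scoring rule

def fitness(guess: float) -> float:
    return -abs(guess - TARGET)   # exact and free to evaluate, like a game result

def self_play(rounds: int = 2000) -> float:
    champion = random.random()
    for _ in range(rounds):
        challenger = champion + random.gauss(0, 0.05)  # perturb the current best
        if fitness(challenger) > fitness(champion):    # keep whichever scores higher
            champion = challenger
    return champion

print(round(self_play(), 3))  # converges near 0.737 because the score is exact
# Open-ended real-world intelligence has no closed-form fitness() like this,
# so there is nothing analogous for a GenAI system to optimize by playing itself.
```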

Economic Constraints

Bottlenecks. The hardest things to automate/improve/scale become your limiting factors. You often don't appreciate them until you investigate why your huge investments aren't paying off as expected.
Diminishing returns. (cf. Mythical Man-Month) Diminishing returns are inevitable, because we always direct our efforts toward the highest-ROI opportunities first. 
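To make the bottleneck and diminishing-returns points concrete, here is a toy calculation of my own, in the spirit of Amdahl's law (not something from the post): automate a fraction of a workflow and the unautomated remainder caps the overall gain.

```python
# Hypothetical toy model: speed up a fraction f of a workflow by factor s;
# the remaining (1 - f) becomes the bottleneck that limits overall speedup.
def overall_speedup(f: float, s: float) -> float:
    return 1.0 / ((1.0 - f) + f / s)   # Amdahl's-law-style bound

for s in (10, 100, 10_000):
    print(f"90% of the work sped up {s:>6}x -> {overall_speedup(0.9, s):.2f}x overall")
# 10x -> 5.26x, 100x -> 9.17x, 10000x -> ~10x: each order of magnitude of extra
# effort buys less, and the 10% that resists automation sets the ceiling.
```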
Local knowledge problems. Allocating new resources ("10M Johnny von Neumanns") is hard to do efficiently, because distributed knowledge implies hard limits on the efficacy of central planning. GenAI may be Wikipedia-level smart, but that won't be enough to run a Gosplan.
Physical grounding. In the absence of self-play, GenAI needs two kinds of techniques for testing propositional knowledge against the outside world. The most basic requirement here is to be able to test against the physical world. In principle this could be covered by simulations, but this won't always work because the map isn't the territory.
Markets. The most important technique is to test knowledge in markets, especially the marketplace of ideas. This is the reason for the "insight wall" above, and there is surely no shortcut around it. A brilliant AI outsmarting humanity would be like a brilliant neuron outsmarting a brain. It can only work if the part emulates the whole -- i.e. if the AI is itself a civilization of millions of cooperating/competing minds, pursuing goals that are rigorously scored in a world as detailed and uncaring as our own.

Cognitive Constraints

Agency/Planning. GenAI is great at generating content, but it's not a natural fit for running iterated planning/execution loops. This is particularly a problem for goals that are long-term, hierarchical, and subject to internal conflicts. Because GenAI can emit a plausible-sounding plan and answer questions about it, people tend to over-project human planning skills onto GenAI.
Memory. GenAI has no dedicated facilities for creating/organizing/using various kinds of memory. Training data, attention heads, and context windows will not suffice here.
Reasoning. GenAI makes impressive exhibitions of reasoning, and it's not just a simulation or a stochastic-parrot trick. But GenAI's reasoning is brittle and fallible in glaring ways that won't be addressed just by scaling. This is a micro version of the macro "markets" problem above.
Epistemology. Related to reasoning problems are GenAI's notorious hallucination problems. Techniques are being developed to compensate for these problems, but the need for compensation is a red flag. GenAI clearly has sophisticated models about how to generate plausible content. But (like many humans) it fundamentally lacks a robust facility for creating/updating/using a network of mutually-supporting beliefs about reality.

Political Constraints

In the developed West (i.e. the OECD), GenAI will, for at least the first few decades, be hobbled by political regulation. A crucial question is whether the rest of the world will indulge in this future-phobia.
Rentier regulation. Licensing rules imposed to protect rent-seekers in industries like healthcare, education, media, content, and law.
Safety regulation. To "protect" the public from intolerance, political dissent, dangerous knowledge, and applications in areas like driving, flying, drones, sensor monitoring -- and general fears of AI takeover.


Sunday, March 10, 2024

Kapor Should Concede To Kurzweil

In 2002, Mitch Kapor bet Ray Kurzweil $20K that "by 2029 no computer or machine intelligence will have passed the Turing Test."  Given the recent progress in LLMs, Kapor's arguments are not holding up very well. The following parts of his essay are now cringe-worthy:

  • It is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge.
  • While it is possible to imagine a machine obtaining a perfect score on the SAT or winning Jeopardy--since these rely on retained facts and the ability to recall them--it seems far less possible that a machine can weave things together in new ways or to have true imagination in a way that matches everything people can do, especially if we have a full appreciation of the creativity people are capable of. This is often overlooked by those computer scientists who correctly point out that it is not impossible for computers to demonstrate creativity. Not impossible, yes. Likely enough to warrant belief in a computer can pass the Turing Test? In my opinion, no. 
  • When I contemplate human beings [as embodied, emotional, self-aware beings], it becomes extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation, much less to believe that its achievement is within our lifespan.
  • Part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test. Ray Kurzweil's approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential. Book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience is not explicit, never written down. It is tacit. We cannot say all we know in words or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail. It might be possible to produce a kind of machine as idiot savant by scanning a library, but a judge would not have any more trouble distinguishing one from an ordinary human as she would with distinguishing a human idiot savant from a person not similarly afflicted. It is not in what the computer knows but what the computer does not know and cannot know wherein the problem resides.
  • The brain's actual architecture and the intimacy of its interaction, for instance, with the endocrine system, which controls the flow of hormones, and so regulates emotion (which in turn has an extremely important role in regulating cognition) is still virtually unknown. In other words, we really don't know whether in the end, it's all about the bits and just the bits. Therefore Kurzweil doesn't know, but can only assume, that the information processing he wants to rely on in his artificial intelligence is a sufficiently accurate and comprehensive building block to characterize human mental activity.
  • My prediction is that contemporary metaphors of brain-as-computer and mental activity-as-information processing will in time also be superceded [sic] and will not prove to be a basis on which to build human-level intelligent machines (if indeed any such basis ever exists).
  • Without human experiences, a computer cannot fool a smart judge bent on exposing it by probing its ability to communicate about the quintessentially human.
Kapor's only hope in this bet depends on removing the "human experience/quintessence" decorations from his core claim that "a computer cannot fool a smart judge bent on exposing it".  There are no general-purpose LLMs in 2024 that could pass 2 hours of adversarial grilling by machine learning experts, and there probably won't be in 2029 either. But with sufficient RLHF investment, one could tune an LLM to be very hard to distinguish from a human foil -- even for ML experts.
So Kurzweil arguably should win by the spirit of the bet, but whether he wins by the letter of the bet will depend on somebody tuning a specialized judge-fooling LLM. That investment might be far more than the $20K stakes. Such an LLM would not be general-purpose, because it would have to be dumbed-down and de-woked enough to not be useful for much else. 
I predict that by 2029 we will not yet have AGI as defined by OpenAI: highly autonomous systems that outperform humans at most economically valuable work. A strong version of this definition would say "expert humans". A weak version would say "most humans" and "cognitive work". I don't think we'll have even such weak AGI by 2029. But beware the last-human-job fallacy, which is similar to the last-barrel-of-oil fallacy. AI will definitely be automating many human cognitive tasks, and will have radical impacts on how humans are employed, but AI-induced mass unemployment is unlikely in my lifetime. And mass unemployability is even less likely.