Sunday, March 10, 2024

Kapor Should Concede To Kurzweil

In 2002, Mitch Kapor bet Ray Kurzweil $20K that "by 2029 no computer or machine intelligence will have passed the Turing Test."  Given the recent progress in LLMs, Kapor's arguments are not holding up very well. The following parts of his essay are now cringe-worthy:

  • It is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge.
  • While it is possible to imagine a machine obtaining a perfect score on the SAT or winning Jeopardy--since these rely on retained facts and the ability to recall them--it seems far less possible that a machine can weave things together in new ways or to have true imagination in a way that matches everything people can do, especially if we have a full appreciation of the creativity people are capable of. This is often overlooked by those computer scientists who correctly point out that it is not impossible for computers to demonstrate creativity. Not impossible, yes. Likely enough to warrant belief in a computer can pass the Turing Test? In my opinion, no. 
  • When I contemplate human beings [as embodied, emotional, self-aware beings], it becomes extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation, much less to believe that its achievement is within our lifespan.
  • Part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test. Ray Kurzweil's approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential. Book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience is not explicit, never written down. It is tacit. We cannot say all we know in words or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail. It might be possible to produce a kind of machine as idiot savant by scanning a library, but a judge would not have any more trouble distinguishing one from an ordinary human as she would with distinguishing a human idiot savant from a person not similarly afflicted. It is not in what the computer knows but what the computer does not know and cannot know wherein the problem resides.
  • The brain's actual architecture and the intimacy of its interaction, for instance, with the endocrine system, which controls the flow of hormones, and so regulates emotion (which in turn has an extremely important role in regulating cognition) is still virtually unknown. In other words, we really don't know whether in the end, it's all about the bits and just the bits. Therefore Kurzweil doesn't know, but can only assume, that the information processing he wants to rely on in his artificial intelligence is a sufficiently accurate and comprehensive building block to characterize human mental activity.
  • My prediction is that contemporary metaphors of brain-as-computer and mental activity-as-information processing will in time also be superceded [sic] and will not prove to be a basis on which to build human-level intelligent machines (if indeed any such basis ever exists).
  • Without human experiences, a computer cannot fool a smart judge bent on exposing it by probing its ability to communicate about the quintessentially human.
Kapor's only hope in this bet rests on stripping the "human experience/quintessence" decorations from his core claim that "a computer cannot fool a smart judge bent on exposing it." There are no general-purpose LLMs in 2024 that could pass 2 hours of adversarial grilling by machine learning experts, and there probably won't be in 2029 either. But with sufficient RLHF investment, one could tune an LLM to be very hard to distinguish from a human foil -- even for ML experts.
So Kurzweil arguably should win by the spirit of the bet, but whether he wins by the letter of the bet will depend on somebody tuning a specialized judge-fooling LLM. That investment might be far more than the $20K stakes. Such an LLM would not be general-purpose, because it would have to be dumbed-down and de-woked enough to not be useful for much else. 
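To make the judge-fooling tuning concrete, here is a minimal, hypothetical sketch of the core idea: a REINFORCE-style policy-gradient update that rewards a small language model whenever a judge classifier rates its reply as human-written. The judge_score function below is a stub standing in for a trained human-vs-machine classifier, and the model, prompt, and hyperparameters are placeholders; a real RLHF pipeline (reward model, PPO, KL penalty against a reference model) would be far more elaborate.

```python
# Hypothetical sketch: tune a small LM to maximize a judge's
# "this was written by a human" score. Not a production RLHF setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
policy = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(policy.parameters(), lr=1e-5)

def judge_score(text: str) -> float:
    """Stub for a trained judge: returns P(text was written by a human).
    A real pipeline would train this classifier on transcripts of human
    and machine answers to adversarial Turing-test questioning."""
    return 0.5  # placeholder; a constant baseline reward yields no update

prompt = "Describe the smell of rain on hot pavement."
ids = tok(prompt, return_tensors="pt").input_ids

# Sample a reply from the current policy.
out = policy.generate(ids, do_sample=True, max_new_tokens=40,
                      pad_token_id=tok.eos_token_id)
reply_ids = out[:, ids.shape[1]:]
reward = judge_score(tok.decode(reply_ids[0], skip_special_tokens=True))

# REINFORCE: raise the log-probability of replies the judge rates as
# human (reward > 0.5), lower it for replies flagged as machine.
logits = policy(out).logits[:, ids.shape[1] - 1 : -1, :]
logp = torch.log_softmax(logits, dim=-1)
token_logp = logp.gather(-1, reply_ids.unsqueeze(-1)).squeeze(-1)
loss = -(reward - 0.5) * token_logp.sum()

opt.zero_grad()
loss.backward()
opt.step()
```

Even this toy illustrates the economics: the judge model, the adversarial transcripts, and the tuning compute all cost real money, and the result is optimized for fooling one judge rather than for being generally useful.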
I predict that by 2029 we will not yet have AGI as defined by OpenAI: highly autonomous systems that outperform humans at most economically valuable work. A strong version of this definition would say "expert humans". A weak version would say "most humans" and "cognitive work". I don't think we'll have even such weak AGI by 2029. But beware the last-human-job fallacy, which is similar to the last-barrel-of-oil fallacy: extrapolating the steady depletion of a fixed stock (of oil, or of jobs) while ignoring the adaptation that keeps creating new supply. AI will definitely be automating many human cognitive tasks, and will have radical impacts on how humans are employed, but AI-induced mass unemployment is unlikely in my lifetime. And mass unemployability is even less likely.
