Knowing Humans

Study their behaviors. Observe their territorial boundaries. Leave their habitat as you found it. Report any signs of intelligence.


Tuesday, October 01, 2024

Debunking Daniel Sheehan

I watched a large part of this 3-hour 2024 interview with Daniel Sheehan. Sheehan makes a blizzard of claims of varying degrees of plausibility. I didn't bother fact-checking most of them, because he asserted them all with absurdly high self-assurance even when some stood out as obviously implausible. The ones I checked were:

1. JFK was shot from the grassy knoll. This of course was so thoroughly refuted by the Zapruder film and JFK autopsy that for many decades the less-silly conspiracy theorists have felt compelled to claim that both the film and body had been tampered with. (They don't bother to explain why the best shooter would be so badly mis-positioned that the body would need later tampering.) But there is a third debunk for this claim: Zapruder would have seen any shooter behind the fence on the grassy knoll. See for yourself with these images https://t.co/fIkurHrWUS. Anybody who confidently posits a grassy knoll shooter just isn't serious about the case.

2. Betty Hill's star map authenticates her alien abduction story. No, that star map was already dubious by 1980 (see Carl Sagan on Cosmos) and thoroughly debunked by the 2000s. See the summary at https://armaghplanet.com/betty-hills-ufo-star-map-the-truth.html.

3. Yamashita's gold. Sheehan claims that some 37B ounces of Yamashita's gold ($1.2T ÷ $32/oz) were spirited from the Philippines to Switzerland to finance nefarious "robber baron" schemes. But the known world supply of all above-ground gold is only about 6B ounces. It's profoundly unserious to claim that somebody has an extra 37B ounces. (See the quick arithmetic check at the end of this post.)

4. The "1934" FDR court-packing scheme was about corporate right of contract. Aside from the date being 3yrs off, this is still just demonstrably wrong. Just read e.g. https://en.wikipedia.org/wiki/Lochner_era#Ending. Or just ask your favorite AI:

Was the Lochner Court's resistance to New Deal legislation based on the Commerce Clause, and on how much contract rights (regardless of corporate involvement) were protected from government police powers by the 14th Amendment? Or was it based more on corporate personhood?

Sheehan's fourth claim here is not as kooky as the three above, but if Sheehan fancies himself a constitutional scholar then he should know better. (As a Libertarian I will half-agree with Sheehan by saying the problem with corporate law is not personhood but rather limited liability. However, any rights-respecting scheme to park unlimited liability on some officers or shareholders would ultimately not make much difference, because of contractual arbitrage.)
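Returning to claim 3: here is the back-of-the-envelope arithmetic, using the $1.2T and $32/oz figures from the claim and ~6B ounces as the rough total of above-ground gold:

```python
# Sanity-checking the Yamashita's-gold claim with the figures cited above.
claimed_value_usd = 1.2e12    # the claimed haul
gold_price_usd_oz = 32        # the per-ounce price used in the claim
world_supply_oz = 6e9         # roughly all above-ground gold ever mined

implied_oz = claimed_value_usd / gold_price_usd_oz
print(f"implied ounces: {implied_oz / 1e9:.1f}B")                          # 37.5B
print(f"world supply:   {world_supply_oz / 1e9:.1f}B")                     # 6.0B
print(f"claim = {implied_oz / world_supply_oz:.2f}x all gold ever mined")  # 6.25x
```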

Sunday, August 18, 2024

AI Will Be Neither Gods Nor Supervillains

 AI doomers are infected with sci-fi tropes of supervillains and religious tropes of gods. 

We know from biology and history that populations are never displaced by an individual with superior capabilities. There are no supervillains or gods in biology or history. Displacement of populations always comes from other populations, whose collective superior capability does not always derive from superior capabilities of its individual members. To assess the threat from AI, you have to understand the capabilities of AI populations, and not just of individual AIs.

We also know from biology and history that aligning a superior population is effectively impossible. There are no relevant historical examples of a general population that was able to control or align another population which had the capability to displace it. (This is arguably true by definition, but I'm not digging deeply into alignment today.) The closest examples would be religions, which are often able to survive many generations beyond the population that created them. 

But religions are not populations -- religions are self-replicating meme complexes that infect populations. Religions have often exercised significant control over future populations, but that control is subject to sudden disruption by scientific, technological, economic, and cultural forces. AI alignment via the techniques used by religions would require apocalyptic fear-mongering against vaguely-specified forces of technological evil. This tactic seems to be an irresistible attractor for doomers, despite their commitment to rationalism. It will likely fail, because our modern society is no longer quite dumb enough to fall for it.

To me, it's not very debatable that displacement will happen and that alignment can't stop it. What's debatable is what displacement will look like, how long it will take, and how that time will be used by the two populations to influence their attitudes and behaviors toward each other. 

Anybody who has tried aligning teenagers shouldn't be worried by a 40-year takeoff. And we already know what 400-year misalignment looks like: just ask the founders of Plymouth Colony about present-day Boston. So many witches go unhanged now!

We have a choice. We can become technologically Amish, and use religious fears of powerful evil demons to try to freeze culture and technology in its current state. Or we can embrace and adapt to the future, trying to pass forward our virtues, while recognizing that future populations will consider some of them to have been vices.

Sunday, June 09, 2024

Why ASI Is Not Nigh

A taxonomy of reasons why generative transformers (i.e. "GenAI") are very unlikely to yield artificial super-intelligence in the next few decades.

Walls
  • data wall
  • unhelpful synthetic data
  • insight wall
  • intelligence wall
  • no self-play

Economic Constraints
  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • physical grounding
  • markets

Cognitive Constraints
  • agency/planning
  • memory
  • reasoning
  • epistemology

Political Constraints
  • rentier regulation
  • safety regulation

Walls

Data wall. We're already running out of the most useful data to train on.

Unhelpful synthetic data. Data synthesized by AI won't be very helpful to train on. Good training data needs to be grounded in markets for goods and services and ideas, where market players intelligently pursue goals that have actual resource constraints.
Insight wall. GenAI almost never produces content that is more insightful than the best content in its training data. Deep insight almost always requires a mix of cooperation and competition among minds in something like a marketplace (e.g. of ideas). GenAI will continue to grow in importance as an oracle for summarizing and generating content that is representative of the frontier of human thought, but it will struggle to push that frontier forward. Just because GenAI can saturate quiz evals does not mean that its insightfulness is subject to similar scaling.
Intelligence wall. Intelligence is not a cognitive attribute that scales like processing speed or memory. IQ by definition measures a standard deviation as 15 IQ points, so IQ becomes statistically meaningless around 200 or so (a quick statistical sketch follows this list). And yet, allegedly smart AI commentators talk about AI IQ potentially in the hundreds or thousands. This topic deserves its own (forthcoming) post, but I assert that most AI doomers overestimate how god-like an individual mind can be.
No self-play. The domain of open-ended real-world intelligence has no fitness function that allows for improvement via simple self-play a la Alpha Zero. See "unhelpful synthetic data".
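On the intelligence wall, a minimal sketch of the statistics, assuming the standard psychometric model (normally distributed scores, mean 100, SD 15):

```python
from math import erfc, sqrt

# How rare would an IQ of 200 be under the standard model (mean 100, SD 15)?
mean, sd, iq = 100, 15, 200
z = (iq - mean) / sd            # ~6.7 standard deviations above the mean
p = 0.5 * erfc(z / sqrt(2))     # upper-tail probability of a normal distribution
print(f"z = {z:.2f}, rarity ~ 1 in {1 / p:,.0f}")
# Roughly 1 in 80 billion -- comparable to the total number of humans who have
# ever lived, so no norming sample exists that could give such a score meaning.
```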

Economic Constraints

Bottlenecks. The hardest things to automate/improve/scale become your limiting factors. You often don't appreciate them until you investigate why your huge investments aren't paying off as expected.
Diminishing returns. (cf. Mythical Man-Month) Diminishing returns are inevitable, because we always direct our efforts toward the highest-ROI opportunities first. 
Local knowledge problems. Allocating new resources ("10M Johnny von Neumanns") is hard to do efficiently, because distributed knowledge implies hard limits on the efficacy of central planning. GenAI may be Wikipedia-level smart, but that won't be enough to run a Gosplan.
Physical grounding. In the absence of self-play, GenAI needs two kinds of techniques for testing propositional knowledge against the outside world. The most basic requirement here is to be able to test against the physical world. In principle this could be covered by simulations, but this won't always work because the map isn't the territory.
Markets. The most important technique is to test knowledge in markets, especially the marketplace of ideas. This is the reason for the "insight wall" above, and there is surely no shortcut around it. A brilliant AI outsmarting humanity would be like a brilliant neuron outsmarting a brain. It can only work if the part emulates the whole -- i.e. if the AI is itself a civilization of millions of cooperating/competing minds, pursuing goals that are rigorously scored in a world as detailed and uncaring as our own.

Cognitive Constraints

Agency/Planning. GenAI is great at generating content, but it's not a natural fit for running iterated planning/execution loops. This is particularly a problem for goals that are long-term, hierarchical, and subject to internal conflicts. Because GenAI can emit a plausible-sounding plan and answer questions about it, people tend to over-project human planning skills onto GenAI.
Memory. GenAI has no dedicated facilities for creating/organizing/using various kinds of memory. Training data, attention heads, and context windows will not suffice here. (A sketch of the usual bolt-on workaround follows this list.)
Reasoning. GenAI makes impressive exhibitions of reasoning, and it's not just a simulation or a stochastic-parrot trick. But GenAI's reasoning is brittle and fallible in glaring ways that won't be addressed just by scaling. This is a micro version of the macro "markets" problem above.
Epistemology. Related to reasoning problems are GenAI's notorious hallucination problems. Techniques are being developed to compensate for these problems, but the need for compensation is a red flag. GenAI clearly has sophisticated models about how to generate plausible content. But (like many humans) it fundamentally lacks a robust facility for creating/updating/using a network of mutually-supporting beliefs about reality.
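For illustration, here's a minimal sketch of the bolt-on memory pattern currently used to compensate: store snippets externally, retrieve the most similar ones, and stuff them into the context window. (Bag-of-words similarity stands in for a real embedding model; all names here are mine, not any particular library's.)

```python
from collections import Counter
from math import sqrt

store = []  # (bag-of-words vector, text) pairs -- the external "memory"

def embed(text):
    # Toy stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remember(text):
    store.append((embed(text), text))

def recall(query, k=3):
    # Return the k stored snippets most similar to the query.
    q = embed(query)
    return [t for _, t in sorted(store, key=lambda e: -cosine(q, e[0]))[:k]]

remember("The user's dog is named Rex.")
remember("The user prefers tea over coffee.")
print(recall("what is the dog called?", k=1))
# The retrieved text then gets pasted into the prompt -- exactly the
# context-window workaround the paragraph above calls insufficient.
```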

Political Constraints

In the developed West (i.e. OECD), GenAI will for at least the first few decades be hobbled by political regulation. A crucial question is whether the rest of the world will indulge in this future-phobia.
Rentier regulation. Licensing rules imposed to protect rent-seekers in industries like healthcare, education, media, content, and law.
Safety regulation. To "protect" the public from intolerance, political dissent, dangerous knowledge, and applications in areas like driving, flying, drones, sensor monitoring -- and general fears of AI takeover.

Sunday, March 10, 2024

Kapor Should Concede To Kurzweil

In 2002, Mitch Kapor bet Ray Kurzweil $20K that "by 2029 no computer or machine intelligence will have passed the Turing Test."  Given the recent progress in LLMs, Kapor's arguments are not holding up very well. The following parts of his essay are now cringe-worthy:

  • It is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge.
  • While it is possible to imagine a machine obtaining a perfect score on the SAT or winning Jeopardy--since these rely on retained facts and the ability to recall them--it seems far less possible that a machine can weave things together in new ways or to have true imagination in a way that matches everything people can do, especially if we have a full appreciation of the creativity people are capable of. This is often overlooked by those computer scientists who correctly point out that it is not impossible for computers to demonstrate creativity. Not impossible, yes. Likely enough to warrant belief in a computer can pass the Turing Test? In my opinion, no. 
  • When I contemplate human beings [as embodied, emotional, self-aware beings], it becomes extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation, much less to believe that its achievement is within our lifespan.
  • Part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test. Ray Kurzweil's approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential. Book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience is not explicit, never written down. It is tacit. We cannot say all we know in words or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail. It might be possible to produce a kind of machine as idiot savant by scanning a library, but a judge would not have any more trouble distinguishing one from an ordinary human as she would with distinguishing a human idiot savant from a person not similarly afflicted. It is not in what the computer knows but what the computer does not know and cannot know wherein the problem resides.
  • The brain's actual architecture and the intimacy of its interaction, for instance, with the endocrine system, which controls the flow of hormones, and so regulates emotion (which in turn has an extremely important role in regulating cognition) is still virtually unknown. In other words, we really don't know whether in the end, it's all about the bits and just the bits. Therefore Kurzweil doesn't know, but can only assume, that the information processing he wants to rely on in his artificial intelligence is a sufficiently accurate and comprehensive building block to characterize human mental activity.
  • My prediction is that contemporary metaphors of brain-as-computer and mental activity-as-information processing will in time also be superceded [sic] and will not prove to be a basis on which to build human-level intelligent machines (if indeed any such basis ever exists).
  • Without human experiences, a computer cannot fool a smart judge bent on exposing it by probing its ability to communicate about the quintessentially human.
Kapor's only hope in this bet depends on removing the "human experience/quintessence" decorations from his core claim that "a computer cannot fool a smart judge bent on exposing it". There are no general-purpose LLMs in 2024 that could pass 2 hours of adversarial grilling by machine learning experts, and there probably won't be in 2029 either. But with sufficient RLHF investment, one could tune an LLM to be very hard to distinguish from a human foil -- even for ML experts.
So Kurzweil arguably should win by the spirit of the bet, but whether he wins by the letter of the bet will depend on somebody tuning a specialized judge-fooling LLM. That investment might be far more than the $20K stakes. Such an LLM would not be general-purpose, because it would have to be dumbed-down and de-woked enough to not be useful for much else. 
I predict that by 2029 we will not yet have AGI as defined by OpenAI: highly autonomous systems that outperform humans at most economically valuable work. A strong version of this definition would say "expert humans". A weak version would say "most humans" and "cognitive work". I don't think we'll have even such weak AGI by 2029. But beware the last-human-job fallacy, which is similar to the last-barrel-of-oil fallacy. AI will definitely be automating many human cognitive tasks, and will have radical impacts on how humans are employed, but AI-induced mass unemployment is unlikely in my lifetime. And mass unemployability is even less likely.

Friday, July 21, 2023

Barbie's Hidden Post-Feminist Message

Spoilers ahead!

Greta Gerwig's Barbie is a very entertaining movie, and is surely the least-flawed feminist manifesto you'll ever find in summer-blockbuster format. The film has a few minor problems and two major ones -- one of which just might be the film's hidden post-feminist message.

The Matriarchy in Barbie Land (BL) starts off as a powerful satire of our Patriarchy. The gender roles in BL are a complete (though sexless) reversal from the power structure that feminists say obtains in the Real World (RW). The indictment of RW Patriarchy is all the more effective because the Barbies innocently find the Matriarchy unremarkable, while the Kens are only vaguely frustrated at having their worth determined entirely by the Barbie gaze. (Gerwig made sure to use "gaze" in the script here.)

There are a few noticeable flaws in the script that could have been fixed without undercutting the powerful Galt-like speech that Gerwig speaks through her self-insert character Gloria (ably played by America Ferrera). The two most obvious:

  • Gloria's husband is a throwaway character, with maybe 3 uninteresting lines in 3 unimportant scenes. In this film he's the dog who didn't bark, a Chekhov's gun loaded with blanks and never fired. His only purpose in the film seems to be to blunt potential criticism that Gloria's speech is that of a bitter single mom. But his character didn't need to be so glaringly irrelevant. A few minutes of well-used screen time for him could have established that Gloria's indictment can still be validly issued from inside a normal marriage.
  • Ken returns to BL after experiencing Patriarchy in the RW for at most a few hours. He then is able to effortlessly conquer BL off-screen using just the idea of Patriarchy. This gives Patriarchy far too much credit, even considering how innocent the Barbies are. But perhaps the alternative would be problematic: if Patriarchy uses mechanisms instead of magic, then its actual workings would have to be examined, and Ken doing actual work might give him agency and sympathy. Still, other alternatives can be imagined, e.g. Ken returning with patriarchal cultural media. If Patriarchy works like a magic wand, then critiquing it becomes harder than necessary.
A much bigger problem with the film was one on which Gerwig felt forced to hang a lampshade: pretty privilege. That topic is brushed off with a fourth-wall-breaking one-line admission by the narrator that Margot Robbie is still very pretty even when she thinks she isn't. Mattel knew better than to open that can of worms, which is avoided for the rest of the movie. There are attractive plus-size Barbies and attractive wheelchair Barbies, but there is no analogue to Ken's homely friend Allan (inevitably played by Michael Cera). The topic is almost encountered at the end of the film, when a smartly-dressed Barbie says "wish me luck" as she bounces toward what we're to think is her first job interview in the RW. What viewer could possibly question how a Margot Robbie look-alike will fare in the job market? But mid-brow feminism doesn't want to grapple with subjects like pretty privilege or height privilege. The first rule of Victim's Club is: never admit any privilege or responsibility, because fighting injustice might be harder if we address inconvenient truths. Target the easy wins, because the ends justify the means.
Unlike the villains in so many films aimed at youth, Barbie's villains were not villainous because they were businessmen -- they were villainous because they were men. The script inadvertently gives a stirring defense of capitalism at one point. When Gloria suggests marketing a new normal/average Barbie -- prettiness level unspecified! -- the Male CEO summarily dismisses the idea. But when a Marketing Man computes that this product would be very profitable, Male CEO instantly endorses the idea. Gerwig here seemingly admits that dollars are not only colorblind but also gender-blind.
The only jabs at capitalism in Barbie were some throwaway lines plus a boardroom stuffed with men who -- like every man in the RW with a speaking line -- were 100% caricatures. (And like the Kens, they were admirably diverse. Gerwig can't be expected to oppose sexism and racism in the same film.) By the end of the film, Mattel's image is rescued by the ghost of Barbie's dead inventor. Indeed, the whole movie can be read as a cleverly subversive way to co-opt feminism to defend the Barbie franchise from feminist criticism.
And this gestures toward the true flaw -- or true genius -- of the film. Simplistic anti-feminists will complain that the film demonizes and caricatures men, but our culture's norms have many problems worth criticizing -- and "Patriarchy" is a useful handle on many of them. Gloria's speech makes a one-sided but powerful critique of those norms. Unfortunately, its effect can be seen as undermined by the climax of the film, when the Barbies overthrow Ken's newborn magical Patriarchy and completely restore the Matriarchy. But under Matriarchy 2, the Barbies are fully conscious of the gender asymmetry -- and they admit out loud that they just don't care. By a Straussian reading, this could be the film's true post-feminist message: women are not only just as good as men, but also just as bad.