Study their behaviors. Observe their territorial boundaries. Leave their habitat as you found it. Report any signs of intelligence.

Showing posts with label Technology.

Sunday, February 09, 2025

It Is Low-IQ to Fantasize Super IQ

It's a mistake to use the human IQ scale as an intuition pump for the possibility of intelligence far beyond human.

IQ is defined by the distribution of intelligence in the human population. Every 15 IQ points is defined as one standard deviation, and we can calculate the rarity of a given IQ using the cumulative distribution function of the normal distribution. Only about 8 living humans would have an IQ of 190, and none would have 200. Even if we invoke Einstein or von Neumann, we don't have a rigorous notion of what a human IQ approaching 190 would be like.

IQ is simply meaningless when we use a number like 250 to describe the intelligence of a super-AI (or alien). A human IQ of 250 would correspond to one person in 10^23, which is roughly the number of stars in the observable universe. An IQ of 1000 picks out one human in roughly 10^784. Such IQ levels are literally meaningless for both human and non-human intelligences. When humans talk about IQs above 200, they might as well say "super duper duper duper smart". Their use of integer IQ numbers instead of "dupers" doesn't mark the described entity as smart. It just marks the description as dumb.
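These rarity figures are easy to reproduce from the normal distribution's survival function. A minimal Python sketch (assuming scipy is available; the world-population constant is just a round number for illustration):

import numpy as np
from scipy.stats import norm

WORLD_POP = 8.1e9   # rough world population; an assumption used only for illustration

def log10_tail(iq, mean=100.0, sd=15.0):
    """log10 of the fraction of a normal population at or above `iq` (15 points = 1 SD)."""
    z = (iq - mean) / sd
    return norm.logsf(z) / np.log(10)   # log-space survival function, safe in the far tail

for iq in (190, 200, 250, 1000):
    lp = log10_tail(iq)
    expected = WORLD_POP * 10**lp if lp > -300 else 0.0   # 10**lp underflows below ~1e-308
    print(f"IQ {iq:>4}: one person in ~10^{-lp:.1f} "
          f"({expected:.2g} expected among {WORLD_POP:.1e} humans)")

Running it gives about 8 expected people at IQ 190, about 0.1 at IQ 200, and vanishingly small fractions of a person beyond that.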

There are plenty of intelligent things we can say on the topic of super-intelligence. But invoking IQs above 200 isn't one of them.

Sunday, August 18, 2024

AI Will Be Neither Gods Nor Supervillains

 AI doomers are infected with sci-fi tropes of supervillains and religious tropes of gods. 

We know from biology and history that populations are never displaced by an individual with superior capabilities. There are no supervillains or gods in biology or history. Displacement of populations always comes from other populations, whose collective superior capability does not always derive from superior capabilities of its individual members. To assess the threat from AI, you have to understand the capabilities of AI populations, and not just of individual AIs.

We also know from biology and history that aligning a superior population is effectively impossible. There are no relevant historical examples of a general population that was able to control or align another population which had the capability to displace it. (This is arguably true by definition, but I'm not digging deeply into alignment today.) The closest examples would be religions, which are often able to survive many generations beyond the population that created them. 

But religions are not populations -- religions are self-replicating meme complexes that infect populations. Religions have often exercised significant control over future populations, but that control is subject to sudden disruption by scientific, technological, economic, and cultural forces. AI alignment via the techniques used by religions would require apocalyptic fear-mongering against vaguely-specified forces of technological evil. This tactic seems to be an irresistible attractor to doomers, despite their commitments to rationalism. These tactics will likely fail, because our modern society is no longer quite dumb enough to fall for them.

To me, it's not very debatable that displacement will happen and that alignment can't stop it. What's debatable is what displacement will look like, how long it will take, and how that time will be used by the two populations to influence their attitudes and behaviors toward each other. 

Anybody who is aligning teenagers isn't worried about a 40-year takeoff. And we already know what 400-year misalignment looks like: just ask the founders of Plymouth Colony about present-day Boston. So many witches go unhanged now!

We have a choice. We can become technologically Amish, and use religious fears of powerful evil demons to try to freeze culture and technology in its current state. Or we can embrace and adapt to the future, trying to pass forward our virtues, while recognizing that future populations will consider some of them to have been vices.

Sunday, June 09, 2024

Why ASI Is Not Nigh

A taxonomy of reasons why generative transformers (i.e. "GenAI") are very unlikely to yield artificial super-intelligence in the next few decades.

Walls
  • data wall
  • unhelpful synthetic data
  • insight wall
  • intelligence wall
  • no self-play

Economic Constraints
  • bottlenecks
  • diminishing returns
  • local knowledge problems
  • physical grounding
  • markets

Cognitive Constraints
  • agency/planning
  • memory
  • reasoning
  • epistemology

Political Constraints
  • rentier regulation
  • safety regulation

Walls

Data wall. We're already running out of the most useful data to train on.

Unhelpful synthetic data. Data synthesized by AI won't be very helpful to train on. Good training data needs to be grounded in markets for goods and services and ideas, where market players intelligently pursue goals that have actual resource constraints.
Insight wall. GenAI almost never produces content that is more insightful than the best content in its training data. Deep insight almost always requires a mix of cooperation and competition among minds in something like a marketplace (e.g. of ideas). GenAI will continue to grow in importance as an oracle for summarizing and generating content that is representative of the frontier of human thought, but it will struggle to push that frontier forward. Just because GenAI can saturate quiz evals does not mean that its insightfulness is subject to similar scaling.
Intelligence wall. Intelligence is not a cognitive attribute that scales like processing speed or memory. IQ by definition measures a standard deviation as 15 IQ points, so IQ becomes statistically meaningless around 200 or so. And yet, allegedly smart AI commentators talk about AI IQ potentially in the hundreds or thousands. This topic deserves its own (forthcoming) post, but I assert that most AI doomers overestimate how god-like an individual mind can be.
No self-play. The domain of open-ended real-world intelligence has no fitness function that allows for improvement via simple self-play a la Alpha Zero. See "unhelpful synthetic data".

Economic Constraints

Bottlenecks. The hardest things to automate/improve/scale become your limiting factors. You often don't appreciate them until you investigate why your huge investments aren't paying off as expected.
Diminishing returns. (cf. Mythical Man-Month) Diminishing returns are inevitable, because we always direct our efforts toward the highest-ROI opportunities first. 
Local knowledge problems. Allocating new resources ("10M Johnny von Neumann's") is hard to do efficiently, because distributed knowledge implies hard limits on the efficacy of central planning. GenAI may be Wikipedia-level smart, but that won't be enough to run a Gosplan.
Physical grounding. In the absence of self-play, GenAI needs two kinds of techniques for testing propositional knowledge against the outside world. The most basic requirement here is to be able to test against the physical world. In principle this could be covered by simulations, but this won't always work because the map isn't the territory.
Markets. The most important technique is to test knowledge in markets, especially the marketplace of ideas. This is the reason for the "insight wall" above, and there is surely no shortcut around it. A brilliant AI outsmarting humanity would be like a brilliant neuron outsmarting a brain. It can only work if the part emulates the whole -- i.e. if the AI is itself a civilization of millions of cooperating/competing minds, pursuing goals that are rigorously scored in a world as detailed and uncaring as our own.

Cognitive Constraints

Agency/Planning. GenAI is great at generating content, but it's not a natural fit for running iterated planning/execution loops. This is particularly a problem for goals that are long-term, hierarchical, and subject to internal conflicts. Because GenAI can emit a plausible-sounding plan and answer questions about it, people tend to over-project human planning skills onto GenAI.
Memory. GenAI has no dedicated facilities for creating/organizing/using various kinds of memory. Training data, attention heads, and context windows will not suffice here.
Reasoning. GenAI makes impressive exhibitions of reasoning, and it's not just a simulation or a stochastic-parrot trick. But GenAI's reasoning is brittle and fallible in glaring ways that won't be addressed just by scaling. This is a micro version of the macro "markets" problem above.
Epistemology. Related to reasoning problems are GenAI's notorious hallucination problems. Techniques are being developed to compensate for these problems, but the need for compensation is a red flag. GenAI clearly has sophisticated models about how to generate plausible content. But (like many humans) it fundamentally lacks a robust facility for creating/updating/using a network of mutually-supporting beliefs about reality.

Political Constraints

In the developed West (i.e. OECD), GenAI will for at least the first few decades be hobbled by political regulation. A crucial question is whether the rest of the world will indulge in this future-phobia.
Rentier regulation. Licensing rules imposed to protect rent-seekers in industries like healthcare, education, media, content, and law.
Safety regulation. To "protect" the public from intolerance, political dissent, dangerous knowledge, and applications in areas like driving, flying, drones, sensor monitoring -- and general fears of AI takeover.


Sunday, March 10, 2024

Kapor Should Concede To Kurzweil

In 2002, Mitch Kapor bet Ray Kurzweil $20K that "by 2029 no computer or machine intelligence will have passed the Turing Test."  Given the recent progress in LLMs, Kapor's arguments are not holding up very well. The following parts of his essay are now cringe-worthy:

  • It is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge.
  • While it is possible to imagine a machine obtaining a perfect score on the SAT or winning Jeopardy--since these rely on retained facts and the ability to recall them--it seems far less possible that a machine can weave things together in new ways or to have true imagination in a way that matches everything people can do, especially if we have a full appreciation of the creativity people are capable of. This is often overlooked by those computer scientists who correctly point out that it is not impossible for computers to demonstrate creativity. Not impossible, yes. Likely enough to warrant belief in a computer can pass the Turing Test? In my opinion, no. 
  • When I contemplate human beings [as embodied, emotional, self-aware beings], it becomes extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation, much less to believe that its achievement is within our lifespan.
  • Part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test. Ray Kurzweil's approach relies on an automated process of knowledge acquisition via input of scanned books and other printed matter. However, I assert that the fundamental mode of learning of human beings is experiential. Book learning is a layer on top of that. Most knowledge, especially that having to do with physical, perceptual, and emotional experience is not explicit, never written down. It is tacit. We cannot say all we know in words or how we know it. But if human knowledge, especially knowledge about human experience, is largely tacit, i.e., never directly and explicitly expressed, it will not be found in books, and the Kurzweil approach to knowledge acquisition will fail. It might be possible to produce a kind of machine as idiot savant by scanning a library, but a judge would not have any more trouble distinguishing one from an ordinary human as she would with distinguishing a human idiot savant from a person not similarly afflicted. It is not in what the computer knows but what the computer does not know and cannot know wherein the problem resides.
  • The brain's actual architecture and the intimacy of its interaction, for instance, with the endocrine system, which controls the flow of hormones, and so regulates emotion (which in turn has an extremely important role in regulating cognition) is still virtually unknown. In other words, we really don't know whether in the end, it's all about the bits and just the bits. Therefore Kurzweil doesn't know, but can only assume, that the information processing he wants to rely on in his artificial intelligence is a sufficiently accurate and comprehensive building block to characterize human mental activity.
  • My prediction is that contemporary metaphors of brain-as-computer and mental activity-as-information processing will in time also be superceded [sic] and will not prove to be a basis on which to build human-level intelligent machines (if indeed any such basis ever exists).
  • Without human experiences, a computer cannot fool a smart judge bent on exposing it by probing its ability to communicate about the quintessentially human.
Kapor's only hope in this bet depends on removing the "human experience/quintessence" decorations from his core claim that "a computer cannot fool a smart judge bent on exposing it".  There are no general-purpose LLMs in 2024 that could pass 2 hours of adversarial grilling by machine learning experts, and there probably won't be in 2029 either. But with sufficient RLHF investment, one could tune an LLM to be very hard to distinguish from a human foil -- even for ML experts. 
So Kurzweil arguably should win by the spirit of the bet, but whether he wins by the letter of the bet will depend on somebody tuning a specialized judge-fooling LLM. That investment might be far more than the $20K stakes. Such an LLM would not be general-purpose, because it would have to be dumbed-down and de-woked enough to not be useful for much else. 
I predict that by 2029 we will not yet have AGI as defined by OpenAI: highly autonomous systems that outperform humans at most economically valuable work. A strong version of this definition would say "expert humans". A weak version would say "most humans" and "cognitive work". I don't think we'll have even such weak AGI by 2029. But beware the last-human-job fallacy, which is similar to the last-barrel-of-oil fallacy. AI will definitely be automating many human cognitive tasks, and will have radical impacts on how humans are employed, but AI-induced mass unemployment is unlikely in my lifetime. And mass unemployability is even less likely.

Friday, October 28, 2022

What They Don't Tell You About Airtags

  • The Find My page on icloud.com does not show AirTags.
  • The Find My app in macOS (Monterey 12.6) does not reliably show you the locations of your AirTags. On one Monterey Mac, Find My took 24 hours before it started showing their location, while my other Monterey Mac still hasn't listed any AirTags after a week. All flavors of Find My can locate all my Macs and iPhones, so this is not an AppleID problem, but instead apparently an Apple policy. I suspect they are trying to discourage free-riding by people (like me) who don't use iOS. (I use an old SIM-less iPhone to register my AirTags, to find them if I ever need to.)
  • Unlike with Tile, you cannot share your AirTags with anyone. So only my AppleID can see the locations of our pets.
  • AirTags have anti-stalking privacy features that limit their usefulness as anti-theft trackers. (Admittedly, Apple advises not to use them to track stolen items.) Anti-stalking features kick in only when your AirTag is out of Bluetooth contact with any device on which your AppleID is signed in.
    • If your AirTag is away from your devices for >N hours, then it will beep for about 10 seconds. N seems to be about 24, but Apple presumably can change this at any time. I've seen this happen for 2 AirTags, but haven't yet experienced a 2nd beep on either.
    • If your AirTag is away from your devices but some other person's iOS device remains in Bluetooth range while that device is moving, then iOS warns that person that they might be being stalked. If the unattended AirTag remains in range for >10 minutes, iOS will offer them the option to make the AirTag beep. You can even do this on Android, if you manually run Apple's tracker scanner app.
  • These anti-stalking features mean that smart car thieves can find your AirTag in only 10 minutes, while dumber thieves might notice it when they drive the car a day later.
  • There are YouTube videos explaining how to remove the speaker from your AirTag. Doing so made my AirTag barely audible to me only if I hold it against my ear, but inaudible a foot away.
  • Of the 2 4-packs of AirTags I recently bought, one had no removable speaker, and yet still makes the full beeping noise. Has Apple changed their design to foil AirTag silencing?
  • iPhone 11 and newer can use Ultra-Wide Band to pinpoint any AirTag's location to within inches. I don't know yet if this capability is restricted to the AirTag's owner, versus being available to potential stalking targets. If the latter, then smart car thieves with a modern iPhone will be able to find any AirTag you hide in your car.
There are so many iPhones here in the Bay Area that my AirTags got pinged every 5 minutes during a test drive -- even while sitting in a parking lot. By comparison, my Tile got pinged only twice in 30 minutes.
So I'll be hiding muted AirTags in our cars and e-bikes, and hoping that car thieves don't read my blog.

Saturday, April 17, 2021

My Dead Man's Switch

If my wife and I both die at the same time, we need our estate's trustees to be able to take over our financial and electronic accounts. (Our trustees are a select few chosen from among our siblings, friends, and adult children.) But as trusted as our trustees are, we don't trust them to have access to all our accounts while we're both still alive. We don't want to store our account credentials in on-site storage that could be taken by an intruder or destroyed by a disaster.  And we don't want to store our account credentials in an off-site service that is inconvenient to update and that itself has to be trusted not to use our stored credentials. What to do?

Our solution is to encrypt our account credentials with a special password known to our trustees, and then arrange that our trustees only get the encrypted credentials if we're incapacitated. For this we use Dead Man's Switch. It allows us to schedule an email to our trustees that will be sent only if I fail to visit that web site for N consecutive days. The free default is 2, but I bought a $50 life membership that lets me set it to any value. I chose 10. That's long enough to let me be distracted by a vacation or health problem, but short enough to get our trustees going quickly if we actually die.

My dead man's email says:

Subject: Is Brian incapacitated?

This email is automatically sent if Brian goes 10 days without visiting deadmansswitch.net. The encrypted information below gives you access to Brian's financial and online accounts. When decrypted it is a list of Brian's passwords. Decrypt it using the following steps. ....

The email then includes instructions on how to use infoencrypt.com. An InfoEncrypt ciphertext is encrypted using standard AES-128, and if InfoEncrypt ceases to exist then the ciphertext can still be decrypted on other web sites.
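If InfoEncrypt ever disappears, the same scheme takes only a few lines of Python. Here's a minimal sketch using the cryptography package -- this is not InfoEncrypt's actual format, just an illustration of deriving an AES key from the trustee passphrase and encrypting the credential list with it:

# Sketch of the general scheme (NOT InfoEncrypt's actual format): derive an AES
# key from the trustee passphrase, then encrypt/decrypt the credential list.
import base64, os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.fernet import Fernet

def _key(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def encrypt_credentials(plaintext: bytes, passphrase: str) -> bytes:
    salt = os.urandom(16)                        # random salt, stored with the ciphertext
    return salt + Fernet(_key(passphrase, salt)).encrypt(plaintext)  # AES-128-CBC + HMAC

def decrypt_credentials(blob: bytes, passphrase: str) -> bytes:
    salt, token = blob[:16], blob[16:]
    return Fernet(_key(passphrase, salt)).decrypt(token)

blob = encrypt_credentials(b"bank: hunter2\nemail: correcthorse", "trustee passphrase")
print(decrypt_credentials(blob, "trustee passphrase").decode())

Anything encrypted this way can be decrypted wherever Python runs, which removes the dependence on any single web service.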

So my passwords are never stored anywhere, except in encrypted form. And the trustee password is never written down anywhere. It's a special password I've told only to my trustees. (I occasionally check that they still remember it. So far, so good.)

An extra level of security would be to divide the password among multiple trustees, so that no single one of them could immediately take our accounts if the Dead Man's email somehow was sent prematurely. But even if that happened, we'd still want to change our most sensitive passwords, in case our trustees colluded. (I had to do this once, because I turned on the gmail feature of inbox "categories", and didn't see my Dead Man reminder emails in the gmail Updates folder. My trustees were shocked to get the scary email announcing my possible incapacitation!)
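The simplest way to do that division is an n-of-n split: each trustee holds a random-looking share, and the passphrase emerges only when all the shares are XORed together. A minimal sketch (a threshold scheme like Shamir's Secret Sharing would be more forgiving if a trustee loses a share):

import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; all n are required to reconstruct it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret))   # last share makes the XOR of all shares equal the secret
    return shares

def join_secret(shares: list[bytes]) -> bytes:
    return reduce(xor, shares)

shares = split_secret(b"trustee passphrase", 3)
assert join_secret(shares) == b"trustee passphrase"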

Dead Man's Switch is a nifty service. It should be combined with an encryption service like InfoEncrypt to make the above setup simpler and more secure. Even so, the existence of this setup means that certain movie script scenarios are now off the table for characters who can be expected to understand this straightforward technology. It's kind of like how so many old movie plots would no longer make sense in a world of cell phones and GPS and mobile internet and satellite emergency location beacons.

P.S. My backup to all this is Google Inactive Account Manager. If I don't access my Google account for 3 months, then my trustees get control of it -- including the file they need to decrypt to see my other passwords. Unfortunately, 3 months is the minimum timeout Google allows.

Tuesday, March 09, 2021

Paranormality Flees Our Sensors

The brilliant XKCD called it in 2013. But this percentage graph underplays the story. Try this:
Note that the red scale on the right is 10X the scale on the left.  So really that graph looks like this:
And even that vastly underestimates the deployed imaging capacity, because smartphone pixel counts increased 10X in just the ten years after the iPhone launched:
And all the above is only about smartphones. It doesn't even consider security cams, traffic cams, weather cams, doorbell cams, dash cams, helmet cams, and trail cams. (Only the latter 3 cam types apply to Sasquatch, but all except trail cams apply to UFOs.) Such cams have also exploded in the last twenty years. And unlike smartphones, they patiently record without human intervention. I couldn't find data for deployment of such cams, but we can safely assume it's up at least 100X since 2000. 
Nor do I have data about the vast increase in military imaging capacity, which gave us the three (quite debunkable) Navy UFO videos. We'll ignore this category, since believers would claim that the military suppresses such imagery anyway.
[2021-06-13: A commenter points out: "The amount of SAR radars, satellite imagery, IR satellites observing earth and atmosphere has exploded. Sonic booms can be detected using seismic devices. L-band SAR can detect ionospheric fluctuation in resolutions of hundreds of meters to a couple of kilometers. All-sky imaging of meteor trails can be made using long wavelength arrays on earth detecting almost 10,000 trails per hour. Even grain of sand sized meteor leaves a shockwave."]
The above analysis yields a 20-year increase in deployed imaging capacity of at least 10,000X. So if UFOs are real, we should have expected to see a 10,000X improvement in the combined quantity and quality of UFO imagery. 
So where is it?

When Iran shot down its own airliner, it was caught on both security cam and dash cam. When Sully landed in the Hudson, two security cams caught it. "Caught on camera" is a genre you could watch 24/7 now, but it didn't really exist 20 years ago. If you search YouTube for "top meteor videos", they are amazing -- and mostly from the last decade, and mostly from dashcams and security cams. 
But search for "top UFO videos", and you will be very disappointed. I find no data on counts of UFO imagery, so a good proxy should be UFO witness reports. They increased a meager 3X since 2000 while deployed camera count increased by 100X in smartphones alone.

I also lack data for Sasquatch image counts, but squatchers have an extra problem: drone-mounted FLIR rigs are now quite affordable, and they can easily pick out warm-blooded megafauna in a forest at night. Sasquatch should now be as easy to find as these two deer:

Is it really a coincidence that paranormal phenomena retreat exactly to the blurry edge of humanity's sensor grid, even as that grid suddenly expands its capacity by a factor of thousands in just a couple decades? Either UFOs and Sasquatches are somehow clever and motivated enough to dynamically fine-tune how much ankle they show us, or maybe they are just part of the noise that is inevitable at the periphery of our sensor capacity.
Or maybe Mitch Hedberg is right: Bigfoot and UFOs are blurry in real life, and all the "blurry" pictures of them are as clear as such pictures could possibly be. So we will never ever see a picture like this:

Or, we could get such a picture tomorrow, and I will switch teams. Is there anything that could happen tomorrow to make a UFO or Sasquatch believer switch teams?

2021-06-13: Mick West wrote about this subject in 2019: the "Low Information Zone".

Monday, July 11, 2016

Why Age Of Em Will Not Happen Soon

You cannot resume a human mind from static imagery of a brain, any more than you can resume the apps running on your smartphone from static imagery of your phone's circuitry.

The FBI confronted this reality when trying to crack the San Bernardino shooters' iPhone.

The 2008 Whole Brain Emulation Roadmap seems to completely miss this point, except perhaps in its handwaving appendix on "non-destructive and gradual replacement". Those fantasies will eventually be realized, and only then will minds be able to be hibernated (and thus cheaply and quickly copied.)

So the Age of Em is extremely unlikely to happen in the manner and timeframe that the brilliant Robin Hanson expects.

There is in principle a way around this hibernation problem. You just have to emulate the entire development of a brain, and then feed it a suitable lifetime of input to train it into a desired state. This approach is computationally more expensive, and would require lots of slow (and morally objectionable!) iterations. Or you could try to bypass the iterations by instrumenting various (by definition unwilling) human subjects and log a few decades of their sensory inputs.  Thus you'd only be able to emulate the, um, victims of your experiments, rather than emulating arbitrary cognitive superstars.  Still, you'd be able to cheaply and quickly make copies of them, and an Age Of Emulated Boys From Brazil would then be possible.

Tuesday, November 11, 2008

Ballistic Missile Defense

There are distinct kinds of nuclear threats, such as:
  1. Attempts by nuclear superpowers to win a nuclear war in a first strike
  2. Attempts by nuclear superpowers to immunize themselves from U.S. nuclear coercion by establishing a secure second-strike capability
  3. Attempts by nuclear non-superpowers to immunize themselves from U.S. conventional military coercion by establishing a credible limited first-strike capability
  4. Acts of desperation by actors with either no return address or with good bunkers and no regard for their own citizens
In other words, we have to distinguish between ABM as used in nuclear war-fighting, and ABM as an attempt to undo nuclear arms proliferation. I see the latter as futile. Regarding the former, I'm OK with a porous low-cost ABM effort that offers an alternative to launch-on-warning as a way to restore mutual assured destruction between two adversaries armed to the teeth with heavily-MIRVed ICBMs (10x like the old MX and SS-18). But it is futile to use ABM to 1) prevent China from acquiring effective MAD parity, or 2) neutralize the ability of a North Korea or Iran to threaten anybody with nuclear ballistic missiles. We have to accept that China can incinerate an unacceptable fraction of our West Coast, and that a country like North Korea can (via speedboat if necessary) get a nuke into some city that we don't want to lose.

To get decent coverage for a boost-phase defense would seem to require either a big investment in orbiting assets or almost a cordon around the adversary, who can cheaply increase defense porosity by e.g. spinning his boosters or deploying warheads and penetration aids earlier, perhaps even while the upper atmosphere still degrades directed-energy weapons. Once you get past boost phase, I suspect that the physics and economics are overwhelmingly on the side of offense.

Monday, December 03, 2007

3-D View of Current Solar System

If you can cross your eyes to view a crossed stereo pair in 3-D, then click the solar system image in the sidebar at left and enjoy.

Sunday, November 25, 2007

When Technology Outraces Theology & Ethics

Pluripotent stem cells can now be generated from cells of the ordinary connective tissue of mature humans, according to forthcoming articles in Cell and Science. The Cell article's abstract reveals:
Successful reprogramming of differentiated human somatic cells into a pluripotent state would allow creation of patient- and disease-specific stem cells. We previously reported generation of induced pluripotent stem (iPS) cells, capable of germline transmission, from mouse somatic cells by transduction of four defined transcription factors. Here, we demonstrate the generation of iPS cells from adult human dermal fibroblasts with the same four factors: Oct3/4, Sox2, Klf4, and c-Myc. Human iPS cells were similar to human embryonic stem (ES) cells in morphology, proliferation, surface antigens, gene expression, epigenetic status of pluripotent cell-specific genes, and telomerase activity. Furthermore, these cells could differentiate into cell types of the three germ layers in vitro and in teratomas. These findings demonstrate that iPS cells can be generated from adult human fibroblasts.
A development like this tempts one to poke fun yet again at certain religionists, but been there, done that. Reason's Ronald Bailey links to his own pokings from 2004:
Is Heaven Populated Chiefly by the Souls of Embryos?

[B]etween 60 and 80 percent of all naturally conceived embryos are simply flushed out in women's normal menstrual flows unnoticed. This is not miscarriage we're talking about. The women and their husbands or partners never even know that conception has taken place; the embryos disappear from their wombs in their menstrual flows. About half of the embryos lost are abnormal, but half are not, and had they implanted they would probably have developed into healthy babies.

So millions of viable human embryos each year produced via normal conception fail to implant and never develop further. Does this mean America is suffering a veritable holocaust of innocent human life annihilated? Consider the claim made by right-to-life apologists like Robert George, a Princeton University professor of jurisprudence and a member of the President's Council on Bioethics, that every embryo is "already a human being." Does that mean that if we could detect such unimplanted embryos as they leave the womb, we would have a duty to rescue them and try to implant them anyway?

"If the embryo loss that accompanies natural procreation were the moral equivalent of infant death, then pregnancy would have to be regarded as a public health crisis of epidemic proportions: Alleviating natural embryo loss would be a more urgent moral cause than abortion, in vitro fertilization, and stem-cell research combined," declared Michael Sandel, a Harvard University government professor, also a member of the President's Council on Bioethics.

As far as I know, bioconservatives like Robert George do not advocate the rescue of naturally conceived unimplanted embryos. But why not? In right-to-life terms, normal unimplanted embryos are the moral equivalents of a 30-year-old mother of three children.

Of course, culturally we do not mourn the deaths of these millions of embryos as we would the death of a child—and reasonably so, because we do in fact know that these embryos are not people. Try this thought experiment. A fire breaks out in a fertility clinic and you have a choice: You can save a three-year-old child or a Petri dish containing 10 seven-day old embryos. Which do you choose to rescue?

Stepping onto dangerous theological ground, it seems that if human embryos consisting of one hundred cells or less are the moral equivalents of a normal adult, then religious believers must accept that such embryos share all of the attributes of a human being, including the possession of an immortal soul. So even if we generously exclude all of the naturally conceived abnormal embryos—presuming, for the sake of theological argument, that imperfections in their gene expression have somehow blocked the installation of a soul—that would still mean that perhaps 40 percent of all the residents of Heaven were never born, never developed brains, and never had thoughts, emotions, experiences, hopes, dreams, or desires.

But religious fundamentalists make too easy a target. In fact, modern science and prospective technology pose some fascinating ethical questions even for people whose worldview isn't derived from unsigned stories about an unpersuasive [Mt 11:20, Lk 10:13, Jn 6:66, 10:32, 12:37, 15:24] unpublished slavery-tolerating genocide-affirming [Mt 24:38, Lk 17:27] exclusivist [Mt 10:5, Mt 15:24] family-resenting [Mk 3:33, 10:29; Mt 10:37, 12:48, 19:29; Lk 11:27-28, 14:26] apparently-illegitimate [Mt 1:18-24, Jn 8:41] carpenter.

Skipping past the obvious examples regarding intellectual property and cloning, here is a sampling of other prospective technologies and the ethical questions they raise:
  • Corporate data-sharing and massive open-content community-maintained databases
    • What are a private citizen's reasonable expectations of privacy against other people sharing what they know about the person?
  • Photo-realistic computer-generated reality
    • Is child pornography always evidence of a crime?
    • Can recordings be trusted in court as evidence?
  • Miniaturized ubiquitous hi-capacity recording (ultimately, smart dust)
    • What are a private citizen's reasonable expectations of privacy against being recorded in public spaces?
    • For how long can those in power escape sousveillance?
  • Artificial wombs
    • Can abortion be tolerated when the fetus or embryo can easily be saved?
  • Cultured meat
    • Will killing animals for food be allowed when perfect meat can be grown artificially?
    • Will vegetarians eat cultured meat?
  • Virtual reality and designer psychotropics
    • As the cost of pleasure plummets while its intensity and realism skyrockets and its biochemical (as opposed to psychological) addictiveness declines, will it be a good or bad thing that so many people will be largely opting out of the traditional matter/energy economy?
  • Mass-production of persons (through any combination of AI, nanotech, and biotech)
    • How do inter-generational, inter-family, and international ethical relations deal with nearly-arbitrary potential increases in population?
For more such questions, see the (shockingly good) Metaphysics of Star Trek by Richard Hanley. My speculations on many of these topics are at http://humanknowledge.net/Thoughts.html#Futurology.

Saturday, November 24, 2007

Knowing Humans 2.0

This is my last Knowing Humans posting on Yahoo 360 and my first posting on Knowing Humans 2.0, hosted by blogger.com through http://knowinghumans.net. Subscribe there now.

With the public announcement of Mosh and Yahoo's embarrassing lack of blog search capability, I can no longer use laziness and company loyalty as an excuse for not migrating off of 360. (When Personals was re-org'd into the Search subdivision a couple years ago, I asked when Yahoo was going to have solutions for searching blogs and our own intranet. We still lack good answers for either.) I've recently resolved to do more of my online political activism through blogs and wikis and less through email-based forums, and so this week I started looking for an alternative to 360.

I picked blogger.com because it met my minimal requirements in being totally free and able to 1) back up my blog, 2) manually import and back-date my 360 postings, and 3) operate through my own domain. I've imported my 360 postings of the last year and soon will have all 200 of them up. I've also set my SiteMeter count of the new blog based on the 151K pageviews currently registered on the 360 blog. (Its technorati rank was 2,124,856, oddly up 400K from 2.5M in September despite relative quiescence. The rank of knowinghumans.net was 2.9M, as it was just a page of links to my 360 posts.)

I've indulged this week in customizing my blogger.com template, adding features such as:
  • The title area is centered over an up-to-date image of the current shading of the Earth, and adjusts nicely on window resizings.
  • A table of contents hack borrowed from Beautiful Beta.
  • A borrowed hack to suppress the Blogger nav bar.
  • My blogroll imported from Bloglines.
  • My recent bookmarks imported from Yahoo My Web.
  • My recent email correspondence imported from my Yahoo Group.
Next I want to add better search facility to replace the one that was in the nav bar. I also want to try out the AdSense integration, if only to see what sort of ads Google would place here. I have no financial need to try to monetize my blog, so the only ads I foresee posting here are for causes I endorse. In fact, I have a scheme in mind to de-monetize my blog by giving money to other bloggers. More about that later. :-)

Sunday, August 06, 2006

The Top 30 Libertarian Blogs

Here is a list of the 30 most influential liberty-oriented blogs. The list is sorted on the second column, which is the Alexa 3-month average reach per million users. The third column is the Technorati count of the number of blogs that link in. The more influential a blog is, the less generally liberty-oriented it has to be to make the list. Reynolds, Sullivan, and Becker/Posner aren't very self-consciously libertarian, but their influence is vast enough to compensate. Antiwar.com is more singleissuetarian than libertarian, and the site mentions Chomsky over 500 times, but its libertarian roots are undeniable. For EconLog, I attempt to break out the blog's Alexa reach from the reach of EconLib as a whole.

Blog | Alexa reach per million | Technorati links in | Notes
Glenn Reynolds: Instapundit | 306 | 6945 | "The Blogfather" is a liberventionist who ignores the LP as "trivial" and "a net negative"
Raimondo & Garris: antiwar.com | 244 | 3005 | 2 Rothbardians have flitted among 4 parties, backing Buchanan 2000 & Nader 2004
Andrew Sullivan: The Daily Dish | 205 | 3106 | Time hired the gay Catholic New Republic ex-editor who coined "South Park Republican"
Lew Rockwell | 198 | 2950 | Culturally conservative anarcho-capitalist paleolibertarians who disdain the LP
Reason | 120 | 1280 | 60K-circulation techno-optimistic libertarian magazine bemused by the LP
Neil Boortz | 112 | 1148 | The libertarian-leaning conservative talk radio personality
Volokh Conspiracy | 84 | 2705 | Law school profs who say the LP makes major parties less libertarian at the margin
Marginal Revolution | 68 | 1606 | Brilliant GMU economists/philosophers/aesthetes ignore the LP
Homeland Stupidity | 56 | 1073 | A libertarian look at technology and privacy
Hammer of Truth | 34 | 721 | Most important LP-friendly blog; Stephen Gordon is now LP Comm Director
Jane Galt: Asymmetrical Information | 31 | 800 | Libertarianish economics-literate journalism; McArdle hired away by The Economist
Daniel Drezner | 28 | 1101 | Tufts PoliSci prof gets 58/160 on the libertarian purity test
Radley Balko: The Agitator | 23 | 797 | Cato Institute policy analyst says "the LP is bad for libertarianism"
Vodkapundit | 20 | 1021 | Liberventionist who criticizes "doctrinaire libertarianism"
Samizdata | 20 | 1006 | Broad US/UK/Australian perspective from liberventionists who call the LP "turgid"
Jon Henke et al: QandO | 19 | 1016 | Interventionist neolibertarian Republicans who have given up on the LP
Strike the Root | 19 | 245 | "A libertarian / market anarchist perspective" that says the LP isn't radical enough
EconLog | 13? | 459 | 2 brilliant economists, including anarcho-capitalist theorist Bryan Caplan
Becker-Posner Blog | 10 | 2040 | Chicago's economics Nobel laureate and brilliant federal appellate judge
Catallarchy | 10 | 455 | Founding liberventionist Brian Doss says the LP's job is to discipline the GOP
Virginia Postrel: Dynamist | 9 | 539 | Former Reason editor, NY Times columnist, and Future And Its Enemies author
Cato At Liberty | 7 | 521 | Official Cato Institute blog
Cafe Hayek | 7 | 460 | 2 more brilliant GMU economists who focus on ideas, not politics
Vox Populi | 6 | 309 | "the Christian Libertarian commentator from WorldNetDaily"
Technology Liberation Front | 5 | 279 | Libertarian perspectives on technology
Rational Review | 4 | 79 | Tom Knapp's web journal seeks a radical LP for this "revolutionary era"
Cato Unbound | 3 | 387 | Monthly big-idea essay and reaction essays by big thinkers
Positive Liberty | 2 | 226 | Intelligent libertarians embarrassed by the LP's anarchist silliness
Kn@ppster | 2 | 208 | ZAPsolutist anarchist Knapp is hedging his LP bet with a new protest party
Coyote Blog | 2 | 203 | Capitalist libertarian thinks the LP is too kooky
Will Wilkinson: Fly Bottle | 2 | 198 | Ex-GMU Cato Institute policy analyst
Free Liberal | 2 | 87 | "non-partisan left-libertarian journal of politics and economics"
David Friedman: Ideas | 1 | 180 | Milton Friedman's son is the world's leading consequentialist anarcholibertarian theorist

Sunday, February 19, 2006

Runaway Consumerism Explains the Fermi Paradox?

I would like to see people get more quantitative in their handling of the Fermi Paradox. The Drake Equation is too constraining to let us give probability estimates for all the various possible explanations of the "paradox". The Wikipedia article on the paradox gives a workable taxonomy of possible explanations, so next we should assign probability estimates, in a way similar to my analysis of explanations for the gospel evidence. Meanwhile, here are some notes I made in 2001:

1. Aliens among us. No, UFOs are not aliens.
2. Apocalypse. No, since we are only a few centuries from beginning to explore the entire galaxy through self-replicating intelligent probes, it seems unlikely that *no* intelligent species could do it.
3. Chariots of the gods. No, we have no such evidence.
4. Isolation. Yes, low density of intelligence is part of the answer.
5. Quarantine. No, aliens could not mask all electromagnetic evidence of their existence.
6. Aggressive aliens. No, this doesn't explain why our ecosystem hasn't been scouted and obliterated.
7. Anthropic. Yes, that we are among the early birds is likely to be part of the answer.
8, 9, 10. Stay at home. No, you can't expect that no aliens would ever launch a self-replicating probe.
11. Transcendence. No, there is no credible evidence for transcendent modes of existence, and there are strong arguments against them.

Since then, discussions like the following have somewhat increased my estimate of the relative importance of the stay-at-home factor, even while not contradicting the point that it only takes one civilization to start the expansion.
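A toy calculation shows why that point has teeth: even if nearly every civilization succumbs to the stay-at-home factor, the probability that all of them do collapses as the number of civilizations grows. (The numbers below are placeholders for illustration, not estimates I'm defending.)

# Toy model: if each of N independent civilizations stays home with probability p,
# the chance that not one of them ever launches an expansion wave is p**N.
# Both p_stay and the N values are illustrative placeholders, not estimates.
p_stay = 0.99
for n_civs in (10, 100, 1_000, 10_000):
    print(f"{n_civs:>6} civilizations: P(nobody expands) = {p_stay ** n_civs:.3g}")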

Runaway consumerism explains the Fermi Paradox
by Geoffrey Miller

The story goes like this: Sometime in the 1940s, Enrico Fermi was talking about the possibility of extra-terrestrial intelligence with some other physicists. They were impressed that our galaxy holds 100 billion stars, that life evolved quickly and progressively on earth, and that an intelligent, exponentially-reproducing species could colonize the galaxy in just a few million years. They reasoned that extra-terrestrial intelligence should be common by now. Fermi listened patiently, then asked simply, "So, where is everybody?". That is, if extra-terrestrial intelligence is common, why haven't we met any bright aliens yet? This conundrum became known as Fermi's Paradox.

The paradox has become ever more baffling. Over 150 extrasolar planets have been identified in the last few years, suggesting that life-hospitable planets orbit most stars. Paleontology shows that organic life evolved very quickly after earth's surface cooled and became life-hospitable. Given simple life, evolution shows progressive trends towards larger bodies, brains, and social complexity. Evolutionary psychology reveals several credible paths from simpler social minds to human-level creative intelligence. Yet 40 years of intensive searching for extra-terrestrial intelligence have yielded nothing. No radio signals, no credible spacecraft sightings, no close encounters of any kind.

So, it looks as if there are two possibilities. Perhaps our science over-estimates the likelihood of extra-terrestrial intelligence evolving. Or, perhaps evolved technical intelligence has some deep tendency to be self-limiting, even self-exterminating. After Hiroshima, some suggested that any aliens bright enough to make colonizing space-ships would be bright enough to make thermonuclear bombs, and would use them on each other sooner or later. Perhaps extra-terrestrial intelligence always blows itself up. Fermi's Paradox became, for a while, a cautionary tale about Cold War geopolitics.

I suggest a different, even darker solution to Fermi's Paradox. Basically, I think the aliens don't blow themselves up; they just get addicted to computer games. They forget to send radio signals or colonize space because they're too busy with runaway consumerism and virtual-reality narcissism. They don't need Sentinels to enslave them in a Matrix; they do it to themselves, just as we are doing today.

The fundamental problem is that any evolved mind must pay attention to indirect cues of biological fitness, rather than tracking fitness itself. We don't seek reproductive success directly; we seek tasty foods that tended to promote survival and luscious mates who tended to produce bright, healthy babies. Modern results: fast food and pornography. Technology is fairly good at controlling external reality to promote our real biological fitness, but it's even better at delivering fake fitness — subjective cues of survival and reproduction, without the real-world effects. Fresh organic fruit juice costs so much more than nutrition-free soda. Having real friends is so much more effort than watching Friends on TV. Actually colonizing the galaxy would be so much harder than pretending to have done it when filming Star Wars or Serenity.

Fitness-faking technology tends to evolve much faster than our psychological resistance to it. The printing press is invented; people read more novels and have fewer kids; only a few curmudgeons lament this. The Xbox 360 is invented; people would rather play a high-resolution virtual ape in Peter Jackson's King Kong than be a perfect-resolution real human. Teens today must find their way through a carnival of addictively fitness-faking entertainment products: MP3, DVD, TiVo, XM radio, Verizon cellphones, Spice cable, EverQuest online, instant messaging, Ecstasy, BC Bud. The traditional staples of physical, mental, and social development (athletics, homework, dating) are neglected. The few young people with the self-control to pursue the meritocratic path often get distracted at the last minute — the MIT graduates apply to do computer game design for Electronic Arts, rather than rocket science for NASA.

Around 1900, most inventions concerned physical reality: cars, airplanes, zeppelins, electric lights, vacuum cleaners, air conditioners, bras, zippers. In 2005, most inventions concern virtual entertainment — the top 10 patent-recipients are usually IBM, Matsushita, Canon, Hewlett-Packard, Micron Technology, Samsung, Intel, Hitachi, Toshiba, and Sony — not Boeing, Toyota, or Wonderbra. We have already shifted from a reality economy to a virtual economy, from physics to psychology as the value-driver and resource-allocator. We are already disappearing up our own brainstems. Freud's pleasure principle triumphs over the reality principle. We narrow-cast human-interest stories to each other, rather than broad-casting messages of universal peace and progress to other star systems.

Maybe the bright aliens did the same. I suspect that a certain period of fitness-faking narcissism is inevitable after any intelligent life evolves. This is the Great Temptation for any technological species — to shape their subjective reality to provide the cues of survival and reproductive success without the substance. Most bright alien species probably go extinct gradually, allocating more time and resources to their pleasures, and less to their children.

Heritable variation in personality might allow some lineages to resist the Great Temptation and last longer. Those who persist will evolve more self-control, conscientiousness, and pragmatism. They will evolve a horror of virtual entertainment, psychoactive drugs, and contraception. They will stress the values of hard work, delayed gratification, child-rearing, and environmental stewardship. They will combine the family values of the Religious Right with the sustainability values of the Greenpeace Left.

My dangerous idea-within-an-idea is that this, too, is already happening. Christian and Muslim fundamentalists, and anti-consumerism activists, already understand exactly what the Great Temptation is, and how to avoid it. They insulate themselves from our Creative-Class dream-worlds and our EverQuest economics. They wait patiently for our fitness-faking narcissism to go extinct. Those practical-minded breeders will inherit the earth, as like-minded aliens may have inherited a few other planets. When they finally achieve Contact, it will not be a meeting of novel-readers and game-players. It will be a meeting of dead-serious super-parents who congratulate each other on surviving not just the Bomb, but the Xbox. They will toast each other not in a soft-porn Holodeck, but in a sacred nursery.

Thursday, June 09, 2005

Error: Integer Underflow In Time Machine

This universe does not allow time travel to moments prior to creation of time machine being used. To access earlier epochs, use an older temporal conveyance such as http://humanknowledge.net/Updates.html or http://groups.yahoo.com/group/marketliberal/messages