Sunday, January 01, 2006

Supernatural Causation, and other card games

I've been debating intelligent design over at Telic Thoughts, where recent discussion has turned to supernatural causation. Though there are a few fringe contributors who think that the supernatural can be scientifically validated, most ID proponents prefer to claim that ID does not inherently imply supernatural causation.

However, I think one can show that the generic form of ID that is used to attack evolutionary biology is indistinguishable from supernatural causation.

Suppose there were a "supernatural causation" (SC) hypothesis. The SC proponent claims that we will not find an explanation (directed or otherwise) for some subset of phenomena. That is, SC is a purely negative claim about all other scientific approaches to the phenomena in question.

The IDist, attempting to establish a difference between the ID and SC programs, will claim that, unlike SC, ID admits the possibility of natural intelligences that causally dictated the observed phenomena.

Yet, the SCist will counter that the IDist's proposal is mere handwaving, and that the IDist has the burden of demonstrating the detailed naturalistic mechanisms (forward planning, memory, data processing, manufacturing) that were behind the formation of life (or whatever else the SCist claims was poofed into existence). The SCist will demand of the IDist precisely what the IDists demand of the evolutionary biologists, i.e., no gaps.

As long as we are willing to accept that the lack of alternative explanations counts as confirmatory evidence for a "theory", we can't blame the SCists for demanding that IDists join the evolutionary biologists and "prove that the supernatural nature of the universe is illusory!"

The moral of the story is that ID must live by conventional, predictive scientific methods, or die by its own sword. If you admit meta-theories as scientific merely on the grounds that an alternative explanation would disprove them, then you have to acknowledge SC as a scientific program.

ID proponents have tried to bolster their claims using a Bayesian probability argument. The argument basically says that if experiments reduce confidence in the alternative to ID, then we should raise confidence in ID accordingly.

Omar put forward a similar claim at Telic Thoughts:

T1 = “No intelligent designer was involved in causing the existence of complex structures”

and let

T2 = “Some intelligent designer was involved in causing the existence of complex structures”

Note that if T1 is true, then the only remotely plausible way it can be true is for neo-Darwinian evolutionary theory (NDE) to be true. In other words, given the current state of our knowledge, if T1 is true, then NDE is true.

Thus, evidence against NDE is evidence against T1 (by virtue of the rule of inference modus tollens), and this is in turn evidence for T2.
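
The structure of that update can be made concrete with a little Python. This is just a sketch: the numbers are purely illustrative, the function name is mine, and the equivalence of T1 with NDE is Omar's premise, not mine.

```python
def posterior_T2(prior_T2, p_evidence_given_T1, p_evidence_given_T2):
    """Bayes' rule for two mutually exclusive, exhaustive hypotheses."""
    prior_T1 = 1.0 - prior_T2
    numerator = p_evidence_given_T2 * prior_T2
    return numerator / (numerator + p_evidence_given_T1 * prior_T1)

# Illustrative numbers only: an observation judged improbable under NDE (0.1)
# but assigned a vague, undifferentiated likelihood under design (0.5).
print(posterior_T2(prior_T2=0.5, p_evidence_given_T1=0.1, p_evidence_given_T2=0.5))
# -> ~0.83: lowering P(evidence | NDE) mechanically raises P(T2 | evidence)
```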

I think there are two flaws in this argument. The first is that NDE isn't equivalent to T1. NDE proposes specific, predictive mechanisms that partially explain how life evolved without forward planning. NDE does not rule out design.

The second reason why the argument is flawed is that it relies upon an implicit and false assumption about the search space for mechanistic explanations of evolution.

There's a good (if long) analogy that shows why this argument for raising confidence in T2 is flawed in the general case.

Imagine that I have a deck of cards. I begin turning over cards, and after the first few cards are turned over, a pattern emerges. The deck appears to be largely a standard deck sorted in ascending order of rank, but the Spades have been replaced with identical Jokers.

Let's establish these theories:

Q1 = "this is a standard deck of cards sorted in order of rank, but Spades have been replaced with Jokers."

Q2 = "the Ace of Hearts has been removed and replaced with a Joker."

We turn over 30 cards, and Q1 is further validated, raising our confidence in Q1. Yet the Ace of Hearts still has not been seen. So, are we justified in raising our confidence in Q2?

If the deck had been randomly shuffled (and Q1 were false), we would be justified in raising our confidence in Q2 as cards are turned over. This is because, without Q1, we would have no reason to expect the Ace of Hearts to be at the end of the deck ("...if Q2 were false, then, on average, in a shuffled deck, we would have expected to find the Ace of Hearts by now").

However, our confidence in Q1 means that we don't expect to significantly raise our confidence in Q2 until we approach the end of the deck. So, our confidence in Q2 remains almost unchanged until the very end.

The reason why this works out the way it does is that Q2 makes no specific predictions along the way. As we make our way through an unsorted deck, our confidence in Q2 changes only because we can estimate the total number of cards in the deck, not because any sorting rule is predicted. That is, P(observation|Q2) is the same for any individual observation. It is only the integral over many observations that can change our confidence in Q2.
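
For anyone who wants the arithmetic behind this, here is a small sketch of both situations. The 50/50 prior on Q2, the uniform-position assumption in the shuffled case, and position 49 as the slot where Q1 would predict the Ace of Hearts are all simplifying assumptions of mine, not part of the analogy itself.

```python
PRIOR_Q2 = 0.5
DECK_SIZE = 52

def posterior_q2_shuffled(cards_seen):
    """Shuffled deck: if the Ace of Hearts is present, its position is uniform,
    so missing it for longer and longer is steadily mounting evidence for Q2."""
    p_miss_if_present = (DECK_SIZE - cards_seen) / DECK_SIZE
    p_miss_if_absent = 1.0  # under Q2 the Ace of Hearts isn't in the deck at all
    num = p_miss_if_absent * PRIOR_Q2
    return num / (num + p_miss_if_present * (1 - PRIOR_Q2))

def posterior_q2_sorted(cards_seen, predicted_position=49):
    """Sorted (Q1) deck: Q1 tells us where the Ace of Hearts should turn up
    (49 is a stand-in for whatever slot the sort rule implies). Missing it
    before that slot is no evidence at all."""
    if cards_seen < predicted_position:
        return PRIOR_Q2   # nothing predicted yet, so the prior is unchanged
    return 1.0            # predicted slot passed and the card is still missing

for k in (10, 30, 48, 50):
    print(k, round(posterior_q2_shuffled(k), 2), posterior_q2_sorted(k))
# shuffled: 0.55, 0.70, 0.93, 0.96 ... the posterior creeps up the whole way
# sorted:   0.5,  0.5,  0.5,  1.0  ... it stays flat until the predicted slot
```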

Comparing T2 to Q2, we see that P(observation|design) is similarly undifferentiated by any particular observation (T2 is not predictive of actual observations). Therefore, confidence in T2 can only be founded on some nebulous estimate of the likelihood that we would have solved the puzzle of mechanistic evolution by now, given the number of observations we have made so far. This is analogous to having made our way through most of a shuffled deck.

All this presents two problems for T2.

First, we have high confidence in several partial theories of evolutionary mechanisms, and those theories tell us that we have a considerable amount of computation and research to do (e.g., cracking the protein folding problem) to complete our research program. This is analogous to knowing that there are a lot more cards left in the deck.

Second, even if we had no confidence in evolutionary biology, we would still have no idea how much data is enough to justify an inference to T2, given that the theory isn't predictive. This is analogous to not knowing how many cards there are in the deck.

In the end, we see that we have to account for validated theories and for the expected size of the search space. Without accounting for these, we cannot gain confidence in theories that are founded on purely negative tests of other theories.

My first version of this post incorrectly stated that T1 and T2 were not mutually exclusive.
