Carl Hempel developed a theory (the deductive-nomological, or DN model) in which the laws and boundary conditions of the explanans (the thing that does the explaining) must deductively imply the explanandum (the thing being explained). A slightly modified version of this is the inductive-statistical (IS) model which uses statistical generalizations instead of universal generalizations.

My model of explanation is very similar to the DN/IS model. The main difference is that, unlike the IS model, mine doesn't require that the regularity assert a high-probability statistical generalization. Whereas Hempel's model is based on a parallel with logical deduction, my model is based on differentiation from trivial restatement of the explanandum.

I think there's an intuitive reason why Hempel's requirement that the probability be high can be relaxed. If you're going to claim a probability of an outcome given certain initial conditions, it only makes sense that such a claim would be tested statistically over a sample size greater than one. This means that we can devise an experiment that statistically amplifies the variation of the probability distribution away from uniformity. Even if a law says that a child is only 1% more likely to inherit a genetic ailment given the presence of the disease in a sibling, that slight variation can be amplified by surveying a large number of families. Not only can small variations in probability distributions be magnified by larger sample sizes; the mere statement of a probability distribution implicitly asserts that such a test is anticipated. This leads naturally into Bas van Fraassen's constructive empiricism, because we can take either a frequentist approach or a Bayesian approach to statements of probability.
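The amplification idea can be made concrete with a back-of-the-envelope calculation. The rates below are illustrative assumptions (the text's genetics example gives only the 1-point difference, not the baseline), and `detectable` is a hypothetical helper using the usual two-standard-error rule:

```python
import math

# Hypothetical rates, assumed for illustration:
p0 = 0.05   # baseline chance of the ailment
p1 = 0.06   # chance given an affected sibling (1 point higher)

def detectable(n):
    """True if, with n families per group, the 1-point difference
    exceeds two standard errors of the difference in sample rates."""
    se = math.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
    return (p1 - p0) > 2 * se

# A handful of families cannot reveal the slight variation...
print(detectable(50))     # → False
# ...but a large survey statistically amplifies it.
print(detectable(10000))  # → True
```

The standard error shrinks like one over the square root of the sample size, so any nonzero departure from uniformity becomes detectable at some scale.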

After Hempel proposed the DN/IS model, several criticisms were raised. It was claimed that there were some explanations that were not predictive (that the DN/IS conditions were not necessary), and that there were some theories that met the DN/IS conditions but were not explanatory (that the conditions were not sufficient).

I have yet to find a potent instance of either type of criticism.

Let's look at the necessity criticism first. One example is provided here:

> Since smoking two packs of cigarettes a day for 40 years does not actually make it probable that a person will contract lung cancer, it follows from Hempel's theory that a statistical law about smoking will not be involved in an IS explanation of the occurrence of lung cancer.

This criticism stems from Hempel's original IS constraint that the laws and boundary conditions of the model must predict the explanandum with high probability. Since my model doesn't require large variations in probability distributions, the criticism falls flat.

The second criticism is that of insufficiency. These arguments purport to show valid DN/IS explanations that are not truly explanatory because they fail to capture the causality involved. Here's Wesley Salmon's classic:

C1: Butch takes birth control pills.

C2: Butch is a man.

L1: No man who takes birth control pills becomes pregnant.

____________________________________

E: Butch has not become pregnant.

The birth control example presumes that the person stating the problem has observed pregnancy in humans, but has not noticed (and thus does not know) that only women have ever been pregnant. He also notices that a particular man is taking birth control pills and that the man has not become pregnant.

In my opinion, the explanation that the pills prevent him from becoming pregnant is a perfectly valid candidate for an explanation, but it's simply not the best (or the correct) explanation.

Salmon's example only appears to counter Hempel because we intuitively apply our background knowledge that men don't get pregnant. That's another law, L2, that was not included alongside L1. If you assume L2, then L1 is superfluous. The point is that Hempel is perfectly correct if you assume the theorist doesn't know L2 and doesn't include it in his explanation.

Another set of arguments tries to show that symmetry of cause is a problem for Hempel. Normally, we would explain the length of a shadow cast by a flagpole in terms of the elevation of the Sun and the height of the flagpole. Suppose instead we try to explain the height of a flagpole in terms of the length of the shadow it casts and the elevation of the Sun. The formulas can be inverted and we can express the law fixing the height as a function of the other two variables.
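The inversion is purely algebraic: shadow length is height divided by the tangent of the Sun's elevation, so height is shadow length times that tangent. A minimal sketch (units in metres and degrees; the specific figures are illustrative, not from the text):

```python
import math

def shadow_length(height, sun_elevation_deg):
    """Usual direction: shadow as a function of height and Sun elevation."""
    return height / math.tan(math.radians(sun_elevation_deg))

def flagpole_height(shadow, sun_elevation_deg):
    """Inverted 'law': height as a function of shadow and Sun elevation."""
    return shadow * math.tan(math.radians(sun_elevation_deg))

s = shadow_length(10.0, 45.0)   # a 10 m pole with the Sun at 45 degrees
h = flagpole_height(s, 45.0)    # recovers the height exactly
```

Nothing in the mathematics distinguishes the two directions; both functions satisfy the DN criteria equally well.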

Given our background knowledge that the shadow is caused by the other two factors, we are intuitively aware that the inverted explanation is kooky, despite the fact that it meets the DN criteria. But what would happen if we didn't have that background knowledge?

The same thing occurs in any investigation in which the order of causality is ambiguous (does A cause B, does B cause A, or do they share a common third cause?). If, in reality, we have yet to discover that A actually causes B, this unknown fact does not mean that the theory that B causes A fails to qualify as a possible explanation for the observation. It simply means that the theory that B causes A is the wrong explanation, even though it is explanatory.

On what criteria do we distinguish A causing B from B causing A? There are several. The first is a time dependence such that A precedes B, or vice versa. In the case of the flagpole, we can detect that photons travel in a time-dependent way from the Sun to the flagpole, establishing the shadow as the effect.

The second is that, if we detect a relation among three variables A, B, and C, and we learn that A is fixed, then we tend to reject A as being caused by the other factors. In the intuitively kooky explanation, once we find that the explanation always predicts the height of the flagpole to be a constant, we prefer by convention to treat the height not as the effect but as a cause. In the absence of seeing one factor precede the others, the more constant a factor, the earlier we prefer to place it in a causal chain.
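The constancy heuristic can be sketched as a procedure. The function and observation records below are hypothetical (the text describes the rule only informally), using the crude measure "number of distinct values observed" as the inverse of constancy:

```python
def preferred_causes(observations):
    """Order factors from most constant to most variable: the more
    constant a factor, the earlier we place it in the causal chain."""
    names = observations[0].keys()
    def spread(name):
        # Crude constancy measure: how many distinct values were seen.
        return len({obs[name] for obs in observations})
    return sorted(names, key=spread)

# Hypothetical observations: the height never varies,
# while elevation and shadow length do.
obs = [
    {"height": 10, "elevation": 30, "shadow": 17.3},
    {"height": 10, "elevation": 45, "shadow": 10.0},
    {"height": 10, "elevation": 60, "shadow": 5.8},
]
print(preferred_causes(obs))  # 'height' comes first: cause, not effect
```

Because the height is constant across every observation, the procedure places it at the head of the list, matching the convention described above.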

So, again, what we have here is a failure to isolate the experiment from background knowledge. If we only ever saw the Sun at one elevation, the flagpole at one height, and the shadow at one length, we would not have enough information to divine an order of causality. In that scenario, the theorist will be quite justified in explaining any one factor in terms of the other two. He only prefers one explanation to the others once he makes a deeper (far deeper) investigation.
