Tuesday, January 03, 2006

More on Explanation

I define an explanation as a rule of implication that connects observations such that A => B. Think of A and B as pattern matches on observations, so that the rule reads "state of affairs A causes state of affairs B." (Here, I use the term "cause" loosely, since there may be no time variable, e.g., in geometry.)

A set of observations without an explanation can be thought of as being either independent of any prior state or dependent on unknowable states. That is, an inexplicable set of observations has no detectable underlying rule of implication that connects members of the set. Every effect has its own hidden, independent cause. To the extent that we can say there's a function that predicts each observation, that function is tautologically fitted to each corresponding observation. The function "predicts" what you observe, no matter what you observe.
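To make the tautological fit concrete, here's a minimal sketch (with made-up names and data) of a "predictor" that is simply fitted to each observation after the fact. It "predicts" whatever was observed, so it is never wrong and never informative:

```python
# Hypothetical sketch: a "tautological predictor" is just a lookup table
# built from the observations themselves -- one hidden, independent
# "cause" per effect.

def tautological_predictor(observations):
    """Return a function that 'predicts' each observation by lookup."""
    memorized = dict(observations)
    return lambda key: memorized[key]

# Whatever we observed, the fitted function reproduces it exactly.
observed = [("monday", "rain"), ("tuesday", "sun")]
predict = tautological_predictor(observed)
assert all(predict(k) == v for k, v in observed)  # true by construction

# And it says nothing at all about unobserved cases:
try:
    predict("wednesday")
except KeyError:
    pass  # no prediction outside the fitted data
```

The point of the sketch: such a function fits the past perfectly and constrains the future not at all, which is exactly the mark of a non-explanation.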

So, now let's consider scenarios in which we do have an explanation.

Since we are trying to explain our observations, we must find effect B within our observations.

Consider the cases where (i) we find cause A within our observations, and (ii) we do not find cause A in our observations.

(i) If A is found to predict B within our observations, then we have an inference from our observations, as well as a prediction about future events, namely, that A leads to B.

(ii) If we do not find A within our observations, then the explanation is not a scientific inference from our observations. That is, we have no reason to suspect that A causes B based on our observations.
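The two cases can be sketched as a simple classifier (hypothetical names and toy observation records, assuming observations form an ordered sequence and "A causes B" means A is followed by B):

```python
# Sketch: classify a candidate rule "cause => effect" against a record
# of observations. The rule counts as a scientific inference only when
# the cause itself appears in the observations and is followed by the
# effect wherever it occurs.

def rule_status(observations, cause, effect):
    """Classify the rule 'cause => effect' against an observation sequence."""
    if effect not in observations:
        return "nothing to explain"     # B was never observed
    if cause not in observations:
        return "guess"                  # case (ii): A not in our observations
    # case (i): check whether A is followed by B wherever A occurs
    pairs = zip(observations, observations[1:])
    if all(b == effect for a, b in pairs if a == cause):
        return "inference"              # A predicts B within the record
    return "refuted"                    # A occurred without B following

print(rule_status(["A", "B", "A", "B"], "A", "B"))  # inference
print(rule_status(["C", "B"], "A", "B"))            # guess
```

Case (i) yields a rule we can project onto future events; case (ii) yields at best a guess, since the record gives us no occasion on which A was ever seen.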

Even though it is not a scientific inference, we might still count our rule as a random guess at an explanation. Such a guess would predict that there was A prior to the observation of B.

However, if A is such that it leaves no evidential trace (A is said to be among a set of unknowable states), then the purported explanation appears no different from the non-explanation presented earlier.

I don't think any of this sort of analysis is intuitive. We're used to thinking in very fuzzy terms about what constitutes an explanation. Some things feel explanatory when they are actually worse than random guesses.

The moral of the story is that if your explanation doesn't make any predictions, then it's not only worse than a random guess at an explanation, it's not even an explanation at all.