Suppose we make a set of observations, O_{1}, O_{2}, O_{3}, ... O_{n}. Each observation could be physical or mental, i.e., an experience of any kind. We devise consistent theories, {T_{i}}, that claim to account for the {O_{i}}. There are trivial and non-explanatory theories among them. One says this:

T_{1}: You will observe O_{1}, O_{2}, O_{3},... O_{n}.

This theory is trivial. If we observe some new O_{j}, we just amend the theory to:

T_{1}: You will observe O_{1}, O_{2}, O_{3},... O_{n} and O_{j}.

Why can we do this? Because T_{1} is never *inconsistent* with any observation O_{j} we might possibly make.

T_{1} is not explanatory of the {O_{i}}, not by my definition, and presumably not by yours. If T_{1} were explanatory, then every collection of observations or experiences would be trivially self-explanatory.

So, how do we resolve this minor problem? What is it about a theory that makes it explanatory?

One guess is that explanations serve to compress observations. A theory can have the effect of being a short-hand for many observations. For example, instead of maintaining a long list of the timed locations of a billiard ball in motion, we can propose that the location of the billiard ball is a fixed function of time and the ball's initial conditions. That is, we can propose that there are relatively fixed laws of billiard ball motion that substitute for a long list of data points. This is precisely my analogy with fitting curves to points on a graph. Fitting a curve is not a restatement of the data because the curve predicts interpolations and extrapolations. Note also that there is a difference between, say, noticing that the data points fall on a straight line and claiming that they fall on the line for a reason. The first is an observation, and the latter is a prediction.
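The billiard-ball analogy can be sketched in code. The data points and the line-fitting routine below are hypothetical illustrations, not from the original: a short rule (the fitted line) replaces the list of timed positions, and unlike the bare list, it predicts interpolations and extrapolations.

```python
# Fit a straight line y = a*t + b to observed (time, position) points by
# least squares, then use the rule to interpolate and extrapolate --
# predictions that a bare list of the data could never make.
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical timed positions of a billiard ball rolling at constant speed.
observations = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
a, b = fit_line(observations)

def position(t):
    return a * t + b

print(position(1.5))  # interpolation: 3.0
print(position(5.0))  # extrapolation: 10.0
```

The fitted rule `(a, b)` is the "compression": two numbers stand in for arbitrarily many data points, and they commit the theory to values at times never observed.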

So, I am claiming that an explanatory theory predicts a subset of {O_{i}} from part of the remainder of {O_{i}}. For example, suppose I make these observations:

O_{1} = 1

O_{2} = 4

O_{3} = 9

My theory should predict O_{3} given O_{1} and/or O_{2}, or predict O_{1} in terms of O_{2} and/or O_{3}, etc. One theory that works here is

O_{i} = T(i) = i^{2}

This not only predicts the already observed O_{1} through O_{3}, it also predicts O_{4}, O_{5}, O_{0}, O_{1.4}, and so on. I can't think of any non-trivial theories that don't make predictions. Can you?
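As a minimal sketch (the function name `T` follows the text's notation), the predictive theory is one rule that covers both the observed and the unobserved indices:

```python
# The theory T(i) = i^2 compresses the three observations into a single
# rule and commits to values at indices never yet observed.
def T(i):
    return i ** 2

observed = {1: 1, 2: 4, 3: 9}

# The rule reproduces everything already observed...
assert all(T(i) == o for i, o in observed.items())

# ...and also predicts at new indices, which is what makes it non-trivial.
print(T(4), T(5), T(0))  # 16 25 0
print(T(1.4))            # approximately 1.96
```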

Suppose you observe the following:

O_{2} = 4

O_{4} = 16

What if we theorize that

O_{i} = T(i) = i^{2}, for i = 2 and i = 4 only.

This theory has been carefully tailored not to make a prediction. Is this explanatory? No, it's just like T_{1}. We've just done a trivial coordinate transformation on the data by expressing the {O_{i}} in terms of the square of a number instead of a direct value. It's a trivial restatement of the data. You might as well say that:

definition: T(i) = O_{i}

We're not explaining the observations; we're just saying that each successive observation is given by a one-off rule that never applies to future observations. We would just be drawing dots over the data points on the graph so as not to predict anything.
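The contrast can be made concrete. In this sketch (hypothetical names `T_lookup` and `T_square`), the tailored "theory" is literally a lookup table over the data, while the genuine theory is one rule defined everywhere:

```python
# A "theory" that is just a lookup table over the data: it restates each
# observation under a new name and is silent about every unseen index.
lookup = {2: 4, 4: 16}

def T_lookup(i):
    return lookup[i]  # raises KeyError for any i not already observed

# A predictive theory: one rule, defined everywhere.
def T_square(i):
    return i ** 2

print(T_lookup(2), T_square(2))  # both give 4 on the observed data...
print(T_square(3))               # ...but only T_square predicts: 9
# T_lookup(3) would raise KeyError -- the lookup "theory" says nothing.
```

Both functions agree on the observed data, so nothing in the observations distinguishes them; only the commitment to unseen cases does.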

This is why an explanation never escapes making a prediction: if it made none, you could re-interpret the so-called explanation as a restatement of the data in a different coordinate system.

Remember, the {O_{i}} can be any form of experience, including a statistical measurement. This means that our predictions can be of a statistical nature, and that assertions of regularity needn't be large statistical effects. They could be assertions of very minor probability variations.
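A minimal sketch of such a statistical prediction, with hypothetical numbers: the theory asserts only a slight bias in a coin, and the prediction is that the heads fraction over many flips drifts toward the biased value rather than 0.5.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Hypothetical theory: the coin is very slightly biased, P(heads) = 0.51.
# The prediction is statistical -- a minor probability variation, only
# visible over many trials.
P_HEADS = 0.51

def flip():
    return random.random() < P_HEADS

n = 100_000
heads = sum(flip() for _ in range(n))
print(heads / n)  # close to 0.51 for large n
```

Such a theory is still falsifiable in the same sense as T(i) = i^{2}: a long run of flips whose frequency settled far from 0.51 would be inconsistent with it.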
