Friday, November 21, 2008

The Problem With Detroit

The U.S. automobile industry is in a shambles. Why?

Well, there are several contributing factors, such as higher labor costs, but there is one big reason. U.S. automakers don't make fuel-efficient vehicles that can compete.

Chronically, the automakers have planned on a short time horizon. They didn't invest over the long term, and they're unwilling to change the status quo. Ford, GM and Chrysler have too many inefficient trucks and SUVs, and their manufacturing processes take too long to retool. That's why cars like the Pontiac Grand Prix don't change body styles for 5 years or more, while the Toyota Camry gets a facelift every year or two.

Though the automakers developed hybrid, electric, and fuel-cell vehicles, they did so primarily as a PR move, with no intention of shipping green machines to the public unless they were absolutely forced to do so. Toyota forced their hand, and now the American carmakers are playing a sad game of catch-up.

This myopic strategy has been ongoing for the last decade. Everyone knew the U.S. car industry was doing it.

What could have changed this? CAFE! Increasing Corporate Average Fuel Economy standards across the fleet. The government could have forced the U.S. auto industry to build more fuel-efficient vehicles.

This would not only have made our carmakers greener, it would have made them more competitive over the long term.

Why didn't it happen? Because the Republicans insisted that it was better to let business play its own game than to have government get involved. Oh, and political contributions from the automakers might have had something to do with it, too.

It's really very simple. When you look at an industry, there are limitations in how the marketplace works. Most U.S. corporations don't have a long-term strategy. They're obsessed with short-term profits and stock prices. Meanwhile, other governments write legislation that forces their industries to plan for the future. Consequently, their corporations are safer, greener, more citizen-friendly, and more competitive.

I'm all for free trade, but if we're going to take down trade barriers, why should the U.S. compete with one hand tied behind its back?

Tuesday, November 04, 2008

What government can do for biomedical research

Sharon Begley in Newsweek:

These barriers to "translational" research (studies that move basic discoveries from bench to bedside) have become so daunting that scientists have a phrase for the chasm between a basic scientific discovery and a new treatment. "It's called the valley of death," says Greg Simon, president of FasterCures, a center set up by the (Michael) Milken Institute in 2003 to achieve what its name says. The valley of death is why many promising discoveries—genes linked to cancer and Parkinson's disease; biochemical pathways that ravage neurons in Lou Gehrig's disease—never move forward.

The next administration and Congress have a chance to change that, radically revamping the nation's biomedical research system by creating what proponents Richard Boxer, a urologist at the University of Miami, and Lou Weisbach, a Chicago entrepreneur, call a "center for cures" at NIH. The center would house multidisciplinary teams of biologists, chemists, technicians and others who would take a discovery such as Keirstead's and nurture it along to the point where a company is willing to put up the hundreds of millions of dollars to test it in patients. The existence of such a center would free scientists to go back to making important discoveries, not figuring out large-scale pipetting, for goodness' sake.

Sunday, November 02, 2008

Skeptics Protest Bloggs Conviction

Fred Bloggs was convicted of the murder in court yesterday. His fingerprints were found at the scene. The victim's blood and DNA were found on Bloggs's coat at his home. Also, the murder weapon was found in Bloggs's garage. Eyewitness accounts placed Bloggs at the murder scene on the day in question.

However, skeptics protested against the verdict. Protesters argued that Bloggs was a victim of as-yet-unexplained coincidences. They argued that the victim died of natural (although bizarrely bloody) causes.

Skeptics cited what they called missing evidence in the case. They argued that prosecutors failed to say precisely how Bloggs traveled to the murder scene. Though advocates for Bloggs could not produce an alibi for him, they claimed the court's judgment to be absurd if it could not say definitively whether Bloggs took the bus or rode his bike to the scene (or how many seconds late the bus was running).

Lacking evidence or alibis, protesters advanced even stranger arguments to defend Bloggs. The skeptics suggested that if a person could seem to be stabbed by an assailant in all physical respects without actually having been stabbed by an assailant, then there must be some ineffable difference between being physically stabbed by an assailant and actually being stabbed by an assailant. On this basis, they argued that it was unreasonable to convict Bloggs on the basis of physical evidence. The skeptics were elated by the cleverness of the argument, but when asked by a reporter whether the premise of the argument begged the question, the skeptics pretended they hadn't heard the reporter's question.

Overall, protesters said it had been a good day in the Bloggs case, and claimed that their demonstration was evidence that the case against Bloggs was in full retreat, and, indeed, that the practice of relying on physical evidence in court cases would soon be abandoned.

Thursday, July 10, 2008

Statistical weight versus gut, aka, more on zombies

Dualists do not a priori believe that consciousness has a physical component.

Imagine living 500 years ago. Peter says the mind is a physical mechanism; Dave says it's not all physical. Now, what are Peter's predictions? Peter predicts that every cognitive function can be intercepted or corrupted by physical means. Meanwhile, Dave predicts that any given cognitive function may or may not be corruptible by physical means.

Centuries pass, and we find that, at every opportunity, Peter's predictions are validated. Dave's theory has not been absolutely ruled out, but it has been ruled out statistically. What are the odds that Dave's dualism is that one rare form of dualism that looks exactly like Peter's physicalism?

Well, the odds are overwhelmingly in favor of Peter. Every experiment that could go Dave's way but doesn't is a factor of two in favor of Peter's theory. Today, one would be guilty of gross fine-tuning (and gap argumentation) to suppose that Dave's theory were likely to be true. Even if Peter had passed only ten tests of physical cognitive function, Dave's theory would already be roughly a thousand-to-one long shot.
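
To make the arithmetic concrete, here is the bookkeeping as a minimal sketch in Python, assuming even prior odds and treating each passed test as the factor-of-two likelihood ratio described above:

# A minimal sketch of the odds arithmetic above. Assumptions (mine, for
# illustration): even prior odds between Peter and Dave, and a clean
# factor of two per experiment that could have gone Dave's way but didn't.
prior_odds = 1.0          # 1:1 between physicalism (Peter) and dualism (Dave)
factor_per_test = 2.0
n_tests = 10
posterior_odds = prior_odds * factor_per_test ** n_tests
print(posterior_odds)     # 1024.0, i.e. roughly a thousand to one against Dave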

The question is: does the zombie argument carry a million-to-one (or billion-to-one) statistical weight of its own, enough to cancel out all of Peter's data from the last five centuries?

No. The premise of the zombie argument is that human zombies are possible, i.e., that physicalism is insufficient to explain qualia. But qualia may not even exist as non-causal elements. Even if our beliefs favored the existence of non-causal qualia (and mine certainly don't), we would not be sure to one part in ten, let alone one part in a million. If we were billion-to-one certain that qualia existed as a non-causal part of the cognitive story, then Dave could happily sustain his debate with Peter. But that's just not the case.

Friday, June 06, 2008

The Placeholder Fallacy

In physics, we have found a pattern of reduction and unification. James Clerk Maxwell discovered that the electric force and the magnetic force are both aspects of a single electromagnetic force. Abdus Salam, Steven Weinberg and Sheldon Glashow were awarded the Nobel Prize for their work revealing that the electromagnetic force and the weak nuclear force are two aspects of a single electroweak force. The hope is that, one day, we will unify all of the forces in a "Theory of Everything" (ToE) described by one or two simple equations.

However, as yet, we do not know whether a ToE exists. Some think that string theory might turn out to be the ToE, but for all we know, the ToE could be based on a radically different mathematics. For now, a ToE remains an elusive dream.

Imagine two astronomers are looking through a telescope tonight, witnessing a galaxy exploding mysteriously. One astronomer says to the other "Aha! This explosion is explained by the Theory of Everything!"

The other astronomer replies, "Really? What is the Theory of Everything?"

The first astronomer responds "I don't know what the Theory of Everything actually says, but, it being a theory of everything, it must explain this explosion."

Has the first astronomer explained the explosion?

Of course not! The first astronomer is merely using a reference to a theory he does not have. He is using a placeholder for an explanation as if he had the actual explanation.

This is akin to me stepping ashore on an unexplored continent, declaring the tallest mountain to be named "Mount Logic", and then trying to claim credit for the discovery of the tallest mountain on the continent.

Thus, I present you with what I call the Placeholder Fallacy:

The Placeholder Fallacy
Treating a reference to an explanation you don't have as if it has explanatory power.

This fallacy is most commonly used by theists when they claim that God can explain the existence of the universe, the fine-tuning of physical constants, or the origin of species. By the definition of a deity, God can certainly perform all of these tasks. This is no different from there being the possibility that an as-yet undiscovered physical law can explain all these things. What the theists really mean is that, if we knew the mind of God, we would have an explanation for these things, not just the possibility of an explanation. But theists generally reject the idea that we can know the mind of God well enough to predict anything like creation of universes, creation of life with common descent, etc. God, or rather the mind of God, is an explanation we do not (and can never) have. It can't explain anything until we know what God was thinking.

Note that if we knew what God was thinking, we could make some predictions, and then God would begin to be explanatory.

How does one expose instances of the placeholder fallacy? If the alleged explanation is prediction-less, just substitute the as-yet undiscovered Theory of Everything for the alleged explanatory agency. If it doesn't work for the ToE, it doesn't work for the agency either.

Example: "God explains the creation of the universe."
Test: "The ToE explains the creation of the universe."

Example: "God explains why this child survived the crash."
Test: "The ToE explains why this child survived the crash."

Example: "God explains why humans can think rationally."
Test: "The ToE explains why humans can think rationally."

Monday, May 05, 2008

Zombie Question-Begging

We get inferences to reduction even when we're not certain that our reductionist model accounts for every pre-reduction fact. For example, there are some complex systems of water that have not been simulated in terms of H2O molecules because the computational task is beyond our abilities. So it is possible that, say, some kinds of whirlpools cannot be accounted for in terms of H2O. Perhaps such whirlpools require some sort of irreducible water spirit? Yet, we don't doubt that water reduces to H2O. Why?

The argument is roughly like this: irreducible water spirits don't place constraints on experimental tests (while still being relevant to them), whereas physical reductionism does. Experimental results are consistent with the constraints when they needn't have been. Therefore, it is probable that water reduces to H2O.

Suppose there are fair coins and two-headed coins, and I take one of the coins at random and flip it in front of you. It lands heads. What are the odds that the coin is fair? Clearly, it is more likely that the coin is the two-headed coin. Now take this to the Nth power, and you'll see why we don't regard water as consisting of water spirits (fair coins), even if we have not formally reduced every instance of water behavior we have ever observed.
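
Spelled out as a minimal Bayes calculation (with the added assumption, mine, that there are equal numbers of fair and two-headed coins):

# Bayes' rule for the coin example. Assumption: equal numbers of fair and
# two-headed coins, so the prior is 50/50.
prior_fair, prior_two_headed = 0.5, 0.5
p_heads_if_fair = 0.5
p_heads_if_two_headed = 1.0
p_heads = prior_fair * p_heads_if_fair + prior_two_headed * p_heads_if_two_headed
p_fair_given_heads = prior_fair * p_heads_if_fair / p_heads
print(p_fair_given_heads)   # 1/3: after one head, "two-headed" is twice as likely as "fair"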

Similarly, there are a great many ways that minds are consistent with physics in ways they needn't have been if minds were not reducible to physics. Hence, it is rational to believe that minds are likely to be physical systems (they might be irreducible, but it is terrifically unlikely because we would be supposing that very special, fine-tuned form of irreducibility that looks just like reducibility wherever we look).

As for the zombie argument, I personally think there's some very subtle question-begging going on. In order for qualia to escape the aforementioned reductionist inference, it has to be claimed that qualia are wholly irrelevant to and disconnected from physics. By making this claim, it is also implicitly claimed that qualia cannot have a physical explanation. If this assumption is sustained, then qualia don't have any physical implementation, so physical minds don't place any more constraints on experiment than irreducible ones do, and the inference to reduction to physical minds fails.

However, if we deny from the start that qualia can have a physical explanation, that's begging the question. There are also multiple arguments to the effect that physically irrelevant qualia don't exist at all.

(Originally a comment on Philosophy Etc.)

Wednesday, April 02, 2008

Abstractions and Language

Suppose I see a rabbit for the first time. I know it is small, brown, has four legs, is furry and has big ears. In recognizing the bunny, I have created a filter in my brain - a rabbit recognizer. Anything that is recognized by this filter I will call a "rabbit". This filter is an abstraction because it will recognize any rabbit, including rabbits I have yet to see.

So when I say "rabbits have long ears" I really mean that "my rabbit filter is triggered by (among other things) long ears."

This exposes an important fact about language. It means that when we speak in terms of abstractions, we don't have to be referring to some Platonic ideal or some floating universal. We can be referring to our own faculties, and what would trigger those faculties to recognition.

So when I recognize what a watch is doing when it is keeping time, I automatically create an abstraction filter for time-keeping. I can speak of time-keeping mechanisms in the abstract because I refer to the filter in my mind that recognizes such things. And I can say that the time-keeping ability of this particular watch is due to the mechanism inside it.

So when you ask "Is a watch a time-keeper in the absence of minds?" you have to decide what you mean by the question. "Time-keeper" could mean that I presently see and recognize and use the device as a time-keeper. Or "time-keeper" could mean that, if I had such a device here and now, I would recognize it as a time-keeper. You would have to take time-keeper only in the strict, former sense of the word to say that time-keepers would not exist without us. However, taking the word "time-keeper" in this sense is misleading. If I used the first definition, then any watch not in my presence would not be a time-keeper. (And any rabbit yet to be born would not be a rabbit, etc.) No one takes language to mean this. The language is taken such that a device is a time-keeper if it would be recognized as such by a mind, if a mind were present.

Tuesday, April 01, 2008

Reducing a Watch

The following is an excerpt from a comment I made over at Dangerous Idea 2. The comments got a little sidetracked from the original post, but the issue at stake is what reduction is about. Does a reductionist theory eliminate phenomena, or does it identify one phenomenon with a different one? Neither, I argue below....

The watch tells the time. The watch ticks. The hands of the watch move. The watch contains movable springs and gears. The watch has a glass cover and a part that connects it to a strap or watch-chain.

The reductionist theory of watches says that the ticking and the time-telling and the hand movements are caused by the springs and gears working as a mechanism. The reductionist does not say that the hand movements and the time-telling are identical to the springs and gears in the absence of a mechanism.

When the reductionist theory is first put forward, the mechanism is unseen. So we're not saying that something we have seen is something else we have seen. We are saying that something we have yet to see (the mechanism) causes the seen phenomena.

In this case, what we have observed to date about the watch does not contradict the reduction. The ticking, the gears and springs, and the time-telling are all still present after the reduction; they don't disappear. The reduction does not eliminate the motion of the hands or the telling of the time. It explains them.

Now, let's backtrack to a time before we knew the watch reduced.

A priori (before the reduction), we might think that the watch properties cannot be reduced. We might think that the external properties of the watch are the fundamental ones. For example, we might think time-telling is in the watch as a fundamental property of the whole and cannot be broken down to a mechanism of more fundamental parts.

When we propose irreducibility like this, there will generally be parts in the watch that we can remove without breaking the time-telling. For example, we can remove the glass cover or the parts that connect the watch to a chain or strap. Take off these other parts and the watch continues to tell the time.

When we open the watch and see the springs and gears moving in sync with the hand movements, we have to ask whether they are all fundamental or whether one is more fundamental than the other. For example, maybe it is the intrinsic time-telling that moves the hands, and the hands drive the gears. If that were the case, then the gears and springs would be at least as likely to be unnecessary as necessary. Maybe the watch is intrinsically a time-teller, and the gears serve only to cause the ticking.

On what basis do we successfully and completely reduce the watch to a mechanism of gears and springs? Well, the watch reduction is totally successful once we understand the mechanism, and once we can build watches or show that the gears and their mechanism predict the ticking and the time telling.

However, we can make partial reductions without understanding the whole watch. If we find that the watch cannot tell time without the gears, there are (a priori) 2:1 odds that the gears cause the time-telling (and that time-telling is not fundamental to the watch). This is because if time-telling is fundamental, the time-telling is compatible with the gears moving or not moving, but if the gears and mechanism are fundamental, the gears must move.

Similarly, if we find that tweaking the gears causes the time-telling to speed up or slow down, then we have another 2:1 (now 4:1) statistical advantage for the reductionist gears theory. If time-telling is fundamental, then it may or may not be possible (2 possibilities) to change the hand movements by tweaking the gears. On the other hand, if the gears and mechanism are fundamental, then it will certainly be possible to change the time-telling rate by tweaking the gears.

In cognitive science, we have performed many partial reductions. If mind or its aspects are fundamental, then we don't need physical brains, physical memories or any of the other things comparable to the gears in the watch. Yet we have them. If minds are fundamental, then we don't need circuits in our brains that function as memory, that recognize, that predict, and that emote. Yet we have all those things. It turns out that just about every function of mind can be tweaked physically or chemically. So while we don't know the whole mechanism, we ought to be well over 99% certain that the mind is not fundamental, and that the neurochemical mechanisms are fundamental to mind.
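
As a rough sketch of how those 2:1 factors compound (assuming, purely for illustration, even prior odds and independent findings):

# How independent 2:1 findings stack up, assuming even prior odds.
def probability_of_reduction(n_findings, factor=2.0, prior_odds=1.0):
    odds = prior_odds * factor ** n_findings
    return odds / (odds + 1.0)

for n in (1, 2, 7, 20):
    print(n, probability_of_reduction(n))
# 1 -> 0.667, 2 -> 0.8, 7 -> 0.992 (already past 99%), 20 -> 0.999999...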

Sunday, March 23, 2008

Specious Explanations: The Short Form

In my last post, I explored the boundary between the explanatory and the non-explanatory, and it ended up being rather long-winded. Since then, I've condensed my position down to just a few sentences. Here goes...
  • If your theory predicts nothing then it is not explanatory, for all it does is restate your existing observations.
  • If your theory is just a reference to (or placeholder for) a theory you don't have, then it isn't explanatory. You cannot explain merely by declaring what you would name an explanation if you had one.
  • If your theory is predictive, but attributes the prediction to an entity you know nothing about, then the entity in question is not part of the explanation.
Here's the simple test. If you can substitute "Unknown Theory of Everything" for any entity in your theory, then that entity does no explaining in your theory.

Example:
Gravitational force is proportional to inertial mass.
This theory makes many predictions, so it is an explanatory law. However, if you attribute this force to an "Unknown Theory of Everything" then your Theory of Everything isn't adding any explanatory power. If you say Unknown Theory of Everything causes Gravitational force to be proportional to inertial mass, then you're not explaining any more than you did with your original theory.
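
As one standard instance of the predictions this law makes (a textbook example, not from the original post): near the Earth's surface the proportionality reads

F = mg \quad\Longrightarrow\quad a = \frac{F}{m} = g,

so every object, whatever its mass, is predicted to fall with the same acceleration g.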

Example:
At random, God strikes sinners with lightning. We are all sinners, and all guilty in the eyes of God.
This theory isn't predictive, so it explains nothing. Furthermore, its ridiculousness is exposed when we substitute "Unknown Theory of Everything" for God: At random, Unknown Theory of Everything strikes some sinners with lightning.

Saturday, March 15, 2008

What counts as a prediction?

I was describing my theory of explanation to a friend, and he suggested that my definition of a predictive theory was too vague.

He gave the example of a religion which explains airplane accidents based on the sinning nature of people around the world. People sin, therefore God punishes us with airplane accidents that are not necessarily directed at sinners. This theory predicts future airplane accidents if people continue to sin. Does the nonspecific prediction of future airplane accidents count as a proper prediction?

Well, the model being proposed looks something like this:

airplane accidents = God(sin)

If these variables represented the probability of an airplane crash and the aggregate amount of sin, or something along those lines, then my standard argument against the theory would be that the function "God" is not defined. The connection between the two variables would be so mysterious that it would be like an infinite-order polynomial, and no foreseeable amount of fine-tuning would generate a particular prediction. So the original "explanation" would degenerate into a claim that "if we had a predictive theory, it would be explanatory; if we knew the mind of God, then accidents would be predicted and explained." In other words, this is a reference to an explanation we don't have, and it has no explanatory power.

But my friend was not asking so simple a question. He wants to know how we draw the fine line between a predictive theory and a non-predictive one. In particular, is it possible to weaken the prediction and salvage the theory's explanatory power? Can we rig up this formula to represent the mere existence of airplane accidents and sinning? Can we theorize that the mere existence of sinning implies the existence of airplane accidents?

Well, there are a few things that any such theory would have to avoid if it's going to be more than the false pretense of a prediction. The theory can't claim that the elimination of sin is impossible, or that the other factors in airplane crashes make it impossible to test the claim. If a theory did these things, it would merely be pretending to make predictions.

But let's suppose that the advocates of this hypothetical religious theory accept that sin could be eliminated and that their claim is actually testable in principle, even if difficult in practice. Are there any other problems?

Yes. The problem is that God has nothing to do with the prediction.

Suppose I explain the electrostatic force of attraction between two spheres in terms of the electrical charges on each sphere and the square of the distance between the spheres. This is known in physics as Coulomb's law. That is, I provide my prediction as a formula. I can use this formula to explain the force of attraction between charged objects. I can say that sphere A is attracted to sphere B because of Coulomb's law.
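
In symbols, that predictive formula (Coulomb's law, stated here for completeness) is

F = k_e \frac{q_1 q_2}{r^2}, \qquad k_e \approx 8.99 \times 10^{9}\ \mathrm{N\,m^2/C^2}.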

Now suppose that I also attribute the attraction between charged objects to the work of undetectable faeries. Something is obviously wrong with this. Faeries are not known for imposing inverse-square laws of attraction. The existence of faeries does not predict the formula. It has nothing to do with the predictive theory, and has simply been gratuitously slapped onto a naturalistic explanation. In short, the faeries aren't doing any predicting.

I could try to wriggle out of this by saying that faeries are ultimately responsible for the attraction, and that by discovering Coulomb's law of electrostatic attraction, we are learning something new about undetectable faeries. However, I could replace the word "faeries" with undiscoverable Theory of Everything (ToE) and claim the same excuse. Yet, everyone would reject this. If a ToE existed and we knew what it was, the theory of everything would predict the electrostatic attraction formula. However, the ToE isn't doing any explaining here. The predictive power of this theory has no more to do with the ToE than it does with faeries. If we had a theory of faeries which cause electrostatic attraction, such a theory would predict Coulomb's law, but we don't have such a theory of faeries, so the faeries predict nothing.

So, we can extend our original list of specious explanations. A theory loses its explanatory power when the theory
  • Fails to make any predictions, and merely restates our observations.
  • Fails to make a prediction because it refers to a theory that we do not have (even though the theory would be predictive if we actually knew any of its details).
  • Makes a prediction, but fallaciously attributes the explanatory power of the predictive relationship to parts of the theory that are non-predictive.
Most of the time when theories fall prey to this last error, they are positing some entity that is alleged to be responsible for the prediction in question. The key to uncovering the error is to ask how this entity is defined.

If the entity refers to the prediction rule itself, there's no problem. This is the case in a scientific law, like Coulomb's law. It is the law itself that is responsible for the prediction (or rather, it is simply in the nature of the stuff one is observing that it follows the law). Similarly, a physical theory (which leads to many laws) is no problem, because it posits entities that are components of the prediction-making mechanism. For example, while quarks have never been seen directly, they are the focus for generating many rules about the behaviors of elementary particles. Yet quarks do not refer to anything more than this. Quarks do not have any existence beyond the predictive theory. For example, quarks do not have any moral status because they are not part of any predictive moral theory.

On the other hand, if the responsible entity is defined outside of our rule-making system, it may be illicit. This is the case for God and the Theory of Everything. In such cases, the entity is presumed to exist independent of any particular set of predictions.

Of course, there is a way for the entity to be defined outside of the rule-making system and yet not be illicit. If the entity is already defined as an element of a known rule-making system, then the entity may have a chance to be explanatory. Here's an example.

The Pacific Ocean is part of a predictive model. If you go to the Western coast of the United States, you'll see a very large body of cool, salty water. This body usually has waves breaking on the shore. This body of water is called the Pacific Ocean, and it is predictive in space, time and numerous physical properties. Hence, if I am in Los Angeles and I walk far enough to the west, I predict I will get my feet wet. Why? Because I have this predictive thing called the Pacific Ocean that is defined by its geographic and physical properties.

Now suppose I am trying to explain the formation of Monterey Bay. I can theorize that the Bay was caused by the action of the Pacific. In this case, the Pacific is not being invented for the purposes of explaining the bay. Rather, it was invented to explain other things, but may also explain the erosion of Monterey Bay. The Pacific becomes explanatory of Monterey Bay when a mechanism is cited by which the Pacific can erode bays.

Every explanatory entity can be defined as "that which predicts X." God and invisible faeries are not defined by what they predict, and so they fail to be explanatory.

Critics of this conclusion will say that not everything that exists is predictive. In other words, they may argue that God exists and is a meaningful concept even if God predicts nothing. I disagree, but this point is irrelevant. The question at hand is whether an entity is explanatory, not whether it exists.

I apologize for the verbosity and lack of clarity in this post. It took me a while to think it through. I'm sure I'll come up with abbreviated ways of saying the same thing.

Thursday, March 13, 2008

The Financial Security Objection

My friend R is a conservative. The other day, we discussed the regressive tax strategies advocated by Republicans. R did not dispute my objections to such schemes, but instead settled on an honest and simple statement of his motivations.

R's concern was about his family's ability to accumulate wealth. He noted that during the Great Depression, wealthy Americans were hurt by the economic conditions but, unlike working-class families, were not devastated. So my friend espouses a strategy of accumulating a lot of wealth and preventing the government from taking his money, so that he will be in a relatively good position should another Great Depression occur.

I don't know whether this is a typical belief among conservatives, but I'm sure my friend is not alone. We are all driven by some combination of fears and hopes about economic outcomes. R's fear is that taxation will leave him without any hope of surviving an economic collapse.

Needless to say, I have some issues with this kind of strategy.

During the Great Depression, there were shantytowns where people lived in crates and old pipes and anything else they could find. However, as far as I can tell, only a small minority of the population fell victim to this outcome. If that had been the fate of the average Joe, the majority of Americans would have lived in Hoovervilles, and that doesn't seem to have been the case. Instead, most Americans endured significant hardships (unemployment went as high as 25%), but most were not rendered homeless. Only the few super-wealthy folk survived unscathed.

So, first, if we had to define the boundaries of the economic classes during the Great Depression, we probably would say that there was a minority of truly destitute persons, a large majority of persons with a lower standard of living, and a teeny, tiny sliver of super-wealthy folk. This means that all one has to do to avoid being in the destitute minority is to be in the 25th percentile or better. My friend R already does this handily. However, R is not rich. He's not a billionaire (unless he's not telling me something!), and so he's not going to be unaffected by a Great Depression. In other words, while more wealth is always better, it's unlikely that having the upper tax brackets pay 5-10% more in tax is really going to save R much grief should a depression occur.

Second, no Democrats are proposing taxing the wealthy at much more than the rates we had under Bill Clinton. At those tax rates, not only do wealthy people survive, they thrive. The wealthy Republicans who whined about taxes under Clinton are just plain greedy. They were getting incredibly rich, but somehow it wasn't enough for them.

Third, the strategy my friend is advocating is pessimistic. Rather than go for a strategy that is likely to make the nation solvent and better able to prevent a depression, my friend wants to apply a strategy that will make a depression far more likely to occur. Indeed, some analysts have said that the disparity in wealth between the haves and the have-nots was a primary cause of the Great Depression. The wealth gap today is getting far greater (far worse) because of years of Republican rule. I cannot imagine how the expectation value of R's wealth improves by his paying a few percent lower taxes at the expense of national solvency.

So I think R's strategy is wrong for me, wrong for the nation, and wrong for R himself. It is better for R to prevent a depression through sensible policy than it is to encourage a depression in the hopes that the depression-encouraging strategy will leave R with a little more padding when the depression comes.

In other words, I'm all for selfish strategies, but I think that selfish long-sighted economic strategies are better than selfish short-sighted ones.

Friday, February 29, 2008

Specious Explanations

There are two kinds of bad explanations I've blogged about in the past.

The first kind is the restatement of data. If we look at the data points on a graph, and our explanation is equivalent to drawing dots over the data points, then we're not explaining the points, merely restating them.

The second kind of specious explanation is the reference to an explanation we do not have. "God" is an example of this. In theory, if God were visible and we knew the mind of God and had his omniscience, God's actions would be explained by his choosing the best solution in each case. However, we don't have any of those things, and so God refers to an explanation we don't have.

In both cases, the way to avoid the trap is to have in one's hands a predictive model. It may not actually be the correct model, but at least it is explanatory. In the first case, points on the graph are explained by a particular curve (or a limited class of curves) passing through those data points. In the second case, God is only explanatory when we know what actions were God's, why he acted as he did, and what his actions are likely to be in the future.

In describing these specious cases to a friend, it occurred to me that these two varieties of fallacy are two sides of the same coin.

Suppose that we have a graph with data points on it, and we propose that they are explained by a curve passing through the points. So far so good. Under normal circumstances, we would fit N data points to an Mth-order polynomial, where M < N. That way, we can fit the polynomial to the data and make a prediction (by interpolation or extrapolation).
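
Here is a minimal sketch of that kind of fit and prediction in Python, using invented data (none of it from the post):

import numpy as np

# Fit N data points with an Mth-order polynomial, M < N, then predict by
# extrapolation. The data below are made up purely for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])   # N = 5 points, roughly quadratic
coeffs = np.polyfit(x, y, deg=2)            # M = 2 < N: the curve is constrained by the data
print(np.polyval(coeffs, 5.0))              # extrapolate to x = 5: a genuine, checkable prediction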

However, if I only have two data points and I propose to fit them to a 2nd-order (quadratic) polynomial, which requires three parameters, then my theory is under-determined. I need another data point to know what the polynomial looks like. Nonetheless, this is acceptable as an explanation because it makes a commitment: measure one more data point and the polynomial is fully determined, and from there it makes predictions by interpolation or extrapolation.

Now, if we were to describe the space of all possible explanations, what would it look like? It would look like an unknown-order polynomial. For such a polynomial, we have no idea how many data points need to be accumulated before a predictive pattern emerges. Moreover, it is impossible to make any predictions from the unknown-order polynomial. A Kth-order polynomial (where K is finite and specified) would make a definite prediction, but an unknown-order polynomial can't do that much.
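
By contrast, here is a sketch of the restatement case, where the polynomial's order is allowed to grow to match the data (again with invented numbers):

import numpy as np

# With degree = N - 1, the polynomial passes through every point exactly.
# It restates the data rather than compressing it: the "fit" just redraws the dots.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])
exact = np.polyfit(x, y, deg=len(x) - 1)    # deg = 4 for 5 points: zero residual
print(np.polyval(exact, x) - y)             # ~0 everywhere: nothing beyond the data is risked

And if the order is never fixed in advance, any new data point can be absorbed by simply raising the degree again, which is why an unknown-order polynomial never risks a real prediction.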

Hence, the first variety of specious explanation, the restatement, is equivalent to the second variety, a reference to an explanation we don't have.

As a clarification, I am talking about potential explanations here: explanatory power. The explanatory power of a theory I do not yet have is zero! I'm not just saying that "God" is an unconfirmed theory; I'm saying that it's a non-explanatory theory (if it even merits the title "theory").

To add insult to injury, supernaturalists will argue that predictions about supernatural causes are not just an unknown distance away, but are fundamentally impossible. They don't merely invoke an unknown-order polynomial, but an infinite-order one.