Friday, November 24, 2006

The Real Force Behind the Mass Murders of History

A friend suggested I comment on a recent column by Dinesh D'Souza claiming that atheism, rather than religion, is responsible for the greatest mass murders of history. These sorts of accusations pop up now and again, and they're a sign that the accuser is either playing games or has a very limited understanding of the issues.

Just to put things in perspective for those who have never heard of D'Souza, he is one of the neoconservatives who thought invading Iraq was a pretty cool idea back in 2003. He also recently thought that a democratic Iraq would be a beacon of inspiration to other nations in the region. This isn't directly relevant to D'Souza's column, but it does establish the level of wingnuttery we're dealing with here.

At the conclusion of D'Souza's column about the evils of atheism, he writes:
It's time to abandon the mindlessly repeated mantra that religious belief has been the greatest source of human conflict and violence.
Now, hang on to your butts... this statement isn't completely untrue. It's just misleading. The real issue isn't a connection between totalitarianism and religious belief as such. The problem is with organized religion.

There are two kinds of dictatorships: mandated and coerced.

Coerced dictatorships emerge by overpowering the people. For instance, the Russian Civil War was lost by the liberals, leaving a radical, authoritarian, fear-mongering cabal in power. The people made their stand and were defeated militarily. In the decades that followed, millions would die at the hands of a paranoid police state. In such cases, it makes little difference whether or not the people are critical thinkers and humanitarians.

The claim that religious affiliation of the dictator would have prevented the holocausts of the 20th century is utterly preposterous. Dictators are not nice guys, and mass murder goes with the territory. That's how they get to be dictators.

When they're theistic, the dictators see themselves as God's representative on Earth, and they'll see their self-preservation as a holy cause.

Though Hitler's actions were motivated neither by Christianity nor atheism, Hitler was a Christian. The evidence is as plain as day. I quite expect that if you had asked Hitler whether God approved of his actions, he would have answered in the affirmative. It's perfectly natural when you think about it. We all create God in our own image. Hitler's propaganda chief, Joseph Goebbels, put it this way:
The war we are fighting until victory or the bitter end is in its deepest sense a war between Christ and Marx.
Did Saddam think Allah was on his side? I quite expect so.

So, religion won't defend us by giving us better dictators (although it might be more likely to give us less technologically sophisticated ones).

But what of the dictatorships that emerge with the consent or mandate of the people? What could compel people to willingly sacrifice their civil rights and participate in dark crimes of the state?

Fear and mob mentality. When fear strikes, people naturally seek the security of their tribe. When this happens, people naturally suspect any dissenter of treachery. The needs of the group outweigh the needs of any individual, be they man, woman or child.

Hitler was a power-hungry maniac who found something that worked: social manipulation through propaganda. The German people were encouraged to think (or, rather, not think) like a mob and be proud Christians in the process. Dissent was punished. Fear, dogma, propaganda, and ideology overruled the people's humanity, critical thinking and reason. Hermann Göring, the infamous Nazi, explained why it worked:
Voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is to tell them they are being attacked, and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same in any country.
And he might be right. Perhaps, it is in the nature of every state, even the democratic ones, to be vulnerable in this way.

Then again, maybe there's a hope that our citizenry might one day be more than faint-hearted lemmings. Maybe they will have the courage to put their fear aside, stick to their principles of rationality, and question what they are told. Such courage would be no certain defense against totalitarianism, but it would protect us from mandated dictatorship. From the next Hitler.

Will religion help or hurt?

In the Spring 2003 issue of Free Inquiry, Dr. Lawrence Britt cataloged the identifying traits of fascist governments. A few passages stand out:
  • Fascist regimes tend to make constant use of patriotic mottoes, slogans, symbols, songs, and other paraphernalia. Flags are seen everywhere, as are flag symbols on clothing and in public displays.
  • The people are rallied into a unifying patriotic frenzy over the need to eliminate a perceived common threat or foe: racial, ethnic or religious minorities; liberals; communists; socialists, terrorists, etc.
  • The governments of fascist nations tend to be almost exclusively male-dominated. Under fascist regimes, traditional gender roles are made more rigid. Opposition to abortion is high, as is homophobia and anti-gay legislation and national policy.
  • Fascist nations tend to promote and tolerate open hostility to higher education, and academia. It is not uncommon for professors and other academics to be censored or even arrested. Free expression in the arts is openly attacked, and governments often refuse to fund the arts.
One cannot miss the parallels between fascism and organized religion. Just substitute tribalism for nationalism. The more reactionary the religion, the more fascist it looks.

The central mission of religious institutions is indoctrination to dogma. Their aim is to provide a ready tribe, and to condemn dissenting views. Religions are monuments to the ideal that there are unquestionable moral authorities, and that systems based on blind obedience are not just to be tolerated but revered. Indeed, most churches of organized religion claim that a dictator runs the universe for his pleasure and our pain.

So, how exactly is a microcosm of fascism going to teach us to eschew fascism? Of course, it will do no such thing.

Could religion's authoritarian influences be neutralized by their espousal of humanism and non-violence? This is wishful thinking. Religions actively support our wars (whether just or not), and right-wing religious groups have no problem with torture of prisoners (after all, the prisoners are obviously terrorists, or they would never have been arrested). The most prominent American religions always seek punishment, including the death penalty, even when alternatives like forgiveness might result in better social outcomes.

These are not the values of liberal democracy. They are the values of totalitarian states, like Iran. Democracy has only advanced by building a wall between church and state.

Thus, contrary to D'Souza's conclusion, most organized religions groom their flocks for dictators, whether those tyrants be atheist or religious.

Thursday, November 23, 2006

Emergence and Reductionism

Stuart Kauffman has written a piece on emergence and reductionism for the Edge eZine. Edge is a very cool magazine, and I recommend picking up a free email subscription if you don't have one.

Kauffman claims that reductionism is out of steam, and suggests that emergence provides the answers that reductionism cannot. Kauffman cites three examples, the first of which is the origin of life.
Clearly none of the theories above is adequate. But one gets the firm sense that science is moving in on possible routes to the origin of life on earth. If some combination of the metabolism, polymer autocatalysis and lipid first view can be formulated and tested in a new "Systems Chemistry", we may find the answers we seek.

Suppose we do. It will be a scientific triumph of course. But if such self reproducing and, via heritable variations, evolving systems are formed, are they ontologically emergent with respect to physics? I believe the answer is yes.
Kauffman backs up his claim by arguing that natural selection can run on multiple physical platforms, as long as those platforms are self-reproducing and have heritable variations. Intuitively, he seems to be on to something. The problem is that if we take emergence to mean the emergence of higher-level properties from lower-level ones, then metals are an ontologically emergent feature of atomic physics. And orbits are an emergent feature of gravitational attraction (orbits can also occur under electromagnetic attraction).

Kauffman tries to distinguish emergence from reductionistic pictures by looking to see if the emergent feature can possibly be shown to emerge algorithmically from low-level physics. He says:
Note that while the physicist might deduce that a specific set of molecules was self reproducing, and had heritable variations and instantiated natural selection, one cannot deduce natural selection from the specific physics of any specific case(s), or even this universe, alone.
I find this rather baffling. Does Kauffman mean that you can't see evolution in a single organism? Does he mean that a molecular model that shows evolutionary processes is currently beyond tractability? If we apply this rule to chemistry, we will find that water does not reduce to H2O because molecular models of fountains and waterfalls are presently intractable. I can't imagine a definition of emergence that Kauffman could be using that wouldn't dispel the notion of reduction altogether.

Kauffman's second example is agency. Of meaning and value for agents he says:
They too are ontologically emergent. We have a natural place for value in a world of fact, for the world is not just fact: agents act on the world and actions are not just facts, for the action itself is a subset of the causal consequences of what occurs during an act, and that relevant subset cannot be deduced from physics.
Kauffman really ought to add the word "today" to the end of that sentence. This is a tired old poem about how meaning and value seem like more than facts, so they can't also be made of facts. Sort of like the way diamonds seem like so much more than charcoal, so diamonds and charcoal can't be the same stuff.

Indeed, the diamond example illustrates just how reductionism is often misunderstood. Diamonds are not charcoal. That's not what reductionism says. Reductionism is the idea that diamonds and charcoal are different forms or arrangements of the same component, carbon. Failure to appreciate this leads to what Daniel Dennett calls greedy reductionism - the idea that reductionism equates different configurations of the low-level components. Greedy reductionists hold that since dishwashers and paintings are the "same stuff," there is no such property as 'dishwasherness' or 'paintingness', and that it is irrational to treat them any differently. Clearly, this is not the case. We don't create galleries of fine dishwashers, nor do we dry our dishes with masterworks because the utility of each class of object is different. That is, stuff has different value to us humans depending on its configuration.

On the issue of consciousness, Kauffman argues that the mind cannot be a machine if it does not use algorithms. Devising a fairly mundane mechanical arrangement to prevent his computer from being upset by an unfortunate cable pull, Kauffman says:
So I invented a solution. I jammed the cord into one of the cracks and pulled it tight so that my family would not be able to pull the computer off the table. Now it seems to me that there is no way to turn this Herculean mental performance into an algorithm. How would one bound the features of the situation finitely? How would one even list the features of the table in a denumerably infinite list? One cannot. Thus it seems to me that no algorithm was performed. As a broader case, we are all familiar with struggling to formulate a problem. Do you remotely think that your struggle is an effective "mechanical" or algorithmic procedure? I do not.
There are several misconceptions captured in this single paragraph. First of all, we don't need an exhaustive definition of table in order to process information about a table. We only need a description that has as much precision as we need for the task at hand. We don't care whether tables are secretly alive, or are naturally occurring plant formations. All we care about is their local utility. Yes, we can examine tables in seemingly endless detail, but that's not really relevant to the solution we're talking about.

Second, an algorithm that solves a problem doesn't need to prove its solution deductively. Evolution and natural selection are brilliant examples of this. Genetic programming solves problems without necessarily reaching a single, right answer. The proof of its rightness is in the tasting. The same is true of Kauffman's computer cabling contrivance. Not only is it not a unique solution to the general problem, but there is a seemingly infinite continuum of ways he could have positioned the cable at a molecular level. We might think that Kauffman is saying that statistical algorithms aren't algorithms at all, or that the presence of statistical algorithms renders reduction invalid. Yet, again, such a claim would invalidate all reductionist claims.
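To make the point about statistical, non-deductive problem-solving concrete, here is a minimal sketch of a toy genetic algorithm. Everything in it (the target, the fitness function, the parameters) is invented for illustration and has nothing to do with Kauffman's own examples; it simply shows selection over random variation homing in on a good answer without ever deducing that the answer is unique or optimal.

# Toy genetic algorithm: evolve a bit string toward an arbitrary target.
# Nothing here is deduced; fitter variants are simply kept and mutated.
import random

TARGET = [1] * 20                      # arbitrary toy goal: all ones

def fitness(genome):
    # Count matching bits; higher is better.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]        # keep the fitter half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print(fitness(max(population, key=fitness)))   # typically reaches 20 out of 20

The run never proves anything; it just keeps whatever happens to work, which is the sense in which the proof of its rightness is in the tasting.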

Reductionism
All of this means we need to agree on definitions of what constitutes reduction and what doesn't. After all, we might find we agree once we synchronize our terminology.

Steven Weinberg's definition is that, in reductionism, "explanatory arrows always point downward." What does this mean? Well, an explanation requires a predictive model. The model needs to contain a number of components that doesn't exceed the possible number of observations we can make. (If it were to do so, it would degenerate into a restatement of observations as they happen.) Inevitably, a model will propose that there are a limited number of components or component families that have properties that predict (and thereby explain) the observation. In this way, we learn that high-level observations can be explained in terms of lower-level entities that ought to have observable effects.

Emergence
Is emergence the opposite of reductionism? Not necessarily. Diamond emerges from carbon, but that's not a counterexample to reductionism. As Kauffman suggests, there's also the notion of epistemological emergence. Waterfalls are emergent, but we don't doubt that waterfalls are explained by, and reduce to, oxygen dihydride. We simply think that the computations necessary to simulate a waterfall are beyond our reach.

Kauffman also speaks of ontological emergence - the idea that emergent stuff is not reducible to other stuff, not even in principle. Thus, an ontological emergence of agency would mean that, in principle, we are prevented from constructing a predictive component model that would produce agency.

One of my problems with ontological emergence is that we don't get an explanatory arrow at all. If agency isn't explained by the stuff that appears to be necessary for it, then what explains agency? Are we to believe that agency is predicted by what it produces? That decisions get made, and decisions need agency, therefore, decisions are more fundamental than the agency which produced them?

This whole picture is quite bizarre. We are presented with some high-level concepts like life, agency, and consciousness, each of which is defined by its temporal function, i.e., its ability to do a certain kind of work in transforming a system at time zero to a new system at time t. We're then asked to accept that life, agency and consciousness are to be explained by a deeper need for things to grow, be decided, or be aware. As if a fundamental law of awareness predicts that there should be a mechanism by which things may be aware. This is very poetic, but does anyone really find this explanatory?

Think about it. Is my agency explained by my need to use that agency to decide what to eat for breakfast? Is my consciousness caused by my future need to be self-aware? Are present and developing faculties to be explained by their future function? The only thing I see emerging here is the delusion that present observations are explained merely by their future (as yet unknown) consequences. The emergent physics that makes me choose to eat Shredded Wheat for breakfast is explained by my resultant choice of eating it.

Conclusion
Kauffman's article expresses his discontent with reductionism, but it doesn't do anything more. Most importantly, it fails to establish any rigorous definition of emergence or how emergence delivers explanatory power. Just how does the explanatory arrow point from the future to the present without being either nonsensical or a triviality?

Thursday, November 16, 2006

Predictive Explanations:
Are they necessary or sufficient?

In my debate about explanation at Thinking Christian, someone offered this link in criticism of my assertion that explanation requires prediction.

Carl Hempel developed a theory (the deductive-nomological, or DN model) in which the laws and boundary conditions of the explanans (the thing that does the explaining) must deductively imply the explanandum (the thing being explained). A slightly modified version of this is the inductive-statistical (IS) model which uses statistical generalizations instead of universal generalizations.

My model of explanation is very similar to the DN/IS model. The main difference is that, unlike the IS model, mine doesn't require that the regularity assert a high probability statistical generalization. Whereas Hempel's model is based on a parallel with logical deduction, my model is based on differentiation from trivial restatement of the explanandum.

I think there's an intuitive reason why Hempel's requirement that the probability be high can be relaxed. If you're going to claim probability of an outcome given certain initial conditions, it only makes sense that such a claim would be tested statistically over a sample size greater than one. This means that we can devise an experiment that statistically amplifies the variation of the probability distribution away from uniformity. Even if a law says that a child is 1% more likely to inherit a genetic ailment given the presence of the disease in a sibling, that slight variation can be amplified by doing a statistical survey of a large number of families. Not only can small variations in probability distributions be magnified by larger sample sizes, but the mere statement of a probability distribution implicitly asserts that such a test is anticipated. This leads naturally into Bas van Fraassen's constructive empiricism because we can take either a frequentist approach or a Bayesian approach to statements of probability.
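A rough simulation shows what I mean by amplification (the rates and sample sizes are invented for illustration): a 1% absolute difference in risk is lost in noise for a handful of cases, but it shows up clearly once the survey is large.

# Invented rates: a 1% absolute difference in risk is invisible at small
# sample sizes but stands out clearly at large ones.
import random

def observed_rate(n, p):
    # Fraction of "affected" individuals in a sample of size n with risk p.
    return sum(random.random() < p for _ in range(n)) / n

baseline, elevated = 0.10, 0.11        # hypothetical 1% absolute difference

for n in (100, 10_000, 1_000_000):
    diff = observed_rate(n, elevated) - observed_rate(n, baseline)
    print(f"n={n:>9,}  observed difference={diff:+.4f}")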

After Hempel proposed the DN/IS model, several criticisms were brought up. It was claimed that there were some explanations that were not predictive (that DN/IS was not necessary), and that there were some theories that met the DN/IS conditions but that were not explanatory (that DN was not sufficient).

I have yet to find any potent instance of either type of criticism.

Let's look at the necessity criticism first. One example is provided here:
Since smoking two packs of cigarettes a day for 40 years does not actually make it probable that a person will contract lung cancer, it follows from Hempel's theory that a statistical law about smoking will not be involved in an IS explanation of the occurrence of lung cancer.
This claim stems from Hempel's original IS constraint that the laws and boundary conditions of the model must predict the explanandum with high probability. Since my model doesn't require large variations in probability distributions, the criticism falls flat.

The second criticism is that of insufficiency. These arguments purport to show valid DN/IS explanations that are not truly explanatory because they have failed to capture the causality involved. Here's Wesley Salmon's classic:
C1: Butch takes birth control pills.
C2: Butch is a man.
L1: No man who takes birth control pills becomes pregnant.
____________________________________
E: Butch has not become pregnant.

The birth control example presumes that the person stating the problem has observed pregnancy in humans, but has not noticed (and thus does not know) that only women have ever been pregnant. He also notices that a particular man is taking birth control pills and the man has not become pregnant.

In my opinion, the explanation that the pills prevent him from becoming pregnant is a perfectly valid candidate for an explanation, but it's simply not the best (or the correct) explanation.

Salmon's example only appears to counter Hempel because we intuitively apply our background knowledge that men don't get pregnant. That's another law L2 that was not included in L1. If you assume L2, then L1 is superfluous. The point is that Hempel is perfectly correct if you assume the theorist doesn't know L2, and doesn't include L2 in his explanation.

Another set of arguments tries to show that symmetry of cause is a problem for Hempel. Normally, we would explain the length of a shadow cast by a flagpole in terms of the elevation of the Sun and the height of the flagpole. Suppose instead we try to explain the height of a flagpole in terms of the length of the shadow it casts and the elevation of the Sun. The formulas can be inverted and we can express the law fixing the height as a function of the other two variables.
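For concreteness, the trigonometry behind the example really is symmetric (this uses the standard relation between pole, shadow, and the Sun's elevation; the specific numbers are arbitrary):

# Standard flagpole trigonometry: the same relation can be read in either
# direction, which is what makes the "inverted" explanation formally valid.
import math

def shadow_length(height, sun_elevation_deg):
    # "Forward" direction: shadow explained by pole height and Sun elevation.
    return height / math.tan(math.radians(sun_elevation_deg))

def pole_height(shadow, sun_elevation_deg):
    # Inverted direction: height "explained" by shadow and Sun elevation.
    return shadow * math.tan(math.radians(sun_elevation_deg))

s = shadow_length(10.0, 30.0)           # a 10 m pole with the Sun 30 degrees up
print(round(s, 2))                      # ~17.32 m of shadow
print(round(pole_height(s, 30.0), 2))   # recovers the 10.0 m height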

Given our background knowledge that the shadow is caused by the other two factors, we are intuitively aware that the inverted explanation is kooky, despite the fact that it meets the DN criteria. But what would happen if we didn't have that background knowledge?

The same thing will occur in any investigation in which the order of causality is ambiguous (does A cause B, or B cause A or do they have a common third cause?). If, in reality, we have yet to discover that A actually causes B, this unknown fact does not mean that the theory that B causes A does not qualify as a possible explanation for the observation. It simply means that the theory that B causes A is the wrong explanation, even though it is explanatory.

On what criteria do we distinguish A causing B from B causing A? There are several. The first is a time dependence such that A exists before B or vice versa. In the case of the flagpole, we can detect that the photons travel in a time-dependent way from the Sun to the flagpole, establishing the shadow as the caused factor.

The second is that, if we detect a relation among three variables, A, B and C, and we learn that A is fixed, then we tend to reject A as being caused by the other factors. In the intuitively kooky explanation, once we find that the explanation always predicts the height of the flagpole to be a constant, we would prefer by convention to argue that the height is not the effect, but a cause. In the absence of seeing one factor exist before the others, the more constant a factor, the earlier we prefer to place it in a causal chain.

So, again, what we have here is a failure to isolate the experiment from background knowledge. If we only ever saw the Sun at one elevation, the flagpole at one height, and the shadow at one length, we would not have enough information to divine an order of causality. In that scenario, the theorist will be quite justified in explaining any one factor in terms of the other two. He only prefers one explanation to the others once he makes a deeper (far deeper) investigation.

Prediction and Explanation:
A Recapitulation

The following is based on a comment I wrote at Thinking Christian.
Suppose we make a set of observations, O1, O2, O3, ...On. Each observation could be physical or mental, i.e., they are experiences of any kind. We devise consistent theories, {Ti}, that claim to account for the {Oi}. There are trivial and non-explanatory theories among them. One says this:
T1: You will observe O1, O2, O3,... On.

This theory is trivial. If we observe some new Oj, we just amend the theory to:
T1: You will observe O1, O2, O3,... On and Oj.

Why can we do this? Because T1 is never inconsistent with any observation Oj we might possibly make.

T1 is not explanatory of the {Oi}, not by my definition, and presumably not by yours. If T1 were explanatory, then every collection of observations or experiences would be trivially self-explanatory.

So, how do we resolve this minor problem? What is it about a theory that makes it explanatory?

One guess is that explanations serve to compress observations. A theory can have the effect of being a short-hand for many observations. For example, instead of maintaining a long list of the timed locations of a billiard ball in motion, we can propose that the location of the billiard ball is a fixed function of time and the ball's initial conditions. That is, we can propose that there are relatively fixed laws of billiard ball motion that substitute for a long list of data points. This is precisely my analogy with fitting curves to points on a graph. Fitting a curve is not a restatement of the data because the curve predicts interpolations and extrapolations. Note also that there is a difference between, say, noticing that the data points fall on a straight line and claiming that they fall on the line for a reason. The first is an observation, and the latter is a prediction.
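To make the curve-fitting analogy concrete, here's a small sketch using a least-squares line fit (the data points are made up): the fitted line is not just a restatement of the data, because it commits to values at inputs we have not yet observed.

# Fitting a line to invented data: the fit predicts interpolations and
# extrapolations, which a bare list of the observations does not.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0])   # observed inputs (made up)
y = np.array([1.1, 3.0, 4.9, 9.1])   # observed outputs (made up)

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares straight line

print(slope * 3.0 + intercept)    # interpolated prediction at x = 3
print(slope * 10.0 + intercept)   # extrapolated prediction at x = 10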

So, I am claiming that an explanatory theory predicts a subset of {Oi} from part of the remainder of {Oi}. For example, suppose I make these observations:
O1 = 1
O2 = 4
O3 = 9

My theory should predict O3 given O1 and/or O2, or predict O1 in terms of O2 and/or O3, etc. One theory that works here is
Oi = T(i) = i^2

This not only predicts the already observed O1-O3, it also predicts O4, O5, O0, O1.4, and so on. I can't think of any non-trivial theories that don't make predictions. Can you?

Suppose you observe the following:
O2 = 4
O4 = 16

What if we theorize that
Oi = T(i) = i^2, for i = 2 and i = 4 only.

This theory has been carefully tailored not to make a prediction. Is this explanatory? No, it's just like T1. We've just done a trivial coordinate transformation on the data by expressing the {Oi} in terms of the square of a number instead of a direct value. It's a trivial restatement of the data. You might as well say that:
definition: T(i) = Oi

We're not explaining the observations, we're just saying that each successive observation is given by a one-off rule that never applies to future observations. It would be like drawing dots over the data points on the graph so as not to predict anything.

This is why an explanation never escapes making a prediction, for if it didn't, you could re-interpret the so-called explanation as a restatement of the data using a different coordinate system.

Remember, the {Oi} can be any form of experience, including a statistical measurement. This means that our predictions can be of a statistical nature, and that assertions of regularity needn't be large statistical effects. They could be assertions of very minor probability variations.

Sunday, November 12, 2006

Sources of Knowledge

In the course of a debate I've been having at Thinking Christian, a theist produced a link to Alvin Plantinga's critique of Daniel Dennett's Darwin's Dangerous Idea:
Here Dennett seems to assume that if you can't show by reason that a given proposed source of truth is in fact reliable, then it is improper to accept the deliverances of that source. This assumption goes back to the Lockean, Enlightenment claim that, while there could indeed be such a thing as divine revelation, it would be irrational to accept any belief as divinely revealed unless we could give a good argument from reason that it was. But again, why think a thing like that? Take other sources of knowledge: rational intuition, memory, and perception, for example. Can we show by the first two that the third is in fact reliable--that is, without relying in anyway on the deliverances of the third? No, we can't; nor can we show by the first and third that memory is reliable, nor (of course) by perception and memory that rational intuition is. Nor can we give a decent, non-question-begging rational argument that reason itself is indeed reliable. Does it follow that there is something irrational in trusting these alleged sources, in accepting their deliverances? Certainly not. So why insist that it is irrational to accept, say, the Internal Testimony of the Holy Spirit unless we can give a rationally conclusive argument for the conclusion that there is indeed such a thing, and that what it delivers is the truth? Why treat these alleged sources differently? Is there anything but arbitrariness in insisting that any alleged source of truth must justify itself at the bar of rational intuition, perception and memory? Perhaps God has given us several different sources of knowledge about the world, and none of them can be shown to be reliable using only the resources of the others.
My first observation was that Plantinga sees that reason relies on certain unprovable axioms, without which no reasoned conclusion can be reached. This is a good start. Plantinga claims the axioms are logic, memory, and perception, but I suspect these equate to my axioms of logic, regularity, and the axiomatic nature of experience (that experiences need to be explained).

Plantinga then argues that, since we are comfortable accepting these axioms without proof, why not accept additional axioms (e.g., that there are non-rational sources of knowledge)? Maybe "Internal Testimony" (whatever that is supposed to mean), is just an extra-rational assumption, rather than an irrational one.

Alternate Sources of Knowledge
Assuming that knowledge is defined as justified true belief, what does rationality say about sources of knowledge?

If I have some source of knowledge, S, then I am saying that there is some associated test, T_S, I can apply to a proposition, P, to test its truth:
S: Truth(P) = T_S(P)

I am also saying that there is some (potentially different) form of justification for belief in P:
S: Justification(P) = J_S(P)

In the case of science, justification and test are one. The truth of a scientific belief is fixed by the test of its truth.

If I have multiple sources of knowledge, then I may have multiple definitions of the truth:
S1: Truth(P) = T_S1(P)
S1: Justification(P) = J_S1(P)

S2: Truth(P) = T_S2(P)
S2: Justification(P) = J_S2(P)

At this point, I'm going to assume that S1 is science and S2 is supernaturalism. This means I can write:
S1: Truth(P) = Justification(P) = T_S1(P)

S2: Truth(P) = T_S2(P)
S2: Justification(P) = J_S2(P)

There's no guarantee that a truth from one source of knowledge is a truth in the other. An intuitive personal truth (one tested by asking a person for his opinion) may not be a scientific truth. Thus, in general, the "truth" of a proposition has no fixed meaning when there is no preferred source of knowledge.
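Restating that in toy code (the example propositions and the verdicts of each "source" are invented purely to illustrate the notation): two sources can return different verdicts on the same proposition, so Truth(P) is not well defined until one source is preferred.

# Toy restatement of the notation above; the propositions and verdicts are
# invented for illustration only.
def T_S1(proposition):
    # Pretend scientific test: passes only the claims in this set.
    return proposition in {"the medicine outperformed placebo in trials"}

def T_S2(proposition):
    # Pretend supernatural/intuitive test: passes a different set of claims.
    return proposition in {"the fortune-teller says I'll be rich by 30"}

P = "the fortune-teller says I'll be rich by 30"
print(T_S1(P), T_S2(P))   # False True -> no single truth value for P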

There are three ways to avoid the problem of propositions having no truth values:

1) Assume that there is only one source of knowledge. In this case, most of us would likely choose science.

2) Assume that there are multiple sources of knowledge, but that they should all agree on the truth of any applicable proposition. This means that:
T_S1(P) = T_S2(P)

3) Assume that no single proposition can be evaluated by every source. That is, no truth value revealed by S1 can be revealed by S2, and vice versa. This might be akin to Stephen Jay Gould's Non-overlapping Magisteria.

I'm pretty sure theists would reject (1). I won't entertain the claim that science is not the preferred source of knowledge where available because no one reading this blog could consistently make that claim.

This means that either (2) or (3) is the case.

Overlapping Magisteria
Case (2) is ruled out because supernatural knowledge sources are broadly inconsistent with scientific ones where they overlap. Psychics and miracles are routinely shown to be fraudulent, and supernatural sensation isn't any better than guessing. Thus, we have ample evidence that supernatural knowledge sources fail to give the same truth values as scientific methods, as would be expected in case (2).

Non-Overlapping Magisteria
Case (3) is problematic for the theist because it means that any question that can potentially be settled by predictive means cannot be answered by a supernatural source. Indeed, it means that any phenomenon that one claims to know through supernatural methods must be unknowable through natural methods. I think this is one reason why theists assert that the mind cannot be purely physical, for otherwise, there would be no meaningful truth claims about supernatural souls.

As is well known, history's trash heap is littered with supernatural beliefs that were displaced by scientific truths, and the boundary of the supernatural domain has been in monotonic retreat for centuries. Still, the inductive inference that all supernatural claims are rot isn't deductive proof that they are rot. So, let's suppose that there is a domain, non-overlapping with science, in which supernatural sensation does reveal the truth (at least, most of the time). It is implicit in a knowledge claim (supernatural or otherwise) that the knowledge will be true as well as justified. Can we not then ask science to assess the efficacy of the supernatural knowledge source?

Symbolically, what we're evaluating is this:
S1: Truth(S2 is effective) = T_S1(S2 is effective)

If it were possible for S1 to find S2 to be ineffective, then the space of truths of S2 would overlap (and potentially conflict) with those of S1 because S2 also implicitly asserts that it is effective. This would violate the premise that S1 and S2 don't overlap.

Therefore, S2 cannot give an answer that S1 might later determine to have been wrong. Thus, a fortune-teller cannot tell me that I'll be a millionaire by age 30 because I could use scientific means to know that he was wrong, and the fortune-teller implicitly claims he is right (i.e., he claims that his knowledge is not just supernaturally justified, but true). Unfortunately, this principle also negates all knowledge about future experience derived by S2 because such knowledge could be falsified by S1.

The practical upshot of all this is that S2 is unable to tell me anything about future experience. So why should I care what S2 has to say?

I might care about what S2 says if the execution of the method of S2 is a source of amusement. Is this why TV commercials for psychic hotlines display a "for entertainment only" disclaimer?

Science and Rationality
The beautiful thing about science (apart from the fact that it works) is that it is derived from rationality itself. In assuming that science is a source of knowledge, what am I assuming? I am assuming that I can make inductive inferences from past experience to predict future experience. Can I drop this assumption without destroying rationality itself?

If I assume that past experience is no guide to future experience, why should I assume that a theorem that I have just deductively proven true won't be false before I perform the next step in a proof? I cannot. I must assume that past experience, whether mental or physical, is a guide to future experience. Once I make this assumption for purposes of rationality, science automatically follows.

Conclusion
I have shown that supernatural sources of knowledge are either totally unreliable, or can only tell me about things that are irrelevant to experience. I have shown that science as a source of knowledge follows from the assumed axioms of rationality.

If science is accepted as the primary source of knowledge in any domain, it is the only relevant source of knowledge about experience.

Nov 14 2006 Clarification: Science here refers to methods of deductive and inductive inference. It could be mathematics as easily as it is physics or linguistics.

Saturday, November 11, 2006

Odds and Ends

Newsweek has been publishing intelligent pieces on atheism recently. Today I saw this story on the Beyond Belief conference in La Jolla, California.
It's hard to be a skeptic, that much was clear from the conference. Hard for the astrophysicist Neil deGrasse Tyson, director of the Hayden Planetarium in New York, who described trying to offer up thanks "to the scientists who made this abundance of food possible" at a friend's Thanksgiving dinner, only to be shouted down by demands for a proper grace. Hard for atheist author Sam Harris ("Letter to a Christian Nation") who likes to point out that people today believe in God based on no more evidence than the ancients had for believing in Zeus or Poseidon—with the result that in addition to all the mail he gets from Christians, he's now getting angry letters from pagans who claim he's insulted their beliefs, as well.
Man, I love these guys.

Newsweek also recently published a very short bit about Sam Harris for their BeliefWatch column.

And I love this:

Friday, November 10, 2006

Post Election Post

Overall, the elections went well, even though there were several races I was very disappointed to lose.

We now have to use our victory to best advantage, and that means explaining to Americans just how horrible Bush and the Republicans have been for the last 6 years.

The American people don't like political games. Americans were passive in the face of Watergate, until the official wheels of justice began to turn and serious crimes were found to have been committed. When that happens, the American people back prosecution. That's where House subpoena power comes in. I'm sure there's a lot of shredding taking place in Republican offices in Washington D.C. this week.

Many are uncomfortable with prosecuting Bush, Cheney, Rumsfeld and Miller for war crimes. I am not. I don't see why we ought to protect our own war criminals while we chastise other nations for doing the same. Some will ask whether it is right that Bush et al. be punished while Osama bin Laden goes free. Well, if Bush had caught Osama, this wouldn't be an obstacle, would it?

I think the Dems should tentatively plan to have the articles of impeachment lined up for about 18 months from now. They don't need to dive in and start all the investigations on day one. Shameful revelations of Republican crimes occur about once a month, and it's not perceived as partisan to investigate them as the stories break. Eighteen months from now, thanks to new Republican scandals, we'll probably have a dozen House investigations in full swing, and an independent prosecutor.

By 2008, the bankruptcy of Republican authoritarianism will be plain to see, even to the average American voter.

Saturday, November 04, 2006

Has America Lost Its Luck?

There's a nice column by Michael Hirsh over at Newsweek:
What a glorious couple of centuries it has been, all held together by this great string of luck. "The Lord looks after drunks, children and the U.S.A." went the old saying, and it seemed true. But the thing about luck is that, eventually, you run out of it. Everybody craps out in the end. And that is what has happened to us. As Americans go to the polls Tuesday we must confront the fact that we have become a luckless people, all across the political spectrum.

...

But at a moment in history when we faced the most subtle sort of global threat, when we needed not just a willingness to use military force but a leader of real brilliance—someone who would carefully study a little-understood enemy—we got a man who actually took pride in his lack of studiousness. No surprise: Bush never once presided over a grand-strategy session to divine the nature of Al Qaeda, and he ended up lumping Saddam and every Islamist insurgent and terrorist group with Osama bin Laden. He ensured that a tiny fringe group that had been hounded into Afghanistan with no place left to go—one that could have been wiped out had we focused on the task at hand—would spread worldwide and become a generational Islamist threat.
Not that one could not have predicted what we're seeing now. Maybe the press couldn't believe the news they were supposed to report. They refused to call a dolt a dolt. Without a press bold enough to tell the truth, a lot of Americans went to the polls and elected an anti-intellectual simpleton. They elected a Forrest Gump to lead a nation composed largely of Forrest Gumps who think pure intentions are better than good intentions plus intelligence and expertise.

Hirsh is right, but he's six years too late.

Wednesday, November 01, 2006

Hoochification

Is it wrong for a woman to dress in a sexually arousing manner, given that it might lead to objectification? This has been the topic of discussion (and, sometimes, non-discussion) over at Signs of the Times (here and here).

When I see a woman who displays her sexuality openly, I don't leap to the conclusion that she is a sex object and nothing else. For all I know, she might be a doctor, lawyer, history professor or CEO. Certainly, she is also a human person. To think otherwise would be for me to "objectify" her.

Of course, there is a minority of folks who will objectify her for her fashion choice. There are two reasons why one might think the existence of this minority ought to cause women to suppress their sexuality. First, one might think that the unpleasantness of being objectified by a fool outweighs the pleasantness of a sexual display. I don't think very many believe the scales tip this way. Most would argue that the woman runs her life, not the fools.

A second reason for women to suppress their sexuality in light of foolishness would be to avoid reinforcing the stereotype that a woman who dresses scantily is just an object through and through. Even if open sexuality were a reinforcer of the stereotype, it still wouldn't override a woman's right to live as she sees fit. After all, the problem with the stereotype is that it limits a woman's freedom, so suppressing that freedom is like throwing the baby out with the bath water.

The interesting thing that occurred to me during this debate is that public displays of sexuality ("hoochification") actually do work to overcome the objectifying stereotype. How will objectifiers learn that open sexuality is not mutually exclusive with humanity without observing open sexuality in the presence of humanity?

[If S -> ~H, where S is open sexuality and H is humanity, the falsifying pattern is (S ^ H). The theory cannot be falsified if S is never present.]
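Spelled out as a quick truth table (my own rendering of the bracketed point):

# The stereotype "S implies not-H" is contradicted only by the case where
# S and H are both present; if S never occurs, it can never be falsified.
from itertools import product

for S, H in product([True, False], repeat=2):
    stereotype_holds = (not S) or (not H)   # S -> ~H
    print(f"S={S!s:<5} H={H!s:<5} stereotype holds={stereotype_holds!s:<5} falsifying case={S and H}")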

Indeed, dehoochification retards women's rights. I don't think the objectifying stereotype would vanish if we put women in burqas for 25 years. I suspect the reverse would result, and women who exposed their faces after 25 years of suppression would be seen as nothing but sluts (cf. Iran).

I'm not saying that everyone ought to display their sexuality publicly and at all times. I'm just saying that there's nothing wrong with some people doing so some of the time.