Thursday, November 23, 2006

Emergence and Reductionism

Stuart Kauffman has written a piece on emergence and reductionism for the Edge eZine. Edge is a very cool magazine, and I recommend picking up a free email subscription if you don't have one.

Kauffman claims that reductionism has run out of steam, and suggests that emergence provides the answers that reductionism cannot. Kauffman cites three examples, the first of which is the origin of life.
Clearly none of the theories above is adequate. But one gets the firm sense that science is moving in on possible routes to the origin of life on earth. If some combination of the metabolism, polymer autocatalysis and lipid first view can be formulated and tested in a new "Systems Chemistry", we may find the answers we seek.

Suppose we do. It will be a scientific triumph of course. But if such self reproducing and, via heritable variations, evolving systems are formed, are they ontologically emergent with respect to physics? I believe the answer is yes.
Kauffman backs up his claim by arguing that natural selection can run on multiple physical platforms, as long as those platforms are self-reproducing and have heritable variations. Intuitively, he seems to be on to something. The problem is that if we take emergence to mean the emergence of higher-level properties from lower-level ones, then metals are an ontologically emergent feature of atomic physics. And orbits are an ontologically emergent feature of gravitational attraction (orbits can also occur under electromagnetic attraction).
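
His point about platforms is easy to make concrete in computational terms. Here is a minimal Python sketch (mine, not Kauffman's; all the names are invented for illustration) of a selection loop that never mentions its substrate: anything that supplies a fitness measure and copy-with-variation will run it.

    import random

    def select(population, fitness, mutate, generations=100):
        # Score the population, keep the fitter half, and refill it with
        # mutated copies. The loop is indifferent to what individuals are.
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[:len(population) // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in survivors]
        return population

    # One possible "platform": bit strings scored by their count of ones.
    def ones(bits):
        return sum(bits)

    def flip_one(bits):
        i = random.randrange(len(bits))
        return bits[:i] + [1 - bits[i]] + bits[i + 1:]

    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
    print(ones(select(pop, ones, flip_one)[0]))  # approaches 16

Swap in molecules, organisms, or machine code and the same loop runs unchanged, which restates the multiple-platform observation.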

Kauffman tries to distinguish emergence from reductionist pictures by asking whether the emergent feature could, even in principle, be shown to emerge algorithmically from low-level physics. He says:
Note that while the physicist might deduce that a specific set of molecules was self reproducing, and had heritable variations and instantiated natural selection, one cannot deduce natural selection from the specific physics of any specific case(s), or even this universe, alone.
I find this rather baffling. Does Kauffman mean that you can't see evolution in a single organism? Does he mean that a molecular model that shows evolutionary processes is currently beyond tractability? If we apply this rule to chemistry, we will find that water does not reduce to H2O, because molecular models of fountains and waterfalls are presently intractable. I can't imagine a definition of emergence that Kauffman could be using that wouldn't dispel the notion of reduction altogether.

Kauffman's second example is agency. Of meaning and value for agents he says:
They too are ontologically emergent. We have a natural place for value in a world of fact, for the world is not just fact: agents act on the world and actions are not just facts, for the action itself is a subset of the causal consequences of what occurs during an act, and that relevant subset cannot be deduced from physics.
Kauffman really ought to add the word "today" to the end of that sentence. This is a tired old poem about how meaning and value seem like more than facts, so they can't also be made of facts. Sort of like the way diamonds seem like so much more than charcoal, so diamonds and charcoal can't be the same stuff.

Indeed, the diamond example illustrates just how often reductionism is misunderstood. Diamonds are not charcoal, and reductionism doesn't say they are. Reductionism is the idea that diamonds and charcoal are different forms or arrangements of the same component, carbon. Failure to appreciate this leads to what Daniel Dennett calls greedy reductionism - the idea that reductionism equates different configurations of the low-level components. Greedy reductionists hold that since dishwashers and paintings are the "same stuff," there is no such property as 'dishwasherness' or 'paintingness', and that it is irrational to treat them any differently. Clearly, this is not the case. We don't create galleries of fine dishwashers, nor do we dry our dishes with masterworks, because the utility of each class of object is different. That is, stuff has different value to us humans depending on its configuration.

On the issue of consciousness, Kauffman argues that the mind cannot be a machine if it does not use algorithms. Devising a fairly mundane mechanical arrangement to prevent his computer from being upset by an unfortunate cable pull, Kauffman says:
So I invented a solution. I jammed the cord into one of the cracks and pulled it tight so that my family would not be able to pull the computer off the table. Now it seems to me that there is no way to turn this Herculian mental performance into an algorithm. How would one bound the features of the situation finitely? How would one even list the features of the table in a denumerably infinite list? One cannot. Thus it seems to me that no algorithm was performed. As a broader case, we are all familiar with struggling to formulate a problem. Do you remotely think that your struggle is an effective "mechanical" or algorithmic procedure? I do not.
There are several misconceptions captured in this single paragraph. First of all, we don't need an exhaustive definition of "table" in order to process information about a table. We only need a description with as much precision as the task at hand requires. We don't care whether tables are secretly alive, or are naturally occurring plant formations. All we care about is their local utility. Yes, we can examine tables in seemingly endless detail, but that detail isn't relevant to the solution we're talking about.
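
To make the point concrete, here is a deliberately crude sketch (my illustration; the types and measurements are invented) in which a handful of task-relevant features of the table suffices, with no denumerably infinite feature list anywhere in sight.

    from dataclasses import dataclass

    @dataclass
    class Table:
        has_crack: bool        # is there a crack to jam the cord into?
        crack_width_mm: float  # how wide is it?

    @dataclass
    class Cord:
        diameter_mm: float

    def cord_stays_put(table: Table, cord: Cord) -> bool:
        # A cord jammed into a crack narrower than itself stays put.
        # No other fact about the table is consulted.
        return table.has_crack and table.crack_width_mm < cord.diameter_mm

    print(cord_stays_put(Table(True, 3.0), Cord(5.0)))  # True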

Second, an algorithm that solves a problem doesn't need to prove its solution deductively. Evolution and natural selection are brilliant examples of this. Genetic programming solves problems without necessarily reaching a single, right answer; the proof of its rightness is in the tasting. The same is true of Kauffman's computer cabling contrivance. Not only is it not a unique solution to the general problem, but there is a seemingly continuous infinity of ways he could have positioned the cable at the molecular level. Perhaps Kauffman is saying that statistical algorithms aren't algorithms at all, or that the presence of statistical algorithms renders reduction invalid. Yet, again, such a claim would invalidate all reductionist claims.
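
For the flavor of it, here is a tiny stochastic hill climber (a hedged illustration of my own, not anything from Kauffman's piece) that produces a good answer with no deduction, and no proof, that it is the best one. We keep the answer because it tests well.

    import random
    from math import sin

    def bumpy(x):
        # An objective with many local maxima and no obvious closed form.
        return sin(x) + 0.3 * sin(7 * x) - 0.01 * x * x

    def hill_climb(restarts=50, steps=500):
        best_x, best_y = 0.0, bumpy(0.0)
        for _ in range(restarts):        # random restarts
            x = random.uniform(-10, 10)
            for _ in range(steps):       # local stochastic search
                candidate = x + random.gauss(0, 0.1)
                if bumpy(candidate) > bumpy(x):
                    x = candidate
            if bumpy(x) > best_y:
                best_x, best_y = x, bumpy(x)
        return best_x, best_y

    print(hill_climb())  # a good solution, not a provably unique one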

Reductionism
All of this means we need to agree on definitions of what constitutes reduction and what doesn't. After all, we might find we agree once we synchronize our terminology.

Steven Weinberg's definition is that, in reductionism, "explanatory arrows always point downward." What does this mean? Well, an explanation requires a predictive model. The model needs to contain a number of components that doesn't exceed the number of observations we can possibly make. (If it were to do so, it would degenerate into a restatement of observations as they happen.) Inevitably, a model will propose that there is a limited number of components or component families whose properties predict (and thereby explain) the observation. In this way, we learn that high-level observations can be explained in terms of lower-level entities that ought to have observable effects.
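
A statistician would call this compression. Here is a toy example (mine, not Weinberg's) in which two fitted parameters stand in for a hundred noisy observations; a "model" with one component per observation would merely restate the data and explain nothing.

    import random

    # One hundred observations generated by a hidden law plus noise.
    xs = [i / 10 for i in range(100)]
    ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

    # Closed-form least squares: two components summarize all of them.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx

    print(slope, intercept)  # close to the hidden 2.0 and 1.0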

Emergence
Is emergence the opposite of reductionism? Not necessarily. Diamond emerges from carbon, but that's not a counterexample to reductionism. As Kauffman suggests, there's also the notion of epistemological emergence. Waterfalls are emergent, but we don't doubt that waterfalls are explained by, and reduce to, oxygen dihydride. We simply think that the computations necessary to simulate a waterfall are beyond our reach.

Kauffman also speaks of ontological emergence - the idea that emergent stuff is not reducible to other stuff, not even in principle. Thus, the ontological emergence of agency would mean that we are prevented, in principle, from constructing a predictive component model that would produce agency.

One of my problems with ontological emergence is that we don't get an explanatory arrow at all. If agency isn't explained by the stuff that appears to be necessary for it, then what explains agency? Are we to believe that agency is predicted by what it produces? That decisions get made, and decisions need agency, and that therefore decisions are more fundamental than the agency that produced them?

This whole picture is quite bizarre. We are presented with some high-level concepts like life, agency, and consciousness, each of which is defined by its temporal function, i.e., its ability to do a certain kind of work in transforming a system at time zero into a new system at time t. We're then asked to accept that life, agency, and consciousness are to be explained by a deeper need for things to grow, be decided, or be aware. As if a fundamental law of awareness predicts that there should be a mechanism by which things may be aware. This is very poetic, but does anyone really find it explanatory?

Think about it. Is my agency explained by my need to use that agency to decide what to eat for breakfast? Is my consciousness caused by my future need to be self-aware? Are present and developing faculties to be explained by their future function? The only thing I see emerging here is the delusion that present observations are explained merely by their future (as yet unknown) consequences. The emergent physics that makes me choose to eat Shredded Wheat for breakfast is explained by my resultant choice to eat it.

Conclusion
Kauffman's article expresses his discontent with reductionism, but it does little more. Most importantly, it fails to establish any rigorous definition of emergence, or to show how emergence delivers explanatory power. Just how does the explanatory arrow point from the future to the present without being either nonsensical or trivial?
