THE PHILOSOPHY OF SCIENCE
In the last chapter we learned that evolutionary science is driven by a sharp bias. In the next chapter we shall examine some of the "scientific" evidence in support of, or inimical to, the theories of creation science and evolution. At this juncture it is therefore necessary to ask, what is science? Only then can we assess whether the bias of evolutionary scientists is significant. And only then can we assess scientific "data" in a meaningful light. In Edwards v. Aguillard the Supreme Court found the Louisiana legislature's requirement to provide a "balanced treatment" of creation and evolutionary science to be a violation of the First Amendment. To reach this conclusion, the Supreme Court had to first determine that creationism is not a science. The Court held to a rigid delineation between "science" and "religion," as if they were incompatible. But why is evolution scientific while creationism is not? Imre Lakatos and Elie Zahar of the London School of Economics address the most fundamental question concerning the philosophy of science. "The central problem of philosophy of science is the problem of normative appraisal of scientific theories; and in particular, the problem of stating universal conditions under which a theory is scientific. * * * [T]he Velikovsky affair revealed that scientists cannot readily articulate standards which are understandable to the layman (or, as my friend Paul Feyerabend reminds me, to themselves.)" In spite of this ineffable enigma, numerous Nobel Laureates were paraded before the Supreme Court in various amicus briefs in Edwards v. Aguillard. In this dog-and-pony show, these showcase scientists were quite certain that creation science was not a science at all. This chapter shall address itself to the question of normative appraisal.
There are essentially two means of appraisal of scientific theories--empirical and methodological. An empirical appraisal of a scientific theory seeks to measure the correlation between the theory and real world data. A methodological appraisal of a theory looks at the development of the theory itself.
Empiricism is generally broken down into three major groups: inductivists, probabilists, and falsificationists. (It will be seen, however, that falsificationism is more properly a methodological rather than an empirical consideration.) Methodological aspects can be broken down variously into simplicism (beauty, economy, and elegance may be considered expressions of simplicism, or perhaps held as distinct groups) and falsificationism (prescience, or the ability of a system to accurately predict future data). When a theory is retroductively reformulated to accommodate new data, each ad hoc reformulation should be weighed as a new theory. If the reformulation is also able to accurately predict new data, it has merit.
For various reasons, scientific philosophers have come to use the Copernican revolution as the classical model for the scientific method. The world view shifted from a Ptolemaic system of a geocentric universe to a Copernican system of a heliocentric universe. Because the Copernican Revolution has become the standard for addressing questions surrounding the philosophy of science, many of the following quotations address themselves to the comparison of these rival theories.
EMPIRICAL CONSIDERATIONS: Inductivism and Probabilism. (How closely does the theory match observable data?)
Inductivism is ostensibly drawn from factual observation. And yet many if not most scientific theories are tormented by renegade data that simply doesn't fit the theory. It is a rare theory indeed that has not the slightest trouble accommodating every bit of stray data that comes across the screen. For example, Gingerich observes that the predictions of Ptolemy's system were oftentimes no more flawed, and occasionally less flawed, than those of Copernicus' system. This should not surprise us, nor should it serve as grounds for junking the Copernican theory, since few theories comport with all observable data. What it does mean, however, is that few theories can truly be said to be generated inductively, since by definition, an inductive theory is one drawn from the data . . . all the data. For example, the standard theory (quark theory) produces a calculation where the probability of an event exceeds one. Obviously this is a mathematical impossibility, unless the same event somehow happens twice. The quark theory is clearly flawed. The real question is how flawed? Strict inductivism would require that the quark theory be discarded on that basis. Yet if every theory were thrown out for such scant reasons, we would have few theories to explain anything. Because it is seldom (if ever) that any theory fits all observable data, "[s]trict inductivism was taken seriously and criticized by many people from Bellarmine to Whewell and was finally demolished by Duhem and Popper."
Probabilism is the theory that picks up where strict inductivism leaves off. The question is no longer "which theory fits the data?" but "which theory fits most of the data?" "According to probabilistic inductivists one theory is better than another if it has a higher probability relative to the total available evidence at the time." For example, a probabilist would say that if the quark theory does a better job than any of its rivals in addressing observable data, it is superior in spite of its flaws. In reality, most of us live our lives as probabilists. We learned almost everything we know of life's fundamentals, from how to walk or drink out of a glass, to how to ask a favor of a friend, by evaluating our probability of success from past efforts.
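The probabilist's appraisal can be sketched in code as nothing more than a scoring exercise. The Python fragment below is a minimal sketch under assumed, purely hypothetical data, rival "theories," and tolerance (none of which come from this chapter): each theory is scored by the fraction of observations it reproduces, and the higher score wins.

# A minimal sketch of probabilistic appraisal. The "theories," observations,
# and tolerance below are hypothetical illustrations, not data from this chapter.
def fraction_explained(theory, observations, tolerance=0.05):
    """Return the fraction of observations the theory reproduces within tolerance."""
    hits = sum(1 for x, y in observations if abs(theory(x) - y) <= tolerance)
    return hits / len(observations)

# Two rival theories of the same (made-up) phenomenon.
theory_a = lambda x: 2.0 * x          # a simple linear law
theory_b = lambda x: 2.0 * x + 0.1    # a rival with a constant offset

observations = [(0.0, 0.02), (1.0, 2.01), (2.0, 4.08), (3.0, 6.20), (4.0, 8.01)]

score_a = fraction_explained(theory_a, observations)   # 0.6 -- fits 3 of 5 points
score_b = fraction_explained(theory_b, observations)   # 0.2 -- fits 1 of 5 points
print(f"theory A fits {score_a:.0%} of the data; theory B fits {score_b:.0%}")

The sketch also exposes the first problem taken up below: someone must choose the tolerance, which is to say, someone must decide how flawed is too flawed.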
There remain, however, several vexing problems intrinsic to probabilism. First, there is the question of who plays judge and jury in deciding how seriously flawed the quark theory--or any other theory--is. Is it 1% flawed? 5% flawed? Or 95% flawed? This is of course an eternal conundrum when rival theories like creationism and evolution are competing for market share. The unspoken conclusion which is ever present in the background is: "the problems with my theory are minor; the problems with your theory are so grievous as to be beyond repair." But who is to be the judge? Who is truly unbiased? The posturing for this judicial position begins among the three great engines of social dominance in society today--the university, the media, and the courts themselves. At the university level, to enforce the dominance of one's judgment, tenure may be denied to unsupportive rivals, or advanced degrees denied. Those who control the media can portray an unfavored position as idiotic. And if one can persuade the Supreme Court of the United States to place its imprimatur on one theory and ban all rival theories by judicial decree, the ball game is just about over. This, of course, is the essence of the controversy being addressed by this paper. The Supreme Court has elected to play judge and jury over the credibility of scientific theories rather than letting dialogue and debate ferret out a solution in the marketplace of ideas.
Fortunately, the second problem with probabilism is more easily solved. What if two theories answer an equal amount of data--say 95%? Are they necessarily equal, or can one still be regarded as superior? It is at this point that falsificationism serves as the bridge between probabilistic empiricism and a methodological appraisal of a scientific theory or research system.
FALSIFICATIONISM, or the "Critical Experiment," and the Procrustean Bed of modern science
When I was in college studying signals and systems in electrical engineering, we learned Fourier analysis: how a square wave is made up of a theoretically infinite number of sine waves, and that there are formulas by which one can calculate the strength of the various harmonic sine waves that form a square wave or other non-sinusoidal signal. A square wave of frequency "f" could be duplicated by a sine wave of frequency "f" times a certain constant K, plus (K/3)sin(3f), plus (K/5)sin(5f), plus (K/7)sin(7f), and so on. Since each harmonic is smaller (the seventh harmonic being divided by seven), you can eventually approximate the square wave to a certain accuracy by selecting three, five, seven, or more harmonics of the sine waves. To me, it was an abstract exercise in math. What I did not understand at the time is the connection between mathematical symbol and reality. My professor then began to speak of the sine waves present in the square wave. I raised my hand and objected: "I understand that we can mathematically represent a square wave by a series of sine waves, but when you mention a circuit response to the sine waves, you talk as if these sine waves are actually in there." The professor smiled, stopped class, and led us down to the lab. There he generated a square wave on the oscilloscope, and set up a simple "low pass" filter (a resistor and a capacitor, with a scope measuring the voltage across the capacitor) to filter out the higher frequencies. Sure enough, when all but the fundamental harmonic were filtered out, a sine wave of the first harmonic remained! Although I did not appreciate the significance of the moment at the time, I never forgot that class, and I thought long about what I had observed. It had been more than an exercise in electrical engineering; it was an exercise in the philosophy of science. Were the sine waves really "in there"? Mathematical theory, specifically the Fourier series, suggested they were. And, as it turned out, the theory modeled reality quite well! This is science. An idea is a hypothesis. An idea that can be shown to stand up under experimental conditions, and to predict the outcome of the experiment (that the higher harmonics would be filtered out), is a theory. The creation of a critical experiment is known as falsificationism. A hypothesis can either rise to the level of a theory if it satisfies a critical experiment, or stand as falsified if it fails to do so.
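The classroom demonstration can be reproduced numerically. The Python sketch below is only an illustration: the signal frequency, sampling rate, filter cutoff, and the simple first-order filter are assumptions chosen for the example, not details of the original lab. It builds a square wave from its first few odd harmonics, passes it through a low-pass filter, and confirms that what survives is essentially the fundamental sine wave.

# A minimal numerical sketch of the low-pass filter demonstration described above.
# All parameter values (f0, fs, fc) are illustrative assumptions.
import numpy as np

f0 = 1000.0                        # square-wave fundamental frequency, Hz
fs = 1_000_000.0                   # sampling rate, Hz
t = np.arange(0, 0.01, 1.0 / fs)   # 10 milliseconds of signal

# Build the square wave from its odd harmonics: (K/n)*sin(2*pi*n*f0*t), K = 4/pi.
square = sum((4 / np.pi) * np.sin(2 * np.pi * n * f0 * t) / n for n in (1, 3, 5, 7, 9, 11))

# First-order RC low-pass filter, discretized as y[k] = y[k-1] + alpha*(x[k] - y[k-1]).
fc = 1200.0                        # cutoff frequency, Hz (just above the fundamental)
rc = 1.0 / (2 * np.pi * fc)
alpha = (1.0 / fs) / (rc + 1.0 / fs)
filtered = np.zeros_like(square)
for k in range(1, len(square)):
    filtered[k] = filtered[k - 1] + alpha * (square[k] - filtered[k - 1])

# The surviving signal is dominated by the fundamental: the spectrum peaks at f0.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1.0 / fs)
print(f"dominant frequency after filtering: {freqs[np.argmax(spectrum[1:]) + 1]:.0f} Hz")

The point is not the arithmetic but the philosophy: the model (a sum of sine waves) made a risky prediction about what the filter would leave behind, and the prediction could have failed.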
Although the ability of a theory to comport with observable data is fundamental, two theories that address and explain an equal amount of data are not necessarily equal. One theory may account for 95% of data already observed, and also be 95% accurate when evaluated against new data not known at the time the theory was formulated. A rival theory might be 95% accurate (or even 99% accurate) with respect to past data, but be an utter failure in predicting future data. It maintains its level of accuracy only through ad hoc reformulations and reverse-engineering any time new data becomes available. This methodology is known as a Procrustean bed. Procrustes, or "the stretcher," was the epithet ascribed to Polypemon, also known as Damastes. Those who fell under his power were forced into the Procrustean bed. If the stranger were too short, he was stretched to fit the bed. If he were too tall, his limbs were cut off until he fit the bed. It was not a ghostly tale, but a philosophical commentary on methodology. Like the Procrustean bed, modern science will stretch or otherwise reformulate a theory to comport with newly observed data. This is not necessarily wrong. Indeed, the progress of science is marked by this process. The six-million-dollar question is the extent to which a theory is discredited by such ad hoc reformulation. The other obvious question is when a theory should simply be discarded. Lakatos and Zahar suggest that as long as a new reformulation not only explains some present anomaly but also predicts some future event accurately, the theory remains viable. But when the ad hoc adjustments are doing nothing more than retroductively tweaking the theory to conform to existing data, the theory has become moribund.
It is always easy for a scientist to deal with a given anomaly by making suitable adjustments to his program (e.g. by adding a new epicycle). Such manoeuvres are ad hoc, and the program is degenerating, unless they not only explain the given facts they were intended to explain but also predict some new facts as well.
This, then, is the essence of falsificationism. The same principle can also be referred to as a "critical experiment." There is a vast qualitative difference between a theory that has been reformulated to accurately address old data and a theory that was able to make bold predictions before that data was observed. Karl Popper rejected Marxism and Freudianism as "pseudo-sciences" because their adherents were unwilling to subject the merits of their theories to a critical experiment which could "falsify" the theory. Johnson writes,
Popper saw that a theory that appears to explain everything actually explains nothing. If wages fell this was because capitalists were exploiting the workers, as Marx predicted they would, and if wages rose this was because the capitalists were trying to save a rotten system with bribery, which was also what Marxism predicted. A psychoanalyst could explain why a man would commit murder-- or, with equal facility, why the same man would sacrifice his own life to save another. According to Popper, however, a theory with genuine explanatory power makes risky predictions, which exclude most possible outcomes. Success in prediction is impressive only to the extent that failure was a real possibility.
Popper was impressed by the contrast between the methodology of Marx and Freud on the one hand, and Albert Einstein on the other. Einstein almost recklessly exposed his General Theory of Relativity to falsification by predicting the outcome of a daring experiment.
The potential for prediction carries with it the potential for failure. This is the essence of falsificationism. Falsificationism, then, does not simply address the aggregate sum of data that coincides with a theory, nor the sum of data which contradicts a theory. Those are measures of the theory's empirical integrity. Strict probabilism would hold that the theory that answers the most data is necessarily superior. Falsificationism evaluates a theory's prescience--its success in predicting new data. It asks what percent of the empirical fit came before the data was discovered, and what percent is simply the product of reverse-engineering. Falsificationism demands that a viable theory demonstrate a predictive quality. A theory that is continually reformulated and "jury-rigged" to accommodate renegade data but is otherwise incapable of addressing future data is inferior--even if it can account for all the data after each ad hoc reformulation.
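The difference between the two kinds of fit can be made concrete with a small scoring sketch. In the hypothetical Python fragment below (the theory, the data, the split into "old" and "new" observations, and the tolerance are all assumptions for illustration), the same accuracy measure is applied separately to the data the theory was built on and to data that arrived afterward; falsificationism cares almost entirely about the second number.

# A minimal sketch distinguishing retrofit accuracy from predictive accuracy.
# The theory, data, and tolerance are hypothetical illustrations.
def accuracy(theory, observations, tolerance=0.05):
    hits = sum(1 for x, y in observations if abs(theory(x) - y) <= tolerance)
    return hits / len(observations) if observations else 0.0

theory = lambda x: x ** 2                                # formulated against the old data

old_data = [(1.0, 1.01), (2.0, 4.02), (3.0, 9.03)]       # known when the theory was framed
new_data = [(4.0, 16.01), (5.0, 25.04), (6.0, 36.02)]    # observed only afterward

retrofit_score = accuracy(theory, old_data)              # empirical fit to old data
predictive_score = accuracy(theory, new_data)            # prescience: fit to data predicted in advance

print(f"fit to old data: {retrofit_score:.0%}; fit to predicted data: {predictive_score:.0%}")
# Both scores happen to be 100% here. A degenerating theory keeps the first score
# high only by ad hoc reformulation, while the second score collapses.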
Because falsificationism is concerned with how a theory came to fit the data (by prediction or by ad hoc reformulation), rather than with how much data it addresses, it is not a purely empirical tool. It is really the bridge between the empirical and the methodological.
METHODOLOGICAL CONSIDERATIONS
Simplicism holds that one theory is superior to a rival theory if it is simpler. This is of course a subjective term. To a mathematician, X² + Y² = Z² is the simplest and most economical way to describe a circle. It defines a circle as the set of points equidistant from a center point. To a child, however, the equation is far from simple. It is incoherent. Indeed, the Schrödinger wave function is a rather compact equation, but it is incoherent even to most highly educated people. It can hardly be called simple. This tension between simplicity and economy was well captured in de Solla Price's observation that the Copernican system was "more complicated but more economical [than Ptolemy's]." In addition to the concept of "economy," the term "simplicity" has also been augmented by the term "beauty." Robert Millikan was the 1923 Nobel Laureate in physics for isolating the charge of the electron in his now famous "oil drop" experiment. Within the margins of his original notebooks, one can see the word "beauty" inscribed by Millikan as he made this discovery. The mathematical formula for predicting the terminal velocity of a tiny sphere of oil in free fall, combined with other simple principles, yielded a wonderful discovery. That the math should have been borne out by experiment was elegant, beautiful. Simple equations and a truly simple experiment performed by high school students can isolate the charge of an electron. Beauty can also be observed in the form or symmetry of equations. Scientists have often noted the near symmetry of Maxwell's electromagnetic equations
∇ × H = J + ∂D/∂t
∇ × E = -∂B/∂t
∇ · B = 0
∇ · D = ρ
and mused that they might become perfectly symmetrical if a magnetic "monopole" were found in nature. Maxwell's equations combined the electric and magnetic forces into a single framework. More optimistic physicists have sought to express an equation or set of equations that combine the electric, magnetic, strong nuclear, weak nuclear, and gravitational forces, in what is often called the "unified field theory" or "grand unified theory" or simply "GUT." In the last lecture I heard on the "unified field theory" [which was probably out of date by the time the microphone was turned off], a physicist expressed hope that if the universe were modeled as an eleven-dimensional system, an equation for the grand unified theory could be as short as a single line, whereas a ten-dimensional model--if it were even possible to express a unified field equation in a ten-dimensional universe--would require an equation of several pages. Whether or not scientists ever derive a unified field equation, let alone two rival equations for ten- and eleven-dimensional universes, remains a question for future generations. But from the standpoint of the philosophy of science, let us assume that rival equations are developed. Are they both true? If one theory holds to an eleven-dimensional universe and the other to a ten-dimensional universe, how does one determine which is "true"? As noted, the first steps are empiricism/probabilism (holding true for the most data) and the ability to make predictions through a critical experiment (falsifiability). Assuming, however, that both equations predict equally well, the question verily screams for an answer: Which is true truth? It is then that the third test, "simplicity" or "elegance" or "symmetry," comes into play. If a grand unified theory could be expressed in a single line for an eleven-dimensional universe, and an equation presuming a ten-dimensional universe required four pages, a proper application of the philosophy of science would prefer to view the former as the "true" expression of the universe because it is "simpler."
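As an aside, the "perfected" symmetry the physicists muse about can be written out explicitly. If a magnetic charge density and a magnetic current density existed in nature (they have never been observed; the subscripted symbols below are the standard hypothetical ones, written schematically with unit conventions suppressed, and are not asserted anywhere earlier in this chapter), Maxwell's equations would take the fully symmetric form
∇ × H = J_e + ∂D/∂t
∇ × E = -J_m - ∂B/∂t
∇ · D = ρ_e
∇ · B = ρ_m
in which every electric quantity has a magnetic mirror image--precisely the "beauty" the monopole hunters hope to restore.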
However one defines simplicity, it is intuitively central to the value or merit of a scientific theory. Indeed, Ptolemy heralded his system (which involved forty-one epicycles rotating about each other to account for planetary motion) principally because it was simpler than the systems of any of his predecessors.
In a similar manner, Adam Smith regarded the Copernican system as superior to Ptolemy's--not because it was more accurate in accommodating observable data (for it was not), but because it was simpler.
Adam Smith, for example, in his beautiful History of Astronomy, argued for the superiority of the Copernican hypothesis on the basis of its superlative "beauty of simplicity." [Citation omitted.] He disclaimed the inductivist idea that the Copernican tables were more accurate than their Ptolemaic predecessors and that therefore, Copernican theory was superior. According to Adam Smith the new, accurate observations were equally compatible with Ptolemy's system. The advantage of the Copernican system lay in the "superior degree of coherence, which it bestowed upon the celestial appearance, the simplicity and uniformity which it introduced into the real directions and velocities of the Planets." [Citation omitted.]
We note, then, that proponents of both the Ptolemaic and Copernican systems lauded the superiority of their systems--not because of empirical considerations (the ability to address observable data), but because of superior simplicity.
The source of an hypothesis and methodological assumptions.
The discoverer of the ring structure of benzene had mulled over the problem of the shape of the molecule. In a dream, he saw the benzene molecule represented by a worm or dragon eating its own tail. He awoke and surmised that benzene was a round or ring-shaped molecule. A few experiments later, the model was shown to accurately predict chemical reactions with benzene. The ring (or dragon) shaped molecule was no longer an hypothesis; it was now a theory, and one which has remained reliable, predicting a great many chemical reactions with profound accuracy.
It is important to note that it mattered not whether the dream giving birth to the hypothesis was induced by sleep deprivation, alcohol, drug abuse, or any other means. The dream was confirmed to be an expression of chemical truth by subsequent experiments.
Imagine, however, that our intrepid dreamer had been intoxicated and passed out when he had the dream of a benzene ring. Now further imagine that a temperance society called "Americans for the Separation of Alcohol and School Children" objected to the ring-theory of benzene on the grounds that it glorified drinking and worked toward the corruption of school children.
Now imagine that our temperance society hires a film editor named Leni Riefenstahl to direct and produce a movie entitled "The Triumph of the Separation of Alcohol and School Children." Imagine further that a public relations firm is hired to convince Americans that the "Separation of Alcohol and School Children" is somehow a part of the First Amendment of the United States Constitution, and that for fifty years, the media keeps repeating this phrase. Imagine that eventually the average American, who has never even read the Constitution, comes to believe that "the separation of alcohol and school children" really is in the United States Constitution. Because many Nobel laureates are members of the temperance society, they are paraded before the Supreme Court like French Poodles at a dog-and-pony show in an attempt to persuade the Court to remove the teaching of the benzene ring from schools.
Imagine then that, as a result of this chicanery, the courts, who have also come to believe that the "separation of alcohol and school children" is somewhere in the Constitution, determine that it is unconstitutional to teach that benzene is ring-shaped in chemistry class. The temperance society offers other molecular shapes--linear molecules, bent molecules, tetrahedrons, etc.--as alternatives to the ring shape of benzene. As a result, schools are only allowed to teach these highly flawed and scientifically errant shapes for benzene. One can hardly call this an "advance" of science. It is no longer science because it is no longer a quest for truth. It has presuppositionally eliminated certain shapes of benzene.
In fact, if benzene is indeed a ring (as years of experimental verification strongly suggest), then truth has been circumvented by the politics of the day. Truth is no longer an expression of reality grounded in the scientific method; it has become a political commodity. And science is no longer a method for ascertaining the truth, but a tool for social engineering and political change.
It is interesting to note that the systems of Copernicus and Ptolemy were each born out of distinct methodological assumptions. Claudius Ptolemy's geocentric theory of the universe was not motivated by some theological insistence that the earth was the center of the universe. Quite the contrary, it was born from the methodological presupposition that mathematics was a higher reality than theology. In Ptolemy's The Mathematical Treatise "we see the first indication that Ptolemy broke with Aristotle, for he argued that mathematics is the highest form of philosophy, rather than theology."