Saturday, October 3, 2009

Scientific Method

In my last year in college, I took a course on Scientific Method in the Philosophy department. It was to be the fourth and final of my philosophy courses - all my earlier ones were on basic problems in contemporary Analytic Philosophy like philosophy of mind, the nature of reality, free will and determinism, and theory of knowledge. Judging from the title of the course, I imagined that it would have a lot of resonance with my understanding of how theories in physics develop, and with the standard methods and procedures of performing experiments and drawing conclusions from them. I expected the field to define and explain, in a more rigorous, detailed and general way, the foundational premises of science - from the importance of reproducibility of experiments to the limitations of physical theories. However, the class turned out to be quite different from what I had anticipated. Yes, the question of what constitutes a valid method of investigation in the sciences was discussed, but the course as a whole was focused more on the Philosophy of Science and the evolution of scientific theories. I became aware of the existence of such a branch of inquiry within philosophy only after taking this course. As the early-morning lectures (8 AM) moved from one topic to another, I did recognize that some of the issues dealt with were important not only for understanding the foundations of science properly but also for gaining insight into the nature of progress in science and the emergence of radically different viewpoints. Nonetheless, I was disappointed.

Maybe I was bored because I have limited patience and attention for things in which I find no immediate excitement. But in part it could also be because I was unable to relate most of what I gathered from these lectures to what I had come to understand as the basic philosophical framework within which theories are constructed and physical laws expressed. During this entire period when I learnt about the formal theory of scientific method, I was scarcely ever able to make any connection to a real advancement in the field of physics. The entire language was so general, and in some cases so simplistic, that it could not describe, say, the insight that led to a specific approach to attacking some topic (the BCS theory of superconductivity), an ingenious way to design an experiment (the Michelson interferometer), or an extension of existing laws to account for unexplained phenomena (Maxwell's displacement current). Science, at the stage where there is no clear explanation for some problem, is messy, and it can be so in rather unpredictable ways: many candidate theories of varying merits (extensions of the Standard Model), insufficient experimental data to draw solid conclusions or support any specific theory (dark matter), some intractable mathematical monster that needs to be cracked (energy conditions in General Relativity), or a whole bunch of different ad hoc ideas that need to be tied together to form a coherent theory (the development of Quantum Mechanics as well as Quantum Electrodynamics). Progress in these cases can come in completely unexpected ways, and I don't think there is any unambiguous way in which we can classify the different possible attempts at resolving open questions. No philosophical school describing the methodology of science can account for all the bizarre and crazy ways in which physics evolves at any given stage. For example, it is not always true that experiments precede theoretical developments.
The top quark was predicted based on the observed pattern of quark families, and this inference was vindicated by experiments later on. Our near certainty about the existence of the Higgs particle and its properties comes from the enormous success of the Standard Model in explaining most of the interactions of elementary particles. Ideas sometimes pop out of nowhere and can lead to the creation of a new branch of science. Chaos was first discovered by Poincare when he was investigating the three-body problem, and this opened up the study of a whole class of similar problems under dynamical systems. Explanations can be outlandish and often appear completely contrived. De Broglie's relation between momentum and wavelength sounds pretty bizarre and vague when one encounters it for the first time. Indeed, the postulate that the speed of light is constant in all inertial frames, the axiom at the foundation of Special Relativity, is unconvincing when you make a sharp transition from classical Newtonian mechanics to this revolutionary new framework. And how on earth did Faraday strike upon the notion of fields? Do any of the theories of scientific method advanced by Popper or Kuhn explain the ridiculous brilliance of this 19th-century English experimenter who, despite no formal training in physics, was able to put forth a description of electrodynamics in terms of these invisible oscillating lines of flux? I honestly doubt it. Systematically describing the development of physics according to some general outline is either impossible, or the outline is so broad as to render it useless. The history of science is very complex (at least it is so for physics), and there is reason to doubt whether anyone working in the Philosophy of Science is aware of all the subtleties involved.
Below I shall consider some of the important problems in scientific methodology tackled by philosophers, and the various theories advanced to characterize the practice of science.


The problem of induction is something that comes up often in discussions of this subject. Without getting into finer details, let me state that the problem addresses the fact that no empirical law of nature can be known to be completely correct, because we have not tested it - and cannot test it - in all possible cases in the universe. Newton's second law (ignoring relativistic and quantum mechanical effects) is not known to be exact because it has not been verified in every situation and every location where it is expected to be valid. A lot of discussion has gone into this, and it is a question that has preoccupied philosophers since Hume, but there is no satisfactory resolution of it. That it is an important philosophical question is beyond any reasonable doubt, but whether a working physicist gains anything from it is worth considering in more detail. Even before I learnt about this formal problem, I intuitively understood its relevance in the context of empirical relations in physics. We know in mathematics that a single counterexample is all that is required to disprove a general assertion. Extrapolating, it is reasonable to expect the same to be true of physical laws as well - if one can demonstrate some experiment anywhere in the universe where a principle does not apply, then it ceases to be a real principle. (Of course, all this must be taken with a grain of salt, since even Newton's third law is not valid in quantum field theory, yet few would dispute that it is a valid empirical relation for a wide class of phenomena.) In fact, today we are considering far-reaching possibilities that admit precisely such limitations in the applicability of our theories. We know that quantum field descriptions are constructed only up to a certain scale (expressed in energy or length), and that these are independent of the structure of the underlying "fundamental" formulations.
In the same way, we expect classical general relativity to break down at energies comparable to the Planck scale, since quantum mechanical fluctuations in gravitational fields would make significant contributions to the calculations there. There is ongoing speculation regarding the mutability of the fine-structure constant over the evolution of the universe. Ever since Heisenberg firmly disregarded speculation about underlying mechanisms and confined himself to merely describing the observations, physics has moved in a direction where it acknowledges that the main thrust is to explain empirical observations and not be too distracted by our preconceptions and prejudices regarding underlying theories. In fact, such a stance was taken by none less than Newton himself. He vowed never to make abstract speculations and discarded metaphysical notions of space, time and physical laws. This was a bold decision at the time, and it required an extraordinary genius like Newton to proclaim such a radical outlook towards understanding nature. The one occasion where he did not put this philosophy into practice was in his description of time: time being too ephemeral a concept to be pinned down accurately, he resorted to the metaphysical position that absolute time exists and flows evenly, as can be corroborated by observers in any reference frame. Therefore, careful understanding and formal analysis of the induction problem is unlikely to provide new insights for physicists as far as research in physics is concerned.


While studying the theories of Karl Popper, one encounters the concept of verisimilitude. This is a term used to index the "truthlikeness" of a specific scientific theory and to compare it with competing theories. Popper assigned verisimilitude in a quantitative manner based on the true and false propositions of a theory: a theory X is considered better than Y if the true propositions of Y are included among those of X and the false propositions of X are included among those of Y. This is a prime example of how simplistic much of the study in the philosophy of science is. Anyone with some background in undergraduate physics would immediately realize that this is not how theories are compared - we don't count the true propositions (or the false ones). In fact, we don't judge a theory using such a formal system, and doing so would lead to all kinds of odd conclusions. And how does it accommodate the fact that much of what we consider as theories today are, in fact, approximations valid only in a specific regime? By this criterion, all statements in Newtonian physics are false, and the same goes for thermodynamics and even classical statistical physics.
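To see how this criterion behaves, here is a toy sketch in Python. The "propositions" are my own made-up illustrations, not anything from Popper; only the subset comparison reflects his definition.

```python
def more_truthlike(x_true, x_false, y_true, y_false):
    """A literal reading of Popper's comparison: theory X is at least as
    truthlike as theory Y if Y's true propositions are a subset of X's
    and X's false propositions are a subset of Y's."""
    return y_true <= x_true and x_false <= y_false

# Hypothetical "propositions", purely for illustration.
kepler_true = {"orbits are ellipses", "equal areas in equal times"}
kepler_false = {"the sun is exactly at the focus for every planet"}

newton_true = kepler_true | {"inverse-square law of gravity"}
newton_false = set()   # pretend Newtonian mechanics asserts nothing false

print(more_truthlike(newton_true, newton_false, kepler_true, kepler_false))  # True
print(more_truthlike(kepler_true, kepler_false, newton_true, newton_false))  # False
```

The catch is in the pretend comment: taken strictly, every quantitative statement of an approximate theory is false, so the "true" set of Newtonian mechanics would be empty and the comparison collapses.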

Another criterion for judging scientific theories put forward by Popper was falsifiability. A theory is to be considered scientific only if it specifies a hypothetical event or phenomenon that would prove the theory false. For example, the SU(5) theory of grand unification predicted proton decay, but since none has been observed in nature this model was quickly abandoned. The absence of a certain event provided a method of inferring the falsity of the hypothesis. That is a nice example which works well to explain the Popperian notion of falsifiability, but that is definitely not how all physical theories are rejected. In fact, I doubt whether a single experimental result has ever been used to immediately decide that a theory is useless and must be discarded outright. What always happens is that physicists try modifying the assumptions of the theory, or altering the basic laws, in such a way that it accommodates the new observations. The fact that the ether hypothesis was around for such a long time despite several paradoxes illustrates this point quite well. The ether hypothesis could not explain the null result of the Michelson-Morley experiment, which attempted to measure the speed of the Earth relative to the stationary ether; but since the concept of a medium through which light moves had been used to understand electromagnetic propagation for so long, there was a strong tendency to retain such a picture. To account for the unexpected results, various new models were proposed for the interaction of ether with objects in the universe, most specifically with the Earth. One way of approaching this was to assume that the ether was dragged along by massive objects like our planet, which could account for a null result. This was the first patch applied to a hypothesis that was, by Popperian classification, falsified. This method of rescuing the ether hypothesis opened up a new set of complications.
In a different attempt, Fresnel proposed that the ether is partially dragged by a medium, which results in a lower velocity for light traveling through that medium. The Michelson-Morley experiment in fact could not rule out this possibility. Meanwhile, Lorentz had considered the contraction of all objects traveling through the ether, including the arms of the interferometer used in the experiment. With a specific contraction law (which, not surprisingly, is the same as the one that can be derived from special relativity) one can still successfully defend the ether's omnipresence. Note that in all these cases explanations and laws were worked out so as to fit the experimental results. It was not until Einstein's formulation of special relativity and his unambiguous rejection of the ether concept that the idea was dropped. There were plenty of reasons to do so before, but history shows that such a cherished belief would not be thrown away on the basis of a single experiment, or for that matter a few. A slightly different example would be the role of renormalization in modern quantum field theories. When cross-section calculations in QED involving contributions from loop diagrams were carried out, the numbers obtained from the theory diverged. Since that is an absurd result, by the strict Popperian criterion the theory should have been dumped right away for its unfeasible implications. Indeed, many physicists believed that this conclusion sounded the death knell for the framework. Yet there were others who sought to modify it so that the divergences could be eliminated by some new ad hoc rules - rules that were certainly unconvincing to many physicists and to almost all the mathematicians interested.
While skeptics were critical of this quick-fix approach, devised by "sweeping the real problem under the rug", it eventually turned out that the formalism of quantum field theory required such a treatment of the quantities that appear in it. As time progressed and many of the predictions obtained with these renormalization techniques gave correct results, the approach won over many skeptics and ultimately became universally accepted as legitimate. Such a development depended far too crucially on the details of the specific theory to be captured accurately by the general criterion laid out by Popper.
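The Lorentz-contraction rescue mentioned above can be checked with a few lines of arithmetic. In an ether wind of speed v, a naive stationary-ether model predicts a round-trip light travel time of (2L/c)/(1 - v^2/c^2) along the arm parallel to the motion and (2L/c)/sqrt(1 - v^2/c^2) along the perpendicular arm; contracting the parallel arm by sqrt(1 - v^2/c^2) makes the two equal, reproducing the null result. A minimal numerical sketch (the arm length is an arbitrary illustrative value):

```python
import math

c = 299_792_458.0    # speed of light, m/s
v = 30_000.0         # Earth's orbital speed, ~30 km/s
L = 11.0             # interferometer arm length, m (illustrative)

beta2 = (v / c) ** 2

# Round-trip times predicted by a naive stationary-ether model.
t_parallel = (2 * L / c) / (1 - beta2)
t_perpendicular = (2 * L / c) / math.sqrt(1 - beta2)
print(t_parallel > t_perpendicular)   # True: a fringe shift is expected...

# ...unless the parallel arm contracts by sqrt(1 - beta2), as Lorentz
# proposed, in which case the difference vanishes.
t_parallel_contracted = (2 * L * math.sqrt(1 - beta2) / c) / (1 - beta2)
print(math.isclose(t_parallel_contracted, t_perpendicular))   # True
```

The point of the sketch is exactly the one made above: the contraction law is tuned so that the prediction fits the observation, which is why no single null result could falsify the ether outright.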


A person of towering influence in this area of study is the American philosopher Thomas Kuhn. Kuhn's work in the 1950s and 60s can rightly be considered a departure from the basic modes by which the subject had been analyzed by all his predecessors. His view of the history of science immediately strikes one as being altogether independent of the traditional and orthodox ways of understanding progress in the scientific endeavor. In particular, he strongly disagreed with the notion that scientific progress is a cumulative process, something that had been assumed in all the standard pictures of the history of science. Instead, Kuhn argued that scientific activity can be broadly divided into two distinct phases: "normal science" and scientific revolution. Normal science is the period when there is an existing framework within which all the discoveries and solutions to conventional problems are carried out. In his conception, this is a puzzle-solving time-frame in which the scientist applies the rules, techniques and underlying theoretical axioms to determine solutions to relatively minor puzzles. This sort of activity is even compared to 'mopping the floor' and 'clearing the mess' - referring to the unresolved issues of the particular framework. In contrast, a scientific revolution basically involves a complete overthrow of the existing framework and its replacement with a paradigm that may be completely different from the earlier one. The revolution is a dramatic shift in the development of science because it involves a drastically different understanding of the basic concepts and ideas, along with new tools and techniques for investigation, a different outlook on natural phenomena, and a shift in priorities between different aspects of theory and experimentation.
As one would imagine, the most striking examples of this come from the revolution that took place in the early part of the last century, namely the development of quantum mechanics and relativity. There is absolutely no question that hardly a single field in modern physics has remained untouched by these new frameworks, and in many areas both theories are incorporated compatibly. In addition, Kuhn also uses as examples the paradigms set by, and the shifts that occurred around, the works of Aristotle (on analyzing motion), Ptolemy (planetary positions) and Maxwell (expressing the electromagnetic equations in their mathematical form).

Yes, these may all be good examples, but how seriously do we take the claim that there is a reasonably clear division between a revolutionary and a normal phase in the progress of science? Let us focus on the period since those groundbreaking ideas of quantum mechanics and relativity came to be accepted as valid scientific theories by the physics community. It would be very hard to argue that there has been any other development that can be considered revolutionary since then. The most fundamental theoretical milestones of this period would certainly include the establishment of Quantum Electrodynamics, the Salam-Weinberg electroweak unification and the eventual construction and success of the so-called Standard Model of particle physics.
Of course, none of this can be regarded as a paradigm shift in any way, because it still retains the same underlying structure of quantum field theory - indeed that structure was just extended to all the basic interactions. However, let us consider all the important developments that have taken place during this period within this framework and ask ourselves whether they are something to be casually treated in the somewhat demeaning manner of "puzzle-solving". Looking at the Wikipedia page on the timeline of physics, one notices a whole host of exciting discoveries in the last 90 years or so. The ascendancy of Big Bang cosmology, the BCS theory of superconductivity, the development of the transistor, the solution of the 2D Ising model, the Fractional Quantum Hall Effect and Bose-Einstein condensation are some of the most striking examples. None of these, nor the research they spawned, can be looked upon as a revolution, because they neither undermined the validity of relativity and quantum mechanics nor completely altered the progress of all fields in physics. Yet, taken together, one can say that our view of the entirety of physics has been highly influenced by these "puzzle-solutions" and "floor-mopping". In fact, no one in the 1930s could have conceived the current status of physics, its achievements over the years, the formulation of ingenious principles and all the various new areas of research it has opened up. In other words, one can consider all the developments in physics together, cumulatively, as a 'revolution' without any of the characteristics of scientific revolution as postulated by Kuhn.

If this argument is not convincing, let us look into the future and ponder how some of the unsolved problems of our era are going to be tackled. Certainly, one of the greatest unresolved questions in physics is the unification of the fundamental interactions and all the associated problems and loopholes in the Standard Model - ranging from the lightness of the Higgs mass, to the origin of neutrino mass, to the QCD vacua. If we ultimately find a way to unify these interactions successfully, it would truly be one of the greatest breakthroughs in modern science. It may even represent the pinnacle of our achievements in understanding the most fundamental aspects of natural phenomena. However, would that be a scientific revolution in the sense described by Kuhn? How many areas of physics would such a development have any perceptible impact on, let alone completely turn upside down? The answer is: few, if any at all. The unification is expected to occur at the Planck scale, and that energy is almost unattainable in any accelerator that may be constructed in the foreseeable future. That being the case, there is no reason to expect that it will have any consequence for almost all of physics except cosmology. At least until the time that we can explore such energy scales inside the condensed matter laboratory! Hence studies in atomic and molecular physics, theoretical nuclear physics, surface and materials science, high-Tc superconductivity and non-equilibrium statistical mechanics will continue as if nothing ever happened. And when it comes to understanding the origin of the universe and addressing some of the unresolved questions in that field, such as the constituents of dark matter or the cosmological constant problem, although we may find solutions to these with the construction of a 'theory of everything', it would not invalidate the progress we have made so far.
Thus, cosmologists will not have disputes amongst themselves and will universally adapt their work to this paradigm once it is established. So there is no room for such things as incommensurability (methodological or epistemological), Kuhn-loss, or a new vocabulary, as laid down by this very influential philosopher. Hence it is safe to conclude that however great an accomplishment the unification may be, it certainly will not be a scientific revolution.
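The scale argument above is simple arithmetic. The Planck energy is sqrt(hbar c^5 / G), about 1.2 x 10^19 GeV, roughly fifteen orders of magnitude beyond the LHC's collision energy - a quick sketch (the LHC figure is its ~14 TeV design energy):

```python
import math

# CODATA values (SI units).
hbar = 1.054_571_817e-34    # reduced Planck constant, J*s
c = 299_792_458.0           # speed of light, m/s
G = 6.674_30e-11            # gravitational constant, m^3 kg^-1 s^-2
eV = 1.602_176_634e-19      # electron-volt, J

# Planck energy: E_P = sqrt(hbar * c^5 / G), converted to GeV.
E_planck_GeV = math.sqrt(hbar * c**5 / G) / eV / 1e9
lhc_GeV = 1.4e4             # ~14 TeV, LHC design collision energy

gap = math.log10(E_planck_GeV / lhc_GeV)
print(f"Planck energy ~ {E_planck_GeV:.2e} GeV")          # ~1.22e19 GeV
print(f"Orders of magnitude beyond the LHC: {gap:.0f}")   # 15
```

A gap of fifteen orders of magnitude is why a Planck-scale unification, however profound, would leave laboratory-scale physics untouched.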


Although I have been disappointed with most of the philosophy of science, there is one special exception amongst the various doctrines of the discipline that I find very pertinent and useful: Logical Positivism. The exact positions and principles of this school have been debated extensively, and some of its more radical positions have now been abandoned. I shall not concern myself here with those issues (such as the analytic/synthetic distinction, the controversial rejection of synthetic a priori statements, or the regarding of mathematics as tautology). Instead, I am going to focus on the one aspect of the doctrine that is key to distinguishing science from metaphysics, which revolves around the principle of verifiability. It holds that a statement has "meaningful content" only if it makes a claim that can be supported by empirical justification. Or, more broadly, the only statements that express factual knowledge are those that have the potential to be empirically verified at some time in the future. Hence, a statement like "God exists in ways unknown to man" is devoid of any meaningful content, because there can be no observations or events that could establish its truth or falsity. Extending this principle, the founders and adherents of the Vienna Circle in the 1930s made devastating critiques of areas in philosophy, metaphysics and theology. They argued that many of the propositions contained in these disciplines do not express any cognitively sensible fact about the world. I find myself in agreement with this viewpoint and think there really is no meaning in asking questions like "Are there parallel universes outside our space-time continuum?"*

It would be improper for me to conclude this discussion without putting the philosophy of science in some fair perspective. I have raised several objections to the basic postulates put forward by some of the most important practitioners of this school of inquiry. I will always have a skeptical outlook towards how successful any theory describing the evolution of science can be, and will suspect the accuracy of any characterization of the history of science based on simplistic rules. Applying Kuhn's own standard, I would say that the philosophy of science is still in a pre-paradigm state! Yet I strongly believe that every field of scholarship is of value and contributes to human knowledge. While the ultimate scope of the philosophy of science may be too ambitious for its own good, there is no denying that its various theories shed some light on certain essential elements that characterize the practice of science. It may be incorrect to declare universal rules that govern all innovations in science, but more modest statements about the development and progress of the sciences would certainly prove useful to anyone - expert or not - curious about the evolution of what is unarguably the greatest collective human accomplishment.

*I have to admit that part of my inclination towards this philosophical position is my annoyance at the invariable propensity of individuals to make statements that have no empirical content whatsoever while imagining them to be something really profound! I have had the misfortune of having to sit with such loud-mouthed, gibberish-spewing "revolutionary thinkers" in the philosophy classes I have taken.