
The Scientific Method

The scientific method is the only viable method for justifying any statement that is not a logical tautology.


Introduction
As has been known to philosophers since the time of David Hume, and as described in my essay On Truth, there are statements that, if true, cannot be proven logically.  These statements require input from our senses in order for us to be able to ascertain to any degree of confidence whether or not they are true.  While these statements cannot be proven, they can be justified via a method that I will loosely call the scientific method.  This essay will look at this method in some detail.

A close examination of the foundations of the scientific method reveals a number of philosophically questionable assumptions. Yet, despite these problems, the scientific method has been extremely successful. All of the technology that we enjoy in modern times was created as a direct result of information learned via the scientific method. We bet our lives every day on the validity of knowledge obtained via science. For example, every time we board a jet airplane, we assume that Bernoulli's law of fluid flow is correct and will continue to be correct. A number of philosophers have questioned the value of these technologies, but they do not dispute that the scientific method has been extremely effective in providing us with a wealth of knowledge about the universe in which we live.

Background Assumptions
The scientific method makes a number of reasonable metaphysical assumptions. The first is one of realism, that there exists a reality that is independent of our minds. A related second assumption is that of naturalism, that everything in this independent reality can be observed, or at least the indirect effects of its existence can be observed1.

Another assumption that must be made is that universals, that to which predicates in language refer, exist and have identity.  In fact, they exist as patterns in the real world.  These patterns and their instantiations are themselves objects that can be parts of higher-level patterns2.  Note that I will often refer to sets as if they exist, but the intent is to refer to the patterns that are shared by the members of the set.  I will use this terminology because it is commonplace and sets in many cases are easier to visualize than are their corresponding patterns3.

I will adopt the correspondence definition of truth, in which a sentence is defined as true if it symbolically represents a state of affairs that actually exists (or 'obtains') in the real world4. I will also assume that two-valued logic, in particular first-order logic5, a reasonably powerful set theory6, and, if necessary, modal logic are valid methods of inference7.

Epistemological Foundations
We use our senses to gain information about the external world, either directly through observation, or indirectly through symbolic information.   We in turn represent this information to ourselves symbolically, in the form of languages and statements about the world, with these symbols mapped to actual objects and patterns in the external world.  I will call such a mapping the “meaning” of a symbol or sentence8.  We use logic to infer statements about the real world from other statements, and in particular, to test our beliefs for consistency.  If two beliefs are contradictory, then one of them must be wrong.

Based on information about the external world obtained through our senses, we form theories about that world. A theory consists of a set of sentences that is closed under logical implication9. Some of these sentences represent previously observed truths; all others are predictions. A theory is true if every sentence in the theory is true. In order for the sentences of a theory to have meaning (a prerequisite for truth), the symbols of the language in which the theory is expressed must themselves have meaning. Some of these symbols are 'primitives', whose meaning each of us must grasp directly; the remainder exist solely for our convenience and are defined in terms of the primitives. To make our theories simpler and more useful, we axiomatize them. An axiomatization is a set of mutually consistent sentences. The set of all sentences that can be logically derived from the axioms is itself a theory.

In normal usage, the scientific method refers to the process of formulating and justifying theories about the real world. While the full scope of the scientific method includes formal methods for the exchange of ideas and for reaching consensus about the validity of theories, we have used the same basic process in everyday life since the day we were born. For example, as a child I observed small, warm, furry, amusing animals that lived with us, and after some time learned a symbol for these types of animals: "cat". As time went on, through further observation, I learned more and more about the behavior of these animals, which instilled in me a personal knowledge of them.

In this essay, I will not be concerned with how theories are formulated, but will instead concentrate on their justification. The reason for this is that theory formation is not as important philosophically as is justification. If your crystal ball produced a new "Theory of Everything" in physics that proved to be correct, why would it matter that the theory came from a crystal ball, as long as we were able to demonstrate that the theory was (or most likely was) true? As described above, sentences that depend on information about the external world for justification cannot be proven with logic. For these sentences, a common criterion for a method of justification is that there be some metric for measuring our degree of confidence that the sentence is true, and that we can use some mechanism to raise that metric to the point that its deviation from absolute truth can be made as small as we desire. In other words, there must be a process by which the metric asymptotically approaches truth. The scientific method is just such a process.

Probability as Degree of Truth
A common metric for degree of truth as described above is probability10. While there is certainly not a consensus that probability is a valid metric for degree of truth, or that such a metric is even possible, in the brief amount of time in which I have contemplated this problem I certainly have not been able to come up with a better alternative. There are numerous problems with probability, however. One is that we do not have a good understanding of exactly what probability is. There are numerous definitions of probability, each with significant problems. Here is a summary of the leading ones11:

  • Classical Definition: This definition says that probability is the ratio of equally probable possible outcomes, such as the flip of a coin or the roll of a single die. The main problem with this interpretation is that normally the possible outcomes of a random event are not equally probable. Also, since this definition includes the word "probable", it is obviously circular.
  • Frequency Definition: This definition says that probability is the frequency at which a particular outcome occurs in an infinite number of trials. The three problems with this definition are: 1) we obviously cannot perform an infinite number of trials; 2) a ratio of two (countably) infinite numbers is undefined; 3) often we want the probability that a single particular outcome will occur, and we cannot perform more than one trial, much less an infinite number. We can overcome the first two problems in practice by performing a large number of trials, but there does not seem to be a solution to the third problem.
  • Subjective Definition: This definition says that probability is determined subjectively by our expectations, the value of which for a particular individual can be determined by asking that individual to make a wager regarding the outcome of the event in question. The obvious problems with this as a definition of actual probability are that subjective beliefs have no necessary relation to actual events and that, as a result, subjective probabilities may differ from individual to individual.
  • Logical Definition: This definition says that probability is a ratio of numbers of “possible worlds”.  I favor this interpretation, and will discuss it in more detail below.

According to the logical definition of probability, for a sentence s:

P(s) ≡ # of possible worlds in which s is true / # of possible worlds, where P(s) means the probability that sentence s is true.

A useful related concept is the probability that sentence s is true, given that another sentence q is true. This is written and defined as follows:

P(s|q) = # of possible worlds in which s and q are true / # of possible worlds in which q is true.

Since possible worlds do not really exist, '# of possible worlds in which s is true' really means '# of theories T in the language of s such that s∈T'. I will use the "possible worlds" terminology in cases where worlds are easier to visualize than theories12. It should be pointed out, however, that the logical definition is normally specified in terms of theories rather than worlds. Each possible theory contains a statement (in a language with a symbol for every object in the universe of discourse) indicating, for every combination of object symbols with predicate symbols, whether it is true or not. Each such complete assignment is referred to as a "state" in the language.

This logical definition, like the frequency definition, does suffer from the fact that the number of possible theories will be infinite, so we once again run into the problem of ratios of infinities.  Another problem is that we are implicitly assuming that all possible worlds are equally likely.

A third, more complex complaint that is raised against this definition is that it does not provide the ability to learn from experience.  However, the argument presented to demonstrate this does not seem consistent with the scientific method.  A typical example13 is to consider a universe with three objects, a, b, and c, with a single predicate F.  There are 8 possible states of this universe (8 = 2 logical possibilities raised to the power of 3 object-predicate combinations).  Consider, for instance, the theory that Fa is true, and assume that it can be demonstrated that Fc is true.  The claim is that this does not provide evidence for Fa.  However, I do not see why this should be relevant.  It is equivalent to saying, in a more complex universe, that the fact that dogs cannot talk provides no evidence for the fact that humans can talk.  Why should it?  A more interesting theory would be ∀x: Fx.  In this case, not only would Fc indeed be evidence for this theory, but it can be shown using the classic Bayes equation in probability that observing Fc does in a sense raise the probability that the theory as a whole is true.  In fact, it is commonly suggested that Bayes’ equation in probability can be used to exactly measure our change in expected probability when new information becomes available14.  I will discuss Bayes’ equation in more detail below.
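
To make this concrete, here is a minimal Python sketch (my own illustration; the helper names prob and cond_prob are hypothetical) that enumerates the eight states of this toy universe and applies the counting definitions of P(s) and P(s|q) given above. It reproduces both claims: learning Fc leaves P(Fa) unchanged at 1/2, while it doubles the probability of ∀x: Fx from 1/8 to 1/4.

    from itertools import product

    # Toy universe from the example above: objects a, b, c and a single predicate F.
    # A "possible world" (state) is an assignment of True/False to each of Fa, Fb, Fc.
    worlds = [dict(zip("abc", values)) for values in product([True, False], repeat=3)]
    assert len(worlds) == 8  # 2^3 states

    def prob(s):
        # P(s) = (# of worlds in which s is true) / (# of worlds)
        return sum(s(w) for w in worlds) / len(worlds)

    def cond_prob(s, q):
        # P(s|q) = (# of worlds in which s and q are true) / (# of worlds in which q is true)
        q_worlds = [w for w in worlds if q(w)]
        return sum(s(w) for w in q_worlds) / len(q_worlds)

    Fa = lambda w: w["a"]
    Fc = lambda w: w["c"]
    all_F = lambda w: all(w.values())          # the theory "for all x: Fx"

    print(prob(Fa), cond_prob(Fa, Fc))         # 0.5 0.5    -> Fc is no evidence for Fa
    print(prob(all_F), cond_prob(all_F, Fc))   # 0.125 0.25 -> Fc doubles P(for all x: Fx)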

This definition does have the advantage that it unites the frequency and subjective definitions with a single underlying theory. A random trial can be thought of as choosing a random world from all possible worlds. Thus, the frequency definition, in a way, reduces to a version of the classical definition. On the other hand, a subjective probability can be defined as the ratio of the number of possible worlds in which we know s is true to the number of all possible worlds. Thus, our expected probability can change as we gain new information concerning the possible truth of s. Again, Bayes' equation can be used to demonstrate this quantitatively.

The Problem of Induction
Before we turn to how the scientific method can be used to calculate the probability that a theory is true, let us first look at two other problems often associated with this method.  The first is Hume’s problem of induction15.  Hume claimed that we cannot make predictions about the future based on observations from the past, because there is no logical reason to believe that the future will be similar to the past.  We cannot claim that following this procedure will work because it has always worked in the past, because to do so is circular reasoning: in a sense we are using induction to validate induction16.

Any scientific theory will have a domain of variables or dimensions that characterize the different situations in which the theory is applicable.  For example, special relativity is characterized by such variables as location in space and time, relative velocity, and mass.  When making experimental observations, each experiment is carried out in a particular location in this domain.  We can never test the entire domain of a theory.  One reason for this is that, as Hume pointed out, we cannot perform tests in the future.  Another is that for continuous variables, there will be an infinite number of possible experiments, and we obviously cannot test them all.  In some cases physical constraints prevent us from testing a theory in some areas of its domain.  For example, we currently have no method to test the general theory of relativity at distances on the order of those encountered normally in quantum theory.

The problem of continuous variables can be mitigated by interpolation; experience has shown that it is valid to assume, to within a reasonable margin of error, that the equations describing the external world will be smooth and well-behaved between experimental data points that are sufficiently close together.  However, it is well known that extrapolation beyond the limits of the domain in which a theory has been tested is likely to lead to error.  Hume’s problem is a special case of this particular problem.  As Wesley Salmon has written, “Hume’s profound critique of induction begins with a simple and apparently innocent question: How do we acquire knowledge of the unobserved?”17.

I do not see any solution to Hume's problem. It appears that we must accept the uniformity of nature as an assumption, on faith. Specifically, I believe that we can assume that given any set of initial conditions, if an event e occurs following those initial conditions, then e will occur any time those initial conditions are reproduced exactly18. This assumption has two obvious problems. One is that admitting that science rests on an article of faith opens it up to the charge that it is no better than any other method for justifying knowledge, such as crystal ball gazing or pure faith. If a religious fundamentalist tells us that tomorrow, by God's will, Armageddon will begin, how can we refute him, other than in retrospect? The other, more mundane problem is that obviously initial conditions can never be reproduced exactly. However, scientists (and humans in general) seem to have a knack for identifying the relevant variables in any particular situation. Therefore, to reproduce a result, one only needs to reproduce the initial values of these relevant variables to within an acceptable degree of accuracy.

Useful Theories
The other problem with scientific justification is that, by definition, we can never know absolutely that a scientific theory is true: our confidence in a theory may approach certainty, but we will never achieve absolute certainty.  This is a significant philosophical problem, but in many circumstances it is not a practical problem.  As stated above, all viable theories have achieved that status by virtue of the fact that they have been tested in a subset of their domain.  Even if a theory has been found to be false outside a particular domain, it will continue to produce valid results (to within an acceptable margin of error) within the domain for which it has been successfully tested.

An example of this is Newton's laws of motion. It is well known that they are highly accurate in the domain of relative velocities much less than the speed of light. Indeed, the theory that supplanted Newton's theory, the special theory of relativity, reduces to Newton's as relative velocity approaches zero. Therefore, for situations involving everyday relative velocities, Newton's equations can be used with a very high degree of accuracy. To within an acceptable margin of error, we have an extremely high level of confidence that Newton's theory will provide a correct answer. And the obvious advantage of using Newton's equations rather than those of special relativity (e.g., the Lorentz transform) is that they are much simpler and easier to work with.
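
As a rough numerical illustration (a sketch of my own, not drawn from the essay's sources), the size of the relativistic correction is governed by the Lorentz factor γ = 1/√(1 − v²/c²). At everyday velocities γ differs from 1 by only a few parts in 10¹⁵, which is why Newton's equations remain accurate there.

    import math

    C = 299_792_458.0   # speed of light in m/s

    def lorentz_factor(v):
        # gamma = 1 / sqrt(1 - v^2/c^2); relativistic corrections scale with gamma - 1
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    # highway speed, Earth's orbital speed, and 90% of the speed of light
    for v in (30.0, 3.0e4, 0.9 * C):
        print(f"v = {v:9.3g} m/s   gamma - 1 = {lorentz_factor(v) - 1:.3e}")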

I will refer to such theories as "useful" theories. Strictly speaking, they are not true, but that does not imply that they will not make valid predictions in certain domains. The criticism that science can never produce truth is often interpreted as meaning that we can never know that any prediction of the theory will be true, whereas the only claim that can be made is that we cannot know that all of the predictions of the theory will be true. Indeed, barring a failure of our assumption regarding Hume's problem described above, all predictions of the theory will be true to within an acceptable margin of error in the valid subset of its domain.

Thomas Kuhn famously argued against this in his modern classic The Structure of Scientific Revolutions19, claiming that different theories such as Newton's laws and special relativity were "incommensurable" because their primitive terms were different. For example, he claimed that "mass" was not the same in these two theories, and that therefore the theories did not apply to the same domains, and one could not be substituted for or compared to the other. I disagree with this assessment. The fact that the primitives are slightly different does not eliminate the fact that each of these theories is useful in its respective valid domain. As long as there is agreement on the correspondence between the primitive terms and reality, either theory will represent a potential model of reality.

Justification of Theories
A common probabilistic approach to validation of scientific theories is to use an equation in probability derived by English mathematician Thomas Bayes:

(1)    P(s|q) = P(s) * P(q|s) / P(q)

A common interpretation of this equation in the context of science is as follows.  Let E be a subset of a theory T, and denote each member of E as ei.  Then

(2)     P(T|E) = P(T) * ∏i[ P(ei|T) ] / ∏i[ P(ei) ], where

P(T|E) means the probability that T is true given that every member of E is true
∏i[…] means the product of all of the terms (indexed by i) inside the brackets

If we let E be the members of T that have been experimentally tested, and E’ = T – E (i.e., T = E∪E’), then P(T) = P(E)*P(E’) = ∏i[ P(ei) ] * P(E’). Substituting this into equation (2) above yields:

(3)     P(T|E) = P(E’) * ∏i[ P(ei|T) ].

If we let P(ei|T) be determined by the results of each of the experiments ei, then this equation allows us to estimate the probability that a theory T is correct, given a set of experimental results E = {ei}; a numerical sketch of this calculation follows the list below. Several features of equation (3) are worth noting:

  • If for all experimental results P(ei|T) = 1, then P(T|E) = P(E’), which reflects only our uncertainty about the untested sentences in the theory. Indeed, if E = T, then in this case P(T|E) = 1, and T is proven. Of course, this will in general be impossible, as T will typically be infinite.
  • If any one of the experimental results fails, then P(ei|T) = 0, thus P(T|E) = 0, and T is falsified.
  • If we do a new experiment ej, meaning that ej moves from E’ to E, and the experiment is successful, then P(T|E) increases by a factor of 1/P(ej).  Thus, if P(ej) was very small, P(T|E) will increase by a significant amount.  This, I believe, is the reason why Karl Popper recommended that we look for theories that, in effect, have a low P(T)20, because the probability of the theory being correct will be increased a great deal by successful experimental results.
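
Here is the promised numerical sketch of equation (3). It is an idealization of my own: the theory is reduced to just five predictions, each assigned an assumed a priori probability of 0.5 and treated as independent, and the function name theory_probability is hypothetical. It reproduces the behavior described in the list above: with nothing tested, P(T|E) is just P(E’); each successful experiment multiplies P(T|E) by 1/P(ei); and once every prediction is confirmed, P(T|E) = 1.

    # A minimal sketch of equation (3): P(T|E) = P(E') * prod_i P(e_i|T).

    def theory_probability(priors, tested):
        # priors: dict e -> a priori P(e); tested: dict e -> P(e|T) from experiment.
        # Untested predictions contribute through P(E') = product of their priors.
        p_untested = 1.0
        for e, p in priors.items():
            if e not in tested:
                p_untested *= p
        p_evidence = 1.0
        for p_given_T in tested.values():
            p_evidence *= p_given_T
        return p_untested * p_evidence

    priors = {f"e{i}": 0.5 for i in range(1, 6)}   # five predictions, assumed priors of 0.5

    print(theory_probability(priors, {}))          # 0.03125: nothing tested, P(T|E) = P(E')

    # Each successful experiment (P(e_i|T) = 1) multiplies P(T|E) by 1/P(e_i) = 2,
    # rising to 1.0 once every prediction has been confirmed (E = T).
    for i in range(1, 6):
        tested = {f"e{j}": 1.0 for j in range(1, i + 1)}
        print(i, theory_probability(priors, tested))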

Problems with Bayes’ Equation
It is easy to see that the form of Bayes’ equation given in equation (3) satisfies the definition of justification provided above; i.e., P(T|E) will asymptotically approach unity. For as long as we continue to test a theory and come up with positive experimental results, P(T|E) will be increased by 1/P(ei) for each result ei, and thus we can get P(T|E) as close to unity as we want, as long as there are no practical reasons why experimentation cannot continue. Of course, the difficulty of running experiments in some parts of the domain of some theories (e.g., at higher and higher energies in particle physics) prevents us from testing all of T, and thus from getting P(T|E) arbitrarily close to 1. However, in principle the possibility is there.

Despite the fact that this equation satisfies our requirement for a method of justification of empirical theories, in addition to the general problems with probability described above, it suffers from a number of significant problems:

  1. Typically, T will be infinite.  Because it is impossible to perform an infinite number of experiments, for an infinite T, the fraction of the members of T that have been tested will always be zero.
  2. In reality, we can never be 100% certain of the outcome of an observation or experiment.  However, it is unclear exactly what value to use for P(ei|T) given any actual set of experimental results.
  3. In fact, in most cases, P(ei|T) will actually be < 1.  If T is infinite, then on average P(ei|T) for any ei∈T must approach 1; otherwise P(T|E) would be 0.  However, if experiment shows that P(ei|T) < P(ei), i.e., that the probability that ei is true based on experiment is less than the a priori probability that it is true, then each experiment will actually lower P(T|E), and for a large number of experiments this value will be driven towards 0, despite the fact that they all gave a positive result.
  4. In order to calculate P(T|E), we must know P(E’).  However, there does not seem to be any objective way to determine this value, so the value we use here is necessarily subjective.  A consensus process can be used to arrive at a value, but in the end, it can be nothing but a best guess.

One way to mitigate problem (#1) is to select for experimentation a random set of members of T from a continuum of such members. For example, if the theory is expressed as a function, then the domain of the theory will be the domain of the function, which will be infinite. However, we can select points on the continuum at random, and assume with some confidence that the results for the values interpolated from these points will be similar. As the number of representative points increases, our confidence in the interpolated points will grow. Of course, there is no guarantee that this is the case, since for any set of data points there are an infinite number of well-behaved functions that contain those points. This is known as the problem of underdetermination of theories. However, in practice this tends not to be a real problem21. The implied results for the interpolated points could then be used as if they were actual experimental data points. As long as the domain of T is of finite extent, we can then eliminate the problem that this domain contains an infinite number of points, because the tested portion of T will be a finite fraction of T. For example, for a single-dimensional domain, this will be the fraction of the segment of the domain that lies inside the most extreme points that were tested. Despite the fact that the domain is a continuum and thus infinite, this fraction will be clearly defined. Note, of course, as mentioned above, that this policy could not be extended to extrapolated values.
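
A minimal sketch of this mitigation, under simplifying assumptions of my own: a one-dimensional domain, a hypothetical "law" standing in for the theory, and plain linear interpolation between randomly chosen test points.

    import math
    import random

    random.seed(0)

    def true_law(x):
        # a hypothetical stand-in for the real-world relationship the theory describes
        return math.sin(x)

    # "Experiments": the theory is checked at randomly chosen points in the domain [0, 3].
    tested_x = sorted(random.uniform(0.0, 3.0) for _ in range(20))
    tested_y = [true_law(x) for x in tested_x]

    def interpolate(x):
        # linear interpolation between the two nearest tested points; no extrapolation
        for (x0, y0), (x1, y1) in zip(zip(tested_x, tested_y),
                                      zip(tested_x[1:], tested_y[1:])):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("x lies outside the tested portion of the domain")

    # At an untested point inside the tested range, the interpolated value is close
    # to the true one, provided the tested points are sufficiently close together.
    x = 1.5
    print(abs(interpolate(x) - true_law(x)))

    # The tested fraction of the domain, in the sense described above.
    print((tested_x[-1] - tested_x[0]) / 3.0)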

Problem (#2) is more significant.  For any set of experimental data, the confidence level of the results would be a natural candidate for P(ei|T).  However, this value will not be uniquely determined, as it is a function of the margin of error selected.  For example, for a standard deviation of σ, a margin of error of 1.96*σ corresponds to a confidence level of 95%.  This value can be increased to 99% by accepting a margin of error of 2.575*σ.
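
The two multipliers quoted above are simply the two-sided critical values of the standard normal distribution. As a quick check (a sketch of my own, assuming SciPy is available), they can be recovered from norm.ppf:

    from scipy.stats import norm

    # The margin of error is z * sigma, where the critical value z depends on the
    # chosen confidence level; the same data therefore yields different candidate
    # values for P(e_i|T) depending on that choice.
    for confidence in (0.95, 0.99):
        z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value
        print(f"{confidence:.0%} confidence -> margin of error = {z:.3f} * sigma")
    # ~1.960 * sigma at 95% and ~2.576 * sigma at 99%, matching the figures above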

Problem (#3) is even more severe. Even if P(ei|T) = 99%, for very large or infinite T it will be less than P(ei), and thus counterproductive. We could solve both (#2) and (#3) by adopting the policy of setting P(ei|T) = 1 for an acceptable margin of error and confidence level. However, these thresholds would necessarily be arbitrary, and it is unclear how to justify this policy logically. In addition, it appears that any quantitative experiment can always be falsified. This is because, for any confidence level, the margin of error will be proportional to 1/√n, where n is the number of experimental data points. Let yp be the predicted value for the outcome, and let ym be the mean value from the experimental data. If |yp – ym| > 0, then the margin of error (for any confidence level) can always be made smaller than this value by obtaining a sufficiently large number of data points n. In other words, for any desired level of confidence, we can demonstrate that yp ≠ ym, and thus obtain a value of P(ei|T) = 0.
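
A small sketch of that argument, assuming normally distributed measurements with a known standard deviation and an arbitrarily chosen (hypothetical) discrepancy of 0.001 between prediction and sample mean:

    import math

    # The margin of error shrinks like 1/sqrt(n), so any fixed non-zero
    # discrepancy |y_p - y_m| eventually exceeds it.
    z_99 = 2.576          # critical value for a 99% confidence level
    sigma = 1.0           # assumed standard deviation of a single measurement
    discrepancy = 0.001   # assumed fixed gap between prediction y_p and sample mean y_m

    def margin_of_error(n):
        return z_99 * sigma / math.sqrt(n)

    for n in (10**2, 10**4, 10**6, 10**8):
        falsified = margin_of_error(n) < discrepancy
        print(f"n = {n:>9}   margin = {margin_of_error(n):.2e}   prediction rejected: {falsified}")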

One possible solution to these latter two problems would be to use the accuracy of the measurement as the measure of probability that the result is correct; i.e. P(ei|T) = 1 – |yp-ym|/yp.  For example, the probability that the value predicted by quantum theory for the muon g-factor in physics is correct would be 1 – | 2.0023318416 – 2.0023318361| / 2.0023318361 = 99.9999997% .  However, there does not seem to be any logical reason to interpret a measure of accuracy as a measure of probability.
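
Reproducing that arithmetic directly (which of the two quoted figures is taken as the prediction and which as the measurement does not affect the displayed result):

    # Accuracy-as-probability for the muon g-factor example, per the formula above.
    y_p = 2.0023318416   # one of the two figures quoted above, taken here as the prediction
    y_m = 2.0023318361   # the other figure, taken here as the measurement

    accuracy = 1 - abs(y_p - y_m) / y_p
    print(f"{accuracy:.7%}")   # 99.9999997%, as quoted above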

I see no solution to problem (#4), other than through a subjective consensus of the scientific community, and even this would most likely not be correct. Indeed, for two theories, even if the experimental evidence for one was much greater than for the other, the one with the lesser corroboration could be made to have a greater probability of success just by selecting a sufficiently higher P(E’). This bias could be mitigated by adhering to some policy for setting P(E’) for similar theories; certainly two theories for which E is identical should have the same value of P(E’). Again, it is unclear how such a policy could avoid being arbitrary and subjective.

Descartes’ Demon and the Matrix
It is well known that our senses are not entirely reliable22. An amazing example of how context shapes our perception can be found in an article by R. Beau Lotto, Dale Purves, and Surajit Nundy in the May-June 2002 edition of American Scientist23. In one image from this article, squares on two Rubik’s-like cubes clearly appear blue on one cube and yellow on the other. However, when viewed in isolation (outside the context of the scene containing the cube), it can be seen that they are both actually grey! It may seem that if observations are unreliable, then the scientific method itself must be considered unreliable, as this would imply that we need to use a relatively low value of P(ei|T). This problem is circumvented by the fact that failures of our senses usually are not systematic, and that repetition of the same experiment, especially by independent experimenters, will tend to converge on a reliable answer, or at least on a distribution of results that can be analyzed statistically.

But what happens if for some reason the lack of reliability of our senses is systematic?  The most extreme case imaginable of this is the scenario of a demon described by Descartes in his Meditations on First Philosophy, and manifested more recently in the motion picture The Matrix.  In these scenarios, we are immersed completely in a “virtual reality”, in which everything that appears to our senses is completely untrue.  It should be noted first that, as Daniel Dennett has pointed out, to create such a virtual reality would be almost impossible; the real world is so complex that the ability to create a convincing simulation would require a device as complex as reality itself24.  However, assuming that a demon or evil robots could create such a convincing reality, we would certainly use the scientific method to conclude that what we observed was reality, and that our observations gave an accurate description of this reality.

Would we be justified in believing that this virtual reality was the reality? I think so, since there is no fundamental difference between the scientific method and the method that we would use to reach this conclusion. Yet the beliefs we were justified in holding would be false. Therefore, we would have to conclude that in this extreme, and extremely unlikely, case the scientific method would be a failure.

Summary & Conclusion
Because empirical sentences cannot be proven analytically, they require some other form of justification.  The scientific method, with probability as its metric for truth, and with Bayes’ equation providing the mechanism for calculating this probability, appears to be the best method that we have for justifying the belief that these types of sentences are true.  I have discussed above the many problems that have been raised regarding this method (and the scientific method in general), which are summarized as follows:

  • Lack of agreement on justification: While there is certainly no consensus that justification of empirical beliefs is possible, the fact that the scientific method, despite its many problems, has been so effective at providing us with knowledge about the world seems to demonstrate that such a method of justification is indeed possible.
  • A metric for degree of justification: Probability seems to be the best metric for degree of truth of empirical statements.  Despite its many problems, it appears to be the only viable metric for the task.
  • The problem of induction: Hume’s problem of induction is a very serious one that does not seem to have any rigorous solution.  It appears that we must accept on faith the fact that the laws of nature will not change in the future.
  • Incommensurability: Kuhn claimed that theories could not be compared or applied to similar domains because of incommensurability of primitive terms.  As long as these terms can be interpreted in the real world, this objection seems unfounded.
  • Inability to objectively measure probability: The need to subjectively measure the probability that a theory is true in the untested portion of its domain remains a challenge.
  • Necessary falsifiability of measurement: This is also a philosophical challenge, because despite the fact that it is commonplace, there does not seem to be a straightforward way to translate experimental results into probabilities.
  • Underdetermination: There are an infinite number of theories that can explain any finite set of experimental data.  Again, however, in practice, this does not seem to be a serious limitation.
  • Fallibility of the senses: Our senses are fallible, but repeated experimentation can eliminate intermittent sensory failures.
  • Virtual reality: To create the Matrix in reality would be an almost impossibly difficult task.  If it could be done, however, residents of the Matrix would indeed be justified in believing that they were experiencing reality.

The fact that there are so many problems with the foundations of the scientific method, some more serious than others, makes its obvious practical success all the more surprising.

End Notes

  1. For a defense of these assumptions, see my essay on Naturalism & Theology.
  2. For a more detailed discussion, see my essay on Universals.
  3. There is a minor difference between sets and patterns, or attributes as they are often called.  Sets are said to be extensional, in that any two sets that contain the same elements are equal.  In contrast, patterns or attributes are intensional, meaning that even if the exact same objects have two different attributes, the attributes are different.  An example is the pair of attributes “mammal” and “having hair”.  All mammals have hair, but these two attributes are different.
  4. For more details see my essay on Truth.
  5. For a good introduction to logic see Enderton, Herbert B., A Mathematical Introduction to Logic, Academic Press, San Diego, CA, 1972.
  6. For a good introduction to set theory, see Suppes, Patrick, Axiomatic Set Theory, Dover Books, New York, 1972.
  7. For a good introduction to modal logic, see Hughes, G.E. and Cresswell, M.J, A New Introduction to Modal Logic, Routledge Books, London, 1996.
  8. The term “meaning” has many different interpretations in philosophy.  For a discussion of them see Speaks, Jeff, “Theories of Meaning”, The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2011/entries/meaning/>.  The definition that I use here is the reference theory of meaning.  For a good discussion of this theory, see Muehlhauser, Luke, “Intro to Language: The Referential Theory of Meaning”, Common Sense Atheism, May 2, 2010, URL=<http://commonsenseatheism.com/?p=7763>.
  9. The fact that a theory must be closed under logical implication means that, for any set of sentences T, T is a theory if and only if, for any sentence s that can be derived logically from the sentences in T, s is a member of T.
  10. See for example Jeffreys, Harold, “Probability and Scientific Method”, Proceedings of the Royal Society of London, Vol. 146, No. 856 (Aug. 1, 1934).
  11. For a more in-depth overview of the leading interpretations of probability and their problems, see Hájek, Alan, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2012/entries/probability-interpret/>.  In particular, Hájek discusses what are typically given as the three criteria that must be satisfied by any interpretation of probability, and how each proposed definition fares against these criteria.
  12. For a discussion of possible worlds and their relation to theories in mathematical logic, the reader is referred once again to my essay on Truth, in particular the section on Analytic vs. Synthetic Truth.
  13. To my knowledge, this objection was first raised in Salmon, Wesley C., The Foundations of Scientific Inference, University of Pittsburgh Press, Pittsburgh, PA (1966), pg. 72, although it is also discussed in Hájek, op. cit., section 3.2.
  14. See, for example, Salmon, op. cit., pg.  117.
  15. For an in-depth discussion of this problem, see Vickers, John, “The Problem of Induction”, The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2013/entries/induction-problem/>.
  16. The problem of induction has been a major focus of philosophical inquiry in the centuries since Hume first presented it.  Philosophers have concluded that, because of its circularity, induction is a far inferior method for justifying knowledge than is deduction.  However, it turns out that deduction is circular, too: We must assume that deduction is valid in order to prove that it is indeed valid.  To the best of my knowledge, this fact was first pointed out in Haack, Susan, “The Justification of Deduction”, Mind, New Series, Vol. 85, No. 337 (Jan. 1976), pp. 112-119.  Available online at http://www.as.miami.edu/phi/haack/Justification%20of%20Deduction%20reprint%202010.pdf.
  17. Salmon, op. cit., pg. 5.
  18. Strictly speaking, this is only true of deterministic processes.  For an indeterministic process such as those in quantum theory, the results would necessarily follow a particular probability distribution.  For more on deterministic and causal processes, see my essays on Quantum Theory and Free Will.
  19. Kuhn, Thomas, The Structure of Scientific Revolutions, University of Chicago Press, Chicago, IL (1962).
  20. Popper, Karl R., The Logic of Scientific Discovery, Routledge, London, English Reprint (1992).
  21. I am not sure if the potential error from interpolation can be quantified, and if so, if this quantification of error has ever been formally specified.
  22. A famous rather modern discussion of this problem can be found in the first chapter, Appearance and Reality, of Bertrand Russell’s classic The Problems of Philosophy (Oxford University Press, Oxford, GB, 1959).
  23. Lotto, R. B., Purves, D., Nundy S., “Why We See What We Do”, American Scientist, Vol. 90, No. 3 (May-June 2002); available online at https://www.americanscientist.org/issues/pub/why-we-see-what-we-do/1.
  24. Dennett, Daniel C., Consciousness Explained, Back Bay Books, Boston, MA (1991), pg. 3 (The Brain in the Vat).

November, 2013
