What makes something a fact?
The construction of scientific meaning from a theological perspective.
Common sense and engineering make facts bedrock: as irrefutable as our very selves. As far as common sense goes, we think with approbation of Samuel Johnson’s refutation of Bishop Berkeley’s idealism, reported by Boswell in his Life,
“After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — “I refute it thus.”[1]
The stone and Johnson’s foot become as much facts for us, as we have to assume they were to Boswell and Johnson, through a combination of our hearing Boswell’s words and our imagining our foot rebounding against a stone we ourselves have kicked.
The importance of the development of facts from guesstimates for engineering and other fields rather more consequential than philosophical argument can hardly be overestimated. For good reason, “facts not opinions[2]” is the motto over the door of 99 Southwark Street, London, where David Kirkaldy had his workshop in the latter half of the nineteenth century. Kirkaldy had designed and built a machine to test the strength of construction materials for buildings, bridges and the like, and it was his work that led to the development of standards for such materials; his workshop, with his giant machine intact, is now the Kirkaldy Testing Museum. Facts, whether historical, scientific or personal, are the base layer, the stuff out of which the reality we construct and experience is formed. Facts are what we interpret or manipulate. Facts are the stone against which our feet rebound when kicked. Facts are what ensure our bridges support the traffic they bear, day by day and decade by decade. As Shapin and Schaffer put it in their book about Robert Boyle and his development of experimental science,
“In the conventions of the intellectual world we now inhabit there is no item of knowledge so solid as a matter of fact. We may revise our ways of making sense of matters of fact and we may adjust their place in our overall maps of knowledge. Our theories, hypotheses, and our metaphysical systems may be jettisoned, but matters of fact stand undeniable and permanent. We do, to be sure, reject particular matters of fact, but the manner of our doing so adds solidity to the category of the fact. A discarded theory remains a theory: there are “good” theories and “bad” theories—theories currently regarded as true by everyone and theories that no one any longer believes to be true. However, when we reject a matter of fact, we take away its entitlement to the designation: it never was a matter of fact at all.”[3]
All this is true, but it overlooks the multilayered and constructed nature of human reality and human discourse. Tonight I am going to explore an aspect of the genesis of the idea of “a fact” and then look at how facts are constructed and function in one specific area of contemporary science. Those of you who have seen the word “theological” in the title and feel a little uneasy can rest assured that I have no intention of trying to pull the rational scientific rug from under our twenty-first century feet. Dietrich Bonhoeffer wrote from prison, “We are to find God in what we know, not in what we don’t know.”[4] The greatest theologians of the middle ages, such as Thomas Aquinas, were the significant thinkers that they were because they embraced new learning, wherever it came from, and rethought their faith and worldview in fresh terms. As a contemporary theologian who works in the area of science and religion, Robert John Russell reminds theologians, “affirming that “God so loved the world” means accepting the challenge that this world in its cultural pluralism and its empirical complexity, is in fact the world” God so loved/s.[5] As a preacher, I think it is my duty to think honestly, rationally and in a scientifically informed way about the world which Christians affirm as creation.
So facts are the bedrock, things about which we can be certain; indeed, that was the point of them. The idea of the “fact” as we know it today was manufactured in the later seventeenth century, at a time when the question of how it is possible to be certain about things, whether in theology or in science, was hotly contested. The sixteenth-century renaissance of ancient skepticism through the translation of Greek works into Latin, and the disputes about the locus of certainty of religious knowledge brought on by the Reformation, together meant that how it was that human beings knew what they knew was an important topic for debate. With regard to the development of the scientific experimental method that has proved so fruitful, an important stage of the debate about the human possibility of certainty was Robert Boyle’s construction of a guarantee process for the production of “matters of fact”. According to the OED, this is the definition of the word “fact” in its modern sense,
“Something that has really occurred or is actually the case; something certainly known to be of this character; hence, a particular truth known by actual observation or authentic testimony, as opposed to what is merely inferred, or to a conjecture or fiction; a datum of experience, as distinguished from the conclusions that may be based upon it.”
Its first occurrence in written English in this sense was in 1632, just after Boyle’s birth in 1627, in John Hayward’s translation of G. F. Biondi’s work Eromena, or Love and Revenge. Before that the word “fact” was used to mean a deed done, and sometimes, more specifically, a criminal deed done. Robert Boyle was born into a climate of argument about what knowledge the human mind was capable of. Alongside the debates as to where certainty could be found, scholars also turned their attention to the new “Doctrine of Chances”, probability theory. How much trust could be placed on findings in areas where certainty was not possible? The Doctrine of Chances began in earnest when a French nobleman asked Pascal about two problems in dice games. Pascal corresponded with Fermat, which stimulated Huygens to publish his own thoughts on the matter. As the eighteenth century dawned, people like Bernoulli and de Moivre attempted to extrapolate work done on the doctrine of chances in games of dice to “civil, moral and economic matters”[6]. They were not aiming to attain infallible certainty but something lesser they called “moral certainty”. Boyle too wanted to lay aside the “sterile” disputes of the “schoolmen” that aimed at certainty and derive a methodology that enabled practical knowledge of the world around us. He wanted to set aside the theoretical debates about what could or could not be known in theory and produce a method that established matters that could be known, “matters of fact”. He writes,
“I have often found such difficulties in searching into the cause and manner of things, and I am so sensible of my own disability to surmount these difficulties, that I dare speak confidently and positively of very few things, except matters of fact.”[7]
In Boyle’s terms “matters of fact” were produced by experiment, in front of witnesses, and disseminated in writing within a wider critical circle. Debatable theories and hypotheses about the putative underlying mechanisms were eschewed in favour of descriptions of experimentally produced “matters of fact” which, by dint of being witnessed by a community and disseminated in writing, were as certain as it was possible to be. Matters of fact were produced by human experiment and intervention but were conceived as “morally certain” because the human intervention and experimentation were envisaged as discovering something rather than inventing it.[8] In his writing, Boyle was careful to separate the narration of the experiments that demonstrate the “matter of fact”, which has “moral certainty”, from his explanations of them, which could only be a matter of opinion. While this divide was problematic even at the time, it points to the importance for the pursuit of science of working towards and with what can be known, albeit provisionally. We can see in Boyle’s development of the experimental matter of fact the basis of the scientific method that has grown and developed and is still in use today, and perhaps even a hint of the Kantian distinction between the phenomena, things as we as human beings know them, and the noumena, things as they are in themselves, of which we can have no knowledge.
So matters of fact are those things discovered and reproducible by human interaction with the environment. But they are not quite as bald as that might suggest, because every human scientific intervention in the environment, every scientific experiment, exists already within a mental framework, a means by which that particular part of the world can be understood, taken account of or measured. For example, the seemingly very simple fact of the boiling point of water presupposes a sophisticated framework: ideas of pressure, knowledge of what happens to elements when they are heated (specifically the properties of mercury when subjected to heat), the ability to work with glass in a way that could be reliably reproduced, the mathematical development of a tool of comparison, and today all the digitized apparatus by which the old glass thermometers have been overtaken. It is easy to hit one’s own foot against a stone, but scientific matters of fact, even simple ones, require a sophisticated explanatory framework. Boyle devised the procedure for the discovery and dissemination, and hence production, of scientific matters of fact as an attempt to get past what he deemed the fruitless speculation about causes, but the conceiving of any human intervention in the wider environment, including an experiment, requires a prior situation in a discourse that makes sense of it. An experiment to demonstrate or discover something already has the terms it needs in which to think and talk about it: as it begins, it already assumes the beginnings of an explanation.
As a modern, rational, twenty-first-century Christian thinker, I am not in the least interested in trying to derive scientific explanations from the Jewish or Christian scriptures or traditions. I am interested in science and in scientific explanations because, conversely, I wish to be able to express what is core in my tradition in a way that is scientifically cogent today. Specifically, I am interested in scientific takes on the nature of reality. Christian and Jewish thought speaks of reality, the world, meaning all that there is (which, by definition, excludes God!), as creation, a concept that has been rigorously explored metaphysically by the tradition, and indeed by the Islamic tradition also, through the centuries. As a non-scientist past O-level, I wanted to understand the science that underlies scientific debates concerning understandings of the nature of reality, so as to expose theological discourse to those reflections.
Scientific discussions of the nature of reality are found in the science that deals with the extremes of our experience: cosmology and quantum physics. It was to quantum physics that I was drawn, because of claims by some very interesting physicists that a series of specific experiments tell us something very profound about the nature of reality. Some have gone so far as to say that a new area of philosophy has been established, experimental metaphysics[9]. The experiments in question are those done in an attempt to realize in practice an experiment originally conceived as a thought experiment by three scientists in 1935: Einstein and two researchers working with him, Boris Podolsky and Nathan Rosen. In 1957 two physicists, David Bohm and Yakir Aharonov, recast Einstein, Podolsky and Rosen’s thought experiment into one more easily replicated in practice. Then in 1964 a scientist called John Bell, on sabbatical from the now famous CERN, wrote a paper in which he refined the details of the proposed experiment still further and derived a specific sort of equation, called an inequality, which would enable the interpretation of the results of any actual experiment designed to replicate the thought experiment. Some people designate the experiments which were eventually done, and which are still being perfected, to replicate the thought experiment “experimental metaphysics”, because the experiments, first framed as a thought experiment, are designed to test certain philosophical assumptions about how things exist in reality.
The story starts with the discussions that the scientists at the forefront of the development of quantum physics had amongst themselves about what quantum mechanics, the physics of the very small, actually meant. It was clear that the theory could predict experimental results extremely accurately, but understanding what those results, and the theory which accurately predicted them, implied about the nature of the world was another matter. The attitude of some physicists working in this area to these problems can be summed up in a phrase of the physicist David Mermin, “shut up and calculate”[10]. If the theory works, and it does, perfectly, then what is the point of getting upset about what may be deemed philosophical questions about what wider implications the success of the theory does or does not have? Might not Boyle have counselled just getting on and doing the experiments? The problem was, as Einstein saw, that any reflection on what quantum theory might mean quickly led to a contradiction with the normal common-sense rules that make science possible. As far as Einstein was concerned, things in the quantum world, just like things in our everyday world, have a stable set of properties that exist before we do any experiments on them, and what we do to things in the quantum world in one place cannot instantaneously affect quantum things in another place. But reflection on the meaning of experiments done in the quantum domain quickly leads to contradictions with these common-sense notions of cause and effect, with notions of spacetime, and with the idea that things in the quantum domain have a stable set of properties; Einstein felt that all of these problems not only are confusing but undermine the very idea of the scientific enterprise that has produced them in the first place.
Einstein is famous for his theories of special and general relativity, but he was also at the forefront of the development of quantum theory. Quite soon, however, he expressed his dissatisfaction with the majority understanding of what the implications of the theory were. His 1935 paper, written with two young researchers[11], was designed to show the absurd consequences of the dominant understanding of quantum theory, consequences that Schrödinger himself drew attention to in his famous cat paradox. The intention of the paper, usually known by the initials of its writers as the EPR paper, was to expose quantum theory’s flaws as the authors saw them. A property or set of properties in the quantum domain is described by an equation known as the wave function. The orthodox understanding of quantum theory became known as the Copenhagen interpretation because Niels Bohr, the major interpreter of early quantum theory, lived and worked there. The adherents of the Copenhagen interpretation hold that the description of a quantum system given by the wave function is a complete description; it says all that can be said. Einstein and his adherents were not convinced. In the EPR paper they draw attention to two facts they consider strange, which they interpret as meaning either that the wave function does not supply a complete description of all the quantum system’s properties or that not all the system’s potential properties exist in practice unless they are measured. Einstein and his co-authors thought the idea that quantum things might not have properties until those properties were measured was ridiculous, so their conclusion was that quantum theory, although brilliant, was incomplete. The EPR paper gives general descriptions of the situations in which these paradoxes arise; Bohm and Aharonov worked out a specific experiment in which this could be explored, which John Bell further refined.
To follow Bell’s version of Bohm and Aharonov’s version of Einstein-Podolsky-Rosen’s thought experiment, we need a little scene setting. But first of all, note that long chain: Bell’s version of Bohm and Aharonov’s version of Einstein-Podolsky-Rosen’s thought experiment. And this is just the thought experiment. Much more work has to be done to get to the stage of doing an actual experiment! Robert Boyle envisaged a scientific method by which witnessed and disseminated experiments produced “matters of fact” of which one could be morally certain. He was trying to make the production of scientific matters of fact clear. Establishing matters of fact scientifically is a rather more complex procedure today.
So to follow Bell’s version of this experiment, we need to grasp some terminology. Firstly, “spin”. “Spin” is an attribute of certain quantum particles that manifests along three axes, called x, y and z, and can be in one of two directions, up or down. Some sorts of quantum particles have spins denoted in whole numbers and other sorts in half numbers. So if you measure a particle’s spin along any of the three directions, it will give you an up or a down, usually denoted as a plus or a minus, and either a whole-number value or a half-number value. A strange attribute of spin, according to orthodox quantum mechanics, is that if the spin of a particle is definite along one axis, it isn’t along either of the other two. This was something the EPR experiment was designed to show was ridiculous, and Bell’s work was designed to test it.
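For anyone who wants to see the standard notation behind this claim (notation the talk itself does not use), the three spin components of a spin-half particle are represented by operators that do not commute; this non-commutation is the textbook way of expressing “definite along one axis, indefinite along the other two”.

```latex
% Spin components of a spin-half particle, written in terms of the Pauli matrices.
% Because the three operators do not commute, no state assigns definite values
% to more than one component at a time.
\[
  \hat{S}_x = \frac{\hbar}{2}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\qquad
  \hat{S}_y = \frac{\hbar}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix},\qquad
  \hat{S}_z = \frac{\hbar}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix},
\]
\[
  [\hat{S}_x,\hat{S}_y] = i\hbar\,\hat{S}_z,\qquad
  [\hat{S}_y,\hat{S}_z] = i\hbar\,\hat{S}_x,\qquad
  [\hat{S}_z,\hat{S}_x] = i\hbar\,\hat{S}_y .
\]
```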
The second bit of terminology we need is “the singlet state”: if you prepare two particles, each of which has a total spin value of a half, these two particles can be prepared together in what is called a singlet state, which means that the spin values for each direction are linked with one another and the total spin value of the pair together is 0 in each of the three directions. So if particles A and B are in the singlet state, then if I measure the spin of particle A in the z direction and find it to be +½, I know that if I measure the spin of particle B in the z direction it will be -½, because the total spin has to be 0 and each direction, x, y and z, is correlated.
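As a further illustration in standard notation (again, not part of the talk itself), the singlet state of two spin-half particles A and B is usually written as below; because the expression keeps the same form whichever axis is chosen as the measurement basis, the perfect anti-correlation described above holds for x, y and z alike.

```latex
% The singlet state of particles A and B, written in the z-basis.
% Measuring either particle along any axis gives +1/2 or -1/2 at random,
% but the partner particle always gives the opposite value along that axis.
\[
  |\psi\rangle = \frac{1}{\sqrt{2}}
  \Bigl( \bigl|{+\tfrac{1}{2}}\bigr\rangle_A \bigl|{-\tfrac{1}{2}}\bigr\rangle_B
       - \bigl|{-\tfrac{1}{2}}\bigr\rangle_A \bigl|{+\tfrac{1}{2}}\bigr\rangle_B \Bigr).
\]
```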
So Bell’s version of Bohm and Aharonov’s version of the EPR thought experiment is simple: we are to imagine two spin-half particles in the singlet state going off in opposite directions, each towards a detector capable of measuring spin. We already know all the different results that the detectors could measure:
Particle A    Particle B
x +½          x -½
x -½          x +½
y +½          y -½
y -½          y +½
z +½          z -½
z -½          z +½
Bell then states two assumptions that encapsulate EPR’s idea of reality. One he takes from an article by Einstein: “But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of the system S2 is independent of what is done with the system S1, which is spatially separated from the former”[12]. The other he specifies himself: that the orientation of the measuring device which registers Particle A does not affect the orientation of the measuring device which registers Particle B. Given this experiment and these assumptions, he expresses the contradiction to which the EPR paper drew attention: “Since we can predict in advance the result of measuring any chosen component of [particle B’s spin], by previously measuring the same component of [Particle A’s spin], it follows that the result of any such measurement must actually be predetermined. Since the initial quantum mechanical wave function does not completely determine the result of an individual measurement, this predetermination implies the possibility of a more complete specification of the state.”[13] Bell is referring to that strange attribute of spin, that if a particle’s spin is definite along one axis it cannot be along either of the other two axes. This is not only problematic for a common-sense idea of particles with stable properties but impossible to account for, given that, as this thought experiment shows, a measuring device will always register Particle B’s spin as definite along the same axis as was registered on the measuring device that interacted with Particle A, whichever one is measured.
One of the many problems involved in actually doing this experiment is that quantum experiments destroy the systems on which they are done. Think of the pictures produced recently by the current experiments in the Large Hadron Collider, which smashes subatomic particles into one another at high speed to see what results. The smashed particles cannot be reassembled for the experiment to be done again: the experiment has to be replicated with another set of the same kind of particles, which are assumed to have the same properties as the first set. Thus the mathematical equation that Bell derives, as a way of discriminating between the predictions that quantum theory makes for the result of this particular experiment and the predictions made by a theory that envisages each particle with a full set of definite spin components along all three axes at the same time, has to be framed using probability theory. Yet another piece in the already complex conceptual framework in which this thought experiment is fixed.
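To give a sense of what such a probabilistically framed equation looks like, here is the later CHSH form of a Bell inequality, the form the experimenters discussed below actually tested; it is quoted purely as an illustration, since the talk does not reproduce Bell’s own 1964 derivation. Here E(a,b) stands for the statistical correlation between the two detectors’ results, averaged over many particle pairs, when one detector is set along direction a and the other along direction b.

```latex
% Any theory in which each pair carries its own pre-set properties and the two
% detectors cannot influence one another (a "local hidden-variable" theory)
% must satisfy the bound of 2. Quantum mechanics predicts that, for suitably
% chosen detector settings, the left-hand side reaches 2*sqrt(2), roughly 2.83.
\[
  \bigl|\, E(a,b) - E(a,b') + E(a',b) + E(a',b') \,\bigr| \;\le\; 2 .
\]
```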
According to Boyle, experiments produce “matters of fact”. Bell, on the basis of Bohm and Aharonov’s version of the EPR thought experiment, has produced an experimental procedure that is capable of discriminating between two sets of assumptions about the nature of quantum reality. If quantum things have a stable set of properties before they are measured, and if there are no influences that mean a measurement in one place can affect the reality of a system separated from it, then the results of a long series of experiments carried out according to the procedure Bell outlines (i.e. repeatedly testing random, and therefore sometimes different, spin directions on pairs of particles) will produce statistics that fall within the predicted values of his inequality. Quantum theory, on the other hand, predicts that the statistics will produce values larger than Bell’s predicted value range.
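A minimal numerical sketch may help to make the last two paragraphs concrete. The short Python script below is my own illustration, not part of the historical experiments: it simulates a toy local hidden-variable model, in which each pair of particles carries a shared “instruction” vector fixed at the source, and compares the resulting CHSH statistic with the quantum-mechanical prediction for the singlet state. The toy model sits at Bell’s bound of 2 (up to sampling noise), while quantum theory predicts roughly 2.83, which is the sense in which the statistics “produce values larger than Bell’s predicted value range”.

```python
import numpy as np

rng = np.random.default_rng(0)


def lhv_correlation(a, b, n_pairs=200_000):
    """Toy local hidden-variable model: each particle pair shares a random unit
    vector lam fixed at the source; detector A outputs sign(a . lam) and detector B
    outputs -sign(b . lam), so each outcome is fully determined before measurement."""
    lam = rng.normal(size=(n_pairs, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    outcome_a = np.sign(lam @ a)           # +1 or -1 at detector A
    outcome_b = -np.sign(lam @ b)          # anti-correlated outcome at detector B
    return np.mean(outcome_a * outcome_b)  # statistical correlation E(a, b)


def qm_correlation(a, b):
    """Quantum prediction for the singlet state: E(a, b) = -a . b."""
    return -np.dot(a, b)


def chsh(correlation):
    """CHSH statistic |E(a,b) - E(a,b') + E(a',b) + E(a',b')| for detector settings
    a = 0, a' = 90, b = 45, b' = 135 degrees (all in one plane), which maximise
    the quantum violation for the singlet state."""
    def direction(angle):
        return np.array([np.cos(angle), np.sin(angle), 0.0])
    a, a2 = direction(0.0), direction(np.pi / 2)
    b, b2 = direction(np.pi / 4), direction(3 * np.pi / 4)
    return abs(correlation(a, b) - correlation(a, b2)
               + correlation(a2, b) + correlation(a2, b2))


print("Toy local hidden-variable model:", round(chsh(lhv_correlation), 3))  # about 2.0
print("Quantum-mechanical prediction:  ", round(chsh(qm_correlation), 3))   # about 2.828
```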
Once scientists began to understand the importance of Bell’s paper, the race was on to design an experiment to make the thought experiment actual, to decide between quantum theory and common sense, to produce, in Boyle’s parlance, a “matter of fact”. Attempts to design a real experiment showed up the “experimentally unrealistic[14]” nature of some of Bell’s assumptions, so his equation was remodelled by experimentalists, and in 1972 Stuart J Freedman and John F Clauser published a paper in the prestigious peer-reviewed journal Physical Review Letters, just as Boyle might have wished, outlining the experimental method and results, and declared,
“Furthermore, we observe no evidence for a deviation from the predictions of quantum mechanics, calculated from the measured polarization efficiencies and solid angles,….. We consider the results to be strong evidence against local hidden variable theories.”[15]
In Boyle’s terms, they are declaring “a matter of fact”: QED, things in the quantum world are not embedded in local space with stable properties prior to measurement. Although one set of experiments done in 1972 produced results that disagreed with the predictions of quantum theory and sat inside the limits of Bell’s equation, showing results that were consistent with quantum things being localized in space with stable properties prior to measurement, the vast majority of physicists accept that Bell’s inequality has been comprehensively violated experimentally, by the 1972 experiments and by many other experiments subsequently.
But that has not ended the matter. Firstly, for the experiments that test Bell’s equation to be ones that produce matters of fact, the theory in which they are framed has to be right and the maths has to be right; some theoreticians, whether philosophers, mathematicians or physicists, do not think it is, and thus hold that whatever happens in the experiments, they cannot be interpreted as ruling out, or indeed in, quantum things being localized in space with stable properties prior to measurement. One mathematician drily remarked that the most significant thing about John Bell’s work “was the great stimulation of experimental technologies for working with entangled photons[16]”.
Secondly, in a way that perhaps Boyle would understand, some physicists think that there are loopholes in the experimental procedure which mean that the matter of fact has not yet been finally established. Fifty years on from Bell’s paper, and seventy-nine years on from the EPR paper that started it all, the race is still on to complete the perfect loophole-free experiment. Despite this, few physicists expect a loophole-free experiment to contradict the results obtained so far. So, as far as most physicists are concerned, the statistics produced by runs of this type of experiment exceed the limits imposed by Bell’s equation, known as his inequality, and agree with the prediction of quantum theory. In Boyle’s terms a matter of fact has been comprehensively established. Shouldn’t that close the matter? The predictions of quantum theory have been fulfilled. But what on earth does that mean? What does it imply for the nature of quantum things? What does it imply for the nature of the reality in which we are embedded? A negative matter of fact may have been established; we know what we cannot say about quantum things: they do not behave as they would if they were little things with stable properties embedded in spacetime like us. We know what they are not, but what can be positively said about quantum reality on the basis of the results of experiments done to test Bell’s inequality is a completely different question. What can be said on the basis of these experimental results is a matter about which Boyle would not “dare [to] speak confidently and positively.”[17] In the realm of quantum physics, establishing matters of fact, unlike running successful experiments and designing incredible technology, is an almost impossible thing to do.
So where does that leave a theology that wants to engage with the best of modern science? Philip Clayton, a theologian who works in the theology-science field, writes,
“if there is to be any contemporary theology, it cannot be carried out in ignorance of natural scientific results, even the most difficult ones. The one thing worse than a theology that attempts to draw connections between physics and God is a theology that believed it has no need of any such connections, a theology that believes it can concoct the divine out of metaphysical whole cloth. An intellectually responsible theology has no choice, …. theologians either wrestle with the best physical knowledge possible or condemn themselves to the subjectivity that sola fides has come to represent in modern world.”[18]
This is one thing when scientific results are well understood and their meanings agreed upon. It is not so straightforward when, as in the case of the experiments that set out to test John Bell’s inequality, there is no unanimity within the scientific community as to what they mean. Within the mainstream scientific community, those who do hold that these experiments establish a matter of fact, to use Boyle’s phrase, think the fact is that:
- the world is nonlocal, or
- joint probability distributions do not apply to quantum experiments, or
- the arrow of time runs backwards in the quantum domain, or
- properties in the quantum domain do not exist until they are measured, or ……
It might then seem that the best thing for the theologian to do is to wait until the philosophers of science, the mathematicians, the theoretical and experimental physicists reach a conclusion. But then, as the scientist-theologian Michael Heller points out,
“How should a theologian react … Should he or she abandon any contact with the sciences? Seemingly this happens very often: in fact, however, it is impossible to do. The scientific world image (whatever this means) is present in the cultural climate and intellectual atmosphere of any epoch, and no theology can avoid experiencing this climate and breathing this atmosphere. Even if theology makes an effort to distance itself from the scientific image of the world, it implicitly makes use of at least some of its elements. Inertia of concepts and language involved in these processes could be enormous. If these processes do not remain under control, the image of the world adopted implicitly will almost for sure be outdated and no longer scientific.”[19]
If theological discourse is to remain in touch with the world in which people live today, theologians have to grapple with the best scientific thought; otherwise theology will lose the ability to make any sense at all in the modern world. In areas where there is no consensus in the scientific community, such as what quantum physics tells us about the nature of reality, this grappling is not so much a matter of looking for new certainties to replace the outdated ones as of being content to do theology a little like jazz: to take themes, matters of fact, ideas suggested by scientists about the world, and be prepared to riff, to suggest, to offer a series of differing and tentative ideas. In this area, theology wrestling rationally with science will have to be prepared to be as provisional as science itself, because in some areas of science, matters of fact are not established quite so simply as Boyle envisaged.
[1] James Boswell, The Life of Samuel Johnson, (London: 1791), Vol. 1, pp. 303-4.
[2] Sumit Paul-Choudhury, “Breaking Point,” New Scientist, 10 May 2014, p. 5.
[3] Steven Shapin and Simon Schaffer, Leviathan and the Air Pump: Hobbes, Boyle and the Experimental Life, (Princeton: Princeton University Press, 1985), p. 23.
[4] Dietrich Bonhoeffer, Letters and Papers From Prison (Enlarged Edition), Eberhard Bethge, ed., (London: SCM, 1979 (1972)), p. 311, quoted from Rodney D Holder, “Science and Religion in the Theology of Dietrich Bonhoeffer,” Zygon 44, no. 1 (2009), p. 117.
[5] Robert John Russell, “Quantum Physics in Philosophical and Theological Perspective,” in Physics, Philosophy and Theology, (Vatican City State: Vatican Observatory, 2000 (1988)), p. 368.
[6] Edith Dudley Sylla, “The Emergence of Mathematical Probability From the Perspective of the Leibniz-Jacob Bernoulli Correspondence,” Perspectives on Science 6, no. 1 & 2 (1998), p. 44, from a letter from Jakob Bernoulli to Gottfried Leibniz, 3 October 1703.
[7] Henry G van Leeuwen, The Problem of Certainty in English Thought 1630-1690, (The Hague: Martinus Nijhoff, 1963), p. 103, quoting Robert Boyle, “Certain Physiological Essays and other Tracts,” Works I, p. 307.
[8] Shapin and Schaffer, Leviathan and the Air Pump: Hobbes, Boyle and the Experimental Life, p. 67.
[9] See e.g. Eric Cavalcanti, “Reality, Locality and All That: ‘Experimental Metaphysics’ and the Quantum Foundations” (PhD thesis, University of Queensland, 2008); Geoffrey Hellman, “Interpretations of Probability in Quantum Mechanics: A Case of ‘Experimental Metaphysics’,” in Quantum Reality, Relativistic Causality, and Closing the Epistemic Circle: Essays in Honour of Abner Shimony, ed. Wayne Myrvold and Joy Christian, (Springer, 2009); Ronnie Hermens, “The Problem of Contextuality and the Impossibility of Experimental Metaphysics Thereof,” Studies in History and Philosophy of Modern Physics 42, (2011).
[10] N David Mermin, “Could Feynman Have Said This?,” Physics Today 57, no. 5 (2004).
[11] In a letter to Schrödinger on 19 June 1935, Einstein notes that the paper was in fact written by Podolsky, after discussion, and that the main point was “hidden by the erudition”. See Don Howard, “Einstein on Locality and Separability,” Studies in History and Philosophy of Science Part A 16, (1985), p. 177.
[12] John S Bell, “On the Einstein-Podolsky-Rosen Paradox,” Physics 1, (1964), p. 200, quoting Albert Einstein, “Autobiographical Notes,” in Albert Einstein: Philosopher-Scientist, (New York: MJF Books, 1970 (1949)), p. 85.
[13] Bell, “On the Einstein-Podolsky-Rosen Paradox,” p. 195.
[14] John F Clauser, Michael A Horne, Abner Shimony, and Richard A Holt, “Proposed Experiment to Test Local Hidden-Variable Theories,” Physical Review Letters 23, no. 15 (1969), p. 881.
[15] Stuart J Freedman and John F Clauser, “Experimental Test of Local Hidden-Variable Theories,” Physical Review Letters 28, no. 14 (1972), p. 940.
[16] Andrei Khrennikov, “Bell’s Inequality From the Contextual Probabilistic Viewpoint,” in Philosophy of Quantum Information and Entanglement, ed. Alisa Bokulich and Gregg Jaeger, (Cambridge: CUP, 2010), p. 82.
[17] van Leeuwen, The Problem of Certainty in English Thought 1630-1690, p. 103, quoting Robert Boyle, “Certain Physiological Essays and other Tracts,” Works I, p. 307.
[18] Philip Clayton, “Tracing the Lines: Constraint and Freedom in the Movement From Quantum Physics to Theology,” in Quantum Mechanics: Scientific Perspectives on Divine Action (Volume 5), ed. Robert J Russell, Philip Clayton, Kirk Wegter-McNelly, and John Polkinghorne, (Vatican City: Vatican Observatory Publications, 2001), p. 211.
[19] Michael Heller, Creative Tension: Essays on Science and Religion, (West Conshohocken, Pennsylvania: Templeton Foundation Press, 2003), p. 2.
COMMENT: John Baxter
At a time when technology demonstrates the consistency of the operation of the cosmos as never before, supplying us with ever more amazing innovations in computing, communication, genetics and medicine, the idea that, because the mathematicians who deal with quantum theory are unable to reach agreement as to how objects in the quantum field work, the fundamental presupposition of all science – i.e. that we live in a totally and rigorously consistent cosmos – is to be questioned or set aside, strikes me as ludicrous and a practical impossibility.
Like Johnson’s toe hitting a rock, two plus two always equals four in all possible universes. If we cannot assume that to be true, we can assume nothing is true, which it appears is where the French philosopher Deleuze ends up, with a philosophy which asserts that art, science and philosophy are independent, and which I suspect is thus less meaningful and less useful than the tail of a peacock – i.e. great for display and attracting a mate, but useless as a guide to reality.
Jo says, “I have no intention of trying to pull the rational scientific rug from under our twenty-first century feet”, but I wonder. I wonder because of the inbuilt assumptions which may be present when one approaches any examination of what science is or does “from a theological perspective.” I would like to know what that phrase means. To be specific, does “a theological perspective” assume that, because currently inexplicable events take place at the quantum level, the “theologian” is justified in considering that scientifically inexplicable (magical) events take place in history, such as the physical resurrection and “objective” miracles of Jesus, the inerrant inspiration of Scripture, the Koran, the Rig Veda, the Book of Mormon, etc., etc.? Such thinking quickly leads to belief in fairies at the bottom of our gardens.
Against this it seems to me that scientists and mathematicians (not to mention historians and social scientists), building on the knowledge they acquire through the development of mathematics applied to observation, experiment and theorising, will ALWAYS find reasons to disagree with each other, for our knowledge is partial and the process is open. This means that on many occasions we can say “this is a true fact” and “that cannot happen”, while of course other things leave us unsure as to whether they are “facts” and credible or not. If a “theological perspective” is to have credibility it must surely work WITHIN that framework of assumptions.