Multiple discoveries and the nature of scientific knowledge

Part 3 - Contingency vs inevitability in successful science

“Time is the greatest innovator.” - Francis Bacon

There was a famous debate between Stephen Jay Gould and Simon Conway Morris about the nature of the evolutionary process. Gould thought that evolution is radically contingent: if “the tape of life” were rerun, even a minute change would lead to widely divergent outcomes, and the likelihood that intelligent life would evolve again in such a rerun is vanishingly small. Indeed, according to Gould, “no important and sufficiently specific evolutionary outcomes are robustly replicable”1.

Conway Morris agreed with Gould that historical contingencies are pervasive in evolution, but argued that they matter far less than Gould supposed:

Put simply, contingency is inevitable, but unremarkable. It need not provoke discussion, because it matters not. There are not an unlimited number of ways of doing something. For all its exuberance, the forms of life are restricted and channeled.

In other words, if life forms are solving problems in a global search space, there is only a limited number of good solutions to any given design problem (limited by the laws of physics and chemistry), and by moving toward those optimal solutions, species will eventually converge on similar characteristics irrespective of their starting positions.
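To make this convergence intuition concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Gould or Conway Morris): a greedy local search on an arbitrary, made-up “fitness landscape”. The fitness function, the hill_climb routine, and all parameters are invented purely for illustration; the point is only that hundreds of random starting positions end up on a small set of peaks.

```python
# Toy illustration of convergence in a constrained search space:
# many random starting points, one greedy search rule, few distinct endpoints.
import math
import random

def fitness(x: float) -> float:
    # An arbitrary landscape with a few pronounced peaks, standing in for
    # the limited set of good "design solutions" allowed by physics and chemistry.
    return math.sin(3 * x) + 0.3 * math.cos(7 * x)

def hill_climb(x: float, steps: int = 5000, step_size: float = 0.01) -> float:
    # Greedy local search: accept a small random move only if it improves fitness.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if 0.0 <= candidate <= 2 * math.pi and fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
starts = [random.uniform(0, 2 * math.pi) for _ in range(200)]
peaks = sorted({round(hill_climb(x), 1) for x in starts})
print(peaks)  # far fewer distinct endpoints than starting points
```

Rerunning this “tape” with a different seed changes which climber lands where, but not the short list of destinations, which is roughly Conway Morris’s point.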

This debate has far-reaching implications for complex search processes not unlike the scientific enterprise.


What does the pervasiveness of multiple discoveries tell us, if anything, about the nature of scientific knowledge? It seems intuitive to think that if a result is obtained independently by multiple researchers, it must be in some sense inevitable, almost forcing itself upon us, and it is likely to reflect something true about the world2. In that sense, multiple discoveries are an instantiation of the “inevitability thesis” that lies at the heart of the contingency vs inevitability debate in philosophy of science: “If the result of the scientific investigation of a certain subject matter is correct, then any investigation of the same subject matter, if successful, will yield basically the same result.”

The implication here is that the results of successful science are essentially contained within a final, complete state of scientific knowledge that our collective inquiry is approaching, and in that sense are inevitable. This is a stronger claim than saying that, given enough time and resources, correct results will inevitably be obtained — though the distinction is arguably a fine one, and one could say that the two statements entail each other. The point is ultimately about tying inevitability to a final state of scientific truth (if such a thing exists) rather than to the research process itself.

In any case, on closer inspection, multiple discoveries may not be ironclad evidence for the inevitability side of the debate.

Contingency in the context of scientific research can mean many things. Experimental results, methods, theories, the instruments and devices used to make measurements, the identities and boundaries of whole disciplines, scientific frameworks and paradigms — all of these are products of historical contingencies, of choices made, collectively agreed upon, and built upon by the scientific community3.

The existence and abundance of multiple discoveries can be accounted for just as easily on this view as on the inevitability thesis. Since scientists researching the same question in a given domain have been exposed to the same background knowledge (Thomas Kuhn would say, the same paradigm), they operate in the same branch of the possible theory space, and it is no wonder that they arrive at the same results. This does not rule out the existence of parallel, unexplored branches that might be just as successful at explaining and predicting the behavior of the systems under study. There could be physics without quarks and biology without genes — hard to imagine for those of us who have bought into the current branch, but not unimaginable.

Researchers may work independently of each other, but they are not really independent of the state of contemporary scientific knowledge in their field. This background knowledge includes all accessible knowledge that a scientist implicitly or explicitly relies on and eventually uses to interpret a new finding. When a new result (or, for that matter, a multiple discovery) is added to this repository of background knowledge, that in itself says nothing conclusive about whether the result should be interpreted in realist or antirealist terms. To an antirealist, the mere acceptance of new knowledge and its coherence with the existing body of knowledge does not mean that it reflects a truth about the world. So it appears that multiple discoveries are neutral with respect to both the contingency/inevitability and the realist/antirealist philosophical divides.

With that said, science education and practice are firmly, if implicitly, based on realist and inevitabilist views of scientific knowledge. Contingency and antirealism seem to be relegated to philosophy of science departments and are rarely discussed by scientists themselves. All the more reason, I think, to dive deeper into them, and that is what the rest of this post is about.

The Self-Vindication of Laboratory Sciences

“A rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule that we are unwilling to amend.” — Nelson Goodman

One of the central figures in the contingency vs inevitability debate is Ian Hacking. The title of this section links to his paper, which examines how the components of the research process — theories, measurement instruments and practices, and data-processing practices — adjust to each other over time, resulting in the stabilization of a science. This stable, internally consistent structure of the laboratory sciences may be quite independent of the external world it is supposed to describe.

Despite our recent enthusiasm for refutation and revolution, these sciences lead to an extraordinary amount of rather permanent knowledge, devices, and practice. It has been too little noted of late how much of a science, once in place, stays with us, modified but not refuted, reworked but persistent, seldom acknowledged but taken for granted.

First of all, what does Hacking mean by laboratory sciences? In his definition,

the “prototype” laboratory sciences are those whose claims to truth answer primarily to work done in the laboratory4. They study phenomena that seldom or never occur in a pure state before people have brought them under surveillance. Exaggerating a little, I say that the phenomena under study are created in the laboratory.

To be fair, this definition is contestable, since anything from cell division to animals’ foraging behavior happens, of course, all the time outside the lab. Trees fall in forests even when people aren’t watching. But a fair share of phenomena is indeed created in the laboratory — from the photoelectric effect to lasers to chemical synthesis to tumors grafted into mice for testing anti-cancer drugs.

Disciplines that are mainly observational, taxonomic, or historical — like botany and paleontology — are not counted among the laboratory sciences, even though they may involve methods carried out in a lab setting. The status of astronomy, astrophysics, and cosmology is ambiguous, since they generally do not interfere with the phenomena they study.

So what does self-vindication mean in this context? It means that any test of a theory happens through instruments and devices that evolved in conjunction with it, and in conjunction with practices of data analysis. Conversely, the working of the instruments and the correctness of the analysis are judged by their fit with the theory. Theoretical assumptions may be “built into the apparatus itself”, as Peter Galison put it in his book How Experiments End. This makes the whole ensemble a “closed system” (as Heisenberg called Newtonian mechanics) that is essentially irrefutable.

The theories of the laboratory sciences are not directly compared to “the world”; they persist because they are true to phenomena produced or even created by apparatus in the laboratory and are measured by instruments that we have engineered. This “true to” is not a matter of direct comparison between theory and phenomenon but relies on further theories, namely, theories about how the apparatus works and on a number of quite different kinds of techniques for processing the data that we generate. High-level theories are not “true” at all. This is not some deep insight into truth but a mundane fact familiar since the work of Norman Campbell (1920, 122-58), who noted that fundamental laws of nature do not directly “hook on to” the discernible world at all. What meshes (Kuhn’s word) is at most a network of theories, models, approximations, together with understandings of the workings of our instruments and apparatus. <…> Our preserved theories and the world fit together so snugly less because we have found out how the world is than because we have tailored each to the other5.

But, Hacking warns, “This is not to suggest that they are mental or social constructs.” He argues “[not for] idealism but rather for down-to-earth materialism.” Also, his thesis is about mature science rather than science at the cutting edge:

That can be as unstable as you please, even when it is what Kuhn called normal science. As a matter of fact such research is usually highly regimented. Results are more often expected than surprising. We well understand why: it is not that sort of short-term stability that is puzzling. I am concerned with the cumulative establishment of scientific knowledge. That has been proceeding apace since the scientific revolution.

Hacking notes that his thesis on the self-vindication of laboratory sciences is compatible with both realist and antirealist views, except for the realist assertion that “the ultimate aim or ideal of science is ‘the one true theory about the universe.’” This is because of the incommensurability — in the literal sense of a lack of common measure — of theories enmeshed and co-evolved with different types of instruments and measurement practices, even as each remains “true to” its own data domain.

It used to be said that Newtonian and relativistic theory were incommensurable because the statements of one could not be expressed in the other — meanings changed. Instead I suggest that one is true to one body of measurements given by one class of instruments, while the other is true to another.

Likewise, Heisenberg wrote in 1948:

some theories seem to be susceptible of no improvement… they signify a closed system of knowledge. I believe that Newtonian mechanics cannot be improved at all… with that degree of accuracy with which the phenomena can be described by the Newtonian concepts, the Newtonian laws are also valid.

To this Hacking adds, “It is rather certain measurements of the phenomena, generated by a certain class of what might be called Newtonian instruments, that mesh with Newtonian concepts.” The accuracy of the theory and the accuracy of the instruments complement and depend on each other.

Hacking’s thesis is also compatible with both sides of the contingency vs inevitability debate, albeit at different levels. If mature science is self-stabilizing in the way he describes, its findings may become highly constrained by the interlocking elements of laboratory research, and thus in a sense inevitable.

But this self-stabilization is itself a contingent process: the particular theories, instruments, and practices of a mature science are not predetermined but are the result of a historical process of mutual adjustment and co-evolution. While mature sciences may be stable, they could have stabilized around different research paths given different starting points.

Hacking’s thesis can be summarized as a coherence theory — except not of truth but of the entire scientific enterprise. Scientific stability and progress are brought about by the mutual adjustment of theory, experiment, and data into a coherent whole. Truth in this picture is a way of expressing our commitment to this evolving system rather than a matter of describing an independent external reality.

Ok! But what about the vaccines, drugs, and materials that came out of the lab, have been tested, and clearly work in the real world? Hacking says they work because we have essentially remade much of the world in the image of the lab. The success of the scientific enterprise does not necessarily imply that our theories have captured preexisting truths about the world. Instead, we reshape our environment so that it behaves in accordance with our theories:

We remake little bits of our environment so that they reproduce phenomena first generated in a pure state in the laboratory. The reproduction is seldom perfect. We need more than the topical hypotheses and the modeling of the laboratory apparatus; we need more thinking of the same kind as those. But the application of laboratory science to a part of the world remade into a quasi-laboratory is not problematic, not miraculous, but rather a matter of hard work.

This is a radical statement, to be sure. I can’t accept it wholesale, but it does prompt the question: how much of the world can we actually remake in the image of the lab? And what proportion of lab science, however internally consistent, fails to translate into solving real-life problems because of its disconnect with the world at large?

Hacking’s paper suggests that these limits to the application of scientific knowledge aren’t just a matter of our theories being incomplete or imperfect. Rather, they reflect the fact that the world is not a lab and can never be made to conform perfectly to our lab-based understanding. There will always be some discrepancy between the phenomena we create in the lab and the complex reality of the world.

The success of science comes from our ability to make the world more lab-like; the limitations come from the inherent resistance of reality to this remaking.


Many thanks to Amanuel Sahilu and Joe Walker for their thoughtful feedback on a draft of this piece.

If you have comments on this post, feel free to get in touch on twitter/X or email me at aghayeva.u@gmail.com.


  1. This was illustrated vividly by Ray Bradbury in his short story “A Sound of Thunder”. ↩︎

  2. The latter is a claim of scientific realism which I will return to below. To a first approximation, scientific realism is the view that well-confirmed scientific theories are approximately true; the entities they postulate do exist; and we have good reason to believe their main tenets. ↩︎

  3. Discussed in Science as it could have been, p. 19. ↩︎

  4. where a laboratory is defined as “a space for interfering under controllable and isolable conditions with matter and energy.” ↩︎

  5. Elsewhere he writes: “The most succinct statement of the idea, for purely intellectual operations, is Nelson Goodman’s summary (1983, 64) of how we ‘justify’ both deduction and induction: ‘A rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule that we are unwilling to amend.’” ↩︎