As explained in the introduction of Modal Empiricism, part of the debate on scientific realism concerns the aim of science: whether it is truth or something weaker, such as empirical adequacy. The aim of science should not be confused with the aims of individual scientists, which can vary widely: we are talking about the “rules of the game” of the scientific institution, and in particular, the criteria for accepting and rejecting theories and hypotheses.
The fourth chapter of the book proposes such a criterion by examining what would count as ideal empirical success for a theory in science. The construction of this criterion is based on the account of scientific representation presented in chapter three, and I proceed by first examining ideal empirical success in all contexts at the level of models, then at the level of theories, taken to be families of models. The resulting criterion is modal empirical adequacy, and I argue that it fares better than van Fraassen’s notion of empirical adequacy when it comes to accounting for scientific practice.
From Contextual Accuracy to General Adequacy
In chapter three of the book, I defined model accuracy in context in terms of the actual state or history of a system being among the states or histories “permitted” by an interpreted model. However, empirical success for a general model cannot be restricted to one particular application. Nor should it be restricted to all actual applications of the model: arguably, an ideally successful model should correctly account for the phenomena even “when we are not looking” (this is a feature of van Fraassen’s notion of empirical adequacy as well, which mentions all observable phenomena, actually observed or not). A criterion of ideal success must not depend on “where we look”; otherwise, we could simply “stop looking” and any theory would be adequate.
In this chapter, I introduce the notion of a situation, which is a local state of affairs that could in principle be represented in one way or another, even if it is not. It can be anything: the trajectory of planets over thousands of years, or a mosquito flying for a few seconds, but situations are bounded in space and time. A situation affords several potential contexts, corresponding to potential perspectives on this situation (we could be interested in this or that coarse-grained property of the situation). It seems reasonable to assume that situations are in definite states or have definite histories relative to each of these potential perspectives independently of us, and so it seems to make sense to claim that a model interpreted in terms of one of these potential contexts would be accurate if the actual state of the situation is among the ones permitted by the model. For example, a stone falling on a distant planet could conform to a model that represents it even if no one is actually using this model to represent it.
Of course, we cannot expect an interpreted model to be accurate if its application to a situation is irrelevant. As explained in the previous chapter, an interpreted model is relevant or not depending on the acceptability of its interpretation within the community of scientists. As a first approximation, we can say that a general model is ideally successful, or empirically adequate, if all its relevant interpretations would be accurate, whatever the context and situation in which it would be interpreted.
When I say “would be accurate”, there is an implicit modal component (which is the counterpart of van Fraassen’s “able” in “observable”). This modality seems rather innocuous insofar as this “would be accurate” only depends on the actual state of the situation considered: it does not seem to involve possibilities and constraints of necessity in the world. However, I argue in the book that thinking so is problematic.
The problem is that representational contexts generally involve controlling the target or interacting with it. Experimentation is rarely passive: it often produces particular states or histories for the situation concerned that would not occur without our interventions. As a consequence, the chemical model of a protein can only be interpreted empirically in very specific situations where various manipulations (including breaking down cell membranes, purifying the resulting mixture and using chromatography) are performed.
If the empirical adequacy of a model of proteins only concerned these controlled situations, it would effectively be limited to the situations “when we are looking”: such models would not be about living organisms in general, but only about purified mixtures observed through chromatography. Similarly, models of particles in physics would only concern very peculiar situations in colliders, not the physical world in general. Accounting only for these controlled situations does not seem acceptable as an aim for science: why would we put so much money into constructing these very unnatural situations if what we learned could not be extended beyond them? Of course, a scientific realist does not encounter this difficulty, because she can say that proteins exist in living organisms as well as in those mixtures, but an empiricist who thinks that the aim of science is empirical adequacy cannot afford this solution.
In order to solve this problem, I think that we must accept that empirical adequacy is also about merely possible contexts, that is, alternative ways actual situations would be if we had performed this or that manipulation. In this way, the models of science can be about a much larger range of situations, including all the ones “we are not looking at”, and models of proteins can be about living organisms in general, insofar as, in principle, we could perform various manipulations on them.
Accordingly, we can say that a model is modally empirically adequate (ideally successful) if all its relevant interpretations, in all the alternative ways actual situations could be, would be accurate. In other words, a model is acceptable in science if scientists have good reasons to believe that it could withstand any possible situation to which it would apply, given natural constraints on what is possible or not. I explain in this chapter how this definition can directly account for counterfactual reasoning in science, including explanatory and causal discourse.
From Models to Theories
Now that we have a criterion of ideal success for models, we can turn to theories, taken to be families of models. What does the failure or success of a model imply for the associated theory?
Neptune was discovered following discrepancies between predictions and observations of the trajectory of Uranus in Newtonian mechanics. This shows that the empirical failure of a model (without Neptune) does not always mean that the theory should be rejected: instead, one can propose a new model (posit a new planet), and this can lead to new discoveries. However, the same strategy did not work in the case of Mercury: a whole new theory, general relativity, was required to account for its trajectory. So sometimes, the theory should be rejected.
Shall we say that a theory is ideally acceptable so long as, for any situation in the world, at least one of its models can account for it accurately? I argue in this chapter that this option is too liberal, because with enough ad-hoc hypotheses, any theory would be acceptable under this criterion. Scientists could have posited dark matter in order to account for the trajectory of Mercury. We need more constraints in order to distinguish between ad-hoc and legitimate hypotheses.
We can understand these constraints as a matter of cross-contextual coherence. Ad-hoc hypotheses are problematic because they are only useful in a limited range of contexts and cannot be extended beyond this range. A theory, to be ideally acceptable, should be such that all its models are cross-contextually coherent as well as accurate: it should provide a certain unity. This is consistent with Lakatos’s (1978) idea that ad-hoc hypotheses should lead to new predictions, as well as with coherentist understandings of ad-hocness (Schindler 2018).
Fortunately, the account of representation presented in the previous chapter offers a means of integrating this coherence constraint, because it assumes that scientific models convey norms of relevance: they tell us how this or that system ought to be represented. This includes domain-specific postulates (for example, that Neptune exists), and these norms delimit the domain of application of the model. The lesson is that a form of unity is implied in the acceptability of scientific models: in order to be acceptable, a model that postulates a new planet should be accurate whatever the context of use (not only for predicting the trajectory of Uranus, but also for observing Neptune directly). It must also be coherent with other models, in so far as their domains of application can overlap, and it should be possible to combine it with them into more complex models that are also adequate. This notion of unity can be considered one of the criteria by which models are accepted in science, that is, criteria of relevance.
Accordingly, we can say that a theory is empirically adequate if all its relevant models are adequate in the sense given in the previous section. The relevance of models implies their unity and cross-contextual applicability. According to modal empiricism, the aim of science is to produce adequate theories in this sense. I argue in the book that this is consistent with Kuhn’s (1962) distinction between normal science and revolutionary science, and with the idea that theories provide understanding of the world by means of the unification of various phenomena.
Comparison with van Fraassen’s Account
According to van Fraassen (1980), a theory is empirically adequate if it has at least one model such that all observable phenomena of the universe, past, present and future, fit inside it. This notion of adequacy is distinct from the one I propose in three respects.
First, van Fraassen’s definition involves a unique model of the whole universe, while my definition involves all the relevant models of the theory. I argue in this chapter that my definition is more closely connected to scientific practice, since no one has ever seen a model of the whole universe that would include all observable phenomena: theories are tested against bounded situations. Furthermore, van Fraassen’s mention of past and future phenomena rests on an implicit eternalist metaphysics, and it is questionable whether such a metaphysics must be assumed in order to make sense of scientific activity.
A second difference is that van Fraassen relies on the notion of observable phenomena, which has been criticised by many (for example, Psillos 1999, ch. 9; Alspector-Kelly 2004). Note that his notion of observable phenomena is distinct from that of appearances: appearances correspond to how phenomena look from a perspective, whereas observable phenomena are objective. In contrast with van Fraassen, I focus on norms of relevance, including norms of experimentation, to establish a connection between models and appearances without the mediation of observable phenomena. As we have seen, these norms mainly have to do with unification and cross-contextual stability. This is a more deflationary way of putting things. It allows for more contextual considerations in the way scientists manage appearances in experimentation, and it allows manipulations to be considered constitutive of the relation between theory and experience, rather than merely instrumental in bringing about observable phenomena.
The last difference between van Fraassen’s definition and mine is the modal aspect. I argue in this chapter that it is required to make sense of scientific practice, and in particular of the fact that scientists are interested in confronting their theories with unnatural situations. If ideal success only had to do with accounting for actual phenomena, creating such unnatural situations, and taking the risk of invalidating our theories, would not be worth the effort (a similar argument is provided by Ladyman and Ross 2007, p. 110). Furthermore, observable but unobserved phenomena are arguably actual, but producible yet unproduced phenomena are not, so if we want to extend adequacy beyond actual uses of the theory (as van Fraassen does), and assuming that experimentation is active, adequacy must concern merely possible phenomena. The fact that scientists often control parameters in experiments so as to test all possible values of these parameters supports this idea: testing theories against various possibilities seems to be part of the “rules of the game” of scientific practice. Finally, the notion of possibility involved cannot be epistemic, because the likelihood that a configuration is realised somewhere in the universe is not a relevant motivation for implementing this configuration.
Modal Empiricism as Pragmatism
Despite its commitment to possibilities in the world and to natural constraints on these possibilities, I believe that modal empiricism is not closer to scientific realism than van Fraassen’s constructive empiricism. It is more realist with respect to modalities, but less so in other respects: for example, there is no commitment to the existence of a distribution of observable phenomena within an eternalist metaphysics. I would say that modal empiricism is more pragmatically oriented: the main idea is that an ideally successful theory could withstand any possible situation that we could encounter or create.