Monday, 5 April 2021

Chap. 6: Scientific Success

[Image: Codex Egberti fol. 27v]

Scientific realists assume that theoretical terms refer to natural properties, and that theories correctly describe how these properties are dynamically related in the world. This view requires an account of representation that is stronger than the one presented in chapter 3 of Modal Empiricism. Instead of assuming that the symbols of models are mapped to a context specifying empirically accessible properties, we should assume that they directly refer to real objects and properties in the world, and models describe their relations. For the realists, our best theories are true, because their models, realistically interpreted, correspond to the physical systems they represent.

The argument generally put forth in favour of realism is based on empirical success. Relativity theory predicted the deflection of light by massive bodies. The wave theory of light predicted that a bright spot should appear at the centre of a circular shadow created by a point source. All these predictions were later confirmed by our observations. According to the realist, such success would be a miracle if our theories were not true, or close enough to the truth. The empiricist, so the argument goes, cannot explain this remarkable empirical success.

In chapter 6 of Modal Empiricism, I examine this argument and its ramifications, and I provide an alternative account of empirical success: according to modal empiricism, the success of science is no miracle, insofar as the empirical adequacy of theories can be justified by past experience, through a particular form of induction: an induction on the models of the theory.

[Image: Codex Egberti fol. 20v]

The No-Miracle Argument

A simple argument against realism goes as follows: theories, and explanations in general, are underdetermined by experience: there is always more than one explanation available for the same observed phenomenon. So why think that our theories provide the right explanation?

Some authors argue that this problem of underdetermination is overstated, because many alternative explanations for the same phenomena are far-fetched. For example, the idea that a theory would only be true when we make observations, but not otherwise, accounts for the same observations as the theory itself, but it is not really an alternative theory: it looks more like a reinterpretation of the content of the same theory. However, even if we eliminate these far-fetched proposals, problematic cases of underdetermination remain. For example, Newtonian gravitation can be reformulated in terms of deformations of spacetime instead of forces (Earman 1993). The two versions are empirically equivalent, but by realists’ lights, they are distinct theories, because they posit distinct kinds of objects and properties. Which of the two should we be realist about? Newtonian mechanics is no longer considered a fundamental theory, but there are probably alternatives to current fundamental theories that account for exactly the same phenomena by positing a distinct ontology, including yet unconceived theories.

In response, the realist can argue that although many theories can account for the same observations, not all theories are as well confirmed (Laudan and Leplin 1991). Some explanations are better than others, and we can confirm them as more likely by means of an inference to the best explanation. Criteria other than a mere fit with observations are involved in what constitutes a good explanation: simplicity, scope, fruitfulness, unification, etc. Inference to the best explanation is how we could go beyond mere empirical adequacy and make inferences about the ontology of the world.

This is the main divide between realists and empiricists: empiricists typically deny that such criteria of good explanations are indicators of truth. So, they deny that inference to the best explanation is a valid mode of inference (van Fraassen 1989 ch. 6, Cartwright 1983 ch. 1). Perhaps some criteria, such as coherence and unificatory power, play a role in confirmation, but then they can be considered empirical criteria. This can be shown in the case of unification in a Bayesian framework (Myrvold 2003): a theory providing unification to our observations can be given more credence because it assumes fewer coincidences (this supports the use of unification in the account of empirical adequacy presented in chapter 4). However, these criteria are not enough to solve the problems of underdetermination faced by realism, because in general, alternative ontologies (gravitational forces or deformations of space-time) are just as unifying (Myrvold 2016).
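
To make the Bayesian point concrete, here is a minimal numerical sketch (my own toy example with arbitrary numbers, not Myrvold's actual formalism): a "unifying" hypothesis, under which the two observations are informative about each other, gains more credence from the joint evidence than a hypothesis that treats their co-occurrence as a mere coincidence.

```python
# Toy Bayesian illustration (hypothetical numbers): a hypothesis under which
# two observations are informative about each other gains more credence than
# one that treats their co-occurrence as a coincidence.

p_e1_given_unifying = 0.5
p_e2_given_e1_unifying = 0.9   # E2 is strongly expected once E1 is observed
p_e1_given_separate = 0.5
p_e2_given_separate = 0.5      # E2 independent of E1: their conjunction is a coincidence

prior = 0.5  # equal prior credence in both hypotheses

# Joint likelihood of observing both E1 and E2 under each hypothesis
like_unifying = p_e1_given_unifying * p_e2_given_e1_unifying   # 0.45
like_separate = p_e1_given_separate * p_e2_given_separate      # 0.25

# Bayes' theorem: posterior proportional to prior times likelihood
evidence = prior * like_unifying + prior * like_separate
print(f"Posterior (unifying):    {prior * like_unifying / evidence:.2f}")  # ~0.64
print(f"Posterior (coincidence): {prior * like_separate / evidence:.2f}")  # ~0.36
```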

According to the empiricist, other criteria of "good explanations" invoked by realists, such as simplicity or fruitfulness, are at best strategic or pragmatic. A simple theory will be less prone to overfitting the data (Forster and Sober 1994), it might be more easily proved inadequate, so it should be tested first, and it is easier to use. But a simple theory is not more likely to be true just because it is simple. Inference to the best explanation should be understood as a heuristic for selecting candidate hypotheses for empirical tests, but it does not provide any justification for the truth of these candidates (and this is actually how Peirce, who introduced the concept under the term “abduction”, viewed it (Nyrup 2015)). If different candidate ontologies make no empirical difference, we should remain agnostic: all we know is that the theory is empirically adequate. Inference to the best explanation is no way out.
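
As a rough illustration of the point about simplicity and overfitting (a toy sketch in the spirit of Forster and Sober, not their actual argument or data), one can check that a more complex model often fits the observed data better while predicting unseen data worse:

```python
# Toy overfitting illustration (made-up data): a simple model generalises
# better than a complex one, even though the complex one fits the sample better.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
x_test = np.linspace(0, 1, 100)
underlying = lambda x: 2 * x + 1                                  # the "true" regularity
y_train = underlying(x_train) + rng.normal(0, 0.2, x_train.size)  # noisy observations
y_test = underlying(x_test)

for degree in (1, 9):                                 # simple vs complex model
    coeffs = np.polyfit(x_train, y_train, degree)
    fit_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    pred_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: fit error {fit_err:.4f}, prediction error {pred_err:.4f}")
# The degree-9 polynomial fits the training points almost perfectly but
# typically predicts new points much worse: it has fitted the noise.
```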

The empiricist will also note that superseded theories still explain. For example, Newtonian mechanics is used to explain tides, and it does this better than relativity theory. So, the relation between truth and explanatory virtues put forth by the realist is unfounded.

According to the realist, however, good explanations are more likely to be true. As noted earlier, the realist strategy for justifying this idea is based on the empirical success of theories. A first, naive argument for scientific realism is simply that since our theories are very successful, they must be true. However, this argument is question-begging. It amounts to claiming that since the phenomena that the theory explains are observed, the explanation must be true, which merely restates the idea that good explanations must be true. This is precisely what the empiricist denies.

For this reason, scientific realists generally put forth a more subtle argument, the aim of which is to justify inference to the best explanation itself. They note that theories often make novel predictions, for example, the deflection of light by massive bodies predicted by relativity theory. What seems miraculous, and requires an explanation, is not that such phenomena occur, but rather that a theory that was designed to explain other types of phenomena could be extended to a new context of application and still be successful (regarding this aspect, I agree with Hitchcock and Sober (2004) that the novelty of prediction is not desirable for its own sake, but only because it guarantees that there was no “overfitting”: the only "miracle", in this respect, is that a unified theory with relatively few parameters can account for a large variety of phenomena).

With this argument, the focus is no longer on phenomena and their theoretical explanations, but rather on the methods of science by which the best explanations are selected (a move due to Boyd 1980). It seems that these methods are very effective, because the theories selected can be applied successfully to new domains. According to the realist, the best explanation for this effectiveness is that inference to the best explanation is truth-conducive: better explanations are more likely to be true, or close to the truth, which explains their success (so, inference to the best explanation is justified by inference to the best explanation, but realists think that this circularity is not problematic) (Psillos 1999).

Successful theoretical extensions can take several forms: models or theories applied to new phenomena, or to the same phenomena with unprecedented levels of precision; new operational means of accessing the same theoretical entity; or new theoretical developments, for example on the basis of the conjunction of two theories or two models. This gives rise to various versions of the no-miracle argument that I examine in the book.

[Image: Codex Egberti fol. 90r]

An Inductivist Response

According to van Fraassen (1980, p. 40), the explanation for the success of science is simpler: our theories are successful because they were selected in a “fierce competition” for this purpose. Bad theories were eliminated in this process, and only predictive theories remained.

Some have argued that this only provides a “phenotypic explanation” for success, while what is required is a “genotypic explanation” (Psillos 1999 pp. 93–94). For example, we could explain why a tennis player is good either by referring to the fact that this player was selected in a fierce competition, or by mentioning the player’s strength and intelligence. These are distinct types of explanations. Giving a genotypic explanation for scientific success would require explaining this success by means of a characteristic shared by successful theories. For the realist, this characteristic is truth.

The problem is that being true does not imply having particular concrete features in common with other true theories. Saying “T is true” is equivalent to giving the content of T. So, in effect, for the realist, there is a distinct “genotypic” explanation for each theory’s success, which is the theory itself (it explains the phenomena it predicts), and we are back to the naive, question-begging argument mentioned above. Ironically, the empiricist can actually accept this genotypic explanation: indeed, quantum theory explains our observations, so it explains its own success. It is not true that the empiricist is unable to provide genotypic explanations for the success of theories. She will only deny that these explanations are true.

As for the more subtle argument for realism based on successful extensions, it is actually a phenotypic explanation, since it is focused on the methods of theory selection. In this respect, van Fraassen's explanation and the realist's explanation are on a par. In favour of van Fraassen's explanation, one could note that theories are relatively flexible, since model construction is not a systematic procedure, and if both theories and their models are subjected to a fierce competition, then successful theoretical extensions might be less miraculous than they seem.

So, our best theories are able to make successful new predictions because they were selected for this reason. But why do they continue to be successfully extended after having been selected (Lipton 2004 p. 195)? Relativity theory was selected for its first successes against Newtonian gravitation (predicting the deflection of light by massive bodies), but it has continued to be successful since then. Perhaps we need an explanation for this. Van Fraassen does not provide one, so his explanation needs to be completed.

I think that all we need to account for this continuing success is a particular form of induction. After all, novel empirical successes are expected for empirically adequate theories, since these theories are supposedly accurate in all possible situations, including the novel ones. So what we are after is a justification of empirical adequacy. The realist takes a detour through truth and explanation in order to provide it. However, there is no reason to focus on a need for explanation as the realist does: a direct inductive justification of empirical adequacy is enough to show that empirical success is not miraculous. Demanding a further explanation for this justified belief is misplaced.

According to modal empiricism, a theory is adequate if all its models are. In order to justify this, all we need is an induction on models (there is also a notion of induction on contexts that I address in the book). Having observed that many models of the theory have been successful, we should expect other models of the theory, unified by the same general principles, but describing new kinds of situations, to be empirically successful as well.

Van Fraassen’s explanation gives us part of the story: the “fierce competition” ensures that a theory already has a sample of successful models. Then induction on the basis of this sample predicts the successful extension of this theory to new domains, with new models. So, assuming the validity of an induction on models, the novel predictions put forth by realists are no miracle. We do not need to assume that the theory posits the right ontology in order to explain them.
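
To make the idea more tangible, here is a minimal statistical sketch of an induction on models (my own illustration, not the book's formalism): from a sample of tested models of a theory, estimate the probability that a new, untested model of the same theory will be empirically accurate.

```python
# Toy sketch of an induction on models (illustrative only): a rule-of-succession
# estimate of the chance that a new model of the theory is accurate, given a
# sample of previously tested models.

def predictive_success_probability(successes, failures, prior_a=1, prior_b=1):
    """Laplace-style rule of succession with a Beta(prior_a, prior_b) prior."""
    return (successes + prior_a) / (successes + failures + prior_a + prior_b)

# Suppose 50 models of the theory have been tested so far and all were accurate.
p_new = predictive_success_probability(successes=50, failures=0)
print(f"Probability that a new model is accurate: {p_new:.2f}")  # ~0.98

# The inference is only as good as the assumption that tested models are
# representative of untested ones -- the sampling-bias worry discussed below.
```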

[Image: Codex Egberti fol. 85v]

Objections to the Inductivist Response

An induction on models is quite distinct from an induction on worldly objects, which could raise some worries. A first issue is that not all constructible models respecting the laws of a theory are relevant for representing actual objects of the world, and which objects they are apt to represent can be discovered by adjustment from experience. Applying our inductive reasoning to models that might not be apt to represent anything in the universe would be problematic: these models cannot be empirically successful, because they cannot be applied. The inference could be restricted to: all models that apply are empirically successful. However, if we restricted our induction to the models whose domain of application has already been settled by scientists, we could not really account for novel predictions.

This could be a serious issue for standard empiricists, but modal empiricism can address it. Prior to the observation of Neptune, scientists did not know whether a model with more than seven planets was apt to represent the solar system, or anything at all, so inferring the adequacy of this model from past successes could have been problematic. For this reason, a standard empiricist cannot really infer that novel predictions will be successful. However, a modal empiricist could have said, prior to the observation of Neptune, that a model with eight planets is relevant for some possible situations, and that it would be accurate in these situations. As it happens, the solar system is one of them, so the accuracy of a model with Neptune is not miraculous. The induction on empirical success is less problematic in this case because the fact that models represent actual things is not required for the induction to work: it is sufficient that they represent possible things (I explain in the book why this does not contradict the idea that possible situations are anchored to actual ones, although it implies that success is less likely for situations that are more different from already observed ones). Insofar as novel predictions are considered possible prior to their realisation, the modal empiricist predicts that the theory will be successful.

Another potential problem for an induction on models is that we need to assume that the models we have tested are representative of all other models of the theory. However, our sample of models could be biased. Take for example Newtonian mechanics: we know, in light of relativity theory, that it is adequate only for objects away from very massive bodies. As it happens, we live in a nearly flat region of spacetime, and before relativity theory was proposed, scientists had tested almost exclusively models with no objects near very massive bodies (with the exception of Mercury). So, the impressive history of success of Newtonian mechanics prior to relativity theory was biased in favour of a particular class of models that happen to have a much higher success rate than other models.
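
To see how such a bias can inflate apparent success, here is a toy simulation (my own example, with made-up success rates): when almost all tested models describe weak-gravity situations where the theory happens to work, the historical success rate overstates the success rate over a representative sample of the theory's models.

```python
# Toy simulation of sampling bias among tested models (hypothetical numbers).
import random

random.seed(0)

def model_is_accurate(strong_gravity: bool) -> bool:
    # Assumed success rates: near-perfect far from massive bodies,
    # poor in strong-gravity regimes (where a Newtonian-like theory fails).
    return random.random() < (0.2 if strong_gravity else 0.99)

# Historical sample: only 1% of tested models describe strong-gravity situations.
tested = [model_is_accurate(random.random() < 0.01) for _ in range(1000)]
# Representative sample: suppose half of the theory's models describe strong gravity.
representative = [model_is_accurate(random.random() < 0.5) for _ in range(1000)]

print(f"Success rate in the biased historical sample: {sum(tested) / len(tested):.2f}")
print(f"Success rate in a representative sample:      {sum(representative) / len(representative):.2f}")
```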

This point is related to the pessimistic meta-induction argument against realism that I examine in chapter 7 of the book. Theories are often successfully extended to new domains, which supports our induction on models. However, this extension is not unlimited: it eventually fails, and theories are replaced by better ones. These two observations could be called optimistic and pessimistic meta-inductions respectively. Given the pessimistic meta-induction, why think that our theories are empirically adequate?

I believe that we should accept that an induction on models is likely to be limited to a domain of experience “close enough” to the one in which the theory has proved successful so far. We cannot expect our theories to be empirically adequate in an unlimited domain of experience. However, the pessimistic meta-induction is not as devastating for an empiricist as it is for a realist, because even if the validity of our theories is limited to a domain, they are still adequate in that domain. In contrast, being "true in a domain" does not make much sense on a realist understanding of truth: if gravitational phenomena are explained by deformations of space-time, then Newtonian forces of gravitation do not really exist fundamentally, not even "in a domain".

A final challenge for an induction on models concerns what this kind of induction must presuppose. Standard induction presupposes that there are regularities in the world. An induction on models seems to presuppose something like “meta-regularities”, or a structure. It also presupposes that the theoretical categories used to organise models are "projectible" (fit for induction). It could seem that modal empiricism is very close to structural realism, according to which our theories get the structure of reality right.

Modal empiricism is indeed very close to structural realism, but I would say that crucial differences remain. According to modal empiricism, the structure represented by our theories is not necessarily independent from our position in the universe or from our cognitive constitution. Theories would "correspond" to the structure of our possible interactions with reality rather than to the structure of reality itself in an absolute and unrestricted sense. This is why modal empiricism is not a realist position. I explain these differences in more detail in chapter 7 of the book, which I will present in the next article.
