
Organizer: Hasok Chang

It is often said that science is “messy” and, because of this messiness, abstract philosophical thinking is only of limited use in analysing science. But in what ways is science messy, and how and why does this messiness surface? Is it an accidental or an integral feature of scientific practice? In this symposium, we try to understand some of the ways in which science is messy and draw out some of the philosophical consequences of taking seriously the notion that science is messy.

The first presenter discusses what scientists themselves say about messy science, and whether they see its messiness as a problem for its functioning. Examining scientists’ reflections about “messy science” can fulfill two complementary purposes. Such an analysis helps to clarify in what ways science can be considered “messy” and thus improves philosophical understanding of everyday research practice. The analysis also points to specific pragmatic challenges in current research that philosophers of science can help address.

The second presenter discusses the implications of “messy science” for scientific epistemology, specifically for scientific justification. Examining different senses in which scientific epistemology may be said to be messy, they show how this messiness plays out in a particular episode in nineteenth-century medicine: the transition from mid-nineteenth-century miasma views to late-nineteenth-century early germ views. They lay out the ways in which such messy scenarios differ from the idealized circumstances of traditional accounts of justification, and conclude by discussing some limits that taking these differences into account will impose on developing practice-based views of scientific justification, explaining how it is still possible for such views to retain epistemic normativity.

The third presenter explores how the messiness of eighteenth-century botanical practice, resulting from a constant lack of information, generated a culture of collaborative publishing. Given the amount of information required for an accurate plant description, let alone a taxonomic attribution, eighteenth-century botanists and their readers were fully aware of the preliminary nature of their publications. They openly acknowledged the necessity of updating and correcting them, and developed collaborative strategies for doing so efficiently. Authors, most famously Carl Linnaeus, updated their own writings in cycles of iterative publishing, but this could also be done by others, such as the consecutive editors of the unpublished manuscripts of the German botanist Paul Hermann (1646-1695), who became his co-authors in the process.

The fourth presenter investigates how biological classification can sometimes rely on messy metaphysics. Focusing on the lichen symbiont, they explore what grounds we might have for relying on overlapping and conflicting ontologies. Lichens have long been studied and defined as two-part systems composed of a fungus (mycobiont) and a photosynthetic partner (photobiont). This bipartite metaphysics underpins classificatory practices and determines criteria for stability that rely on the fungus to name lichens, despite the fact that some lichens are composed of three or more parts. The presenter investigates how reliable taxonomic information can be gleaned from a metaphysics that makes it problematic even to count biological individuals or track lineages.

Organizer: Frederick Eberhardt

Over the past few years, the causal Bayes net framework, developed by Spirtes et al. (2000) and Pearl (2000) and given philosophical expression in Woodward (2004), has been successfully spun off into the sciences. From medicine to neuroscience and climate science, there is a resurgence of interest in the methods of causal discovery. The framework offers a perspicuous representation of causal relations and enables the development of methods for inferring causal relations from observational data. These methods are reliable so long as one accepts background assumptions about how the underlying causal structure is expressed in observational data. The exact nature and justification of these background assumptions have been a matter of debate from the outset. For example, the causal Markov condition is widely seen not merely as a convenient assumption but as encapsulating something essential about causation. In contrast, the causal faithfulness assumption is seen as more akin to a simplicity assumption, saying roughly that the causal world is, in a sense, not too complex. Other assumptions have been treated as annoying necessities to get methods of causal discovery off the ground, such as the causal sufficiency assumption (which says roughly that every common cause is measured) and the acyclicity assumption (which implies, for example, that there is no case in which X causes Y, Y causes Z, and Z causes X, forming a cycle). Each of these assumptions has been subject to analysis, and methods have been developed to enable causal discovery even when they are not satisfied. But controversies remain, and we are confronted with some long-standing questions: What exactly is the nature of each of these assumptions? Can any of them be justified? If so, which? How do the question of justification and the question of nature relate to each other?
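The causal Markov condition mentioned above can be made concrete with a small sketch; the graph, variable names, and helper functions below are invented for illustration and belong to no particular talk. Given a directed acyclic graph, the local Markov condition says that each variable is independent of its non-descendants conditional on its parents:

```python
# Illustrative sketch: independencies implied by the local causal Markov
# condition on a toy DAG (graph and names are hypothetical).

def descendants(dag, node):
    """All nodes reachable from `node` along directed edges."""
    seen, stack = set(), [node]
    while stack:
        for child in dag.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def markov_independencies(dag):
    """Triples (X, Y, Pa(X)): X is independent of non-descendant Y given X's parents."""
    nodes = set(dag) | {c for cs in dag.values() for c in cs}
    parents = {n: {p for p in nodes if n in dag.get(p, [])} for n in nodes}
    result = []
    for x in sorted(nodes):
        nondesc = nodes - descendants(dag, x) - {x} - parents[x]
        for y in sorted(nondesc):
            result.append((x, y, frozenset(parents[x])))
    return result

# Chain X -> Y -> Z: the Markov condition implies Z is independent of X given Y.
print(markov_independencies({"X": ["Y"], "Y": ["Z"]}))
# [('Z', 'X', frozenset({'Y'}))]
```

Faithfulness runs in the converse direction, asserting that the only independencies in the data are those the graph implies; causal sufficiency and acyclicity are presupposed by the DAG representation itself.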

This symposium aims to address these questions. It brings together a group of researchers all trained in the causal Bayes nets framework, but who have each taken different routes to exploring the connection between the underlying causal system and the observational data that we use as a basis for inferring something about that system. In particular, we will discuss a variety of approaches that go beyond the traditional causal Bayes net framework, such as the discovery of dynamical systems and the connection between causal and constitutive relations. While these approaches are largely driven by methodological considerations, we expect the contributions to have implications for several other philosophical debates, in the foundations of epistemology, the metaphysics of causation, and natural kinds.

15:15 | Convergence to the Causal Truth and Our Death in the Long Run ABSTRACT. Learning methods are usually justified in statistics and machine learning by pointing to some of their properties, including (but not limited to) convergence properties: having outputs that converge to the truth as the evidential inputs accumulate indefinitely. But there has long been the Keynesian worry: we are all dead in the long run, so who cares about convergence? This paper sharpens the Keynesian worry and replies to it. The Keynesian worry challenges the epistemic value of convergence properties. It observes that a guarantee of obtaining the truth (with a high chance) in the long run does not seem to be of epistemic value, because the long run might be too long and we might not live long enough to actually believe the truth. Worse, some empirical problems pursued in science are very hard, so much so that no learning method is guaranteed to help us obtain the truth---even if we are immortal. Many problems about learning causal structures, for example, are that hard. This is the Keynesian worry on causal steroids. (Reichenbach almost anticipates such hard problems [1], but his example does not really work, or so I argue.) The standard reply guarantees eventual convergence by assuming the Causal Faithfulness Condition [2]. But this amounts to simply assuming away the skeptical scenarios that prevent our belief in the truth. I defend the epistemic value of various modes of convergence to the truth, with a new reply to the Keynesian worry. Those modes of convergence are epistemically valuable *not* for a consequentialist reason---i.e. not because they provide us with any guarantee of such an epistemically good outcome as our actually believing the truth. There is simply *no* such guarantee. The epistemic significance of convergence lies elsewhere. I argue that modes of convergence to the truth are epistemically valuable for a *non-consequentialist* reason.
A good learning method must be one that responds appropriately to evidence, letting evidence play an important role: the role of evidence as a reliable indicator of the truth---possibly not perfectly reliable, but reliable in *progressively* more states of the world as the amount of evidence increases, making that progress in the *best* possible way. This is a role that evidence should play; our longevity plays no role in this picture. And I argue that, thanks to a new theorem, evidence plays that important role only if it serves as input to a learning method that has the right convergence property. In the context of causal discovery, the right convergence property is provably so-called almost everywhere convergence [2,3] plus locally uniform convergence [4]. 1. Reichenbach, H. (1938) Experience and Prediction, University of Chicago Press. 2. Spirtes, P., Glymour, C. and Scheines, R. (1993) Causation, Prediction, and Search, The MIT Press. 3. Meek, C. (1995) “Strong Completeness and Faithfulness in Bayesian Networks”, Proceedings of the 11th UAI, 411-418. 4. Lin, H. (forthcoming) “The Hard Problem of Theory Choice: A Case Study on Causal Inference and Its Faithfulness Assumption”, Philosophy of Science. |

15:45 | Causal Minimality in the Boolean Approach to Causal Inference PRESENTER: Jiji Zhang ABSTRACT. In J.L. Mackie’s (1974) influential account of causal regularities, a causal regularity for an effect factor E is a statement expressing that condition C is sufficient and necessary for (the presence or instantiation of) E (relative to a background or causal field), where C is in general a complex Boolean formula involving a number of factors. Without loss of generality, we can put C in disjunctive normal form, a disjunction of conjunctions whose conjuncts express presence or absence of factors. Since C is supposed to be sufficient and necessary for E, each conjunction therein expresses a sufficient condition. Mackie’s requirement is that such a sufficient condition should be minimal, in the sense that no conjunction of a proper subset of the conjuncts is sufficient for E. If this requirement is met, then every (positive or negative) factor that appears in the formula is (at least) an INUS condition: an Insufficient but Non-redundant part of an Unnecessary but Sufficient condition for E. Mackie’s minimality or non-redundancy requirement has been criticized as too weak (Baumgartner 2008), and a stronger criterion is adopted in some Boolean methods for causal inference, which have found interesting applications in social science (e.g., Ragin and Alexandrovna Sedziaka 2013; Baumgartner and Epple 2014). In addition to minimization of sufficient conditions, the stronger criterion requires that the disjunctive normal form that expresses a necessary condition should be minimally necessary, in the sense that no disjunction of a proper subset of the disjuncts is necessary for the effect. In this talk we identify another criterion of non-redundancy in this setting, which is a counterpart to the causal minimality condition in the framework of causal Bayes nets (Spirtes et al. 1993; Pearl 2000). 
We show that this criterion is in general even stronger than the two mentioned above. Moreover, we argue that (1) the reasons for strengthening Mackie’s criterion of non-redundancy support moving all the way to the criterion we identified, and (2) an argument in the literature against the causal minimality condition for causal systems with determinism also challenges Mackie’s criterion of non-redundancy, and an uncompromising response to the argument requires embracing the stronger criterion we identified. Taken together, (1) and (2) suggest that the Boolean approach to causal inference should either abandon its minimality constraint on causal regularities or embrace a stronger one. References: Baumgartner, M. 2008. “Regularity theories reassessed.” Philosophia 36: 327-354. Baumgartner, M., and Epple, R. 2014. “A Coincidence Analysis of a Causal Chain: The Swiss Minaret Vote.” Sociological Methods & Research 43: 280-312. Mackie, J. L. 1974. The Cement of the Universe: A Study of Causation. Oxford: Clarendon Press. Pearl, J. 2000. Causality: Models, Reasoning, and Inference. Cambridge, UK: Cambridge University Press. Ragin, C. C., and Alexandrovna Sedziaka, A. 2013. “QCA and Fuzzy Set Applications to Social Movement Research.” The Wiley-Blackwell Encyclopedia of Social and Political Movements, doi: 10.1002/9780470674871.wbespm482. Spirtes, P., Glymour, C., and Scheines, R. 1993. Causation, Prediction and Search. New York: Springer-Verlag. |
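Mackie's minimality requirement can be sketched computationally; the toy regularity E = (A and B) or C and all function names below are invented for illustration, not drawn from the talk. A conjunction is sufficient for E if every assignment satisfying it makes E true, and Mackie-minimal if no proper sub-conjunction is sufficient:

```python
# Illustrative sketch of Mackie-style minimal sufficient conditions
# over a toy truth table (regularity and names are hypothetical).
from itertools import combinations, product

def sufficient(conj, table):
    """conj maps factor -> truth value. Sufficient for E if every row matching conj has E true."""
    rows = [r for r in table if all(r[f] == v for f, v in conj.items())]
    return bool(rows) and all(r["E"] for r in rows)

def minimal_sufficient(factors, table):
    """All conjunctions sufficient for E with no sufficient proper sub-conjunction."""
    found = []
    for k in range(1, len(factors) + 1):
        for fs in combinations(factors, k):
            for vals in product([True, False], repeat=k):
                conj = dict(zip(fs, vals))
                if sufficient(conj, table) and not any(
                    sufficient({f: conj[f] for f in sub}, table)
                    for r in range(1, k)
                    for sub in combinations(fs, r)
                ):
                    found.append(conj)
    return found

# Toy regularity: E = (A and B) or C, over all 8 assignments.
table = [
    {"A": a, "B": b, "C": c, "E": (a and b) or c}
    for a, b, c in product([True, False], repeat=3)
]
for conj in minimal_sufficient(["A", "B", "C"], table):
    print(conj)
```

On this table the sketch finds exactly the two minimal sufficient conditions C and A-and-B, so every conjunct appearing in them is an INUS condition. The stronger criteria discussed in the talk would additionally prune redundant disjuncts from the disjunctive normal form as a whole.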

Organizer: Sandra Mitchell

Adolf Grünbaum, former president of DLMPST and IUHPS, had an extraordinary impact on philosophy of science in the 20th century. He died on November 15, 2018, at the age of 95. This symposium honors Grünbaum by considering ideas he addressed in his work, spanning the philosophy of physics, the logic of scientific reasoning, Freud and the status of psychiatry as a science, and religion.

Organizers: Benedikt Loewe and Helena Mihaljevic

A Symposium at CLMPST XVI coordinated by DLMPST/IUHPST and the Gender Gap Project

The project "A Global Approach to the Gender Gap in Mathematical, Computing, and Natural Sciences: How to Measure It, How to Reduce It?" is an international and interdisciplinary effort to better understand the manifestation of the gender gap in the named scientific fields and to help overcome barriers for women in their education and careers. The collaboration between eleven partners, including various scientific unions, allows for a comprehensive consideration of gender-related effects in these fields, yielding the opportunity to elaborate common ground as well as discipline-specific differences.

Currently, existing data on participation of women in the mathematical and natural sciences is scattered, outdated, and inconsistent across regions and research fields. The project approaches this issue mainly from two different perspectives. Through a survey, scientists and mathematicians worldwide have the opportunity to confidentially share information about their own experiences and views on various aspects of education and work in their disciplines and countries. At the same time, we statistically analyze large data collections on scientific publications in order to understand effects of gender and geographical region on publication and collaboration practices. Moreover, the project aims to provide easy access to materials proven to be useful in encouraging girls and young women to study and pursue education in mathematics and natural sciences.

In this symposium, Helena Mihaljevic will present methods and findings of the Gender Gap project, connecting and contrasting them with similar issues in the philosophy of science. After three presentations, there will be a panel discussion.

15:15 | Geometry, psychology, myth as aspects of the astrological paradigm PRESENTER: Vladimir Vinokurov ABSTRACT. The geometrization of logic, undertaken by Wittgenstein in the “Tractatus Logico-Philosophicus”, is related to the spatial representation of the complex. The form of the fact is the formula of the complex “aRb”, where “a” is one event, “b” is another, and “R” is the relationship between them. The geometric form of the complex will be “a-b” or “a ~ b”, where the relation is expressed as a “straight line” or a “curve”. The visualization of life in the form of geometric figures allows one to comprehend the past in a new way, making the prediction part of the visual image itself, a part of the calculation. The new concept of the picture is a consequence of a new understanding of the experience of life. This is the essence of astrological predictions. Step-by-step analysis of the creation of an astrological picture: (1) Initially, based on the calculations, two pictures are created. (2) One picture combines two observations: the location of the planets at the time of birth and their current location. We have combined two homogeneous but chronologically different experiences. (3) The third step is structuring life experience. We look for the starting point of the geometric projection and begin to build the figure of experience. (4) We present the result to the inquirer and ask him to recall the events on a certain date. He creates a picture of his past. We attempt to include the content of his experience in our picture of the events. (5) Now, making predictions based on the picture itself (and this is computation, since all the elements of the picture are connected by rigid connections), we operate on the experience of the questioner. (6) Astronomical calculations of the points and movements of planets become the rule and bring an intuitive perception of the experience of life. “Experience” is now not only “what was”, but also “what should have been”.
The prediction became part of the image itself. Mathematical expectation is the basis of psychological persuasion. According to Wittgenstein, “the cogency of logical proof stands and falls with its geometrical cogency” [1]. Of particular interest are the problem of differences between conclusions and predictions and, above all, the transformation of thinking itself in the process of creating a geometrical concept. Through the ideogrammatic form of its writing, astrology turned out to be related to geometry and mathematics and became a tool for prediction. Archaic origins of astrological representations based on the picture can be found in the most ancient languages, for example in Sumerian, where the words expressing representations of fortune and fate derive from a root meaning “drawing” and “mapping” [2]. The hieroglyph denoting the Egyptian god of magic and astrology, Thoth, corresponded to the Coptic transcription Dhwty-Deguti. One option for translating this name is the word “artist” (W. Goodwin). REFERENCES 1. Wittgenstein, L. (2001). Remarks on the Foundations of Mathematics. Oxford: Basil Blackwell, p. 174(43). 2. Klochkov, I.S. Spiritual Culture of Babylonia: Man, Destiny, Time [Клочков И.С. Духовная культура Вавилонии: человек, судьба, время]. Moscow: Nauka, 1983. |

15:15 | Unfalsifiability and Defeasibility PRESENTER: Linton Wang ABSTRACT. Popper's falsifiability criterion requires sciences to generate falsifiable predictions, and failing the criterion is taken as a vice of a theory. Kitcher (1982) rejects the criterion by arguing that there is no predictive failure to be derived from scientific theories in Popper's sense, and thus Popper-style falsification of a scientific theory cannot be carried out. We aim to reconstruct Kitcher's argument for the unfalsifiability of a scientific theory on the basis of the defeasibility of scientific inferences, and further to indicate how unfalsifiability can aid scientific discovery. The reconstruction shows that the unfalsifiability of a scientific theory is a virtue rather than a vice of the theory. Our main argument proceeds as follows. First, we reorganize Kitcher's (1982: 42) argument for the unfalsifiability of scientific theories and indicate that it rests on the claim that no theory can logically entail a conditional prediction of the form "if P then Q", since any such conditional is incompatible with the scientific practice that, in case of P but not Q, we can always appeal to some extra intervening factor (e.g. some force other than those under consideration) to explain why it is the case that P and not Q. The second step is to indicate that Kitcher's argument is unsatisfactory, since a conditional of the form "if P then Q" is incompatible with the practice of appealing to extra intervening factors only if the conditional supports the inference pattern modus ponens (as the material implication and the counterfactual conditional do), i.e. only if Q logically follows from "if P then Q" together with P. But the literature has shown that not all sensible conditionals enjoy modus ponens.
Furthermore, Kitcher's argument is puzzling in two respects: it leaves unclear how a theory can be used to generate a prediction, and also how the appeal to extra intervening factors works in scientific practice. Finally, to respond to the two puzzles while preserving the main objective of Kitcher's argument, we propose that the conditional prediction that follows from a theory is the defeasible conditional "if P then defeasibly Q", from which, given P, Q follows only defeasibly, not by logical entailment. This proposal is in turn shown to be compatible with the practice of appealing to extra intervening factors. We also formalize the practice of appealing to intervening factors by a defeasible inference pattern we call the inference from hidden factors, which represents a way in which we can learn from mistaken predictions without falsifying scientific theories. Our proposal is defended by indicating that the defeasible inference patterns capture good features of scientific reasoning, such as generating predictions from a theory under partial ignorance of the relevant facts and revising our predictions as evidence accumulates. References: Kitcher, P. (1982), Abusing Science: The Case against Creationism, MIT Press. |
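The defeasible pattern can be given a toy formalization; the function names, the representation of rules as pairs, and the defeater bookkeeping below are all invented for illustration, not drawn from the talk's own formalism. A rule "if P then defeasibly Q" yields Q unless a known defeater is among the premises, and a failed prediction is absorbed by positing a hidden factor rather than rejecting the rule:

```python
# Illustrative sketch of defeasible prediction and the "inference from
# hidden factors" (representation and names are hypothetical).

def predict(premises, rules, defeaters):
    """Defeasibly derive consequents: 'if P then defeasibly Q' fires unless a defeater holds."""
    return {q for p, q in rules
            if p in premises and not (defeaters.get((p, q), set()) & premises)}

def explain_failure(rules, premises, observed_false, defeaters):
    """On a failed prediction, posit an intervening hidden factor as a defeater
    for the rule instead of falsifying the rule itself."""
    for p, q in rules:
        if p in premises and q in observed_false:
            defeaters.setdefault((p, q), set()).add(f"hidden_factor_for_{q}")
    return defeaters

rules = [("P", "Q")]
defeaters = {}
print(predict({"P"}, rules, defeaters))  # {'Q'}: Q follows defeasibly from P
# Observation: P holds but Q fails; appeal to an intervening factor.
defeaters = explain_failure(rules, {"P"}, {"Q"}, defeaters)
# The rule survives the failure; the prediction is blocked only when the
# posited hidden factor is itself assumed present.
print(predict({"P", "hidden_factor_for_Q"}, rules, defeaters))  # set()
```

The first prediction is made under partial ignorance of the relevant facts; the accumulated evidence of failure changes later predictions without refuting "if P then defeasibly Q", mirroring the learning-from-mistakes pattern described in the abstract.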

15:45 | Robustness, Invariance, and Multiple Determination ABSTRACT. Multiple determination is the epistemic strategy of using multiple independent empirical procedures to establish “the same” result. A classic example of multiple determination is Jean Perrin’s description of thirteen different procedures to determine Avogadro’s number (the number of molecules in a gram-mole of a substance), at the beginning of the twentieth century (Perrin 1909, 1913). In the contemporary literature in philosophy of science, multiple determination is considered to be a variant of robustness reasoning: 'experimental robustness', 'measurement robustness', or simply 'robustness' are the terms commonly used to refer to this strategy (Wimsatt 1981, 2007; Woodward 2006; Calcott 2011; Soler et al., eds. 2012). In this paper, I argue that the strategy of using multiple independent procedures to establish “the same” result is not a variant of robustness. There are many variants of robustness strategies, but multiple determination is not one of them. I claim that treating the multiple determination strategy as a robustness variant mischaracterizes its structure and is not helpful for understanding its epistemic role and import in scientific research. I argue that many features distinguish multiple determination from the many robustness variants. I present these features and argue that they are related to the same central difference: whereas all the robustness variants can be construed as involving some sort of invariance (of the robust result) to different types of perturbations, multiple determination cannot be so construed. The distinguishing feature of the multiple determination strategy is its ability to support a specific type of no-coincidence argument.
Namely, it would be an improbable coincidence for multiple determination procedures, independently of one another, to establish “the same” result, and yet for the result to be incorrect or an artefact of the determination procedures. Under specific conditions, the no-coincidence argument from multiple determination, in addition to being used to argue for the validity of the result, can also be used to argue for the validity of the determination procedures. No such no-coincidence argument can be construed from simple invariance to perturbations. Robustness is a set of epistemic strategies better suited to discovering causal relations and dependencies. Finally, I claim that, besides the philosophical reasons, there are also historical reasons to keep multiple determination and robustness distinct. Multiple determination can be considered the historical descendant of William Whewell’s nineteenth-century notion of consilience of inductions (a form of hypothetico-deductive reasoning). On the other hand, robustness strategies can be considered the descendants of John S. Mill’s nineteenth-century methods of direct induction (a form of inductive reasoning). REFERENCES Calcott, Brett. 2011. “Wimsatt and the Robustness Family: Review of Wimsatt’s Re-engineering Philosophy for Limited Beings.” Biology and Philosophy 26:281-293. Perrin, Jean. 1909. “Mouvement Brownien et Réalité Moléculaire.” Annales de Chimie et de Physique 18:1-114. Perrin, Jean. 1913. Les Atomes. Paris: Librairie Félix Alcan. Soler, Léna et al. eds. 2012. Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science. Boston Studies in the Philosophy of Science, Vol. 292, Dordrecht: Springer. Wimsatt, William. 1981. “Robustness, Reliability, and Overdetermination.” In Scientific Inquiry and the Social Sciences, ed. Marylin Brewer and Barry Collins, 124-163. San Francisco: Jossey-Bass. Wimsatt, William. 2007. Re-engineering Philosophy for Limited Beings.
Cambridge: Harvard University Press. Woodward, James. 2006. “Some Varieties of Robustness.” Journal of Economic Methodology 13:219-240. |

15:15 | Structuralist Abstraction and Group-Theoretic Practice ABSTRACT. Mathematical structuralism is a family of views holding that mathematics is primarily the study of structures. Besides genuine philosophical reasons to adopt a structuralist philosophy, the "structural turn" of late 19th- and early 20th-century mathematics itself can motivate the adoption of structuralist views. For example, the philosophical notion of (structuralist) "abstraction"—a transformation from the concrete/particular/intuitive to the abstract/universal/formal—is often said to be rooted in historical events, such as: 1) F. Klein’s Erlangen Program of 1872. 2) The axiomatic definition of groups of 1882. 3) The invention and use of category theory, starting from 1942. The focus on these particular demonstrations of historical abstraction processes does not exclude, of course, the possibility of alternative interpretations of structuralist abstraction, in philosophy or the history of mathematics. The questions are therefore the following: What other structuralist abstraction processes exist in the history of mathematics? What explanatory power can they bear on abstraction principles? In its unrestricted form, the question allows for an enormous number of possible answers, but one does not have to depart from the original examples as much as it first seems: for each of the above examples, my suggestion is, there exists closely related historical evidence for rather different interpretations of the alleged "structuralist turn" in mathematics. Further, these new examples will all stem from the theory of groups, which will make comparison easier: 1*) F. Klein’s "Hypergalois Program": While Klein’s aim in the Erlangen Program was to organize several geometries under the unifying approach of algebra, the "Hypergalois Program" promotes means to tackle genuine algebraic problems with geometric means and thus constitutes a forerunner of the theory of representations.
This meets Klein’s own aspiration not to subsume geometry under algebra, but to combine the tools and methods of both theories. 2*) If the axiomatic definition of a group ignores the internal nature of its elements, why not climb further up the "ladder of abstraction" and ignore their existence altogether, focusing solely on sub- and superrelations of algebraic structures like groups? Such ideas can be traced back to E. Noether (and arguably to Dedekind), but only in 1935/1936 did O. Ore formalize such a "foundation of abstract algebra". His ideas marked the birth of the theory of lattices, which brought new insights into abstract algebra; the foundational claim, however, failed and was soon abandoned. 3*) A more successful unification of algebraic theories was laid down in category theory, a theory that was nevertheless invented for much more modest and particular goals. Here, the focus is on the question of how the newly emerging category theory was thought to provide means for the most urgent problems of group theory, namely those of homology and cohomology. A tentative lesson drawn from these new examples could be not to regard structuralist abstraction as an imperative of modern mathematics applied for its own sake, but rather as an additional mathematical tool that (together with its counterpart of "instantiation") modern mathematicians are free to use to serve their particular purposes. |

15:45 | Expressive power and intensional operators PRESENTER: David Rey ABSTRACT. In his book Entities and Indices, M. J. Cresswell advocated the view that modal discourse in natural language requires a semantics whose power amounts to explicit quantification over possible worlds. His argument for this view had two parts. First, he argued that an operator-based intensional language can only reach the expressive power of English if it has a semantics with infinite world indices and employs infinite actually_n/Ref_n-operators which store and retrieve such indices. (See Cresswell 1990, ch. 3, esp. pp. 40–46. Cresswell’s actually_n/Ref_n-operators are modal correlates of the K_n/R_n-operators introduced in the appendix of Vlach 1973 (pp. 183–185).) Second, he gave a formal proof that an operator-based language of this kind is as powerful as a language of predicate logic with full quantification over worlds (see Cresswell 1990, ch. 4). Cresswell also suggested that possible worlds are semantically on a par with times and argued that temporal discourse in natural language has the power of full quantification over times. In a recent paper, I. Yanovich (2015) argues that a first-order modal language equipped with standard modal operators and Cresswell’s actually_n/Ref_n-operators—following Saarinen (1978), Yanovich calls the latter backwards-looking operators—is less powerful than an extensional first-order language with full quantification over worlds or times. Yanovich suggests that the widespread belief to the contrary among philosophers and linguists is the result of a misinterpretation of Cresswell’s formal proof. He observes that Cresswell’s proof assumes that the modal language with backwards-looking operators also employs an operator of universal modality ☐, which adds extra expressive power to the language.
One important drawback of Cresswell’s and Yanovich’s discussions is that they do not offer a precise definition of the notion of expressive power. Without such a definition, it is not entirely clear what criterion they are adopting in order to compare the power of different logical systems. In this paper, we provide a precise definition of expressive power—based on Ebbinghaus’s notion of logical strength (see Ebbinghaus 1985, sec. 3.1)—that can be applied to the formal languages discussed by Cresswell in Entities and Indices. Armed with this definition, we address the question of whether a modal language equipped with Cresswell’s actually_n/Ref_n-operators and an operator of universal modality is as powerful as a first-order language with full quantification over worlds or times. References 1. Cresswell, M. J.: Entities and Indices. Kluwer Academic Publishers, Dordrecht (1990) 2. Ebbinghaus, H.-D.: Extended Logics: The General Framework. In: Barwise, J., Feferman, S. (eds.) Model-Theoretic Logics. Springer-Verlag, New York, 25--76 (1985) 3. Saarinen, E.: Backward-Looking Operators in Tense Logic and in Natural Language. In: Saarinen, E. (ed.) Game-Theoretical Semantics: Essays on Semantics by Hintikka, Carlson, Peacocke, Rantala, and Saarinen. D. Reidel, Dordrecht, 215--244 (1978) 4. Vlach, F.: ‘Now’ and ‘Then’: A Formal Study in the Logic of Tense Anaphora. Ph.D. dissertation, UCLA (1973) 5. Yanovich, I.: Expressive Power of “Now” and “Then” Operators. Journal of Logic, Language and Information 24, 65--93 (2015) |
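The store-and-retrieve behaviour of such index operators can be made concrete with a toy evaluator; the tuple syntax, accessibility relation, and valuation below are invented for illustration and are not Cresswell's or Yanovich's formalism. Ref_n stores the current world at index n, actually_n shifts evaluation back to the stored world, and box_u is the universal modality whose extra strength is at issue:

```python
# Toy Kripke-style evaluator with Vlach-style Ref_n / actually_n index
# operators (formula syntax and model are hypothetical).
def ev(formula, world, R, V, store):
    op = formula[0]
    if op == "atom":   # atomic sentence: true at the worlds V assigns to it
        return world in V[formula[1]]
    if op == "not":
        return not ev(formula[1], world, R, V, store)
    if op == "and":
        return ev(formula[1], world, R, V, store) and ev(formula[2], world, R, V, store)
    if op == "dia":    # possibility: true at some accessible world
        return any(ev(formula[1], w, R, V, store) for w in R.get(world, []))
    if op == "box_u":  # universal modality: true at every world whatsoever
        worlds = set(R) | {w for ws in R.values() for w in ws}
        return all(ev(formula[1], w, R, V, store) for w in worlds)
    if op == "ref":    # Ref_n: store the current world at index n, continue
        return ev(formula[2], world, R, V, {**store, formula[1]: world})
    if op == "act":    # actually_n: evaluate at the world stored at index n
        return ev(formula[2], store[formula[1]], R, V, store)
    raise ValueError(op)

# Model: w0 -> w1; p true only at w0.
R = {"w0": ["w1"], "w1": []}
V = {"p": {"w0"}}
# "Ref_0 dia actually_0 p": shift to w1 under dia, then jump back to stored w0.
f = ("ref", 0, ("dia", ("act", 0, ("atom", "p"))))
print(ev(f, "w0", R, V, {}))  # True: p holds at the stored world w0
```

Without the backwards-looking "act", the world of evaluation is lost once "dia" shifts worlds (plain "dia p" is false here); with it, the stored index can be retrieved arbitrarily deep inside the formula.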

16:45 | Progressive Methods for Causal Discovery PRESENTER: Konstantin Genin ABSTRACT. Although conventional wisdom still holds that it is foolhardy to infer causal relationships from non-experimental data, the last two decades have seen the development of methods that are proven to do what was previously thought impossible. Typical methods proceed by a simplicity-guided schedule of conjectures and refutations. Simpler models posit fewer causal relationships among variables. Algorithms infer the presence of causal relationships by iteratively refuting sharply testable statistical hypotheses of conditional independence. Such methods are routinely proven to converge to the true equivalence class of causal models as data accumulate. Crucially, however, that convergence is merely pointwise: it is not possible to calculate ex ante the amount of data necessary to have reasonable assurance that the output model is approximately correct. Furthermore, there are infinitely many alternative methods that would have similar limiting performance, but draw drastically different conclusions on finite samples. Some of these methods reverse the usual preference for simple graphs at arbitrarily many sample sizes. What justifies the seemingly reasonable procedures that are prominent in the literature? Some have responded to the dilemma by searching for stronger a priori assumptions that would guarantee that the search methods converge uniformly [Zhang and Spirtes, 2002]. But these assumptions are implausible, and amount to insisting that causal discovery is easier than it really is [Uhler et al., 2013]. What is needed is a success criterion for justifying causal discovery that is stronger than mere pointwise convergence, but does not insist on short-run bounds on the chance of error. Say that a method is *progressive* if, no matter which theory is true, the objective chance that the method outputs the true theory is strictly increasing with sample size. 
In other words: the more data the scientist collects, the more likely their method is to output the true theory. Although progressiveness is not always feasible, it should be our regulative ideal. Say that a method is α-progressive if, no matter which theory is true, the chance that it outputs the true theory never decreases by more than α as the sample size grows. This property ensures that collecting more data cannot set your method back too badly. We prove that, for many problems, including the problem of causal search, there exists an α-progressive method for every α > 0. Furthermore, every α-progressive method must proceed by systematically preferring simpler theories to complex ones. That recovers and justifies the usual bias towards sparse graphs in causal search. |
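The two success criteria in the abstract can be stated compactly. Writing Mₙ for the method's output at sample size n, and P_{T,n} for the objective chance under theory T at that sample size, our notational gloss (not the authors' own formulation) is:

```latex
% M is progressive if its chance of outputting the true theory is
% strictly increasing in the sample size, whichever theory T is true:
\[
  n < m \;\Longrightarrow\;
  P_{T,n}\bigl(M_n = T\bigr) < P_{T,m}\bigl(M_m = T\bigr).
\]
% M is alpha-progressive if that chance never drops by more than alpha
% as the sample size grows:
\[
  n < m \;\Longrightarrow\;
  P_{T,m}\bigl(M_m = T\bigr) \geq P_{T,n}\bigl(M_n = T\bigr) - \alpha .
\]
```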

17:15 | Proportional Causes and Specific Effects ABSTRACT. Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? In "Causal Inference of Ambiguous Manipulations", Spirtes and Scheines (2004) show how mis-specifications of the causal variables can lead to incorrect conclusions about the causal relations. These results suggest that there is a privileged level of description for a causal relation, or at least, that not all levels of description are correct. It is tempting to conclude that the correct level of causal description is then the finest possible level of description. But apart from the challenge that extant accounts of causality do not fit well with our finest level of description of physical processes, such a conclusion would also imply that discussion of causation at the macro-level is at best elliptical. In this presentation we propose an account of a "correct" causal description that retains a meaningful notion of causality at the macro-level, avoids the types of mis-specification illustrated by Spirtes and Scheines, and (in general) still permits a choice of the level of description. In this regard it offers a middle route between realist accounts of causation, which maintain that there is an objective fact of the matter as to what is "doing the causal work", and pragmatic accounts of causation, which reject the notion of a privileged level of causal description. As the title suggests, the proposal is closely related to notions of the "proportionality" of causes in the literature on the metaphysics of causation, and to considerations of the "specificity" of variables as used in information theory. |

17:45 | Finding causation in time: background assumptions for dynamical systems ABSTRACT. Much of scientific practice is concerned with the identification, prediction, and control of dynamical systems -- systems whose states change through time. Though typically modeled with differential equations, most are presumed to involve causal relations, with the changing states of some variables driving change in others over time, often reciprocally. A variety of new tools have been introduced outside of the causal Bayes net framework to learn aspects of this structure from noisy, real-world time series data. One collection of such tools is built around the notion of a dynamical symmetry [1]. This is a transformation of the state of a system that commutes with its time evolution. The set of dynamical symmetries of a system, along with the algebraic structure of those symmetries under composition, picks out an equivalence class of causal structures in the interventionist sense. This means that a test for sameness of dynamical kind can be used as a tool for validating causal models or determining whether observed systems fail to share a common causal structure. Algorithms have been published for implementing such tests, but they apply only to deterministic systems [2,3]. Here we present a generalization of the theory of dynamical kinds to the case of stochastic systems where the target question is not whether two systems differ in their causal structure, but to what degree. This requires addressing a variety of interrelated problems. First, though distinct from the causal graph approach, any inference from data concerning dynamical symmetries must deploy similar assumptions about the relation between the statistics of sampled variables and causal relations on the one hand, and about the causal relations themselves on the other. Chief among these is the assumption of Stochastic Dynamical Sufficiency (SDS). 
We clarify this condition, and draw out the ways in which it differs from the Causal Sufficiency condition in the Bayes net context. Second, there is the question of how this sufficiency assumption interacts with the choice of variables. In typical real-world settings one is forced to use variables to describe a system that may aggregate over another set that obeys the SDS. Given that some set of variables meets the SDS condition, under what conditions can lossy transformations of these variables do so as well? How can the presence or absence of these conditions be verified? If a set of variables does not satisfy SDS, is it possible to determine whether a finer-grained description exists that does? How do the answers to these questions depend upon the kinds of interventions possible? For all of these questions, we present some preliminary answers, illustrated with ongoing applications to the study of collective behavior in animal swarms. 1. Jantzen, B. C. Projection, symmetry, and natural kinds. Synthese 192, 3617–3646 (2014). 2. Jantzen, B. C. Dynamical Kinds and their Discovery. Proceedings of the UAI 2016 Workshop on Causation: Foundation to Application (2017). 3. Roy, S. & Jantzen, B. Detecting causality using symmetry transformations. Chaos 28, 075305 (2018). |
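The central notion of a dynamical symmetry, a transformation of the state that commutes with time evolution, can be illustrated with a deliberately simple toy of our own (not an example from the abstract): for exponential growth dx/dt = r·x, rescaling the state commutes with the flow and so is a dynamical symmetry, while translating the state does not.

```python
import math

R = 0.5  # growth rate; an illustrative choice

def flow(x, t, r=R):
    """Exact time evolution of dx/dt = r*x, i.e. x(t) = x0 * exp(r*t)."""
    return x * math.exp(r * t)

def rescale(x, c=2.0):
    """Candidate symmetry: multiply the state by a constant."""
    return c * x

def translate(x, c=2.0):
    """Candidate symmetry: shift the state by a constant."""
    return x + c

x0, t = 1.3, 0.7
# Rescaling commutes with the flow: sigma(flow(x)) == flow(sigma(x)) ...
assert math.isclose(rescale(flow(x0, t)), flow(rescale(x0), t))
# ... while translation does not, so it is not a dynamical symmetry here.
assert not math.isclose(translate(flow(x0, t)), flow(translate(x0), t))
```

A test of this kind, applied to sampled trajectories rather than a known closed-form flow, is the sort of comparison the symmetry-based algorithms cited above automate.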

16:45 | How science is knowledge ABSTRACT. Truth is commonly required of a proposition for it to count as knowledge. On the other hand, the hypothetical character of scientific laws and the ubiquity of idealization and the ceteris paribus clause pose the problem of how science is knowledge. Thus Ichikawa and Steup (2012) claim that scientific knowledge consists in knowing the contents of theories rather than knowing some truths about the world. Some, e.g. Cartwright (1983), suggest that general scientific laws, being false, constitute an instrument of knowledge rather than knowledge themselves. Contrary to that, Popper considers science the superior kind of knowledge. Even outdated science, one may add, radically differs in epistemic status from prejudice or mere error and may have quite an extensive scope of application. To resolve this tension, I propose to adopt a version of contextualism that is in many respects close to that of Williams (1996). On this view, knowledge is a context-dependent notion, where a context is determined by some presuppositions, often only tacit, that are accepted by entitlement. Some presuppositions of the past, like Newton’s absolute simultaneity, are no longer accepted. Still, Newton’s laws constitute knowledge in the context of his presuppositions and preserve a vast scope of applications. Idealizations and the ceteris paribus clause also count as presuppositions in Stalnaker’s pragmatic sense. This explains how, in some contexts but not in others, one is entitled to ignore, e.g., friction. The version of contextualism on offer departs in some significant respects from Williams’s. First, presuppositions themselves are not included in knowledge. Instead, they form what Wittgenstein calls, in the face of the ultimate groundlessness of our believing, “a (shifty) river-bed of thought”. 
Once it is shifted, however, in a novel, more comprehensive context one can come to know the denial of some presuppositions of a former, less sophisticated context. Second, the truth-requirement for knowledge is withdrawn. Instead, knowledge in a context is defined as belief that is superwarranted relative to that context, i.e. it is warranted without defeat at some stage of inquiry and would remain so at every successive stage of inquiry as long as the presuppositions that constitute the context are kept unchallenged. Apart from explaining how science, including outdated science, is knowledge, the account on offer captures an aspect of the growth of knowledge that consists in falsifying some presuppositions. In a broader epistemological perspective, it is also applicable to general skeptical problems such as the question of the (im)plausibility of the brains-in-a-vat hypothesis. References: Cartwright, N. 1983 How the Laws of Physics Lie, Oxford: Clarendon Press. Ichikawa, J., Steup, M. 2012 The Analysis of Knowledge, in: Stanford Encyclopedia of Philosophy, Winter 2016 version (minor correction), https://plato.stanford.edu/archives/win2016/entries/knowledge-analysis/. Popper, K.R. 1972 Objective Knowledge, Oxford: OUP. Stalnaker, R. 2002 Common Ground, Linguistics and Philosophy 25: 701–721. Williams, M. 1996 Unnatural Doubts, Princeton University Press. Wittgenstein, L. 1969 On Certainty, ed. G.E.M. Anscombe and G.H. von Wright, trans. D. Paul and G.E.M. Anscombe. Oxford: Blackwell. |

17:15 | CANCELLED: Commutative transformations of theory structures PRESENTER: Vladimir Kuznetsov ABSTRACT. Reconstruction of scientific theories as complex [1], variable, dynamic and coordinated polysystems [2] has been supported by case studies of various actual theories [3]. In a scientific theory we distinguish subsystems with specific constitutive generative elements: names, languages, propositions, axioms, models, problems, operations, procedures, values, heuristics, approximations, applications, etc. As subsystems of the same theory, they are intimately interrelated, and a change of any element induces many changes both in its own and in other subsystems. Note that propositional and model-theoretic conceptions of a scientific theory consider its propositions and models, respectively, as constitutive elements. The use of the informal language of commutative diagrams [4] allows one to separate and classify various types of interconnected amendments of theory structures. Let α: X -> Y symbolize a transformation (in many cases it represents a relation) of one component X of a theory T into another component Y of T. For instance, if Y is a model M from T, then X can be a certain language L used to construct M. Let µ be a homogeneous transformation X -> X* such that there is an inverse transformation µ⁻¹. Since all elements in question belong to T, µ induces the homogeneous transformation π: Y -> Y*. The set {α, µ, π, ρ} is commutative if the transformation ρ: X* -> Y* is such that ρ = µ⁻¹ # α # π, where # is composition of transformations, read left to right. In fact, many real non-trivial transformations of theory elements are commutative. Let us, for example, reformulate some problem (P -> P*) in terms of a new model (M -> M*). We have here the set of four transformations 1) α: P -> M; 2) µ: P -> P*; 3) π: M -> M*; and 4) ρ: P* -> M*, which will be commutative if ρ = µ⁻¹ # α # π. 
A commutative transformation is T-internal if all its constituents belong to the same theory (e.g., Le Verrier's resolution of the problem of systematic discrepancies between Uranus's observed orbit and the one calculated from Newtonian classical mechanics, by reformulating it in terms of a new model of the Solar system that includes Neptune), and T-external if some constituents belong to different theories (Bohr's resolution of the problem of atomic instability in terms of his model of stationary orbits). 1. M. Burgin, V. Kuznetsov. Scientific problems and questions from a logical point of view // Synthese, 1994, 100, 1: 1-28. 2. A. Gabovich, V. Kuznetsov. Problems as internal structures of scientific knowledge systems // Philosophical Dialogs' 2015. Kyiv: Institute of Philosophy, 2015: 132-154 (in Ukrainian). 3. A. Gabovich, V. Kuznetsov. What do we mean when using the acronym 'BCS'? The Bardeen–Cooper–Schrieffer theory of superconductivity // Eur. J. Phys., 2013, 34, 2: 371-382. 4. V. Kuznetsov. The triplet modeling of concept connections. In A. Rojszczak, J. Cachro and G. Kurczewski (eds). Philosophical Dimensions of Logic and Science. Selected Contributed Papers from the Eleventh International Congress of Logic, Methodology, and Philosophy of Science. Dordrecht: Kluwer, 2003: 317-330. |
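The commutativity condition can be made concrete with a deliberately trivial toy of our own (integers standing in for theory components; nothing here is drawn from the abstract itself): represent α, µ and π as functions, define ρ by the left-to-right composition, and check that the square commutes.

```python
# α: X -> Y, µ: X -> X* (invertible), π: Y -> Y*, all modeled on integers.
def alpha(x):       # α: translate component X into component Y
    return 2 * x

def mu(x):          # µ: homogeneous transformation X -> X*
    return x + 1

def mu_inv(x_star): # µ⁻¹: inverse of µ
    return x_star - 1

def pi(y):          # π: induced homogeneous transformation Y -> Y*
    return 3 * y

def rho(x_star):    # ρ = µ⁻¹ # α # π, composed left to right: X* -> X -> Y -> Y*
    return pi(alpha(mu_inv(x_star)))

# The diagram commutes: going X -µ-> X* -ρ-> Y* agrees with X -α-> Y -π-> Y*.
for x in range(-5, 6):
    assert rho(mu(x)) == pi(alpha(x))
```

The same check, with richer objects in place of integers, is what verifying commutativity of a concrete theory transformation would amount to.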

Place: Faculty of Civil Engineering, CTU in Prague, Thákurova 2077/7, 166 29 Prague 6

The welcome reception will be held in the building neighbouring the congress venue; all participants as well as accompanying persons are welcome.