
Organizer: John Baldwin

This symposium builds on the proposed Authors and Critics session on Baldwin’s book: Model Theory and the Philosophy of Mathematical Practice: Formalization without Foundationalism. A key thesis of that book asserts: Contemporary model theory enables systematic comparison of local formalizations for distinct mathematical areas in order to organize and do mathematics, and to analyze mathematical practice.

Session I: Appropriate formalization for different areas of mathematics.

Session II: Abstract elementary classes and accessible categories.

Organizer: Steve Russ

The resources for study and scholarship on the thought and writings of Bernard Bolzano (Prague, 1781-1848) have been transformed by the ongoing publication of the Bernard Bolzano-Gesamtausgabe (Frommann-Holzboog, Stuttgart, 1969 - ). This edition is projected to have 132 volumes, of which 99 have already appeared. (See

https://www.frommann-holzboog.de/editionen/20.) The prodigious scale of the work testifies to the wide spectrum of Bolzano’s interests and insights, ranging from his theology lectures and ‘edifying discourses’, through social, political and aesthetic themes, to his major works on philosophy, logic, mathematics and physics. In his thinking and his life he personified the congress theme, ‘Bridging across academic cultures’. The availability of so much previously unpublished, and significant, material has contributed to an increasing momentum in recent decades for Bolzano-related research, including publications, PhD theses, translations, conferences, projects, reviews and grant awards. More than half of the Gesamtausgabe volumes overall are devoted to methodological or mathematical subjects.

The topic, and purpose, of this symposium is the presentation, and representation, of this thriving area of research which encompasses the history and philosophy of science and mathematics. We propose to divide the symposium into two sessions: Session A on the broader theme of methodology, Session B more specifically on mathematics. The two themes are not disjoint.

09:00 | Bolzano, Kant and the Evolution of the Concept of Concept ABSTRACT. My presentation discusses §120 of Bolzano’s Wissenschaftslehre. Bolzano writes: »Bin ich so glücklich hier einen Irrtum, der anderen unbemerkt geblieben war zu vermeiden, so will ich unverhohlen gestehen welchem Umstande ich es zu danken habe, nämlich nur der von Kant aufgestellten Unterscheidung von analytischen und synthetischen Urteilen, welche nicht stattfinden könnte, wenn alle Beschaffenheiten eines Gegenstandes Bestandteile seiner Vorstellung sein müssten« (Wissenschaftslehre §120). (“If I am so fortunate as to have avoided a mistake here which remained unnoticed by others, I will openly acknowledge what I have to thank for it, namely only the distinction Kant made between analytic and synthetic judgments, which could not obtain if all of the properties of an object had to be components of its representation” (Bolzano, WL, §120).) Bolzano recognized the importance of Kant’s insistence on the analytic/synthetic distinction and, like Kant, drew a sharp distinction between concept and object. On this distinction a new notion of the theoretical concept was crafted, for the distinction made both Kant and Bolzano aware of the errors of the traditional notion of a concept as something established by abstraction. Kant’s fundamentally significant distinction between analytic and synthetic judgments is necessarily bound up with the further development of the concept beyond the traditional notions of substance-concepts and abstraction (Cassirer, E., 1910, Substanzbegriff und Funktionsbegriff, Berlin: Verlag Bruno Cassirer). The whole edifice of rational knowledge therefore rested on the so-called Ontological Argument for the existence of God (Röd, W., 1992, Der Gott der reinen Vernunft, München: C.H. Beck). The kernel of this argument is the claim that the notion of the non-existence of God is a contradiction; for God is perfect and existence is a perfection.
Leibniz added to this argumentation: “from this argument we can conclude only that, if God is possible, then he exists. For we cannot safely use definitions for drawing conclusions, unless we know … that they include no contradictions” (Leibniz, in R. Ariew/D. Garber (eds.), Leibniz: Philosophical Essays, Hackett Publishing Company, Cambridge, p. 25). Kant emphasized that the principle of consistency only applies if an object is given. The statement that “a triangle has three angles”, says Kant, “does not enounce that three angles necessarily exist, but upon the condition that a triangle exists, three angles must necessarily exist in it” (Kant, Critique of Pure Reason, B 622). So Kant insisted on a distinction between characteristics of objects and parts of concepts. Bolzano was the first to recognize this clearly and to understand the consequences. |

09:30 | Bolzano’s theory of ground and consequence and the traditional theory of concepts PRESENTER: Annapaola Ginammi ABSTRACT. As is well known, Bernard Bolzano (1781–1848) was the first to offer a formally sophisticated account of (objective) explanation (Abfolge or grounding) in his Wissenschaftslehre (1837). Grounding is a relation between (collections of) truths Q and their objective reason or ground P, where P in some objective sense explains Q. Bolzanian grounding can be said to impose a hierarchy on truths: grounds are in some sense more fundamental than, and thus prior to, the truths that they ground, i.e. their consequences. To this day, it remains an open question under which conditions exactly Bolzano holds that a (collection of) truth(s) is the ground of another. State-of-the-art reconstructions of (sufficient conditions for) Bolzano’s grounding are given as deductive arguments satisfying certain conditions of simplicity and economy (cf. e.g. Roski & Rumberg 2016). Unfortunately, this and similar reconstructions disregard several of Bolzano’s claims about grounding, such as the requirement that a ground be at least as general as its consequence. In this talk we put forward an alternative account of grounding that does justice to Bolzano’s claims. We argue that a correct interpretation of Bolzano’s views on explanation must take into account Bolzano’s envisioning of a hierarchical ordering not only among the truths, but also among the concepts which make up a science. Such a hierarchy of concepts is a substantial part of the tradition of thinking about science originating from Aristotle’s Analytica Posteriora, which heavily influenced Bolzano’s ideal of science (de Jong & Betti 2008, de Jong 2001). According to this traditional conception, a science consists of some fundamental concepts, and all other concepts are defined from them according to the well-known model of definitions per genus proximum et differentiam specificam.
On this conception, accordingly, concepts are hierarchically ordered as genus and species. We will argue that the hierarchical ordering that grounding imposes on truths in Bolzano’s view emanates from the hierarchical ordering of the concepts which make up those truths. We will show that only by taking into account the traditional theory of concepts, including the age-old doctrine of the five so-called praedicabilia, can one account for Bolzano’s requirements for grounding in a satisfactory manner. We further strengthen our case by showing that our interpretation can account for certain other, general aspects of Bolzano's thinking about science, such as the reason why in Bolzano’s view sciences consist of synthetic truths only. One consequence of our account is that Bolzano's attitude to the traditional theory of concepts turns out to be less 'anti-Kantian' than usually maintained (cf. e.g. Lapointe 2011). References de Jong, W. (2001), ‘Bernard Bolzano, analyticity, and the Aristotelian model of science’, Kant-Studien 92, 328–349. de Jong, W. & Betti, A. (2008), ‘The Classical Model of Science: a millennia-old model of scientific rationality’, Synthese 174, 185–203. Lapointe, S. (2011), Bolzano’s Theoretical Philosophy: An Introduction, Palgrave Macmillan, New York. Roski, S. & Rumberg, A. (2016), ‘Simplicity and economy in Bolzano’s theory of grounding’, Journal of the History of Philosophy 54, 469–496. |

10:00 | Bolzano’s requirement of a correct ordering of concepts and its inheritance in modern axiomatics ABSTRACT. The question of the right order of concepts cannot be separated from the problem of rigor in mathematics and is usually formulated with reference to Aristotle’s distinction between ordo essendi and ordo cognoscendi: the search for rigor in science should include some kind of activity that could lead us from what is first for us to what is first in itself. Bolzano’s remarks about correctness of definitions and proofs are based on the requirement of a right order of concepts and truths. Recent literature has devoted great attention to the objective order of propositions in proofs, explaining it by association with the theory of grounding. Yet, scarce attention has been given to the order of concepts, which is related to the form of definitions and to the distinction between simple and complex concepts. The paper will investigate whether the order of concepts should be considered as having an ontological or epistemological value, given that ‘a concept is called simple only if we ourselves can distinguish no more plurality in it’. Bolzano’s view on the order of concepts will be reconstructed on the basis of his mathematical and logical works, in order to understand the relation between his logical and epistemological viewpoints. The ban on kind crossing as well as the use of philosophical notions (e.g. similarity) in geometrical proofs will be analyzed to verify whether a general hierarchical order of scientific concepts regulates the correct ordering of concepts in mathematics. A comparison with Wolff’s conception and the analysis of the definition of similarity of mathematical objects will suggest a tension, inherited from Leibniz, between the tendency to have a unique hierarchical order of all concepts and an order for each specific mathematical discipline. 
A further comparison with the investigations on explicit and implicit definitions developed by the Peano School will allow establishing whether, notwithstanding different syntactic formulations of definitions, Bolzano’s requirement of an order of concepts maintained some role up to Peano’s axiomatics. References [1] Betti, A. (2010). Explanation in metaphysics and Bolzano’s theory of ground and consequence. Logique et Analyse, 53(211):281–316. [2] Bolzano, B. (1837). Wissenschaftslehre. Versuch einer ausführlichen und größtentheils neuen Darstellung der Logik mit steter Rücksicht auf deren bisherige Bearbeiter. Seidel, Sulzbach. [3] Bolzano, B. (1981). Von der mathematischen Lehrart. Frommann-Holzboog, Stuttgart-Bad Cannstatt. [4] Bolzano, B. (2004). The mathematical works of Bernard Bolzano, ed. by S. Russ. Oxford University Press. [5] Centrone, S. (2016). Early Bolzano on ground-consequence proofs. The Bulletin of Symbolic Logic, 22(3). [6] Correia, F. and Schnieder, B. (2012). Metaphysical grounding: Understanding the structure of reality. Cambridge University Press. [7] de Jong, W. R. and Betti, A. (2010). The classical model of science: A millennia-old model of scientific rationality. Synthese, 174(2):185–203. [8] Johnson, D. M. (1977). Prelude to dimension theory: The geometrical investigations of Bernard Bolzano. Archive for History of Exact Sciences, 17(3):261–295. [9] Sebestik, J. (1992). Logique et mathématique chez Bernard Bolzano. Vrin, Paris. |

Organizer: Manuel Gustavo Isaac

Conceptual engineering is a fast-moving research program in the field of philosophical methodology. Considering concepts as cognitive devices that we use in our cognitive activities, it basically assumes that the quality of our conceptual apparatuses crucially determines the quality of our corresponding cognitive activities. On these grounds, conceptual engineering adopts a normative standpoint that means to prescribe which concepts we should have, instead of describing the concepts we do have as a matter of fact. And its ultimate goal as a research program is thus to develop a method to assess and improve the quality of any of our concepts working as such cognitive devices—that is, for the identification of improvable conceptual features (e.g. conceptual deficiencies) and the elaboration of correlated ameliorative strategies (e.g. for fixing the identified conceptual deficiencies). Given the ubiquity of deficient and improvable concepts, the potential outreach of conceptual engineering is arguably unlimited. But conceptual engineering is still a very young research program and little has been said so far as to how its method should be devised. The purpose of the MET4CE Symposium is to contribute to filling this theoretical gap. Its main aim will then be to propose critical reflections on the very possibility—whether and why (or why not)? how? to what extent?—of developing an adaptable set of step-by-step instructions for the cognitive optimization of our conceptual apparatuses. With this in mind, the common background of the symposium will be made of the Carnapian method of explication rebooted as an ameliorative project for (re-)engineering concepts. Against this background, a first objective of the symposium will be to present ways to procedurally recast Carnapian explication with complementary frameworks (e.g. via reflective equilibrium, metrological naturalism, formalization, or conceptual modeling) for the purposes of conceptual engineering. 
A second objective will next be to present ways to extend the scope of Carnapian explication as a template method with alternative frameworks (e.g. via conceptual history/genealogy, experimental philosophy, or constructionism in philosophy of information), again, for the purposes of conceptual engineering. And finally, a third objective of the symposium will be to evaluate these upgraded methodological frameworks for (re-)engineering concepts by comparison with competing theories of conceptual engineering that reject the very possibility of developing any template procedural methods for (re-)engineering concepts (such as Cappelen’s ‘Austerity framework’). The expected outcome of the MET4CE Symposium is thereby to provide conceptual engineering with proven guidelines for making it an actionable program for the cognitive optimization of our conceptual apparatuses.

09:00 | Organizational Etiological Teleology: a Selected-Effect Approach to Biological Self-Regulation PRESENTER: Cristian Saborido ABSTRACT. According to selected-effects theories (for instance, Neander 1991; Griffiths 1993), selection is a source of teleology: purposes are effects preserved or promoted through a selective process. Selected-effects theories are favored by several authors who want to claim that Darwinian evolution introduces teleology in the biological world. For the purposes of this presentation, we take selected-effects theories for granted, although we will provide some motivation for them by appeal to certain response-dependent meta-normative views about value, more specifically, views according to which value is generated by evaluative responses. While most selected-effects theories concentrate on natural selection (for an exception, see Garson 2017), our goal is to argue that there are other types of selective processes in biology and that such processes should be seen as giving rise to distinctive types of evaluative standards. More specifically, we suggest that biological self-regulation (the mechanisms by which organisms monitor and regulate their own behavior, and which have been the object of careful study in the biological sciences) can be seen as a selective process. In general, biological organisms include dedicated regulatory mechanisms that compensate for possible perturbations and keep the state of the system within certain ranges (Bich et al 2015). The pressures that such self-regulatory submechanisms exercise on the states of the organism are a form of discriminatory reinforcement, as a result of which certain tendencies are inhibited while others are promoted. It is reasonable, therefore, to characterize biological self-regulation as a selective process.
So, those who accept selected-effects theories of teleology should also grant that biological self-regulation is a source of teleology – at least to the same extent that selected-effects theories are taken to vindicate the view that biological teleology is generated by natural selection. The purposes and evaluative standards introduced by self-regulation are independent of, and arguably sometimes in conflict with, the standards associated with natural selection. Given that self-regulation is ubiquitous in the biological world, it is to be expected that the evaluative standards generated by it will play a prominent role in our explanations of biological phenomena. We think that the approach sketched in this paper offers an appealing integrative picture of the evaluative dimension of biology. Bich, L., Mossio, M., Ruiz-Mirazo, K., & Moreno, A. (2015). Biological regulation: controlling the system from within. Biology & Philosophy, 31(2): 237–65. Garson, J. (2017). A generalized selected effects theory of function. Philosophy of Science, 84(3), 523–543. Griffiths, P. E. (1993). Functional analysis and proper functions. BJPS, 44(3), 409–422. Neander, K. (1991). Functions as selected effects: The conceptual analyst's defense. Philosophy of Science, 58(2), 168–184. |

09:30 | Organisms as Situated Models PRESENTER: Rachel Ankeny ABSTRACT. Organisms as Situated Models Rachel A. Ankeny & Sabina Leonelli Proposal for CLMPS Prague August 2019 The philosophical literature now abounds on the use of organisms as model organisms (for an early contribution, see Ankeny & Leonelli 2011), and draws heavily on historical and sociological work which tends to focus on the standardisation of organisms as a critical stage in related research processes. This paper builds on this literature while taking up a new philosophical angle, namely that the environment, experimental set-ups, and other conditions in which the organism is situated are critical to its use as a model organism (for an early discussion of this approach in historical context, see Ankeny et al. 2014; cf. Nelson 2017 on scaffolds). In such cases, the organism itself is only one component in a more complex model. Hence we explore how material systems can ground inferences, extrapolations, and representations made using these organisms, using a series of examples based on recent historical cases including the use of cages and other housing systems with rodents and farm animals (Ramsden 2009; Kirk & Ramsden 2018), and various types of field experiments and stations (drawing on Kohler 2002; de Bont 2015; Raby 2015 among other historical work). We argue that this type of situatedness is a critical part of what constitutes the repertoires connected with model organisms and other uses of experimental organisms (Ankeny & Leonelli 2016), and that the materiality of this situatedness is an essential feature which shapes the way in which such organisms are used in scientific research practices. 
In addition, this analysis assists us in making sense of the variety of modelling activities associated with the use of organisms in laboratories and elsewhere across various fields within biology (e.g., Meunier 2012; Love & Travisano 2013; Germain 2014; Gross & Green 2017 among many others), while at the same time clarifying why organisms themselves have such an important ‘anchoring’ role for these practices (see especially Rheinberger 2006). References Ankeny RA, Leonelli S (2011). What’s So Special about Model Organisms? Studies in History & Philosophy of Science 41: 313–23 Ankeny RA, Leonelli S (2016). Repertoires: A Post-Kuhnian Perspective on Scientific Change and Collaborative Research. Studies in History and Philosophy of Science 60: 18–28 Ankeny RA, Leonelli S, Nelson NC, Ramsden E (2014). Making Organisms Model Human Behavior: Situated Models in North American Alcohol Research, since 1950. Science in Context 27: 485–509. de Bont R (2015). Stations in the Field: A History of Place-Based Animal Research, 1870-1930. Chicago: University of Chicago Press. Germain P-L (2014). From Replica to Instruments: Animal Models in Biomedical Research. History & Philosophy of the Life Sciences 36: 114–28. Gross F, Green S (2017). The Sum of the Parts: Large-scale Modeling in Systems Biology. Philosophy and Theory in Biology 9: 1–26. Kirk RGW, Ramsden E (2018) Working across Species down on the Farm: Howard S. Liddell and the Development of Comparative Psychopathology, c. 1923–1962. History and Philosophy of the Life Sciences 40: 24 (online first). Kohler R (2002). Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. Chicago: University of Chicago Press. Love AC, Travisano M (2013) Microbes modeling ontogeny. Biology and Philosophy 28: 161–88. Meunier R (2012). Stages in the Development of a Model Organism as a Platform for Mechanistic Models in Developmental Biology: Zebrafish, 1970–2000.
Studies in History and Philosophy of Biological and Biomedical Sciences 43: 522–31. Nelson NC (2017). Model Behavior: Animal Experiments, Complexity, and the Genetics of Psychiatric Disorders. Chicago: University of Chicago Press. Raby M (2015). Ark and Archive: Making a Place for Long-term Research on Barro Colorado Island, Panama. Isis 106: 798–824. Ramsden E (2009). Escaping the Laboratory: The Rodent Experiments of John B. Calhoun and their Cultural Influence. Journal of Social History 42: 761–92. Rheinberger, H-J (2006). An Epistemology of the Concrete: Twentieth-Century Histories of Life. Durham, NC: Duke University Press. |

10:00 | Fitness incommensurability and evolutionary transitions in individuality ABSTRACT. The world of living objects possesses a hierarchical nature. Genes are nested within chromosomes, chromosomes within cells, cells within organisms, organisms within groups, etc. This hierarchical attribute of the natural world is currently considered a consequence of the fact that evolution is a process that not only selects individuals, but also leads to the emergence of higher-level individuals. These events, called evolutionary transitions in individuality (ETIs), consist of mergers of autonomously reproducing units, to the extent that, after an ETI, such units no longer reproduce independently, but jointly, as a single entity. One of the most outstanding examples of an ETI is endosymbiosis, a process during which a host engulfs a free-living bacterium and subsequently (on an evolutionary time scale) transforms it into a part of its body, thus rendering it incapable of a free-living lifestyle. Although this might seem to be a rare event, it is currently established among biologists and philosophers that endosymbiosis has had a tremendous effect on the evolutionary history of species. For instance, the mitochondrion, one of the most important organelles within cells, has an endosymbiotic origin. Due to its extraordinary role in the evolution of species, endosymbiosis has recently been the object of careful study. Specifically, its genetic aspect has been studied intensively. However, the ecological aspect of endosymbiotic events is still poorly understood, especially the question of whether endosymbiosis is a kind of parasitism or, perhaps, mutualism for the endosymbiont. In other words, figuring out whether endosymbiosis reduces or enhances the fitness of the bacterium in comparison to its free-living relatives is a hard nut to crack. Therefore, the popular approach is to argue that endosymbiosis is a kind of slavery, i.e. the endosymbiont is a slave of the host. 
Although metaphorically this analogy sounds interesting, it has not provided much illumination. The aim of my talk is to show that science can obtain a more precise understanding of the ecological aspects of endosymbiosis, one that transcends shallow analogies. I will do this by using the idea of fitness incommensurability, which states that it is not always possible to compare the fitness of two objects. As a case study, I will analyse the origin of aphids’ endosymbiotic bacteria, Buchnera sp., and show that, in this symbiotic system, inquiring about the fitness benefits to the endosymbiont is not theoretically justified. As a result, I will argue that asking whether endosymbiosis is beneficial or harmful to the bacteria is not always appropriate, and thus, before we start looking for an answer to such a question, we should first determine whether, in a given symbiotic system, it makes sense to pose it at all. |

09:00 | Deterministic and Indeterministic Situations ABSTRACT. The arguments for establishing the compatibility of determinism and chance usually turn on different levels of description of reality. Different levels of description delineate different sets of possible worlds with their associated state spaces and chance functions. I try a different route to argue for the compatibility of some form of determinism with indeterminism by explicating these ideas in terms of truth-maker semantics, as developed by Kit Fine. I believe the appeal of this approach is that the point can be made without having recourse to a linguistic framework to delineate different levels of description. Instead we can model different levels of reality in a more direct manner. To that end, I start with a situation space (S, ⊑), where S (the set of situations) is non-empty and ⊑ (part) is a partial order on S. Informally, the situation of my tasting a piece of chocolate is part of my tasting and finishing the chocolate bar. The situation space is also endowed with a fusion operator, which enables us to talk about extensions of a situation. The maximal extensions of a situation make up possible worlds (called world-states), but we need not define determinism at that global level. Situations can be actual or possible, and we denote the possible situations, a subset of S, by S◊, so that we can define compatible or incompatible situations. My tasting the chocolate is compatible with the situation of my finishing it and also compatible with the situation of my not finishing it. We call a situation s in S deterministic iff whenever some s’ and t in S are such that s’⊑s and s’⊑t, then either s⊑t or t⊑s. That is, s is deterministic iff it is part of the unique extension of its sub-situations. Suppose we aim to model a micro-level reality grounding the situations of S.
Let us assume we have another situation space (Sm, ⊑m), with its set of possible situations denoted by Sm◊, and fusion operator ⊔m, satisfying the mereological conditions sketched above. Assume for each s in S◊, there exists a subset Sm(s) of Sm such that the fusions of the elements of Sm(s) make up any part of s. More precisely, if s’⊑s, then s’ is equivalent to some ⊔m sm,i, where {sm,i} is a subset of Sm(s). What I have described so far does not preclude the possibility that the micro-level is identical with the macro-level. That is not a shortcoming, as it can very well be the case in some possible worlds. The interesting possibilities, however, are when the levels differ, and allow us to find examples when the situations are deterministic at the macro-level and indeterministic at the micro-level or vice versa. That possibility is what I illustrate in my talk. I also dwell on the advantages of this approach for its avoidance of a chance function to represent indeterminism, invoking Norton's indeterministic dome example. |
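The determinism condition on a situation space lends itself to a quick computational check. The following sketch is an illustration added for concreteness (not part of the abstract): a hypothetical finite poset based on the chocolate example, with a function testing the condition "whenever s’⊑s and s’⊑t, either s⊑t or t⊑s".

```python
from itertools import product

# Hypothetical toy situation space: situations are strings, and part_of
# encodes the partial order ⊑ (reflexive closure added below; the toy
# order has no longer chains, so transitivity holds trivially).
situations = {"taste", "finish", "not_finish", "taste+finish", "taste+not_finish"}
part_of = {
    ("taste", "taste+finish"),
    ("taste", "taste+not_finish"),
    ("finish", "taste+finish"),
    ("not_finish", "taste+not_finish"),
}
part_of |= {(s, s) for s in situations}  # reflexive closure

def leq(a, b):
    """a ⊑ b: situation a is part of situation b."""
    return (a, b) in part_of

def deterministic(s):
    """s is deterministic iff whenever s' ⊑ s and s' ⊑ t,
    either s ⊑ t or t ⊑ s (the definition from the abstract)."""
    return all(
        leq(s, t) or leq(t, s)
        for s2, t in product(situations, repeat=2)
        if leq(s2, s) and leq(s2, t)
    )

# "taste" is deterministic in this toy model: every extension of its
# parts is comparable with it.
print(deterministic("taste"))            # True
# "taste+finish" is not: its part "taste" also extends to the
# incomparable situation "taste+not_finish".
print(deterministic("taste+finish"))     # False
```

The branching at "taste" is exactly the compatibility of one situation with two incompatible extensions mentioned in the abstract; the check shows how that branching blocks determinism for the extensions that lie on one branch.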

09:30 | How are mathematical structures determined ABSTRACT. Despite rather general consensus among the philosophers of mathematics that the objects of mathematics are structures (patterns, structural possibilities) (cf. e.g. Hellman 2005, Horsten 2012, Landry 2016, Shapiro 2016), there seems to be some disagreement about how mathematical structures are determined. The traditional view holds that, since all concepts of an axiomatized mathematical theory are defined implicitly, by their mutual relations specified by the axioms, the whole theory is nothing but a general structure determined by these relations. The meaning of all the concepts is strictly internal to the theory, their names being arbitrary tokens. As Hilbert neatly put it, "every theory is only a scaffolding or schema of concepts together with their necessary relations to one another, and ... the basic elements can be thought of in any way one likes" (cf. e.g. Reck and Price 2000, Shapiro 2005). The potentially competing view is connected with the modern abstract algebra of category theory. Some categories may be interpreted as domains of objects sharing some particular mathematical structure (e.g. group structure) taken together with their mutual ("structure-preserving") morphisms. Whatever their original motivation, within any category the objects are primitive concepts, lacking any "internal properties", determined strictly by their mutual relations in the form of the category morphisms. Given a category, say, of all groups and group homomorphisms, we can define the group structure wholly in categorial terms, without ever invoking the "group elements" or the "group operations". The advantage is that, by avoiding mention of any "underlying substance", structures are determined without unnecessary "non-structural" connotations. Moreover, in this way the relevant notion of isomorphism is also automatically obtained (cf. e.g. Awodey 1996, Landry and Marquis 2005).
Exponents of the respective views regard them as competing (Shapiro 2005, p. 68), even "radically different" (Awodey 2013, p. 4). I want to argue that the difference lies in the question asked rather than in the actual answer provided. There are two levels on which we can consider the determination of mathematical structures. First, any theory determines a structure. There exist many models of a theory -- actual mathematical systems satisfying the structural requirements of the theory. These models are not mutually isomorphic and each exhibits some properties irrelevant vis-a-vis the given theory. It is on us, human mathematicians, to ascertain that a given concrete system matches the structural requirements of the given theory. To do this, we have to step outside of the theory proper; in relation to this theory, we enter the realm of meta-mathematics. Second, if we want to speak mathematically about structures, we need to stay within some theory, which means we have to embed them into its structure. As objects of a theory, they are determined strictly by their positions within the whole structure of the theory. Although it is particularly apt for this purpose, category theory does not differ in this sense from other theories (cf. Resnik 1997, pp. 201–224). To describe a structure as a position within another mathematical structure, without invoking other underlying structures, and without overstepping the limits of mathematics proper, constitutes a laudable exercise. Category theory in particular is instrumental in this. Yet, to start determining structures using a theory, we need to grasp the structure of the theory to begin with. Determining its structure cannot, ultimately, be relegated to another mathematical theory. Moreover, the theory can only determine structures of the same or lesser complexity: we have to be operating within a more complex structure than the one we want to mathematically determine. Awodey, S. (1996).
"Structure in Mathematics and Logic: A Categorical Perspective", In "Philosophia Mathematica", 4(3). Awodey, S. (2013). "Structuralism, Invariance, and Univalence", In "Philosophia Mathematica", 22(1). Hellman, G. (2005). "Structuralism", In "The Oxford Handbook of Philosophy of Mathematics and Logic". Horsten, L. (2012). "Philosophy of Mathematics", In "The Stanford Encyclopedia of Philosophy". Landry, E. and Marquis, J-P. (2005). "Categories in Context: Historical, Foundational, and Philosophical", In "Philosophia Mathematica", 13(1). Landry, E. (2016). "Mathematical Structuralism", In "Oxford Bibliographies Online Datasets". Reck, E. H. and Price, M. P. (2000). "Structures and Structuralism in Contemporary Philosophy of Mathematics", In "Synthese", 125(3). Resnik, M. D. (1997). "Mathematics as a Science of Patterns". Shapiro, S. (2005). "Categories, Structures, and the Frege-Hilbert Controversy: The Status of Meta-mathematics", In "Philosophia Mathematica", 13(1). Shapiro, S. (2014). "Mathematical Structuralism", In "The Internet Encyclopedia of Philosophy". |
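The claim above that group structure can be defined wholly in categorial terms can be illustrated with the standard textbook notion of a group object (a sketch added for reference, not part of the abstract): in a category C with finite products and terminal object 1, a group object is an object G equipped with morphisms m : G × G → G, e : 1 → G and i : G → G such that

```latex
% Group axioms stated purely via morphisms, with no reference to elements:
\begin{align*}
  m \circ (m \times \mathrm{id}_G) &= m \circ (\mathrm{id}_G \times m)
    && \text{(associativity)} \\
  m \circ \langle e \circ {!},\, \mathrm{id}_G \rangle &= \mathrm{id}_G
    = m \circ \langle \mathrm{id}_G,\, e \circ {!} \rangle
    && \text{(unit)} \\
  m \circ \langle i,\, \mathrm{id}_G \rangle &= e \circ {!}
    = m \circ \langle \mathrm{id}_G,\, i \rangle
    && \text{(inverses)}
\end{align*}
```

where ! : G → 1 is the unique morphism to the terminal object and ⟨f, g⟩ is product pairing. In the category of sets this recovers ordinary groups; the point made in the abstract is that nothing in these equations mentions "elements" of an underlying set.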

10:00 | Modelling Minimalism and Trivialism in the Philosophy of Mathematics Through a Notion of Conceptual Grounding PRESENTER: Andrea Sereni ABSTRACT. Minimalism [Linnebo 2012, 2013, 2018] and Trivialism [Rayo 2013, 2015] are two forms of lightweight platonism in the philosophy of mathematics. Minimalism is the view that mathematical objects are thin in the sense that “very little is required for their existence” [Linnebo 2018, 3]. Trivialism is the view that mathematical theorems have trivial truth-conditions, in the sense that “nothing is required of reality in order for those conditions to be satisfied” [Rayo 2013, 98]. Minimalism and trivialism can be developed on the basis of abstraction principles: universally quantified biconditionals stating that the same abstract corresponds to two entities of the same type if and only if those entities belong to the same equivalence class (e.g. Hume’s Principle, HP, which states that the cardinal number of the concept F is identical to the cardinal number of G if and only if F and G can be put into one-to-one correspondence; cf. Frege [1884, § 64]). The minimalist claims that the truth of the right-hand side of HP suffices for the truth of its left-hand side. The trivialist claims that for the number of F to be identical with the number of G just is for F and G to stand in one-to-one correspondence. Moreover, the minimalist and the trivialist alike submit that the notion of sufficiency, on one side, and the ‘just is’-operator, on the other, cannot be identified with, and are not to be interpreted as, a species of grounding or metaphysical dependence. More precisely, Linnebo [2018, 18] requires that “any metaphysical explanation of [the right-hand side] must be an explanation of [the left-hand side], or at least give rise to such an explanation”; the notion of grounding, by contrast, would fail to provide an explanation where one is required.
Rayo [2013, 5] argues that ‘just is’-statements are not understood in such a way that “[they] should only be counted as true if the right-hand side ‘explains’ the left-hand side, or it is in some sense ‘more fundamental’”; the notion of grounding, by contrast, would introduce an explanation where none is expected. In this paper we argue that both minimalism and trivialism can be modelled through an (appropriate) notion of grounding. We start off by formulating a ‘job description’ for the relevant notion(s). As for minimalism, grounding must be both non-factive – viz. a claim of grounding, ‘A grounds B’, must not entail that either A or B are the case – and non-anti-reflexive – viz., it must not be the case that for any A, it is not the case that A grounds A. As for trivialism, the relevant notion of grounding must be non-factive and reflexive – viz., for any A, it is the case that A grounds A. Alternatively, trivialism can be formulated by introducing the notion of portion of reality, consisting in a (possibly fundamental) fact and whatever is grounded by that fact, and arguing that the two sides of a ‘just is’-statement represent (different) facts belonging to the same portion of reality. We then suggest some definitions of both the minimalist’s notion of sufficiency, on one side, and of the trivialist’s ‘just is’ operator, on the other, in terms of (weak) ground. Finally, we point out that a suitable elaboration of the notion of conceptual grounding [Correia & Schnieder 2012, 32], which takes into account the relations of priority among the concepts by which the two sides of HP are described, effectively responds to Linnebo’s and Rayo’s objections. |
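For reference, Hume's Principle (HP), as glossed in the abstract above, is standardly rendered as a second-order abstraction principle:

```latex
% Hume's Principle: the number of the Fs is identical to the number of
% the Gs iff the Fs and the Gs stand in one-to-one correspondence.
\forall F\,\forall G\;\bigl(\,\#F = \#G \;\leftrightarrow\; F \approx G\,\bigr)
```

Here $\#F$ abbreviates "the cardinal number of the concept $F$" and $F \approx G$ abbreviates the claim that there is a one-to-one correspondence between the $F$s and the $G$s.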

09:00 | Poincaré Read as a Pragmatist ABSTRACT. Although there are scant direct connections between Poincaré and the pragmatists, he has been read as one from early on, for example by René Berthelot (1911). Berthelot’s idea was to present Poincaré as the most objective of the pragmatists, while presenting Nietzsche as the most subjective. The idea of a book on pragmatism based on two authors, neither of whom is typically placed in the canon of pragmatism, may seem bizarre, but there is a compelling logic to looking at the extremes in order to define what pragmatism is and to find common themes throughout the movement. Poincaré certainly shares some themes with the pragmatists, especially the idea of a human element in knowledge, which can be seen in his theory of the role that conventions play in science. Poincaré also emphatically rejects a metaphysically realist account of truth as correspondence to an external reality. Perhaps wisely, he does not specify precisely what he does mean by truth, but he frequently uses the language of “useful” or “convenient” theories. Of course, for Poincaré there are limits to conventions. First, he holds that conventions are guided by experience, so that we are more likely to choose certain alternatives. Second, he directly and forcefully rejects Le Roy’s interpretation that conventions are found everywhere in science. Poincaré insisted that there are empirical facts, along with conventions. His position is easily comparable to Dewey’s insistence that science is objective even if we reject the metaphysically realist account of representation and hold that values and aims play a role in defining scientific knowledge. Besides clarifying Poincaré’s philosophy of science, reading him as a pragmatist puts his writings into a larger context. The development of 20th-century philosophy was influenced heavily by dramatic developments in mathematics and physics.
Poincaré was a pioneer in incorporating these developments into philosophy of science, and his pragmatic attitude towards the development of non-Euclidean geometries and relativity in physics was a profoundly influential contribution to the philosophy of science. The development and professionalization of the philosophy of science is often seen as part of the eclipse of pragmatism. In fact, pragmatic ideas were used in many areas of the philosophy of science and continue to provide guidance in current debates. Indeed, pragmatism was always a form of scientific philosophy, maintaining a connection to scientists and philosophers of science. From our current perspective, the pragmatists were right on several issues where they disagreed with the logical positivists. Pragmatists advocated the continuity of inquiry into values and natural science, a type of holism, and thoroughgoing fallibilism, and they focused on the practice of science rather than on its logical reconstruction. Reading Poincaré as a pragmatist will give us a new perspective on the development of the philosophy of science. Berthelot, René. 1911. Un Romantisme Utilitaire: Étude sur le Mouvement Pragmatiste; le Pragmatisme chez Nietzsche et chez Poincaré. Paris: Félix Alcan. |

09:30 | Three Ways to Understand the Inductive Thoughts of Whewell PRESENTER: Xiaoming Ren ABSTRACT. The inductive thinking of Whewell has long been underestimated, and there is a trend of simply treating his theory as a typical version of hypothetico-deductivism. Our research has found, however, that besides the classical hypothetico-deductivist interpretation, Whewell's thought can also be interpreted in terms of coherence theory or of inference to the best explanation. These three interpretations are, moreover, mutually intertwined. In our opinion, the existence of so many ways to understand Whewell's inductive thought shows, on the one hand, that his ideas are complicated and profound and therefore deserve in-depth study; on the other hand, it shows that our understanding of the nature of induction is still rather poor, and that we must take into account that the coherence standard is indispensable for a system of inductive logic. References [1] Aleta Quinn. Whewell on classification and consilience. Studies in History and Philosophy of Biological and Biomedical Sciences, 64 (August 2017), pp. 65–74. [2] Cobb, Aaron D. History and scientific practice in the construction of an adequate philosophy of science: revisiting a Whewell/Mill debate. Studies in History and Philosophy of Science, 42 (2011), 1, pp. 85–93. [3] Dov M. Gabbay. Handbook of the History of Logic, Volume 4. Elsevier, Oxford, UK, 2008. [4] Gilbert H. Harman. The Inference to the Best Explanation. The Philosophical Review, 74(1) (Jan. 1965), pp. 88–95. [5] Gregory J. Morgan. Philosophy of Science Matters. Oxford University Press, 2011. |

10:00 | The Nascency of Ludwik Fleck’s Polemics with Tadeusz Bilikiewicz ABSTRACT. The debate between Fleck and Bilikiewicz—a historian and philosopher of medicine—took place shortly before the outbreak of WWII and remained virtually unnoticed until 1978; somewhat wider recognition of their exchange became possible only when English (Löwy 1990) and German (Fleck 2011) translations appeared. The polemic basically concerns the understanding of the concept of style and the influence of the environment on scientific activity as well as on its products; it starts as a review of Bilikiewicz’s book (1932), in which a historical account of the development of embryology in the early and late Baroque was interwoven with (at times) bold sociological remarks. Commentators on the debate were quick to notice that the claims made by Fleck at that time are crucial for the understanding of his position—especially because they support its non-relativist reading. While the importance of the controversy has been unanimously acknowledged, its assessments so far have been defective for two reasons. First, for decades the views of Bilikiewicz were known only from the short and rather critical presentation given by Fleck, which put their discussion into an inadequate perspective. Second, for over 40 years it remained a complete puzzle how this symposium originated. This paper aims to close these gaps. Thus, on the one hand, it indicates the gist of the disputation between Fleck and Bilikiewicz within the context of Bilikiewicz’s views; on the other—and more importantly—it shows its genesis, based on recently discovered and unpublished archival materials. Their preserved correspondence gives an opportunity to advance some hypotheses about the aims and hopes tied to the project, but also about its failure. Bibliography Bilikiewicz, Tadeusz (1932). Die Embryologie im Zeitalter des Barock und des Rokoko, Leipzig: Georg Thieme Verlag. — (1990a).
“Comments on Ludwik Fleck’s ‘Science and Social Context’”, in: Ilana Löwy (ed.). The Polish School of Philosophy of Medicine. From Tytus Chalubinski (1820–1889) to Ludwik Fleck (1896–1961), Dordrecht: Kluwer, pp. 257–266. — (1990b). “Reply to the Rejoinder by Ludwik Fleck”, ibid., pp. 274–275. Fleck, Ludwik (1990a). “Science and Social Context”, ibid., pp. 249–256. — (1990b). “Rejoinder to the Comments of Tadeusz Bilikiewicz”, ibid., pp. 267–273. — (2011). Denkstile und Tatsachen. Gesammelte Schriften und Zeugnisse, Sylwia Werner, Claus Zittel (Hrsg.), Suhrkamp Taschenbuch Verlag, Berlin, pp. 327–362. |

09:00 | Mutually inverse implication inherits from and improves on material implication ABSTRACT. The author constructs mutually-inversistic logic with mutually inverse implication ≤-1 as its core. The truth table for material implication correctly reflects the establishment of A being a sufficient but not necessary condition of B. The truth table for material equivalence correctly reflects the establishment of A being a sufficient and necessary condition of B. A≤-1B denotes A being a sufficient condition of B, and its truth table of establishment combines the two truth tables: the first, third, and fourth rows of both truth tables are T, F, and T respectively, so the first, third, and fourth rows of the truth table of establishment for ≤-1 are T, F, and T respectively; the two truth tables differ on the second row, so the second row of the truth table of establishment for ≤-1 is n (n denotes "need not determine whether it is true or false"). After an implicational proposition has been established, it can be employed as the major premise of hypothetical inference. In classical logic, the affirmative expression of hypothetical inference is made in this way: both A materially implying B and A being true is the fourth row of the truth table for material implication, in which B is also true. The author argues that this is incorrect. There is a fundamental principle in philosophy: human cognition proceeds from the known to the unknown. There is a fundamental principle in mathematics: the evaluation of a function proceeds from the arguments to the value; if we want to evaluate from the value to the arguments, then we should employ its inverse functions. In order to mathematize human cognition, we let the known be the arguments and the unknown be the value, so that human cognition from the known to the unknown becomes the evaluation of a function from the arguments to the value.
The truth table for material implication is a truth function in which A and B are the known, the arguments, and A materially implying B is the unknown, the value; therefore, it can only be employed to establish A materially implying B from A and B. After A materially implying B has been established, it becomes known, becomes an argument. In the generalized inverse functions of the truth table for material implication, by contrast, A materially implying B is the known, the argument; therefore, we can employ these generalized inverse functions to make hypothetical inference. Following this clue, the author constructs two generalized inverse functions for the truth table of establishment for ≤-1: one for the affirmative expression of hypothetical inference, the other for the negative expression of hypothetical inference. Mutually inverse implication is free from the implicational paradoxes. Reference: Xunwei Zhou, Mutually-inversistic logic, mathematics, and their applications. Beijing: Central Compilation & Translation Press, 2013. |
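As a reading aid only (an illustrative reconstruction, not the author's own formalism), the combination of the two truth tables described in the abstract can be sketched in Python, assuming the row order (A, B) = TT, FT, TF, FF that the abstract's row-by-row description implies:

```python
# Sketch of the "truth table of establishment" for <=^{-1}, built by
# combining the truth tables for material implication and material
# equivalence, row by row. Row order assumed: TT, FT, TF, FF.

ROWS = [(True, True), (False, True), (True, False), (False, False)]

def material_implication(a, b):
    return (not a) or b          # A -> B

def material_equivalence(a, b):
    return a == b                # A <-> B

def establishment(a, b):
    """Where the two tables agree, keep that value; where they differ,
    return 'n' ("need not determine whether it is true or false")."""
    imp, eqv = material_implication(a, b), material_equivalence(a, b)
    if imp == eqv:
        return 'T' if imp else 'F'
    return 'n'

table = [establishment(a, b) for a, b in ROWS]
print(table)  # -> ['T', 'n', 'F', 'T'], matching the abstract's description
```

The two tables agree on rows one, three, and four (T, F, T) and disagree only on the second row, which therefore receives the value n.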

09:30 | Expansions of relevant logics with a dual intuitionistic type negation PRESENTER: José M. Méndez ABSTRACT. Da Costa's paraconsistent logic C$\omega $ is axiomatized by adding to positive intuitionistic logic H$_{+}$ the \textquotedblleft Principle of Excluded Middle\textquotedblright\ (PEM), $A\vee \lnot A$, and \textquotedblleft Double Negation Elimination\textquotedblright\ (DNE), $\lnot \lnot A\rightarrow A$ (cf., e.g., [2]). Richard Sylvan (\textit{n\'{e}} Routley) notes that \textquotedblleft C$\omega $ is in certain respects the dual of intuitionistic logic\textquotedblright\ ([4], p. 48) due to the following facts (cf. [4], pp. 48-49): (1) both C$\omega $ and intuitionistic logic H expand the positive logic H$_{+}$; (2) H rejects PEM but accepts the \textquotedblleft Principle of Non-Contradiction\textquotedblright\ (PNC), $\lnot (A\wedge \lnot A)$; and (3) H accepts \textquotedblleft Double Negation Introduction\textquotedblright\ (DNI), $A\rightarrow \lnot \lnot A$, but rejects DNE. Sylvan adds ([4], p. 49): \textquotedblleft This duality also takes a semantical shape: whereas intuitionism is essentially focused on evidentially incomplete situations excluding inconsistent situations, the C systems admit inconsistent situations but remove incomplete situations.\textquotedblright\ The aim of this paper is to define an unreduced Routley-Meyer semantics for a family of expansions of the minimal De Morgan relevant logic B$_{\text{M}}$ with a basic dual intuitionistic negation in Sylvan's sense. In order to fulfill this aim, we shall proceed as follows. First of all, it has to be remarked that it is not possible to give an RM-semantics to logics weaker than (not containing) Sylvan and Plumwood's minimal De Morgan logic B$_{\text{M}}$ (cf. [1]).
Consequently, the minimal dual intuitionistic logic in this paper is the logic Db, which is the result of expanding B$_{\text{M}}$ with a basic dual intuitionistic negation in Sylvan's sense (\textquotedblleft D\textquotedblright\ stands for \textquotedblleft dual intuitionistic negation\textquotedblright ; and \textquotedblleft b\textquotedblright\ for basic). Once Db is defined, we shall build a family of its extensions included in a dual intuitionistic expansion of positive (i.e., negationless) G\"{o}delian logic G3 (cf. [5]), which can here be named G3$^{\text{D}}$. All logics in this family are given an unreduced RM-semantics w.r.t. which they are sound and complete. Also, all logics in this family are shown to be paraconsistent in the sense that there are non-trivial inconsistent theories definable upon each one of them. Finally, it will be proved that the dual intuitionistic negation introduced in this paper and the De Morgan negation characteristic of relevant logics are independent in the context of G3$^{\text{D}}$. It has to be noted that Sylvan's extension of C$\omega $, CC$\omega $, does not include Db and, consequently, neither does it contain any of the logics defined in the paper.\bigskip ACKNOWLEDGEMENTS: Work supported by research project FFI2017-82878-P of the Spanish Ministry of Economy, Industry and Competitiveness. \begin{thebibliography}{9} \bibitem{} Brady, R. T. (ed.). (2003). \textit{Relevant Logics and Their Rivals}, vol. II. Ashgate, Aldershot. \bibitem{} Da Costa, N. C. A. (1974). On the theory of inconsistent formal systems. \textit{Notre Dame Journal of Formal Logic}, 15(4), 497--510. \bibitem{} Routley, R., Meyer, R. K., Plumwood, V., Brady, R. T. (1982). \textit{Relevant Logics and their Rivals}, vol. 1. Atascadero, CA: Ridgeview Publishing Co. \bibitem{} Sylvan, R. (1990). Variations on da Costa C Systems and dual-intuitionistic logics I. Analyses of C$\omega $ and CC$\omega $. \textit{Studia Logica}, 49(1), 47--65. \bibitem{} Yang, E.
(2012). (Star-Based) three-valued Kripke-style semantics for pseudo- and weak-Boolean logics. \textit{Logic Journal of IGPL}, 20(1), 187--206. \end{thebibliography} |
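For orientation, the duality between H and C$\omega$ described in points (1)–(3) above can be tabulated as follows (our summary; the C$\omega$ entries for PNC and DNI reflect the standard reading of Sylvan's duality remark rather than explicit statements in the abstract):

```latex
% Both H and C\omega extend positive intuitionistic logic H_+.
\begin{tabular}{lll}
\hline
Principle & Intuitionistic H & Da Costa's C$\omega$ \\
\hline
PEM: $A \vee \lnot A$             & rejected & accepted (axiom) \\
DNE: $\lnot\lnot A \rightarrow A$ & rejected & accepted (axiom) \\
PNC: $\lnot(A \wedge \lnot A)$    & accepted & rejected \\
DNI: $A \rightarrow \lnot\lnot A$ & accepted & rejected \\
\hline
\end{tabular}
```

Semantically, per Sylvan: H excludes inconsistent situations but admits incomplete ones, while C$\omega$ admits inconsistent situations but removes incomplete ones.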

10:00 | Basic quasi-Boolean expansions of relevant logics with a negation of intuitionistic kind ABSTRACT. Let L be a logic including the positive fragment of Anderson and Belnap's First Degree Entailment logic, the weakest relevant logic (cf. [1]). In [5] (cf. also [3] and [4]), it is shown that Boolean negation (B-negation) can be introduced in L by adding to it a strong version of \textquotedblleft conjunctive contraposition\textquotedblright\ ($(A\wedge B)\rightarrow \lnot C\Rightarrow (A\wedge C)\rightarrow \lnot B$, in particular) and the axiom of double negation elimination (i.e., $\lnot \lnot A\rightarrow A$). Nevertheless, it is not difficult to prove that B-negation can equivalently be axiomatized by adding to L the \textquotedblleft Ex contradictione quodlibet axiom\textquotedblright\ (ECQ, i.e., $(A\wedge \lnot A)\rightarrow B$) and the \textquotedblleft Conditioned Principle of Excluded Middle axiom\textquotedblright\ (Conditioned PEM, i.e., $B\rightarrow (A\vee \lnot A)$). From the point of view of possible-worlds semantics, the ECQ-axiom can be interpreted as expressing the thesis that all possible worlds are consistent (no possible world contains a proposition together with its negation). The conditioned PEM-axiom, in its turn, expresses that all possible worlds are complete (no possible world lacks both a proposition and its negation). Thus, the ECQ-axiom and the conditioned PEM-axiom are the two pillars upon which B-negation can be built in weak positive logics such as the positive fragment of Anderson and Belnap's First Degree Entailment logic. This way of introducing B-negation in relevant logics suggests the definition of two families of quasi-Boolean negation expansions (QB-expansions) of relevant logics. One of them, intuitionistic in character, has the ECQ-axiom but not the conditioned PEM-axiom; the other, dual intuitionistic in nature, has the conditioned PEM-axiom but not the ECQ-axiom.
The aim of this paper is to define and study the basic QB-expansions of relevant logics built up by using the former type of negation, the one of intuitionistic sort. We shall provide an unreduced Routley-Meyer type semantics (cf. [2] and [5]) for each one of these basic QB-expansions. B-negation extensions or expansions of relevant logics are of both logical and philosophical interest (cf. [5], pp. 376 ff.). For example, the logic classical R, KR, the result of extending relevant logic R with the ECQ-axiom, plays a central role in the undecidability proofs for relevant logics by Urquhart (cf. [6]). It is to be expected that quasi-Boolean negation expansions of relevant logics (not considered in the literature, as far as we know) will have a similar logical and philosophical interest.\bigskip ACKNOWLEDGEMENTS: Work supported by research project FFI2017-82878-P of the Spanish Ministry of Economy, Industry and Competitiveness. \begin{thebibliography}{9} \bibitem{} \noindent Anderson, A. R., Belnap, N. D. Jr. (1975). \textit{Entailment}. The Logic of Relevance and Necessity, vol. I. Princeton, NJ: Princeton University Press. \bibitem{} Brady, R. T. (ed.). (2003). \textit{Relevant Logics and Their Rivals}, vol. II. Ashgate, Aldershot. \bibitem{} Meyer, R. K., Routley, R. (1973). Classical relevant logics I. \textit{Studia Logica}, 32(1), 51--66. \bibitem{} Meyer, R. K., Routley, R. (1974). Classical relevant logics II. \textit{Studia Logica}, 33(2), 183--194. \bibitem{} Routley, R., Meyer, R. K., Plumwood, V., Brady, R. T. (1982). \textit{Relevant Logics and their Rivals}, vol. 1. Atascadero, CA: Ridgeview Publishing Co. \bibitem{} Urquhart, A. (1984). The Undecidability of Entailment and Relevant Implication. \textit{Journal of Symbolic Logic}, 49(4), 1059--1073. \end{thebibliography} |

Organizers: Erich Reck and Georg Schiemer

While philosophers of mathematics usually focus on notions such as proof, theorem, concept, definition, calculation, and formalization, historians of mathematics have also used the notion of “style” to characterize the works of various mathematicians (from Euclid and Archimedes through Riemann, Brouwer, Noether, and Bourbaki to Mac Lane and Grothendieck). One question is, then, whether that notion should be seen as having significance from a philosophical point of view, and especially, for epistemological purposes. The notion of “style” is quite ambiguous, however, both in general and as applied to mathematics. In the present context, it is typically used in a sense close to “methodology” or “general approach”, i.e., a characteristic and distinctive way of investigating, organizing, and presenting mathematical ideas (geometric, algebraic, conceptual, computational, axiomatic, intuitive, etc.); but it has also been used in a personal/psychological sense (individual style), a sociological/political sense (e.g., national styles), a literary or more broadly aesthetic sense (writing style, stylishness), and as indicating a brand (an easily recognizable, influential, and visible style).

The seven talks in this session will explore this topic in a broad and multi-faceted way. They will investigate supposed differences in style within ancient and medieval mathematics (not just in ancient Greece but also China), early and late modern mathematics (into the 19th and 20th centuries, e.g., Boole, Riemann, and Dedekind), and current mathematics (especially category theory, but more computational approaches too). A particular focus in several of the talks will be the “structuralist” style that has dominated much of mathematics since the second half of the 19th century. But “stylistic” issues concerning logic, on the one hand, and more popular presentations of mathematics, on the other, are also considered. In addition, several more general discussions of the notion of style in science, e.g., by Ludwik Fleck, G.-G. Granger, and Ian Hacking, are addressed and related to mathematics, as are questions about the dynamics of styles, i.e., the ways in which they get modified and transformed over time. Overall, it will become evident that the notion of “style” should, indeed, be seen as significant philosophically, but also as being in need of further disambiguation and sharpening.

11:00 | The Reach of Socratic Scientific Realism: From axiology of science to axiology of exemplary inquiry ABSTRACT. This paper constitutes an effort to engage directly with the conference theme, “bridging across academic disciplines.” I will argue that a specific refined axiological scientific realism—that is, an empirical meta-hypothesis about the end toward which scientific reasoning is directed—can be extended across domains of inquiry. The ultimate focus here will be on those domains that do not generally fall under the rubric of “science.” I will begin by clarifying the nature of axiological meta-hypotheses in general, defusing a set of concerns philosophers tend to express about them. I will then introduce the refined realist axiological meta-hypothesis and will emphasize that it is asserted to be independent of its epistemic counterpart (i.e., the scientific realist's epistemic thesis that, roughly, we can justifiably believe successful scientific theories). The axiological meta-hypothesis I advocate specifies as the end toward which scientific theorizing is directed not merely truth per se, but a particular sub-class of true claims: those that are experientially concretized as true. I will then identify a set of theoretical virtues that would have to be realized were this end achieved; these in turn become desiderata required of the pursuit of the posited end. I will also point to a set of virtues that the quest for the posited end encourages or promotes, even if those virtues are not required for its achievement. After showing that my axiological meta-hypothesis both explains and justifies these crucial and agreed-upon aspects of theory choice in science, I will argue that it does so better than its primary live competitors—that it fares better at living up to what both it and its competitors themselves demand.
I will then turn to apply this axiological meta-hypothesis to disciplines beyond “science” to demonstrate its promise as a theory of inquiry in general, with a special emphasis on the humanities. I will focus on one of the theoretical virtues as pivotal here, one closely related to the familiar notion of “likelihood,” but, more specifically, the degree to which a theoretical system implies what it explains and, in the case of axiology, justifies. After showing how the axiological meta-hypothesis I embrace can be liberated from the epistemic baggage by which it is traditionally encumbered, and after indicating the ways in which myths about the scientific method and about demarcation criteria have led us away from seeing this axiological bridge, I will illustrate the prospects for this bridge with respect to history, focusing specifically on a set of issues in the history of science. I will also show how the axiological meta-hypothesis can be used to adjudicate between metaphysical theories as well as meta-ethical theories. I will close by noting the unique justificatory position afforded by the kind of axiological turn I propose—by appealing, not to an epistemic or ontic justificatory foundation, but, instead, to one that is purely axiological. |

11:30 | An Attempt to Defend Scientific Realism PRESENTER: Lisa Zorzato ABSTRACT. In today's metaphysics, the debate between realists and antirealists is of central importance with respect to the truth value attributed to our best scientific theories. Scientific realism is realism regarding whatever is described by our best scientific theories, and it aims at dealing with the following questions: i) can we have compelling reasons to believe in the (approximate) truth of our scientific beliefs? ii) what are the criteria used to attribute truth value to scientific theories? On the one hand, it seems fair to admit that scientific realism is the best option to embrace in order to give a definitive answer to these questions; on the other hand, scientific realism seems hard to defend because, unlike antirealism, it has to bear the burden of proof. In our presentation we aim at presenting Stanford's antirealist position, put forward in his Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives (2006), and then at giving some realist replies inspired by Chakravartty (2008), Magnus (2010), Forber (2008), Godfrey-Smith (2008), Ruhmkorff (2011), and Devitt (2011). Besides the two main antirealist arguments (i.e.
the “empirical underdetermination of theories by data” (EU) and the “pessimistic induction” (PI), also labelled “pessimistic meta-induction” (PMI)), there is a new challenge offered by Stanford in 2006, which has been labelled “the problem of unconceived alternatives” (PUA) or the “new induction” over the history of science (NI), and which combines (EU) with (PMI), suggesting that: P1) science is subject to the problem of unconceived alternatives: plausible alternatives are not conceived, thus our choice is not the best or the true one; P2) recurrent, transient underdetermination: looking at the history of science, there were scientifically plausible alternatives to past accepted theories that were not conceived; some of them were accepted later, while the theories actually accepted at the time were later shown to be false; C1) (new induction) our theories are no different from past ones; thus, by induction, there are plausible alternatives to our current theories that are not even imagined or entertained; C2) there are no compelling reasons to believe in our best theories. Against this latest antirealist argument there are many realist replies which aim at showing that Stanford’s argument is inappropriate and diverts attention from the main realist claim; namely, the induction over scientific theories becomes, in Stanford, an induction over scientists and their cognitive abilities to exhaust the set of all plausible alternatives. Assuming that PUA is similar to PI, we are going to prove that a possible reply to the classic PI can also be used as a reply to PUA, because both are based on a historical induction. First, a principle of continuity can be established between the different formulations of a theory, in order to see which elements of the theory are retained over its historical development. This allows us to save at least a partial version of scientific realism.
Secondly, Stanford’s argument does not work as a real argument against scientific realism because it relies on a distinction between community-level properties and individual-level properties; in fact, Stanford appeals to the cognitive limits of scientists without focusing attention on the real realist claim: the comparison between the content of our best scientific theories and the physical reality they aim at describing, explaining, and predicting. Third, the cognitive limits of past scientists would not necessarily be the limits of future theorizers, because history teaches us that science has been getting better and better. Let us just take as an example the physics of the last century, which has undergone exponential growth compared with the centuries before. Finally – as Devitt suggests – to undermine the PUA challenge we can appeal to the methodological and technological improvements shown in our scientific life. In fact, this version (with respect to the classic realist reply, namely that there is a difference in the breadth, precision, novelty, or other important features of the predictive and explanatory accomplishments of past and present theories) explains why present theories are more successful and hence removes the whiff of ad-hoc-ery. |

12:00 | Practical Realism and Metaphysics in Science ABSTRACT. In the early 21st century the Estonian philosopher of science and chemistry Rein Vihalemm initiated a new account of science that he called practical realism. Vihalemm acknowledged the influence of the realist or practice-based approaches of Rom Harré, Joseph Rouse, Ilkka Niiniluoto and Sami Pihlström, but most notably of Nicholas Maxwell's criticism of modern science (Vihalemm 2011). Vihalemm's main idea is that science, although based on theories, is a practical activity that cannot be totally value-free and independent of the cultural-historical context. In the course of practical research, we get access to an aspect of the world that becomes available to us through the prism of the theories we apply. This is a limited access, not the God's Eye view; but according to Vihalemm, it is the real world to which the researcher thereby gains some access. Still, there may be something more that is necessary in order to make proper sense of science and of its progress, which we can hardly deny. According to Nicholas Maxwell, scientists consistently presume that the universe is comprehensible and prefer unified theories to disunified ones and simple theories to complicated ones, although the latter are often empirically more successful than the former. This means that science actually includes assumptions that cannot be empirically tested, i.e. they are metaphysical in the Popperian sense. The acknowledgement of metaphysical assumptions in science is an inherent part of Nicholas Maxwell's novel approach to science, which he calls aim-oriented empiricism (see for instance Maxwell 2017). Rein Vihalemm accepts Maxwell's critique of the prevalent common understanding of science, which the latter calls standard empiricism, although Maxwell's approach is not necessarily realist and does not emphasize the practical side of research. Vihalemm likes the normative side of Maxwell's account.
The latter agrees that science cannot and need not be value-free. Vihalemm seems to be positive concerning aim-oriented empiricism as well. However, he rejects the need to acknowledge metaphysical assumptions in science; quite the opposite, Vihalemm's intention is that practical realism must be free of metaphysics. However, this puts us face to face with the problem of in what respect practical realism is actually realism, and in what respect it differs from standard empiricism. The solution may be to combine practical realism with the idea of adding metaphysical assumptions to the approach. By this move, practical realism would obtain a necessary foundation that enables us to understand why scientific research remains a systematic quest for truth and does not limit itself to a special kind of practical activity. However, in this way we would rather obtain aim-oriented empiricism than practical or any other kind of realism. Vihalemm, Rein. “Towards a Practical Realist Philosophy of Science.” Baltic Journal of European Studies 1, no. 1 (2011): 46-60. Maxwell, Nicholas. Understanding Scientific Progress: Aim-Oriented Empiricism. St. Paul, Minnesota: Paragon House, 2017. |

11:00 | A Learning Theoretic Argument for Scientific Realism PRESENTER: Kevin Kelly ABSTRACT. Scientific realism has long been tied to no-miracle arguments. Two different measurement technologies would not miraculously generate the same spurious result (Hacking 1981). If there were absolute motion, the moving magnet would not miraculously produce the same current as the moving coil (Einstein 1905). Independent mechanisms in the complete causal story would not miraculously cancel, else there is a meta-cause of the cancellation missing from the story (Spirtes et al. 1993). Other celebrated examples include Copernicus' argument against epicycles, Newton's argument against celestial magnetism, Darwin's argument against special creation, Lavoisier's argument against phlogiston, and Fresnel's argument against Newton's particle theory of light. In each of these examples, there are two competing theories, a simple one and a complex one, and a miraculous tuning of the complex causes is capable of generating simple observations for eternity. Anti-realists (van Fraassen 1980) ask how a bias in favor of the simple theory could be truth-conducive, since one would receive exactly the same data for eternity regardless of whether the truth is simple or miraculous. A pertinent response is that favoring the simple theory until it is refuted fails to converge to the truth only over the set of all miraculous worlds, which is negligible given either the simple or the complex theory, whereas favoring the complex theory no matter what fails in all simple possibilities, which is non-negligible given the simple theory. But a gap remains in that argument, because one can still favor the complex theory over a refutable set of miraculous worlds, for which the empirically equivalent simple worlds are negligible given the simple theory. The failure set for that method is still negligible given both theories, so the ball is still in the realist's court.
But not all convergence to the truth is equal. Plato held that one finds the truth better if one finds it indefeasibly. We show, by a novel topological argument, that any method that (1) favors even a single miraculous world, and (2) has a failure set that is negligible given both the simple and the complex theory, drops the truth in some complex world. Thus, the Pyrrhic reward for favoring complex worlds indistinguishable from simple worlds is either to fail non-negligibly given some theory or to drop the truth in a complex world. In contrast, the realist's favored Ockham method that always favors simplicity fails only negligibly given either theory, and never drops the truth. The argument extends to statistical inference. It provides a new foundational perspective on theoretical science in general, and on recent causal discovery algorithms in particular. REFERENCES Einstein, A. (1905) “Zur Elektrodynamik bewegter Körper”, Annalen der Physik, 17: 891–921. Hacking, I. (1981) “Do We See Through a Microscope?” Pacific Philosophical Quarterly, 64: 305–322. Spirtes, P., Glymour, C., and Scheines, R. (1993) Causation, Prediction, and Search, New York: Springer. van Fraassen, B. (1980) The Scientific Image, Oxford: Oxford University Press. |

11:30 | Mutual Misunderstanding in Signalling Games ABSTRACT. It is a platitude that the relationship between a sign and the meaning it represents is arbitrary and based on convention. W. V. Quine (1936), however, argued that this notion implies a vicious circle: conventions depend on agreements, and, in order to make an agreement, we have to be able to communicate with each other through some kind of primary sign system; yet the emergence of sign systems is the very thing we want to explain. David Lewis, whose PhD was advised by Quine, challenged Quine’s argument in his dissertation (1969). He argued that conventions emerge from social interactions between different agents and formalized the process as signalling games. Signalling systems, in which information is communicated from the sender to the receiver, are strict Nash equilibria of a signalling game. The problem of the emergence of meaning thereby becomes the problem of how to converge to and maintain strict Nash equilibria in signalling games. The solutions provided by Lewis are common knowledge and salience. However, Brian Skyrms (1996, 2004) argues that Lewis’s solution cannot escape Quine’s critique. Instead, Skyrms proposes an evolutionary dynamic approach to the problem. The dynamic analysis of signalling games shows that signalling systems spontaneously emerge in the interactions between senders and receivers; common knowledge and salience are unnecessary. Many philosophers believe that Lewis-Skyrms signalling game theory brings fundamentally new insights to questions concerning the explanation of meaning, and studies of signalling games have accordingly prospered in recent years. Nevertheless, this paper does not intend to discuss the technical problems of signalling games but concerns their epistemic aspect.
The question the paper discusses is whether the selection and maintenance of a strict Nash equilibrium in a signalling game amounts to the establishment of a signalling system. In a signalling game at a strict Nash equilibrium, the receiver performs the act proper to the state the sender perceives; in other words, the act causally maps the state correctly. According to the evolutionary approach to signalling games, strict Nash equilibria in a signalling game are equivalent to signalling systems. That is to say, when the causal relationship between the act and the state is established, a signalling system emerges. However, the causal relationship can be established without a signalling system in cases of mutual misunderstanding. There may be two orders of mutual misunderstanding in signalling games: the first order between senders and receivers, and the second order between the observer and the signalling game s/he observes. Mutual misunderstanding results from the absence of common knowledge between the sender and the receiver in a signalling game, and between the observer of a game and the players in the game. Therefore, signalling games with evolutionary dynamics are insufficient to reject common knowledge. The source of mutual misunderstanding is a long-standing confusion in information studies: confusing the signal sequence and the pragmatic effects of information with informational content. In the case of signalling games, philosophers take the success conditions of acts as the success conditions of communication, while the mutual misunderstanding argument shows that they are different. In order to avoid mutual misunderstanding without appealing to common knowledge in signalling games, the distinction between signals, informational content and acts should first be made clear. The configuration of signalling games and the evolutionary approach to them are introduced in section 2.
Section 3 analyzes a Chinese folk story, Magical Fight, as an exemplar of mutual misunderstanding. The story shows that although the interactions between the two players in the magical fight are successful for both the players and the audience, there is no effective communication between the players, because they share no common knowledge. Section 4 argues that the magical fight is a signalling game in which two orders of mutual misunderstanding occur. Possible objections to the mutual misunderstanding argument are considered in section 5. Section 6 investigates the source of mutual misunderstanding in the study of signalling games: possible mismatches between signals, informational content and acts. |
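The evolutionary emergence of a signalling system discussed in this abstract can be illustrated with a minimal reinforcement-learning simulation. This is a sketch under simplifying assumptions (two equiprobable states, two signals, two acts, basic urn-style reinforcement, and the particular round counts chosen here), not the author's model:

```python
import random

def simulate_signalling_game(rounds=20000, seed=0):
    """Urn-style reinforcement in a 2-state, 2-signal, 2-act Lewis
    signalling game. Returns the communication success rate over
    the final 1000 rounds."""
    rng = random.Random(seed)
    sender = [[1.0, 1.0], [1.0, 1.0]]    # sender[state][signal] weights
    receiver = [[1.0, 1.0], [1.0, 1.0]]  # receiver[signal][act] weights

    def draw(weights):
        # choose index 0 or 1 with probability proportional to weight
        r = rng.uniform(0, weights[0] + weights[1])
        return 0 if r < weights[0] else 1

    successes_in_window = 0
    for t in range(rounds):
        state = rng.randrange(2)       # nature picks a state at random
        signal = draw(sender[state])   # sender signals given the state
        act = draw(receiver[signal])   # receiver acts on the signal
        if act == state:               # success: reinforce both choices
            sender[state][signal] += 1.0
            receiver[signal][act] += 1.0
            if t >= rounds - 1000:
                successes_in_window += 1
    return successes_in_window / 1000.0

if __name__ == "__main__":
    print(simulate_signalling_game())
```

Without any common knowledge or salience, the success rate typically climbs well above the 0.5 chance baseline, which is the phenomenon the abstract's Skyrms-style dynamics describe; note, however, that the simulation only tracks the act-state mapping, which is exactly the point at which the abstract locates the possibility of mutual misunderstanding.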

12:00 | A Constructivist Application of the Condorcet Jury Theorem ABSTRACT. The Condorcet Jury Theorem (CJT) tells us (roughly) that a group deciding a yes-or-no issue by majority voting will be more reliable than any of its members, and will be virtually infallible as the number of members tends to infinity, provided a couple of conditions on individual reliability and independence are in place. Normally, the CJT presupposes the existence of some objective fact of the matter (or perhaps moral fact) F, whose truth (or desirability) does not depend on the method used to aggregate different opinions on whether F holds / should hold. Thus, the CJT has been vindicated by advocates of epistemic democracy (with some caveats), while this move has typically been unavailable to authors with sympathies for proceduralist or constructivist accounts. In this paper I suggest an application of the CJT in which the truth/correctness of F is a direct result of the act of voting. To this effect I consider an n-person generalization of the stag hunt game, in which a stag is hunted only if a majority of hunters choose stag. I show how to reinterpret the independence and competence conditions of the CJT to fit the example, and how to assess the import of the infallibility result in the present context of discussion. As a result we are able to identify both a selfish and a cooperative instance of the theorem, which helps us draw some morals on what we may call ‘epistemic optimism’. More generally, the proposal shows that we can establish links between epistemic and purely procedural conceptions of voting; this, in turn, can lead to novel ways of understanding the relation between epistemic and procedural democracy. |
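The classical (non-constructivist) reading of the CJT that the abstract starts from can be checked numerically. The sketch below is not part of the paper's proposal; the parameter values (individual competence p = 0.6, the jury sizes, and the trial count) are illustrative assumptions:

```python
import random

def majority_reliability(n_voters, p, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a simple majority
    of n_voters independent voters, each correct with probability p,
    decides a yes/no question correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # count how many voters get the answer right this trial
        votes = sum(rng.random() < p for _ in range(n_voters))
        if votes > n_voters / 2:   # strict majority (n_voters odd)
            correct += 1
    return correct / trials

if __name__ == "__main__":
    for n in (1, 11, 101):
        print(n, round(majority_reliability(n, 0.6), 3))
```

With p > 0.5 the group reliability rises monotonically with jury size, which is the "virtually infallible in the limit" behaviour the abstract invokes; the paper's contribution is to reinterpret what the voters are reliable *about* when the fact F is itself constituted by the vote.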

11:00 | Deductive Savages: The Oxford Noetics on Logic and Scientific Method ABSTRACT. In 1832, William Whewell reminded his friend, the political economist Richard Jones, that “if any truth is to be got at, pluck it when it has grown ripe, and do not, like the deductive savages, cut down the tree to get at it.” [1] He preferred a cautious ascent from particular facts to general propositions over the reckless anticipations of people like the Oxford Noetics. To Whewell, this was more than an epistemic preference; it was a moral one. [2] Progress in all fields should be slow and sure, or it could potentially lead down atheistic, materialistic, even revolutionary paths. The Noetics comprised a few major figures, including Edward Copleston, Nassau Senior, and Richard Whately. Despite Whewell’s conflation of their methods with those of the more radical political economists like David Ricardo, Jeremy Bentham, and James Mill, the foundation of their own programme was to champion Anglican theology on the grounds of its rationality. As part of this programme, Copleston engaged with Scottish scholars who considered the Oxford curriculum backward; Senior accepted a position as the first Drummond Professor of Political Economy; and Whately published his inordinately popular Elements of Logic (1826) and accepted a position as the second Drummond Professor. Most significantly for my paper, they revitalized syllogistic logic and divorced political economy from its “subversive” connotations. [3] The Noetics were influential in a number of aspects of Victorian logic, philosophy, theology, and science. Yet their programme has gone underappreciated. First of all, Whewell’s depiction of them as “deductivists” is not exactly right. Christophe Depoortere has already shown this for Nassau Senior in the context of political economy, but I intend to do the same for the general programme.
Second, their revitalization of syllogistic logic has been misinterpreted as a revitalization of scholastic logic despite their harsh criticisms of the scholastics. [4] Still other aspects of their programme have been left virtually unexplored, like Whately’s notion of induction, or the relationship between logical and physical discovery. In this paper, I will provide a more sympathetic account of the Noetic movement in the context of its positions on logic and scientific method. I will focus mostly on Whately, though Copleston and Senior will have their important parts to play. Instead of interpreting their scientific method as deductivist, I will show that they did not believe there was a single method suitable for all sciences; rather, they believed that deduction and induction (in Whewell’s sense) played variable roles in each science according to its nature. [1] William Whewell to Richard Jones, Feb. 19, 1832, in Isaac Todhunter, William Whewell: An Account of His Writings, Vol. II, London: Macmillan and Co., 1876: 140. [2] William Whewell, Astronomy and General Physics, 2nd edition, London: William Pickering, 1833: esp. Book III, Chapters V and VI. [3] Richard Whately, review of Nassau Senior, An Introductory Lecture on Political Economy, in The Edinburgh Review 48 (1827): 171. [4] See, for example: Marie-José Durand-Richard, “Logic versus Algebra: English Debates and Boole’s Mediation,” in A Boole Anthology, edited by James Gasser, Dordrecht: Kluwer Academic Publishers, 2000: 145. |

11:30 | Logical Empiricism in Exile. Hans Reichenbach's Research and Teaching Activities at Istanbul University (1933–1938) ABSTRACT. The purpose of this paper is to shed new light on a less well-known stage in the development of Hans Reichenbach's thought, namely his research, output and teaching activities at Istanbul University (1933–1938). During his exile in Turkey, Reichenbach was able to continue his efforts to popularize and extend the program of "scientific philosophy," not only through the restructuring of the Department of Philosophy at Istanbul University, but also through various academic exchanges with European countries. Between the beginning and the end of Reichenbach's exile in Istanbul, the Turkish reception of logical empiricism and scientific philosophy is characterized by a shift from a merely external interest in these fields to an effective implementation of the principles and methods that characterize Reichenbach's philosophical approach. The aim of this paper is to show that Reichenbach's impact was not limited to a unilateral transfer of knowledge, but led to an active participation of the Turkish side in establishing a link between philosophy and particular scientific disciplines (Einzelwissenschaften) at Istanbul University. Therefore, the consideration of Reichenbach's output between 1933 and 1938 must necessarily be complemented by the study of the courses he gave at Istanbul University as well as of the work of his students, most of which was only completed after Reichenbach's departure for the United States. His students’ under-researched contribution to the development of this scientific philosophy may seem to have quickly faded away at the philosophy department of Istanbul University, but one hypothesis I will examine is whether its more durable impact did not in fact occur in other fields, for example the development of experimental psychology at the same university.
Secondary Literature Danneberg, L., Kamlah, A., & Schäfer, L. (Eds.) (1994). Hans Reichenbach und die Berliner Gruppe. Braunschweig-Wiesbaden: Vieweg. Irzik, G. (2011). Hans Reichenbach in Istanbul. Synthese, 181(1), 157–180. Örs, Y. (2006). Hans Reichenbach and Logical Empiricism in Turkey. In M. C. Galavotti (Ed.), Cambridge and Vienna: Frank P. Ramsey and the Vienna Circle (pp. 189–212). Springer. Stadler, F. (1993). Scientific philosophy: Origins and developments. Dordrecht, Boston, London: Kluwer. Stadler, F. (2011). The road to “Experience and Prediction” from within: Hans Reichenbach’s scientific correspondence from Berlin to Istanbul. Synthese, 181(1), 137–155. |

12:00 | V.N. Ivanovsky's Conception of Science PRESENTER: Elena Sinelnikova ABSTRACT. The multidimensional model of science created by the Russian philosopher Vladimir Nikolayevich Ivanovsky at the beginning of the 1920s covered the social, psychological, logical-methodological and ideological aspects of science; took into account the variety of types, methods and contents of the various sciences; and associated their development both with internal factors and with interaction with other sciences and other areas of culture. Ivanovsky presented his conception in “Methodological Introduction to Science and Philosophy” (1923). Unfortunately, he and his works are currently unknown not only to the world scientific community, but also to Russian philosophers and historians of philosophy. At the beginning of the 20th century, however, V.N. Ivanovsky was well known in the international academic community. He participated in the 1st International Philosophical Congress in Paris and represented Russia in the Bureau of Philosophical Congresses until the October Revolution. He studied at Moscow University, then in the main European scientific centers (Berlin, London, Oxford, and Paris). Ivanovsky taught philosophy, psychology, and the history of pedagogy at the universities of Kazan, Moscow, Samara, and Minsk. He was also secretary of the journal “Voprosy filosofii i psikhologii” (Questions of Philosophy and Psychology) in 1893-1896 and of the Moscow Psychological Society in 1893-1900. Ivanovsky's multidimensional model demonstrated the socio-psychological, logical-methodological, and philosophical aspects of science. The socio-psychological aspects of science are due to the fact that living conditions affect the content of knowledge: they give the sciences a stock of experience, and provide analogies suitable for explaining the unknown and for formulating hypotheses.
Whether scientists’ work is recognized in a particular era depends on “life”, because a scientist always risks getting ahead of his time: being misunderstood, unappreciated, unrecognized by contemporaries. Ivanovsky emphasized that science is a system of views that is not only proven, but also recognized as true by many people. He drew attention to the importance of psychological preferences and traditions in the development of the social and natural sciences, and to the role of the scientific community. In considering the influence of social and psychological factors on the form and content of scientific knowledge, V.N. Ivanovsky determined their social role and importance in opposition to the manifestations of Soviet Marxism in the vulgar “class approach” to science of the post-revolutionary period. Describing science as a multidimensional system, V.N. Ivanovsky expressed a number of ideas relating to the psychology of knowledge and creativity. He wrote about the “theoretical instinct of curiosity” as the psychological basis of the pursuit of truth. Logic, for V.N. Ivanovsky, is the crucible through which thought must pass in order to become science. The requirements of proof, truth and significance depersonalize all achievements of thought, tearing them away from their subjective roots and motives. A researcher may be driven by purely theoretical curiosity, or by motivations of a socio-ethical, religious, aesthetic, or other nature. The “logical test” ensures the objectivity of science; V.N. Ivanovsky interpreted it quite broadly, including under it procedures of confirmation by experience, i.e. empirical verification. Systematicity is a necessary property of science, distinguishing it from a collection of disparate information. For Ivanovsky, science is always based on certain principles and general prerequisites; information becomes scientific when it is included in a logical whole. He stressed that not all the results of science are, or must be, put to practical use.
Speaking against narrow practicality in relation to science, V.N. Ivanovsky argued that the social value of genuine scientific creativity is immense. The scale, depth and clarity of the conception of science developed by V.N. Ivanovsky put it on a par with the achievements of postpositivism. References Ivanovsky V.N., Methodological Introduction to Science and Philosophy (Minsk: Beltrestpechat, 1923). Acknowledgment: The reported study was funded by RFBR according to research project № 17-33-00003. |

11:00 | Abstract semantic conditions and the incompleteness of intuitionistic propositional logic with respect to proof-theoretic semantics PRESENTER: Thomas Piecha ABSTRACT. In [1] it was shown that intuitionistic propositional logic is semantically incomplete for certain notions of proof-theoretic validity. This called into question a claim by Prawitz, who was the first to propose a proof-theoretic notion of validity and who claimed completeness for it [3, 4]. In this talk we put these and related results into a more general context [2]. We consider the calculus of intuitionistic propositional logic (IPC) and formulate five abstract semantic conditions for proof-theoretic validity which every proof-theoretic semantics is supposed to satisfy. We then consider several more specific conditions under which IPC turns out to be semantically incomplete. In validity-based proof-theoretic semantics, one normally considers the validity of atomic formulas to be determined by an atomic system S. This atomic system corresponds to what in truth-theoretic semantics is a structure. Via semantical clauses for the connectives, the atomic system S then inductively determines the validity of formulas with respect to S, called 'S-validity' for short, as well as a consequence relation between sets of formulas and single formulas. We completely leave open the nature of S and just assume that a nonempty finite or infinite set of entities called 'bases' is given, to which S belongs. We furthermore assume that for each base S in such a set a consequence relation is given. The relation of universal or logical consequence is, as usual, understood as transmitting S-validity from the antecedents to the consequent. We propose abstract semantic conditions which are so general that they cover most semantic approaches, even classical truth-theoretic semantics. We then show that if in addition certain more special conditions are assumed, IPC fails to be complete.
Here a crucial role is played by the generalized disjunction principle. Several concrete notions of proof-theoretic validity are considered, and it is shown which of the conditions rendering IPC incomplete each of them meets. From the point of view of proof-theoretic semantics, intuitionistic logic has always been considered the main alternative to classical logic. However, in view of the results to be discussed in this talk, intuitionistic logic does not capture basic ideas of proof-theoretic semantics. Given that a semantics should be primary over a syntactic specification of a logic, we observe that intuitionistic logic falls short of what is valid according to proof-theoretic semantics. The incompleteness of intuitionistic logic with respect to such a semantics therefore raises the question of whether there is an intermediate logic between intuitionistic and classical logic which is complete with respect to it. References [1] Piecha, Thomas, Wagner de Campos Sanz, and Peter Schroeder-Heister, 'Failure of completeness in proof-theoretic semantics', Journal of Philosophical Logic, 44 (2015), 321–335. https://doi.org/10.1007/s10992-014-9322-x. [2] Piecha, Thomas, and Peter Schroeder-Heister, 'Incompleteness of intuitionistic logic with respect to proof-theoretic semantics', Studia Logica, 107 (2019). Available online at https://doi.org/10.1007/s11225-018-9823-7 and via Springer Nature SharedIt at https://rdcu.be/5dDs. [3] Prawitz, Dag, 'Towards a foundation of a general proof theory', in P. Suppes et al. (eds), Logic, Methodology and Philosophy of Science IV, North-Holland, Amsterdam, 1973, pp. 225–250. [4] Prawitz, Dag, 'An approach to general proof theory and a conjecture of a kind of completeness of intuitionistic logic revisited', in L. C. Pereira, E. H. Haeusler, and V. de Paiva (eds), Advances in Natural Deduction, Springer, 2014, pp. 269–279. |

11:30 | First-degree entailment and structural reasoning ABSTRACT. In my talk I will show how the logic of first-degree entailment of Anderson and Belnap, Efde, can be represented by a variety of deductively equivalent binary (Fmla-Fmla) consequence systems, up to a system with transitivity as the only inference rule. Some possible extensions of these systems, such as "exactly true logic" and "non-falsity logic", are briefly considered as well.
System Efde:
a1. A & B |- A
a2. A & B |- B
a3. A |- A v B
a4. B |- A v B
a5. A & (B v C) |- (A & B) v C
a6. A |- ~~A
a7. ~~A |- A
r1. A |- B, B |- C / A |- C
r2. A |- B, A |- C / A |- B & C
r3. A |- C, B |- C / A v B |- C
r4. A |- B / ~B |- ~A
System Rfde is obtained from Efde by replacing the contraposition rule r4 with the following De Morgan laws taken as axioms:
dm1. ~(A v B) |- ~A & ~B
dm2. ~A & ~B |- ~(A v B)
dm3. ~(A & B) |- ~A v ~B
dm4. ~A v ~B |- ~(A & B)
Efde and Rfde are deductively equivalent. Yet the latter system has fewer derivable rules, and thus allows certain non-classical (and non-trivial) extensions which are impossible with Efde. Consider the following set of consequences:
(dco) A v B |- B v A
(did) A v A |- A
(das) A v (B v C) |- (A v B) v C
(dis2) A v (B & C) |- (A v B) & (A v C)
(dis3) (A v B) & (A v C) |- A v (B & C)
(dni) A v B |- ~~A v B
(dne) ~~A v B |- A v B
(ddm1) ~(A v B) v C |- (~A & ~B) v C
(ddm2) (~A & ~B) v C |- ~(A v B) v C
(ddm3) ~(A & B) v C |- (~A v ~B) v C
(ddm4) (~A v ~B) v C |- ~(A & B) v C
We obtain a system of first-degree entailment with conjunction introduction as the only logical inference rule (together with the structural rule of transitivity), FDE(ci), by adding this list to a1-a3 and taking r1 and r2 as the inference rules.
Lemma 1. Systems Efde, Rfde and FDE(ci) are all deductively equivalent.
Since the rule of disjunction elimination is not derivable in FDE(ci), it allows for some interesting extensions not attainable on the basis of Rfde.
In particular, a Fmla-Fmla version of "exactly true logic" by Pietz and Rivieccio can be obtained as a straightforward extension of FDE(ci) by the following axiom (disjunctive syllogism): (ds) ~A & (A v B) |- B. A duality between the rules of conjunction introduction and disjunction elimination suggests the construction of another version of the logic of first-degree entailment, FDE(de), with only one logical inference rule (accompanied by transitivity), but now for disjunction elimination. This system is obtained from FDE(ci) by a direct dualisation of all its axioms and rules. FDE(de) is indeed an adequate formalization of first-degree entailment, as the following lemma shows: Lemma 2. Systems Efde, Rfde, FDE(ci) and FDE(de) are all deductively equivalent. The absence of conjunction introduction among the initial inference rules of FDE(de) makes it possible to extend it in a different direction than FDE(ci). Namely, it is possible to formalize the Fmla-Fmla entailment relation as based on the set {T, B, N} of designated truth values, thus preserving any of the four Belnapian truth values except bare falsehood F. One obtains the corresponding binary consequence system of non-falsity logic NFL1 by extending FDE(de) with the following axiom (dual disjunctive syllogism): (dds) A |- ~B v (B & A). Another formalization of first-degree entailment logic with transitivity as the only (structural) inference rule can be obtained by a straightforward combination of FDE(ci) and FDE(de). |
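The semantic side of these extensions can be checked against Belnap's four truth values. In the sketch below (an illustrative encoding, not the abstract's formalism) each value is a subset of {t, f}: T = {t}, B = {t, f}, N = {}, F = {f}. The check confirms that disjunctive syllogism (ds) fails under the FDE designated set {T, B}, but holds when only T is designated, as in exactly true logic:

```python
from itertools import product

# Belnap's four values as sets of classical values:
# T = {t}, B = {t,f}, N = {}, F = {f}
VALUES = [frozenset('t'), frozenset('tf'), frozenset(), frozenset('f')]

def neg(x):
    out = set()
    if 'f' in x: out.add('t')   # ~x is told-true iff x is told-false
    if 't' in x: out.add('f')   # ~x is told-false iff x is told-true
    return frozenset(out)

def conj(x, y):
    out = set()
    if 't' in x and 't' in y: out.add('t')
    if 'f' in x or 'f' in y: out.add('f')
    return frozenset(out)

def disj(x, y):
    out = set()
    if 't' in x or 't' in y: out.add('t')
    if 'f' in x and 'f' in y: out.add('f')
    return frozenset(out)

def valid_ds(designated):
    """Check (ds) ~A & (A v B) |- B: whenever the premise takes a
    designated value, the conclusion B must be designated too."""
    for a, b in product(VALUES, repeat=2):
        premise = conj(neg(a), disj(a, b))
        if premise in designated and b not in designated:
            return False
    return True

T, B = frozenset('t'), frozenset('tf')
print(valid_ds({T, B}))   # False: (ds) fails in FDE (e.g. A = B, B = F)
print(valid_ds({T}))      # True: (ds) holds when only T is designated
```

The same harness, with designated set {T, B, N}, can be used to explore the non-falsity logic NFL1 mentioned in the abstract.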

12:00 | The irrelevance of the axiom of Permutation ABSTRACT. The axiom of Permutation, (A→(B→C))→(B→(A→C)), is valid in many relevant logic systems such as R. Although Permutation has not been particularly problematic, I consider that there are good relevantist reasons to distrust this axiom. Thus, I am interested in investigating whether Permutation should be relevantly valid. There has been previous research on the matter. In "Paths to triviality", Øgaard shows how different principles can lead to the triviality of naïve truth theories in paraconsistent relevant logics. It is important to note that Øgaard's proofs concern rules, not axioms, and therefore his results only assess the consequences of having an instance of the rule as part of the theory. Among the proofs one can find the combination of the principle of Excluded Middle and the rule of Permutation. In "Saving the truth schema from paradox", Field characterizes a logic by adding to Kleene's logic a conditional that avoids paradoxes such as Curry's, and the resulting logic does not validate the axiom of Permutation. Despite the counterexamples in natural language and Field's results, Permutation still satisfies the usual relevantist properties. Variable sharing property (VSP): a logic L has the VSP iff in any theorem of L of the form A→B, A and B share at least one propositional variable. Effective Use in the Proof (EUP): in every theorem of the form A1→(...(An→B)...), each Ai is used to prove B. Permutation also satisfies stronger versions of these properties. However, I think there is a way of understanding the VSP that does not validate Permutation. I want to suggest that one could state a relevantist principle, like the ones mentioned above, on which Permutation is not valid. The principle is the following: Non-Implicative Extremes Property (NIEP): in every theorem of the form X → (Y → Z), X and Z cannot be implicative formulas.
I want to show that NIEP is an interesting relevantist principle that recovers enough axioms to characterize a fairly expressive relevance logic. My plan is to start by explaining the three principles on which I will focus: VSP, EUP and NIEP. I will motivate these principles by showing some results that are valid in Classical Logic but are not relevantly valid. Due to the ambiguity of EUP, I will suggest an interpretation with which I will work throughout the paper, and I will relate it to NIEP in order to highlight the differences between EUP and NIEP. Afterwards I will focus on pinpointing the logics that lie between B and R, to investigate what can be recovered from B taking NIEP into account. The goal is to find a logic characterized by VSP, EUP and NIEP. |
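The NIEP condition stated in the abstract can be made concrete as a syntactic check on formulas represented as nested tuples. This is an illustrative reading, not the author's formal definition; in particular, it inspects only the outermost X → (Y → Z) shape, leaving open how deeper nesting is constrained:

```python
# Formulas as nested tuples: ('->', A, B), ('&', A, B), ('v', A, B),
# ('~', A), or a propositional variable given as a string.

def is_implicative(f):
    return isinstance(f, tuple) and f[0] == '->'

def satisfies_niep(f):
    """NIEP, read syntactically: for a formula X -> (Y -> Z), neither
    the antecedent X nor the final consequent Z may be implicative."""
    if not (is_implicative(f) and is_implicative(f[2])):
        return True  # the condition only constrains this shape
    x = f[1]                 # the antecedent X
    z = f[2][2]              # the final consequent Z of (Y -> Z)
    return not is_implicative(x) and not is_implicative(z)

# Permutation: (A -> (B -> C)) -> (B -> (A -> C))
perm = ('->', ('->', 'A', ('->', 'B', 'C')),
               ('->', 'B', ('->', 'A', 'C')))
print(satisfies_niep(perm))   # False: the antecedent X is implicative
```

On this reading Permutation is ruled out exactly as the abstract intends, since its antecedent A→(B→C) is itself an implicative formula, while a formula such as A→(B→A) passes the check.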

14:00 | Bernard Bolzano’s 1804 Examination: Mathematics and Mathematical Teaching in Early 19th Century Bohemia PRESENTER: Davide Crippa ABSTRACT. We present a manuscript containing Bernard Bolzano’s written examination to become professor of elementary mathematics at Prague University. This examination took place in October 1804 and consisted of a written and an oral part. Only two candidates took part in it: Bernard Bolzano and Ladislav Jandera. The latter won. The committee asked the candidates three questions: to find the formulas for the surface area and volume of a sphere, to find the formula which measures the speed of water filling a tank, and to explain the proof of the law of the lever. In our talk, we analyze Bolzano’s answers, especially to the first question, in light of his later reflections on the foundations of mathematics. This document is an important source for understanding both the evolution of Bernard Bolzano’s mathematical thought and, more generally, the practice of teaching in early 19th century Bohemia. |

14:30 | Looking at Bolzano's mathematical manuscripts ABSTRACT. Large parts of Bolzano's mathematical manuscripts are now published in the Bernard Bolzano-Gesamtausgabe (BBGA), the most important of them being the several volumes of the Grössenlehre (GL), containing Einleitung in die GL und Erste Begriffe, Reine Zahlenlehre, and Functionslehre. The manuscripts of the GL also contain fragments on algebra and the theory of series, and a beautifully written complete text, Raumwissenschaft. A small volume, Zahlentheorie, appeared as Bolzano's Schriften, vol. 2, Prague 1931; it is in fact part of the future volume 2A9 of the BBGA, Verhältniss der Theilbarkeit unter den Zahlen. Many of the manuscripts are preliminary sketches or auxiliary notes for later published works. Bolzano's earlier manuscripts (1810-1817) are, on the one hand, a continuation of the Beyträge, in which the concept of the possibility of thinking together (Zusammendenkbarkeit) appears, yielding the concept of whole or system; on the other hand, they contain similar contributions to the foundations of mathematics, with the concepts of collection (Inbegriff), of number, of quantity (Grösse), of imaginary (= complex) number and of infinity, as well as contributions to analysis and geometry (several developments on the theory of parallels). Bolzano returned to these subjects very often in his mathematical diaries, which are an exceptional source for the study of the state of mathematical knowledge in the first half of the 19th century. Finally, Bolzano's manuscripts contain important extracts from, and comments and annotations on, the books he studied, e.g. those of Carnot (Géométrie de la position), Wallis, Wolff, Kästner, Legendre, Lagrange (a 64-page summary of the Théorie des fonctions analytiques), Laplace, and Gauss, among others. |

14:00 | Conceptual Engineering in the Philosophy of Information ABSTRACT. Conceptual engineering is not only a growing topic of methodological interest in contemporary, primarily analytically oriented, philosophy. It also occupies a central place within the philosophy of information (Floridi 2011b). Yet, despite the agreement that conceptual engineering is the philosophical task par excellence (Floridi 2011a, Cappelen 2018), we have two intellectual endeavours that have developed independently of each other, have shown little interest in each other, and whose interdependencies remain unclear. For the sake of terminological clarity, I will reserve the term conceptual engineering for the project of Cappelen and others (the primary project targeted in this symposium), while I will use constructionism for the importance that is accorded to making / engineering / designing in the philosophy of information. My goal in this paper is to clarify how constructionism relates to conceptual engineering, both as a means to situate the philosophy of information vis-à-vis the mainstream debate and identify the defining differences, and as a means to identify fruitful convergences and opportunities for mutual influence. As a starting point, I present constructionism within the philosophy of information as a convergence of three conceptual shifts: 1. A focus on a maker’s conception of knowledge as an alternative to the more traditional focus on user’s conceptions of knowledge (Floridi 2018). The key idea here is the view that we only know what we make. Here, constructionism is an epistemological thesis about what we can know and about the kind of knowledge we ought to pursue. 2. An account of philosophical questions as open questions whose resolution requires the development of new conceptual resources (Floridi 2013). 
Here, constructionism is first and foremost a meta-philosophical thesis that addresses the question of which kinds of inquiry philosophers should engage in. 3. A view about the nature of, and our responsibilities towards, the infosphere and the project involved in “construction, conceptualization, semanticization, and finally the moral stewardship of reality” (Floridi 2011b: 23), especially in mature information societies (Floridi 2014). Here, constructionism becomes an ethical and political thesis. Once this stage is set, we can begin to identify a number of notable divergences between the project of conceptual engineering and constructionism. I propose to focus on three such divergences. First, constructionism is best described as a pluralist project, not as a meliorative or revisionist project. Second, constructionism is best understood relative to a restricted domain. Its focus is on the conceptual resources we need for specific purposes; tasks or questions that are always relative to a purpose, context, and level of abstraction. Global changes of our language are not an immediate concern, even if specific goals may often require the development of new terminologies or demand the re-purposing of existing terms for novel uses. Third, constructionism does not engage in the design of concepts for representational purposes. REFERENCES Cappelen, H. (2018), Fixing Language: An Essay on Conceptual Engineering, Oxford University Press. Floridi, L. (2011a), ‘A defence of constructionism: philosophy as conceptual engineering’, Metaphilosophy 42(3), 282–304. Floridi, L. (2011b), The Philosophy of Information, Oxford University Press, Oxford. Floridi, L. (2013), ‘What is a Philosophical Question?’, Metaphilosophy 44(3), 195–221. Floridi, L. (2014), The Fourth Revolution: How the Infosphere is Reshaping Human Reality, Oxford University Press, Oxford. Floridi, L. (2018), ‘What a maker’s knowledge could be’, Synthese 195(1), 465–481. |

14:30 | The Common-Sense Notion of Truth as a Challenge for Conceptual Re-Engineering PRESENTER: Georg Brun ABSTRACT. Tarski’s semantic theory of truth is often considered one of the prime examples of an explication. Aiming to satisfy what Carnap called the criterion of ‘similarity to the explicandum’, Tarski claimed that his definition of truth is in line with the ordinary notion of truth, which in turn can broadly be interpreted in the sense of correspondence with reality. In the first part of the talk, we present results of experimental studies which challenge the idea that – within the empirical domain – the common-sense notion of truth is rooted in correspondence. In these experiments, participants read vignettes in which a person makes a claim that either corresponds with reality but is incoherent with other relevant beliefs, or that fails to correspond with reality but is coherent with other beliefs. Perhaps surprisingly – at least from a philosopher’s perspective – a substantial number of participants (in some experiments up to 60%) responded in line with the predictions of the coherence account. These results put substantial pressure on monistic accounts of truth. However, they also seem to undermine their most popular alternative: scope pluralism. While scope pluralists acknowledge that the truth predicate picks out different properties in different domains, no one has yet, as far as we know, worked out a pluralistic account within the same domain. In the second part of the talk, we explore the consequences of these results for the project of re-engineering truth. In particular, we discuss the prospects of (i) defending a unique explication of truth, of (ii) re-engineering truth as a non-classical concept (e.g. as a family resemblance concept), and of (iii) giving more than one explicatum for true. 
Whereas the first of these options might seem attractive for theoretical reasons, it performs surprisingly poorly with respect to similarity to the explicandum, given the results of our experimental studies. Adopting (i) would simply amount to dismissing many applications of the truth predicate. In this respect, options (ii) and (iii) are more promising. However, the success of the second option would depend not only on whether a non-classical concept of truth could be theoretically fruitful while being in line with enough of the data, but first of all on whether an exact description of the structure of such a non-classical concept could be given in a convincing way. In contrast to the first two options, (iii) would substantiate the claim that ‘truth’ is ambiguous. While this looks perhaps most apt to account for our data, such a proposal requires us to know more about the mechanisms that play a role in ordinary discourse about truth. Merely specifying several explicata without an account of the conditions for using one rather than the other would not make for an adequate conceptual re-engineering. |

Organizers: Giovanni Valente and Roberto Giuntini

SILFS (Società Italiana di Logica e Filosofia della Scienza) is the Italian national organization devoted to fostering research and teaching in the fields of logic, general philosophy of science and philosophy of the special sciences. It comprises a large number of academics working in such areas, who are based in Italy as well as in other countries. This symposium proposes to explore philosophical and methodological issues concerning the foundations of our best scientific theories, with the aim of bridging across the diverse research trends characterizing the Italian community of logicians and philosophers of science. Specifically, the symposium focuses on the formal status of successful theories developed in various fields of science, most notably the life-sciences, the mathematical sciences and the social sciences. For this purpose, it brings together experts on the logic and philosophy of medicine, physics, computation and socio-economics, so as to jointly investigate from different perspectives a host of inter-connected questions that arise when facing the outstanding problem of how to formalize scientific theories.

More to the point, we plan to deal with the following issues: (1) how to provide a formal treatment of empirical evidence in medical research; (2) how to elaborate a computational notion of trust that can be applied to socio-economic contexts; (3) how to construct a rigorous framework for the logic of physical theories, with particular focus on the transition from classical to quantum mechanics; (4) how to develop a mathematical foundation for the concept of reduction between different theoretical systems. By addressing such specific questions with a systematic and inter-disciplinary approach, the symposium aims to advance our general understanding of the relation between theories and formalization.

Organizer: Charles Sebens

One of the primary tasks of philosophers of physics is to determine what our best physical theories tell us about the nature of reality. Our best theories of particle physics are quantum field theories. Are these theories of particles, fields, or both? In this colloquium we will debate this question in the context of quantum field theory and in an earlier and closely related context: classical electromagnetism. We believe that the contemporary debate between particle and field interpretations of quantum field theory should be informed by a close analysis of classical electromagnetism and seek to demonstrate the fruitfulness of such a dialogue in this session.

Our first speaker will start the session by discussing the debate between Einstein and Ritz in the early 20th century over whether classical electromagnetism should be formulated as a theory of particles interacting directly with one another or interacting via fields. They will discuss the technical challenges facing each approach as well as the role that philosophical and methodological presuppositions play in deciding which approach is to be preferred.

Our second speaker will defend a dual ontology of particles and fields in classical electromagnetism. They will argue that the singularities which arise in the standard Maxwell-Lorentz formulation of electromagnetism are unacceptable, but that the standard equations of electromagnetism can be modified to avoid them (as is done in the Born-Infeld and Bopp-Podolsky formulations).

Our third speaker will recount the problems of self-interaction that arise for a dual ontology of particles and fields in the context of classical electromagnetism and defend a point-particle ontology. They will go on to argue that attempts to formulate quantum field theory as a theory of fields have failed, and that it too should be interpreted as a theory of particles.

Our final speaker will defend a pure field ontology for quantum field theory. They will argue that quantum theories where the photon is treated as a particle are unacceptable. On the other hand, treating the electron as a field yields significant improvements over the ordinary particle interpretation.

14:00 | Some formal and informal misunderstandings of Gödel's incompleteness theorems ABSTRACT. Gödel's incompleteness theorems are among the most remarkable and profound discoveries in the foundations of mathematics. They have had a wide and profound influence on the development of mathematics, logic, philosophy, computer science and other fields, and they raise a number of philosophical questions concerning the nature of logic and mathematics, as well as mind and machine. However, there are ample misinterpretations of Gödel's incompleteness theorems in the literature and folklore. In this paper, we focus on some misinterpretations of the theorems in mathematics and logic which are not covered in [1]. Our motivation is to review and evaluate some formal and informal misinterpretations of Gödel's incompleteness theorems and their consequences, and to clarify some confusions on the basis of current research on the incompleteness theorems. Firstly, we discuss some misinterpretations of Gödel's first incompleteness theorem (G1). 
Especially, we will focus on the following interpretations or aspects of G1: the claim that there is a truth that cannot be proved; the metaphorical application of G1 outside mathematics and logic; the claim that any consistent formal system is incomplete; the claim that any consistent extension of PA is incomplete; the dependence of incompleteness on the language of the theory; the difference between the theory of arithmetic and the theory of the reals; the claim that Gödel's proof is paradoxical because it uses the Liar Paradox; the difference between the notion of provability in PA and the notion of truth in the standard model; sentences of arithmetic independent of PA with real mathematical content; and the theory with the minimal degree of interpretation for which G1 holds. Secondly, we discuss some misinterpretations of Gödel's second incompleteness theorem (G2). Especially, we will focus on the following interpretations or aspects of G2: a delicate mistake in a proof of G2; the vagueness of the consistency statement; the intensionality of G2; the claim that G2 holds for any consistent extension of PA; the claim that there are arithmetic truths which cannot be proved in any formal theory in the language of arithmetic; the claim that the consistency of PA can only be proved in a stronger theory properly extending PA; and the claim that G2 refutes Hilbert's program. Finally, we discuss the popular interpretation that Gödel's incompleteness theorems show that the mechanism thesis fails, in light of current advances in this field. Reference: [1] T. Franzén. Gödel's Theorem: An Incomplete Guide to Its Use and Abuse. A K Peters, 2005. |

14:30 | Constructing illoyal algebra-valued models of set theory PRESENTER: Sourav Tarafder ABSTRACT. The construction of algebra-valued models of set theory starts from an algebra A and a model V of set theory and forms an A-valued model V(A) of set theory that reflects both the set theory of V and the logic of A. This construction is the natural generalisation of Boolean-valued models, Heyting-valued models, lattice-valued models, and orthomodular-valued models [1, 2, 7, 5] and was developed in [4]. Recently, in [6], Passmann introduced the terms “loyalty” and “faithfulness” while studying the precise relationship between the logic of the algebra A and the logical phenomena witnessed in the A-valued model of set theory. A model is called loyal to its algebra if the propositional logic in the model is the same as the logic of the algebra from which it was constructed, and faithful if every element of the algebra is the truth value of a sentence in the model. The model constructed in [4] is both loyal and faithful to PS3, a three-valued algebra which can also be found in [4]. In this talk, we shall give elementary constructions that produce illoyal models by stretching and twisting Boolean algebras. After we give the basic definitions, we remind the audience of the construction of algebra-valued models of set theory. We then introduce our main technique: a non-trivial automorphism of an algebra A excludes values from being truth values of sentences in the A-valued model of set theory. Finally, we apply this technique to produce three classes of models: tail stretches, transposition twists, and maximal twists. It will be shown that there exist algebras A which are not Boolean algebras, and whose corresponding propositional logic is therefore non-classical, but such that any sentence of set theory gets either the value 1 (top) or 0 (bottom) of A in the algebra-valued model V(A), where the subalgebra of A with domain {0, 1} is the same as the two-valued Boolean algebra. 
Hence the base logic of the corresponding set theory is not classical, whereas the set-theoretic sentences behave classically in this case. This talk is based on [3]. Bibliography [1] Bell, J. L. (2005). Set Theory, Boolean-Valued Models and Independence Proofs (third edition). Oxford Logic Guides, Vol. 47. Oxford: Oxford University Press. [2] Grayson, R. J. (1979). Heyting-valued models for intuitionistic set theory. In Fourman, M. P., Mulvey, C. J., and Scott, D. S., editors. Applications of Sheaves, Proceedings of the Research Symposium on Applications of Sheaf Theory to Logic, Algebra and Analysis held at the University of Durham, Durham, July 9–21, 1977. Lecture Notes in Mathematics, Vol. 753. Berlin: Springer, pp. 402–414. [3] Loewe, B., Passmann, R., & Tarafder, S. (2018). Constructing illoyal algebra-valued models of set theory. Submitted. [4] Loewe, B., & Tarafder, S. (2015). Generalized Algebra-Valued Models of Set Theory. Review of Symbolic Logic, 8(1), pp. 192–205. [5] Ozawa, M. (2009). Orthomodular-Valued Models for Quantum Set Theory. Preprint, arXiv 0908.0367. [6] Passmann, R. (2018). Loyalty and faithfulness of model constructions for constructive set theory. Master’s thesis, ILLC, University of Amsterdam. [7] Titani, S. (1999). A Lattice-Valued Set Theory. Archive for Mathematical Logic, 38(6), pp. 395–421. |
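For orientation, a hedged sketch of how truth values arise in such models (these are the standard Boolean-valued clauses, which the construction of [4] generalizes to an arbitrary algebra A; the notation is mine, not the speakers'):

```latex
% In V^{(A)}, names are A-valued functions on names, and truth values are
% computed recursively; e.g. the standard membership clause reads
\[
[\![\, x \in y \,]\!] \;=\; \bigvee_{t \,\in\, \mathrm{dom}(y)} \bigl(\, y(t) \wedge [\![\, x = t \,]\!] \,\bigr),
\]
% so the operations of the algebra A (meet, join, implication) determine
% which set-theoretic sentences receive the top value in V^{(A)}.
```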

14:00 | Integrating HPS: What’s in it for a Philosopher of Science? PRESENTER: Hakob Barseghyan ABSTRACT. Since the historical turn, there has been a great deal of anxiety surrounding the relationship between the history of science and the philosophy of science. Surprisingly, despite six decades of scholarship on this topic, we are no closer to achieving a consensus on how these two fields may be integrated. However, recent work has begun to identify the crucial issues facing a full-fledged account of integrated HPS (Domski & Dickson (Eds.) 2010; Schickore 2011; Mauskopf & Schmaltz (Eds.) 2011). We contend that the inability to deliver on a model of integrated HPS is partly due to an insufficient appreciation of the distinction between normative and descriptive accounts of science, an over-emphasis on individual case studies, and the lack of a general theory of science to mediate between historical data and philosophical conceptions of science. In this paper, we provide a novel solution to this conundrum. We claim that the emerging field of scientonomy provides a promising avenue for how the philosophy of science may benefit from the history of science. We begin by showing that much of contemporary philosophy of science is ambiguous as to whether it attempts to answer normative or descriptive questions. We go on to argue that this ambiguity has led to many attempts to cherry-pick case studies and hastily draw normative methodological conclusions. Against this, we claim that these two modes of reasoning should be clearly separated so that we may show how descriptive history of science may benefit normative philosophy of science. Specifically, we show that a general theory of scientific change is required to mediate between individual findings of the history of science and normative considerations of the philosophy of science. 
Such a general descriptive theory is necessary to avoid the problem of cherry-picking and the temptation to shoehorn historical episodes into the normative confines of a certain methodology. The main aim of scientonomy is to understand the mechanism that underlies the process of change in both theories and the methods of their evaluation. We demonstrate how the gradual advancement of this general descriptive theory can provide substantial input to normative philosophy of science and turn scientonomy into a link between the descriptive history of science and the normative philosophy of science. |

14:00 | A Naturalized Globally Convergent Solution to the Problem of Induction ABSTRACT. The problem of induction questions the possibility of justifying any rule of inductive inference. Moreover, avoiding the paradoxes of confirmation is a prerequisite for any adequate solution to the problem of induction. The confirmation relation has traditionally been taken to be a priori. In this essay a broader view of confirmation is adopted. It is shown that evidence itself must be interpreted on empirical grounds by bridge-hypotheses. On this basis I develop an interpreted inductive scheme which solves both the paradoxes of confirmation and the problem of induction. Since distinct interpreted partitions corresponding to the same evidence can be related by means of a unique testable bridge-hypothesis, the confirmatory relations can be univocally determined by assessing the latter. Only the partitions corresponding to adequate hypotheses stabilize into a nomic chain which reflects the admissible bridge-hypotheses. A duality thesis is deduced, according to which any alteration in the relations of inductive support produced as a consequence of changes in some partition of the inductive basis can be neutralized by restating the inductive basis in terms of a corresponding dual partition. Therefore, the two paradoxes of confirmation are rendered solvable dual problems. The interpreted inductive scheme also avoids Norton’s “no-go” results, because inductive strengths will depend not only on the deductive relations within the algebra of propositions but also on its semantic relations. I invoke the formal methods of lattice theory and algebraic logic to reformulate the problem of induction. 
Thus, the application of the interpreted inductive scheme to the data of experience yields a system of hypotheses with a lattice structure, whose ordering is given by a relation of material inductive reinforcement. In this framework the problem of induction consists in determining whether there is a unique stable inductively inferable system of hypotheses describing the totality of experience. The existence of such a system follows from an application of the Knaster-Tarski fixed point theorem. Proving the existence of this fixed point is tantamount to a formal means-ends solution to the problem of induction. In this approach induction is locally justified, i.e. based on matters of fact, and globally justified, i.e. in the sense of topological closure. This avoids the regress of inductive justifications in material theories of induction. Finally, it is proved that interpretatively supplemented inductive schemes are globally convergent; that is, they converge to an asymptotically stable solution without a priori knowledge of preferred partitions. Bibliography 1. Achinstein, Peter, 2001, The Book of Evidence, Oxford: Oxford University Press. 2. Achinstein, Peter, 2010, “The War on Induction: Whewell Takes on Newton and Mill (Norton Takes on Everyone)”, Philosophy of Science, 77(5): 728–739. 3. Van Cleve, James, 1984, “Reliability, Justification, and the Problem of Induction”, Midwest Studies in Philosophy: 555–567. 4. Davey, B. A., 2002 [2012], Introduction to Lattices and Order, Cambridge: Cambridge University Press. 5. Hempel, Carl G., 1945, “Studies in the Logic of Confirmation I”, Mind, 54: 1–26. 6. Hempel, Carl G., 1943, “A Purely Syntactical Definition of Confirmation”, Journal of Symbolic Logic, 8: 122–143. 7. Huemer, Michael, 2009, “Explanationist Aid for the Theory of Inductive Logic”, The British Journal for the Philosophy of Science, 60(2): 345–375. 8. Hume, David, 1739, A Treatise of Human Nature, Oxford: Oxford University Press. 9. Hume, D., 1748, An Enquiry Concerning Human Understanding, Oxford: Oxford University Press. 10. Kelly, Kevin T., 1996, The Logic of Reliable Inquiry, Oxford: Oxford University Press. 11. Garcia, R. J. L., 2019, A Naturalized Globally Convergent Solution to Goodman Paradox. Notes for the GWP conference, Cologne. 12. Goodman, N., 1955, Fact, Fiction and Forecast, Cambridge, MA: Harvard University Press. 13. Norton, John D., 2003, “A Material Theory of Induction”, Philosophy of Science, 70(4): 647–670. 14. Norton, J., 2018, “A Demonstration of the Incompleteness of Calculi of Inductive Inference”, British Journal for the Philosophy of Science, (0): 1–26. 15. Reichenbach, H., 1940, “On the Justification of Induction”, Journal of Philosophy, 37: 97–103. 16. Reichenbach, H., 1966, The Foundations of Scientific Inference, Pittsburgh: University of Pittsburgh Press. 17. Salmon, W. C., 1966, The Foundations of Scientific Inference, Pittsburgh: University of Pittsburgh Press. 18. Schulte, Oliver, 1999, “Means-Ends Epistemology”, British Journal for the Philosophy of Science, 50(1): 1–31. 19. Shafer, G., 1976, A Mathematical Theory of Evidence, Princeton: Princeton University Press. |
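The fixed-point step the abstract invokes can be illustrated with a toy computation (a minimal sketch of the Knaster-Tarski iteration on a finite powerset lattice; the operator `close_under_successor` and the cap at 5 are illustrative assumptions, not the author's construction):

```python
def least_fixed_point(f, bottom=frozenset()):
    """Iterate a monotone f from the bottom element until it stabilizes.

    On a finite lattice, Knaster-Tarski guarantees a least fixed point,
    and for monotone f the iteration from bottom reaches it.
    """
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# An illustrative monotone operator on the powerset of {0, ..., 5}:
# it always adds 0, and adds n + 1 whenever n is already present.
def close_under_successor(s):
    out = set(s) | {0}
    out |= {n + 1 for n in s if n < 5}
    return frozenset(out)

print(sorted(least_fixed_point(close_under_successor)))  # [0, 1, 2, 3, 4, 5]
```

The iteration stops exactly when the candidate system is stable under the operator, mirroring the abstract's "unique stable inductively inferable system".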

14:30 | Meta-Inductive Prediction based on Attractivity Weighting: Mathematical and Empirical Performance Evaluation PRESENTER: Paul Thorn ABSTRACT. A meta-level prediction method (as opposed to an object-level prediction method) is one that bases its predictions on the predictions of other prediction methods. We call a (meta-level) prediction method “access-optimal” in a given environment just in case its long-run predictive success rate is at least as great as the success rate of the most successful method (or cue) to which it has access in that environment (where access consists in knowledge of the present predictions and the past predictive accuracy of the respective method). We call a prediction method “universally access-optimal” just in case it is access-optimal in all possible environments. Universal access-optimality is obviously a very desirable feature. However, it is also rare, and we show: (1) there are no ‘one-reason’ prediction methods (i.e., methods that base each prediction on the prediction of a single object-level method or cue) that are universally access-optimal, and (2) none of a wide range of well-known weighting methods is universally access-optimal, including success weighting, linear regression, logistic regression, and typical Bayesian methods. As shown in previous work, there is a prediction method known as Attractivity Weighting (AW) that is universally access-optimal, assuming accuracy is measured using a convex loss function (Cesa-Bianchi & Lugosi, 2006; Schurz, 2008; Schurz & Thorn, 2016). Although AW is universally access-optimal, there are other meta-level prediction methods that are capable of outperforming AW in some environments. In order to address this limitation of AW, we introduce two refined variants of AW, which differ from AW in having access to other meta-level methods. We present results showing that these refined variants of AW are universally access-optimal. 
Despite guarantees regarding long-run performance, the short-run performance of the variants of AW is a theoretically open question. To address this question, we present the results of two simulation studies that evaluate the performance of various prediction methods in making predictions about objects and events drawn from real-world data sets. The first study involves predicting the results of actual sports matches. The second study uses twenty data sets that were compiled by Czerlinski, Gigerenzer, and Goldstein (1999), and involves, for example, the prediction of city populations, the attractiveness of persons, and atmospheric conditions. In both simulation studies, the performance of the refined variants of AW closely matches the performance of whatever meta-level method is the best performer at a given time, from the short run to the long run. References Cesa-Bianchi, N., & Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge: Cambridge University Press. Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In G. Gigerenzer, P. M. Todd, & the ABC Research Group (Eds.), Simple Heuristics That Make Us Smart (pp. 97–118). Oxford: Oxford University Press. Schurz, G. (2008). The meta-inductivist’s winning strategy in the prediction game: a new approach to Hume’s problem. Philosophy of Science, 75, 278–305. Schurz, G., & Thorn, P. D. (2016). The Revenge of Ecological Rationality: Strategy-Selection by Meta-Induction Within Changing Environments. Minds and Machines, 26(1), 31–59. |
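The flavor of access-optimality can be conveyed with a toy simulation (a minimal sketch using plain success weighting, not the authors' Attractivity Weighting; the environment, the loss function, and all names here are illustrative assumptions):

```python
def run_game(outcomes, methods):
    """Play a prediction game over a sequence of binary outcomes.

    Each round, the meta method predicts a weighted average of the
    object-level predictions, weighting each method by its cumulative
    success score (1 - |prediction - outcome| per round).
    Returns the per-method success scores and the meta method's score.
    """
    n = len(methods)
    scores = [0.0] * n
    meta_score = 0.0
    for t, y in enumerate(outcomes):
        preds = [m(t) for m in methods]
        total = sum(scores)
        weights = [s / total for s in scores] if total > 0 else [1.0 / n] * n
        meta = sum(w * p for w, p in zip(weights, preds))
        meta_score += 1.0 - abs(meta - y)
        for i, p in enumerate(preds):
            scores[i] += 1.0 - abs(p - y)
    return scores, meta_score

# A simple environment in which method A is reliable and method B is not:
outcomes = [1] * 100
methods = [lambda t: 1.0, lambda t: 0.0]
scores, meta_score = run_game(outcomes, methods)
# After one exploratory round the meta method tracks the best method,
# so its success rate approaches the best accessible success rate.
```

In this deterministic environment the meta method's score ends within one round's loss of the best method's score, a toy instance of the long-run guarantee discussed in the abstract.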

15:15 | Thin Objects and Dynamic Abstraction versus Possible Structures PRESENTER: Ismael Ordóñez ABSTRACT. One perennial research question in the philosophy of mathematics is that of the existence of mathematical entities. If they exist, they are abstract entities; but abstract entities are philosophically problematic because, among other reasons, of the difficulties involved in providing an account of how it is possible for us to know them. As a result, some authors contend that what is relevant to explaining the role of mathematics in our intellectual lives is not whether mathematical entities exist, but whether mathematical statements are objectively true. Hellman (1989) is one of them. Following Putnam (1967), he proposed modal paraphrases of arithmetic and set theory (among other mathematical theories) intended to guarantee the truth of mathematical statements without committing to the existence of mathematical entities in this world, and to provide an adequate epistemology. Linnebo (2018) contends that mathematics is true and should be read at face value. Nevertheless, he claims that we do not need to give up on mathematical entities to provide an appropriate epistemology; all we need is a thin account of mathematical entities, an account such that “their existence does not make a substantial demand on the world” (idem, xi). To prove his point, he reconstructs arithmetic in abstractionist terms and set theory in abstractionist and modal terms. Both reconstructions of set theory are modal, though Hellman chooses second-order S5 while Linnebo goes for plural S4.2. These choices are motivated by the different ways in which they conceive of sets. Hellman understands them as positions in possible structures and defines them as reifications of the results of a selection process; Linnebo sees sets as results of abstraction and introduces them by means of a predicative, plural, and modal version of Frege’s Law V. 
This allows him to accept non-predicative versions of the Comprehension Axiom for “∈”, while Hellman says that whether they are compatible with his definition is an open question. Nevertheless, both avoid commitment to infinite quantities while asserting that their respective proposals manage to reconstruct the most abstract levels of the set hierarchy. As is well known, Gödel established that it is impossible to paraphrase arithmetic in trivial terms; hence, any reformulation of arithmetic (or of any mathematical theory that includes it) is going to be controversial in one respect or another. It can be controversial (Rayo 2015) because of the linguistic resources it uses, because of the metaphysical assumptions that underlie it, or because of the subtracting strategy proposed. Our purpose is to analyze in detail the two reconstructions of set theory provided, to list the logical tools used by each of them, and to see how their choices relate to the philosophical constraints each of them advocates. HELLMAN, G. (1989) Mathematics without Numbers. Towards a Modal-Structural Interpretation. Oxford: Clarendon Press. LINNEBO, Ø. (2018) Thin Objects. An Abstractionist Account. Oxford: OUP. PUTNAM, H. (1967) “Mathematics without Foundations.” Journal of Philosophy, LXIV(1): 5–22. RAYO, A. (2015). “Nominalism, Trivialism and Logicism.” Philosophia Mathematica, 23, 65–86, https://doi.org/10.1093/philmat/nku013. |

15:45 | Extensionalist explanation and solution of Russell’s Paradox ABSTRACT. In this paper, I propose a way out of the aporetic conclusion of the debate about the so-called explanation of Russell’s Paradox. In this debate, there are traditionally two main and incompatible positions: the "Cantorian" explanation and the Predicativist one. I briefly rehearse the reasons why both can be rejected, and propose a third, "Extensionalist", explanation.
The Extensionalist explanation locates the key to Russell’s Paradox in a proposition about extensions: ∀F∃x(x = ext(F)), which allows one to derive, from the existence of Russell’s concept, the existence of Russell’s extension. This proposition is a theorem of classical logic whose derivation presupposes the classical treatment of identity (Law of Identity) and quantification (Law of Specification and Law of Generalisation). So, we can explain Russell’s Paradox by the (inappropriate) classical correlation between concepts and extensions: the flaw in this correlation does not consist (as in the Cantorian explanation) in its injectivity but (as in the Predicativist explanation) in its domain, namely in the implicit assumption that it is defined on the whole second-order domain. However, this does not mean that, to restore consistency, the whole second-order domain has to be restricted (as in the Predicativist solution): only the domain of the extensionality function does.
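The derivation alluded to can be compressed as follows (a standard reconstruction, with R for Russell’s concept; the appeal to the injectivity of ext comes from Basic Law V):

```latex
% Russell's concept: being the extension of a concept one does not fall under.
\[
  Rx :\leftrightarrow \exists G\,\bigl(x = \mathrm{ext}(G) \wedge \neg Gx\bigr)
\]
% Instantiating \forall F \exists x (x = ext(F)) at R yields a witness
% r = ext(R); the injectivity of ext then gives the contradiction:
\[
  r = \mathrm{ext}(R) \;\Longrightarrow\; \bigl(Rr \leftrightarrow \neg Rr\bigr)
\]
```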
The solution related to the Extensionalist explanation consists in a reformulation of Frege’s theory which allows the derivation of Peano Arithmetic as a logical theory of extensions. The new language L comprises two sorts of first-order quantifiers (generalised Π, Σ, and restricted ∀, ∃), governed respectively by classical and by negative free logic. From a syntactic point of view, the proposed system consists of the axioms of classical propositional logic, some specific axioms of predicative negative free logic, an impredicative comprehension axiom schema and, as the only non-logical axiom, a version of Basic Law V (with generalised universal first-order quantification) restricted to existent abstracts: ∀F∀G(ext(F) = ext(G) ↔ ∃x(x = ext(F)) ∧ Πx(Fx ↔ Gx)). This version of Basic Law V characterises the behaviour of the correlation denoted by "ext" as functional and injective only for a subset of the second-order domain, one that excludes Russell’s concepts. The system therefore does not allow the derivation of Russell’s Paradox.
From a semantic point of view, the interpretation of the theory is provided by a suitable model M. References: [1] Antonelli, A. and May, R. (2005). Frege’s Other Program, Notre Dame Journal of Formal Logic, 46, 1, 1–17. [2] Boolos, G. (1993). Whence the Contradiction?, Aristotelian Society, Supplementary Volume 67, 213–233. [3] Cocchiarella, N. B. (1992). Cantor’s Power-Set Theorem versus Frege’s Double Correlation Thesis, History and Philosophy of Logic, 13, 179–201. [4] Uzquiano, G. (forthcoming). Impredicativity and Paradox. |

15:15 | Formalisation and Proof-theoretic Reductions ABSTRACT. Proof-theoretic reductions from a prima facie stronger theory to a prima facie weaker one have been used for both epistemological ends (conceptual reduction, reduction of the levels of explanation, relative consistency proofs) and ontological ones (chiefly, ontological parsimony). In this talk I will argue that what a proof-theoretic reduction can achieve depends on whether the proof transformation function meets certain intensional constraints (e.g. preservation of meaning, theoremhood, etc.) that are determined locally, by the aims of the reduction. In order to make this point more precise, I will use Feferman’s [1979] terminology of a formalisation being faithful or adequate, and direct or indirect, and I will consider two case studies: (I) The proof-theoretic reduction of the prima facie impredicative theory Δ¹₁-CA to the predicative theory ACA_{<ε₀}. The aim of this reduction was not to dispense with the assumptions of the former theory in favour of those of the latter, but rather to sanction the results obtained in the former theory from a predicativist standpoint. Even though the patterns of reasoning carried out in Δ¹₁-CA are not faithfully represented by proofs in ACA_{<ε₀}, the proof-theoretic reduction yields a conservativity result for Π¹₂ sentences, important from a predicative perspective because they define arithmetical (and thus predicative) closure conditions on the powerset of the natural numbers. Using Feferman’s terminology, this reduction demonstrates that ACA_{<ε₀} is an indirectly adequate formalisation of the mathematics that can be directly formalised within Δ¹₁-CA, but not an indirectly faithful one, because the methods of proof available within Δ¹₁-CA go beyond those available in ACA_{<ε₀}. (II) The proof-theoretic reduction of subsystems of second-order arithmetic S to first-order axiomatic theories of truth T, conservative over arithmetical sentences. 
The aim of such reductions is to obtain a more parsimonious ontology, thus showing that even though reasoning carried out in S can be epistemically more advantageous (shorter proofs, closeness to informal analytical reasoning, etc.), the existential assumptions of S can ultimately be eliminated in favour of the leaner assumptions of T. I will argue that since the aim of this reduction is the elimination of a part of the ontology of S, we should demand that the proof-theoretic reduction is not only indirectly adequate but also indirectly faithful, that is, that it does not merely preserve the arithmetical theorems of S, but also preserves (under some appropriate translation) the second-order theorems of S. In the last part of the talk, I will apply these arguments to the debate on theoretical equivalence in the philosophy of science, and I will argue that by imposing additional intensional criteria on the proof-theoretic translation function between two theories, criteria that are coherent with the aims of the idealisation, we can obtain locally satisfactory syntactical criteria of theoretical equivalence. |
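For readers unfamiliar with the underlying notion, Feferman-style proof-theoretic reducibility can be stated as follows (a standard formulation, not the speaker’s exact definition; Φ is the class of formulas conserved):

```latex
% S is proof-theoretically reducible to T, conserving the formula class
% \Phi, if there is a partial recursive function f translating proofs:
\[
  \forall d\,\forall \varphi \in \Phi\;
  \bigl(\mathrm{Proof}_S(d,\varphi) \;\rightarrow\;
        \mathrm{Proof}_T(f(d),\varphi)\bigr),
\]
% where, moreover, this implication is itself verifiable in T
% (T proves its formalization).
```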

15:45 | The generalized orthomodularity property: configurations, pastings and completions PRESENTER: Antonio Ledda ABSTRACT. Quantum logic is a set of rules for reasoning about propositions that takes the principles of quantum theory into account. This research area originated in a famous article by Birkhoff and von Neumann [3], who were attempting to reconcile the apparent inconsistency of classical logic with the facts concerning the measurement of complementary variables in quantum mechanics, such as position and momentum. A key role is played by the concept of an orthomodular lattice, an algebraic abstraction of the lattice of closed subspaces of a Hilbert space [1, 14, 2, 5, 9, 15, 13]. In 1968, it was realized that a more appropriate formalization of the logic of quantum mechanics could be obtained by relaxing the lattice conditions to the weaker notion of an orthomodular poset [4, 15]. Indeed, in the case of orthomodular posets, even though the orthocomplementation is still a total operation, the lattice operations are in general only partially defined. This difficulty cannot be resolved by passing to the completion of the underlying poset: Harding [2, 14] showed that the Dedekind-MacNeille completion of an orthomodular lattice need not be orthomodular. Therefore, there is no hope of finding orthomodularity preserved by completions, even for posets. In our approach, we weaken the notion of orthomodularity for posets to the generalized orthomodularity property by considering LU-operators [6], and then analyze the order-theoretical properties it determines. This notion captures important features of the set of subspaces of a (pre-)Hilbert space, the concrete model of sharp quantum logic [7]. After dispatching the basics, we define the concept of a generalized orthomodular poset, and we provide a number of equivalent characterizations and natural applications. 
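As background, the orthomodular law and its usual weakening to posets can be stated as follows (standard definitions, not the authors’ generalization):

```latex
% Orthomodular law in an ortholattice (L, \vee, \wedge, {}', 0, 1):
\[
  a \le b \;\Longrightarrow\; b = a \vee (a' \wedge b).
\]
% In an orthomodular poset, joins are only required for orthogonal pairs
% (a \perp b iff a \le b'), and the law is imposed where such joins exist:
\[
  a \perp b \;\Rightarrow\; a \vee b \text{ exists}, \qquad
  a \le b \;\Rightarrow\; b = a \vee (a' \wedge b).
\]
```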
In the same section, we study commutator theory in order to highlight the connections with Tkadlec’s Boolean posets. We then apply those results to provide a completely order-theoretical characterization in terms of orthogonal elements, generalizing Finch’s celebrated achievements [8]. Subsequently, we describe some Dedekind-type theorems that describe generalized orthomodular posets by means of particular subposets, generalizing [6]. We then propose a novel characterization of atomic amalgams of Boolean algebras [1]. In particular, a development of our arguments yields Greechie’s Theorems as corollaries [10, 11]. Finally, we prove that our notion, for orthoalgebras, corresponds to orthomodularity. This allows us to conclude that an orthoalgebra has a Dedekind-MacNeille completion if and only if its induced poset is orthomodular, and it can be completed to an orthomodular lattice. To the best of our knowledge, these results are new and subsume, under a unifying framework, many well-known facts scattered across the literature [16, 17]. References [1] Beran L., Orthomodular Lattices: Algebraic Approach, Reidel, Dordrecht, 1985. [2] Bruns G., Harding J., “Algebraic Aspects of Orthomodular Lattices”, in: Coecke B., Moore D., Wilce A. (eds), Current Research in Operational Quantum Logic, Fundamental Theories of Physics, vol. 111, Springer, Dordrecht, 2000. [3] Birkhoff G., von Neumann J., “The logic of quantum mechanics”, Annals of Mathematics, 37, 1936, pp. 823-843. [4] Chajda I., Kolařík M., “Variety of orthomodular posets”, Miskolc Mathematical Notes, 15, 2, 2014, pp. 361-371. [5] Chajda I., Länger H., “Coupled Right Orthosemirings Induced by Orthomodular Lattices”, Order, 34, 1, 2017, pp. 1-7. [6] Chajda I., Rachůnek J., “Forbidden configurations for distributive and modular ordered sets”, Order, 5, 1989, pp. 407–423. [7] Dalla Chiara M. 
L., Giuntini R., Greechie R., Reasoning in Quantum Theory: Sharp and Unsharp Quantum Logics, Kluwer, Dordrecht, 2004. [8] Finch P. D., “On orthomodular posets”, Bulletin of the Australian Mathematical Society, 2, 1970, pp. 57-62. [9] Foulis D. J., “A note on orthomodular lattices”, Portugaliae Mathematica, 21, 1962, pp. 65-72. [10] Greechie R. J., “On the structure of orthomodular lattices satisfying the chain condition”, Journal of Combinatorial Theory, 4, 1968, pp. 210-218. [11] Greechie R. J., “Orthomodular lattices admitting no states”, Journal of Combinatorial Theory, 10, 1971, pp. 119-132. [12] Husimi K., “Studies on the foundations of quantum mechanics, I”, Proceedings of the Physico-Mathematical Society of Japan, 19, 1937, pp. 766-789. [13] Kalmbach G., “Orthomodular logic”, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 20, 1, 1974, pp. 395-406. [14] Kalmbach G., Orthomodular Lattices, London Math. Soc. Monographs, vol. VIII, Academic Press, London, 1983. [15] Matoušek M., Pták P., “Orthocomplemented Posets with a Symmetric Difference”, Order, 26, 1, 2009, pp. 1-21. [16] Navara M., Rogalewicz V., “The Pasting Constructions for Orthomodular Posets”, Mathematische Nachrichten, 154, 1991, pp. 157-168. [17] Riečanová Z., “MacNeille Completions of D-Posets and Effect Algebras”, International Journal of Theoretical Physics, 39, 2000, pp. 859-869. |