This symposium builds on the proposed Authors and Critics session on Baldwin’s book Model Theory and the Philosophy of Mathematical Practice: Formalization without Foundationalism. A key thesis of that book is that contemporary model theory enables systematic comparison of local formalizations for distinct mathematical areas in order to organize and do mathematics, and to analyze mathematical practice.
Session I: Appropriate formalization for different areas of mathematics.
Session II: Abstract elementary classes and accessible categories.
Chair:
John Baldwin (University of Illinois at Chicago, United States)
James Freitag (University of Illinois at Chicago, United States)
Some recent applications of model theory
ABSTRACT. After some general remarks we will explain recent applications of model theory which use, in an essential way, structural results coming from stability theory.
The first application centers on automorphic functions on the upper half plane, for instance the j-function, which maps the generator of a lattice to the j-invariant of the associated elliptic curve. The central problem of interest involves understanding which algebraic varieties V have the property that j(V) is an algebraic variety. We call such varieties bi-algebraic. The philosophy is that the bi-algebraic varieties should be rare and reveal geometric information about the analytic function. At least two general approaches using model theory have emerged in the last decade. The first involves o-minimality and the second involves the model theory of differential fields, applied to the algebraic differential equations satisfied by the analytic functions. We concentrate on the second approach in this talk.
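For readers outside the area, the bi-algebraicity condition can be stated as follows (standard background terminology, not a result of the talk): writing $j \colon \mathbb{H} \to \mathbb{C}$ for the modular $j$-function on the upper half plane $\mathbb{H}$, applied coordinatewise to $\mathbb{H}^n$, an irreducible algebraic variety $V \subseteq \mathbb{C}^n$ meeting $\mathbb{H}^n$ is bi-algebraic for $j$ if
\[
  j(V \cap \mathbb{H}^n) = \{\, (j(\tau_1),\ldots,j(\tau_n)) : (\tau_1,\ldots,\tau_n) \in V \cap \mathbb{H}^n \,\}
\]
is itself (Zariski-dense in) an algebraic variety of the same dimension as $V$.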
The second application is related to machine learning. In the last several years, the dividing lines between learnability and non-learnability in various settings of machine learning (online learning, query learning, private PAC learning) have proved to be related to dividing lines in classification theory. By using structural results and inspiration from model theory, various new results in machine learning have been established. We will survey some of the recent results and raise a number of open questions.
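One concrete instance of such a correspondence, stated here as standard background rather than as one of the new results announced above, links online learning with stability: for a first-order formula $\varphi(x,y)$ and a structure $M$, the hypothesis class $\mathcal{H}_\varphi = \{\varphi(M,b) : b \in M\}$ of instances of $\varphi$ satisfies
\[
  \mathcal{H}_\varphi \text{ is online learnable (finite mistake bound)} \iff \mathrm{Ldim}(\mathcal{H}_\varphi) < \infty \iff \varphi(x,y) \text{ is stable in } M,
\]
where $\mathrm{Ldim}$ denotes the Littlestone dimension.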
09:30
Tibor Beke (University of Massachusetts Lowell, United States)
Feasible syntax, feasible proofs, and feasible interpretations
ABSTRACT. Recursion theory --- in the guise of the Entscheidungsproblem, or the arithmetic coding of the syntax of first-order theories --- has been a part of symbolic logic from its very beginning. The spectacular solution of Post's problem by Friedberg and Muchnik, as well as the many examples of decidable and essentially undecidable theories found by Tarski, focused logicians' attention on the poset of Turing degrees, in which the degree of the recursive sets is the minimal element.
Starting with the work of Cook and others on computational complexity in the 1970s, computer scientists' attention shifted to resource-bounded notions of effective computation, under which primitive recursive --- in fact, elementary recursive --- algorithms may be deemed 'unfeasible'. The threshold of "feasible computability" is lowered to polynomial-time and/or polynomial-space computations, or possibly their analogues in singly or doubly exponential time. Under this more stringent standard, for example, Tarski's decision algorithm for the first order theory of the reals is not feasible, and it took considerable effort to discover a feasible alternative.
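To give a rough sense of the scale involved (an illustrative aside with standard complexity facts, not part of the abstract): Tarski's original elimination procedure has non-elementary worst-case complexity, while Collins's cylindrical algebraic decomposition performs quantifier elimination over the reals in time roughly
\[
  (s\,d)^{2^{O(n)}}
\]
for $s$ polynomials of degree at most $d$ in $n$ variables, i.e. doubly exponential in the number of variables; by the Davenport--Heintz lower bound, some doubly exponential dependence on $n$ is in general unavoidable.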
This talk examines what happens to the classical notion of bi-interpretability when the translations between formulas, and between proofs, are required to be feasibly computable. The case of propositional logic is classical, and the extension to classical first order logic is not hard. Interesting and, I believe, open problems arise when one compares two theories with different underlying logics. The most intriguing case is when the theories do not share a common syntax, such as when one compares first order logic with the lambda calculus, or ZFC with Voevodsky's "Univalent Foundations".
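One natural way to make the feasible version of interpretability precise, offered here only as a sketch of a possible definition and not necessarily the one the speaker intends, is the following: theories $T_1$ and $T_2$ are feasibly bi-interpretable if there are
- polynomial-time computable translations $\sigma \colon \mathrm{Fml}(T_1) \to \mathrm{Fml}(T_2)$ and $\tau \colon \mathrm{Fml}(T_2) \to \mathrm{Fml}(T_1)$;
- polynomial-time maps sending a $T_1$-proof of $\varphi$ to a $T_2$-proof of $\sigma(\varphi)$, and conversely;
- proofs of polynomial size witnessing $T_1 \vdash \varphi \leftrightarrow \tau(\sigma(\varphi))$ and $T_2 \vdash \psi \leftrightarrow \sigma(\tau(\psi))$.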
The case of category theory is yet more interesting, since the "syntax" of category theory is not clearly defined. The language of category theory, as understood by the "working category theorist", certainly includes diagrams of "objects" and "arrows". We will also outline some theorems on the computational complexity of verifying the commutativity of diagrams.
Bibliography [partial]
Boolos: Don't eliminate cut
Mathias: A Term of Length 4,523,659,424,929
Cook, Reckhow: The relative efficiency of propositional proof systems
Caviness, Johnson: Quantifier elimination and cylindrical algebraic decompositions
ABSTRACT. Methods based on ultraproducts of finite structures have been used extensively by model theorists to prove theorems in extremal graph theory and additive combinatorics. These arguments exploit ultralimits of the counting measures of finite structures, turning asymptotic analyses into questions about dimension and measure in an infinite structure.
Looked at in reverse, pseudo-finite structures always have meaningful notions of dimension and measure associated with them, so it seems valuable to characterize pseudo-finiteness itself. The best-known theorem of this kind is Ax’s characterization of pseudo-finite fields. I will discuss an ongoing project to find a characterization of pseudo-finiteness for countably categorical theories in which algebraic closure is trivial. Our approach to proving such a characterization is, in a sense, the standard one for model theorists, but the details are novel. First, we’d like to identify certain primitive building blocks out of which models of pseudo-finite theories are made. Second, we’ll need to understand the program for actually putting those building blocks together. Our working hypothesis is that pseudo-finite theories are those that are approximable in a certain sense by almost-sure theories (those arising from 0,1-laws for classes of finite structures), which we also speculate are precisely the rank-1 supersimple theories. In a loose sense, randomness seems to take the place of combinatorial geometry in the primitive building blocks of this discussion, and the process of assembling those building blocks into a model has a more analytic flavor than one usually sees in model theory. I will discuss the current state of this work and try to point out some of the interesting contrasts between this program and other classification programs we’ve seen.
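For background, the standard definition of pseudo-finiteness on which this line of work relies is as follows (a textbook fact, not one of the talk's new results): an infinite $L$-structure $M$ is pseudo-finite if every first-order sentence true in $M$ has a finite model; equivalently, if $M$ is elementarily equivalent to an ultraproduct
\[
  \prod_{i \in I} M_i \,/\, \mathcal{U}
\]
of finite $L$-structures $M_i$ over a non-principal ultrafilter $\mathcal{U}$. A complete theory is pseudo-finite when it has a pseudo-finite model.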
The resources for study and scholarship on the thought and writings of Bernard Bolzano (Prague, 1781-1848) have been transformed by the ongoing publication of the Bernard Bolzano-Gesamtausgabe (Frommann-Holzboog, Stuttgart, 1969 - ). This edition is projected to have 132 volumes, of which 99 have already appeared (see https://www.frommann-holzboog.de/editionen/20). The prodigious scale of the work testifies to the wide spectrum of Bolzano’s interests and insights, ranging from his theology lectures and ‘edifying discourses’, through social, political and aesthetic themes, to his major works on philosophy, logic, mathematics and physics. In his thinking and his life he personified the congress theme of ‘Bridging across academic cultures’. The availability of so much previously unpublished, and significant, material has contributed to an increasing momentum in recent decades for Bolzano-related research, including publications, PhD theses, translations, conferences, projects, reviews and grant awards. More than half of the Gesamtausgabe volumes are devoted to methodological or mathematical subjects.
The topic, and purpose, of this symposium is the presentation, and representation, of this thriving area of research, which encompasses the history and philosophy of science and mathematics. We propose to divide the symposium into two sessions: Session A on the broader theme of methodology, Session B more specifically on mathematics. The two themes are not disjoint.
Bolzano, Kant and the Evolution of the Concept of Concept
ABSTRACT. My presentation shall discuss §120 of Bolzano’s Wissenschaftslehre. Bolzano writes:
»Bin ich so glücklich hier einen Irrtum, der anderen unbemerkt geblieben war zu vermeiden, so will ich unverhohlen gestehen welchem Umstande ich es zu danken habe, nämlich nur der von Kant aufgestellten Unterscheidung von analytischen und synthetischen Urteilen, welche nicht stattfinden könnte, wenn alle Beschaffenheiten eines Gegenstandes Bestandteile seiner Vorstellung sein müssten« (Wissenschaftslehre § 120).
(“If I am so fortunate as to have avoided a mistake here which remained unnoticed by others, I will openly acknowledge what I have to thank for it, namely only the distinction Kant made between analytic and synthetic judgments, which could not obtain if all of the properties of an object had to be components of its representation” (Bolzano, WL, §120).)
Bolzano recognized Kant’s insistence on the analytic/synthetic distinction as important and, like Kant, he drew a sharp distinction between concept and object. On this distinction a new notion of the theoretical concept was crafted, for it made both Kant and Bolzano aware of the errors of the traditional notion of a concept as something established by abstraction. Kant's fundamentally significant distinction between analytic and synthetic judgments is necessarily bound to the further development of the concept beyond the traditional notion of the concepts of substance or abstraction. (Cassirer, E., 1910, Substanzbegriff und Funktionsbegriff, Berlin: Verlag Bruno Cassirer).
The whole edifice of rational knowledge therefore rested on the so-called Ontological Argument for the existence of God (Röd, W., 1992, Der Gott der reinen Vernunft, München: C.H. Beck).
The kernel of this argument is the claim that the notion of the non-existence of God is a contradiction; for God is perfect and existence is a perfection. Leibniz added to this argument, saying “from this argument we can conclude only that, if God is possible, then it follows that he exists. For we cannot safely use definitions for drawing conclusions, unless we know … that they include no contradictions” (Leibniz, in R. Ariew and D. Garber (eds.), Leibniz: Philosophical Essays, Hackett Publishing Company, p. 25).
Kant emphasized that the principle of consistency applies only if an object is given. The statement that “a triangle has three angles”, says Kant, “does not enounce that three angles necessarily exist, but upon the condition that a triangle exists, three angles must necessarily exist in it” (Kant, Critique of Pure Reason, B 622). So Kant insisted on a distinction between characteristics of objects and parts of concepts. Bolzano was the first to recognize this clearly and to understand the consequences.
ABSTRACT. As is well known, Bernard Bolzano (1781–1848) was the first to offer a formally sophisticated account of (objective) explanation (Abfolge or grounding) in his Wissenschaftslehre (1837). Grounding is a relation between (collections of) truths Q and their objective reason or ground P, where P in some objective sense explains Q. Bolzanian grounding can be said to impose a hierarchy on truths: grounds are in some sense more fundamental than, and thus prior to, the truths that they ground, i.e. their consequences.
As of today, it remains an open question under which conditions exactly Bolzano holds that a (collection of) truth(s) is the ground of another. State-of-the-art reconstructions of (sufficient conditions for) Bolzano’s grounding are given as deductive arguments satisfying certain conditions of simplicity and economy (cf. e.g. Roski & Rumberg 2016). Unfortunately, this and similar reconstructions disregard several of Bolzano’s claims about grounding, such as the requirement that a ground be at least as general as its consequence.
In this talk we put forward an alternative account of grounding that does justice to Bolzano’s claims. We argue that a correct interpretation of Bolzano’s views on explanation must take into account Bolzano's envisioning of a hierarchical ordering not only among the truths, but also among the concepts which make up a science. Such a hierarchy of concepts is a substantial part of the tradition of thinking about science, originating from Aristotle’s Analytica Posteriora, which heavily influenced Bolzano's ideal of science (de Jong & Betti 2008, de Jong 2001). According to this traditional conception, a science consists of some fundamental concepts, and all other concepts are defined from them according to the well-known model of definitions per genus proximum et differentiam specificam. On this conception, concepts are accordingly ordered hierarchically as genera and species. We will argue that the hierarchical ordering that grounding imposes on truths in Bolzano’s view emanates from the hierarchical ordering of the concepts which make up those truths.
We will show that only by taking into account the traditional theory of concepts, including the age-old doctrine of the five so-called praedicabilia, can one account for Bolzano’s requirements for grounding in a satisfactory manner. We further strengthen our case by showing that our interpretation can account for certain other, general aspects of Bolzano's thinking about science, such as the reason why in Bolzano’s view sciences consist of synthetic truths only. One consequence of our account is that Bolzano's attitude to the traditional theory of concepts turns out to be less 'anti-Kantian' than usually maintained (cf. e.g. Lapointe 2011).
References
de Jong, W. (2001), ‘Bernard Bolzano, analyticity, and the Aristotelian model of science’, Kant-Studien 9, 328–349.
de Jong, W. & Betti, A. (2008), ‘The Classical Model of Science: a millennia-old model of scientific rationality’, Synthese 174, 185–203.
Lapointe, S. (2011), Bolzano’s Theoretical Philosophy: An Introduction, Palgrave Macmillan, New York.
Roski, S. & Rumberg, A. (2016), ‘Simplicity and economy in Bolzano’s theory of grounding’, Journal of the History of Philosophy 54, 469–496.
Bolzano’s requirement of a correct ordering of concepts and its inheritance in modern axiomatics
ABSTRACT. The question of the right order of concepts cannot be separated from the problem of rigor in mathematics and is usually formulated with reference to Aristotle’s distinction between ordo essendi and ordo cognoscendi: the search for rigor in science should include some kind of activity that could lead us from what is first for us to what is first in itself. Bolzano’s remarks about the correctness of definitions and proofs are based on the requirement of a right order of concepts and truths. Recent literature has devoted great attention to the objective order of propositions in proofs, explaining it by association with the theory of grounding. Yet little attention has been paid to the order of concepts, which is related to the form of definitions and to the distinction between simple and complex concepts. The paper will investigate whether the order of concepts should be considered as having an ontological or epistemological value, given that ‘a concept is called simple only if we ourselves can distinguish no more plurality in it’.
Bolzano’s view on the order of concepts will be reconstructed on the basis of his mathematical and logical works, in order to understand the relation between his logical and epistemological viewpoints. The ban on kind crossing as well as the use of philosophical notions (e.g. similarity) in geometrical proofs will be analyzed to verify whether a general hierarchical order of scientific concepts regulates the correct ordering of concepts in mathematics.
A comparison with Wolff’s conception and the analysis of the definition of similarity of mathematical objects will suggest a tension, inherited from Leibniz, between the tendency to have a unique hierarchical order of all concepts and an order for each specific mathematical discipline. A further comparison with the investigations on explicit and implicit definitions developed by the Peano School will make it possible to establish whether, notwithstanding different syntactic formulations of definitions, Bolzano’s requirement of an order of concepts maintained some role up to Peano’s axiomatics.
References
[1] Betti, A. (2010). Explanation in metaphysics and Bolzano’s theory of ground and consequence. Logique et Analyse, 53(211):281–316.
[2] Bolzano, B. (1837). Wissenschaftslehre. Versuch einer ausfuehrlichen und groessentheils neuen Darstellung der Logik mit steter Ruecksicht auf deren bisherigen Bearbeiter. Seidel, Sulzbach.
[3] Bolzano, B. (1981). Von der mathematischen Lehrart. Frommann-Holzboog, Stuttgart-Bad Cannstatt.
[4] Bolzano, B. (2004). The mathematical works of Bernard Bolzano, ed. by S. Russ. Oxford University Press.
[5] Centrone, S. (2016). Early Bolzano on ground-consequence proofs. The Bulletin of Symbolic Logic, 22(3).
[6] Correia, F. and Schnieder, B. (2012). Metaphysical grounding: Understanding the structure of reality. Cambridge University Press.
[7] de Jong, W. R. and Betti, A. (2010). The classical model of science: A millennia-old model of scientific rationality. Synthese, 174(2):185–203.
[8] Johnson, D. M. (1977). Prelude to dimension theory: The geometrical investigations of Bernard Bolzano. Archive for History of Exact Sciences, 17(3):261–295.
[9] Sebestik, J. (1992). Logique et mathématique chez Bernard Bolzano. Vrin, Paris.
Conceptual engineering is a fast-moving research program in the field of philosophical methodology. Considering concepts as cognitive devices that we use in our cognitive activities, it basically assumes that the quality of our conceptual apparatuses crucially determines the quality of our corresponding cognitive activities. On these grounds, conceptual engineering adopts a normative standpoint that aims to prescribe which concepts we should have, instead of describing the concepts we do have as a matter of fact. And its ultimate goal as a research program is thus to develop a method to assess and improve the quality of any of our concepts working as such cognitive devices—that is, for the identification of improvable conceptual features (e.g. conceptual deficiencies) and the elaboration of correlated ameliorative strategies (e.g. for fixing the identified conceptual deficiencies). Given the ubiquity of deficient and improvable concepts, the potential outreach of conceptual engineering is arguably unlimited. But conceptual engineering is still a very young research program and little has been said so far as to how its method should be devised. The purpose of the MET4CE Symposium is to contribute to filling this theoretical gap. Its main aim will then be to propose critical reflections on the very possibility—whether and why (or why not)? how? to what extent?—of developing an adaptable set of step-by-step instructions for the cognitive optimization of our conceptual apparatuses. With this in mind, the common background of the symposium will be made of the Carnapian method of explication rebooted as an ameliorative project for (re-)engineering concepts. Against this background, a first objective of the symposium will be to present ways to procedurally recast Carnapian explication with complementary frameworks (e.g. via reflective equilibrium, metrological naturalism, formalization, or conceptual modeling) for the purposes of conceptual engineering. A second objective will next be to present ways to extend the scope of Carnapian explication as a template method with alternative frameworks (e.g. via conceptual history/genealogy, experimental philosophy, or constructionism in philosophy of information), again, for the purposes of conceptual engineering. And finally, a third objective of the symposium will be to evaluate these upgraded methodological frameworks for (re-)engineering concepts by comparison with competing theories of conceptual engineering that reject the very possibility of developing any template procedural methods for (re-)engineering concepts (such as Cappelen’s ‘Austerity framework’). The expected outcome of the MET4CE Symposium is thereby to provide conceptual engineering with proven guidelines for making it an actionable program for the cognitive optimization of our conceptual apparatuses.
Chair:
Manuel Gustavo Isaac (Swiss National Science Foundation (SNSF) + Institute for Logic, Language, and Computation (ILLC), Netherlands)
Manuel Gustavo Isaac (Swiss National Science Foundation (SNSF) + Institute for Logic, Language, and Computation (ILLC), Netherlands)
Broad-Spectrum Conceptual Engineering
ABSTRACT. I. TOPIC INTRODUCTION — Conceptual engineering is commonly characterized as the method for assessing and improving our representational apparatuses. The basic assumption is that, in so doing, conceptual engineering will enable the amelioration of the quality of one’s performance when executing a cognitive task—“from the most sophisticated scientific research, to the most mundane household task” (Burgess and Plunkett 2013: 1097). And accordingly, the expectations on the program of conceptual engineering are very high. Yet, to date, no proper account for how to devise the methodological framework of conceptual engineering is available in the literature on conceptual engineering. The purpose of this talk is to make a first step towards overcoming this theoretical gap by providing a way to ensure the broadest scope and impact for the method of conceptual engineering.
II. BASIC ASSUMPTION — After having introduced the topic of my talk [Part I], I will next turn to spelling out its basic assumption. This assumption concerns what the subject matter of conceptual engineering should be, that is, how to construe the representational apparatuses that conceptual engineering is meant to assess and improve. And, building on a taxonomy that distinguishes several different types of ‘representational engineering’ (e.g. ‘lexical,’ ‘terminological,’ ‘semantic’, etc.), I will argue that conceptual engineering should be about concepts, on pain of being a misnomer otherwise—which would obviously turn the label itself into a very bad case of conceptual engineering (call it “the self-discrediting predicament”).
III. MAIN ISSUE — Against the background previously set out [Part I, II], I will then focus on the main issue of the MET4CE Symposium, namely, the lack of any detailed methodological framework for conceptual engineering, at least as the program stands now. I will give two reasons why the Carnapian method of explication very well serves the purposes of the program of conceptual engineering—namely, its normativity and the fact that it may well have identified criteria that “govern by default (and thus defeasibly) conceptual [engineering]” (Machery 2017: 215). Then, I will identify three shortcomings of Carnapian explication—namely, its focus on individual concepts, the linearity of its structure, and its restriction to a theoretically-driven agenda (cf. Brun 2016, 2017). And in the remainder of my talk, I will focus on the last of these three shortcomings with a view to broadening the scope and impact of the method of conceptual engineering.
IV. POSITIVE PROPOSAL — In the last part of my talk, I will eventually outline a way to make conceptual engineering a widely applicable and highly adaptable method for the cognitive optimization of our conceptual apparatuses. To this end, building on a basic psychological characterization of concepts as default bodies of information about some referent (Machery 2009), I will argue that, for the purposes of conceptual engineering, concepts should be construed as multiply realizable functional kinds (Machery 2010), that is: (i) as performing some specific causal/explanatory functions in our higher cognitive processes (e.g. abstraction, categorization, induction, etc.), and (ii) as realizable by several different context-dependent and task-appropriate basic kinds (viz. exemplars, prototypes, theories). And I will conclude, at last, by presenting a prototype framework for implementing this variant of hybrid theories of concepts (cf. Machery and Seppälä 2011) as a methodological tool for assessing and improving ANY of our conceptual apparatuses. Thereby, I will claim, conceptual engineering will be ensured the broadest scope and impact—namely, in the form of a broad-spectrum method.
REFERENCES
Brun, Georg (2016). “Explication as a method of conceptual re-engineering”. Erkenntnis 81.6, pp. 1211–1241.
— (2017). “Conceptual re-engineering: from explication to reflective equilibrium”. Synthese, pp. 1–30. doi: 10.1007/s11229-017-1596-4.
Burgess, Alexis and David Plunkett (2013). “Conceptual ethics I”. Philosophy Compass 8.12, pp. 1091–1101.
Machery, Edouard (2009). Doing without Concepts. Oxford: Oxford University Press.
— (2010). “Précis of Doing Without Concepts”. With open peer commentary and author’s response. Behavioral and Brain Sciences 33.2, pp. 195–244.
— (2017). Philosophy within its Proper Bounds. Oxford: Oxford University Press.
Machery, Edouard and Selja Seppälä (2011). “Against hybrid theories of concepts”. Anthropology and Philosophy 10, pp. 99–126.
ABSTRACT. Cappelen (2018) proposes a radically externalist framework (the ‘Austerity Framework’) for conceptual engineering. This approach embraces the following two theses (amongst others). Firstly, the mechanisms that underlie conceptual engineering are inscrutable: they are too complex, unstable and non-systematic for us to grasp. Secondly, the process of conceptual engineering is largely beyond our control. One might think that these two commitments – ‘Inscrutability’ and ‘Lack of Control’ – are peculiar to the Austerity Framework or, at least, to externalist approaches more generally. And, indeed, this sort of thought has been suggested in the literature. Burgess and Plunkett (2013), for example, claim that internalists are better able to accommodate the idea that we have the semantic control required for deliberate engineering of concepts and expressions. However, Cappelen argues that neither Inscrutability nor Lack of Control is unique to his approach. Rather, they must be accepted by externalist and internalist views of meaning and content alike.
Cappelen argues as follows. Internalists claim that meaning supervenes on the internal properties of the individual. But this does not give us a direct route to semantic control or scrutability. The fact that the contents of an individual’s thoughts, or the meanings of her expressions, supervene on her internal properties does not entail that they supervene in any stable, systematic, or surveyable way. Cappelen argues that, for the internalist to avoid commitment to Inscrutability and Lack of Control, she must provide arguments for three claims: (a) there are inner states that are scrutable and within our control; (b) concepts supervene on these inner states; and (c) the determination relation from supervenience base to content is itself scrutable and within our control.
In this talk, I will consider how internalist metasemantic views might meet Cappelen’s three challenges. With regard to (a), I will argue that it is plausible that we have a weak sort of control over some of our inner states, some of the time. For example, we have mental control as contrasted with mind-wandering (Metzinger 2013). With regard to (b), I will argue that it is reasonable to treat concepts as supervening on these states, as the resultant view is largely in keeping with widely accepted desiderata on a theory of concepts. With regard to (c), I will argue that we should appeal, not to mere supervenience, but to alternative relations such as identity or realization in order to secure the result that the relation from determination base to content is both scrutable and within our control. For example, concepts might be identical with locations in a semantic network, or they may be the realizers of conceptual roles. Thus, internalists may offer the resources needed to understand conceptual engineering as an actionable method for improving concepts.
Burgess, Alexis, and Plunkett, David (2013). Conceptual ethics I. Philosophy Compass 8: 1091–101.
Cappelen, Herman (2018). Fixing Language. Oxford: Oxford University Press.
Metzinger, Thomas (2013). The myth of cognitive agency: subpersonal thinking as a cyclically recurring loss of mental autonomy. Frontiers in Psychology, 4: 931.
On two kinds of conceptual engineering and their methodological counterparts
ABSTRACT. ‘Conceptual Engineering’ is the name of a method which aims to revise rather than describe our representational devices. Current metaphilosophical debates about conceptual engineering (CE) have uncovered a variety of important foundational issues about this method, such as its compatibility with semantic externalism or how the original topic can be preserved through the process of engineering. Surprisingly, however, recent debates have rarely touched upon a question that seems central for the project of CE, namely, what kind of representation – e.g. concepts, lexical items, or conceptions – this method operates on. I will argue that answering this question is not only relevant for developing an adequate metatheory of CE, but also has dramatic consequences for its actual implementation.
In my talk, I will begin with a critical discussion of two extant attempts at answering it. According to Cappelen (2018), CE is about words instead of concepts. The engineer’s goal is thus to change the meaning of lexical items. I will argue that this proposal is wanting, for once we take into account that word meanings are determined by facts external to the minds of their users, this picture of CE does not sit well with the idea that CE aims to bring about a change in how people mentally categorize the world – an idea I take to be central to almost any actual CE project. According to Machery (2017), CE is about psychological concepts. I will argue that this proposal helps to remedy the problems of the first, as psychological concepts arguably play an important role in how we mentally categorize our environment. Ultimately, however, this proposal is wanting in that it does not show how such concepts relate to language. This is worrisome because changing lexical meanings is crucial to many actual CE projects.
In the second part of this talk, I will sketch what a more comprehensive account of CE could look like. The basic idea of my proposal is that concepts have a dual character, i.e., that they are constituted by both semantic and non-semantic features. Whereas the semantic features determine meaning and reference, the non-semantic features play a role particularly in one’s mental categorization. I will substantiate this view by linking it to various extant discussions in the literature about concepts. The upshot for CE is that due to the dual character of concepts, there are also two fundamentally different kinds of conceptual amelioration: psychological and referential. Whereas referential engineering is achieved by establishing new causal relations between a concept and an object, psychological engineering requires changing one’s dispositional profile. These two projects have different success conditions and suggest different strategies for their actual implementation. Nonetheless, my proposal allows us to see them as two aspects of a unified and broadly applicable methodological framework.
References
Cappelen, H. (2018). Fixing language: An essay on conceptual engineering (1st ed.). Oxford: Oxford University Press.
Machery, E. (2017). Philosophy within its proper bounds. Oxford: Oxford University Press.
ABSTRACT. According to selected-effects theories (for instance, Neander 1991; Griffiths 1993), selection is a source of teleology: purposes are effects preserved or promoted through a selective process. Selected-effects theories are favored by several authors who want to claim that Darwinian evolution introduces teleology in the biological world.
For the purposes of this presentation, we take selected-effects theories for granted, although we will provide some motivation for them by appeal to certain response-dependent meta-normative views about value, more specifically, views according to which value is generated by evaluative responses. While most selected-effects theories concentrate on natural selection (for an exception, see Garson 2017), our goal is to argue that there are other types of selective processes in biology and that such processes should be seen as giving rise to distinctive types of evaluative standards. More specifically, we suggest that biological self-regulation (the mechanisms by which organisms monitor and regulate their own behavior, which have been the object of careful study in the biological sciences) can be seen as a selective process.
In general, biological organisms include dedicated regulatory mechanisms that compensate for possible perturbations and keep the state of the system within certain ranges (Bich et al 2015). The pressures that such self-regulatory submechanisms exercise on the states of the organism are a form of discriminatory reinforcement, as a result of which certain tendencies are inhibited while others are promoted. It is reasonable, therefore, to characterize biological self-regulation as a selective process.
So, those who accept selected-effects theories of teleology should also grant that biological self-regulation is a source of teleology – at least to the same extent that selected-effects theories are taken to vindicate the view that biological teleology is generated by natural selection. The purposes and evaluative standards introduced by self-regulation are independent of – and arguably sometimes conflicting with – the standards associated with natural selection. Given that self-regulation is ubiquitous in the biological world, it is to be expected that the evaluative standards generated by it will play a prominent role in our explanations of biological phenomena. We think that the approach sketched in this paper offers an appealing integrative picture of the evaluative dimension of biology.
Bich, L., Mossio, M., Ruiz-Mirazo, K., & Moreno, A. (2015). Biological regulation: controlling the system from within. Biology & Philosophy, 31 (2): 237–65
Garson, J. (2017). A generalized selected effects theory of function. Philosophy of Science, 84(3), 523-543.
Griffiths, P. E. (1993). Functional analysis and proper functions. BJPS, 44(3), 409-422.
Neander, K. 1991. Functions as Selected Effects: The Conceptual Analyst's Defense. Philosophy of Science, 58 (2), pp. 168-184.
ABSTRACT. Organisms as Situated Models
Rachel A. Ankeny & Sabina Leonelli
The philosophical literature on the use of organisms as model organisms is now abundant (for an early contribution, see Ankeny & Leonelli 2011), and it draws heavily on historical and sociological work which tends to focus on the standardisation of organisms as a critical stage in related research processes. This paper builds on this literature while taking up a new philosophical angle, namely that the environment, experimental set-ups, and other conditions in which the organism is situated are critical to its use as a model organism (for an early discussion of this approach in historical context, see Ankeny et al. 2014; cf. Nelson 2017 on scaffolds). In such cases, the organism itself is only one component in a more complex model. Hence we explore how material systems can ground inferences, extrapolations, and representations made using these organisms, using a series of examples based on recent historical cases including the use of cages and other housing systems with rodents and farm animals (Ramsden 2009; Kirk & Ramsden 2018), and various types of field experiments and stations (drawing on Kohler 2002; de Bont 2015; Raby 2015 among other historical work).
We argue that this type of situatedness is a critical part of what constitutes the repertoires connected with model organisms and other uses of experimental organisms (Ankeny & Leonelli 2016), and that the materiality of this situatedness is an essential feature which shapes the way in which such organisms are used in scientific research practices. In addition, this analysis assists us in making sense of the variety of modelling activities associated with the use of organisms in laboratories and elsewhere across various fields within biology (e.g., Meunier 2012; Love & Travisano 2013; Germain 2014; Gross & Green 2017 among many others), while at the same time clarifying why organisms themselves have such an important ‘anchoring’ role for these practices (see especially Rheinberger 2006).
References
Ankeny RA, Leonelli S (2011). What’s So Special about Model Organisms? Studies in History & Philosophy of Science 41: 313–23
Ankeny RA, Leonelli S (2016). Repertoires: A Post-Kuhnian Perspective on Scientific Change and Collaborative Research. Studies in History and Philosophy of Science 60: 18–28
Ankeny RA, Leonelli S, Nelson NC, Ramsden E (2014). Making Organisms Model Human Behavior: Situated Models in North American Alcohol Research, since 1950. Science in Context 27: 485–509.
de Bont R (2015). Stations in the Field: A History of Place-Based Animal Research, 1870-1930. Chicago: University of Chicago Press.
Germain P-L (2014). From Replica to Instruments: Animal Models in Biomedical Research. History & Philosophy of the Life Sciences 36: 114–28.
Gross F, Green S (2017). The Sum of the Parts: Large-scale Modeling in Systems Biology. Philosophy and Theory in Biology 9: 1–26.
Kirk RGW, Ramsden E (2018) Working across Species down on the Farm: Howard S. Liddell and the Development of Comparative Psychopathology, c. 1923–1962. History and Philosophy of the Life Sciences 40: 24 (online first).
Kohler R (2002). Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. Chicago: University of Chicago Press.
Love AC, Travisano M (2013) Microbes modeling ontogeny. Biology and Philosophy 28: 161–88.
Meunier R (2012). Stages in the Development of a Model Organism as a Platform for Mechanistic Models in Developmental Biology: Zebrafish, 1970–2000. Studies in History and Philosophy of Biological and Biomedical Sciences 43: 522–31.
Nelson NC (2017). Model Behavior: Animal Experiments, Complexity, and the Genetics of Psychiatric Disorders. Chicago: University of Chicago Press.
Raby M (2015). Ark and Archive: Making a Place for Long-term Research on Barro Colorado Island, Panama. Isis 106: 798–824.
Ramsden E (2009). Escaping the Laboratory: The Rodent Experiments of John B. Calhoun and their Cultural Influence. Journal of Social History 42: 761–92.
Rheinberger, H-J (2006). An Epistemology of the Concrete: Twentieth-Century Histories of Life. Durham, NC: Duke University Press.
10:00
Adrian Stencel (Jagiellonian University, Faculty of Philosophy, Poland)
Fitness incommensurability and evolutionary transitions in individuality
ABSTRACT. The world of living objects possesses a hierarchical nature. Genes are nested within chromosomes, chromosomes within cells, cells within organisms, organisms within groups, etc. This hierarchical attribute of the natural world is currently considered a consequence of the fact that evolution is a process that not only selects individuals, but also leads to the emergence of higher-level individuals. These events, called evolutionary transitions in individuality (ETIs), consist of mergers of autonomously reproducing units, to the extent that, after an ETI, such units no longer reproduce independently, but jointly, as a single entity. One of the most outstanding examples of an ETI is endosymbiosis, a process during which a host engulfs a free-living bacterium and subsequently (on an evolutionary time scale) transforms it into a part of its body, thus rendering it incapable of a free-living lifestyle. Although this might seem to be a rare event, it is currently established among biologists and philosophers that endosymbiosis has had a tremendous effect on the evolutionary history of species. For instance, the mitochondrion, one of the most important organelles within cells, has an endosymbiotic origin. Due to its extraordinary role in the evolution of species, endosymbiosis has recently been the object of careful study. Specifically, its genetic aspect has been studied intensively. However, the ecological aspect of endosymbiotic events is still poorly understood, especially the question of whether endosymbiosis is a kind of parasitism or, perhaps, mutualism for the endosymbiont. In other words, figuring out whether endosymbiosis reduces or enhances the fitness of the bacterium in comparison to its free-living relatives is a hard nut to crack. Therefore, a popular approach is to argue that endosymbiosis is a kind of slavery, i.e. the endosymbiont is a slave of the host. Although metaphorically this analogy sounds interesting, it has not provided much illumination. The aim of my talk is to show that science can obtain a more precise understanding of the ecological aspects of endosymbiosis, one that transcends shallow analogies. I will do this by using the idea of fitness incommensurability, which states that it is not always possible to compare the fitness of two objects. As a case study, I will analyse the origin of aphids’ endosymbiotic bacteria, Buchnera sp., and show that, in this symbiotic system, inquiring about the fitness benefits to the endosymbiont is not theoretically justified. As a result, I will argue that asking whether endosymbiosis is beneficial or harmful to the bacteria is not always appropriate, and thus, before we start looking for an answer to such a question, we should first determine whether, in a given symbiotic system, it makes sense to pose it at all.
Tatiana Levina (Higher School of Economics (National Research University), Russia)
In Defense of Abstractions: Sofia Yanovskaya between Ideology and Cybernetics
ABSTRACT. Sofia Yanovskaya (1896-1966), who was a Professor at Lomonosov Moscow State University, made a significant contribution to the philosophy of mathematics in the USSR at the time of Marxist-Leninist ideology. Educated as a “Red Professor”*, she began by criticizing 'bourgeois' types of thought, namely all forms of idealism, inculcating the ideology of the dialectic of Marxism-Leninism in the field of mathematics. But suddenly Yanovskaya reversed her strategy. The later period of her life was dedicated to defending mathematical logic and the philosophy of mathematics against attacks by proponents of dialectical materialism. In that period she used various strategies, including a dialectical one, to defend abstract objects and other conceptions of mathematical Platonism.
At the beginning of 1930 an applied mathematician asked Yanovskaya, ‘What are you going to do?’ and she answered, ‘Mathematical logic’. ‘How could you study mathematical logic at this period of time?’ he said. This story gives us an opportunity to understand the ideological context of science in the Soviet Union at that time. Josef Stalin was a “leading intellectual” who controlled the state and science as well. In his speech “On the questions of the agricultural policy” (December 27th, 1929) he said that ‘practical successes have left theoretical thought far behind’. His interpreters extended his words into a demand to turn ‘theoretical thought’ into the practice of ‘socialist construction’. The purge in Soviet science began with the repression of several leading scientists of the time. Ernest Kolman, a mathematician-ideologist, wrote in his article ‘Sabotage in science’ (1931), “There is no more impenetrable curtain than the curtain of mathematical abstraction. Mathematical equations… are letting hostile theses appear as if they are of objective, unprejudiced, precise character, hiding their true existence”. In the ideological journal for philosophy, “Under the Banner of Marxism”, Professor Mitin wrote on the disjuncture between form and content in theoretical work: “From the living method of knowledge, dialectics has been transformed into an aggregate of abstract formulas”.
Stalin’s fight against formalism in mathematics was not as severe as it was in the arts, and in 1947 Yanovskaya continued to develop mathematical logic, publishing a Russian translation of the Grundzüge der theoretischen Logik by Hilbert and Ackermann, along with organizing a seminar on Mathematical Logic (1944) and the History of Mathematics at the Mechanico-Mathematical Faculty. A year later Alfred Tarski’s “Introduction to logic and to the methodology of deductive sciences” was published with a foreword by Yanovskaya. In this period of her professional life, Yanovskaya concentrated on justifying the use of abstract objects in philosophy (while dialectical materialists criticized abstractions) in the article “The fight of materialism and idealism in mathematics”. She defends abstract objects and mathematical logic by appealing to practice in the paper “The role of practice in the history of the emergence of pure mathematics”. She develops a dialectical criticism of idealism and Platonism in mathematics from her own position, at the same time proving that abstract objects are necessary in research on computability and computing. In the foreword to the volume “Can Machines Think?” (Russian translation, 1960) she, following Alan Turing, discusses the character of the human mind. She asks whether the human mind is based on the rules of logic in such a way that we could construct rules, or an algorithm, by following which a ‘machine’ would imitate the cognitive activity of a human. Could human cognition be understood as algorithmic? Describing Turing’s position, she gives a negative answer, but notes that computability and the development of cybernetics have crucial importance for Soviet society. This was her argument for the necessity of the abstract formulas of mathematical logic.
*The Institute of Red Professors of the All-Union Communist Party (Bolsheviks), an institute of graduate-level education, operated in Moscow from 1921 to 1938.
Lakatos’ philosophy of mathematics and ‘political ideologies’
ABSTRACT. Imre Lakatos (1924-1973) left Hungary in December 1956 after 12 rather adventurous years in politics and science. His intention was to leave politics and to work in mathematical analysis. But his career took a slightly different direction: the philosophy of mathematics. His political interests were apparently dormant.(1) Yet he wrote in a footnote of Proofs and Refutations: “The analogy between political ideologies and scientific theories is more far-reaching than is commonly realized…” (2) My paper will show how this claim applies to Lakatos’ own work.
In the files of the Lakatos Archive at the London School of Economics there are many small fragments, scraps, and glosses from Lakatos’ first years in Britain, displaying some of the thoughts motivating his interest in the philosophy of mathematics. A recurring pattern in his remarks is the opposition of system and method – a topos coming from the Marxist criticism of Hegel. In this opposition, system stands on the wrong, reactionary, stereotyped side and method takes the revolutionary, ever-changing, good side. Lakatos uses this opposition to characterize by analogy both the opposition between the Stalinist system of Hungary and the revolutionaries of ’56 (among whom he counts himself) and, at the same time, the opposition between the classical, ‘formalist’ philosophies of mathematics and his own ideas.
Let me present here three of these notes in my translation (except for some words that are in English in the original).
"Twofold expectations against proof theory:
[On the left side of the scrap:] system/ constancy/ foundation/ (formal logic, definition, as you like it [ in English])
[On the right side:] method/ change/ opening of doors/ The main task of philosophy is to bring over to this. (revolutionary change [in English]) "
"It seems that philosophy occurs in mathematics and in the natural sciences always as a gendarme: Vienna Circle, dialectics, etc. "
"The concept of the philosopher in the Vienna Circle and in the Soviet: the gendarme. "
This analogy is present in several more notes and remarks, and it is perhaps a fundamental motivation for his work. It elucidates the mission Lakatos ascribes to his philosophy of mathematics, which was in the making in the years these notes were written.
(1) Cf. Lee Congdon, "Lakatos’ Political Reawakening”, in G. Kampis, L. Kvasz, M. Stöltzner (eds.), Appraising Lakatos. Kluwer, 2002.
(2) Imre Lakatos, Proofs and Refutations (ed. by J. Worrall and E. Zahar), CUP, 1976, p. 49.
Pseudoscience within science? The case of economics
ABSTRACT. In this contribution, the multicriterial approach to demarcation, predominant over the last decades (Mahner 2007, Hansson 2017), is applied in order to assess whether some common practices in economics should be considered pseudoscientific. To distinguish science from pseudoscience within economics, special attention is paid to the following interconnected features: a priori endorsement of assumptions refuted by experience, no independent testability of auxiliary hypotheses, and lack of assessment of alternative theories.
The paper is focused on an important example of pseudo-scientific practice in economics, namely, the endorsement and hidden a priori justification of assumptions refuted by experience. The highly ambiguous and variable nature of social phenomena means that under-determination of theory by observation affects social science more dramatically than natural science, so that pseudoscientific assumptions may be more easily kept in the former – despite the fact that disconfirming evidence clearly suggests their falsity. There is no doubt that the highly complex and intertwined nature of social interactions requires their theoretical decomposition by means of idealizing assumptions, whose purpose consists in isolating causal variables (Mäki 2009: 78). However, it becomes obvious that, in some cases, certain false assumptions are kept even though they do not contribute to the isolation of any real causal variable. In this vein, a pseudoscientific use of idealizing assumptions can result from a lack of connection between them and real systems – as when using ‘substitute models’ as if they were ‘surrogate models’, in Mäki’s terms – but also from imposing isolations precluded in real systems (Mäki 2009: 85). For example, excluding the role of institutions when representing economic systems may dramatically limit the explanatory capacity of the corresponding representation.
The same combination of inadequate idealization and disregard of refuting information is manifested in economists’ reluctance to depart from certain normatively appealing principles of expected utility theory (EUT) (Starmer 2005). Such reluctance is clear in the types of revisions which have been proposed to EUT – which have typically retained normatively attractive principles such as monotonicity and transitivity in spite of contrary evidence.
Mäki, Uskali (2009) “Realistic Realism about Unrealistic Models”, in Harold Kincaid and Don Ross: The Oxford Handbook of Philosophy of Economics, Oxford: Oxford University Press, 68-98.
Hansson, Sven Ove, (2017) "Science and Pseudo-Science", The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/sum2017/entries/pseudo-science/.
Starmer, Chris (2005), “Normative notions in descriptive dialogues”, Journal of Economic Methodology, 12(2), 277-289.
Mahner M (2007) “Demarcating Science from Non-science”. In Kuipers T (ed.) Handbook of the Philosophy of Science Vol. 1, General Philosophy of Science – Focal Issues. Amsterdam: North Holland: 515-575.
External Validity and Field Experiments in Economics
ABSTRACT. In social science, external validity is used to mean the extent to which inferences drawn from observational or experimental studies can be generalized to other contexts of interest. Generalizing inferences from a study sample to another population or context involves an inductive gap: what may hold in the model may not hold in the target. Thus, external validity is a concern for any experimental or observational study that aims at results and inferences that are applicable outside the experimental setup.
In economics, analysis of 'external validity' varies within subdisciplines, but focuses, broadly speaking, on detecting the factors that affect the generalizability of inferences as well as developing methods for reliable generalization (cf. Levitt and List 2007). Philosophy of social science usually takes external validity as synonymous with extrapolation and focuses on conceptual and methodological analysis in addition to constructing theoretical frameworks for extrapolation (Guala 2005; Jiménez-Buedo 2011; Steel 2007, 2010). Some argue that the concept of external validity should not be used at all (Reiss 2018).
Because the concept is regularly used by social scientists, abandoning it is not a fruitful choice. Instead, extrapolation should be understood as the inductive process of transporting inferences from the model to the target, and external validity as a set of methodological criteria that are used to evaluate the validity of this process. These criteria vary with different fields’ epistemic and practical concerns. In econometrics, external validity is quantitatively measured to evaluate the accuracy of causal predictions. In behavioural and experimental economics, on the other hand, external validity is instead used as a guiding principle in designing experimental studies.
I use field experiments as cases to illustrate the ways in which the concept of external validity is used in experimental and behavioural economics. Studying external validity and extrapolation in specific fields highlights the relationship between domain-specific concerns and the general issue of extrapolating causal inferences. It shows how methodological practices are used to understand and handle general issues of inductive reasoning. I conclude that external validity is a useful, if incomplete, criterion for assessing extrapolation, and that a better understanding of extrapolation complements the analysis of external validity in philosophy of science.
Guala, F. (2010). Extrapolation, analogy, and comparative process tracing. Philosophy of Science, 77(5), 1070-1082.
Guala, F. (2005). The methodology of experimental economics. Cambridge: Cambridge University Press.
Jiménez-Buedo, M. (2011). “Conceptual tools for assessing experiments: some well-entrenched confusions regarding the internal/external validity distinction”. Journal of Economic Methodology, 18(3), 271–282.
Levitt, S. D., & List, J. A. (2007). "What do laboratory experiments measuring social preferences reveal about the real world?" Journal of Economic Perspectives, 21(2), 153–174.
Reiss, J. (2018). "Against external validity". Synthese, 1-19.
Steel, D. (2010). “A new approach to argument by analogy: Extrapolation and chain graphs”. Philosophy of Science, 77(5), 1058-1069.
Steel, D. (2007). Across the boundaries: Extrapolation in biology and social science. Oxford: Oxford University Press.
10:00
Ricardo Crespo (IAE (Universidad Austral) and CONICET, Argentina)
Economic sciences and their disciplinary links
ABSTRACT. For Mäki (2002: 8) the notion of economics constitutes a dangerous mélange of notions: ‘there is no one homogeneous economics’. Backhouse and Medema claim that ‘economists are far from unanimous about the definition of their subject’ (2009: 223). However, John Stuart Mill, Carl Menger and Neville Keynes, ‘methodological precursors’ of economic science, developed useful definitions and classifications of economic sciences.
Mill distinguishes:
- The ‘Teleological’ art of definition of the ends of economic actions, a normative discipline (1882: 657);
- ‘Political Economy’, an ‘abstract’ science considering only economic motives: ‘the desire of obtaining the greatest quantity of wealth with the least labor and self-denial’ ([1844] 2006: 323).
- The ‘art of economic practice’, considering all motives influencing actual economic phenomena.
Menger’s classification is this (1960 and [1883] 1985):
1. The ‘historical sciences of economics’: economic statistics and economic history that investigate concrete economic phenomena.
2. ‘The morphology of economic phenomena, whose function consists in classifying economic facts’ (1960: 14).
3. ‘Economic theory’, which has the task of ‘investigating and establishing the laws of economic phenomena, i.e., the regularities in their coexistence and succession, as well as their intrinsic causation’ (1960: 14). It has the role of demonstrating (Darstellung) and understanding (Verständnis) economic phenomena (1960: 7). It has two orientations: the ‘realistic-empirical’ and the ‘exact’. The former uses Baconian induction, which cannot arrive at universal laws but only at general tendencies ([1883] 1985: 57). The latter seeks ‘to ascertain the simplest elements of everything real’, arriving at qualitatively strictly typical forms ([1883] 1985: 60).
4. Practical or applied economics, with its specific method (1960: 16, 21-22).
Concerning Keynes, he distinguishes ‘positive science’, ‘normative or regulative science’ and ‘an art’ ([1890] 1955: 34-35), respectively dealing with ‘economic uniformities, economic ideals and economic precepts’ ([1890] 1955: 31, 35).
This paper will construct a classification of economic sciences based on these precursors; it will characterize the different disciplines arising from it and analyze their disciplinary relations, whether they are multidisciplinary, crossdisciplinary, interdisciplinary or transdisciplinary, according to Cat’s (2017) systematization of these concepts.
References:
Backhouse, R. E. and S. Medema, (2009). ‘Retrospectives: On the Definition of Economics’, Journal of Economic Perspectives 23/1: 221-233.
Cat, J. (2017). ‘The Unity of Science’. In: E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/fall2017/entries/scientific-unity/.
Keynes, J.N. ([1890] 1955). The Scope and Method of Political Economy. New York: Kelley and Millman.
Mäki, U. (2002). ‘The dismal queen of the social sciences’. In: U. Mäki (ed.), Fact and Fiction in Economics. Cambridge: Cambridge University Press.
Menger, C. ([1883] 1985). Investigations into the Method of the Social Sciences with Special Reference to Economics, Ed. Louis Schneider, Transl. Francis J. Nock. New York and London: New York University Press.
Menger, C. (1960). ‘Toward a Systematic Classification of Economic Sciences’. In: L.Sommer (ed.) Essays in European Economic Thought. Princeton: van Nostrand.
Mill, J.S. (1882). A System of Logic, Ratiocinative and Inductive, Eighth Edition. New York: Harper & Brothers.
Mill, J.S. ([1844] 2006). Essays on Some Unsettled Questions of Political Economy (Essay V: ‘On the Definition of Political Economy; and on the Method of Investigation Proper to It’). In Collected Works of John Stuart Mill, Volume 4. Indianapolis: Liberty Fund.
ABSTRACT. One of the most serious problems confronting the project of scientific metaphysics is the fact that some of our best scientific theories are compatible with several fundamentally different ways the world could be, and that there are no agreed-upon principles for identifying one of them as the actual way the world is. A striking instance of this is the disagreement between what is known as the “primitive ontology” approach and the “wave function realist” approach to quantum mechanics. Each of the sides can adduce plausible reasons for preferring its own approach, such that the debate seems to have reached a stalemate. I propose a way forward in this kind of debate by first making a small historical point, which I will then show to have significant systematic implications.
The debate on quantum ontology is sometimes framed with reference to Wilfrid Sellars’s distinction between the manifest and the scientific image: While primitive ontologists argue that their approach (by virtue of postulating matter in physical space) brings the scientific image closer to the manifest image and thereby facilitates the reconciliation of the two, wave function realists insist that we can very well achieve this reconciliation without the postulates of primitive ontology. My historical point is that the notion of the manifest image in play here is very different from the one employed by Sellars. The former amounts to little more than the naïve physics of macroscopic bodies, whereas Sellars’s manifest image is a sophisticated and comprehensive view of man-in-the-world. Taking Sellars’s notion seriously implies that reconciling the manifest and the scientific image involves more than just fitting together the common sense conception of physical objects with that of theoretical physics (as the quarrel between primitive ontologists and wave function realists would have it). In particular, it requires reflection on human intentions and interests, including those operative in constructing scientific or metaphysical theories.
To some extent, this point has already been appreciated in the recent debate on scientific metaphysics, when it was recognized that metaphysical positions should not be understood as mere doctrines but as stances, which, apart from beliefs, include commitments, values and so on. However, this recognition has not yet led to much progress with respect to the kind of stalemate described above. Indeed, Anjan Chakravartty (“Scientific Ontology”, OUP 2017) has argued that such stalemates are inevitable, because there is no stance-neutral criterion on which a non-question-begging case for the superiority of one stance over another could be based (as long as both of them are internally coherent). This is correct if one discusses (as Chakravartty does) the very general level of “the empiricist stance” versus “the metaphysical stance”. By contrast, if stances are understood as competing frameworks for the reconciliation of the scientific and the manifest image in the full (Sellarsian) sense, they will no longer fulfil the (stance-neutral) criterion of internal coherence to an equal degree. This opens a new way to evaluate metaphysical positions and therefore has the potential to transform the debate on scientific metaphysics.
Underdetermination and Empirical Equivalence: The Standard Interpretation and Bohmian Mechanics
ABSTRACT. The problem of underdetermination in general is that if two or more different theories are empirically equivalent, then we have no empirical reason to believe one but not the other. I take two or more theories or formulations to be empirically equivalent when they are equivalent with respect to all possible data (predictive or non-predictive), within or outside their own domain of application; empirical equivalence is thus a broader notion than predictive equivalence. An underdetermination also seems to be present between the standard interpretation of quantum mechanics and a non-local hidden variables theory, Bohmian mechanics. They make the same predictions for cases such as statistical correlations in EPR-type experiments, interference in a double-slit experiment, quantum tunneling, etc. But the two are contrary theories because they differ in important ontological respects. For example, the standard interpretation obeys indeterministic laws and denies the existence of particle trajectories, while Bohmian mechanics obeys deterministic laws and claims the existence of particle positions and trajectories (Cushing, 1994, p. 203)[1]. It is important to note that quantum phenomena do not directly tell us what the world is like, so we need to give our interpretation of them. The two theories are commonly referred to as interpretations of quantum mechanics, but they are in fact two different theories. A detailed discussion of this particular case of Bohmian mechanics and the standard interpretation provides a new understanding of the underdetermination problem that the scientific realist faces. Although it is often suggested that the two theories are empirically equivalent (and that, if they are, we face a problem of underdetermination), I deny this, and my argument is threefold. (a) The two theories are not predictively equivalent when we restrict our discussion to the domain of non-relativistic quantum mechanics, where Bohmian mechanics makes predictions about particles having trajectories and has gained empirical support for them [2]. (b) Predictions for domains outside non-relativistic quantum physics are relevant empirical evidence if they flow from the fundamental part of a quantum theory, and those predictions may also be confirmed or disconfirmed by theories or empirical findings in other domains. Bohmian mechanics and the standard interpretation are not empirically equivalent when we consider their implications for the relativistic domain. (c) I also argue that there is non-predictive evidence that provides empirical support for Bohmian mechanics but not for the standard interpretation. Coherence with empirical data about macroscopic systems, such as a walker system consisting of an oil droplet and the wave it creates, which exhibits quantum-like phenomena and resembles the Bohmian wave-particle model, gives an empirical reason to favor Bohmian mechanics [3]. Such empirical support need not be entailed by the theory it supports.
Footnote:
[1] See Cushing, J. T. (1994) “Quantum mechanics: historical contingency and the Copenhagen hegemony”.
[2] See Kocsis, S., Braverman, B., Ravets, S., Stevens, M. J., Mirin, R. P., Shalm, L. K., & Steinberg, A. M. (2011) “Observing the average trajectories of single photons in a two-slit interferometer”.
[3] See Faria, L. M. (2017) “A model for Faraday pilot waves over variable topography".
Can Conventionalism save the Identity of Indiscernibles?
ABSTRACT. In his 1975 paper, The Identity of Indiscernibles, Hacking argues that spatiotemporal examples, like Kant’s two indiscernible drops of water, are inconclusive in establishing or refuting a metaphysical principle like the Leibnizian Principle of the Identity of Indiscernibles [I/I]. This principle postulates that there has to be some difference between two objects; otherwise they are identical to each other and therefore only one object. In cases like Kant’s two drops of water, which appear to violate the I/I, Hacking re-describes the possible world until there is only one object left, which now obeys the I/I. So his main thesis is that “there is no possible universe that must be described in a manner incompatible with I/I” (p. 249).
My aim is to show that Hacking’s argumentation is based on a Poincaré-like conventionalism. With this underlying framework, it becomes easier to see how Hacking’s argumentation works and how it can properly be applied to physical questions, like how we should deal with seemingly indistinguishable quantum particles. Instead of the two drops of water, I will apply his argument to two indistinguishable classical particles. As I will discuss, Hacking’s re-description works well for such permutation-variant particles with well-defined trajectories. But his explanation fails with regard to quantum mechanical objects, which are permutation-invariant and have untraceable trajectories, since they have none. Furthermore, under certain circumstances there is no definite description of a two-particle state, namely when the particles are in a so-called mixed state. In such situations there is no way to give different descriptions of the particle states, which means there exists only a single description with no proper re-description.
Confronted with this quantum mechanical situation, Hacking’s conventionalist approach cannot hold. Such indistinguishable particles violate the I/I – which Hacking rejects as impossible – and there is no possible world description which re-describes the situation in a way the I/I could be defended.
Therefore, we must conclude that Hacking’s (original) argument, that every possible world can be described in such a way that the I/I is preserved, holds only for classical particle systems, but not for quantum mechanical ones. Nevertheless, there still seems to be a possibility to defend Hacking’s claim against the quantum mechanical challenge by going beyond quantum mechanics. If we take conventionalism seriously, it is challenged by quantum descriptions of mixed states. But following Hacking’s line, we can consider whether Quantum Field Theory, which provides a different description of bosonic particles by treating them as field excitations instead of indistinguishable particle states, could be seen as a possible way to re-describe a given system. This would imply that there is in fact a possible re-description of states where quantum mechanics fails to give satisfying explanations. Furthermore, the I/I could be saved again, along with Hacking’s argument.
ABSTRACT. Contemporary physics offers many different pictures of what spacetime is. For example, general relativity treats spacetime as a dynamically interacting (and interacted-with) structure rather than an entity (Dorato 2000). This idea is structural realism applied to spacetime: what is fundamental is not the spacetime points but the metric relations holding among them; that is to say, it is not the relata but the relations which are ontologically fundamental. This picture provides a new understanding of the ontology of theoretical entities, one that even classical physics can support (Esfeld & Lam 2006). The structuralist picture of macro spacetime resembles that of micro quantum particles (Fujita 2018), because in quantum mechanics, while each of the micro particles composing macroscopic matter can be regarded as an individual, particles can also be interpreted as a physical field. Ontological structural realism can solve this underdetermination by asserting that only structures exist in our world (Ladyman and French 2003). Structural spacetime realism is thus a third position, resolving the traditional dispute between substantivalism and relationism (Dorato 2000).
In addition, quantum theories of gravity provide other pictures. String theory regards spacetime as a fixed background, not as a dynamical structure. Loop quantum gravity, moreover, predicts that spacetime consists of more fundamental parts, obtained by quantizing the metric itself, through a so-called spin network. This idea implies that spacetime is substance-like and has parts in micro regions. Even if the pictures provided by quantum theories of gravity are not compatible with a structural interpretation in macro regions, this would not be strange in a sense, for general relativity and quantum theories of gravity are different physical theories. General relativity may be an incomplete and approximate theory subsumed under another, more comprehensive one. If so, the macro structural interpretation advanced from philosophical viewpoints would have to be revised in light of the comprehensive theory.
But loop quantum gravity directly quantizes the Einstein-Hilbert action calculated from the metric field. This quantum theory recovers general relativity in macro regions and describes quantum effects in micro regions. Hence an interpretation of micro spacetime derived from loop quantum gravity should be consistent with one of macro spacetime. These quantum effects are very important in discussing the very early, small universe resulting from standard big bang cosmology. From a structural viewpoint, at the beginning of our universe, which was once as small as the Planck scale, there was only the structure of spacetime; this does not mean that an elastic space was very small at that time (Fujita 2017). Structural spacetime never has substance-like parts. This picture apparently conflicts with spin networks, in which three-dimensional space is described by discrete nodes connected by edges in a new phase space. Nodes and edges can be interpreted as representing three-dimensional volumes and two-dimensional surfaces in physical space, respectively, meaning that our actual space has a minimum size and consists of many atoms. However, the correspondence from an abstract phase space to real physical space involves some emergence (Huggett & Wüthrich 2013). I want to assert that, in micro regions, spin networks are not space itself, even though they must be ingredients of space, and that a structural interpretation of macro spacetime never conflicts with a substance-like micro picture (one with a minimum atomic unit). Spacetime is a structure with no primitive identity for spacetime points and has neither a size nor a shape, but the picture of an "atom of space" is independent of the status of spacetime.
If the above macro and micro interpretations hold water, structural spacetime is emergent from fundamental spin networks. This means that substance-like entities bear a structural spacetime. Surely, whether the atoms of space can be regarded as substance needs a deep metaphysical discussion, just as quantum particles do. Likewise, the structural interpretation may also be applied to the phase space described by nodes and edges. I am willing to consider how structural spacetime is based on more fundamental entities.
References
Mauro Dorato (2000). Substantivalism, Relationism, and Structural Spacetime Realism. Foundations of Physics, 30, 1605-1628.
Michael Esfeld and Vincent Lam (2006). Moderate structural realism about space-time. Springer Science+Business Media B.V.
Steven French and James Ladyman (2003). Remodelling Structural Realism: Quantum Physics and the Metaphysics of Structure. Synthese, 136(1).
Nick Huggett and Christian Wüthrich (2013). Emergent spacetime and empirical (in)coherence. Studies in History and Philosophy of Modern Physics, 44.
Sho Fujita (2017). An Interpretation on Structural Realism to Spacetime Used in Big-Bang Cosmology: Is Space Really Expanding? Japan Association for Philosophy of Science (in Japanese).
ABSTRACT. The concepts of indiscernibility and rigidity in Banach spaces acquire several formal meanings in the different branches of mathematics. We will discuss some of these possibilities in the context of set theory and of Banach spaces and how they interact.
Matteo De Benedetto (Ludwig-Maximilians-Universität München (Munich Center for Mathematical Philosophy), Germany)
Explicating ‘Explication’ via Conceptual Spaces
ABSTRACT. Recent years have witnessed a revival of interest in the method of explication (Carnap, 1950) as a procedure for conceptual change in philosophy and in science (Kuipers, 2007; Justus, 2012), especially in connection with the metaphilosophical nest of positions that goes under the name ‘conceptual engineering’ (Cappelen, 2018).
In the philosophical literature, there has been a lively debate about the different desiderata that a good explicatum has to satisfy (Shepherd and Justus, 2015; Brun, 2016; Dutilh Novaes and Reck, 2017).
It is still difficult to assess the usefulness of explication as a philosophical method, though. While it is true that many different criteria of adequacy have been proposed and discussed, offering a plethora of recipes for any would-be explicator, it is nevertheless difficult to judge these proposals due to the vagueness and ambiguity with which they are (almost) always framed.
The main aim of this work is to explicate ‘explication’, providing a precise bridge-theory in which the explicandum and the explicatum can be represented, thereby allowing an exact framing of the different readings of explication desiderata and therefore a more precise judgment of the adequacy of a given explication. In order to frame my proposal, I will rely on the theory of conceptual spaces (Gärdenfors, 2000; Gärdenfors and Zenker, 2013; Douven et al., 2013).
Specifically, I will show how various readings of explication desiderata (e.g. similarity, fruitfulness, exactness, simplicity, and others) can be precisely framed as geometrical or topological constraints over the conceptual spaces related to the explicandum and the explicatum.
Moreover, with the help of two case studies of successful explications from the history of science (the scientific concept of temperature and the propensity interpretation of fitness), I show how the richness of the geometrical representation of concepts in conceptual spaces theory allows us to achieve fine-grained readings of explication desiderata. For instance, I will show how we can read similarity as a quasi-isometry, exactness as a measure of boundary regions, fruitfulness as convexity, and many others.
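To make one of these readings concrete, here is a minimal sketch in Python (our illustration, not taken from the paper; the disc- and annulus-shaped regions are hypothetical concepts) that tests whether a region of a two-dimensional quality space is convex, the property proposed above as one reading of fruitfulness. A disc-shaped concept passes the test, an annulus-shaped one fails:

def convex_on_sample(member, points, steps=11):
    # Every sampled point on the segment between two member points must itself be a member.
    inside = [p for p in points if member(p)]
    for (x1, y1) in inside:
        for (x2, y2) in inside:
            for k in range(steps):
                t = k / (steps - 1)
                q = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
                if not member(q):
                    return False
    return True

# A grid of candidate points in a two-dimensional quality space.
grid = [(x / 2, y / 2) for x in range(-4, 5) for y in range(-4, 5)]

disc = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0             # convex region
annulus = lambda p: 0.5 <= p[0] ** 2 + p[1] ** 2 <= 1.5   # non-convex region

print(convex_on_sample(disc, grid))      # True
print(convex_on_sample(annulus, grid))   # False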
I will also argue that these tools allow us to overcome some alleged limitations (Brun, 2016; Reck, 2012) of explication as a procedure of conceptual engineering such as the so-called “paradox of adequate formalization” (Dutilh Novaes and Reck, 2017).
References
Brun, G. (2016): “Explication as a Method of Conceptual Re-Engineering”. Erkenntnis 81 (6), 1211-1241.
Cappelen, H. (2018): Fixing Language: An Essay on Conceptual Engineering. Oxford University Press: Oxford.
Carnap, R. (1950): Logical Foundations of Probability. The University of Chicago Press, Chicago.
Douven, I. et al. (2013): “Vagueness: A Conceptual Spaces Approach”. Journal of Philosophical Logic 42 (1), 137-160.
Dutilh Novaes, C. and Reck, E. (2017): “Carnapian explication, formalisms as cognitive tools, and the paradox of adequate formalization”. Synthese 194 (1), 195-215.
Gärdenfors, P. (2000): Conceptual Spaces: The Geometry of Thought. The MIT Press, Cambridge (MA).
Gärdenfors, P. and Zenker, F. (2013): “Theory change as dimensional change: conceptual spaces applied to the dynamics of empirical theories”. Synthese 190 (6), 1039-1058.
Justus, J. (2012): “Carnap on Concept Determination: Methodology for Philosophy of Science”. European Journal for Philosophy of Science 2, 161-179.
Kuipers, T.A.F. (2007): “Introduction. Explication in Philosophy of Science”. In Handbook of the Philosophy of Science: General Philosophy of Science - Focal Issues, Kuipers, T.A.F. (v. ed.), Elsevier, vii-xxiii.
Reck, E. (2012): “Carnapian Explication: a case study and critique”. In Wagner, P. and Beaney, M. (Eds.) (2012), Carnap’s Ideal of Explication and Naturalism, Palgrave Macmillan, UK, 96-116.
Shepherd, J. and Justus, J. (2015): “X-Phi and Carnapian Explication”. Erkenntnis 80, 381-402.
New versions of the mathematical explanation of the cicada case - ad hoc improvements with uncertain outcomes or the way to a full explanation?
ABSTRACT. Platonism in mathematics is not often supported by explicit means. The so-called Enhanced Indispensability Argument is in that sense an exception. It is given in the form of a kind of modal syllogism, in which the second premise asserts the indispensable explanatory role of mathematical objects in science. One example illustrating this claim is the “cicada case”. In the philosophical literature, this example of mathematical explanation in science was already used in Baker (2005). It seems that the disadvantages of this explanation have been an incentive for the emergence of new versions of the mathematical explanation of this scientific phenomenon. We will analyze the suggestions for improving the explanation given in Baker (2016) and Dieveney (2018). Both versions attempt to eliminate the shortcomings of previous versions in an improved mathematical context (new mathematical claims), using the results of modern biological research. We want to show, however, that at the same time both new versions leave some drawbacks of the original version unresolved, while opening additional space for new controversial questions about the reliability of the explanation.
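To make the arithmetic behind the original explanation concrete, here is a minimal sketch in Python (our illustration; the 12-18 year range is the one discussed in the debate, while the 2-9 year predator cycles are hypothetical). A cicada with period p meets a predator with period q every lcm(p, q) years, and the prime periods in the viable range fare best in the worst case:

from math import lcm   # Python 3.9+

# Candidate cicada periods in the ecologically viable range discussed in the debate,
# tested against hypothetical predator cycle lengths of 2-9 years.
for p in range(12, 19):
    worst_case = min(lcm(p, q) for q in range(2, 10))
    # The prime periods 13 and 17 yield the largest minimum lcm,
    # i.e. the rarest worst-case coincidence with a periodic predator.
    print(p, worst_case)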
Although the new version of Baker's explanation (Baker 2016) blunts the criticism directed at the original explanation, it fails to respond to a significant objection: the explanation relies on an unproved empirical assumption of the existence of a predator with periodic life cycles. In addition, there are some other shortcomings in the new explanation. Namely, it uses a new unproved empirical hypothesis (that the ecologically viable range for periodical cicadas is between 12 and 18 years), which was later only made more probable by the “4n + 1” hypothesis (Baker 2017, pp. 782-787). Also, certain technical details on the basis of which the central mathematical statement used in the explanation is derived remain unexplained. For example, the lower bound of one of the intervals referred to in Lemma 2 (Baker 2016, p. 338), which is the formal basis of the optimization described in the paper, is a fixed number, while all other bounds are variables. No argument has been given for why the lower limit of this interval, intended to approximate the lifetime of the predator, should not also be regarded as a variable. We shall show that in such a case (if all bounds were treated as variables), the heuristic consideration that serves as an intuitive introduction to the lemma would not be correct.
The improved explanation given in Dieveney (2018) is an attempt to preserve the power of the mathematical explanation while freeing it from controversial empirical hypotheses. However, we are not sure that the author has succeeded. Dieveney presents two relatively independent variants of explanatory improvement, both based on the same number-theoretic result. In the first variant, the explanation given in Baker (2016) is improved using a weaker empirical requirement regarding viable cicada periods, but the author does not reject the unproven assumption about periodical cycles of predators. In the second variant, he tries to use the observations from Koenig and Liebhold (2013) to link the primeness of cicada life-cycle periods with non-periodical avian-predator life cycles. We shall point out two things. Firstly, the biologists in question do not indicate the possibility of such a connection; they do not see anything relevant in the fact that the length of the cicada life cycle is a prime number. Secondly, although the avian predators in their research do not appear in periodic cycles, their most massive appearances do occur in periodic cycles, expressed mainly by primes. On this basis, it can be said that Dieveney did not fully succeed in confirming his thesis about the influence of non-periodical predators on the cicada life cycle.
Using these and some other arguments, we shall show that the two analyzed texts (Baker 2016 and Dieveney 2018) have raised more questions about mathematical explanation in the cicada case than they have resolved.
Literature:
Baker, A. (2005). Are there genuine mathematical explanations of physical phenomena? Mind, 114, 223-238.
Baker, A. (2016). Parsimony and inference to the best mathematical explanation. Synthese, 193, 333-350.
Baker, A. (2017). Mathematical spandrels. Australasian Journal of Philosophy, 95, 779-793.
Dieveney, P. (2018). Scientific explanation, unifying mathematics, and indispensability arguments (A new Cicada MES). Synthese, DOI: 10.1007/s11229-018-01979-9.
Koenig, W., Ries, L., Beth Kuser Olsen, V., and Liebhold, A. (2011). Avian predators are less abundant during periodical cicada emergences, but why? Ecology, 92(3), 784-790.
Koenig, W., & Liebhold, A. (2013). Avian predation pressure as a potential driver of periodical cicada cycle length. The American Naturalist, 181, 145-149.
ABSTRACT. If statements about a kind of object feature in our best explanation of a phenomenon, does this mean that those objects exist? Some realists in ontological debates think so, arguing for their view on the grounds that the objects in question are indispensable for our best explanations. Others object that even if those objects did not exist, statements about them could still feature in our best explanations. This paper argues that there is some merit to the objection, but that explanations nevertheless have ontological implications.
§1 argues that even if we accepted inference to the best explanation, which is typically used to justify indispensability arguments, this would still not be sufficient reason to think that the objects involved in our best explanations exist. For, inferences to the best explanation are usually inferences from the explanatory power of a hypothesis to its truth, not the existence of the objects involved. So even if we assumed the legitimacy of inference to the best explanation, this only validates indispensability arguments for realism about truth in a domain, not indispensability arguments for the existence of objects.
This does not mean, however, that explanations are completely without ontological implications—§2 argues that explanations put constraints on our overall ontology. Indispensability arguments would be valid insofar as we can infer the existence of an object from the truth of statements concerning it (call such inferences truthmaker inferences). Which truthmaker inferences we accept seems to be affected by our ontological commitments: we expect someone who accepts an ontological commitment to one kind of object to also affirm the validity of truthmaker inferences for objects whose existence is no more contentious (to us) than that kind of object. Therefore, if an indispensability argument for one kind of object is sound, this implies the validity of indispensability arguments for objects whose existence we accept at least as readily.
§3 suggests a further implication that explanations have on our overall ontology. Our readiness to accept the existence of an object seems to vary with its epistemic accessibility to us—that is, if understanding of an object (were it to exist) would be more available to us, then claims about its existence seem less strange. At the same time, for us to recognise an explanation, the explanans has to play such an epistemic role for us that it elicits in us understanding of the explanandum, so objects in the explanans have to be at least as epistemically accessible to us as objects in the explanandum. Therefore, if we see that a statement about X-objects explains a statement about Y-objects, then we should judge an ontological commitment to X-objects as being at least as acceptable as an ontological commitment to Y-objects. So when certain objects are indispensable for our best explanations, this implies that given the existence of objects in the explanandum, those objects would exist.
The upshot for indispensability arguments is that they should be seen as arguments for realists about one kind of object to accept the existence of another.
ABSTRACT. The arguments for establishing the compatibility of determinism and chance usually turn on different levels of description of reality. Different levels of description delineate different sets of possible worlds with their associated state spaces and chance functions. I try a different route to argue for the compatibility of some form of determinism with indeterminism by explicating these ideas in terms of truth-maker semantics, as developed by Kit Fine. I believe the appeal of this approach is that the point can be made without having recourse to a linguistic framework to delineate different levels of description. Instead we can model different levels of reality in a more direct manner. To that end, I start with a situation space (S, ⊑), where S (the set of situations) is non-empty and ⊑ (part) is a partial order on S. Informally, the situation of my tasting a piece of chocolate is part of my tasting and finishing the chocolate bar. The situation space is also endowed with a fusion operator, which enables us to talk about extensions of a situation. The maximal extensions of a situation make up possible worlds (called world-states), but we need not define determinism at that global level. Situations can be actual or possible, and we discern possible situations, a subset of S, by S◊, so that we can define compatible or incompatible situations. My tasting the chocolate is compatible with the situation of my finishing it and also compatible with the situation of my not finishing it.
We call a situation s in S deterministic iff whenever some s’ and t in S are such that s’⊑s, and s’⊑t, then either s⊑t or t⊑s. That is, s is deterministic iff it is part of the unique extension of its sub-situations.
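As a concrete illustration of this definition (our sketch in Python, not part of the abstract's formal apparatus), one can check the condition on a toy situation space in which situations are sets of atomic facts and parthood is set inclusion; in this toy space only the empty situation and the maximal world-state come out deterministic:

from itertools import combinations

atoms = ["a", "b"]
situations = [frozenset(c) for r in range(len(atoms) + 1)
              for c in combinations(atoms, r)]

def part_of(s, t):
    # Parthood modelled as set inclusion, a simple instance of the required partial order.
    return s <= t

def deterministic(s):
    # s is deterministic iff whenever s' is part of s and of t, s and t are comparable.
    return all(part_of(s, t) or part_of(t, s)
               for sp in situations if part_of(sp, s)
               for t in situations if part_of(sp, t))

for s in situations:
    print(sorted(s), deterministic(s))   # only [] and ['a', 'b'] are deterministic here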
Suppose we aim to model a micro-level reality grounding the situations of S. Let us assume we have another situation space (Sm, ⊑m), with its set of possible situations denoted by Sm◊ and its fusion operator ⊔m, satisfying the mereological conditions sketched above. Assume that for each s in S◊ there exists a subset Sm(s) of Sm such that the fusions of the elements of Sm(s) make up any part of s. More precisely, if s’⊑s, then s’ is equivalent to some ⊔m sm,i, where {sm,i} is a subset of Sm(s). What I have described so far does not preclude the possibility that the micro-level is identical with the macro-level. That is not a shortcoming, as it can very well be the case in some possible worlds. The interesting possibilities, however, arise when the levels differ and allow us to find examples in which the situations are deterministic at the macro-level and indeterministic at the micro-level, or vice versa. That possibility is what I illustrate in my talk. I also dwell on the advantages of this approach in avoiding a chance function to represent indeterminism, invoking Norton's indeterministic dome example.
ABSTRACT. Despite rather general consensus among the philosophers of mathematics that the objects of mathematics are structures (patterns, structural possibilities) (cf. e.g. Hellman 2005, Horsten 2012, Landry 2016, Shapiro 2016), there seems to be some disagreement about how mathematical structures are determined.
The traditional view holds that since all concepts of an axiomatized mathematical theory are defined implicitly, by their mutual relations as specified by the axioms, the whole theory is nothing but a general structure determined by these relations. The meaning of all the concepts is strictly internal to the theory, their names being arbitrary tokens. As Hilbert neatly put it, "every theory is only a scaffolding or schema of concepts together with their necessary relations to one another, and ... the basic elements can be thought of in any way one likes" (cf. e.g. Reck and Price 2000, Shapiro 2005).
The potentially competing view is connected with the modern abstract algebra of category theory. Some categories may be interpreted as domains of objects sharing some particular mathematical structure (e.g. group structure), taken together with their mutual ("structure-preserving") morphisms. Despite this original motivation, within any category the objects are primitive concepts, lacking any "internal properties" and determined strictly by their mutual relations in the form of the category morphisms. Given a category, say, of all groups and group homomorphisms, we can define the group structure wholly in categorial terms, without ever invoking the "group elements" or the "group operations". The advantage is that by avoiding mention of any "underlying substance", structures are determined without any unnecessary "non-structural" connotations. Moreover, in this way the relevant notion of isomorphism is also automatically obtained (cf. e.g. Awodey 1996, Landry and Marquis 2005).
Exponents of the respective views regard them as competing (Shapiro 2005, p. 68), even "radically different" (Awodey 2013, p. 4). I want to argue that the difference lies in the question asked rather than in the actual answer provided. There are two levels on which we can consider the determination of mathematical structures. First, any theory determines a structure. There exist many models of a theory, that is, actual mathematical systems satisfying the structural requirements of the theory. These models are not mutually isomorphic and each exhibits some properties irrelevant vis-a-vis the given theory. It is on us, human mathematicians, to ascertain that a given concrete system matches the structural requirements of the given theory. To do this, we have to step outside of the theory proper; in relation to this theory, we enter the realm of meta-mathematics. Second, if we want to speak mathematically about structures, we need to stay within some theory, which means we have to embed them into its structure. As objects of a theory, they are determined strictly by their positions within the whole structure of the theory. Although particularly apt for this purpose, category theory does not differ in this sense from other theories (cf. Resnik 1997, pp. 201-224). To describe a structure as a position within another mathematical structure, without invoking other underlying structures, and without overstepping the limits of mathematics proper, constitutes a laudable exercise. Category theory in particular is instrumental in this. Yet, to start determining structures using a theory, we need to grasp the structure of that theory to begin with. Determining its structure cannot, ultimately, be relegated to another mathematical theory. Moreover, the theory can only determine structures of the same or lesser complexity: we have to be operating within a more complex structure than the one we want mathematically to determine.
Awodey, S. (1996). "Structure in Mathematics and Logic: A Categorical Perspective", In "Philosophia Mathematica", 4(3).
Awodey, S. (2013). "Structuralism, Invariance, and Univalence", In "Philosophia Mathematica", 22(1).
Hellman, G. (2005). "Structuralism", In "The Oxford handbook of philosophy of mathematics and logic".
Horsten, L. (2012). "Philosophy of Mathematics", In "The Stanford Encyclopedia of Philosophy".
Landry, E. and Marquis, J-P. (2005). "Categories in Context: Historical, Foundational, and Philosophical", In "Philosophia Mathematica", 13(1).
Landry, E. (2016). "Mathematical Structuralism", In "Oxford Bibliographies Online Datasets".
Reck, E. H. and Price, M. P. (2000). "Structures and structuralism in Contemporary Philosophy of Mathematics", In "Synthese", 125(3).
Resnik, M. D. (1997). "Mathematics as a science of patterns"
Shapiro, S. (2005). "Categories, Structures, and the Frege-Hilbert Controversy: The Status of Meta-mathematics", In "Philosophia Mathematica", 13(1).
Shapiro, S. (2014). "Mathematical Structuralism", In "The Internet Encyclopedia of Philosophy".
10:00
Andrea Sereni (School for Advanced Studies IUSS Pavia, Italy) Luca Zanetti (School for Advanced Studies IUSS Pavia, Italy)
Modelling Minimalism and Trivialism in the Philosophy of Mathematics Through a Notion of Conceptual Grounding
ABSTRACT. Minimalism [Linnebo 2012, 2013, 2018] and Trivialism [Rayo 2013, 2015] are two forms of lightweight platonism in the philosophy of mathematics. Minimalism is the view that mathematical objects are thin in the sense that “very little is required for their existence” [Linnebo 2018, 3]. Trivialism is the view that mathematical theorems have trivial truth-conditions, in the sense that “nothing is required of reality in order for those conditions to be satisfied” [Rayo 2013, 98].
Minimalism and trivialism can be developed on the basis of abstraction principles, universally quantified biconditionals stating that the same abstract corresponds to two entities of the same type if and only if those entities belongs to the same equivalence class (e.g. Hume’s Principle, HP, which states that the cardinal number of the concept F is identical to the cardinal number of G if and only if F and G can be put into one-to-one correspondence; cf. Frege [1884, § 64]). The minimalist claims that the truth of the right-hand side of HP suffices for the truth of its left-hand side. The trivialist claims that for the number of F to be identical with the number of G just is for F and G to stand in one-to-one correspondence. Moreover, the minimalist and the trivialist alike submit that the notion of the notion of sufficiency, on one side, and the ‘just is’-operator, on the other, cannot be identified with, or are not be interpreted as, a species of grounding or metaphysical dependence. More precisely, Linnebo [2018, 18] requires that “any metaphysical explanation of [the right-hand side] must be an explanation of [the left-hand side], or at least give rise to such an explanation”; the notion of grounding, by contrast, would fail to provide an explanation where one is required. Rayo [2013, 5] argues that ‘just is’-statements are not understood in such a way that “[they] should only be counted as true if the right-hand side ‘explains’ the left-hand side, or it is in some sense ‘more fundamental’”; the notion of grounding, by contrast, would introduce an explanation where none is expected.
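For orientation, Hume's Principle in its usual second-order formulation (our symbolization, standard in the neo-Fregean literature rather than quoted from the abstract): ∀F ∀G (#F = #G ↔ F ≈ G), where #F denotes the cardinal number of the concept F and F ≈ G abbreviates the second-order claim that there is a one-to-one correspondence between the Fs and the Gs.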
In this paper we argue that both minimalism and trivialism can be modelled through an (appropriate) notion of grounding. We start off by formulating a ‘job description’ for the relevant notion(s). As for minimalism, grounding must be both non-factive – viz. a claim of grounding, ‘A grounds B’, must not entail that either A or B are the case – and non-anti-reflexive – viz., it must not be the case that for any A, it is not the case that A grounds A. As for trivialism, the relevant notion of grounding must be non-factive and reflexive – viz., for any A, it is the case that A grounds A. Alternatively, trivialism can be formulated by introducing the notion of portion of reality, consisting in a (possibly fundamental) fact and whatever is grounded by that fact, and arguing that the two sides of a ‘just is’-statement represent (different) facts belonging to the same portion of reality. We then suggest some definitions of both the minimalist’s notion of sufficiency, on one side, and of the trivialist’s ‘just is’ operator, on the other, in terms of (weak) ground. Finally, we point out that a suitable elaboration of the notion of conceptual grounding [Correia & Schnieder 2012, 32], which takes into account the relations of priority among the concepts by which the two sides of HP are described, effectively responds to Linnebo’s and Rayo’s objections.
David Stump (University of San Francisco, United States)
Poincaré Read as a Pragmatist
ABSTRACT. Although there are scant direct connections between Poincaré and the pragmatists, he has been read as one from early on, for example by René Berthelot (1911). Berthelot’s idea was to present Poincaré as the most objective of the pragmatists, while presenting Nietzsche as the most subjective. The idea of a book on pragmatism based on two authors neither of whom is typically placed in the canon of pragmatism may seem bizarre, but there is a compelling logic to looking at the extremes in order to define what pragmatism is and to find common themes throughout the movement. Poincaré certainly shares some themes with the pragmatists, especially the idea of a human element in knowledge that can be seen in his theory of the role that conventions play in science. Poincaré also emphatically rejects a metaphysically realist account of truth as correspondence to an external reality. Perhaps wisely, he does not specify precisely what he does mean by truth, but he frequently uses the language of “useful” or “convenient” theories. Of course, for Poincaré there are limits to conventions. First, he holds that conventions are guided by experience so that we are more likely to choose certain alternatives. Second, he directly and forcefully rejects LeRoy’s interpretation that conventions are found everywhere in science. Poincaré insisted that there are empirical facts, along with conventions. His position is easily comparable to Dewey’s insistence that science is objective even if we reject the metaphysical realist account of representation and hold that values and aims play a role in defining scientific knowledge. Besides clarifying Poincaré’s philosophy of science, reading him as a pragmatist puts his writings into a larger context. The development of 20th century philosophy was influenced heavily by dramatic developments in mathematics and physics. Poincaré was a pioneer in incorporating these developments into philosophy of science, and his pragmatic attitude towards the development of non-Euclidean geometries and relativity in physics was a profoundly influential contribution to the philosophy of science. The development and professionalization of the philosophy of science is often seen as part of the eclipse of pragmatism. In fact, pragmatic ideas were used in many areas of the philosophy of science and continue to provide guidance in current debates. Indeed, pragmatism was always a form of scientific philosophy, maintaining a connection to scientists and philosophers of science. From our current perspective, the pragmatists were right on several issues where they disagreed with the logical positivists. Pragmatists advocated the continuity of inquiry into values and natural science, a type of holism, thoroughgoing fallibilism, and a focus on the practice of science rather than its logical reconstruction. Reading Poincaré as a pragmatist will give us a new perspective on the development of the philosophy of science.
Berthelot, René. 1911. Un Romantisme Utilitaire: Étude Sur le Mouvement Pragmatiste; le Pragmatisme chez Nietzsche et chez Poincaré. Paris: Félix Alcan.
ABSTRACT. The inductive thinking of Whewell has long been underestimated, and there seems to be a trend which simply treats his theory as a typical version of hypothetico-deductivism. However, our research finds that, besides the classical hypothetico-deductivist interpretation, Whewell's thought can also be interpreted as a coherence theory, or as inference to the best explanation. These three interpretations are mutually intertwined. In our opinion, the existence of so many ways to understand Whewell's inductive thought shows, on the one hand, that his ideas are very complicated and profound and therefore deserve in-depth study; on the other hand, it shows that our understanding of the nature of induction is still rather poor, and that a coherence standard is indispensable for any system of inductive logic.
References
[1] Aleta Quinn (2017). Whewell on classification and consilience. Studies in History and Philosophy of Biological and Biomedical Sciences, 64, 65-74.
[2] Cobb, Aaron D. (2011). History and scientific practice in the construction of an adequate philosophy of science: revisiting a Whewell/Mill debate. Studies in History and Philosophy of Science, 42(1), 85-93.
[3] Dov M. Gabbay (2008). Handbook of the History of Logic, Volume 4. Oxford: Elsevier.
[4] Gilbert H. Harman (1965). The Inference to the Best Explanation. The Philosophical Review, 74(1), 88-95.
[5] Gregory J. Morgan (2011). Philosophy of Science Matters. Oxford: Oxford University Press.
10:00
Artur Koterski (Dept. of Logic and Cognitive Science, WFiS, UMCS, Poland)
The Nascency of Ludwik Fleck’s Polemics with Tadeusz Bilikiewicz
ABSTRACT. The debate between Fleck and Bilikiewicz—a historian and philosopher of medicine—took place shortly before the outbreak of WWII and remained virtually unnoticed until 1978; some slightly wider recognition of their exchange was, however, possible only when English (Löwy 1990) and German (Fleck 2011) translations appeared.
Basically, the polemics concerns the understanding of the concept of style and the influence of the environment on scientific activity as well as on its products. It starts as a review of Bilikiewicz’s book (1932), in which a historical account of the development of embryology in the early and late Baroque was interwoven with (at times) bold sociological remarks. Commentators on the debate were quick to notice that the claims made by Fleck at that time are crucial for an understanding of his position, especially because they support its non-relativist reading. While the importance of the controversy has been unanimously acknowledged, its assessments so far have been defective for two reasons. First, for decades the views of Bilikiewicz were known only from the short and rather critical presentation given by Fleck, and this put their discussion into an inadequate perspective. Second, for over 40 years it remained a complete puzzle how this symposium originated. This paper aims to close these gaps.
Thus, on the one hand, it indicates the gist of the disputation between Fleck and Bilikiewicz within the context of Bilikiewicz’s views; on the other, and more importantly, it shows its genesis on the basis of recently discovered and unpublished archival materials. Their preserved correspondence provides an opportunity to advance some hypotheses about the aims and hopes tied to the project, but also about its failure.
Bibliography
Bilikiewicz, Tadeusz (1932). Die Embryologie im Zeitalter des Barock und des Rokoko, Leipzig: Georg Thieme Verlag.
— (1990a). “Comments on Ludwik Fleck’s ‘Science and Social Context’”, in: Ilana Löwy (ed.). The Polish School of Philosophy of Medicine. From Tytus Chalubinski (1820–1889) to Ludwik Fleck (1896–1961), Dordrecht: Kluwer, pp. 257–266.
— (1990b). “Reply to the Rejoinder by Ludwik Fleck”, ibid., pp. 274–275.
Fleck, Ludwik (1990a). “Science and Social Context”, ibid., pp. 249–256.
— (1990b). “Rejoinder to the Comments of Tadeusz Bilikiewicz”, ibid., pp. 267–273.
— (2011). Denkstile und Tatsachen. Gesammelte Schriften und Zeugnisse, Sylwia Werner, Claus Zittel (Hrsg.), Suhrkamp Taschenbuch Verlag, Berlin, pp. 327–362.
Mutually inverse implication inherits from and improves on material implication
ABSTRACT. The author constructs mutually-inversistic logic with mutually inverse implication ≤⁻¹ as its core. The truth table for material implication correctly reflects the establishment of A being a sufficient but not necessary condition of B. The truth table for material equivalence correctly reflects the establishment of A being a sufficient and necessary condition of B. A ≤⁻¹ B denotes A being a sufficient condition of B; its truth table of establishment combines the two truth tables: the first, third, and fourth rows of both truth tables are T, F, and T respectively, so the first, third, and fourth rows of the truth table of establishment for ≤⁻¹ are T, F, and T respectively; the two truth tables differ on the second row, so the second row of the truth table of establishment for ≤⁻¹ is n (n denotes "need not determine whether it is true or false").
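A minimal sketch in Python (our reconstruction of the construction just described, not the author's own presentation) that tabulates the truth table of establishment by combining the two classical tables; the row ordering is assumed from the abstract, with the fourth row the one on which A and B are both true:

# Rows ordered so that rows 1, 3, 4 are the ones on which implication and equivalence agree.
rows = [(False, False), (False, True), (True, False), (True, True)]

def material_implication(a, b):
    return (not a) or b

def material_equivalence(a, b):
    return a == b

for a, b in rows:
    imp, eq = material_implication(a, b), material_equivalence(a, b)
    # Keep the shared value where the two tables agree; mark the remaining row 'n'.
    value = ("T" if imp else "F") if imp == eq else "n"
    print(a, b, value)   # prints T, n, F, T down the rows, matching the abstract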
After an implicational proposition has been established, it can be employed as the major premise of a hypothetical inference. In classical logic, the affirmative expression of hypothetical inference is justified in this way: both A materially implying B and A being true corresponds to the fourth row of the truth table for material implication, in which B is also true. The author argues that this is incorrect. There is a fundamental principle in philosophy: human cognition proceeds from the known to the unknown. There is a fundamental principle in mathematics: the evaluation of a function proceeds from the arguments to the value; if we want to go from the value back to the arguments, we should employ its inverse functions. In order to mathematize human cognition, we let the known be the arguments and the unknown be the value, so that cognition from the known to the unknown becomes the evaluation of a function from the arguments to the value. The truth table for material implication is a truth function in which A and B are the known, the arguments, and A materially implying B is the unknown, the value; therefore, it can only be employed to establish A materially implying B from A and B. After A materially implying B has been established, it becomes known, it becomes an argument. In the generalized inverse functions of the truth table for material implication, by contrast, A materially implying B is the known, the argument; therefore, we can employ these generalized inverse functions to make hypothetical inferences. Following this clue, the author constructs two generalized inverse functions for the truth table of establishment for ≤⁻¹, one for the affirmative expression of hypothetical inference, the other for the negative expression of hypothetical inference.
Mutually inverse implication is free from implicational paradoxes.
Reference
Xunwei Zhou, Mutually-inversistic logic, mathematics, and their applications. Beijing: Central Compilation & Translation Press, 2013
ABSTRACT. Da Costa's paraconsistent logic Cω is axiomatized by adding to positive intuitionistic logic H+ the "Principle of Excluded Middle" (PEM), A ∨ ¬A, and "Double Negation Elimination" (DNE), ¬¬A → A (cf., e.g., [2]). Richard Sylvan (né Routley) notes that "Cω is in certain respects the dual of intuitionistic logic" ([4], p. 48) due to the following facts (cf. [4], pp. 48-49): (1) both Cω and intuitionistic logic H expand the positive logic H+; (2) H rejects PEM but accepts the "Principle of Non-Contradiction" (PNC), ¬(A ∧ ¬A); and (3) H accepts "Double Negation Introduction" (DNI), A → ¬¬A, but rejects DNE. Sylvan adds ([4], p. 49): "This duality also takes a semantical shape: whereas intuitionism is essentially focused on evidentially incomplete situations excluding inconsistent situations, the C systems admit inconsistent situations but remove incomplete situations."
The aim of this paper is to define an unreduced Routley-Meyer semantics for a family of expansions of the minimal De Morgan relevant logic B_M with a basic dual intuitionistic negation in Sylvan's sense.
In order to fulfill the aim stated above, we shall proceed as follows. First of all, it has to be remarked that it is not possible to give an RM-semantics to logics weaker than (not containing) Sylvan and Plumwood's minimal De Morgan logic B_M (cf. [1]). Consequently, the minimal dual intuitionistic logic in this paper is the logic Db, which is the result of expanding B_M with a basic dual intuitionistic negation in Sylvan's sense ("D" stands for "dual intuitionistic negation" and "b" for "basic"). Once Db is defined, we shall build a family of its extensions included in a dual intuitionistic expansion of positive (i.e., negationless) Gödelian logic G3 (cf. [5]), which can here be named G3^D. All logics in this family are given an unreduced RM-semantics with respect to which they are sound and complete. Also, all logics in this family are shown to be paraconsistent in the sense that there are non-trivial inconsistent theories definable upon each one of them. Finally, it will be proved that the dual intuitionistic negation introduced in this paper and the De Morgan negation characteristic of relevant logics are independent in the context of G3^D.
It has to be noted that Sylvan's extension of Cω, CCω, does not include Db and, consequently, neither does it contain any of the logics defined in the paper.
ACKNOWLEDGEMENTS: Work supported by research project FFI2017-82878-P of the Spanish Ministry of Economy, Industry and Competitiveness.
References
[1] Brady, R. T. (ed.) (2003). Relevant Logics and Their Rivals, vol. II. Aldershot: Ashgate.
[2] Da Costa, N. C. A. (1974). On the theory of inconsistent formal systems. Notre Dame Journal of Formal Logic, 15(4), 497-510.
[3] Routley, R., Meyer, R. K., Plumwood, V., Brady, R. T. (1982). Relevant Logics and their Rivals, vol. I. Atascadero, CA: Ridgeview Publishing Co.
[4] Sylvan, R. (1990). Variations on da Costa C Systems and dual-intuitionistic logics I. Analyses of Cω and CCω. Studia Logica, 49(1), 47-65.
[5] Yang, E. (2012). (Star-Based) three-valued Kripke-style semantics for pseudo- and weak-Boolean logics. Logic Journal of IGPL, 20(1), 187-206.
Basic quasi-Boolean expansions of relevant logics with a negation of intuitionistic kind
ABSTRACT. Let L be a logic including the positive fragment of Anderson and Belnap's
First Degree Entailment Logic, the weakest relevant logic (cf. [1]). In [5]
(cf. also [3] and [4]), it is shown that Boolean negation (B-negation) can
be introduced in L by adding to it a strong version of the \textquotedblleft
conjunctive contraposition\textquotedblright\ ($(A\wedge B)\rightarrow \lnot
C\Rightarrow (A\wedge C)\rightarrow \lnot B$, in particular) and the axiom
of double negation introduction (i.e., $\lnot \lnot A\rightarrow A$).
Nevertheless, it is not difficult to prove that B-negation can equivalently
be axiomatized by adding to L the \textquotedblleft Ex contradictione
quodlibet axiom\textquotedblright\ (ECQ, i.e., $(A\wedge \lnot A)\rightarrow
B$) and the \textquotedblleft Conditioned Principle of Excluded Middle
axiom\textquotedblright\ (Conditioned PEM, i.e., $B\rightarrow (A\vee \lnot
A)$).
From the point of view of possible-worlds semantics, the ECQ-axiom can be
interpreted as expressing the thesis that all possible worlds are consistent
(no possible world contains a proposition and its negation). The conditioned
PEM, in its turn, would express that all possible worlds are complete (no
possible world lacks a proposition and its negation). Thus, the ECQ-axiom
and the conditioned PEM-axiom are the two pillars upon which B-negation can
be built in weak positive logics such as the positive fragment of Anderson
and Belnap's First Degree Entailment logic.
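For orientation only (the unreduced Routley-Meyer semantics used in this paper
for the new quasi-Boolean negations need not take this form), the familiar star
semantics for De Morgan negation makes the two readings precise:
\[
a\models \lnot A\ \Longleftrightarrow\ a^{\ast }\not\models A .
\]
On this clause a world $a$ is consistent just in case everything holding at $a$
also holds at $a^{\ast }$, and complete just in case everything holding at
$a^{\ast }$ already holds at $a$. When both conditions obtain at every world,
$a$ and $a^{\ast }$ verify the same formulas and negation behaves classically
at each world, which is one way of seeing why ECQ and the conditioned PEM
jointly suffice to build B-negation.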
This way of introducing B-negation in relevant logics suggests the
definition of two families of quasi-Boolean negation expansions
(QB-expansions) of relevant logics. One of them, intuitionistic in
character, has the ECQ-axiom but not the conditioned PEM-axiom; the other,
dual intuitionistic in nature, has the conditioned PEM-axiom but not
the ECQ-axiom. The aim of this paper is to define and study the basic
QB-expansions of relevant logics built up by using the former type of
negation, the one of intuitionistic sort. We shall provide an unreduced
Routley-Meyer type semantics (cf. [2] and [5]) for each one of these basic
QB-expansions.
B-negation extensions or expansions of relevant logics are of both logical
and philosophical interest (cf. [5], pp. 376 ff.). For example, the logic KR
(\textquotedblleft classical R\textquotedblright ), the result of extending
the relevant logic R with the ECQ-axiom, plays a central role in the
undecidability proofs for relevant
logics by Urquhart (cf. [6]). It is to be expected that quasi-Boolean
negation expansions of relevant logics (not considered in the literature, as
far as we know) will have a similar logical and philosophical
interest.\bigskip
ACKNOWLEDGEMENTS: Work supported by research project FFI2017-82878-P of the
Spanish Ministry of Economy, Industry and Competitiveness.
\begin{thebibliography}{9}
\bibitem{} Anderson, A. R., Belnap, N. D. Jr. (1975). \textit{Entailment: The
Logic of Relevance and Necessity}, vol. I. Princeton, NJ: Princeton
University Press.
\bibitem{} Brady, R. T. (ed.). (2003). \textit{Relevant Logics and Their
Rivals}, vol. II. Ashgate, Aldershot.
\bibitem{} Meyer, R. K., Routley, R. (1973). Classical relevant logics. I.
\textit{Studia Logica}, 32(1), 51--66.
\bibitem{} Meyer, R. K., Routley, R. (1974). Classical relevant logics II.
\textit{Studia Logica}, 33(2), 183--194.
\bibitem{} Routley, R., Meyer, R. K., Plumwood, V., Brady, R. T. (1982).
\textit{Relevant Logics and their Rivals}, vol. 1. Atascadero, CA: Ridgeview
Publishing Co.
\bibitem{} Urquhart, A. (1984). The Undecidability of Entailment and
Relevant Implication. \textit{Journal of Symbolic Logic}, 49(4), 1059--1073.
\end{thebibliography}
Towards a model theory of symmetric probabilistic structures
ABSTRACT. Logic and probability bear a formal resemblance, and there is a long history of mathematical approaches to unifying them. One such approach is to assign probabilities to statements from some classical logic in a manner that respects logical structure.
Early twentieth century efforts in this direction include, as a partial list, work of Lukasiewicz, Keynes, Mazurkiewicz, Hosiasson and Los, all essentially attaching measures to certain algebras. Carnap goes somewhat further in his influential 1950 treatise “Logical foundations of probability”, where he considers a limited monadic predicate logic and finite domains.
The key model-theoretic formalisation is due to Gaifman, in work that was presented at the 1960 Congress of Logic, Methodology and Philosophy of Science held at Stanford — the first in the present conference series — and that appeared in his 1964 paper “Concerning measures in first order calculi”. This work stipulates coherence conditions for assigning probabilities to formulas from a first order language that are instantiated from some fixed domain, and shows the existence of an assignment fulfilling these conditions for any first order language and any domain. Shortly thereafter, Scott and Krauss extended these results to an infinitary setting that provides a natural parallel to countable additivity.
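The coherence requirement is usually summarized by what is now called the Gaifman condition. In one standard formulation (recalled here as background; the exact formulation used in the talk may differ), a probability assignment $\mu$ on the sentences of a first order language in which every element of the domain is named by a constant must satisfy, in addition to the usual additivity requirements on Boolean combinations of sentences,
\[
\mu \big(\exists x\,\varphi (x)\big)\;=\;\sup \Big\{\,\mu \big(\varphi (a_{1})\vee \dots \vee \varphi (a_{n})\big)\;:\;a_{1},\dots ,a_{n}\ \text{domain constants},\ n\geq 1\,\Big\}.
\]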
In his 1964 paper Gaifman also introduced the notion of a symmetric probability assignment, where the measure given to a formula is invariant under finite permutations of the instantiating domain. When the domain is countable, such an assignment is an exchangeable structure, in the language of probability theory, and may be viewed as a symmetric probabilistic analogue of a countable model-theoretic structure. There is a rich body of work within probability theory on exchangeability — beginning with de Finetti in the 1930s and culminating in the representation theorems of Aldous, Hoover and Kallenberg — and this can be brought to bear on the study of such symmetric probabilistic structures.
A joint project of Nathanael Ackerman, Cameron Freer and myself, undertaken over the past ten years, investigates the model theory of these exchangeable structures. In this talk I will discuss the historical context for this project, and its current status.
ABSTRACT. Accessible categories were introduced by M. Makkai and R. Paré as a framework
for infinitary model theory. They have turned out to be important in algebra, homotopy theory, higher category theory and theoretical computer science.
I will survey their connections with abstract elementary classes and discuss how the model theory of abstract elementary classes can be extended to that of accessible categories. In particular, I will present a hierarchy beginning with finitely accessible categories and ending with accessible categories having directed colimits.
ABSTRACT. We discuss the emerging characterization of large cardinals in terms of the closure of images of accessible functors under particular kinds of colimits. This effects a unification, in particular, of large-cardinal compactness and colimit cocompleteness, bringing the former somewhat closer to the structuralist heart of modern mathematical practice. Mediating these equivalences is the phenomenon of tameness in abstract elementary classes, which, not least for historical reasons, has provided an indispensable bridge between the set-theoretic and category-theoretic notions, beginning with work of myself and Rosicky, Brooke-Taylor and Rosicky, and Boney and Unger. We summarize the current state of knowledge, with a particular focus on my paper "A category-theoretic characterization of almost measurable cardinals" and forthcoming joint work with Boney.
Johan Blok (Hanze University of Applied Sciences, Netherlands)
Did Bolzano Solve the Eighteenth Century Problem of Problematic Mathematical Entities?
ABSTRACT. During the eighteenth century, mathematics was widely regarded as the paradigmatic example of apodictic knowledge, especially by the influential philosophers Wolff and Kant. The new mathematical inventions of the seventeenth century, like infinitesimals, were considered as tools to solve problems in the natural sciences rather than as proper mathematical objects that might be the starting point of new mathematical subdisciplines. While mathematics was slowly developing into a field independent of its applications, the philosophy of mathematics was still dominated by Wolff's mathematical method, which is modeled after Euclid’s Elements. At the end of the eighteenth century, several minor figures in the history of mathematics and philosophy, such as Michelsen, Langsdorf and Schultz, attempted to reintegrate the mathematical developments of the seventeenth and eighteenth centuries into the philosophy and epistemology of their time. An important part of their publications is devoted to the manner in which problematic mathematical entities such as infinitesimals and complex numbers should become part of mathematics.
In his early Contributions to a better founded presentation of mathematics of 1810, Bolzano responded to these issues by proposing a much wider conception of mathematics by rejecting the traditional definition of mathematics as the study of quantities. Notes and manuscripts of 1811 and 1812 confirm this radical departure from the tradition. Three decades later, Bolzano returned to the traditional conception of mathematics and offered a solution to the problem of problematic mathematical entities by allowing objectless ideas. While the early Bolzano responded radically to this problem by developing a quite general conception of mathematics, the later Bolzano much more carefully formulates a slightly broader conception of quantity and combines this with a quite advanced epistemology that allows objectless ideas to be meaningful under certain conditions. As a result, the later Bolzano seems to hold that a scientific (sub)discipline can have objectless ideas as its topics of study.
In this paper, I will attempt to answer the following question: why and how did Bolzano change his position and in what manner does this change relate to other developments in the philosophy of mathematics during the first decades of the nineteenth century? To this end, I will first summarize the issues concerning several problematic mathematical objects as they were discussed at the end of the eighteenth century. Subsequently, I will sketch Bolzano’s early and late solutions to these problems. Most of the paper will be devoted to an investigation into Bolzano’s notes and manuscripts in order to attain an understanding of why and how Bolzano changed his solution. Finally, I will compare Bolzano’s approach to the responses of his contemporaries.
Bernard Bolzano and the part-whole principle for infinite collections
ABSTRACT. The embracing of actual infinity in mathematics leads naturally to the question of comparing the sizes of infinite collections. The basic dilemma is that Cantor’s Principle (CP), according to which two sets have the same size if there is a one-to-one correspondence between their elements, and the Part-Whole Principle (PW), according to which the whole is greater than its part, are inconsistent for infinite collections [2].
Contemporary axiomatic set-theoretic systems, for instance, ZFC, are based on CP. PW is not valid for infinite sets. Bernard Bolzano’s approach primarily described in his Paradoxes of the Infinite from 1848 [4] relies on PW.
Bolzano’s theory of infinite quantities is based on infinite series of numbers. PW leads to a special way of treating them. They can be added, multiplied and sometimes we can determine their relationship. If we interpret infinite series as sequences of partial sums and factorize them by the Fréchet filter, then all properties determined by Bolzano will hold [6]. We thus obtain a partially ordered commutative non-Archimedean ring of finite, infinitely small and infinitely great quantities, where we can introduce the so-called “cheap non-standard analysis” [5].
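A minimal sketch of the construction alluded to here (a reconstruction of the standard “cheap non-standard analysis” setting, not a quotation from [5] or [6]): identify an infinite series with the sequence $(s_{n})$ of its partial sums and compare sequences modulo the Fréchet filter of cofinite sets,
\[
(s_{n})\preceq (t_{n})\ \Longleftrightarrow\ \{\,n:s_{n}\leq t_{n}\,\}\ \text{is cofinite},
\]
identifying two sequences when they agree on a cofinite set of indices. Addition and multiplication are defined termwise; constant sequences represent finite quantities, positive sequences tending to $0$ lie below every positive constant (infinitely small), and unbounded increasing sequences exceed every constant (infinitely great). The result is exactly the kind of partially ordered, non-Archimedean commutative ring described above.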
The size of collections with regard to the multitude of their elements is another topic. Bolzano rejects CP as a sufficient criterion for equality of infinite multitudes. Further conditions are necessary; Bolzano refers to the need for having the same “determining ground”. As to the natural numbers, Bolzano does not explicitly determine the relationship between their multitude and the multitudes of their subsets. Nevertheless, it is evident how to express these multitudes with the help of Bolzano’s infinite quantities. This is related to Bolzano’s special notion of a sum [3]. Similarly, infinite series consistently express the multitudes of unions, intersections, and Cartesian products of the natural numbers and their subsets. This extended conception of Bolzano is similar in its results to the theory of numerosities [1].
In Paradoxes of the Infinite Bolzano also investigates relationships among multitudes of points of segments, lines, planes and spaces. In this case, the same “determining ground” means for Bolzano the existence of a one-to-one correspondence which is simultaneously an isometry. This part of Bolzano’s work could also be interpreted in the theory of numerosities.
References
[1] Benci, V., Di Nasso, M., Numerosities of labelled sets: a new way of counting, Adv. Math. 143, 2006, 50-67.
[2] Mancosu, P., Measuring the sizes of infinite collections of natural numbers: Was Cantor set theory of infinite numbers inevitable? The Review of Symbolic Logic, 2/4, 2009, 612-646.
[3] Rusnock, P., On Bolzano’s concept of sum, History and Philosophy of Logic 34, 2013, 155-169.
[4] Russ, S., The Mathematical Work of Bernard Bolzano, Oxford University Press, 2004.
[5] Tao, T., https://terrytao.wordpress.com/2012/04/02/a-cheap-version-of-nonstandard-analysis/
[6] Trlifajová, K., Bolzano’s infinite quantities, Foundations of Science, 23/4, 2018, 681–704.
ABSTRACT. Bernard Bolzano introduced the notion of variable quantities ω in his work on the binomial theorem as an alternative to the “selbst widersprechenden Begriffe unendlich kleiner Grössen” (1816: XI). The latter, he wrote, postulated quantities that were de facto smaller instead of quantities that could be smaller than another, and it compared those quantities with any alleged or conceivable and not merely with any given ones (cf. 1816: V). Because of such notion and the procedures associated with it, the Rein analytischer Beweis (1817) of Bolzano has been traditionally considered as an “epoch-making paper on the foundations of real analysis” (Ewald, 1999: 225). That way, his definition of a continuous function as that for which “der Unterschied f(x+ω)-fx kleiner als jede gegebene Grösse gemacht werden könne, wenn man ω so klein, als man nur immer will, annehmen kann” (1817: 11-12), is usually interpreted as equivalent to the later definitions of Cauchy and Weierstrass.
In this paper we will examine Bolzano’s mathematical diaries written until 1818 in order to provide a better understanding of his careful definition of quantities ω, which he used in his published mathematical works from 1816-1817. As will be shown, despite the fact that Bolzano’s mathematics hinted at some ground-breaking features and concerns, there is an intrinsic difference between his notion and the Weierstrassian ε. In particular, there seems to be enough evidence to support the claim that his alternative concept was rooted in a certain distinction between the actual and the potential infinite. Thus, in a note dated around December 1814, he stressed that quantities ω should be understood as the assertion “dass man zu jedem schon angenommenen [Grösse] ein noch kleineres (grösseres) annehmen kann” (BGA 2B7/1: 79). As Bolzano’s Reine Zahlenlehre shows, his later developments involved a “major change” (Russ & Trlifajová, 2016: 44) in the notion of infinity, as he went on to accept what he described in a note from 1818 as the “noch zweifelhaft” concept of infinitely small quantities (BGA 2B9/2: 126).
References
Bolzano, Bernard (1816). Der binomische Lehrsatz, und als Folgerung aus ihm der polynomische, und die Reihen, die zur Berechnung der Logarithmen und Exponentialgrößen dienen, genauer als bisher erwiesen, C.W. Enders, Prague.
Bolzano, Bernard (1817). Rein analytischer Beweis des Lehrsatzes, dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege, Gottlieb Haase, Prague.
Bolzano, Bernard (1977ff.). Miscellanea Mathematica. In: Anna van der Lugt and Bob van Rootselaar (Eds.), Bernard Bolzano Gesamtausgabe (BGA), Reihe II, Band 2B2/1 to B9/2, Frommann-Holzboog, Stuttgart-Bad Cannstatt.
Ewald, William (Ed.) (1999). From Kant to Hilbert, Vol. I. Oxford University Press, Oxford.
Russ, Steve & Trlifajová, Kateřina (2016). Bolzano’s measurable numbers: are they real? In: Maria Zack and Elaine Landry (Eds.), Research in History and Philosophy of Mathematics, The CSHPM 2015 Annual Meeting in Washington D.C., Birkhäuser, pp. 39-56.
In Defense of a Contrastivist Approach to Evidence Statements
ABSTRACT. This paper addresses a problem pointed out by Jessica Brown (2016) for contrastivism about evidential statements (Schaffer 2005), where evidential support is understood quantitatively (as increase of subjective probability).
Consider Situation 1: it's Friday afternoon and I need to deposit my paycheck, but nothing much hangs on that. Suppose the following Question Under Discussion (QUD) is relevant for me: “Will the bank be open on Saturday?” and my evidence is that I drove past it two weeks ago and it was open. The sentence “My recent visit to the bank is evidence that it will be open on Saturday” is intuitively true, as well as true according to contrastivism. Consider now Situation 2: the stakes are higher. Unforeseen circumstances causing the bank to close become relevant for me. It seems that in this case the sentence “My recent visit to the bank is evidence that it will be open on Saturday” is false under the following contrastivist interpretation: “My recent visit to the bank is evidence that it will be open [rather than closed due to unforeseen circumstances] on Saturday”. Yet, Brown says that this is the wrong prediction, at least under a quantitative construal of the relation of evidential support, for having driven past the bank increases the subjective probability that the bank will be open on Saturday (which is the QUD she associates with Situation 2). This seems to be a problem for contrastivism.
I wish to argue that Brown’s argument is not conclusive. Think back to low-stakes Situation 1. The contrastivist can argue that, upon reflection, the QUD should be formulated in the following contrastive way: “Will the bank be open on Saturday as per its regular schedule, rather than closed on Saturday as per its regular schedule?” It is compatible with everything Brown says that the QUD has this more sophisticated structure; indeed one could argue that careful consideration of the scenario prompts us to this formulation. A similar reformulation can be given of the QUD of Situation 2, where the stakes are higher: “Will the bank be open on Saturday as per its regular schedule, rather than closed on Saturday due to an unforeseen change of hours?” This allows one to vindicate contrastivism. In Situation 2, the sentence “My recent visit to the bank is evidence that it will be open” is false relative to the QUD of that context, as predicted. For having driven by the bank two weeks earlier does not increase the probability that the bank will be open on Saturday as per its regular schedule rather than closed on Saturday due to an unforeseen change of hours. The contrastivist can therefore resist Brown's accusation.
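One way to make the quantitative contrastive reading explicit (a reconstruction offered for clarity, not a formula taken from Brown or Schaffer): say that $E$ is evidence that $H$ rather than $H^{\prime }$ just in case
\[
\frac{P(H\mid E)}{P(H^{\prime }\mid E)}\;>\;\frac{P(H)}{P(H^{\prime })},
\]
i.e., $E$ raises the probability of $H$ relative to the contrasting alternative. On this reading, my recent drive past the bank raises the probability of “open as per the regular schedule” relative to “closed as per the regular schedule” (Situation 1), but not relative to “closed due to an unforeseen change of hours” (Situation 2), which is the verdict defended above.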
References
Brown, Jessica (2016). Contextualism about Evidential Support. Philosophy and Phenomenological Research 92 (2):329-354.
Schaffer, Jonathan (2005). Contrastive knowledge. In Tamar Szabo Gendler & John Hawthorne (eds.), Oxford Studies in Epistemology 1. Oxford University Press.
Concepts and Replacement: What should the Carnapian model of conceptual re-engineering be?
ABSTRACT. Many concepts, it seems, are deficient. One response to conceptual deficiency is to, in some sense, refine the problematic concept. This is called 'conceptual re-engineering'.
We can think of Carnapian explication as a model of conceptual re-engineering. On this approach, conceptual re-engineering consists of the replacement of one concept, the 'explicandum', with another concept, the 'explicatum'. One advantage of the approach is that Carnapian explication promises us something approaching a step-by-step guide for conceptual re-engineering.
For such a model to be helpful, however, we need an account of ‘conceptual replacement’. For modelling purposes, then: How should we understand ‘concepts’? And what should we understand to be involved in their ‘replacement’?
I will consider and reject two answers, before recommending an alternative.
1. The naïve view
Concepts are word meanings, and have a definitional structure (or are constituted by rules). Replacement involves changing the meaning of a word from the explicandum to the explicatum, or using a word whose meaning is the explicatum in place of a word whose meaning is the explicandum.
Initial objection. This theory of concepts is problematic. (E.g. Williamsonian worries about analyticity.)
Principal objection. Conceptual re-engineering is a theory-neutral methodology, and so we want a model that is as theory-neutral as possible. Given the initial objection, the naïve view fails to meet this desideratum.
2. A Cappelen-inspired view
Talk of concepts is problematic. Think instead in terms of intensions and extensions. In particular: replacement involves changing a word’s intension; the explicandum is the old intension; the explicatum is the new intension.
Advantage 1. Intensions/extensions theoretically much less weighty and controversial than many theories of ‘concepts’.
Advantage 2. That being said, intensions are perhaps a plausible model of concepts anyway.
Objection. Changing a word’s intension/extension is hard – it doesn’t appear to be achievable by simply stipulating a new definition/rules of use for a term. So this view doesn’t seem a good model of conceptual re-engineering.
3. The speaker-meaning view
Distinguish between speaker-meaning and semantic-meaning. Speaker-meaning is closely tied to intentions – very roughly, one speaker-means that which one intends to convey. Semantic-meaning is tied to linguistic conventions. Model both speaker-meaning and semantic-meaning using intensions/extensions. Then (for w1, w2 not necessarily distinct):
The explicandum is the semantic-intension of w1. The explicatum is a speaker-intension of w2. Replacement consists of using w2 to speaker-mean the explicatum, instead of using w1 to speaker-mean the explicandum.
Advantages 1 and 2 as above.
Advantage 3. We are in control over speaker-meaning (because we are in control over our intentions).
Advantage 4. Makes sense of why conceptual re-engineers typically specify explicata by definitions: they are displaying their communicative intentions. (Challenge: what about rules of use?)
References
Cappelen, H. 2018. Fixing Language. Oxford University Press.
Carnap, R. 1950. Logical Foundations of Probability. Routledge.
Williamson, T. 2007. The Philosophy of Philosophy. Blackwell.
ABSTRACT. A distinction can be made between three kinds of systematic investigations into explication: (i) Some investigators are interested in the puzzles surrounding explication, e.g. the analysis paradox ([13], [12]). (ii) Other investigators are interested in "getting explication straight" by means of a structural account ([1], [8]) or a systematic distinction between different kinds of explication ([3]). They sometimes need to presuppose that certain puzzles of explication are solved in one way or another. (iii) Finally, some investigators are interested in making methodological contributions in the sense that they either concentrate on setting up an explicative method which is able to instruct potential explicators in performing an explication or they enhance a preexisting method of explication by governing the performance of subsidiary activities (e.g. explicandum clarification). These scholars are providing a (partial or full) procedural account of explication. In most cases these investigators rely on a more or less explicit structural account of explication and thus, by extension, on certain answers to some puzzles of explication ([2], [5], [7], [9, sect. II.3.c], [10], [11]). -- The proposed ternary distinction is not a strict trichotomy since some investigations fall into more than one category.
In my talk I will first sketch this distinction by means of some examples. This will include some remarks on how the three kinds relate to one another and a brief outline of the tradition associated with the third kind of investigations. Special attention will be given to the purposes of explication in philosophy and in other contexts (including, but not limited to, the sciences and humanities). In particular, I will illustrate to what extent different purposes of explication were recognized in procedural accounts of explication. Evidently, investigations of the third kind have a normative dimension -- explicators can see themselves as being directed by procedural accounts of explication. This internal normativity is affected by the external normativity of language in general, which is one main theme in the recent debate on different kinds of conceptual engineering ([6, ch. 13]). Thus, the question arises as to how various purposes of explication relate to the normativity of these accounts and to the normativity of language in general.
The considerations will allow for a conscious approach to the task of establishing a method of explication which takes into account explicative purposes and the normative pressure exerted on explicators. While these elaborations could be adapted to other kinds of conceptual engineering, in the presentation this possibility will only be hinted at. Instead I will assess one specific procedural account of explication, which can be framed in formal or informal terms. It will be put forward in conjunction with a classical example of an explication ([4]) in order to avoid an assessment which is too remote from reality because of idealized perspectives on explicative purposes and the normativity of explication. The aim is to arrive at a procedure which is applicable and which can be thought of as having been applied.
REFERENCES
[1] Georg Brun. Explication as a method of conceptual re-engineering. Erkenntnis 81(6):1211-1241, 2016.
[2] Rudolf Carnap. Logical Foundations of Probability. University of Chicago Press, Chicago, 1950.
[3] A. W. Carus. Engineers and drifters: The ideal of explication and its critics.
In Pierre Wagner (ed.), Carnap's Ideal of Explication and Naturalism, 225-239, Palgrave Macmillan, Houndsmill, Basingstoke, Hampshire, 2012.
[4] Gottlob Frege. The Foundations of Arithmetic. A logico-mathematical enquiry
into the concept of number. Basil Blackwell, Oxford, 1953.
[5] Joseph F. Hanna. An explication of `explication'. Philosophy of Science, 35(1):28-44, 1968.
[6] Sally Haslanger. Resisting Reality: Social Construction and Social Critique. OUP, Oxford, 2012.
[7] James Justus. Carnap on concept determination: Methodology for philosophy
of science. European Journal for Philosophy of Science, 2:161-179, 2012.
[8] Michael Martin. The explication of a theory. Philosophia, 3(2-3):179-199, 1973.
[9] Arne Naess. Interpretation and Preciseness. A Contribution to the Theory of
Communication. I Kommisjon Hos Jacob Dybwad, Oslo, 1953.
[10] Erik J. Olsson. Gettier and the method of explication: a 60 year old solution
to a 50 year old problem. Philosophical Studies, 172(1):57-72, 2015.
[11] Mark Pinder. Does experimental philosophy have a role to play in Carnapian
explication? Ratio, 30(4):443-461, 2017.
[12] Mark Pinder. On Strawson's critique of explication as a method in philosophy. Synthese, S.I. PhilMethods, 2017. Online First, DOI: 10.1007/s11229-017-1614-6.
[13] Peter F. Strawson. Carnap's views on constructed systems versus natural languages in analytic philosophy. In Paul Arthur Schilpp (ed.), The Philosophy of Rudolf Carnap, 503-518. Open Court, La Salle, Ill., 1963.
While philosophers of mathematics usually focus on notions such as proof, theorem, concept, definition, calculation, and formalization, historians of mathematics have also used the notion of “style” to characterize the works of various mathematicians (from Euclid and Archimedes through Riemann, Brouwer, Noether, and Bourbaki to Mac Lane and Grothendieck). One question is, then, whether that notion should be seen as having significance from a philosophical point of view, and especially, for epistemological purposes. The notion of “style” is quite ambiguous, however, both in general and as applied to mathematics. In the present context, it is typically used in a sense close to “methodology” or “general approach”, i.e., a characteristic and distinctive way of investigating, organizing, and presenting mathematical ideas (geometric, algebraic, conceptual, computational, axiomatic, intuitive, etc.); but it has also been used in a personal/psychological sense (individual style), a sociological/political sense (e.g., national styles), a literary or more broadly aesthetic sense (writing style, stylishness), and as indicating a brand (an easily recognizable, influential, and visible style).
The seven talks in this session will explore this topic in a broad and multi-faceted way. They will investigate supposed differences in style within ancient and medieval mathematics (not just in ancient Greece but also China), early and late modern mathematics (into the 19th and 20th centuries, e.g., Boole, Riemann, and Dedekind), and current mathematics (especially category theory, but more computational approaches too). A particular focus in several of the talks will be the “structuralist” style that has dominated much of mathematics since the second half of the 19th century. But “stylistic” issues concerning logic, on the one hand, and more popular presentations of mathematics, on the other, are also considered. In addition, several more general discussions of the notion of style in science, e.g., by Ludwik Fleck, G.-G. Granger, and Ian Hacking, are addressed and related to mathematics, as are questions about the dynamics of styles, i.e., the ways in which they get modified and transformed over time. Overall, it will become evident that the notion of “style” should, indeed, be seen as significant philosophically, but also as being in need of further disambiguation and sharpening.
Chair:
Karine Chemla (SPHERE-CNRS & University Paris Diderot, France)
Erich Reck (University of California at Riverside, United States)
Dedekind, Number Theory, and Methodological Structuralism: A Matter of Style?
ABSTRACT. In describing the contributions of Richard Dedekind to nineteenth-century number theory, the mathematician and historian of mathematics H.M. Edwards has attributed a distinctive "style" of doing mathematics to him, one that stands in contrast to that of his contemporary Kronecker and had a strong influence on twentieth-century mathematics. Other historians and philosophers of mathematics have credited Dedekind with contributing to a characteristically "conceptual" approach to mathematics, as inaugurated by his teachers and mentors Gauss, Dirichlet, and Riemann, as opposed to a more "computational" alternative, again often identified with Kronecker and his followers. And more recently, some authors have talked about a "structuralist style" as exemplified by Dedekind's work, as related to but also distinguishable from a "structuralist view of mathematical objects" one can find in that work as well.
The present talk has three main goals: The first is to describe Dedekind's contributions to number theory briefly so as to make its "structural" nature evident, primarily in a methodological but secondarily also in a metaphysical sense. This will involve contrasting his approach to mathematics with that of his contemporary Kronecker, as well as exploring his influence on twentieth-century set theory, especially via Zermelo's work; and it will lead to his impact on later "structuralist" mathematicians, such as Noether, Artin, and Bourbaki. Second, the question will be addressed in which sense one can, and should, talk about a "style" here, i.e., how exactly that notion should be understood in this context. More particularly, can it be spelled out in a philosophically significant way, and especially, in an epistemological sense? The argument will be that it can, although the notion of "style" is in need of clarification, which will also be provided to some degree.
The third main goal, which builds on the first two, will be to explore a dynamic change in this connection, involving several stages: from the "conceptual style" exemplified by Dedekind's teachers, especially Dirichlet and Riemann, which Dedekind adopted early on as well; to Dedekind's own "structuralist style", with its added set-theoretic and category-theoretic elements; through a more maturely "structuralist style" in figures such as Noether, Artin, and Bourbaki; and finally, leading to the "category-theoretic style" of Mac Lane, Grothendieck, and others. An added outcome of this part of the talk will be that the notion of "style" is particularly helpful if one views mathematics not as a static, finished system of theorems and proofs, but as a developing practice, so that center stage is taken by ways in which novel research is undertaken, mathematical ideas and results are re-organized, and the field is pushed in new directions.
This talk is meant to be part of a symposium, entitled "Styles in Mathematics" (acronym: SIM, co-organized by E. Reck & G. Schiemer), in which some of the themes central to it will be explored in more general ways.
Structuralism as a mathematical style: Klein, Hilbert, and 19th-Century Geometry
ABSTRACT. Structuralism in contemporary philosophy of mathematics is, roughly put, the view that mathematical theories study abstract structures or the structural properties of their subject fields. The position is strongly rooted in modern mathematical practice. In fact, one can understand structuralism as an attempt to come to terms philosophically with a number of wide-ranging conceptual and methodological transformations in 19th and early 20th century mathematics, related to the rise of modern geometry, number theory, set theory, and abstract algebra. The aim in this talk is twofold. The first one is to focus on the geometrical roots of structuralism. Specifically, we will survey some of the key conceptual changes in geometry between 1820 and 1910 that eventually led to a “structural turn” in the field. This includes (i) the gradual implementation of model-theoretic techniques in geometrical reasoning, in particular, the focus on duality and transfer principles in projective geometry; (ii) the unification of geometrical theories by algebraic methods, specifically, by the use of transformation groups and invariant theory in Felix Klein’s Erlangen Program; and (iii) the successive consolidation of formal axiomatics and the resulting metatheoretic approach to axiomatic theories in work by Hilbert and others. The second aim in this talk is more philosophical in nature. It will be to characterize this structural turn and the structural methods developed in nineteenth-century geometry as a fundamental “change of style” in mathematical reasoning that brought with it a new conception of the nature of mathematical knowledge. Analyzing the “methodological structuralism” underlying 19th and early 20th century mathematics as a mathematical style in this sense will draw both from Ian Hacking’s work on different “styles of reasoning” in science (in particular Hacking 1992) and from Granger’s analysis of collective styles in mathematics in his Essai d’une philosophie du style of 1968. With respect to the former work, the focus will be in particular on Hacking’s discussion of “standards of objectivity” related to particular scientific styles. According to Hacking, “every style of reasoning introduces a great many novelties including new types of: objects, evidence, sentences, new ways of being a candidate of truth and falsehood, laws, or at any rate modalities, possibilities” (Hacking 1992, p. 11). Based on the survey of the different methodological developments in geometry mentioned above, we will attempt to spell out the novel standards of objectivity, including a new conception of the very subject matter of geometrical theories, implied by the turn to a structuralist style in mathematics.
Designing the structuralist style: Bourbaki, from Chevalley to Grothendieck
ABSTRACT. Symposium: Styles in Mathematics (SIM)
Nicolas Bourbaki, the name under which a group of young French mathematicians decided to publish a series of volumes on mathematics, had a profound influence on the development and presentation of mathematics in the second half of the 20th century. In this talk, I will argue that right from the start, that is in 1934-35, Bourbaki knew he was introducing a new style of doing, presenting and developing mathematics. This style is usually identified with the systematic usage of the axiomatic method. But this description falls short, for what Bourbaki is proposing is really a conceptual approach to mathematics which is, in the end, encapsulated in a specific form of structuralism. The main purpose of this talk is to explain the technical details underlying this approach as well as its philosophical consequences.
In 1935, one of the original members of Bourbaki, Claude Chevalley, published a paper in a philosophical journal entitled “Variations du style mathématique”, in which he articulates some of the elements that will become Bourbaki’s vision. The latter is usually identified with the paper published by Bourbaki in 1950, but in fact written by another member of Bourbaki, Jean Dieudonné, and entitled “The Architecture of Mathematics”. We will briefly present the main theses of these papers, but our target will rather be what one finds in the published volumes and the Bourbaki archives, in particular in the volume on logic and set theory and its various preliminary versions, which took 20 years to complete. It is in this volume that one finds the articulation of the axiomatic method and, in particular, its structuralist version. It turns out that Bourbaki built the notion of isomorphism into the various mathematical theories he was interested in. We will discuss in what sense this is a version of mathematical structuralism.
However, it turns out that Bourbaki’s version, although essentially correct when it was formulated, faced a problem with the advent of category theory. One of its young members, Alexander Grothendieck, took the bull by the horns and kept Bourbaki’s spirit by using categories to construct new mathematical theories. Grothendieck left the Bourbaki group, since some of the original members disagreed with the way he was using categories in mathematics. I will argue that he was nonetheless faithful to the original project launched by Bourbaki and that he was in fact adding a stone to the structuralist edifice erected by his predecessors, an edifice which is still under construction as I write.
With this special episode at hand, we will suggest a characterization of the structuralist style of abstract mathematics. We will contrast our analysis with those proposed by Gilles-Gaston Granger, Paolo Mancosu and David Rabouin.
ABSTRACT. Within the last two decades, plant science has increasingly sought to apply fundamental biological insights and new techniques developed through laboratory studies of model organisms to research on crops. This move was accompanied by a growth in efforts to (1) move research outside of the standard laboratory environment and into hybrid spaces (such as field stations, farm platforms and smart glasshouses) that are perceived to better capture features of the ‘natural environment’; (2) integrate agronomic research with ‘basic’ plant science, so as to harness cutting-edge insights into molecular mechanisms and related technologies to increase food security; (3) study plant species of economic and cultural interest to parts of the world other than Europe and the United States, such as cassava and bambara groundnut; (4) increase knowledge about gene-environment interactions, using phenotypic traits as conduits to understand the impact of genetic modifications and/or environmental changes on plant structures and behaviors; and (5) produce ‘global’ infrastructures and venues where data, germplasm and knowledge about plant species used in different parts of the world can be shared and discussed. This paper will discuss the epistemic implications of these trends, focusing on the issues arising from attempts to share phenomic data about crops across different locations, and particularly between high-resourced and low-resourced research environments. In particular, I discuss the case of the Crop Ontology and its efforts to document and link the diversity of tools, terminologies and variables used to describe widely diverse species in different parts of the world. I argue that such practices do not relate in straightforward ways to traditional taxonomic practices, and in fact defy existing understandings of systematisation in biology and beyond. Here is a case where reliance on a universal approach to identifying and labelling traits has repeatedly proved problematic, and yet the attempt to articulate semantic differences is generating new ways to develop and communicate biological knowledge.
Paul Hoyningen-Huene (Leibniz University of Hannover, University of Zurich, Germany)
Do abstract economic models explain?
ABSTRACT. At least in the last two decades, the problem of the putative explanatory force of abstract economic models has been framed as follows. How can it be that these highly idealized, even counterfactual, thus literally false models can be explanatory of the real world? The explanatory bottleneck was thus taken to consist in the gap between models and real world (Sugden 2000). Reiss 2012 even postulated an “explanation paradox” due to the literal falseness of the models.
In this paper, I shall claim that the really fundamental explanatory bottleneck of explanations by abstract models has been misidentified. Even if the transfer from abstract models to the real world were completely unproblematic, a model would not explain by itself some real world state.
Let us assume that we can model the state to be explained in the model world. For instance, if the explanandum is some kind of segregation, in a checkerboard model like Schelling’s the explanandum would be represented by some clustering of the white and the black chips, a “model-segregation state”. A suitable model like Schelling’s consists of a model dynamics that somehow represents a possible real dynamics, for instance by “model-weak-racial-preferences”. The model is then capable of generating the model-segregation state out of an initial “model-non-segregation state”, i.e., a random distribution of white and black chips. Is this generation of the final model-segregation-state out of the initial model-non-segregation-state an explanation of the former in the model world? This is clearly not the case because we can easily build models with different dynamics also leading to model-segregation as final states, e.g., models with model-strong-racial-preferences, or model-apartheid-laws, or model-race-related-income-differences, etc. Thus, if in the model world a final state with model-segregation is given, every particular model producing that final state only delivers a potential explanation for the emergence of model-segregation out of model-non-segregation. This result is quite general: models never produce actual explanations in the respective model world but only potential explanations. This is because alternative models may lead to the same final state.
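As an illustration of the kind of dynamics just described, the following is a minimal checkerboard sketch in Python. It is a toy in the spirit of Schelling's model, not the authors' own formalization; the grid size, the 30% preference threshold and the random-relocation rule are assumptions of the sketch.

    import random

    SIZE = 20          # 20 x 20 board
    EMPTY_FRAC = 0.1   # share of empty cells
    THRESHOLD = 0.3    # "weak" preference: want at least 30% like neighbours
    STEPS = 50000

    def make_grid():
        cells = [None] * int(SIZE * SIZE * EMPTY_FRAC)
        rest = SIZE * SIZE - len(cells)
        cells += ['B'] * (rest // 2) + ['W'] * (rest - rest // 2)
        random.shuffle(cells)                  # the initial "model-non-segregation state"
        return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

    def unhappy(grid, r, c):
        me = grid[r][c]
        if me is None:
            return False
        neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)]
        occupied = [x for x in neighbours if x is not None]
        if not occupied:
            return False
        return sum(x == me for x in occupied) / len(occupied) < THRESHOLD

    def step(grid):
        movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(grid, r, c)]
        empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
        if not movers or not empties:
            return False
        (r, c) = random.choice(movers)         # one unhappy agent ...
        (er, ec) = random.choice(empties)      # ... relocates to a random empty cell
        grid[er][ec], grid[r][c] = grid[r][c], None
        return True

    grid = make_grid()
    for _ in range(STEPS):
        if not step(grid):
            break  # nobody unhappy any more: a "model-segregation state" has emerged
    for row in grid:
        print(''.join(x or '.' for x in row))

Started from a random distribution, the loop typically halts with visibly clustered B and W regions even though each agent only demands a 30% share of like neighbours; and, as stressed below, quite different dynamics could produce the same final clustering.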
To see how a potential model world explanation carries over to the real world, we continue to assume that the correspondences between model states and model dynamics on the one hand and real world states and real world dynamics on the other are unproblematic. Does the Schelling model with model-weak-racial-preferences then explain real world segregation states? Of course not, because already in the model world it produces only a potential explanation, so in the real world it also produces (at best) a potential explanation. In order to transform a potential explanation into an actual one, one has to exclude all alternative potential explanations. It is fundamental to note that the actual explanation is distinguished from other potential explanations primarily not by an intrinsic property such as high credibility (as Sugden assumes), but by its comparative advantage against competitors. This may be accomplished by showing that the empirical conditions necessary for alternative models to work do not obtain.
The extrapolator’s circle states, roughly speaking, that in order to extrapolate a model or causal relation it is necessary to know that it applies in the relevant new domain, but that in order to establish the latter it is necessary in turn to examine that new domain – thus negating the main benefit of extrapolation, which is precisely that we can avoid having to examine the new domain (Steel 2008). The extrapolator’s circle is an important problem. How might it be overcome? We argue for a previously unappreciated solution, namely the use of prediction markets (when they are available, which is currently mainly in social science cases).
Prediction markets are markets for placing bets on future or otherwise unknown events. The price signals arising in such markets, if interpreted as probability assignments, constitute implicit predictions. Prediction markets are attractive because they have a track record of predictive success (e.g. AUTHOR 2016). They work well when there are at least some informed traders on the market – indeed, going by the current empirical literature, this seems close to a sufficient condition for them to predict successfully. If so, then the only thing you need to know as a market maker applying a prediction market to some new domain is that, somewhere in the pool of traders you attract, there will be some who are informed. Crucially, what you don’t need to know is any particular theory about the new domain. Of course, individual traders on the market might make any number of theoretical assumptions, and (lucky guesses aside) those assumptions will usefully inform the market’s output only in so far as they lead to good predictions. But the market maker need presuppose almost no theory whatsoever.
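To make concrete how a price can be read as a probability, consider one standard automated market maker, Hanson's logarithmic market scoring rule (mentioned only as an illustration; the abstract does not commit to any particular market mechanism). With outstanding quantities $q_{1},\dots ,q_{k}$ of contracts on mutually exclusive outcomes, each paying 1 if its outcome obtains, and a liquidity parameter $b>0$, the cost function and the instantaneous price of outcome $i$ are
\[
C(q)=b\,\ln \sum_{j=1}^{k}e^{q_{j}/b},\qquad p_{i}(q)=\frac{e^{q_{i}/b}}{\sum_{j=1}^{k}e^{q_{j}/b}};
\]
the prices are nonnegative and sum to one, so they can be read directly as the market's probability assignment over the outcomes.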
In effect, prediction markets are mechanisms that extrapolate easily because they require unusually minimal assumptions. In particular, they require only that there exist some informed traders, plus that there is sufficient market liquidity, available data, legal infrastructure, and so forth.
The above only concerns prediction of actual events. But extrapolation also often concerns conditional predictions, e.g. about the result of possible or counterfactual interventions. Hitherto, there has been no evidence that guidance about such conditional predictions can be given by prediction markets because by definition in conditional cases no actual event ever occurs that settles market participants’ bets, at least not in the timeframe of interest. But recent experimental research now suggests that so-called self-resolving prediction markets, i.e. markets for non-actual events, may operate just as reliably as markets for actual events. We report on that research here. If it holds up, it will show that prediction markets can achieve all of the goals of extrapolation, namely prediction of both actual and non-actual events alike.
References
AUTHOR (2016). ‘Information Markets’, in Coady, Lippert-Rasmussen, and Brownlee (eds), The Blackwell Companion to Applied Philosophy (Wiley-Blackwell).
Daniel Steel (2008). Across the Boundaries (Oxford).
Tim Lyons (Indiana University-Purdue University Indianapolis, United States)
The Reach of Socratic Scientific Realism: From axiology of science to axiology of exemplary inquiry
ABSTRACT. This paper constitutes an effort to engage directly on the conference theme, “bridging across academic disciplines.” I will argue that a specific refined axiological scientific realism—that is, an empirical meta-hypothesis about the end toward which scientific reasoning is directed—can be extended across domains of inquiry. The ultimate focus here will be on those domains that do not generally fall under the rubric of “science.”
I will begin by clarifying the nature of axiological meta-hypotheses in general, defusing a set of concerns philosophers tend to express about them. I will then introduce the refined realist axiological meta-hypothesis and will emphasize that it is one asserted to be independent of its epistemic counterpart (i.e. the scientific realist's epistemic thesis that, roughly, we can justifiably believe successful scientific theories). The axiological meta-hypothesis I advocate specifies as the end toward which scientific theorizing is directed, not merely truth per se, but instead a particular sub-class of true claims, those that are experientially concretized as true. I will then identify a set of theoretical virtues that must be achieved were this end to be achieved; these in turn become desiderata required of the pursuit of the posited end. I will also point to a set of virtues the quest for the posited end encourages or promotes, even if those virtues are not required of its achievement. After showing that my axiological meta-hypothesis both explains and justifies these crucial and agreed upon aspects of theory choice in science, I will argue that it does so better than its primary live competitors—that it fares better at living up to what both it and its competitors, themselves, demand.
I will then turn to apply this axiological meta-hypothesis to disciplines beyond “science” to demonstrate its promise as a theory of inquiry in general, with a special emphasis on the humanities. I will focus on one of the theoretical virtues as pivotal here, one closely related to the familiar notion of “likelihood,” but, more specifically, the degree to which a theoretical system implies what it explains and, in the case of axiology, justifies. After showing how the axiological meta-hypothesis I embrace can be liberated from the epistemic baggage by which it is traditionally encumbered, and after indicating the ways in which myths about the scientific method and about demarcation criteria have led us away from seeing this axiological bridge, I will illustrate the prospects for this bridge with respect to history, focusing specifically on a set of issues in the history of science. I will also show how the axiological meta-hypothesis can be used to adjudicate between metaphysical theories as well as meta-ethical theories. I will close by noting the unique justificatory position afforded by the kind of axiological turn I propose—by appealing, not to an epistemic or ontic justificatory foundation, but, instead, to one that is purely axiological.
ABSTRACT. In today's metaphysics, the debate between realists and antirealists is of central importance with respect to the truth value attributed to our best scientific theories. Scientific realism is a realism regarding whatever is described by our best scientific theories and it aims at dealing with the following questions: i) can we have compelling reasons to believe in the (approximate) truth of our scientific beliefs? ii) what are the criteria used to attribute truth value to scientific theories?
On the one hand it seems fair to admit that scientific realism is the best option to embrace in order to give a definitive answer to these questions; but on the other hand, scientific realism seems hard to defend because, unlike antirealism, it has to bear the burden of proof.
In our presentation we aim at presenting Stanford’s antirealist position put forward in his Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives (2006), and then at giving some realist replies inspired by Chakravartty (2008), Magnus (2010), Forber (2008), Godfrey-Smith (2008), Ruhmkorff (2011), and Devitt (2011).
Besides the two main antirealist arguments (i.e. the “empirical underdetermination of theories by data” (EU); and the “pessimistic induction” (PI), also labelled “pessimistic meta-induction” (PMI)) there is a new challenge offered by Stanford in 2006, which has been labelled “the problem of unconceived alternatives” (PUA) or the “new induction” over the history of science (NI), and which combines (EU) with (PMI), suggesting that:
P1) science is subject to the problem of unconceived alternatives: plausible alternatives are
not conceived, thus our choice is not the best or the true one;
P2) recurrent, transient underdetermination maintains that by looking at the history of science there were scientifically plausible alternatives to the past accepted theories that were not conceived; some of them were accepted later, while the theories actually accepted at that time were later shown to be false;
C1) (new induction): our theories are no different from past ones. Thus, by induction, there are some plausible alternatives to our current theories that are not even imagined or entertained;
C2) there are no compelling reasons to believe in our best theories.
Against this latest antirealist argument there are many realist replies, which aim at showing that Stanford’s argument is inappropriate and diverts attention from the main realist claim: the induction over scientific theories becomes, in Stanford, an induction over scientists and their cognitive abilities to exhaust the set of all plausible alternatives.
Assuming that PUA is similar to PI, we are going to show that a possible reply to the classic PI can also be used as a reply to PUA, because they are both based on a historical induction.
First, a principle of continuity can be established between the different formulations of a theory in order to see which elements of the theory are retained over its historical development. This allows one to save at least a partial version of scientific realism.
Secondly, Stanford’s argument does not work as a real argument against scientific realism because it relies on a distinction between community-level properties and individual-level properties; in fact, Stanford appeals to the cognitive limits of scientists without focusing attention on the real realist claim: the comparison between the content of our best scientific theories and the physical reality they aim at describing, explaining and predicting.
Third, the cognitive limits of past scientists would not necessarily be the limits of future theorizers, because history teaches us that science has been getting better and better. Take, for example, the physics of the last century, which has undergone exponential growth compared with the centuries before.
Finally – as Devitt suggests – to undermine the PUA challenge we can appeal to the methodological and technological improvements shown in our scientific life. In fact, this version (with respect to the classic realist reply, namely that there is a difference in the breadth, precision, novelty, or other important features of the predictive and explanatory accomplishment of past and present theories) explains why present theories are more successful and hence removes the whiff of ad-hoc-ery.
ABSTRACT. In the early twenty-first century the Estonian philosopher of science and chemistry Rein Vihalemm initiated a new account of science that he called practical realism. Vihalemm acknowledged the influence of the realist or practice-based approaches of Rom Harré, Joseph Rouse, Ilkka Niiniluoto and Sami Pihlström, but most notably of Nicholas Maxwell's criticism of modern science (Vihalemm 2011). Vihalemm's main idea is that science, although based on theories, is a practical activity that cannot be totally value free and independent of the cultural-historical context. In the course of practical research, we get access to an aspect of the world that becomes available to us through the prism of the theories we apply. This is a limited access, not a God's Eye view, but according to Vihalemm it is the real world that offers this access to the researcher. Still, something more may be necessary in order to make proper sense of science and of the progress that we can hardly deny it has made. According to Nicholas Maxwell, scientists consistently presume that the universe is comprehensible and prefer unified theories to disunified ones and simple theories to complicated ones, although the latter are often empirically more successful than the former. This means that science actually includes assumptions that cannot be empirically tested, i.e. they are metaphysical in the Popperian sense. The acknowledgement of metaphysical assumptions in science is an inherent part of Nicholas Maxwell's novel approach to science, which he calls aim-oriented empiricism (see for instance Maxwell 2017). Rein Vihalemm accepts Maxwell's critique of the prevalent common understanding of science, which the latter calls standard empiricism, although Maxwell's approach is not necessarily realist and does not emphasize the practical side of research. Vihalemm likes the normative side of Maxwell's account. The latter agrees that science cannot be and need not be value free. Vihalemm seems to be positive concerning aim-oriented empiricism as well. However, he rejects the need for acknowledging the metaphysical assumptions in science. Just the opposite is true: Vihalemm's intention is that practical realism has to be free of metaphysics. However, this puts us face to face with the problem of in what respect practical realism is actually realism and in what respect it differs from standard empiricism. A solution could be to combine practical realism with the idea of adding the metaphysical assumptions to the approach. By this move, practical realism would obtain a necessary foundation that enables us to understand why scientific research remains a systematic quest for truth and does not limit itself to just a special kind of practical activity. However, in this way we would rather get aim-oriented empiricism than practical or any other kind of realism.
Vihalemm, Rein. “Towards a Practical Realist Philosophy of Science.” Baltic Journal of European Studies, 1, no. 1 (2011): 46-60.
Maxwell, Nicholas. Understanding Scientific Progress: Aim-Oriented Empiricism. St. Paul, Minnesota: Paragon House, 2017.
Metaphysics and Physics of Consciousness as a problem of modern science
ABSTRACT. The synthesis of physics and metaphysics becomes possible if consciousness is approached as a certain cut (a level, a layer) of an informational reality.
A special question concerns the ontological status of information. The majority of researchers consider it, in the spirit of C. Shannon, as a system of distinctions and relations in an abstract logical space. In this sense information is present wherever «there is a random and remembered choice of one variant from several possible and equivalent ones» (H. Quastler). N. Wiener asserted that «information is information, not matter and not energy».
It must be recognized that no one has yet found information in a pure state, without a material carrier, though everyone agrees that the sense and value of information do not depend on the nature of the carrier. For now, however, this is the reality: science knows only physical fields and matter-energy interactions, not «information threads», «information codes», «information fields», etc.
In numerous interpretations of quantum mechanics, scientists also invoke the concept of consciousness to explain wave-particle duality and the reduction («collapse») of the wave function (the so-called «many-worlds» interpretation of quantum mechanics of H. Everett, J. Wheeler and B. DeWitt). Can we say that consciousness is an absolutely fundamental property of physical reality, one that needs to be brought in at the very most basic level? These authors appeal especially to the role of the observer in the collapse of the wave function, i.e., the collapse of quantum reality from a superposition of possible states to a single definite state when a measurement is made. Such models embrace a form of quasi-idealism, in which the very existence of physical reality depends upon its being consciously observed.
Informative cognitive events are traditionally represented in the form of modules of consciousness – thinking, sensation, memory and will. These events have correlational connections with physical fields, with brain cells and all the cellular structures of an organism, with society, and, if one takes Mach's principle into account, with the Universe as a whole. Each mental event, as a unique subjective experience, is an information-synergetic unity which is reflected, «condensed», in speech. In the language of physics it may be assumed that through «speaking» and «writing» a reduction («collapse») of the mental event to its concrete material fixation occurs (an analogy to measurement procedures in quantum mechanics).
To deepen the philosophical analysis of the problem of consciousness and to bridge the psycho-physical gap, it is necessary to comprehend the specific essence of information as the primary, initial reality, uniting the matter-energy (substance-power) carrier and the ideal-semantic reality, and to try to express it using the mathematical apparatus of theoretical physics.
It is important to underline that the metaphysical principles behind such trains of thought in all the physicists' theories considered above are: 1. The representation of an initial fundamental reality as an information field, existing and developing under certain programs (by analogy with the already well-known Internet); 2. The representation of consciousness as the higher and necessary stage of the evolution of life, realizing through mathematical structures the possibility of direct communication with the information field of the Universe.
References
Wheeler, J.A. “Information, Physics, Quantum: The Search for Links”, in W.H. Zurek (ed.), Complexity, Entropy and the Physics of Information, Addison-Wesley, 1990, pp. 370, 377.
Roger Penrose. The Emperor's New Mind: Concerning Computers, Minds, and The Laws of Physics (1989).
Floridi, L. “The Philosophy of Information: Ten Years Later”, Metaphilosophy, ed. by A.T. Marsoobian, Oxford, UK, Vol. 41, No. 3, April 2010, pp. 420–442.
Menskii, M.B. “Kontseptsiya soznaniya v kontekste kvantovoi mekhaniki” [The concept of consciousness in the context of quantum mechanics], Uspekhi fizicheskikh nauk, Moscow, 2005, Vol. 175, No. 4, pp. 414–435.
Iakovlev, V.A. “Informatsionnoe edinstvo bytiya: soznanie, zhizn', materiya” [The informational unity of being: consciousness, life, matter], NB: Elektronnyi zhurnal «Filosofskie issledovaniya», 2013, No. 10, pp. 1–57.
ABSTRACT. Despite the fact that the notion of a representation is a cornerstone of cognitive science, a definition of this central concept remains elusive. In this paper I would like to concentrate on the notion of ‘structural representation’ (or ‘S-representation’), which has become a recent focus of attention in the specialized literature. In a nutshell, a particular cognitive mechanism M is a structural representation of S iff (1) there is a homomorphism between M and S (roughly, a mapping between relations in M and relations in S) and (2) some cognitive mechanism uses this homomorphism to deal better with the environment (Cummins, 1989; O’Brien, 2015; Ramsey, 2007; Gladziejewski and Milkowski 2017). Crucially, the notion of S-representation has recently been employed to distinguish the set of genuine representations from the category of receptors, that is, those internal mechanisms that reliably correlate with certain environmental features but which, according to these authors, should not qualify as proper representations. Thus, the concept of S-representation plays a fundamental role in recent attempts to defend a form of representationalism that can escape the objection of being too liberal, i.e. attributing representations to many states that intuitively lack them.
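To make condition (1) concrete, the following is a minimal sketch (my illustration, not the paper's formalism) of a homomorphism check between a toy cognitive mechanism M and a toy target domain S; all names, states and relations are hypothetical.

```python
# A minimal sketch (my illustration, not the paper's formalism): condition (1),
# a homomorphism from a cognitive mechanism M to a target domain S, checked as a
# relation-preserving map between two small relational structures.

M_relation = {("m1", "m2"), ("m2", "m3")}              # e.g. an ordering of activation states in M
S_relation = {("near", "middle"), ("middle", "far")}   # e.g. a distance ordering in the environment S

mapping = {"m1": "near", "m2": "middle", "m3": "far"}  # candidate structure-preserving map

def is_homomorphism(mapping, rel_m, rel_s):
    """Every pair related in M must be mapped to a pair related in S."""
    return all((mapping[a], mapping[b]) in rel_s for (a, b) in rel_m)

print(is_homomorphism(mapping, M_relation, S_relation))   # True: M mirrors the structure of S
```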
The goal of this paper is to argue that the notion of S-representation cannot fulfil this theoretical role. First of all, I will argue that the above definition can be understood in at least two ways. On the one hand, if this notion is understood as merely requiring that there is a homomorphism between a cognitive mechanism M and a structure S and that M is exploited to behave appropriately, then mere receptors seem to satisfy this requirement (Morgan, 2014). A mechanism that can be in two states which reliably covary with certain world events exemplifies a homomorphism that the system employs to deal with the environment.
To avoid this result, one could provide a more restrictive interpretation of the notion of S-representation, according to which the use of the homomorphism mentioned in (2) above necessarily implies using the relations in M off-line in order to learn about S. The problem with this interpretation, however, is that it is too narrow, since processes that clearly should be classified as representations (even as ‘structural representations’, in an important sense of the term) would be excluded, such as the waggle dances produced by bees (Shea, 2014) or certain kinds of cognitive maps. In conclusion, then, a broad understanding of S-representation is too liberal, because it does not allow this notion to exclude mere receptors, while a more restrictive interpretation is too narrow, since it excludes clear cases of representations. Consequently, this notion cannot do the job it is supposed to perform.
This negative result, however, should not be taken to imply that there are no interesting differences between mere receptors and more complex forms of representation. The final part of the paper sketches some notions that, in combination, are in a better position to play this role. The suggested analysis will include the notions of ‘lexical productivity’ (the capacity to produce new meaningful tokens), structural productivity, and off-line use. I will argue that these concepts are much more useful for carving the nature of representation at its joints and that they can be employed to make fruitful and illuminating distinctions between different kinds of representational phenomena.
Intimate diary of an AIDS patient. An approximation to the "medical gaze" with Foucault and Guibert
ABSTRACT. Michel Foucault's Naissance de la clinique. Une archéologie du regard médical serves as a point of departure for examining the emergence and operation of the "medical gaze" as a criterion of truth and rationality in modern medicine, through which an objective knowledge of disease, grounded in the body, is sought. These processes of objectification and typification of diseases are of great importance in the medical sciences; the same happens in all applied sciences and also in the social and human sciences. Taking Foucault's analyses as a starting point, the objective of this work is to illustrate, through the literary activity of Hervé Guibert and his diary as an AIDS patient, the working of the "medical gaze" in the objectification of the disease.
Guibert's literary works illustrate how, in the 1980s and 1990s, a whole series of resources, efforts, and people were deployed in the process of objectifying AIDS as a disease. At that time, the nature and functioning of this disease were unknown, and AIDS was surrounded by the most extravagant rumors. It therefore became necessary to establish and group symptoms in order to obtain objective knowledge with which to identify the disease and to make the diagnosis and prognosis on which a medical treatment could be based. The process of the objectification of AIDS as a disease thus becomes a significant paradigm for analyzing, from the perspective of bioethics and biopolitics, the procedure of the "medical gaze". The study of specific biographical cases, such as the one analyzed in this paper through the works of Guibert, is an essential source of relevant evidence. This analysis allows us to address questions such as the inhuman and instrumental doctor/patient relationship, the objectification and stigmatization of the homosexual body, and social panic.
Bibliography:
Demme, Jonathan (1993) : Philadelphia. Film.
Foucault, Michel (1971) : L’Ordre du discours, Paris, Gallimard.
Foucault, Michel (1963) : Naissance de la clinique. Une archéologie du regard médical, Paris, Presses Universitaires de France.
Foucault, Michel (1976) : Histoire de la sexualité, Vol.1 : La volonté de savoir, Paris, Gallimard.
Guibert, Hervé (1990) : À l’ami qui ne m’a pas sauvé la vie, France, Gallimard.
Guibert, Hervé (2013) : Cytomégalovirus, France, Le Seuil.
Guibert, Hervé (1992) : La pudeur ou l’impudeur. Film.
Guibert, Hervé (1991) : Le Protocole compassionnel, France, Gallimard.
Fernández Agis, Domingo, (2015), “La mirada médica. Revisitando la interpretación de Michel Foucault”, Anales Médicos, Vol.60, nº 4, pp. 306-310.
Fortanet, Joaquin (2015), “La mirada clínica en el análisis arqueológico de Foucault”, Boletín Millares Carlo, nº31, pp. 78-100.
ABSTRACT. Food is a socio-cultural carrier of value and a philosophically important source of meaning that becomes ever more complex and fraught as the end of life approaches. This paper examines the intersection of food as a locus of meaning, identity, belonging, and community from the perspective of individuals whose experience of food is shaped by terminal illness (Andrews, 2008; Telfer, 2012). Specifically, I discuss how food, at the end of life, becomes instrumentalized as pure sustenance and how its medicalization can lead to changes in family dynamics, reflect the deprivation of sources of comfort, pleasure, and identity, and produce a profound sense of social loss.
Food, in this context, can reflect an embodied breakdown in connection, as a result of the breakdown in bodily autonomy, but also potentially serve as a way to exert agency against further treatment (Quill et al, 2000). It can also, however, in the last phase of life, be used to revive sources of pleasure, known as ‘pleasure feeds,’ where food reasserts itself as a source of enjoyment often with the participation of the family and friends in eating with the individual or helping them to do so.
In this talk, I draw on contemporary work on the philosophy of food, food and pleasure, food and sociality, and food and identity, as well as the sociology of medicine, using a distinctly feminist lens, to make the case that this understudied subject can provide rich and significant insights into the role food plays in our contemporary social milieu and how its loss and/or transformation can be dealt with at the end of life. In doing so I draw on the historical work of scholars who engage with the examination of hedonism, pleasure, virtue, sociality and food systems (Symons, 2007; Toussaint-Samat, 2009) as well as more contemporary studies of food that examine conceptions of liberal subjectivity, agency, medicalization, and community (Watson, 2018; Coveney, 2006).
Andrews, G. (2008). The slow food story: Politics and pleasure. Pluto Press.
Coveney, J. (2006). Food, morals and meaning: The pleasure and anxiety of eating. Routledge.
Symons, M. (2007). Epicurus, the foodies’ philosopher. Food and Philosophy: Eat, Think and Be Merry. Blackwell Publishing Ltd.
Toussaint-Samat, M. (2009). A history of food. John Wiley & Sons.
Telfer, E. (2012). Food for thought: Philosophy and food. Routledge.
Quill, T. E., et al (2000). Responding to intractable terminal suffering: the role of terminal sedation and voluntary refusal of food and fluids. Annals of Internal Medicine, 132(5), 408-414.
ABSTRACT. The account of delusions that is standard within psychiatry is doxastic; it sees delusions as pathological beliefs. As such, it resembles familiar projects in philosophy of science and epistemology that are concerned with justification. Where the philosophical tradition has sought for the properties that legitimate beliefs, theorists within psychiatry try to find that extra property of a false belief that converts it from a false belief into a delusion. There should be something that not only fails to legitimate a belief but also shows it to be pathological. It is also current wisdom in psychiatry that delusions are part of psychotic conditions that can be found everywhere in the world. One way in which that could be true is if there exists a universal human epistemology, part of our shared cross-cultural psychology that prescribes and polices epistemic norms.
Turri (forthcoming) has argued that folk epistemology is a human universal, built around a “core primate knowledge concept” that we have inherited from our simian ancestors. Turri argues, on the basis of comparative psychological evidence, that this core knowledge concept is that of “truth detection (across different sensory modalities) and retention (through memory) and may also include rudimentary forms of indirect truth discovery through inference. In virtue of their evolutionary heritage, humans inherited the primate social-cognitive system and thus share this core knowledge concept”. He concludes that “humans possess a species-typical knowledge concept”. Turri also appeals to experimental philosophy, adducing results (e.g. Machery et al. 2015) that suggest the existence of cross-cultural agreement on knowledge judgements. Mercier and Sperber (2017) also argue that the faculty of reason is a human universal.
If Turri is right, there is a universal knowledge concept common to members of all human communities in virtue of their shared primate ancestry. If that’s true we can expect the attribution of delusion to be very widely shared. The conjecture would be that every human group is made up of people who inherited the primate knowledge concept and that they have an interest in policing it. Concepts mean norms, and norms mean monitors. People who do not detect and transmit the truth might be corrected or at least marked out by the rest of the group, and perhaps incorrigible failures would be graded as pathological. So, if there is a shared human knowledge concept then we should expect to see a shared folk epistemology, and a shared interest in the people who fail to respect it.
I agree that some epistemic systems are likely to be universal parts of our heritage, potentially going wrong in ways that strike observers as deviant. However, the specific concept of belief that is tied to textbook accounts of delusion is very likely a function of particular patterns of cognitive development in modern societies, tied to culturally recent and local ways of legitimating belief.
Delusions are universal in the sense that they can occur anywhere but they are not universal in the sense of arising from a shared, evolved folk psychology. Psychiatry needs to widen its repertoire of concepts dealing with pathological epistemic states.
Christian J. Feldbacher-Escamilla (Duesseldorf Center for Logic and Philosophy of Science (DCLPS), University of Duesseldorf, Germany)
Simplicity in Abductive Inference
ABSTRACT. Abductive inference is often understood as an inference to the best explanation, where an explanation is better than another if it makes the evidence more plausible and is simpler. The main idea is that, given a set of alternative hypotheses {H_1,...,H_n} to explain some phenomenon E, one ought to choose the H_i which has the highest likelihood Pr(E|H_i) and is simplest/least complex in comparison to the other hypotheses.
It is clear what the epistemic value of making evidence plausible consists in. E.g., if one can establish even a deductive relation between H and E (as suggested by the DN-model of explanation), then the likelihood is maximal; if one cannot establish such a relation, then whatever comes close to it or approximates it better is epistemically valuable. However, regarding simplicity, it is debatable whether it bears epistemic value or not. According to the approach to simplicity of Forster & Sober (1994), one can spell out the truth-aptness of simplicity via constraints put forward in the curve-fitting literature, which are directed against overfitting erroneous data. There, simplicity is measured via the number of parameters of a model. However, it remains open how the notion of simplicity spelled out in these terms relates to the notion of simplicity as it is often used in abductive inferences, namely as the number of axioms or laws used in an explanation. In this talk we show how the latter notion is related to the former with the help of structural equations. By applying an idea of Forster & Sober (1994) we show how probabilistic axioms or laws can be reformulated as structural equations; these in turn can be used to assign numbers of parameters to such axioms or laws, and hence allow for applying established complexity measures which simply count the number of parameters. By this, one can provide an exact translation manual from the "number of parameters approach" to the "number of axioms and laws approach"; this can be employed, e.g., in transferring the epistemic value of simplicity granted for the former domain to the latter one.
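For orientation, the following is a minimal sketch of the curve-fitting setting that Forster & Sober (1994) build on, where simplicity is counted as the number of adjustable parameters and traded off against likelihood via Akaike's criterion; the data, models, and scoring choices here are my own illustrative assumptions, not the authors' formalism.

```python
import numpy as np

# A minimal sketch (illustrative assumptions, not the authors' formalism):
# simplicity is the number of adjustable parameters k, and hypotheses are scored
# by trading fit (log-likelihood) against simplicity, as in Akaike's criterion
# AIC = 2k - 2 * log-likelihood (lower is better).

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)   # data generated by a simple (linear) law

def aic_for_polynomial(degree):
    """Fit a polynomial of the given degree and return its AIC score."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = residuals.var()                 # maximum-likelihood estimate of the noise variance
    k = degree + 2                           # polynomial coefficients plus the noise variance
    log_likelihood = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * log_likelihood

for d in (1, 2, 4):
    print(f"degree {d}: AIC = {aic_for_polynomial(d):.2f}")
# More complex polynomials fit the sample at least as well, but the parameter
# penalty typically favors the simple (linear) hypothesis.
```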
ABSTRACT. Mathematicians often use the term ‘deep’ to signal work of exceptional quality or importance. Recently, Maddy has prompted discussion of this concept in response to her claim that there are objective facts of mathematical depth that guide proper axiom choice. While various ways of understanding the concept of depth appear subjective in nature, there is some promise that mathematical explanation is associated with a kind of objective depth. In this paper, I develop this idea by discussing how mathematical explanation arises not only in the form of explanatory proof, but more generally in mathematical theory, and note several mathematical programs that may be thought of as explanatory. I then focus on Friedman’s project of Reverse Mathematics, which is shown to exhibit some very general explanatory features that are common to both science and mathematics. The framework for Reverse Mathematics is used to illustrate the concept of an explanatory basis, which in turn is used to further develop the idea that explanatory power constitutes a type of mathematical depth that possesses a number of the significant features that have previously been suggested as markers of depth.
Non-causal Explanations in Quantitative Linguistics
ABSTRACT.
For most linguists, there is a general concurrence that there are only two major plausible types of explanation in linguistics – formal explanation, typical of generativism, and, broadly speaking, functional explanation, typical not only of cognitive linguistics (for current debates see Newmeyer, 2017, Egré, 2015, Haspelmath, 2004). Here, formal explanation is typically derived from the internal structure, namely the system, of the grammar of the language, and functional explanation is mainly derived from the external non-linguistic, and so non-systemic, needs of speakers.
Although both of these explanations are considered to be non-causal, only the functional one is of interest for us, because we agree with Givón's criticism (1979) as to the non-predictiveness, and thus mere descriptiveness, of formal explanation. Functional explanation in its more specific form assumes its role in quantitative linguistics (QL, see Köhler 2012 and 1986).
In the philosophy of science there is still a lively debate concerning the possibility of the existence of separate non-causal models of explanation (for current debates see Reutlinger, 2018, Kostic, 2016, Huneman, 2010, etc.). Accordingly, it is crucial to study the usefulness of non-causal explanations for conceptualizing explanatory practices in the social sciences and humanities.
QL is a major useful tool for this. Accordingly the goals of this paper are:
First, to uphold the non-causality of functional explanation used in QL.
Second, to use another non-causal type of explanation – topological explanation (see Huneman, 2010) – for QL.
Third, to decide which of these two types of non-causal explanation is more productive for QL.
References:
Egré, P. (2015) Explanation in Linguistics, Philosophy Compass 10: 7, 451–462.
Givón, T. (1979) On Understanding Grammar, New York: Academic Press.
Haspelmath, M. (2004) Does Linguistic Explanation Presuppose Linguistic Description? Studies in Language 28: 3, 554–579.
Huneman, P. (2010) Topological Explanations and Robustness in Biological Sciences, Synthese 177: 2, 213–245.
Köhler, R. (2012) Quantitative Syntax Analysis, Berlin: De Gruyter Mouton.
Köhler, R. (1986) Zur linguistischen Synergetik: Struktur und Dynamik der Lexik, Bochum: Studienverlag Dr. N. Brockmeyer.
Kostic, D. (2016) Mechanistic and Topological Explanations: an introduction, Synthese 195: 1, 1–10.
Newmeyer, F. J. (2017) Formal and Functional Explanation, Roberts, I. (ed.) The Oxford Handbook of Universal Grammar, Oxford: Oxford University Press, 129–152.
Reutlinger, A., Saatsi, J. (2018) Explanation Beyond Causation. Philosophical Perspectives on Non-Causal Explanations, Oxford: Oxford University Press.
Kevin Kelly (Carnegie Mellon University, United States) Hanti Lin (University of California, Davis, United States) Konstantin Genin (University of Toronto, Canada) Jack Parker (Carnegie Mellon University, United States)
A Learning-Theoretic Argument for Scientific Realism
ABSTRACT. Scientific realism has long been tied to no-miracle arguments. Two different measurement technologies would not miraculously generate the same spurious result (Hacking 1981). If there were absolute motion, the moving magnet would not miraculously produce the same current as the moving coil (Einstein 1905). Independent mechanisms in the complete causal story would not miraculously cancel---else there is a meta-cause of the cancelation missing from the story (Spirtes et al. 1993). Other celebrated examples include Copernicus' argument against epicycles, Newton's argument against celestial magnetism, Darwin's argument against special creation, Lavoisier's argument against phlogiston, and Fresnel's argument against Newton's particle theory of light.
In each of those examples, there are two competing theories, a simple one and a complex one, and a miraculous tuning of the complex causes is capable of generating simple observations for eternity. Anti-realists (van Fraassen 1980) ask how a bias in favor of the simple theory could be truth-conducive, since one would receive exactly the same data for eternity regardless whether the truth is simple or miraculous.
A pertinent response is that favoring the simple theory until it is refuted fails to converge to the truth only over the set of all miraculous worlds, which is negligible given both the simple theory and given the complex theory, whereas favoring the complex theory no matter what fails in all simple possibilities, which is non-negligible given the simple theory.
But a gap remains in that argument, because one can still favor the complex theory over a refutable set of miraculous worlds, for which the empirically equivalent simple worlds are negligible given the simple theory. The failure set for that method is still negligible given both theories, so the ball is still in the realist's court.
But not all convergence to the truth is equal. Plato held that one finds the truth better if one finds it indefeasibly. We show, by a novel, topological argument, that any method that (1) favors even a single miraculous world and (2) has a failure set that is negligible given both the simple and the complex theory, drops the truth in some complex world. Thus, the Pyrrhic reward for favoring complex worlds indistinguishable from simple worlds is either to fail non-negligibly given some theory or to drop the truth in a complex world. In contrast, the realist's favored Ockham method that always favors simplicity fails only negligibly given either theory, and never drops the truth. The argument is extendible to statistical inference. It provides a new foundational perspective on theoretical science in general, and on recent causal discovery algorithms in particular.
REFERENCES
Einstein, A. (1905) “Zur Elektrodynamik bewegter Körper”, Annalen der Physik, 17: 891-921.
Hacking, I. (1981) “Do We See Through a Microscope?” Pacific Philosophical Quarterly, 64: 305-322.
Spirtes, P., Glymour C., and Scheines, R. (1993) Causation, Prediction, and Search, New York: Springer.
van Fraassen, B. (1980) The Scientific Image, Oxford: Oxford University Press.
ABSTRACT. It is a platitude that the relationship between a sign and the meaning it represents is arbitrary and based on convention. W. V. Quine (1936), however, argued that this notion involves a vicious circle: conventions depend on agreements, and, in order to make an agreement, we have to be able to communicate with each other through some kind of primary sign system, while the emergence of sign systems is the very thing we want to explain.
David Lewis, whose PhD was advised by Quine, challenged Quine's argument in his dissertation (1969). He argued that conventions emerge from social interactions between different agents and formalized the process as signalling games. Signalling systems, in which information is communicated from the sender to the receiver, are strict Nash equilibria of a signalling game. The problem of the emergence of meaning then becomes the problem of how to converge to and maintain strict Nash equilibria in signalling games. The solutions provided by Lewis are common knowledge and salience. However, Brian Skyrms (1996, 2004) argues that Lewis' solution cannot escape Quine's critique. Instead, Skyrms proposes an evolutionary dynamic approach to the problem. The dynamic analysis of signalling games shows that signalling systems spontaneously emerge in the interactions between senders and receivers; common knowledge and salience are unnecessary.
Many philosophers believe that Lewis-Skyrms signalling game theory brings fundamentally new insights to questions concerning the explanation of meaning. As a result, studies of signalling games have flourished in recent years. Nevertheless, this paper does not intend to discuss the technical problems of signalling games but rather their epistemic aspect. The question the paper discusses is whether the selection and maintenance of a strict Nash equilibrium in a signalling game amount to the establishment of a signalling system.
In the case of a signalling game at a strict Nash equilibrium, the receiver performs the act proper to the state the sender perceives; in other words, the act correctly maps the state causally. According to the evolutionary approach to signalling games, strict Nash equilibria in a signalling game are equated with signalling systems. That is to say, when the causal relationship between act and state is established, a signalling system emerges. However, the causal relationship can be established without a signalling system in cases of mutual misunderstanding. There may be two orders of mutual misunderstanding in signalling games: the first order between senders and receivers, and the second order between the observer and the signalling game s/he observes.
Mutual misunderstanding results from the absence of common knowledge between the sender and the receiver in a signalling game, and between the observer of a game and the players in the game. Therefore, signalling games under evolutionary dynamics are insufficient grounds for rejecting common knowledge. The source of mutual misunderstanding is a long-standing confusion in information studies: confusing signal sequences and the pragmatic effects of information with informational content. In the case of signalling games, philosophers take the success conditions of acts as the success conditions of communication, while the mutual misunderstanding argument shows that they are different. In order to avoid mutual misunderstanding without appealing to common knowledge in signalling games, the distinction between signals, informational content and acts should first be made clear.
The configuration of signalling games and the evolutionary approach to them are introduced in section 2. Section 3 analyzes a Chinese folk story, Magical Fight, as an exemplar of mutual misunderstanding. The story shows that although the interactions between the two players in the magical fight are successful both for the players and for the audience, there is no effective communication between the players, because they share no common knowledge. Section 4 argues that the magical fight is a signalling game in which two orders of mutual misunderstanding occur. Possible objections to the mutual misunderstanding argument are considered in section 5. Section 6 investigates the source of mutual misunderstanding in studies of signalling games: possible mismatches between signals, informational content and acts.
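For readers unfamiliar with the dynamics Skyrms appeals to, here is a minimal sketch (my illustration, not the paper's model) of Roth-Erev-style reinforcement learning in a two-state, two-signal, two-act Lewis signalling game; the initial weights, payoffs and number of rounds are illustrative assumptions.

```python
import random

# A minimal sketch (my illustration, not the paper's model): reinforcement
# learning in a two-state / two-signal / two-act Lewis signalling game.

STATES = SIGNALS = ACTS = (0, 1)
sender_urns = {s: {sig: 1.0 for sig in SIGNALS} for s in STATES}     # state -> signal propensities
receiver_urns = {sig: {a: 1.0 for a in ACTS} for sig in SIGNALS}     # signal -> act propensities

def draw(weights):
    """Draw a key with probability proportional to its weight."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

random.seed(1)
for _ in range(20000):
    state = random.choice(STATES)
    signal = draw(sender_urns[state])
    act = draw(receiver_urns[signal])
    if act == state:                      # successful coordination reinforces both choices
        sender_urns[state][signal] += 1.0
        receiver_urns[signal][act] += 1.0

# After many rounds the propensities typically concentrate on one of the two
# signalling systems (strict Nash equilibria), with no appeal to common
# knowledge or salience.
print(sender_urns)
print(receiver_urns)
```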
12:00
Eleonora Cresto (Universidad Torcuato Di Tella/ CONICET, Argentina)
A Constructivist Application of the Condorcet Jury Theorem
ABSTRACT. The Condorcet Jury Theorem (CJT) tells us (roughly) that a group deciding on a yes or no issue by majority voting will be more reliable than any of its members, and will be virtually infallible when the number of members tends to infinity, provided a couple of conditions on individual reliability and independence are in place. Normally, the CJT presupposes the existence of some objective fact of the matter (or perhaps moral fact) F, whose truth (or desirability) does not depend on the method used to aggregate different opinions on whether F holds/ should hold. Thus, the CJT has been vindicated by advocates of epistemic democracy (with some caveats), while this move has typically been unavailable to authors with sympathies for proceduralist or constructivist accounts.
In this paper I suggest an application of the CJT in which the truth/correctness of F is a direct result of the act of voting. To this effect I consider an n-person generalization of the stag hunt game, in which a stag is hunted only if a majority of hunters choose stag. I show how to reinterpret the independence and competence conditions of the CJT to fit the example, and how to assess the import of the infallibility result in the present context of discussion. As a result, we are able to identify both a selfish and a cooperative instance of the theorem, which helps us draw some morals on what we may call ‘epistemic optimism’. More generally, the proposal shows that we can establish links between epistemic and purely procedural conceptions of voting; this, in turn, can lead to novel ways of understanding the relation between epistemic and procedural democracy.
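As background, the quantitative core of the CJT can be stated in a few lines; the following minimal sketch (illustrative only, not part of the paper) computes the probability that a simple majority of n independent voters, each correct with probability p > 1/2, is correct.

```python
from math import comb

# A minimal sketch (illustrative only, not part of the paper): with n independent
# voters, each correct with probability p > 1/2, the probability that a simple
# majority is correct increases with n and tends to 1.

def majority_reliability(n, p):
    """Probability that a strict majority of n independent voters is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_reliability(n, 0.55), 4))
# With odd n there are no ties; the group reliability climbs toward 1 as n grows.
```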
Jared Neumann (Indiana University - Bloomington, United States)
Deductive Savages: The Oxford Noetics on Logic and Scientific Method
ABSTRACT. In 1832, William Whewell reminded his friend, the political economist Richard Jones, that “if any truth is to be got at, pluck it when it has grown ripe, and do not, like the deductive savages, cut down the tree to get at it.” [1] He preferred a cautious ascent from particular facts to general propositions over the reckless anticipations of people like the Oxford Noetics. To Whewell, this was more than an epistemic preference; it was a moral one. [2] Progress in all fields should be slow and sure or it could potentially lead down atheistic, materialistic, even revolutionary paths.
A few major figures comprised the Noetics, including Edward Copleston, Nassau Senior, and Richard Whately. Despite Whewell’s conflation of their methods with the more radical political economists like David Ricardo, Jeremy Bentham, and James Mill, the foundation of their own programme was to champion Anglican theology on the grounds of its rationality. As part of this programme, Copleston engaged with Scottish scholars who considered the Oxford curriculum backward; Senior accepted a position as the first Drummond Professor of Political Economy; and Whately published his inordinately popular Elements of Logic (1826) and accepted a position as the second Drummond Professor. Most significantly for my paper, they revitalized syllogistic logic and divorced political economy from its “subversive” connotations. [3]
The Noetics were influential in a number of aspects of Victorian logic, philosophy, theology, and science. Yet their programme has gone underappreciated. First of all, Whewell's depiction of them as “deductivists” is not exactly right. Christophe Depoortere has already shown this in the case of Nassau Senior in the context of political economy, but I intend to do the same for the general programme. Second, their revitalization of syllogistic logic has been misinterpreted as a revitalization of scholastic logic, despite their harsh criticisms of the scholastics. [4] Still other aspects of their programme have been left virtually unexplored, like Whately's notion of induction, or the relationship between logical and physical discovery.
In this paper, I will provide a more sympathetic account of the Noetic movement in the context of its positions on logic and scientific method. I will focus mostly on Whately, though Copleston and Senior will have their important parts to play. Instead of interpreting their scientific method as deductivist, I will show that they did not believe there was a single method suitable for all sciences; rather, they believed that deduction and induction (in Whewell’s sense) played variable roles in each according to their nature.
[1] William Whewell to Richard Jones, Feb. 19, 1832, in Isaac Todhunter, William Whewell: An Account of His Writings, Vol. II, London: Macmillan and Co, 1876: 140.
[2] William Whewell, Astronomy and General Physics, 2nd edition, London: William Pickering, 1833: esp. Book III, Chapter V and VI.
[3] Richard Whately, Review of Nassau Senior, An Introductory Lecture on Political Economy, in The Edinburgh Review 48 (1827): 171.
[4] See, for example: Marie-Jose Durand-Richard, “Logic versus Algebra: English Debates and Boole’s Mediation,” in A Boole Anthology, edited by James Gasser, Dordrecht: Kluwer Academic Publishers, 2000: 145.
11:30
Pascale Roure (Bergische Universität Wuppertal / Istanbul Üniversitesi, Germany)
Logical Empiricism in Exile. Hans Reichenbach's Research and Teaching Activities at Istanbul University (1933–1938)
ABSTRACT. The purpose of this paper is to shed new light on a less well-known stage in the development of Hans Reichenbach's thought, namely his research, output and teaching activities at Istanbul University (1933–1938). During his exile in Turkey, Reichenbach was able to continue his efforts to popularize and extend the program of "scientific philosophy," not only through the restructuring of the Department of Philosophy at Istanbul University, but also through various academic exchanges with European countries. Between the beginning and end of Reichenbach's exile in Istanbul, the Turkish reception of logical empiricism and scientific philosophy shifted from a merely external interest in these fields to an effective implementation of the principles and methods that characterize Reichenbach's philosophical approach. The aim of this paper is to show that Reichenbach's impact was not limited to a unilateral transfer of knowledge but led to active participation by the Turkish side in establishing a link between philosophy and particular scientific disciplines (Einzelwissenschaften) at Istanbul University. Therefore, the consideration of Reichenbach's output between 1933 and 1938 must be complemented by a study of the courses he gave at Istanbul University as well as of the work of his students, most of which was completed only after Reichenbach's departure to the United States. His students' under-researched contribution to the development of this scientific philosophy may seem to have quickly faded away at the philosophy department of Istanbul University, but one hypothesis I will examine is whether its more durable impact did not in fact occur in other fields, for example the development of experimental psychology at the same university.
Secondary Literature
Danneberg, L., Kamlah, A., & Schäfer, L. (Eds.). (1994). Hans Reichenbach und die Berliner Gruppe. Braunschweig-Wiesbaden: Vieweg.
Irzik, G. (2011). Hans Reichenbach in Istanbul. Synthese 181/1 (2011), Springer, pp. 157–180.
Örs, Y. (2006). Hans Reichenbach and Logical Empiricism in Turkey, in: Cambridge and Vienna: Frank P. Ramsey and the Vienna Circle, Maria C. Galavotti (Ed.), Springer, pp. 189–212.
Stadler, F. (1993). Scientific philosophy: Origins and developments. Dordrecht, Boston, London: Kluwer.
Stadler, F (2011). The road to “Experience and Prediction” from within: Hans Reichenbach’s scientific correspondence from Berlin to Istanbul. Synthese 181/1 (2011), Springer, pp. 137–155.
12:00
Elena Sinelnikova (St. Petersburg Branch of S.I. Vavilov Institute for the History of Science and Technology, Russian Academy of Sciences, Russia) Vladimir Sobolev (St. Petersburg Branch of S.I. Vavilov Institute for the History of Science and Technology, Russian Academy of Sciences, Russia)
ABSTRACT. The multidimensional model of science created by the Russian philosopher Vladimir Nikolayevich Ivanovsky at the beginning of the 1920s covered the social, psychological, logical-methodological and ideological aspects of science, took into account the variety of types, methods and contents of the various sciences, and associated their development both with internal factors and with interaction with other sciences and other areas of culture. Ivanovsky presented his conception in the “Methodological Introduction to Science and Philosophy” (1923). Unfortunately, he and his works are currently unknown not only to the world scientific community, but also to Russian philosophers and historians of philosophy.
At the beginning of the twentieth century, however, V.N. Ivanovsky was well known in the international academic community. He participated in the 1st International Philosophical Congress in Paris and represented Russia in the Bureau of Philosophical Congresses until the October Revolution. He studied at Moscow University and then in the main European scientific centers (Berlin, London, Oxford, and Paris). Ivanovsky taught philosophy, psychology, and the history of pedagogy at the universities of Kazan, Moscow, Samara, and Minsk. He was also secretary of the journal “Voprosy filosofii i psikhologii” (Questions of Philosophy and Psychology) in 1893-1896 and of the Moscow Psychological Society in 1893-1900.
The multidimensional model of science by V.N. Ivanovsky encompassed the socio-psychological, logical-methodological, and philosophical aspects of science. The socio-psychological aspects of science are due to the fact that living conditions affect the content of knowledge: they give the sciences a stock of experience and provide analogies suitable for explaining the unknown and for formulating hypotheses. The recognition of a scientist's work in a particular era depends on “life”, because a scientist always risks getting ahead of his time, being misunderstood, not appreciated, not recognized by contemporaries. Ivanovsky emphasizes that science is a system of views that is not only proven, but also recognized as true by many people.
Ivanovsky drew attention to the importance of psychological preferences and traditions in the development of social and natural sciences, the role of the scientific community.
Considering the influence of social and psychological factors on the form and content of scientific knowledge, V.N. Ivanovsky defined their social role and importance in opposition to the manifestations of Soviet Marxism in the vulgar “class approach” to science of the post-revolutionary period.
Describing science as a multidimensional system, V.N. Ivanovsky expresses a number of ideas relating to the psychology of knowledge and creativity. He wrote about the “theoretical instinct of curiosity” as the psychological basis of the pursuit of truth.
Logic, for V.N. Ivanovsky, is the crucible through which thought must pass in order to become science. The requirements of proof, truth and significance depersonalize all achievements of thought and tear them away from their subjective roots and motives. A researcher may be driven by purely theoretical curiosity or by motivations of a socio-ethical, religious, aesthetic, or other nature. The “logical test” ensures the objectivity of science; V.N. Ivanovsky interpreted it quite broadly, including under it the procedures of confirmation by experience, i.e. empirical verification.
Systematicity is a necessary property of science, distinguishing it from a collection of disparate information. For Ivanovsky, science is always based on certain principles and general prerequisites. Information becomes scientific when it is included in a logical whole.
He stressed that not all the results of science are, or must be, put to practical use. Speaking against a narrow practicality with respect to science, V.N. Ivanovsky argued that the social value of genuine scientific creativity is immense.
The scale, depth and clarity of the concept of science developed by V.N. Ivanovsky put it on a par with the achievements of postpositivism.
References
Ivanovsky V.N., Methodological Introduction to Science and Philosophy (Minsk: Beltrestpechat, 1923).
Acknowledgment:
The reported study was funded by RFBR according to the research project № 17-33-00003.
Thomas Piecha (University of Tübingen, Department of Computer Science, Germany) Peter Schroeder-Heister (University of Tübingen, Department of Computer Science, Germany)
Abstract semantic conditions and the incompleteness of intuitionistic propositional logic with respect to proof-theoretic semantics
ABSTRACT. In [1] it was shown that intuitionistic propositional logic is semantically incomplete for certain notions of proof-theoretic validity. This questioned a claim by Prawitz, who was the first to propose a proof-theoretic notion of validity, and claimed completeness for it [3, 4]. In this talk we put these and related results into a more general context [2]. We consider the calculus of intuitionistic propositional logic (IPC) and formulate five abstract semantic conditions for proof-theoretic validity which every proof-theoretic semantics is supposed to satisfy. We then consider several more specific conditions under which IPC turns out to be semantically incomplete.
In validity-based proof-theoretic semantics, one normally considers the validity of atomic formulas to be determined by an atomic system S. This atomic system corresponds to what in truth-theoretic semantics is a structure. Via semantical clauses for the connectives, an atomic base then inductively determines the validity of formulas with respect to S, called 'S-validity' for short, as well as a consequence relation between sets of formulas and single formulas. We completely leave open the nature of S and just assume that a nonempty finite or infinite set of entities called 'bases' is given, to which S belongs. We furthermore assume that for each base S in such a set a consequence relation is given. The relation of universal or logical consequence is, as usual, understood as transmitting S-validity from the antecedents to the consequent. We propose abstract semantic conditions which are so general that they cover most semantic approaches, even classical truth-theoretic semantics. We then show that if in addition certain more special conditions are assumed, IPC fails to be complete. Here a crucial role is played by the generalized disjunction principle. Several concrete notions of proof-theoretic validity are considered, and it is shown which of the conditions rendering IPC incomplete they meet.
From the point of view of proof-theoretic semantics, intuitionistic logic has always been considered the main alternative to classical logic. However, in view of the results to be discussed in this talk, intuitionistic logic does not capture basic ideas of proof-theoretic semantics. Given the fact that a semantics should be primary over a syntactic specification of a logic, we observe that intuitionistic logic falls short of what is valid according to proof-theoretic semantics. The incompleteness of intuitionistic logic with respect to such a semantics therefore raises the question of whether there is an intermediate logic between intuitionistic and classical logic which is complete with respect to it.
References
[1] Piecha, Thomas, Wagner de Campos Sanz, and Peter Schroeder-Heister, 'Failure of completeness in proof-theoretic semantics', Journal of Philosophical Logic, 44 (2015), 321–335. https://doi.org/10.1007/s10992-014-9322-x.
[2] Piecha, Thomas and Peter Schroeder-Heister, Incompleteness of intuitionistic logic with respect to proof-theoretic semantics, Studia Logica 107 (2019). Available online at https://doi.org/10.1007/s11225-018-9823-7 and via Springer Nature SharedIt at https://rdcu.be/5dDs.
[3] Prawitz, Dag, 'Towards a foundation of a general proof theory', in P. Suppes et al. (eds), Logic, Methodology and Philosophy of Science IV, North-Holland, Amsterdam, 1973, pp. 225–250.
[4] Prawitz, Dag, 'An approach to general proof theory and a conjecture of a kind of completeness of intuitionistic logic revisited', in L. C. Pereira, E. H. Haeusler, and V. de Paiva (eds), Advances in Natural Deduction, Springer 2014, pp. 269–279.
11:30
Yaroslav Shramko (Kryvyi Rih State Pedagogical University, Ukraine)
First-degree entailment and structural reasoning
ABSTRACT. In my talk I will show how the logic of first-degree entailment of Anderson and Belnap, Efde, can be represented by a variety of deductively equivalent binary (Fmla-Fmla) consequence systems, up to a system with transitivity as the only inference rule. Some possible extensions of these systems, such as "exactly true logic" and "non-falsity logic", are briefly considered as well.
System Efde:
a1. A & B |- A
a2. A & B |- B
a3. A |- A v B
a4. B |- A v B
a5. A & (B v C) |- (A & B) v C
a6. A |- ~~A
a7. ~~A |- A
r1. A |- B, B |- C / A |- C
r2. A |- B, A |- C / A |- B & C
r3. A |- C, B |- C / A v B |- C
r4. A |- B / ~B |- ~A
System Rfde is obtained from Efde by replacing the contraposition rule r4 with the following De Morgan laws taken as axioms:
dm1. ~(A v B) |- ~A & ~B
dm2. ~A & ~B |- ~(A v B)
dm3. ~(A & B) |- ~A v ~B
dm4. ~A v ~B |- ~(A & B)
Efde and Rfde are deductively equivalent. Yet, the latter system has less derivable rules, and allows thus certain non-classical (and non-trivial) extensions, which are impossible with Efde.
Consider the following set of consequences:
(dco) A v B |- B v A
(did) A v A |- A
(das) A v (B v C) |- (A v B) v C
(dis2) A v (B & C) |- (A v B) & (A v C)
(dis3) (A v B) & (A v C) |- A v (B & C)
(dni) A v B |- ~~A v B
(dne) ~~A v B |- A v B
(ddm1) ~(A v B) v C |- (~A & ~B) v C
(ddm2) (~A & ~B) v C |- ~(A v B) v C
(ddm3) ~(A & B) v C |- (~A v ~B) v C
(ddm4) (~A v ~B) v C |- ~(A & B) v C
We obtain a system of first-degree entailment with conjunction introduction as the only logical inference rule (together with the structural rule of transitivity), FDE(ci), by adding this list to a1-a3 and taking r1 and r2 as the inference rules.
Lemma1. Systems Efde, Rfde and FDE(ci) are all deductively equivalent.
Since the rule of disjunction elimination is not derivable in FDE(ci), it allows for some interesting extensions not attainable on the basis of Rfde. In particular, a Fmla-Fmla version of "exactly true logic" by Pietz and Rivieccio can be obtained as a straightforward extension of FDE(ci) by the following axiom (disjunctive syllogism):
(ds) ~A & (A v B) |- B.
A duality between the rules of conjunction introduction and disjunction elimination suggests a construction of another version of the logic of first-degree entailment FDE(de) with only one logical inference rule (accompanied by transitivity), but now for disjunction elimination. This system is obtained from FDE(ci) by a direct dualisation of all its axioms and rules.
FDE(de) is indeed an adequate formalization of first-degree entailment, as the following lemma shows:
Lemma 2. Systems Efde, Rfde, FDE(ci) and FDE(de) are all deductively equivalent.
Absence of conjunction introduction among the initial inference rules of FDE(de) enables the possibility of extending it in a different direction as compared with FDE(ci). Namely, it is possible to formalize Fmla-Fmla entailment relation as based on the set {T, B, N} of designated truth values, preserving thus any truth value among the four Belnapian ones, except the bare falsehood F. One obtains the corresponding binary consequence system of non-falsity logic NFL1 by extending FDE(de) with the following axiom (dual disjunctive syllogism):
(dds) A |- ~B v (B & A).
Another formalization of the first-degree entailment logic with transitivity as the only (structural) inference rule can be obtained by a straightforward combination of FDE(ci) and FDE(de).
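As a semantic cross-check on the systems above, here is a minimal sketch (my illustration, not part of the abstract) that evaluates Fmla-Fmla consequences over the four Belnapian truth values, reading Efde as preservation of {T, B}, "exactly true logic" as preservation of {T}, and non-falsity logic as preservation of {T, B, N}; it confirms that (ds) separates exactly true logic from Efde and that (dds) separates non-falsity logic from Efde.

```python
from itertools import product

# A minimal sketch (my illustration, not part of the abstract): Belnap's four
# values encoded as pairs (told_true, told_false): T=(1,0), B=(1,1), N=(0,0),
# F=(0,1). A consequence A |- B holds iff every valuation making A designated
# also makes B designated.

T, B, N, F = (1, 0), (1, 1), (0, 0), (0, 1)
VALUES = [T, B, N, F]

def neg(a):     return (a[1], a[0])
def conj(a, b): return (a[0] & b[0], a[1] | b[1])
def disj(a, b): return (a[0] | b[0], a[1] & b[1])

def holds(premise, conclusion, designated, variables=("A", "B")):
    """Check preservation of designated values over all valuations."""
    for vals in product(VALUES, repeat=len(variables)):
        v = dict(zip(variables, vals))
        if premise(v) in designated and conclusion(v) not in designated:
            return False
    return True

EFDE = {T, B}        # first-degree entailment
ETL  = {T}           # "exactly true logic"
NFL  = {T, B, N}     # non-falsity logic

# (ds)  ~A & (A v B) |- B : holds for exactly true logic, fails for Efde
ds_prem  = lambda v: conj(neg(v["A"]), disj(v["A"], v["B"]))
ds_concl = lambda v: v["B"]
print(holds(ds_prem, ds_concl, ETL), holds(ds_prem, ds_concl, EFDE))     # True False

# (dds) A |- ~B v (B & A) : holds for non-falsity logic, fails for Efde
dds_prem  = lambda v: v["A"]
dds_concl = lambda v: disj(neg(v["B"]), conj(v["B"], v["A"]))
print(holds(dds_prem, dds_concl, NFL), holds(dds_prem, dds_concl, EFDE))  # True False
```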
12:00
Claudia Tanús (Institute for Philosophical Research, National Autonomous University of Mexico (UNAM), Mexico)
The irrelevance of the axiom of Permutation
ABSTRACT. The axiom of Permutation, (A→(B→C))→(B→(A→C)), is valid in many relevant logic systems such as R. Although Permutation has not been considered particularly problematic, I hold that there are good relevantist reasons to distrust this axiom. Thus, I am interested in investigating whether Permutation should be relevantly valid.
There has been previous research on the matter. In "Paths to triviality", Øgaard shows how different principles can lead to the triviality of naïve truth theories over paraconsistent relevant logics. It is important to note that Øgaard's proofs concern rules and not axioms, and therefore his results only assess the consequences of having an instance of the rule as part of the theory. Amongst the proofs one can find the combination of the principle of Excluded Middle and the rule of Permutation. In "Saving the truth schema from paradox", Field characterizes a logic by adding to Kleene's logic a conditional that avoids paradoxes such as Curry's, and the resulting logic does not validate the axiom of Permutation.
Despite the counterexamples in natural language and Field's results, Permutation still satisfies the usual relevantist properties.
Variable sharing property (VSP): A logic L has the VSP iff in any theorem of L of the form A→B, A and B share at least one propositional variable.
Effective Use in the Proof (EUP): In every theorem of the form A1→(...(An→B)...), each Ai is used to prove B.
Permutation also satisfies stronger versions of these properties. However, I think there is a way of understanding the VSP that does not validate Permutation. I want to suggest that one could state a relevantist principle such as the ones mentioned above in which Permutation is not valid. The principle is the following:
Non-Implicative Extremes Property (NIEP): In every theorem of the form X → (Y → Z), X and Z cannot be implicative formulas.
I want to show that NIEP is an interesting relevantist principle that recovers enough axioms to characterize a fairly expressive relevance logic. My plan is to start by explaining the three principles which I will focus on: VSP, EUP and NIEP. I will motivate these principles by showing some results that are valid in Classical Logic but are not relevantly valid. Due to the ambiguity of EUP, I will have to suggest an interpretation with which I will work throughout this paper and I will relate it to NIEP in order to highlight the differences between EUP and NIEP. Afterwards I will focus on pinpointing the logics that exist between B and R to investigate what can be recovered from B taking into account NIEP. The goal is to find a logic characterized by VSP, EUP and NIEP.
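To make the shape of NIEP concrete, here is a minimal sketch (my illustration, not the paper's formalism) of a syntactic check showing that Permutation falls under the NIEP restriction, since its antecedent (and innermost consequent) are themselves implicative formulas.

```python
# A minimal sketch (my illustration, not the paper's formalism): formulas encoded
# as nested tuples ('->', X, Y), with atoms as strings. NIEP requires that in a
# theorem of the form X -> (Y -> Z), neither X nor Z is itself an implication.

def is_implicative(f):
    return isinstance(f, tuple) and f[0] == '->'

def violates_niep(f):
    """True if f has the shape X -> (Y -> Z) with an implicative X or Z."""
    if is_implicative(f) and is_implicative(f[2]):
        x = f[1]        # the outer antecedent X
        z = f[2][2]     # the innermost consequent Z
        return is_implicative(x) or is_implicative(z)
    return False

# Permutation: (A -> (B -> C)) -> (B -> (A -> C)). Its antecedent X is
# implicative, so NIEP rules the axiom out.
permutation = ('->', ('->', 'A', ('->', 'B', 'C')),
                     ('->', 'B', ('->', 'A', 'C')))
print(violates_niep(permutation))   # True
```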
Forking and categoricity in non-elementary model theory
ABSTRACT. The classification theory of elementary classes was started by Michael Morley in the early sixties, when he proved that a countable first-order theory with a single model in some uncountable cardinal has a single model in all uncountable cardinals. The proof of this result, now called Morley's categoricity theorem, led to the development of forking, a notion of independence jointly generalizing linear independence in vector spaces and algebraic independence in fields and now a central pillar of modern model theory.
In recent years, it has become apparent that the theory of forking can also be developed in several non-elementary contexts. Prime among those are the axiomatic frameworks of accessible categories and abstract elementary classes (AECs), encompassing classes of models of any reasonable infinitary logics. A test question to judge progress in this direction is the forty year old eventual categoricity conjecture of Shelah, which says that a version of Morley's categoricity theorem should hold of any AEC. I will survey recent developments, including the connections with category theory and large cardinals, a theory of forking in accessible categories (joint with M. Lieberman and J. Rosický), as well as the resolution of the eventual categoricity conjecture from large cardinals (joint with S. Shelah).
ABSTRACT. We shall present a manuscript containing Bernard Bolzano's written examination to become professor of elementary mathematics at Prague University. This examination took place in October 1804 and consisted of a written and an oral part. Only two candidates took part in it: Bernard Bolzano and Ladislav Jandera. The latter won. The committee asked the candidates three questions: to find the formula for the surface and volume of a sphere, to find the formula which measures the speed of water filling a tank, and to explain the proof of the law of the lever. In our talk, we shall analyze Bolzano's answers, especially to the first question, in light of his later reflections on the foundations of mathematics. This document is an important source for understanding both the evolution of Bernard Bolzano's mathematical thought and, more generally, the practice of teaching in early nineteenth-century Bohemia.
ABSTRACT. Large parts of Bolzano's mathematical manuscripts are today published in the Bernard Bolzano-Gesamtausgabe (BBGA), the most important of them being several volumes of the Grössenlehre (GL) containing Einleitung in die GL und Erste Begriffe, Reine Zahlenlehre, and Functionslehre. The manuscripts of GL also contain fragments of algebra and of the theory of series, and a beautifully written complete text Raumwissenschaft. A small volume Zahlentheorie appeared as Bolzano's Schriften, vol. 2, Prague 1931, which is in fact a part of the future volume 2A9 of the BBGA, Verhältniss der Theilbarkeit unter den Zahlen. Many of the manuscripts are preliminary sketches or auxiliary notes of later published works.
Bolzano's earlier manuscripts (1810-1817) are, on the one hand, a continuation of the Beyträge, in which the concept of the possibility of thinking together (Zusammendenkbarkeit) appears, yielding the concept of a whole or system; on the other hand, they contain similar contributions to the foundations of mathematics, with the concepts of collection (Inbegriff), of number, of quantity (Grösse), of imaginary (= complex) number and of infinity, as well as contributions to analysis and to geometry (several developments about the theory of parallels). Bolzano returns to these subjects very often in his mathematical diaries, which are an exceptional source for the study of the state of mathematical knowledge in the first half of the 19th century.
Finally, Bolzano's manuscripts contain important extracts from, comments on, and annotations of the books he studied, e.g. those of Carnot (Géométrie de la position), Wallis, Wolff, Kästner, Legendre, Lagrange (64 pages of summary of the Théorie des fonctions analytiques), Laplace, and Gauss, among others.
Patrick Allo (Vrije Universiteit Brussel, Belgium)
Conceptual Engineering in the Philosophy of Information
ABSTRACT. Conceptual engineering is not only a growing topic of methodological interest in contemporary, primarily analytically oriented, philosophy. It also occupies a central place within the philosophy of information (Floridi 2011b). Yet, despite the agreement that conceptual engineering is the philosophical task par excellence (Floridi 2011a, Cappelen 2018), we have two intellectual endeavours that have developed independently of each other, have shown little interest in each other, and whose interdependencies remain unclear. For the sake of terminological clarity, I will reserve the term conceptual engineering for the project of Cappelen and others (the primary project targeted in this symposium), while I will use constructionism to refer to the importance that is accorded to making / engineering / designing in the philosophy of information.
My goal in this paper is to clarify how constructionism relates to conceptual engineering; both as a means to situate the philosophy of information vis-à-vis the mainstream debate and identify the defining differences, and as a means to identify fruitful convergences and opportunities for mutual influence.
As a starting point, I would like to present the constructionism within the philosophy of information as a convergence of three conceptual shifts:
1. A focus on a maker’s conception of knowledge as an alternative to the more traditional focus on user’s conceptions of knowledge (Floridi 2018). The key idea here is the view that we only know what we make. Here, constructionism is an epistemological thesis about what we can know and about the kind of knowledge we ought to pursue.
2. An account of philosophical questions as open questions whose resolution requires the development of new conceptual resources (Floridi 2013). Here, constructionism is first and foremost a meta-philosophical thesis that addresses the question of which kinds of inquiry philosophers should engage in.
3. A view about the nature of, and our responsibilities towards, the infosphere and the project involved in “construction, conceptualization, semanticization, and finally the moral stewardship of reality” (Floridi 2011b: 23), especially in mature information societies (Floridi 2014). Here, constructionism becomes an ethical and political thesis.
Once this stage is set, we can begin to identify a number of notable divergences between the project of conceptual engineering and constructionism. Here, I propose to focus on only three such divergences.
First, constructionism is best described as a pluralist project; not as a meliorative or revisionist project.
Second, constructionism is best understood relative to a restricted domain. Its focus is on the conceptual resources we need for specific purposes; tasks or questions that are always relative to a purpose, context, and level of abstraction. Global changes of our language are not an immediate concern; even if specific goals may often require the development of new terminologies or demand the re-purposing of existing terms for novel uses.
Third, constructionism does not engage in the design of concepts for representational purposes.
REFERENCES
Cappelen, H. (2018), Fixing Language: An Essay on Conceptual Engineering, Oxford University Press.
Floridi, L. (2011a), ‘A defence of constructionism: philosophy as conceptual engineering’, Metaphilosophy 42(3), 282–304.
Floridi, L. (2011b), The Philosophy of Information, Oxford University Press, Oxford.
Floridi, L. (2013), ‘What is A Philosophical Question?’, Metaphilosophy 44(3), 195–221.
Floridi, L. (2014), The Fourth Revolution: How the Infosphere is Reshaping Human Reality, Oxford University Press, Oxford.
Floridi, L. (2018), ‘What a maker’s knowledge could be’, Synthese 195(1), 465–481.
14:30
Georg Brun (University of Bern, Switzerland) Kevin Reuter (University of Bern, Switzerland)
The Common-Sense Notion of Truth as a Challenge for Conceptual Re-Engineering
ABSTRACT. Tarski’s semantic theory of truth is often considered one of the prime examples of an explication. Aiming to satisfy what Carnap called the criterion of ‘similarity to the explicandum’, Tarski claimed that his definition of truth is in line with the ordinary notion of truth, which in turn can broadly be interpreted in the sense of correspondence with reality.
In the first part of the talk, we present results of experimental studies which challenge the idea that – within the empirical domain – the common-sense notion of truth is rooted in correspondence. In these experiments, participants read vignettes in which a person makes a claim that either corresponds with reality but is incoherent with other relevant beliefs, or that fails to correspond with reality but is coherent with other beliefs. Perhaps surprisingly – at least from a philosopher’s perspective – a substantial number of participants (in some experiments up to 60%) responded in line with the predictions of the coherence account. These results put substantial pressure on monistic accounts of truth. However, they also seem to undermine their most popular alternative: scope pluralism. While scope pluralists acknowledge that the truth predicate picks out different properties in different domains, no one has yet, as far as we know, worked out a pluralistic account within the same domain.
In the second part of the talk, we explore the consequences of these results for the project of re-engineering truth. In particular, we discuss the prospects of (i) defending a unique explication of truth, of (ii) re-engineering truth as a non-classical concept (e.g. as a family resemblance concept), and of (iii) giving more than one explicatum for ‘true’. Whereas the first of these options might seem attractive for theoretical reasons, it performs surprisingly poorly with respect to similarity to the explicandum, given the results of our experimental studies. Adopting (i) would simply amount to dismissing a great many applications of the truth-predicate. In this respect, options (ii) and (iii) are more promising. However, the success of the second option would not only depend on whether a non-classical concept of truth could be theoretically fruitful while being in line with enough of the data, but first of all on whether an exact description of the structure of such a non-classical concept could be given in a convincing way. In contrast to the first two options, (iii) would substantiate the claim that ‘truth’ is ambiguous. While this looks perhaps most apt to account for our data, such a proposal requires us to know more about the mechanisms that play a role in ordinary discourses on truth. Merely specifying several explicata without an account of what the conditions are for using one rather than the other would not make for an adequate conceptual re-engineering.
Hourya Benis-Sinaceur (Institut d'Histoire et Philosophie des Sciences et des Techniques (IHPST), France)
Granger's Philosophy of Style
ABSTRACT. I shall be laying forth Gilles-Gaston Granger’s Essai d’une philosophie du style (Paris, Armand Colin, 1968; second edition 1988). This book opened a brand new field for scientific epistemology. According to the Aristotelian postulate, scientific knowledge grasps only general facts. Granger’s aim was to consider the following question from all angles: is formal knowledge of individuals possible? Granger deemed the answer positive, on condition of defining a new relationship between form and content for a scientific piece of work, or for a certain collection of works at a certain time. This relationship is what we call the “style” of said piece or pieces; it refers to a type of rationality which is explained neither a priori, nor ex post by historical or factual causes. Between the content of a piece of thought (or simply: a book) and its formal structure there is an intermediate level: that of its meaning. Granger’s book explores this intermediate level through a comparison with works of art and an appeal to Peirce’s semiotics.
14:30
Karine Chemla (SPHERE-CNRS & University Paris Diderot, France)
Comparing the geometric style and algebraic style of establishing equations in China, 11th-13th centuries
ABSTRACT. Chinese sources composed between the 11th and the 13th century attest to various styles of establishing algebraic equations to solve problems. Some documents, like Liu Yi’s 劉益 11th-century Discussing the source of the ancient (methods), which survived through quotations in Yang Hui, Quick methods for multiplication and division for the surfaces of the fields and analogous problems, 1275, attest to a geometric statement of quadratic equations and hence to diagrammatic work to establish an equation. These documents are in continuity, both conceptual and practical, with the type of treatment of equations to which the canonical work The Nine Chapters on Mathematical Procedures (九章算術) and its commentaries by Liu Hui 劉徽 (263) and Li Chunfeng 李淳風 (656) testify. Other documents, like Li Ye’s 李冶 Measuring the Circle on the Sea-Mirror 測圓海鏡, attest to a symbolic statement of algebraic equations, and they present symbolic modes of establishing equations. Interestingly, these other documents are also in continuity with the treatment of equations given in The Nine Chapters. In fact, the former types of documents represent an older practice, on the basis of which, I argue, the new style took shape. This corpus of texts thus documents an important phenomenon with respect to mathematical styles, namely how one style is supplanted by another. My talk will focus on the historical process that led from one style to the other. What motivated the change of style? How did it occur, and how did the change affect working with equations? What new opportunities and losses did this change of style represent? These are the questions I plan to address.
ABSTRACT. In this paper I contrast two models of medicine, the Curative Model and the Inquiry Model, and favour the latter.
Medical traditions vary as widely as one can imagine across different times and places. Given the apparent variety of actual practices that are either claimed to be or regarded as medical, what, if anything, do they share, other than the fact that we are inclined to refer to all these wildly different practices as (at least partly) medical? Any adequate philosophy of medicine must respond to this question, and say something about what medicine is.
I consider a simple Curative Model, stating that the goal and business of medicine is to heal the sick. I distinguish the goal of medicine from its core business, that is, the exercise of some competence or skill that is characteristically medical and that is more than mere well-meaning but inexpert assistance, like bringing a sick relative a nice cup of tea and a pair of slippers. I defend the Curative Model’s implication that cure is the goal of medicine against various objections.
However, the curative record of medicine, considered across all times and places, is dismal. Yet medicine has persisted in many times and places. Why? I argue that this persistence shows that the business of medicine – the practice of a core medical competence – cannot be cure, even if that is the goal. Instead, what doctors provide is understanding and prediction, or at least engagement with the project of understanding health and disease. I argue that this Inquiry Model explains the persistence of ineffective medicine, and defend it against various objections.
SILFS (Società Italiana di Logica e Filosofia della Scienza) is the Italian national organization devoted to fostering research and teaching in the fields of logic, general philosophy of science and philosophy of the special sciences. It comprises a large number of academics working in such areas, who are based in Italy as well as in other countries. This symposium proposes to explore philosophical and methodological issues concerning the foundations of our best scientific theories, with the aim of bridging across the diverse research trends characterizing the Italian community of logicians and philosophers of science. Specifically, the symposium focuses on the formal status of successful theories developed in various fields of science, most notably the life-sciences, the mathematical sciences and the social sciences. For this purpose, it brings together experts on the logic and philosophy of medicine, physics, computation and socio-economics, so as to jointly investigate from different perspectives a host of inter-connected questions that arise when facing the outstanding problem of how to formalize scientific theories.
More to the point, we plan to deal with the following issues: (1) how to provide a formal treatment of empirical evidence in medical research; (2) how to elaborate a computational notion of trust that can be applied to socio-economical contexts; (3) how to construct a rigorous framework for the logic of physical theories, with particular focus on the transition from classical to quantum mechanics; (4) how to develop a mathematical foundation for the concept of reduction between different theoretical systems. By addressing such specific questions with a systematic and inter-disciplinary approach, the symposium wishes to advance our general understanding of the relation between theories and formalization.
A game-theoretic approach to evidence standards in Medicine
ABSTRACT. Contributions in the philosophy of science pointing to the social character of the scientific enterprise (Goldman, 1999; Kitcher, 2003; Solomon, 2001; Douglas, 2000; Steel and Whyte, 2012) emphasize how the social dimensions of science are (intrinsically) intertwined with its standard epistemological goals. This is all the more evident in the domains of health and the environment, owing to their various implications for individual and public goods. In a society where science is also financed by public investment, and where health and the environment are constitutionally protected goods, individuals, interest groups, industry, as well as governmental agencies and institutions obviously create a complex network of dynamic, strategic interactions.
This complex web may characterize each scientific domain distinctively. I focus in the present talk on medicine and the pharmaceutical industry and take the recent debates on the reproducibility crisis, the “AllTrials campaign” and various appeals to transparency and “Open Science” as a case in point, while presenting a game-theoretic approach to such phenomena.
ABSTRACT. Many disciplines recognize the importance of trust for the emergence and development of collaborative behaviours. However, being a multi-faceted concept, trust has always defied a comprehensive analysis that could define its core features and thus identify it as a clear notion. This is highly problematic when the concept has to be modelled and, subsequently, implemented in formal environments. It therefore comes as no surprise that there is little consensus, in the computer science literature, on the nature of computational trust and how to properly model it. Even though disagreements in scientific research are neither rare nor exceptionally troublesome in most cases, the lack of a unified conceptualization of the notion of trust is a serious issue once it is realized that social interactions are gradually transitioning from the physical realm to the digital one. In digital environments, all the trust-relevant biological traits that human beings intuitively identify are missing. Trusting or not can no longer be a matter of instinct, and effective mechanisms to establish trust relationships must be explicitly implemented in the design of digital systems. Those mechanisms can then help users consciously assess whether or not to trust another user during interactions. This talk is structured in two parts. In the first part, a conceptual analysis of the notion of trust is provided, in order to obtain general features common to socio-economic research on trust. Starting from previous generalist analyses of trust, various discipline-specific studies of trust are summarized and then merged into a unified theory. This theory is then reviewed with respect to laboratory experiments on trust. In the second part, the set of core features attributed to trust by the theory presented in part one is assessed and adjusted with respect to standard paradigms employed to build digital communities. In particular, focus will be put on the Beta paradigm and on the assumptions placed at the base of the EigenTrust algorithm. This adjusted set of features of trust will then help in defining a suitable computational notion of trust. This computational notion of trust is, finally, used to build a logical language: a modal logic augmented with trust operators on formulas. The semantics of this language is given in terms of neighbourhood semantics augmented with a trust structure. Decidability results for the language will be proved and possibilities for computational implementations of the language will be considered. Even setting aside the actual model built and the proposals made, researchers in the area of computational trust can benefit from the methodological insights presented in the talk on how to construct computational versions of social notions. The talk thus achieves three goals: i) it builds a methodology suited to constructing formal versions of social notions; ii) it provides deeper insight into the notion of trust; iii) it presents an actual logical language employable to reason about trust.
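As a minimal illustration of the kind of quantitative estimate the Beta paradigm mentioned above provides (an illustrative Python sketch with made-up numbers, not the speaker's actual model), an agent's trust in a partner can be taken as the mean of a Beta distribution updated with the partner's past behaviour:

# Illustrative sketch of a Beta-paradigm trust score (not the speaker's model).
# With r positive and s negative past interactions and a uniform Beta(1, 1)
# prior, the expected trustworthiness is (r + 1) / (r + s + 2).
def beta_trust(r: int, s: int) -> float:
    return (r + 1) / (r + s + 2)

print(beta_trust(0, 0))   # 0.5, no evidence yet
print(beta_trust(8, 1))   # ~0.82, mostly positive history
print(beta_trust(1, 8))   # ~0.18, mostly negative history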
Yousuf Hasan (University of Western Ontario, Canada)
Carnap on the Reality of Atoms
ABSTRACT. In Empiricism, Semantics, and Ontology (1950), Carnap introduces a distinction between what he calls “internal” and “external” questions. Internal questions for Carnap are relatively straightforward, since they arise within a language and are amenable to our ordinary methods of proof. In contrast, external questions are interpreted as practical questions that ask whether we should adopt a certain language based on its expected benefits. While Carnap originally made this distinction to avoid metaphysical worries that the use of semantics posed to empiricist philosophers (1950), he later extended the application of the distinction to theoretical entities as well (1966/1974). However, a straightforward application of the distinction to the realism/anti-realism controversy may be more problematic than Carnap thought.
In recent scholarship, Penelope Maddy has objected to Carnap’s extended use of the distinction using the example of the atomic hypothesis, arguing not only that the internal/external distinction is unsuccessful for talking about atoms, but that it should be dismissed altogether (2008). According to her criticism, Carnap’s distinction would make the reality of atoms a mere external question of adopting an “atom-language” for its practical merits. This would undermine the remarkable significance of the Perrin-Einstein experiments, which decisively proved the existence of atoms. With the refinement of our ordinary methods of evidence based on Brownian motion, Perrin and Einstein settled the seemingly intractable debate between energeticists and atomists in favour of the latter. For Maddy, our acceptance of the atomic hypothesis gives us good reasons to dismiss Carnap’s distinction as confused and overly simplistic, since it makes the reality of atoms a matter of convenience only and undermines the novel achievement of Einstein and Perrin.
According to William Demopoulos, however, we can develop an understanding of the distinction that does not reduce the atomic hypothesis to a mere linguistic proposal (2011). Moreover, the external debate between realists and anti-realists can still be understood as a dispute about a preference of language. Both the realist and the anti-realist would agree that atoms are real as a matter of fact, but they differ in their understanding of the “atom-language” which they adopt to speak about atoms. In this way, the significance of Einstein and Perrin’s analysis would still be preserved, while the difference between realists and anti-realists would be seen in how they understand the truth of theories.
In my talk, I will use Crispin Wright’s pluralist account of truth (1992) to propose other semantic ways that realists and anti-realists could differ from each other beyond what Demopoulos has already suggested (2011). Both Wright and Carnap under my interpretation ought to share a common worry: “What is at stake between the realist and anti-realist if both agree the statements of the contested discourse are irreducibly apt for truth and falsity?” I will show that Wright’s various criteria of objectivity can be used to allow for a more nuanced and stable understanding of Carnap’s internal/external distinction.
Functional Ontologies and Realism: The Case of Nuclear Physics
ABSTRACT. The existence of enduring inconsistent models for the same scientific object can ruin attempts to interpret them realistically. One representative case is the proliferation of contradictory models of atomic nuclear structure that make radically different assumptions about the same thing: the nature of the nucleus. The conceptual tensions between the various models have prompted forceful anti-realist charges in recent literature. A close examination of the case, however, suggests that its antirealist import is exaggerated. I argue that the more successful nuclear models exemplify opportunities for crucial resources of selective realism, particularly functional ontologies.
One major complication in modeling nuclear structure is that atomic nuclei are both too large for direct calculation and too small for the use of statistical approaches. Treating nuclei as bound states of nucleons, physicists work out semi-empirically an effective two-nucleon potential, which in principle allows for the calculation of the eigenstates and energies of multi-nucleon systems. In practice, however, the number of nucleons in nuclei with A > 3 makes the calculation intractable. So, guided by basic quantum mechanics and experimental findings, physicists from the 1930s on have constructed various models of nuclear structure, each with some significant (if limited) success. Most nuclear physicists think that considerable understanding has been achieved in this way. Still, nuclear physics is commonly regarded as an ugly duckling in basic science because its approaches lack fundamentality, “united” only at a very abstract level with the “fundamental” theory of nucleon-nucleon interaction.
So, to what extent (if any) can those nuclear models receive realist interpretation? Taken as whole constructs, all current nuclear models are patently false. But then, as whole constructs, all empirical theories are false according to general good sense. Larry Laudan (1981) turned this insight into an ambitious anti-realist argument that realists forcefully challenge. Bringing the matter home to the nuclear case, some of the responses given to Laudan help to articulate realist reactions to the situation of nuclear models, or so I suggest in this contribution.
The realist strategy I suggest goes roughly as follows. Typically, in modern scientific fields, successor theories retain parts of earlier theories, deeming them prospectively truthful. This applies, in particular, to “functional” versions that bypass incommensurability charges by limiting their descriptive focus to abstract theoretical levels of lesser fundamentality and scope and greater coarse-graining than they had in the original proposal. Accordingly, on the selective approach I propose, realist commitment goes primarily to “functional” theories or parts thereof. The entities and regularities of realist import are identified by what they do rather than by what they “ultimately are” or are made of. A functional theory [T] comprises existential claims and laws (drawn from a fuller theory T but now more restricted, abstract, and coarse-grained). Realist selection of [T] out of a theory T amounts to asserting that the kinds of entities and regularities spelled out in [T] are real (i.e., they are at play in the intended domain). Functional models can originate in radically different ideas associated with different, incompatible ontological foundations—i.e., functional models are schematic and typically present multiple realizability. As such, they can be accepted as an accurate description of underlying stuff much more easily than their fuller base models, without contradiction. The abstraction associated with each [T]-version of a theory responds to specific interests and focuses on just some aspect of the world, but its application is objective. The very same object can belong in very different functional ontologies over different domains of interest. The “ordinary” description of an apple differs markedly from its molecular description (“Eddington's apple”), yet the two descriptions are “correct,” each over different concerns and regimes of size and energy. Both tell precise stories about the modal structure of an apple in their intended contexts.
A recent paper by Margaret Morrison (2011) articulates well-structured complaints about the leading models of nuclear structure. She gives perceptive expression to the interpretive unease provoked by much of nuclear physics. Accordingly, I focus on her analyses and consider the main charges she raises, roughly in order of strength, and then suggest replies selective realists can offer, particularly to the charges that the models lack unification, are incomplete, show explanatory failure and empirical underdetermination, and contain false central posits. If the analyses I propose are correct, then the noted antirealist allure of nuclear models is largely an artifact of an epistemological overvaluation of fundamentality in the philosophy of physics.
Some philosophical remarks on the concept of structure: the case of Ladyman's and Heller's views
ABSTRACT. From the perspective of the historical development of science, especially physics, one can notice, on the one hand, the essential role of the concept of structure (Landry 2011). On the other hand, it has never been proved that no theory ever correctly described the nature of the entities it postulated, or that such entities were never crucial for new predictions (Alai 2017). In today’s dispute about the structural understanding of theories, some philosophers (the focus here will be on the philosophical ideas proposed by James Ladyman and Michał Heller) have pointed out the metaphysical and mathematical consequences of the ongoing discussions (Ladyman 2002b; 2008). Viewing objects in a structural/relational way in the ontology of physics and mathematics, rather than in a classical one, shows that the concept of structure remains crucial for the philosophy of physics and also for the philosophy of science (Heller 2006; 2016a). In this context, however, the question arises of how structuralism in the philosophy of physics depends not only on structuralism in the philosophy of mathematics but also on certain metaphysical assumptions regarding the concept of structure. The main question to be clarified is therefore why science and contemporary philosophy of science need a philosophy of structure, and what the correlation could be between the metaphysical and epistemological commitments of the concepts proposed so far (Ladyman 2002a; 2007).
According to Ladyman and Heller, it is the mathematical structure of scientific theories that plays the crucial role (e.g. the case of quantum mechanics and its different formulations) and, according to the latter, the category-theoretic formalism probably offers new perspectives and at the same time widens the problematic of structures and their relationships (Heller 2014; 2015). Subsequently, one can ask whether different formulations of scientific theories are representations of the same ideal mathematical structure or rather of the physical structure of the world. In the first case, the question arises whether this ideal mathematical structure implicitly assumes some Platonic vision of mathematical objects or rather reveals an epistemic strategy of our cognition. In the second case, contemporary philosophy of science is indeed replete with debates which do not merely concern the characteristics of a well-defined scientific theory, but also what scientific theories can say about the world, i.e. whether the entities they postulate really exist or are just useful fictions. It can therefore be asked whether structural realism grants the laws of nature the ontic status of real elements of the entitative structure of the world, or whether the concept of structure remains a heuristic instrument, because the retention of mathematical formalism is merely a comfortable and cost-saving pragmatic tool used among the community of researchers. As Heller emphasises, it seems that the problem of the rationality/intelligibility of the world (in its epistemic and ontic aspects) is connected to the structuralist view (Heller 2006; 2016b). Probably, among some philosophers, such a great commitment to the structuralist view reveals some hidden metaphysical presuppositions about the world and about science.
Bibliography
Alai, M., 2017. “The Debates on Scientific Realism Today: Knowledge and Objectivity in Science”, in E. Agazzi (ed.), Varieties of Scientific Realism. Objectivity and Truth in Science. Dordrecht-Heidelberg-London-New York: Springer.
Heller, M. 2006. “Discovering the World structure as a goal of physics”, in Pontifical Academy of Sciences Acta (18), Paths of Discovery. Vatican City: 154-167.
–––. 2014. “The field of rationality and category theory”, in M. Eckstein, M. Heller, S.J. Szybka, Mathematical Structures of the Universe. Kraków: CCP, 441-457.
–––. 2015. “Category Theory and the Philosophy of Space”, in R. Murawski (ed.), Filozofia matematyki i informatyki. Kraków: CCP, 185-200.
–––. 2016a. “Analogy, Identity, Equivalence”, in W. Arber, J. Müttelstrab, M.S. Sorondo, Complexity and Analogy in Science: Theoretical, Methodological and Epistemological Aspects, Pontificiae Academiae Scientiarum, Città del Vaticano, 257-267.
–––. 2016b. “Category free category theory and its philosophical implications”, Logic and Logical Philosophy, vol. 25: 447-459.
Ladyman, J., 1998. “What is Structural Realism?”, Studies in History and Philosophy of Science, 29: 409-424.
–––. 2002a. “Science, metaphysics and structural realism”, Philosophica, 67: 57–76.
–––. 2002b. Understanding Philosophy of Science, London: Routledge.
–––. 2008. “Structural Realism and the Relationship between the Special Sciences and Physics”, Philosophy of Science, 75: 744–755.
–––. and Ross, D., with Spurrett, D. and Collier, J., 2007. Every Thing Must Go: Metaphysics Naturalised, Oxford: Oxford University Press.
Landry, E. and Rickles, D. (eds.), 2011. Structural Realism: Structure, Object, and Causality, Dordrecht: Springer.
Regarding minimal structural essentialism in philosophy of spacetime
ABSTRACT. The main goals of my presentation are: a) to analyze and criticize David Glick’s (2016) position, dubbed “minimal structural essentialism” (henceforth: MSE), a recent contribution of his to the debate about the nature of spacetime and to the more general problem of metaphysical explanations; b) to draw some positive morals from MSE. My main claim in this context is that MSE points towards the possibility of in-world structural individuation of fundamental objects.
As Glick submits, MSE is a candidate for a viable account of metaphysical explanation of general permutability (GP) of objects in the domains of our best fundamental physical theories – GR and quantum mechanics (with respect to quantum statistics of indistinguishable particles). GP is an interpretative principle, according to which “[for] every permutation P of the a entities in S, R(a), R(Pa) represent the same possible state of the world” (Stachel 2002), R being an ensemble of relations. When it comes to GR the entities in question are spacetime points. The reason why it is important to keep GP connected to GR is that if spacetime points did not behave in accordance with GP, determinism in GR would be threatened, as the famous hole argument reveals (Earman, Norton 1987; Earman 1989; Stachel 2014).
MSE applied to GR draws inspiration from metrical essentialism (Maudlin 1988; 1990; Bartels 1996), one of the many responses to the hole argument, in order to explain metaphysically why GP obtains in GR. The central claim of MSE is that “for any relational structure S and any object a embedded in S, a has its place in S essentially whenever S obtains” (Glick 2016: 217). In my presentation, after elaborating on MSE, especially on some further considerations about the notion of essentialism employed here and how it allows one to explain GP, I will raise several objections against MSE.
I shall argue: i) why MSE buys into an arbitrary selectivity towards the space of solutions of Einstein's field equations; ii) why the distinction between actual and possible structure is effectively rendered useless in MSE; iii) why the explanation of GP provided by MSE is highly obscured by the ambiguity of what types of structure really count when it comes to the possibility of being “obtainable” in the world. Finally, I will formulate a moral from MSE – that structural individuation of spacetime points can be viewed as in-world individuation, yielding, as I claim, very special objects: a type of non-individuals Lowe called “quasi-individuals” (Lowe 2016: 59). I shall argue why non-essentialist approaches are more suitable in this context.
References:
Bartels, A. (1996), Modern Essentialism and the Problem of Individuation of Spacetime Points, Erkenntnis 45, 25–43.
Caulton, A., Butterfield, J., (2012), Symmetries and paraparticles as a motivation for Structuralism, The British Journal for the Philosophy of Science 63 (2), 233-285.
Earman, J. (1989), World Enough and Space-Time, Cambridge: MIT Press.
Earman, J., Norton, J., (1987), What Price Space-Time Substantivalism? The Hole Story, British Journal for the Philosophy of Science 38, 515–525.
Glick, D. (2015), “Minimal structural essentialism” [In:] A. Guay, T. Pradeu (Eds.), Individuals Across the Sciences, Oxford: Oxford University Press, 207 – 225.
Lowe, E.J. (2016), “Non-Individuals”, [In:] A. Guay, T. Pradeu (Eds.), Individuals Across the Sciences, Oxford: Oxford University Press, 49 – 60.
Maudlin, T., (1988), “The Essence of Space-Time” [In:] A. Fine and J. Leplin (Eds.), PSA, Vol. 2, Philosophy of Science Association, East Lansing, 82 – 91.
Maudlin, T., (1990), Substances and Space-Time: What Aristotle Would Have Said to Einstein, Studies in History and Philosophy of Science 21, 531–561.
Pooley, O. (2006). “Points, particles, and structural realism”. [In:] D. Rickles, S. French, and J. Saatsi (Eds.), The Structural Foundations of Quantum Gravity, Oxford: Oxford University Press, 83 – 120.
Stachel, J. (2002), “The relations between things versus the things between relations: the deeper meaning of the hole argument”. [In:] D. B. Malament (Ed.), Reading Natural Philosophy/ Essays in the History and Philosophy of Science and Mathematics, Chicago and LaSalle, IL: Open Court, 231 – 266.
Stachel, J. (2014), The Hole Argument and Some Physical and Philosophical Implications, Living Rev. Relativity, 17/1, http://www.livingreviews.org/lrr-2014-1, doi:10.12942/lrr-2014-1
One of the primary tasks of philosophers of physics is to determine what our best physical theories tell us about the nature of reality. Our best theories of particle physics are quantum field theories. Are these theories of particles, fields, or both? In this colloquium we will debate this question in the context of quantum field theory and in an earlier and closely related context: classical electromagnetism. We believe that the contemporary debate between particle and field interpretations of quantum field theory should be informed by a close analysis of classical electromagnetism and seek to demonstrate the fruitfulness of such a dialogue in this session.
Our first speaker will start the session by discussing the debate between Einstein and Ritz in the early 20th century over whether classical electromagnetism should be formulated as a theory of particles interacting directly with one another or interacting via fields. They will discuss the technical challenges facing each approach as well as the role that philosophical and methodological presuppositions play in deciding which approach is to be preferred.
Our second speaker will defend a dual ontology of particles and fields in classical electromagnetism. They argue that the singularities which arise in the standard Maxwell-Lorentz formulation of electromagnetism are unacceptable, but that the standard equations of electromagnetism can be modified to avoid them (as is done in the Born-Infeld and Bopp-Podolsky formulations).
Our third speaker will recount the problems of self-interaction that arise for a dual ontology of particles and fields in the context of classical electromagnetism and defend point particle ontology. They will go on to argue that attempts to formulate quantum field theory as a theory of fields have failed. They believe that it too should be interpreted as a theory of particles.
Our final speaker will defend a pure field ontology for quantum field theory. They will argue that quantum theories where the photon is treated as a particle are unacceptable. On the other hand, treating the electron as a field yields significant improvements over the ordinary particle interpretation.
Particles, fields, or both? A reevaluation of the Ritz-Einstein debate
ABSTRACT. According to its standard interpretation, classical electrodynamics is a theory with a dualist ontology: a particle-field theory according to which electromagnetic fields mediate interactions among charged particles. Similarly, in quantum electrodynamics photons are treated as fields while electrons are treated as discrete particles. Even though the particle-field ontology is widely accepted, the dual ontology results in both conceptual and mathematical problems, chief among them the problem of self-interaction. In this talk I will review some of these problems as they emerge in classical electrodynamics and survey potential solutions to them. One pair of solution strategies, of course, involves getting rid of either particles or fields in favor of a monistic ontology (see the contributions by the other speakers in this symposium). As historical background for the other talks in this symposium, I will reexamine a debate surrounding one particular attempt at developing an action-at-a-distance theory of classical electromagnetism: Walter Ritz’s theory and his debate with Albert Einstein in the first decade of the twentieth century. While Ritz’s theory ultimately was not successful, the debate sheds important light on some of the conceptual problems of field-theoretic frameworks, on the one hand, and action-at-a-distance theories, on the other, and puts into focus how the debate between the competing frameworks is shaped by philosophical or methodological presuppositions as well as by the frameworks’ formal or mathematical problems.
ABSTRACT. Classical electrodynamics, which posits the existence of particles and fields, has singularities. But not all singularities are created equal: some are good, some are bad, and some are really bad. The singularities in the Maxwell–Lorentz theory of classical electrodynamics are either bad or really bad. Both kinds of singularities manifest themselves in the self-interaction problem: one comes from the near field and the other from the radiation field. Each singularity needs to be treated differently in order to yield equations of motion for charges. Dirac showed how to do that: while the far-field singularity (a bad singularity) can be dealt with by Taylor approximations, the near-field singularity is more severe and needs a mass renormalization procedure, which introduces an infinite electrodynamic mass.
In order to have a well-defined theory that doesn’t rely on approximations, Born and Infeld developed in the 1930s a classical electromagnetic theory with a dual ontology of particles and fields that doesn’t suffer from the self-interaction problem. But still this theory keeps a singularity for the electromagnetic field at the location of the charges. This singularity is a good one, though, because it (surprisingly) leads to well-defined equations of motion and to a finite self-energy. Since the Born-Infeld field theories are non-linear, it is very difficult to rigorously solve the equations of motion. Therefore, very little is actually known about this theory.
In the 1940s, Bopp, Podolsky, Landé, and Thomas proposed an alternative theory of charges and electromagnetic fields that is linear, although of higher order than the standard Maxwell equations. This theory also has singularities, but they are of a mild kind that again leads to well-defined self-interactions and finite self-energies. Although the interaction between charges and the electromagnetic field poses mathematical and physical challenges, the Born–Infeld and Bopp–Podolsky theories are promising, though unfortunately rather little-known, proposals for how to construct a consistent classical theory.
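For orientation, the Lagrangian densities standardly associated with these two proposals can be written as follows (a textbook-style sketch added here for reference, not taken from the abstract; b is the Born-Infeld field-strength parameter, a the Bopp-Podolsky length parameter, and sign conventions vary):

\mathcal{L}_{\mathrm{BI}} = b^{2}\left(1-\sqrt{1+\tfrac{1}{2b^{2}}F_{\mu\nu}F^{\mu\nu}-\tfrac{1}{16b^{4}}\bigl(F_{\mu\nu}\tilde{F}^{\mu\nu}\bigr)^{2}}\right), \qquad \mathcal{L}_{\mathrm{BP}} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \tfrac{a^{2}}{2}\,\partial_{\mu}F^{\mu\nu}\,\partial^{\lambda}F_{\lambda\nu}.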
Arto Mutanen (Finnish Naval Academy & Finnish National Defence University, Finland)
On Explanation and Unification
ABSTRACT. The notion of explanation is one of the most central notions in the philosophy of science. It is closely connected to various other notions, such as scientific theory and theory construction, the interpretation of theories and the observational-theoretical distinction, and scientific reasoning. The classical papers Hempel (1942) and Hempel and Oppenheim (1948) gave a logically strict and philosophically interesting formulation of the notion (Hempel 1965). The model is known as the Covering Law Model, which says that an explanation is a logical argument in which at least one scientific law is used.
In science, there has been a tendency for new theories to be more general than older ones: a new theory unifies older theories. The analogous notion of unification in the theory of explanation is sometimes called reductive explanation, in which notions like superseding and screening off are of central importance: a theory T2 supersedes a theory T1 in explaining g given e if P(g | T2 & e) > P(g | T1 & e) > P(g | e), and a theory T2 screens off a theory T1 from g given e if P(g | T1 & T2 & e) = P(g | T2 & e) ≠ P(g | T1 & e) (Niiniluoto 2016 and references therein).
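A toy numerical check of these two conditions (purely illustrative, with invented conditional probabilities; not part of the abstract), written in Python:

# Hypothetical conditional probabilities for explanandum g, evidence e,
# and theories T1, T2 (all numbers made up for illustration).
P_g_e       = 0.30   # P(g | e)
P_g_T1_e    = 0.60   # P(g | T1 & e)
P_g_T2_e    = 0.85   # P(g | T2 & e)
P_g_T1_T2_e = 0.85   # P(g | T1 & T2 & e)

# T2 supersedes T1 in explaining g given e:
supersedes = P_g_T2_e > P_g_T1_e > P_g_e
# T2 screens off T1 from g given e:
screens_off = (P_g_T1_T2_e == P_g_T2_e) and (P_g_T1_T2_e != P_g_T1_e)

print(supersedes, screens_off)   # True True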
In the theory of explanation a different notion of unification is also used. For example, according to Kitcher (1989), unification is based on explanatory patterns (or explanatory stores). Explanatory patterns unify the underlying knowledge base (a set of accepted sentences). Kitcher says that explanation is a deductive argument, so Kitcher's approach connects nicely with the Covering Law Model. However, the notion of explanatory pattern (or explanatory store) needs to be further specified (Psillos 2002). To do this, we will closely analyse scientific reasoning or, more precisely, explanatory reasoning. For this task we will use the Interrogative Model of Inquiry developed by Hintikka. The Interrogative Model allows us to explicate where the explanatory power of explanations comes from (Halonen & Hintikka 2005).
The Interrogative Model explicates the reasoning process as a knowledge-construction process, and a detailed analysis of the explanatory reasoning process yields detailed logical and conceptual information about the explanatory process. This is connected to the pragmatics of explanation; our analysis can be called logical pragmatics. In particular, we obtain important information about the role of general laws in explanation, and hence a better understanding of the Covering Law Model. A general analysis of explanatory reasoning processes allows us to consider more closely Kitcher's notion of explanatory pattern and hence the notion of unification. Moreover, the Interrogative Model throws new light on the relationship between the two notions of unification referred to above.
References:
Halonen, Ilpo & Hintikka, Jaakko, 2005, Toward a Theory of the Process of Explanation, Synthese 143(1), 5-61.
Hempel, Carl G., 1965, Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, NY: Collier-Macmillan Limited
Kitcher, Philip, 1988 (1981), Explanatory Unification, in Klemke E. D., Hollinger, R. & Rudge D. W. (eds.) Introductory Readings in the Philosophy of Science, NY: Prometheus Books
Niiniluoto, Ilkka, 2016, Unification and Confirmation, Theoria 31
Psillos, Stathis, 2002, Causation and Explanation, Chesham: Acumen
CANCELLED: Equilibrium theory and scientific explanation
ABSTRACT. General equilibrium theory and its concordant welfare theorems are often regarded as centerpieces of neoclassical microeconomics. Attempts to prove the theory began in the 19th century with Walras (1874), though it was proven canonically by Arrow and Debreu (1954) in the mid-20th century. The theory asserts that under a set of mathematical assumptions about competitive markets, an economy can achieve a Pareto-optimal state of equilibrium, where there is no excess demand for goods or for labor, all without the assistance of a central planner or government authority. Such a condition appears to be highly desirable: since there is no excess demand for labor, every agent is satisfactorily employed; moreover, since there is no excess demand for other goods, the general economy does not contain agents who are hungry, malnourished, or otherwise in want. Hence, the mathematical theory has sometimes been invoked in certain laissez-faire policy recommendations according to which a government should not intervene in markets.
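To make the market-clearing condition concrete, here is a toy computation (an illustrative Python sketch under textbook Cobb-Douglas assumptions, not part of the abstract): in a two-agent, two-good exchange economy the Walrasian equilibrium relative price has a closed form, and at that price excess demand vanishes.

# Two-agent, two-good exchange economy with Cobb-Douglas utilities
# u_i(x, y) = x**a[i] * y**(1 - a[i]); the price of good y is normalised to 1.
a  = [0.6, 0.3]    # made-up preference weights on good x
ex = [1.0, 2.0]    # endowments of good x
ey = [3.0, 1.0]    # endowments of good y

# Market clearing for good x gives a closed-form relative price p = px / py.
p = sum(a[i] * ey[i] for i in range(2)) / sum((1 - a[i]) * ex[i] for i in range(2))

# At p, each agent demands a[i] * wealth / p units of x; excess demand is ~0.
wealth   = [p * ex[i] + ey[i] for i in range(2)]
demand_x = [a[i] * wealth[i] / p for i in range(2)]
print(p, sum(demand_x) - sum(ex))   # equilibrium price, excess demand near 0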
The scientific status of general equilibrium theory has been subject to much debate. I consider general equilibrium theory in terms of philosophical theories of scientific explanation and conclude that a new model of explanation would be required, which I call Analogical-Projective Explanation. I argue that this makes sense of claims by Sugden (2000) and Gibbard and Varian (1978) that microeconomic theorizing involves making “caricatures” or “credible worlds”. (Cf. Mäki (2002) for more examples of fiction in economics.) I make this argument in three stages.
In the first stage, I consider the classical models of scientific explanation given by Hempel and Oppenheim (1948) as candidates for the explanations purported by the theory. I argue that Deductive-Nomological Explanation is not satisfactorily compatible with the sort of explanations purportedly given by equilibrium theory, since it requires that scientific explanations consist in deductions which identify the causes of events using universal natural laws (such as Newton’s Law of Gravitation or Boyle’s Law for Gases). Equilibrium theory does not put forward any causal explanations from universal and necessary laws of nature. Likewise, I argue that the explanations purported by equilibrium theory are not Inductive-Statistical explanations: the claims of the theory are not inductive generalizations from an accumulation of evidence.
In the second stage, I propose a generalization of Hempel’s Inductive-Statistical explanation, which I call Analogical-Projective Explanation, in which a feature or condition in a target model is explained by appealing to a feature or condition in an analogical model. Such claims are deductions made in an analogical logic based upon a set of projective similarities between the two models. We make many such explanations in science and everyday life: for example, we may make some claims about an architectural structure (i.e. the target model) by investigating a scale model of it (i.e. the analogical model).
In the third stage, I show how we can make use of A-P explanation in microeconomic models like the Solow-Swan growth model or Akerlof’s Market for Lemons. However, I also argue that the kind of analogy required for equilibrium theory is highly problematic with respect to analogical inference, since the target (i.e. the real economy) and the analogical model (i.e. the mathematical equilibrium economy) cannot be related in projective similarity to each other axiomatically or through a structural homomorphism.
Arrow, K. and Debreu, G. (1954). Existence of an equilibrium for a competitive economy. Econometrica, 22(3):265–290.
Gibbard, A. and Varian, H. R. (1978). Economic models. Journal of Philosophy, 75(11):664–677.
Hempel, C. G. and Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15(2):135–175.
Mäki, U. (2002). Fact and fiction in economics: models, realism and social construction. Cambridge University Press.
Sugden, R. (2000). Credible worlds: the status of theoretical models in economics. Journal of Economic Methodology, 7(1):1–31.
Walras, L. (1954/1874). Elements of Pure Economics, trans. William Jaffe. Richard D. Irwin.
Some formal and informal misunderstandings of Gödel's incompleteness theorems
ABSTRACT. Gödel's incompleteness theorems are among the most remarkable and profound discoveries in the foundations of mathematics. They have had a wide and profound influence on the development of mathematics, logic, philosophy, computer science and other fields, and they raise a number of philosophical questions concerning the nature of logic and mathematics, as well as of mind and machine. However, there are ample misinterpretations of Gödel's incompleteness theorems in the literature and folklore.
In this paper, we will focus on some misinterpretations of Gödel's incompleteness theorems in mathematics and logic which are not covered in [1]. The motivation of this paper is to review and evaluate some formal and informal misinterpretations of Gödel's incompleteness theorems and their consequences, from the literature and folklore, as well as to clarify some confusions in light of current research on Gödel's incompleteness theorems.
In particular, we focus on how recent research on incompleteness clarifies some popular misinterpretations of Gödel's incompleteness theorems.
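For reference, one standard formulation of the two theorems (our wording, using Rosser's strengthening for the first theorem; not a quotation from the paper) is:
G1: If T is a consistent, recursively axiomatizable theory that interprets Robinson arithmetic Q, then there is a sentence G_T in the language of T such that T proves neither G_T nor its negation.
G2: If, in addition, T extends PA and Con(T) is the consistency statement built from a provability predicate satisfying the standard derivability conditions, then T does not prove Con(T).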
Firstly, we discuss some misinterpretations of Gödel's first incompleteness theorem (G1). In particular, we will focus on the following interpretations or aspects of G1: the claim that there is a truth that cannot be proved; the metaphorical application of G1 outside mathematics and logic; the claim that any consistent formal system is incomplete; the claim that any consistent extension of PA is incomplete; the dependence of incompleteness on the language of the theory; the difference between the theory of arithmetic and the theory of the reals; the claim that Gödel's proof is paradoxical due to its use of the Liar Paradox; the difference between the notion of provability in PA and the notion of truth in the standard model; sentences of arithmetic independent of PA with real mathematical content; and the theory with the minimal degree of interpretation for which G1 holds.
Secondly, we discuss some misinterpretations of Gödel's second incompleteness theorem (G2). In particular, we will focus on the following interpretations or aspects of G2: a delicate mistake in the proof of G2; the vagueness of the consistency statement; the intensionality of G2; the claim that G2 holds for any consistent extension of PA; the claim that there are arithmetic truths which cannot be proved in any formal theory in the language of arithmetic; the claim that the consistency of PA can only be proved in a stronger theory properly extending PA; and the claim that G2 refutes Hilbert's program.
Finally, we discuss, in light of current advances in this field, the popular interpretation that Gödel's incompleteness theorems show that the mechanism thesis fails.
Reference:
[1] T. Franzén, Gödel's Theorem: An Incomplete Guide to Its Use and Abuse, A. K. Peters, 2005.
14:30
Sourav Tarafder (St. Xavier's College, Kolkata, India) Benedikt Loewe (University of Amsterdam, University of Hamburg, Germany) Robert Passmann (University of Amsterdam, Netherlands)
Constructing illoyal algebra-valued models of set theory
ABSTRACT. The construction of algebra-valued models of set theory starts from an algebra A and a model V of set theory and forms an A-valued model V(A) of set theory that reflects both the set theory of V and the logic of A. This construction is the natural generalisation of Boolean-valued models, Heyting-valued models, lattice-valued models, and orthomodular-valued models [1, 2, 7, 5] and was developed in [4].
Recently, Passmann introduced the terms “loyalty” and “faithfulness” while studying the precise relationship between the logic of the algebra A and the logical phenomena witnessed in the A-valued model of set theory in [6]. A model is called loyal to its algebra if the propositional logic in the model is the same as the logic of the algebra from which it was constructed, and faithful if every element of the algebra is the truth value of a sentence in the model. The model constructed in [4] is both loyal and faithful to PS3, a three-valued algebra which can also be found in [4].
In this talk, we shall give elementary constructions to produce illoyal models by stretching and twisting Boolean algebras. After we give the basic definitions, we remind the audience of the construction of algebra-valued models of set theory. We then introduce our main technique: a non-trivial automorphism of an algebra A excludes values from being truth values of sentences in the A-valued model of set theory. Finally, we apply this technique to produce three classes of models: tail stretches, transposition twists, and maximal twists.
It will be shown that there exist algebras A which are not Boolean algebras, and hence whose corresponding propositional logic is non-classical, but such that any sentence of set theory gets either the value 1 (top) or 0 (bottom) of A in the algebra-valued model V(A), where the subalgebra of A with domain {0, 1} is the two-valued Boolean algebra. This shows that the base logic for the corresponding set theory is not classical, whereas the set-theoretic sentences behave classically in this case. This talk is based on [3].
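As a gloss on the main technique (our paraphrase, added for orientation; not a quotation from [3]): since the truth value ⟦φ⟧ of a sentence φ without parameters is computed uniformly from the operations of A, any automorphism π of A satisfies π(⟦φ⟧) = ⟦φ⟧; sentence values are therefore confined to the fixed points of π, so a non-trivial automorphism excludes every non-fixed element of A from being the truth value of a sentence.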
Bibliography
[1] Bell, J. L. (2005). Set Theory, Boolean-Valued Models and Independence Proofs (third edition). Oxford Logic Guides, Vol. 47. Oxford: Oxford University Press.
[2] Grayson, R. J. (1979). Heyting-valued models for intuitionistic set theory. In Fourman, M. P., Mulvey, C. J., and Scott, D. S., editors, Applications of Sheaves, Proceedings of the Research Symposium on Applications of Sheaf Theory to Logic, Algebra and Analysis held at the University of Durham, Durham, July 9–21, 1977. Lecture Notes in Mathematics, Vol. 753. Berlin: Springer, pp. 402–414.
[3] Loewe, B., Passmann, R., & Tarafder, S. (2018). Constructing illoyal algebra-valued models of set theory. Submitted.
[4] Loewe, B., & Tarafder, S. (2015). Generalized Algebra-Valued Models of Set Theory. Review of Symbolic Logic, 8(1), pp. 192–205.
[5] Ozawa, M. (2009). Orthomodular-Valued Models for Quantum Set Theory. Preprint, arXiv:0908.0367.
[6] Passmann, R. (2018). Loyalty and faithfulness of model constructions for constructive set theory. Master’s thesis, ILLC, University of Amsterdam.
[7] Titani, S. (1999). A Lattice-Valued Set Theory. Archive for Mathematical Logic, 38(6), pp. 395–421.
ABSTRACT. Since the historical turn, there has been a great deal of anxiety surrounding the relationship between the history of science and the philosophy of science. Surprisingly, despite six decades of scholarship on this topic, we are no closer to achieving a consensus on how these two fields may be integrated. However, recent work has begun to identify the crucial issues facing a full-fledged account of integrated HPS (Domski & Dickson (Eds.) 2010; Schickore 2011; Mauskopf & Schmaltz (Eds.) 2011). We contend that the inability to deliver on a model of integrated HPS is partly due to an insufficient appreciation of the distinction between normative and descriptive accounts of science, an over-emphasis on individual case studies, and the lack of a general theory of science to mediate between historical data and philosophical conceptions of science.
In this paper, we provide a novel solution to this conundrum. We claim that the emerging field of scientonomy provides a promising avenue for how the philosophy of science may benefit from the history of science. We begin by showing that much of contemporary philosophy of science is ambiguous as to whether it attempts to answer normative or descriptive questions. We go on to argue that this ambiguity has led to many attempts to cherry-pick case studies and hastily draw normative methodological conclusions. Against this, we claim that these two features of our reasoning should be clearly separated so that we may show how descriptive history of science may benefit normative philosophy of science. Specifically, we show that a general theory of scientific change is required to mediate between individual findings of the history of science and normative considerations of the philosophy of science. Such a general descriptive theory is necessary to avoid the problem of cherry-picking and avoid the temptation to shoehorn historical episodes into the normative confines of a certain methodology. The main aim of scientonomy is to understand the mechanism that underlies the process of changes in both theories and the methods of their evaluation. We demonstrate how a gradual advancement of this general descriptive theory can have substantial input for the normative philosophy of science and turn scientonomy into a link between the descriptive history of science and the normative philosophy of science.
A Naturalized Globally Convergent Solution to the Problem of Induction
ABSTRACT. The problem of induction questions the possibility of justifying any rule of inductive inference. Moreover, avoiding the paradoxes of confirmation is a prerequisite for any adequate solution to the problem of induction. The confirmation relation has traditionally been taken to be a priori. In this essay a broader view of confirmation is adopted. It is shown that evidence itself must be interpreted on empirical grounds by bridge-hypotheses. I thus develop an interpreted inductive scheme which solves both the paradoxes of confirmation and the problem of induction.
Since distinct interpreted partitions corresponding to the same evidence can be related by means of a unique testable bridge-hypothesis, the confirmatory relations can be univocally determined by assessing the latter. Only the partitions corresponding to adequate hypotheses stabilize into a nomic chain which reflects the admissible bridge-hypotheses. A duality thesis is deduced, according to which any alteration in the relations of inductive support produced as a consequence of changes in some partition of the inductive basis can be neutralized by restating the inductive basis in terms of a corresponding dual partition. The two paradoxes of confirmation are thereby rendered dual, solvable problems. The interpreted inductive scheme also avoids Norton’s “no-go” results, because inductive strengths depend not only on the deductive relations within the algebra of propositions but also on its semantic relations.
I invoke the formal methods of lattice theory and algebraic logic to reformulate the problem of induction. The application of the interpreted inductive scheme to the data of experience yields a system of hypotheses with a lattice structure, whose ordering is given by a relation of material inductive reinforcement. In this framework the problem of induction consists in determining whether there is a unique stable inductively inferable system of hypotheses describing the totality of experience. The existence of such a system follows from an application of the Knaster–Tarski fixpoint theorem. Proving the existence of this fixed point amounts to a formal means-ends solution to the problem of induction.
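For reference, the fixed-point theorem invoked here, in its standard formulation (stated in my notation, not the author's):

\[
  \text{If } (L, \leq) \text{ is a complete lattice and } f : L \to L \text{ is monotone, then } f \text{ has a least fixed point, namely}\quad
  \operatorname{lfp}(f) = \bigwedge \{\, x \in L : f(x) \leq x \,\}.
\]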
In this approach induction is locally justified, i.e. based on matters of fact, and globally justified, i.e. in the sense of topological closure. This avoids the regress of inductive justifications that threatens material theories of induction. Finally, it is proved that interpretatively supplemented inductive schemes are globally convergent; that is, they converge to an asymptotically stable solution without a priori knowledge of preferred partitions.
Bibliography
1. Achinstein, Peter, 2001, The Book of Evidence, Oxford: Oxford University Press.
2. Achinstein, Peter, 2010, “The War on Induction: Whewell Takes on Newton and Mill (Norton Takes on Everyone)”, Philosophy of Science, 77(5): 728–739.
3. Cleve, James van, 1984, “Reliability, Justification, and the Problem of Induction”, Midwest Studies In Philosophy: 555–567.
4. Davey B.A. 2002, [2012], Introduction to Lattices and Order, Cambridge: Cambridge University Press.
5. Hempel, Carl G., 1945, “Studies in the Logic of Confirmation I”, Mind, 54: 1–26.
6. Hempel, Carl G., 1943, “A Purely Syntactical Definition of Confirmation”, Journal of Symbolic Logic, 8: 122–143
7. Huemer, Michael, 2009, “Explanationist Aid for the Theory of Inductive Logic”, The British Journal for the Philosophy of Science, 60(2): 345–375.
8. Hume, David, 1739, A Treatise of Human Nature, Oxford: Oxford University Press.
9. Hume, D. 1748, An Enquiry Concerning Human Understanding, Oxford: Oxford University Press.
10. Kelly, Kevin T., 1996, The Logic of Reliable Inquiry, Oxford: Oxford University Press.
11. Garcia R. J. L., 2019, A Naturalized Globally Convergent Solution to Goodman Paradox. Notes for the GWP conference, Cologne.
12. Goodman, N., 1955, Fact, Fiction and Forecast, Cambridge, MA: Harvard University Press
13. Norton, John D., 2003, “A Material Theory of Induction”, Philosophy of Science, 70(4): 647–670.
14. Norton, J., 2018, “A Demonstration of the Incompleteness of Calculi of Inductive Inference”, British Journal for the Philosophy of Science, (0): 1–26.
15. Reichenbach, H., 1940, “On the Justification of Induction”, Journal of Philosophy, 37: 97–103.
16. Reichenbach, H. 1966, The Foundations of Scientific Inference, Pittsburgh: University of Pittsburgh Press.
17. Salmon, W. C. 1966, The Foundations of Scientific Inference, Pittsburgh: University of Pittsburgh Press
18. Schulte, Oliver, 1999, “Means-Ends Epistemology”, British Journal for the Philosophy of Science, 50(1): 1–31.
19. Shafer, G., 1976, A mathematical theory of evidence. Princeton: Princeton Univ. Press.
ABSTRACT. A meta-level prediction method (as opposed to an object-level prediction method) is one that bases its predictions on the predictions of other prediction methods. We call a (meta-level) prediction method “access-optimal” in a given environment just in case its long-run predictive success rate is at least as great as the success rate of the most successful method (or cue) to which it has access in that environment (where access consists in knowledge of the present predictions and the past predictive accuracy of a respective method). We call a prediction method “universally access-optimal” just in case it is access-optimal in all possible environments. Universal access-optimality is obviously a very desirable feature. However, universal access-optimality is also rare, and we show: (1) there are no ‘one-reason’ prediction methods (i.e., methods that base each prediction on the prediction of a single object-level method or cue) that are universally access-optimal, and (2) none of a wide range of well-known weighting methods is universally access-optimal, including success weighting, linear regression, logistic regression, and typical Bayesian methods.
As shown in previous work, there is a prediction method known as Attractivity Weighting (AW) that is universally access-optimal, assuming accuracy is measured using a convex loss function (Cesa-Bianchi & Lugosi, 2006; Schurz, 2008; Schurz & Thorn, 2016). Although AW is universally access-optimal, there are other meta-level prediction methods that are capable of outperforming AW in some environments. In order to address this limitation of AW, we introduce two refined variants of AW, which differ from AW in having access to other meta-level methods. We present results showing that these refined variants of AW are universally access-optimal. Despite guarantees regarding long-run performance, the short-run performance of the variants of AW is a theoretically open question. To address this question, we present the results of two simulation studies that evaluate the performance of various prediction methods in making predictions about objects and events drawn from real-world data sets. The first study involves predicting the results of actual sports matches. The second study uses twenty data sets that were compiled by Czerlinski, Gigerenzer, and Goldstein (1999), and involves, for example, the prediction of city population, the attractiveness of persons, and atmospheric conditions. In both simulation studies, the performance of the refined variants of AW closely matches the performance of whatever meta-level method is the best performer at a given time, from the short run to the long run.
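The following sketch illustrates the basic mechanism of attractivity weighting as I read Schurz (2008); the particular loss function, the handling of AW's own record, and the default prediction are simplifying assumptions on my part, not details taken from the abstract.

from typing import Dict, List

def success_rate(preds: List[float], outcomes: List[float]) -> float:
    # Success measured here by 1 - |prediction - outcome| (an assumption).
    if not outcomes:
        return 0.0
    return sum(1 - abs(p - o) for p, o in zip(preds, outcomes)) / len(outcomes)

def attractivity_weighted_prediction(pred_history: Dict[str, List[float]],
                                     aw_history: List[float],
                                     outcomes: List[float],
                                     current_preds: Dict[str, float],
                                     default: float = 0.5) -> float:
    """Sketch of Attractivity Weighting (AW) for event predictions in [0, 1]."""
    aw_success = success_rate(aw_history, outcomes)
    # Attractivity of a method: its success surplus over AW itself, floored at zero.
    attractivity = {m: max(success_rate(h, outcomes) - aw_success, 0.0)
                    for m, h in pred_history.items()}
    total = sum(attractivity.values())
    if total == 0.0:
        return default  # no method currently outperforms AW; fall back on a default
    return sum(attractivity[m] * current_preds[m] for m in pred_history) / total

# Toy usage: two object-level methods, three past rounds of binary outcomes.
history = {"expert_1": [1.0, 1.0, 0.0], "expert_2": [0.0, 1.0, 1.0]}
print(attractivity_weighted_prediction(history,
                                        aw_history=[0.5, 0.5, 0.5],
                                        outcomes=[1.0, 1.0, 0.0],
                                        current_preds={"expert_1": 1.0, "expert_2": 0.0}))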
References
Cesa-Bianchi, N., & Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge: Cambridge University Press.
Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In G. Gigerenzer, P. M. Todd, & the ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 97–118). Oxford: Oxford University Press.
Schurz, G. (2008). The meta-inductivist’s winning strategy in the prediction game: a new approach to Hume’s problem. Philosophy of Science, 75, 278–305.
Schurz, G., & Thorn, P. D. (2016). The Revenge of Ecological Rationality: Strategy-Selection by Meta-Induction Within Changing Environments, Minds and Machines, 26(1), 31–59.
ABSTRACT. One perennial research question in the philosophy of mathematics is the one about the existence of mathematical entities. If they exist, they are abstract entities; but abstract entities are philosophically problematic because, among other reasons, of the difficulties involved in providing an account of how it is possible for us to know them. As a result, some authors contend that what is relevant to explain the role of mathematics in our intellectual lives is not whether mathematical entities exist, but whether mathematical statements are objectively true.
Hellman (1989) is one of them. Following Putnam (1967), he proposed modal paraphrases of arithmetic and set theory (among other mathematical theories) intended to guarantee the truth of mathematical statements without committing to the existence of mathematical entities in this world, and to provide an adequate epistemology. Linnebo (2018) contends that mathematics is true and should be read at face value. Nevertheless, he claims that we do not need to give up on mathematical entities to provide an appropriate epistemology; all we need is a thin account of mathematical entities, an account such that “their existence does not make a substantial demand on the world” (Idem, xi). To prove his point, he reconstructs arithmetic in abstractionist terms and set theory in abstractionist and modal terms.
Both reconstructions of set theory are modal, though Hellman chooses Second Order S5 while Linnebo goes for Plural S4.2. These choices are motivated by the different ways in which they conceive of sets. Hellman understands them as positions in possible structures and defines them as the reification of the result of a selection process; Linnebo sees sets as the result of abstraction and introduces them by means of a predicative, plural, and modal version of Frege’s Law V. This allows him to accept non-predicative versions of the Comprehension Axiom for “∈”, while Hellman says that whether they are compatible with his definition is an open question. Nevertheless, both avoid commitment to infinite quantities while asserting that their respective proposals manage to reconstruct the most abstract levels of the set hierarchy.
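For orientation, Hellman's modal-structural paraphrase has roughly the following shape (a schematic rendering on my part, not a quotation from Hellman 1989): an arithmetical sentence S is read as saying that S would hold in any ω-sequence there might be, together with the claim that an ω-sequence is possible.

\[
  S \;\rightsquigarrow\; \Box\, \forall X\, \forall f \,\bigl(\langle X, f\rangle \text{ is an } \omega\text{-sequence} \rightarrow S^{\langle X, f\rangle}\bigr) \;\wedge\; \Diamond\, \exists X\, \exists f\, \bigl(\langle X, f\rangle \text{ is an } \omega\text{-sequence}\bigr),
\]
where \(S^{\langle X, f\rangle}\) is \(S\) with its quantifiers relativized to \(X\) and successor interpreted by \(f\).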
As is well known, Gödel established that it is impossible to paraphrase arithmetic in trivial terms; hence any reformulation of arithmetic (or of any mathematical theory that includes it) is going to be controversial in one respect or another. It can be controversial (Rayo 2015) because of the linguistic resources it uses, because of the metaphysical assumptions that underlie it, or because of the subtracting strategy proposed.
Our purpose is to analyze in detail the two reconstructions of set theory provided, list the logical tools used by each of them, and see how their choices relate to the philosophical constraints each of them advocates.
HELLMAN, G. (1989) Mathematics without Numbers. Towards a Modal-Structural Interpretation. Oxford: Clarendon Press.
LINNEBO, Ø. (2018) Thin Objects. An Abstractionist Account. Oxford: OUP.
PUTNAM, H. (1967) “Mathematics without Foundations.” Journal of Philosophy, LXIV(1): 5–22.
RAYO A. (2015). “Nominalism, Trivialism and Logicism.” Philosophia Mathematica, 23, 65–86, https://doi.org/10.1093/philmat/nku013.
Extensionalist explanation and solution of Russell’s Paradox
ABSTRACT. In this paper, I propose a way out of the aporetic conclusion of the debate about the so-called explanation of Russell’s paradox. In this debate there are traditionally two main and incompatible positions: the "Cantorian" explanation and the Predicativist one. I briefly rehearse the reasons why both of these can be set aside and propose a third, "Extensionalist", one.
The Extensionalist explanation identifies the key of Russell’s Paradox in a proposition about extensions: ∀F ∃x(x = ext(F)), which allows one to derive, from the existence of Russell’s concept, the existence of Russell’s extension. This proposition is a theorem of classical logic whose derivation presupposes the classical treatment of identity (Law of Identity) and quantification (Law of Specification and Law of Generalisation). So we can explain Russell’s paradox by the (inappropriate) classical correlation between concepts and extensions: the flaw of this correlation does not consist (as in the Cantorian explanation) in the injective character of the correlation but (as in the Predicativist explanation) in its domain, namely in the implicit assumption that it is defined on the whole second order domain. However, this does not mean that, to restore consistency, the whole second order domain has to be restricted (as in the Predicativist solution), but only the domain of the extensionality function.
The solution related to the Extensionalist explanation consists in a reformulation of Frege’s theory which allows the derivation of Peano Arithmetic as a logical theory of extensions. The new language L comprises two sorts of first order quantifiers (generalised Π, Σ, and restricted ∀, ∃), governed respectively by classical and by negative free logic. From a syntactic point of view, the proposed system consists of the axioms of classical propositional logic, some specific axioms of predicative negative free logic, an impredicative comprehension axiom schema and, as the only non-logical axiom, a version of Basic Law V (with a generalised universal first order quantification) restricted to the existent abstracts: ∀F ∀G(ext(F) = ext(G) ↔ ∃x(x = ext(F)) ∧ Πx(Fx ↔ Gx)). This version of Basic Law V characterises the behaviour of the correlation denoted by "ext" as functional and injective only for a subset of the second order domain, which excludes Russell’s concepts. The system therefore does not allow the derivation of Russell’s Paradox.
From a semantic point of view, the interpretation of the theory is provided by a model M = ⟨D, D0, I⟩, in which D is the domain of restricted quantification (such that D ⊆ D0), D0 is the domain of generalised quantification, and I is a total interpretation function on D0. The symbol "ext" is interpreted as a partial injective function from a subset of the power set of D0 into D.
References:
[1] Antonelli A. and May R. (2005). Frege’s Other Program, Notre Dame Journal of Formal Logic, Vol. 46, 1, 1-17.
[2] Boolos, G. (1993). Whence the Contradiction?, Aristotelian Society, Supplementary Volume 67, 213–233.
[3] Cocchiarella, N. B. (1992). Cantor’s power-set Theorem versus Frege’s double correlation Thesis, History and Philosophy of logic, 13, 179-201.
[4] Uzquiano, G. (forthcoming). Impredicativity and Paradox.
Anna Bellomo (University of Amsterdam, Netherlands)
Bolzano's real numbers: sets or sums?
ABSTRACT. Recently, two 19th century constructions of the real numbers have received attention: that of Frege (Snyder and Shapiro 201X) and that of Bolzano (Russ and Trlifajová 2016). While Frege’s construction is explicitly placed within Epple’s (2003) conceptual scaffolding of traditional (Frege, Hankel), formal (Hilbert, Thomae) and arithmetical (Cantor, Dedekind) constructions of the reals, it is an open question how to categorise Bolzano’s.
Interpreters agree that what Bolzano calls measurable numbers are in fact the reals. If we follow Bolzano literally, numbers, including the measurables, are sums, i.e. a certain kind of collection. This follows from the fact that, per GL and PU, numbers are quantities (GL §1) and quantities in turn are defined in terms of sums (PU §6). Hence, the definition of measurable numbers ultimately relies on the concept of sum. In the existing literature (van Rootselaar 1964, Rusnock 2000, Russ and Trlifajová 2016), however, Bolzano's measurable numbers are discussed through set-theoretic interpretations. The reasons for adopting such an interpretation are understandable, as set theory is the language of modern mathematics and set-theoretic interpretations can thus be mathematically expedient. The downside of such a choice, however, is that it can lead to an anachronistic understanding of the philosophy of science Bolzano endorses, thus hindering efforts to place Bolzano’s measurables within a framework like Epple’s.
In this talk I examine some of Bolzano’s mathematical proofs and assess them in the context of Bolzano’s general philosophy of science while resisting the use of set-theoretic resources. The aim of this analysis is to put into starker relief the differences between Bolzano’s theory of collections — sums in particular — and modern set theory, while highlighting the importance of Bolzano’s conception of science for his philosophy of mathematics. By tracking the concepts Bolzano deploys in his work on the measurable numbers, we can better assess his contribution in relation to those of later mathematicians such as Frege, Dedekind and Cantor.
References
Bolzano, Bernard (1985-2000 [1837]) Wissenschaftslehre, Bernard Bolzano’s Gesamtausgabe I. 11/1- 13.3. Stuttgart-Bad Cannstatt: Frohmann-Holzboog.
Bolzano, Bernard (1975) Grössenlehre, Bernard Bolzano’s Gesamtausgabe IIA. 7A Stuttgart-Bad Cannstatt: Frohmann-Holzboog.
Bolzano, Bernard (1976) Grössenlehre II, Bernard Bolzano’s Gesamtausgabe IIA. 8 Stuttgart-Bad Cannstatt: Frohmann-Holzboog.
Bolzano, Bernard (1851) Paradoxien des Unendlichen, ed. by Franz Přihonský. Leipsic: C. H. Reclam sen.
Epple, M. (2003) ‘The End of the Science of Quantity: Foundations of Analysis, 1860-1910’. In: A History of Analysis, ed.: H. N. Jahnke, 291-323. The American Mathematical Society.
Rootselaar, Bob van (1964) ‘Bolzano’s Theory of Real Numbers’. Archive for the History of the Exact Sciences, no. 2: 168-180.
Rusnock, Paul (2000) Bolzano’s Philosophy and the Emergence of Modern Mathematics.
Russ, Steve and Trlifajová, Kateřina (2016). ‘Bolzano’s Measurable Numbers: Are They Real?’ In Research in History and Philosophy of Mathematics. Basel: Birkhäuser.
Snyder, E. and Shapiro, S. (201X) ‘Frege on the Real Numbers’, manuscript available at https://sites.google.com/site/esnyderhomepage/contact/freges-theory-of-the-reals
15:45
Peter Simons (Department of Philosophy, Trinity College Dublin, Ireland)
On the several kinds of number in Bolzano
ABSTRACT. Modern philosophy of arithmetic almost invariably begins with a discussion of Gottlob Frege and his failed attempt to give a logicist derivation of the laws of arithmetic. A much less well known but similarly revolutionary treatment of the natural numbers and their close cousins was given half a century earlier by Bernard Bolzano. Although Bolzano shared many platonist assumptions with Frege, his treatment of numbers is markedly more differentiated than Frege’s. It is therefore instructive to pull together Bolzano’s somewhat scattered remarks (found principally in [1] § 85 ff., [2], [3] and [4]), investigate the variety of objects he was prepared to call “number”, and see to what extent we can learn from this variety even today. We confine attention to what Bolzano himself called “the whole numbers”. Those quantities which call for negative, rational and irrational numbers form in Bolzano a topic too vast for brief discussion. By contrast with his account of continuous magnitudes, his theory of numbers stands up remarkably well.
Bolzano’s treatment of the numbers presupposes his theory of collections (Inbegriffe), itself a rich and extensive topic in his ontology. The four most important kinds of collection for Bolzano’s theory of number are multitudes (Mengen), sequences (Reihen), sums (Summen) and pluralities (Vielheiten), whose informal definitions we give. On this basis, Bolzano distinguishes concrete and abstract units of a given kind A, concrete and abstract pluralities of As, named and unnamed pluralities, and finally, the abstract natural numbers themselves. All of these distinctions make perfect sense and have straightforward application. The main flaw in Bolzano’s treatment is his account of sequences, which, because he does not admit repetitions (unlike the modern concept of a sequence), undercuts his theory of counting. Sequences are usually now defined as functions from the natural numbers or an initial segment thereof, but that presupposes numbers. It would be preferable to avoid such dependence. We show how Bolzano’s theory can be modestly extended and given a better basis by admitting collections of collections. The chief remaining gaps are then a proper understanding of the ancestral of a relation, and a recognition of collections of different infinite cardinalities: both of these innovations followed a good fifty years later.
Literature
[1] Bolzano, B. Wissenschaftslehre. Kritische Neuausgabe der §§ 46–90, ed. J. Berg. Bernard Bolzano Gesamtausgabe I/11/2. Stuttgart-Bad Cannstatt: Frommann–Holzboog, 1987. Translation in: Theory of Science, tr. P. Rusnock and R. George. Vol. 1. Oxford: Oxford University Press, 2014.
[2] Bolzano, B. Einleitung zur Größenlehre: Erste Begriffe der allgemeinen Größenlehre, ed. J. Berg. Bernard Bolzano Gesamtausgabe IIA/7. Stuttgart-Bad Cannstatt: Frommann–Holzboog, 1975.
[3] Bolzano, B. Reine Zahlenlehre, ed. J. Berg. Bernard Bolzano Gesamtausgabe IIA/8. Stuttgart-Bad Cannstatt: Frommann–Holzboog, 1976.
[4] Bolzano, B. Paradoxien des Unendlichen, ed. C. Tapp. Hamburg: Meiner, 2012.
Lieven Decock (Department of Philosophy, Vrije Universiteit Amsterdam, Netherlands)
Varieties of conceptual change: the evolution of color concepts
ABSTRACT. In recent years, several philosophers have explored the prospect of conceptual engineering, i.e., the deliberate modification of our concepts for specific cognitive, scientific, societal, or ethical purposes. The project is not uncontroversial and various objections can be raised. First, it may be argued that concepts cannot be changed, either because the very concept CONCEPT is defective, or because concepts are immutable. Second, it can be argued that conceptual engineering requires evaluative standards, which may not exist. Third, it can be argued that “engineering” of concepts is not possible, because intentional planning and controlled implementation of the changes are not feasible.
I will argue that the first and second objections are less compelling, but that the third objection does impose restrictions on the possibility of conceptual engineering. I will argue that continuous but slow conceptual change is ubiquitous, that conceptual change often results in an optimization of our conceptual structure, and that conceptual change often results from intentional changes, but that in very few cases is there genuine planning and controlled implementation. Conceptual change can best be viewed as an evolutionary process, in which there can be random change (drift), unplanned optimization (adaptation), intentionally designed but uncontrolled amelioration (reform), and intentionally designed, fully controlled optimization (engineering). This picture of conceptual change reduces the scope of conceptual engineering considerably; conceptual engineering appears as a limit case of conceptual change in which there is optimization based on deliberate intentions and under full control.
I will focus on color concepts to illustrate the notions of drift, adaptation, reform, and engineering. I will describe the evolution of color concepts over time and highlight the varying degrees of amelioration, intentional planning, and control. Subsequently I analyze the evolution of color concepts from the perspective of Cappelen’s Austerity Framework for conceptual engineering. I conclude with a brief discussion of the relevance of the color case for theorizing about conceptual change in general and for the prospects of conceptual engineering.
References:
Cappelen, H. 2018. Fixing Language, Oxford: Oxford University Press.
Douven, I., Wenmackers, S., Jraissati, Y., and Decock, L. 2017. “Measuring Graded Membership: The Case of Color.” Cognitive Science 41:686-722.
Jäger, G. & van Rooij, R. 2007. “Language Structure: Psychological and Social Constraints” Synthese 159:99-130.
Jraissati, Y., Wakui, E., Douven, I., & Decock, L. 2012. “Constraints on Color Category Formation” International Studies in the Philosophy of Science 26:171-196.
Jraissati, Y. & Douven, I. 2017. “Does Optimal Partitioning of Color Space Account for Universal Color Categorization?” PLoS ONE 12: e0178083. https://doi.org/10.1371/journal.pone.0178083
Regier, T., Kay, P., & Khetarpal, N. 2007. “Color Naming Reflects Optimal Partitions of Color Space”, PNAS 104:1436-1441.
Saunders, B. & van Brakel, J. (1997). “Are There Nontrivial Constraints on Colour Categorization?” Behavioral and Brain Sciences 20:167-179.
Saunders, B. & van Brakel, J. (2002). “The Trajectory of Color.” Perspectives on Science 10:302-355.
Steels, L. & Belpaeme, T. 2005. “Coordinating Perceptually Grounded Categories Through Language: A Case Study for Colour,” Behavioral and Brain Sciences 28:469-529.
The Semantic Account of Slurs, Appropriation, and Metalinguistic Negotiations
ABSTRACT. What do slurs mean? The semantic account has it that slurs express complex, negative properties such as ‘ought to be discriminated because of having negative properties, all because of being [neutral counterpart]’. In this paper I discuss whether the semantic account can explain the phenomenon of appropriation of slurs, and I argue that the best strategy for the semantic account is to appeal to a change of literal meaning. In my view, we could appeal to the phenomenon of meta-linguistic negotiation, in order to make sense of conversations where the two parties employ the same slur with different meanings.
"Cultured people who have not a technical mathematical training": audience, style, and mathematics in The Monist (1890–1917)
ABSTRACT. The Monist appeared in 1890, founded by the German-American zinc magnate Edward Hegeler. This quarterly magazine "Devoted to the Philosophy of Science" was edited by Hegeler's son-in-law, Paul Carus (who was later joined by his wife Mary Carus), who also edited Open Court Publishing. The magazine was a family affair, nevertheless celebrated within the emerging landscape of American periodicals for "sound scholarship" and "fruitful suggestiveness."
From the first volume, The Monist regularly included mathematical content, including popularizations, original research, translations of French and German works, and book reviews of recent textbooks. Most of the regular contributors of this content might be considered enthusiastic amateurs; few were professional mathematicians. Yet such turn-of-the-century names as Poincaré, Hilbert, and Veblen also occasionally wrote for The Monist.
The audience for the mathematics was understood to be "cultured people who have not a technical mathematical training" but nevertheless "have a mathematical penchant." With these constraints, a uniform and inviting style emerged among the varied contributions, described in contrast to the "very repellent form" of elementary textbooks.
This talk will begin by outlining the main features of the style of mathematics that appeared in The Monist between 1890 and 1917. In particular, articles suggested an active conversation between author and reader, encouraging the latter to work through proposed problems or thought experiments with the assistance of diagrams, metaphors, and narratives. A look at readership, evolving content, and competing publications will also help to evaluate to what extent this style succeeded, or was perceived to be successful, in reaching the intended audience.
The focus on style naturally leads to a consideration of what kinds of mathematical content were susceptible to this style and (more tentatively) how this style within a philosophy of science publication may have shaped contemporary developments in the philosophy of mathematics.
ABSTRACT. The idea that logic is a mechanical enterprise is so widespread that it is expressed regularly in introductory textbooks. For example, van Dalen writes that "It may help the reader to think of himself as a computer with great mechanical capabilities, but with no creative insight [...]" (2008, 2–3). Quine, after presenting a "silly but mechanical routine for finding any proof", concludes: "So our heavy dependence on luck and strategy, in our own deductive work, is merely a price paid for speed: for anticipating, by minutes or weeks, what a mechanical routine would eventually yield" (Quine 1972, 190). These views are based on the theoretical notions of decision procedures, formal systems, models of computation, and the Church-Turing thesis.
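To fix ideas, the paradigm of such a mechanical routine is the truth-table decision procedure for propositional logic; here is a minimal illustrative sketch (my own example, not drawn from the texts cited).

from itertools import product

def is_tautology(formula, num_vars):
    """Purely mechanical decision procedure: check the formula on every row of its truth table."""
    return all(formula(*row) for row in product([True, False], repeat=num_vars))

# Example: Peirce's law ((p -> q) -> p) -> p, a classical tautology.
impl = lambda a, b: (not a) or b
peirce = lambda p, q: impl(impl(impl(p, q), p), p)
print(is_tautology(peirce, 2))  # True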
It is curious to note, however, that similar views were also put forward by logicians in the 19th century, who developed very diverse logical systems based on algebra (Boole), combinatorics (Jevons), and diagrams (Venn), when none of these theoretical notions were at hand. An early, and influential, proponent of the mechanical style in mathematics was John Stuart Mill, who wrote in his System of Logic:
The complete or extreme case of the mechanical use of language, is when it is used without any consciousness of a meaning, and with only the consciousness of using certain visible or audible marks in conformity to technical rules previously laid down. This extreme case is, so far as I am aware, nowhere realized except in the figures of arithmetic, and the symbols of algebra [...] Its perfection consists in the completeness of its adaptation to a purely mechanical use. (Mill 1843, 292–293, Book VI, Ch. VI, §6; emphasis by me)
Boole referred approvingly to Mill in his The Mathematical Analysis of Logic (1847, 2) and he also speaks of his own laws of logic as "the very laws and constitution of the human intellect" that underlie the "entire mechanism of reasoning" (Boole 1847, 6). Some 20 years later, in 1865, Jevons began the design of a reasoning machine that "will be played like a piano, and give the results of what you read to it without any trouble of thinking further" (Jevons 1886, 213). After two unsuccessful attempts Jevons completed this machine in 1869 (Jevons 1870). An engraving of "the Logical Machine" is shown on the frontispiece of Jevons (1874), where its use is also described. Venn, too, who promoted the use of diagrams in logical reasoning, speaks of the "mechanical representation" of propositions and notes that "It will be easily seen that such methods as those here described readily lend themselves to mechanical performance" (1880, 15).
In this talk, views on the mechanical character of logic will be presented and discussed, with the aims of elucidating the sense in which logic was considered to be mechanical and what epistemological role the mechanical aspect was supposed to play.
References
Boole, George. 1847. The Mathematical Analysis of Logic. Cambridge: Macmillan, Barclay, and Macmillan.
Jevons, Harriet A., ed. 1886. Letters and Journal of W. Stanley Jevons. London: Macmillan and Co.
Jevons, W. Stanley. 1870. “On the mechanical performance of logical inference.” Philosophical Transactions of the Royal Society of London (160), pp. 497–518. Reprinted in (Jevons 1890, 137–172).
--- 1874. The Principles of Science. A treatise on logic and scientific method, vol. 1. Oxford: Macmillan.
--- 1890. Pure Logic and Other Minor Works. London and New York: Macmillan and Co. Edited by Robert Adamson and Harriet A. Jevons, with a preface by Professor Adamson.
Mill, John Stuart. 1843. A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation, vol. II. London: John W. Parker.
Quine, Willard van Orman. 1972. Methods of Logic. New York: Holt, Rinehart and Winston, 3rd edn.
van Dalen, Dirk. 2008. Logic and Structure. Berlin, Leipzig: Springer, 4th edn. 2nd corrected printing.
Venn, John. 1880. “On the diagrammatic and mechanical representation of propositions and reasonings.” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 9(59), pp. 1–18.
Decolonising Scientific Knowledge: Morality, Politics and a New Logic
ABSTRACT. I argue that among the issues in the production and dissemination of scientific knowledge which occupy philosophers of science nowadays is a neglected concern in the form of institutional coloniality that gatekeeps against epistemologies of the South. This has led to the bordering of scientific knowledge: a segregation that validates one side and invalidates the other. But should philosophers of science be at the border, standing on one side and drawing a line between cultures that have the right to regulate, and the power to determine, what counts as knowledge and those that do not? Coloniality of knowledge bases its logic on the pretension that the Western episteme is acontextual and universal and therefore superior to the rest that are not. Here I employ a decolonial strategy to confront the ethics that seeks to regulate, and the politics that seeks to determine, what counts as knowledge, in order to address the residualisation of knowledge formations from the global South. I transcend the approach in extant decolonial literature, which confines itself to the remediation of epistemic injustices, and offer a new logic that might be able to disborder the philosophy of science.
Javier Suárez (University of Barcelona - University of Exeter, UK)
Stability of traits as the kind of stability that matters among holobionts
ABSTRACT. Holobionts are biological assemblages composed of a host plus its symbiotic microbiome. Holobionts are pervasive in biology, and every macrobe (animal or plant) is believed to host a huge number of microorganisms, some of which are known to have a substantial influence on the realized phenotype of the host. Because of this, a considerable number of biologists have suggested that holobionts are units of selection, i.e. entities that exhibit inherited variance in fitness (Gilbert et al. 2012; Rosenberg & Zilber-Rosenberg 2014; Roughgarden et al. 2018).
Contrary to their claim, some authors have recently argued that holobionts cannot be considered units of selection. They argue that the existence of independent reproductive regimes among the entities that compose the holobiont (the host on the one hand, and the symbionts on the other) makes it impossible to talk about holobiont inheritance (Moran & Sloan 2015; Douglas & Werren 2016). In other words, even if there might be phenotypic variation among different holobionts in a population, this phenotypic variation cannot be intergenerationally transmitted from parent to offspring. Or, at least, it cannot be transmitted with the degree of fidelity that would be required for natural selection to act on the holobiont, instead of on the different entities that compose it. Therefore, according to them, the microbiome is, at most, an environmental factor that influences (sometimes substantially) the phenotype that the host expresses, but which cannot be selected as a unit with the host.
In this talk, building on some recent evidence about holobionts, I elaborate an extended notion of inheritance for symbiotic conglomerates, which I call "stability of traits" (SoT), that overcomes the difficulties posed in Moran & Sloan (2015) and Douglas & Werren (2016). I argue that SoT is adequate to capture certain phenomena of intergenerational preservation of variation among holobionts that could not be captured with the restricted notion of inheritance that the critics use, and thus SoT allows one to argue that holobionts are units of selection, at least in some cases.
My talk will be divided into three parts:
- First, I will argue that critics of holobiont inheritance use a very restricted notion of inheritance that equates inheritance with genome replication, a notion I refer to as "stability of species" (SoS). I argue that, since the genome that is replicated and intergenerationally transmitted does not exhaust the full range of factors that have an influence on the expressed phenotype, SoS is insufficient to capture the concept of “inheritance”.
- Second, I argue that the definition of inheritance should be widened to include those factors that are actively acquired and maintained for an organism to express its phenotype, that is, to guarantee SoT. Since these factors are net contributors to phenotypic variation in the population, they are the raw material among which natural selection can "select". I argue that the microbiome plays this role in many holobionts, and thus should be considered "inherited" material. An important aspect of SoT is that it only requires that the phenotypic variation be intergenerationally preserved in the population, rather than specifically between parents and offspring, as some conceptions of natural selection demand (Charbonneau 2014).
- Finally, I will argue that there are reasons to believe that some holobionts satisfy SoT, and thus that it is reasonable to consider them units of selection.
15:45
Joeri Witteveen (University of Copenhagen / Utrecht University, Denmark)
"Taxonomic freedom" and referential practice in biological taxonomy
ABSTRACT. Taxonomic names serve vital epistemic and communicative roles as identifiers for taxonomic groups. Yet most taxonomic names are poor vehicles for communicating taxonomic content – names don’t wear their meaning on their sleeves. For names to have meaning, they must be associated with fallible and subjective judgments about the proper circumscription of the taxonomic groups they refer to. So while the practices of naming and taxonomizing are distinct in principle, they closely depend on each other. With a twist on Kant’s famous dictum, one could say that “nomenclature without taxonomy is empty; taxonomy without nomenclature is blind.”
This clear distinction – yet intimate connection – between nomenclature and taxonomy is enshrined in the major codes of taxonomic nomenclature as the principle of taxonomic freedom: the codes only govern the application of names; they do not lay down principles for taxonomic practice. What counts as a ‘good’ taxonomic judgment is not codified but is left to taxonomic science and the individual taxonomist. In recent years, the principle of taxonomic freedom has come under attack from various sides in scientific journals, on taxonomic mailing lists, and in the popular press. It has been argued that in contemporary taxonomic practice, taxonomic freedom all too often results in a taxonomic free-for-all. Names based on poor-quality taxonomic hypotheses make their way into databases and end up misleading taxonomists and non-taxonomic end users, including policy makers working on conservation efforts. This raises the question whether taxonomic freedom should remain a central principle, or whether it should be restricted to allow for additional quality-assurance mechanisms on taxonomic judgment. Or would this be fundamentally unscientific and do more harm than good to taxonomy? In this talk, I will provide a closer philosophical look at recent arguments for and against the principle of taxonomic freedom.
Is semantic structuralism necessarily "set-theoretical" structuralism? A case of ontic structural realism.
ABSTRACT. Standard approaches to scientific structuralism combine issues relevant to debates in metaphysics, epistemology, philosophy of science, and scientific methodology. Nowadays, one of the most frequently discussed and developed is ontic structural realism, whose main thesis states that only structure exists (French & Ladyman 2003). The emergence and development of various forms of structuralism eventually gave rise to a discussion of the various methods of representing the relevant structures that we should be realists about: from typically syntactic ones, such as Ramsey sentences, to set- (or group-) theoretical and categorical semantic interpretations. The most popular version of the semantic approach to structural realism, which attempts to define the concept of a "shared structure" between models in terms of relevant functions determined between the analyzed structures, is undoubtedly one that assumes a structure of sets as its formal framework (Suppes 1967, French 2000). The main advantage of such an approach is certainly that it was thought to provide a unified formal framework for addressing such issues as the structure of scientific theories, the problem of the applicability of mathematics in the natural sciences, and the philosophical account of the Structural Realist’s commitment to the structure shared by successive theories. However, as critics soon made clear, even if such a framework can be quite useful in, for example, analyzing and comparing the structures of scientific theories, it is also very problematic when it comes to extracting the Structural Realist’s ontological commitments, since we inevitably run into theoretical trouble when trying to talk about a set-structure made of relations without objects.
In recent years, voices have been raised suggesting that the set-theoretical approach, with all its advantages and drawbacks, does not have to be the "only right way". Elaine Landry (2007) has joined the discussion, noting that, from a mathematical point of view, there is no rational, non-dogmatic reason to take for granted the claim that set theory is fundamental. What is more, in cooperation with Jean-Pierre Marquis (2005), she formulates an alternative vision of scientific structuralism whose formal framework is provided not by set theory, highly popular in this context, but by category theory. Reflections on alternative methods of representing the relevant structures within scientific structuralism eventually lead Landry to conclude that the proposed categorical framework, even if historically and genealogically rooted in set theory, does not depend in any way on prior support from, or embedding in, set theory in order to achieve an adequate level of accuracy in the analysis of the notions of "model" or "shared structure".
The aim of this talk is a comparative analysis of the philosophical foundations of the different structural frameworks discussed in the context of the ontic version of structural realism, with particular emphasis on set theory and category theory. These considerations will then be confronted with the hypothesis that we should make use of some kind of trade-off between the comparative and the relevant powers of different representational methods, proposing a pluralistic (as opposed to unificatory) view of the role of structural representation within OSR.
French, S. (2000). The reasonable effectiveness of mathematics: Partial structures and the application of group theory to physics. Synthese, 125, 103-120.
French, S. & Ladyman, J. (2003). Remodelling structural realism: Quantum physics and the metaphysics of structure. Synthese, 136(1), 31–56.
Landry E., & Marquis, J-P. (2005). Categories in context: Historical, foundational and philosophical. Philosophia Mathematica, 13(1), 1-43.
Landry, E. (2007). Shared structure need not be shared set-structure. Synthese, 158, 1–17.
Suppes, P. (1967). Set theoretical structures in science. Mimeograph, Stanford: Stanford University.
15:45
Michel Ghins (Universite catholique de Louvain, Belgium)
Scientific realism and the reality of properties
ABSTRACT. Scientific realism is a claim about the reality of properties (Anjan Chakravartty). According to epistemological scientific realism, we have more reason to believe in the reality of some unobservable properties than to doubt their existence. Moreover, we have more reason than not to believe that some laws which connect properties such as volume, pressure and temperature (as in the ideal gas law) are (approximately) true. Coherent scientific realists should understand truth in a correspondence sense. What kind of reality makes scientific statements true; what is the nature of their truth-makers? Here we enter the territory of metaphysics.
I argue that what makes a statement true is a fact, namely the instantiation of a property in particular entities or, in short, particulars. Particulars are individual property bearers which exemplify some properties. Instantiation should be construed as primitive. Attempts to analyze this notion lead to insuperable difficulties such as the Bradley regress. According to ontological scientific realism, some instantiated unobservable properties exist and are involved in (approximately) true scientific laws.
Justified belief in some instantiated unobservable properties is grounded on an analogy with ordinary perceptual experience in agreement with moderate empiricist requirements. First, the reality of a property, such as solidity, is attested by its actual presence in perception, through various sensory modalities (seeing, touching, hearing…). Repeated perceptions of the same property must give invariant results. Second, we must have reasons to believe that there is a causal link (in the minimal Millian sense) between the instantiated property and its perception. Third, inference of the existence of something which is not actually observed, such as a mouse (cf. van Fraassen), can only be based on perceptions of properties, such as producing noises, losing hair etc. In fact, this is not an inference to the best explanation, but the empirical verification of the compresence of a set of instantiated properties by a “something”, a property bearer that we call a “mouse”.
Similarly, we are justified in believing in unobservable scientific properties if analogous requirements are satisfied. First, observations of the same property by means of various experimental methods must deliver concordant results. This is the requirement of invariance. Second, we must have reasons to believe that there is a causal relation between the observed property and the observation results. This is the requirement of causality. Third, the properties must be observable in principle, that is, they could be brought to our perception by means of adequate instruments, such as microscopes or telescopes. This is the requirement of observability in principle. To these three requirements, we must add a fourth, namely the requirement of measurability, in order to make possible the satisfaction of the first. Various methods of measurement of the same property should deliver concordant measurement results.
Summarizing, I argue that compliance with these four requirements gives us strong reasons to believe that the truth-makers of some scientific statements are properties whose instances in particulars are observable in principle.
Structural Modality as the Criterion for Naturalistic Involvement in Scientific Metaphysics
ABSTRACT. Berenstain & Ladyman (2012) argue that Ontic Structural Realism (OSR) attributes rich natural necessities to the world, whereas Epistemic Structural Realism (ESR), developed by Worrall (1989), and Constructive Empiricism do not. Although it is undoubtedly clear that the world is intrinsically de-modalized according to Constructive Empiricism, I think there is a place for natural necessities in ESR, since the negative argument about our epistemic access to the basic furniture of the world and the attribution of structural natural necessities to it can be held at the same time. The reason is an argument due to Stanford (Stanford et al. 2010). He has shown that the modality claims of OSR do not say anything more than an endorsement of the No Miracles Argument (NMA). Since ESR, just like OSR, endorses the NMA, modality can similarly be attributed to structures in ESR; hence modality cannot be introduced as a distinctive feature of OSR. After showing that introducing modality supports OSR and ESR equally, I will try to understand the significance of natural necessities in a plausible version of naturalized metaphysics that would be suitable for structuralism. I will argue, following Wilson (forthcoming), that modality is the key component of naturalistic involvement. Since natural necessities have the same strength in OSR and ESR, I will conclude that the naturalized metaphysics developed by Ladyman & Ross could easily be adapted for ESR. Although I endorse Wilson’s argument that modality should be taken as the criterion of naturalisticness, I reject his claim that his position provides the strongest version of scientific modality because it reconciles Lewisian modal realism with Everettian quantum mechanics. For post-positivist, conceptual-analysis-driven analytic metaphysics, mainly associated with Lewis, is the primary target of most scientific metaphysicians today; hence combining modal realism with current fundamental physics would not strike most naturalized metaphysicians as a successful reconciliation. I conclude by arguing that an account of structural modalities, including structural necessities in ESR, is the most convenient locus for a plausible version of naturalized metaphysics.
References
Berenstain, Nora and Ladyman, James. 2012. “Ontic Structural Realism and Modality”.
In E. Landry & D. Rickles (Eds.), Structural Realism: Structure, Object,
and Causality. New York: Springer.
Ladyman, James and Don Ross, with David Spurrett and John Collier. 2007.
Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press.
Stanford, Kyle, Paul Humphreys, Katherine Hawley, James Ladyman, and Don Ross. 2010. “Protecting Rainforest Realism.” Symposium review of Every Thing Must Go: Metaphysics Naturalized, by James Ladyman and Don Ross. Metascience 19: 161–85.
Wilson, Alastair. Forthcoming. Natural Contingency: Quantum Mechanics as Modal Realism.
Oxford: Oxford University Press.
Worrall, John. 1989. “Structural Realism: The Best of Both Worlds?” Dialectica, 43:99-124.
ABSTRACT. The argument put forth in this article shows that, despite the voices of Ross (2008) and Kincaid (2008), structural realism is incongruent with the realistically reconstructed project of econometrics (the empirical branch of economics).
In detail, an analysis of the actual research practices and developments of econometrics indicates that strains of empirical literature devoted to many topics experience ‘reversals’, so that the relation (partial correlation, in the case of linear models) between, say, X and Y is positive according to some studies while newer econometric models suggest a negative sign (Goldfarb 1995; Goldfarb 1997; Maziarz 2017; Maziarz 2018). During the presentation, I will (1) abstract an account of empirical reversals from an analysis of the empirical literature devoted to the ‘emerging contrary result’ (or emerging recalcitrant result, ERR) phenomenon, and (2) show that such changes in the econometric models of data violate the set-theoretic definition of structure. Therefore, the argument presented states that structural realism is not adequate to the empirical branch of economics. Either a realist position should be elaborated by incorporating a pragmatist dimension (Hoover 2012) or an antirealist position should be put forward.
Bibliography
Goldfarb, R. S. (1995). The economist-as-audience needs a methodology of plausible inference. Journal of Economic Methodology, 2(2), 201-222.
Goldfarb, R. S. (1997). Now you see it, now you don't: emerging contrary results in economics. Journal of Economic Methodology, 4(2), 221-244.
Hoover, K. D. (2012). Pragmatism, perspectival realism, and econometrics. Economics for Real: Uskali Mäki and the Place of Truth in Economics, 14, 223.
Kincaid, H. (2008). Structural realism and the social sciences. Philosophy of Science, 75(5), 720-731.
Maziarz, M. (2017). The Reinhart-Rogoff controversy as an instance of the ‘emerging contrary result’ phenomenon. Journal of Economic Methodology, 24(3), 213–225.
Maziarz, M. (2018). ‘Emerging Contrary Result’ Phenomenon and Scientific Realism. Panoeconomicus, 1–20.
Ross, D. (2008). Ontic structural realism and economics. Philosophy of Science, 75(5), 732-743.
ABSTRACT. The story of field theory is often told as a tale of triumph. The field concept (it seems) is one of the most successful in all of science. It has survived, if not driven, all major physical revolutions of the past century and is now a central part of our most thoroughly tested theories.
The aim of this talk is to put forward a different narrative: while fields have proven to be extremely successful effective devices, physics has time and time again run into an impasse by reifying them and buying into a dualistic ontology of particles and fields.
On closer examination, the concept of fields as mediators of particle interactions turns out to be philosophically unsatisfying and physically problematic, as it leads in particular to problematic self-interactions. Contrary to common claims, this issue was not solved but inherited by quantum field theory, where it appears in the form of the notorious ultraviolet divergence.
Against this background, I will argue that the true significance of fields is that of “book-keeping variables” (Feynman), summarizing the effects of relativistic interactions to formulate dynamics as initial value problems. The field concept, in other words, allows us to trade a diachronic, spatiotemporal description in terms of particle histories for a synchronic description in terms of an infinite number of field degrees of freedom that encode the particle histories in their spatial dependencies. And while this move is very successful (maybe even indispensable) for practical applications, it is ultimately at odds with the principles of relativity.
I will first spell out the case against fields in the context of Maxwell-Lorentz electrodynamics, the locus classicus of field theory, which is actually inconsistent as a theory of fields and point particles. In the classical regime, the Wheeler-Feynman theory provides a viable alternative, explaining electromagnetic phenomena in terms of direct particle interactions, without introducing fields as independent degrees of freedom.
Turning to more modern physics, I will point out that the ontology of quantum field theory is a largely open question and explain why attempts to construct QFT as a quantum theory of fields in 3-dimensional space have failed. I will end with some indications of how quantum field theory can be construed as a theory of point particles, recalling the concept of the Dirac Sea, which has recently received renewed attention in the philosophical literature. In the upshot, quantum field theories do not provide convincing arguments for a field ontology; for physical and philosophical reasons, point particles are still our best candidate for a fundamental ontology of physics.
15:45
Charles Sebens (California Institute of Technology, United States)
The Fundamentality of Fields
ABSTRACT. In the quantum field theory of quantum electrodynamics, photons and electrons are generally treated quite differently. For photons, we start with the classical electromagnetic field and then quantize it. For electrons, we start with classical particles, then move to a relativistic quantum description of such particles (using the Dirac equation) before taking the final step from relativistic quantum mechanics to quantum field theory. In brief, photons are treated as a field and electrons as particles. There are a number of reasons to be unhappy with this inconsistent approach. We can achieve consistency either by treating photons as particles or electrons as a field (the Dirac field). In this talk, I will evaluate both of these strategies and argue that the second strategy is preferable to the first. Thus, we should view the electromagnetic and Dirac fields as more fundamental than photons or electrons.
The first strategy for finding consistency requires one to develop a relativistic quantum mechanics for the photon. One can attempt to construct such a theory by reinterpreting the electromagnetic field (when it is sufficiently weak) as a quantum wave function and finding a Schrödinger equation in Maxwell’s equations. Around 1930, Majorana and Rumer suggested a way of doing this: take the complex vector field formed by summing the electric field with i times the magnetic field to be the wave function for the photon. However, this is not an acceptable wave function: its amplitude-squared is not a probability density. Good (1957) proposed a more complicated but better motivated way to form a photon wave function from the electric and magnetic fields. Unfortunately, his wave function is not suitably relativistic to be part of a relativistic quantum mechanics for the photon.
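To make the Majorana-Rumer proposal concrete (my gloss, using Gaussian units for the free field): writing F = E + iB, the free Maxwell equations take the Schrödinger-like form

\[
i\,\partial_t \mathbf{F} \;=\; c\,\nabla \times \mathbf{F}, \qquad \nabla \cdot \mathbf{F} = 0,
\]

but |F|² = E² + B² is proportional to the energy density of the field rather than to a probability density, which is precisely the objection noted above.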
The second strategy is to treat electrons as a field, not particles. This involves first interpreting the Dirac field as a classical field, not a quantum wave function, and then quantizing that field. Such an approach yields a classical picture in which the mass and charge of electrons are spread out, not concentrated at points. This spread-out charge distribution serves as a more sensible source for the electromagnetic field than troublesome point charges. The spread of mass and charge also helps us understand electron spin. Analyzing the distribution and flow of mass and charge in the Dirac field allows us to see the classical electron as truly spinning and provides a new explanation of the electron’s gyromagnetic ratio.
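A minimal statement of the quantities involved (these are the standard Dirac-field expressions; the notation is mine): for a classical Dirac field ψ, the charge and current densities read

\[
\rho = -e\,\psi^{\dagger}\psi, \qquad j^{k} = -e\,\psi^{\dagger}\alpha^{k}\psi,
\]

with e > 0 the elementary charge, so charge is smeared out wherever ψ is non-zero rather than concentrated at a point; the account of spin and the gyromagnetic ratio mentioned above proceeds by analyzing the flow of mass and charge encoded in such densities.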
Reference:
Good, Roland H., Jr. 1957. Particle Aspect of the Electromagnetic Field Equations. Physical Review 105(6): 1914–1919.
Realism and Representation in Model-Based Explanation
ABSTRACT. There has been growing interest in the study of scientific representation. One motivation for studying representation is that representational capacity is taken to be a requirement for successful explanation, one of the main aims of scientific inquiry. This is of special importance when accounting for the explanatory power of scientific models, amounting to what is sometimes called the ‘representationalist’ view, according to which a model has to represent its target phenomenon in a suitable manner in order to explain. Interestingly, this assumed connection between scientific explanation and representation is spelled out most explicitly when it is actively criticized by authors who point to the role of essential, false components in models employed for explanatory purposes, in order to motivate accounts of explanation that do not rely on successful representation. Closely related to this idea about representation, one often finds a general preference for realism in many of the most prominent accounts of scientific explanation, according to which the principles employed in our explanations should be approximately true. This is often, at least implicitly, formulated in terms of representation: a representational match serves realism. Hence, the realist presupposition can be criticized by reference to many of the same problem cases as the representationalist view of explanation.
In this contribution, I will argue that it pays to look at what specific accounts of representation can do for the explanation debate, and that the link between realist explanation and representationally successful explanation can be loosened in the process. Multiple accounts of representation have been proposed in recent years, and not all of them serve realist tendencies in model-based explanation equally. The view of representation that often seems to be presupposed by realist as well as by explicitly non-representationalist accounts of explanation is one of representation as an objective two-place relationship between model and target phenomenon, in terms of similarity or isomorphism. While this is in line with realist requirements on scientific models, it carries many of the same problems as well. Pragmatic accounts, which take into account the role of agents interpreting and drawing inferences from models, have emerged as an alternative. They are not conducive to realism about explanatory models in the same way, but they have so far found little to no application in accounts of model-based explanation. I argue that it is beneficial to connect them more closely. The result is a view of model-based explanation that is representational but not realist, and that avoids some of the drawbacks of the realist version.
Is the No-miracles argument an Inference to the Best Explanation?
ABSTRACT. As is well known, scientific realism about our best scientific theories is usually defended with the No-miracles argument (NMA). In its simplest form the NMA states that it would be a miracle if our best scientific theories, i.e., theories enjoying tremendous empirical success such as modern atomism and the theory of evolution, were false. The NMA is usually explicated as an inference to the best explanation (IBE): “Given a body of data find potential explanations for the data, compare them with regard to explanatory quality, and infer the approximate truth of the best explanation.” The assumption then is that our intuitive grasp of the notion of “explanatory quality” generally suffices to apply IBE in practice, but that it is also a task for philosophical investigation to explicate this notion and show its truth-conduciveness.
I aim to compare IBE with another explication of the NMA, namely an improved version of hypothetico-deductivism, which I call HD+: “If T is a reasonably simple theory, the data is excellent, and T together with auxiliaries covers the data, then T is approximately true.” The condition that T be reasonably simple is a standard way to ward off most cases of underdetermination of theory by all possible evidence. The condition that T cover the data D allows for probabilistic relations between T and D, not just deductive ones, stretching the meaning of “deductive” in “hypothetico-deductivism”, but so what. Covering of D by T will normally require assistance from suitable auxiliaries and test conditions. Finally, data counts as excellent if it exhibits good-making features such as diversity and accuracy to a high degree. For example, the theory of evolution is supported by many independent lines of data. The main aim of my talk is to show that HD+ is a better explication of the NMA than IBE.
The most interesting difference between the two principles is the following. IBE tells us to construct rival theories and compare them with regard to explanatory quality, whereas HD+ allows us to infer the truth of T without considering any rival theories of T. I claim that HD+ is right: if the data is excellent, we need not examine any rival theories of T to infer the truth of T. This claim conflicts with received wisdom about the confirmation of theories, so I have to provide arguments in its support.
My first argument appeals to scientific practice. When a theory is supported by excellent data, scientists usually don’t consider rival theories. A telling example is Perrin’s famous argumentation for the atomic hypothesis in his book Atoms (1916). Perrin marshals the relevant data, determines its relationship with atomism, and notes its good-making features, most importantly its diversity and accuracy. He famously states that the data comes from 13 entirely different phenomena, such as Brownian motion, radioactive decay, and the blueness of the sky. He does not engage in anything resembling IBE: he does not construct any rival explanations of the 13 different phenomena, compare them with respect to explanatory quality, and judge atomism to be the best explanation. The whole book is solely concerned with working out the 13 different applications of atomism. Perrin obviously thinks that this suffices to show that atomism is true. Thus, his reasoning accords nicely with HD+, but not with IBE. I examine another example from scientific practice, namely discussions of the empirical support for the theory of evolution in biology textbooks.
My second argument is general. It aims to show that under certain plausible assumptions excellent data automatically refutes all reasonably simple rival theories, and that we can know this without having to formulate and consider these rivals explicitly. I show how the second argument can be embedded and justified in a Bayesian framework.
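Purely as an illustration of how such a Bayesian embedding might go (this is a gloss, not the author's own formulation, and it assumes that T together with its reasonably simple rivals R_i exhausts the relevant hypothesis space): by Bayes' theorem,

\[
P(T \mid D) \;=\; \frac{P(D \mid T)\,P(T)}{P(D \mid T)\,P(T) + \sum_i P(D \mid R_i)\,P(R_i)},
\]

and the claim would then be that excellent data D makes each likelihood P(D | R_i) small even though no rival R_i has been explicitly formulated, so that P(T | D) comes out high.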
Rigidity and Necessity: The Case of Theoretical Identifications
ABSTRACT. Kripke holds the thesis that identity statements containing natural kind terms are, if true, necessary; he termed such statements theoretical identifications. Kripke alleges that the necessity of theoretical identifications is grounded in the rigidity of natural kind terms. Nevertheless, I will argue that the conception of natural kind terms as rigid designators, on one of its most natural readings, hinders the establishment of the truth of theoretical identifications and thus of their necessity.
According to Kripke, one of the similarities between natural kind terms and proper names is that both sorts of expressions appear in identity statements that, if true, are necessary. Kripke exemplifies theoretical identifications by the statement “Water is H2O”. Moreover, Kripke claims that this similarity follows from another one, namely that both sorts of expressions are rigid designators.
Kripke’s definition of rigid designation for proper names is the following: a designator is rigid if it designates the same object with respect to all possible worlds. Since in the third lecture of Naming and Necessity he extends the notion of a rigid designator to natural kind terms, I will extend that definition to natural kind terms. The most literal extension, and the only one I will take as a basis for my considerations, is the following: A natural kind term is rigid if it designates the same kind with respect to all possible worlds. In this regard I will follow Kripke’s assertions (see Naming and Necessity 1980: 135-136), which suggest that he conceives a natural kind as a type of universal instantiated in particular entities. The view of natural kinds as universals makes it possible for natural kind terms to be rigid designators. On this matter, I will adopt the following condition for the identity of natural kinds: two natural kinds are identical if and only if the instances of each kind are the same in all possible worlds.
The theoretical identity “Water is H2O” will be true if and only if the natural kinds water and H2O are identical. According to the proposed condition for the identity of natural kinds, however, this will hold if and only if the instances of both natural kinds are the same in all possible worlds, i.e., if and only if the terms “water” and “H2O” are coextensive in all of them. Yet even if we concede that the extension of those terms is the same in the actual world, we cannot survey each and every possible world to ascertain whether the instances of the natural kinds designated by those two terms coincide in all of them. Furthermore, I will argue that from the rigidity of the terms “water” and “H2O” and their coextensiveness in the actual world it does not follow that they are coextensive in all possible worlds, which is the condition to be satisfied for the truth, and hence the necessity, of the statement “Water is H2O”.
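One way to render this point schematically (the notation ext_w for the extension of a kind term at a world w, and @ for the actual world, is introduced here only for illustration): the truth of the theoretical identity requires that ext_w(“water”) = ext_w(“H2O”) for every world w, whereas what is granted is only the rigidity of the two terms together with their actual coextensiveness; and

\[
\mathrm{Rigid}(t_1) \wedge \mathrm{Rigid}(t_2) \wedge \mathrm{ext}_{@}(t_1) = \mathrm{ext}_{@}(t_2) \;\not\Rightarrow\; \forall w\; \mathrm{ext}_{w}(t_1) = \mathrm{ext}_{w}(t_2),
\]

since each term may rigidly designate a distinct universal whose instances merely happen to coincide in the actual world.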
In other words, even if we accept that the terms “water” and “H2O” are rigid designators and that the extension of those terms is the same in the actual world, this does not lead to the conclusion that their referents ‒ the universals designated by them ‒ are identical, i.e., that the theoretical identity “Water is H2O”, conceived as expressing an identity between natural kinds (universals), is true, although if it were true, it would also be necessary.
15:45
Marek Porwolik (Institute of Philosophy, Cardinal Stefan Wyszyński University in Warsaw, Poland)
The axiomatic approach to genidentity according to Z. Augustynek
ABSTRACT. The identity of objects through time can be described using the notion of genetic identity (genidentity). The term was introduced into the language of science by Kurt Lewin (1890-1949) in 1922. The Polish philosopher Zdzisław Augustynek (1925-2001) devoted a number of his works to this issue. He tried to specify the notion of genidentity using axiomatic definitions, presenting three sets of specific axioms. These delimit sets of theses, which he calls systems, marked AS1, AS2, and AS3. Besides the term genidentity (G), the axioms use the following terms: logical identity (I), quasi-simultaneity (R), quasi-collocation (L), and causality (H), as well as the complements of these relations: genetic difference (G'), logical difference (I'), time separation (R'), space separation (L'), and the complement of the relation H (H'). They denote binary relations whose field is the set of events S. Augustynek analyzed certain consequences of these axioms that are, in his opinion, important from a philosophical point of view. His results can be supplemented, or even corrected in places, and this fact motivated us to analyze the systems AS1, AS2, and AS3 once again. The first aim of this paper is to present a set-theoretic approach to the analysis of Augustynek's systems, which in our opinion facilitates the analysis of the sets of specific axioms. The next step is to formulate and justify theses concerning, first, the relationships among the systems AS1, AS2, and AS3 and, second, supplementary axioms that, when added to AS1, AS2, and AS3, make these systems mutually equivalent. In this way we will correct some conclusions concerning the systems AS1, AS2, and AS3 drawn in Augustynek's works. We will show a method of creating alternative axioms for these systems and suggest methods for their further modification. Our last aim is to analyze the problem of reducing the above-mentioned axiomatic definitions to conditional definitions containing the necessary condition and the sufficient condition of a selected notion from Augustynek's systems. We are going to prove that these axiomatics can be reduced to certain conditional definitions of genidentity.
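A minimal sketch of the set-theoretic setting described above (the notation, in particular taking complements relative to S × S, is our reading of the abstract rather than Augustynek's own symbolism):

\[
G, I, R, L, H \subseteq S \times S, \qquad X' := (S \times S) \setminus X \ \text{ for } X \in \{G, I, R, L, H\},
\]

so that, for example, x G' y holds exactly when the events x, y ∈ S are not genidentical.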
Bibliography:
Augustynek Z. (1981), Genidentity, "Dialectics and Humanism" 1, 193-202.
Augustynek Z. (1984), Identyczność genetyczna [Genetic identity], "Studia Filozoficzne" [Philosophical Studies] 2, 31-42.
Augustynek Z. (1996), Relacje czasoprzestrzenne [Spatiotemporal relationships], "Filozofia Nauki" [Philosophy of Science] 4(4), 7-19.
Augustynek Z. (1997), Wspólna podstawa czasu i przestrzeni [The common basis for time and space] in: Czasoprzestrzeń. Eseje filozoficzne, [Spacetime. Philosophical Essays] Warszawa: WFiS UW, 51-57.
Augustynek Z. (1997), Substancja — przyczynowość — przestrzeń — czas [Substance - causality - space - time] in: Czasoprzestrzeń. Eseje filozoficzne, [Spacetime. Philosophical Essays] Warszawa: WFiS UW, 99-111.
Integrated HPS? Formal versus Historical Approaches to Philosophy of Science
ABSTRACT. While recent decades have seen a thorough-going synthesis of history and philosophy of science, most notably in the form of the ‘Integrated History and Philosophy of Science’ movement, the gap between history and formal philosophy of science remains as wide as ever. In this talk, I argue that the divide between historical and formal philosophies of science is unwarranted and that by drawing the right lessons from the existent literature we can overcome their century-long separation. I start by considering the origin of the supposed dichotomy between history and formalism in philosophy of science in the reception of Kuhn’s The Structure of Scientific Revolutions (1962). Drawing on recent literature on the Kuhn-Carnap and Kuhn-Stegmüller connections, I argue that the contrast between Kuhn’s approach and the antecedent formal-sentential analyses of science (exemplified by Carnap) and the subsequent formal-structuralist analyses (exemplified by Stegmüller) is less stark, and undoubtedly more nuanced, than it is presumed to be in many contemporary appraisals.
Following this, I highlight several aspects of existent formal philosophies of science that, so I argue, make them ill-suited for historicization. In particular, I focus on the school of formal philosophy in which the history of science features most prominently, viz. the Munich-brand of set-theoretic structuralism and its analysis of the ‘diachronic structure’ of scientific theories as expounded by Stegmüller (1976) and Balzer, Moulines and Sneed (1987). These analyses, I argue, suffer from an overly simplistic approach to the history of science, as exemplified by (i) an overly narrow construal of the science-world relation and (ii) an overly narrow construal of the role of mathematics in science. To overcome these issues, I argue that formal methodologies need to move from considering the structure of scientific theories simpliciter to considering various supratheoretical, contextual elements of scientific knowledge as well.
Finally, I put forth a positive proposal that I take to do just this. To this end, I introduce the metalogical framework of abstract model theory. After briefly delineating my proposal from that of Pearce and Rantala (1983), the first authors to apply abstract model theory to philosophy of science, I sketch how this framework may be used to arrive at a formal picture of science that does justice to the plurality of different knowledge-systems we find throughout history.
References
Balzer, Wolfgang, C. Ulises Moulines and Joseph D. Sneed. 1987. An Architectonic for Science: The Structuralist Program. Dordrecht: D. Reidel Publishing Company.
Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Pearce, David, and Veikko Rantala. 1983. “New Foundations for Metascience.” Synthese 56(1): 1-26.
Stegmüller, Wolfgang. 1976. The Structure and Dynamics of Theories. New York (NY): Springer-Verlag.
A Philosophy of Historiography of the Earth. Metaphor and analogies of “natural body”
ABSTRACT. Long before the “Gaia hypothesis”, the analogy between “Earth” and “body” has had a history that arguably begins with the history of thought itself. Indeed, combining political, social, theological, and scientific issues, the “body” is the image par excellence of the functional whole that lasts over time. Can we then say that it is a “metaphor”? What does this mean from an epistemological point of view? If metaphor is only a “trope” in classical rhetoric, what can be said about its use and function within modern theories? Is it, to use Paul Ricoeur's distinction in La métaphore vive, a catachresis or a real semantic innovation? Does it serve as a model, a “tool”, to direct scientific research towards a new interpretation of reality, or does it have a specific heuristic of its own? What is the relationship between metaphor and analogy in modern thought?
These questions will be explored through the examples of three early modern philosophers: Hobbes, whose Leviathan draws an analogy between the animal body and the social body, serving as a justification for placing the sovereign at its head; Herder, who takes up Leibniz's notion of "organism" and transforms it into an epistemological tool for thinking about nations, seen as living, growing, and dying, and as having to be interpreted by historians themselves; and finally Leibniz, who, between Hobbes and Herder, not only criticizes Hobbes (and Pufendorf in his wake), but extends the analogy of the animal body (seen as "organism") to many areas of his thought, from natural history to human history, because the world, like the organism, contains its own principle of action and development. Criticizing the Cartesian idea of reducing the body to mathematical extension, and seeking at the same time how to apply the principle of the individuation of bodies, Leibniz identifies the specificities of what he calls the "machines of nature", that is, organic bodies that are not merely mechanical but carry within them a principle of order infinitely superior to human artefacts. The problem of the individuation of bodies exceeds the biological domain and has to be extended to the whole of nature, because, according to Leibniz, the Earth is also a kind of natural body, with specific organs producing and destroying metals: a rhetorical metaphor he used with the aim of rehabilitating, as quickly as possible, the declining mining industry. The same strategy recurs when Leibniz presents to King Ernest August the fruit of several years of genealogical research in the royal archives and, in doing so, speaks a language that the prince's education, nourished by humanism and Pufendorfian thought, had made familiar. The animal body, a structured whole whose organs are defined by their function, becomes in that correspondence the metaphor of history.
ABSTRACT. Proof-theoretic reductions from a prima facie stronger theory to a prima facie weaker one have been used for both epistemological ends (conceptual reduction, reduction of the levels of explanation, relative consistency proofs) and ontological ones (chiefly, ontological parsimony). In this talk I will argue that what a proof-theoretic reduction can achieve depends on whether the proof transformation function meets certain intensional constraints (e.g. preservation of meaning, theoremhood, etc.) that are determined locally, by the aims of the reduction. In order to make this point more precise, I will use Feferman's [1979] terminology of a formalisation being faithful or adequate, and direct or indirect, and I will consider two case studies (a schematic statement of the underlying notion of proof-theoretic reducibility is given after case (II) below):
(I) The proof-theoretic reduction of the prima facie impredicative theory Δ^1_1-CA to the predicative theory ACA_{<ε_0}. The aim of this reduction was not to dispense with the assumptions of the former theory in favour of those of the latter, but rather to sanction the results obtained in the former theory from a predicativist standpoint. Even though the patterns of reasoning carried out in Δ^1_1-CA are not faithfully represented by proofs in ACA_{<ε_0}, the proof-theoretic reduction yields a conservativity result for Π^1_2 sentences, important from a predicative perspective because they define arithmetical (and thus predicative) closure conditions on the powerset of the natural numbers. Using Feferman's terminology, this reduction demonstrates that ACA_{<ε_0} is an indirectly adequate formalisation of the mathematics that can be directly formalised within Δ^1_1-CA, but not an indirectly faithful one, because the methods of proof available within Δ^1_1-CA do go beyond those available in ACA_{<ε_0}.
(II) The proof-theoretic reduction of subsystems of second order arithmetic S to first order axiomatic theories of truth T, conservative over arithmetical sentences. The aim of such reductions is to obtain a more parsimonious ontology, thus showing that even though reasoning carried out in S can be epistemically more advantageous (shorter proofs, closeness to informal analytical reasoning, etc.), the existential assumptions of S can be ultimately eliminated in favour of the leaner assumptions of T. I will argue that since the aim of this reduction is the elimination of a part of the ontology of S, we should demand that the proof theoretic reduction is not only indirectly adequate, but also indirectly faithful––that is, that it does not merely preserve the arithmetical theorems of S, but also that it preserves (under some appropriate translation) the second-order theorems of S.
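For reference, the underlying notion can be stated in the standard Feferman-style form (a schematic rendering, not spelled out in the case studies themselves): S is proof-theoretically reducible to T with respect to a class Φ of formulas if there is a primitive recursive function f such that

\[
\forall \varphi \in \Phi \ \forall p \ \big(\mathrm{Proof}_S(p, \varphi) \rightarrow \mathrm{Proof}_T(f(p), \varphi)\big),
\]

and this fact is itself verifiable in T. The intensional constraints discussed above can then be read as further conditions on the transformation f, e.g. that it commute with a given translation of the second-order vocabulary.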
In the last part of the talk, I will apply these arguments to the debate on theoretical equivalence in the philosophy of science and I will argue that by imposing additional intensional criteria on the proof-theoretic translation function between two theories that are coherent with the aims of the idealisation, we can obtain locally satisfactory syntactical criteria of theoretical equivalence.
ABSTRACT. Quantum logic is a set of rules for reasoning about propositions that takes the principles of quantum theory into account. This research area originated in a famous article by Birkhoff and von Neumann [3], who were attempting to reconcile the apparent inconsistency of classical logic with the facts concerning the measurement of complementary variables in quantum mechanics, such as position and momentum. A key role is played by the concept of an orthomodular lattice, an algebraic abstraction of the lattice of closed subspaces of any Hilbert space [1, 14, 2, 5, 9, 15, 13]. In 1968, it was realized that a more appropriate formalization of the logic of quantum mechanics could be obtained by relaxing the lattice conditions to the weaker notion of an orthomodular poset [4, 15].
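For reference, the orthomodular law that separates these structures from arbitrary ortholattices reads, for all elements a, b,

\[
a \le b \;\Longrightarrow\; b = a \vee (a^{\perp} \wedge b),
\]

where ⊥ denotes the orthocomplementation; in an orthomodular poset joins are only required to exist for orthogonal pairs, and the law is imposed in the corresponding partial form.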
In the case of orthomodular posets, even though the orthocomplementation is still a total operation, the lattice operations are in general only partially defined. This difficulty cannot be resolved by passing to the completion of the underlying poset: Harding [2, 14] showed that the Dedekind-MacNeille completion of an orthomodular lattice need not be orthomodular. There is therefore no hope of finding orthomodularity preserved by completions, even for posets. In our approach, we weaken the notion of orthomodularity for posets to a generalized orthomodularity property formulated in terms of LU-operators [6], and we then analyze the order-theoretic properties it determines. This notion captures important features of the set of subspaces of a (pre-)Hilbert space, the concrete model of sharp quantum logic [7].
After dispatching the basics, we define the concept of a generalized orthomodular poset and provide a number of equivalent characterizations and natural applications. We also study commutator theory in order to highlight the connections with Tkadlec's Boolean posets, and we apply those results to provide a purely order-theoretic characterization in terms of orthogonal elements, generalizing Finch's celebrated achievements [8]. Subsequently, we describe some Dedekind-type theorems, which characterize generalized orthomodular posets by means of particular subposets, generalizing [6]. We then propose a novel characterization of atomic amalgams of Boolean algebras [1]; in particular, a development of our arguments yields Greechie's theorems as corollaries [10, 11].
Finally, we will prove that our notion, for orthoalgebras, corresponds to orthomodularity. This allows us to conclude that an orthoalgebra has a Dedekind-MacNeille completion if and only if its induced poset is orthomodular, and that it can then be completed to an orthomodular lattice. To the best of our knowledge, these results are new and subsume, under a unifying framework, many well-known facts sparsely scattered in the literature [16, 17].
References
[1] Beran L., Orthomodular Lattices: Algebraic Approach, Reidel, Dordrecht, 1985.
[2] Bruns G., Harding J., “Algebraic Aspects of Orthomodular Lattices”, In: Coecke B., Moore D., Wilce A. (eds) Current Research in Operational Quantum Logic. Fundamental Theories of Physics, vol 111. Springer, Dordrecht, 2000.
[3] Birkhoff G., von Neumann J., “The logic of quantum mechanics”, Annals of Mathematics, 37, 1936, pp. 823-843.
[4] Chajda I., Kolařík M., “Variety of orthomodular posets”, Miskolc Mathematical Notes, 15, 2, 2014, pp. 361-371.
[5] Chajda I., Länger H., “Coupled Right Orthosemirings Induced by Orthomodular Lattices”, Order, 34, 1, 2017, pp. 1-7.
[6] Chajda I., Rachůnek J., “Forbidden configurations for distributive and modular ordered sets”, Order, 5, 1989, pp. 407-423.
[7] Dalla Chiara M. L., Giuntini R., Greechie R., Reasoning in Quantum Theory–Sharp and Unsharp Quantum Logic, Kluwer Dordrecht, 2004.
[8] Finch P. D., “On orthomodular posets”, Bulletin of the Australian Mathematical Society, 2, 1970, pp. 57-62.
[9] Foulis D. J., “A note on orthomodular lattices”, Portugaliae Math., 21, 1962, pp. 65-72.
[10] Greechie R. J., “On the structure of orthomodular lattices satisfying the chain condition”, Journal of Combinatorial Theory, 4, 1968, pp. 210-218.
[11] Greechie R. J., “Orthomodular lattices admitting no states”, Journal of Combinatorial Theory, 10, 1971, pp. 119-132.
[12] Husimi K., “Studies on the foundations of quantum mechanics, I”, Proceedings of the Physics-Mathematics Society of Japan, 19, 1937, pp. 766-789.
[13] Kalmbach G., “Orthomodular logic”, Z. Math. Logik Grundlagen Math., 20, 1, 1974, pp. 395-406.
[14] Kalmbach G., Orthomodular Lattices, London Math. Soc. Monographs, vol. VIII, Academic Press, London, 1983.
[15] Matoušek M., Pták P., “Orthocomplemented Posets with a Symmetric Difference”, Order, 26, 1, 2009, pp. 1-21.
[16] Navara M., Rogalewicz V.,“The Pasting Constructions for Orthomodular Posets”, Mathematische Nachrichten, 154, 1991, pp. 157-168.
[17] Riečanová Z., “MacNeille Completions of D-Posets and Effect Algebras”, International Journal of Theoretical Physics, 39, 2000, pp. 859-869.
ABSTRACT. Recent statements of the responsibilities of scientists (e.g. the AAAS 2017 statement) have strengthened the responsibilities of scientists towards the societies in which they pursue their research. Scientists are now expected to do more than treat their experimental subjects ethically and communicate their results. They are also expected to benefit humanity. In a shift from the predominant views of the past 70 years, this responsibility is now tied to the freedom to pursue scientific research, rather than opposed to such freedom. The first half of this talk will describe this change and its drivers. The second half will address the fact that research institutions have not caught up with this theoretical understanding in practice. As exemplified by oversight of dual-use research, the responsibility to not cause foreseeable harm is not fully embraced by scientists and is not well supported by institutional oversight. Yet this is weaker than the responsibility to provide benefit. I will argue that scientists do in fact have a pervasive responsibility to provide benefit (and avoid foreseeable harm), but that this responsibility for individual scientists is different from the responsibility for the scientific community as a whole, and that minimally acceptable practices are also different from ideals. These differences in the nature of responsibility have important implications for science policy.