
Organizers: Giuseppe Primiero and Nicola Angius

Defining identity between two objects is a fundamental problem in several philosophical disciplines, from logic to language and formal ontology. Since Frege, identity has been addressed in terms of formal constraints on definitional criteria, which vary depending on context, application and aims. This symposium collects and compares current approaches to identity for computational systems in formal and applied contexts. Problems of interest include: definitional identity in arithmetic, intensional identity for proofs, the definition of replicas and the study of preservation of second-order properties for copied computational artefacts, and the identity over time of formally defined social institutions. All these contexts offer problematic interpretations and interesting questions for the notion of identity.

Arithmetic offers a precise formal interpretation of logical identity, but higher types display a tension between extensionality and equivalent term evaluation of identical functions: if the latter is accepted, then functions are co-definable but irreducible.

In proof-theoretic semantics a sentence is identified by the set of all its proofs with a common inferential structure. Accounting for the intensional aspects of these objects means upholding their identity while investigating common meta-theoretical properties such as harmony and stability.

From formal to implemented objects, the problem of identity resurfaces for computational artefacts. For these objects, by definition subject to replication, the notion of copy has started receiving formal treatment in the literature, while the notion of replica can be further analysed with respect to existing approaches for technical artefacts. Moreover, the problem of preservation of behavioural properties like safety and reliability is crucial.

Finally, these problems extend to applications in social ontology. In particular, identity criteria are at the basis of an ontological analysis of the persistence of organisations through time and changes, a problem which can be formulated both theoretically and formally.

The problem of defining formal identity criteria for natural and technical objects traces back to ancient philosophy and it characterises modern and contemporary analytic ontology from Leibniz to Frege. This symposium collects contemporary analyses of the logical accounts of identity in formal and applied contexts.

This symposium is submitted on behalf of the Commission for the History and Philosophy of Computing, Member of the DLMPST.

09:00 | Definitional identity in arithmetic ABSTRACT. Definitional identity is the relation that holds between two expressions that are identical by definition. It is natural to expect that the principles governing this relation will depend on the underlying language. In this talk I wish to consider the formalization of definitional identity in arithmetic and draw attention to a striking contrast between the language admitting only first-order terms and the language admitting terms also of higher order. In the first-order case we may rely on Kleene (Introduction to Metamathematics, §54) and Curry (e.g. Combinatory Logic, ch. 2E). They formalize definitional identity as the equivalence relation inductively generated by all axioms of the form definiens = definiendum and the two rules of inference: 1. from a=b infer a[t/x]=b[t/x] 2. from a=b and c=d infer a=b[d/c]* Here b[t/x] is ordinary substitution, and b[d/c]* is substitution of d for any number of occurrences of c in b. There are two admissible forms of definition: explicit definition of an individual constant or a function constant and recursive definition of a function constant. A definitional equation for a function constant takes the form f(x)=t[x] where x is a sequence of variables. An explicit definition consists of one such equation, a recursive definition of two. This relation can be shown to be the reflexive, symmetric, and transitive closure of a reduction relation that formalizes the process of unfolding defined terms in an expression. This relation may, in turn, be thought of as an evaluation relation. Indeed, one can show that it is confluent and strongly normalizing. The definitional identity a=b can in this case therefore be interpreted as "a and b have the same value". Now let us admit variables x and constants f of arbitrary type. Definitional equations for such f's take the same form as before. In particular, the definiens is f(x) of type N. 
However, f may now occur isolated, for instance as an argument to another function. Various considerations lead one here to postulate a further rule: 3. from f(x)=g(x) infer f=g, and vice versa. One consideration in support of this rule is the following. Suppose I have defined f and g by f(x)=t and g(x)=t', and it turns out that t=t', namely that t and t' are definitionally identical. It is then natural to say that f and g are also definitionally identical: f=g. But this example also shows that when rule 3 is admitted, definitional identity can no longer be interpreted as "a and b have the same value". Namely, in the example we have f=g, but both f and g are irreducible (only when they are supplied with arguments can a reduction take place). One conclusion to draw from this is that although a reduction relation can be defined for higher-order expressions f, this relation cannot be thought of as evaluation. Evaluation makes good sense only after f has been supplied with arguments so that a ground-level expression results.
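The substitution rules and the higher-order rule discussed in this abstract, together with the counterexample to the value interpretation, can be set out schematically (my restatement of what the abstract states in prose):

```latex
% Rules 1-3 for definitional identity, as given in the abstract
\[
\frac{a = b}{a[t/x] = b[t/x]}\,(1)
\qquad
\frac{a = b \quad c = d}{a = b[d/c]^{*}}\,(2)
\qquad
\frac{f(x) = g(x)}{f = g}\,(3)
\]
% The higher-order counterexample: with
%   f(x) = t,  g(x) = t',  and  t = t',
% rule 3 yields f = g, although f and g are both irreducible:
\[
f(x) = t, \quad g(x) = t', \quad t = t' \;\Longrightarrow\; f = g
\]
```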

09:30 | Harmony, stability, and the intensional account of proof-theoretic semantics PRESENTER: Luca Tranchini ABSTRACT. Proof-theoretic semantics (PTS) is based on the idea that the meaning of a proposition A should not be explained in terms of the conditions under which A is true, but rather in terms of the conditions under which the truth of A can be recognized. For mathematical propositions, this amounts to equating knowledge of the meaning of A with knowledge of what is to count as a proof of A, i.e. of its proof-conditions. The explanation of the proof-conditions comes together with criteria of identity of proofs, that is, criteria for telling, given two proofs of a proposition, whether they are the same proof or not. The talk focuses on the problem of whether, for certain classes of propositions, such criteria deliver a trivial notion of identity of proof. By this we mean that for a proposition belonging to these classes, there can be at most one proof of it, or, equivalently, that any two proofs of such a proposition are identical. If identity of proof is not trivial, PTS delivers an intensional account of meaning, that is, it can give substance to the idea that there may be essentially different ways in which a proposition can be true, corresponding to the different proofs of the proposition. On the contrary, if identity of proof is trivial, the set of proofs of a proposition A - what in PTS can be seen as the semantic value of A - is either the empty set or the singleton containing the (only) proof of A. In this case, PTS would come very close to an extensional semantics in which the semantic value of a proposition is simply identified with its truth-value. Central to PTS is thus the understanding of proofs as abstract objects, as opposed to the syntactic representations of proofs in formal calculi. Formal derivations should be seen as merely "representing", or "denoting", proofs in the abstract sense.
Given an appropriate equivalence relation on formal derivations, one may say that equivalent derivations denote the same proof, and proofs can really be thought of as the abstracta obtained by quotienting derivations modulo equivalence. One of the most promising accounts of the two central concepts of PTS, called "harmony" and "stability", has been given in terms of certain transformations on derivations in the natural deduction format: reductions and expansions. Reductions and expansions determine an equivalence relation on derivations, and in turn a notion of identity of proofs, which, as we show, is trivial for two significant classes of propositions: identity statements and negated propositions. In order to recover an intensional account of the meaning of these propositions, we consider the possibility of weakening either the notion of harmony, or that of stability, or both. We conclude by discussing the extent to which the proposed framework is coherent with the interpretation of proofs as programs stemming from the Curry-Howard correspondence.
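For conjunction, the reductions and expansions in question take the familiar form from the Curry-Howard reading of natural deduction (a standard illustration; the weakened notions considered in the talk may differ):

```latex
% Reduction: an introduction immediately followed by an elimination
\[
\pi_i\langle t_1, t_2\rangle \;\rightsquigarrow_\beta\; t_i \qquad (i = 1, 2)
\]
% Expansion: any derivation of A \wedge B can be eta-expanded
\[
t : A \wedge B \;\rightsquigarrow_\eta\; \langle \pi_1 t,\, \pi_2 t\rangle
\]
% The equivalence of derivations is the reflexive-symmetric-transitive
% closure of these transformations; proofs are the equivalence classes.
```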

Organizers: Ilya Kasavin, Alexandre Antonovskiy, Liana Tukhvatulina, Anton Dolmatov, Eugenia Samostienko, Svetlana Shibarshina, Elena Chebotareva and Lada Shipovalova

The topic of the special symposium is inspired by Max Weber’s lecture on “Science as a Vocation” [Wissenschaft als Beruf], which will be celebrating the 100th anniversary of its publication in 2019. The ambivalence of the German term Beruf [occupation, job, vocation] plays a crucial role in Weber’s text, making it possible, on the one hand, to view science as a highly specialized activity, and on the other hand, to uncover its openness, its communicative nature, and its ethical dimension. In particular, the essay’s focus on the communicative dimension of science, and its relation to ideas of social progress, brings to light the significance of human meaning and practice in the conduct of science, but also the reliability of scientific knowledge and its perceived status in society. Weber’s lecture clearly remains relevant today, since it interrogates the possibility for history and philosophy of science to be both a specialized and an open project, designed to bridge the disciplinary gaps between various approaches to the study of science. More broadly, his essay thus presents a timely attempt to address the problem of integrating different academic cultures: philosophy and the sciences; ethics and methodology.

The call for epistemic openness should be complemented by a renewed methodological focus, including an emphasis on detailed historical and sociological research, and the development of educational practices that foster the creation of new “trading zones” (Peter Galison), in which cross-disciplinary discussions of science, technology and human values can take place. With our call, we thus invite scholars to re-engage Weber’s text, from the perspective of 21st century Science and Technology Studies (STS), to help forge new forms of interdisciplinary interaction and expertise.

Organizer: Mate Szabo

This project investigates the interplay between informal mathematical theories and their formalization, and argues that this dynamism generates three different forms of understanding:

1. Different kinds of formalizations fix the boundaries and conceptual dependences between concepts in different ways, thus contributing to our understanding of the content of an informal mathematical theory. We argue that this form of understanding of an informal theory is achieved by recasting it as a formal theory, i.e. by transforming its expressive means.

2. Once a formal theory is available, it becomes an object of understanding. An essential contribution to this understanding is made by our recognition of the theory in question as a formalization of a particular corpus of informal mathematics. This form of understanding will be clarified by studying both singular intended models, and classes of models that reveal the underlying conceptual commonalities between objects in different areas of mathematics.

3. The third level concerns how the study of different formalizations of the same area of mathematics can lead to a transformation of the content of those areas, and a change in the geography of informal mathematics itself.

In investigating these forms of mathematical understanding, the project will draw on philosophical and logical analyses of case studies from the history of mathematical practice, in order to construct a compelling new picture of the relationship of formalization to informal mathematical practice. One of the main consequences of this investigation will be to show that the process of acquiring mathematical understanding is far more complex than current philosophical views allow us to account for.

While formalization is often thought to be negligible in terms of its impact on mathematical practice, we will defend the view that formalization is an epistemic tool, which not only enforces limits on the problems studied in the practice, but also produces new modes of reasoning that can augment the standard methods of proof in different areas of mathematics.

Reflecting on the interplay between informal mathematical theories and their formalization means reflecting on mathematical practice and on what makes it rigorous, and how this dynamism generates different forms of understanding. We therefore also aim to investigate the connection between the three levels of understanding described above, and the notion of rigor in mathematics. The notion of formal rigor (in the proof theoretic sense) has been extensively investigated in philosophy and logic, though an account of the epistemic role of the process of formalization is currently missing. We argue that formal rigor is best understood as a dynamic abstraction from informally rigorous mathematical arguments. Such informally rigorous arguments will be studied by critically analyzing case studies from different subfields of mathematics, in order to identify patterns of rigorous reasoning.

Organizers: Dominik Klein, Soroush Rafiee Rad and Ondrej Majer

Epistemic and doxastic logics, on the one hand, and probabilistic logics, on the other, are the two main formal apparatuses used in the representation of knowledge and (graded) belief. Both are thriving fields that have allowed for many fruitful applications in philosophy and AI. In representing knowledge and belief, classical epistemic and doxastic logics rely on a number of minimal assumptions. Agents are, for instance, usually taken to be logically omniscient, and their informational states are assumed to be closed under logical equivalence. Within dynamic logic, informational updates are also often assumed to be correct, i.e. truthful. In the same manner, in the probabilistic approach the agents' beliefs are assumed to satisfy the Kolmogorov axioms for probabilities, which in turn impose strong rationality and consistency conditions on these beliefs.
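The Kolmogorov axioms referred to here are, for a belief function $P$ over an algebra of events:

```latex
\[
P(A) \geq 0, \qquad P(\Omega) = 1, \qquad
P\Big(\bigcup_{i} A_i\Big) = \sum_{i} P(A_i)
\quad \text{for pairwise disjoint } A_i .
\]
% Hence, for instance, P(A) + P(\neg A) = 1: graded beliefs must be
% internally consistent and must respect logical consequence.
```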

These assumptions are, of course, idealizations. Newly learned information can often be false or misleading and rarely satisfies classical strong consistency criteria. What is more, real agents frequently behave in ways that are incompatible with the orthodox assumptions of logic and probability theory. To establish more comprehensive positive or normative theories about agents, there is hence a need for frameworks that can deal with weaker contexts in which some standard assumptions are violated.

This workshop brings together a number of approaches to weak, substructural epistemic logics. The approaches discussed apply to: the merging of possibly contradictory information; probabilistic assignments based on contradictory or inconclusive information; intensional and hyper-intensional beliefs, i.e. belief states that are not closed under logical equivalence; and collective epistemic states, such as group knowledge among groups of weaker-than-classical agents.

09:00 | Common belief logics based on information ABSTRACT. Substructural epistemic logics are an example of formal models of the beliefs of rational agents in which the perspective switches from the traditional semantic approach based on epistemic alternatives to an information-based approach. In particular, we can interpret beliefs as based on available information or reasonable expectations, and capture them via diamond-like modalities interpreted over information states or probability distributions. In the former case, the corresponding notion of belief is that of confirmed-by-evidence belief. A logical account of belief along these lines needs to take into account inconsistencies and incompleteness of information, or uncertainty about how likely an event is, based on the evidence locally available to agents. This naturally leads us to use and study modal extensions of non-classical logics such as substructural, paraconsistent or many-valued logics (Belnap-Dunn four-valued logic and Lukasiewicz logic especially). Particular examples of such epistemic logics have been investigated as modal extensions of distributive substructural logics [1,3]. As we think that understanding the notion of common belief is crucial to any formalization of group beliefs and their dynamics, the aim of this contribution is to present common belief extensions of some epistemic logics based on information-state semantics, and to prove their completeness. We will consider Hilbert-style axiomatizations of these logics (both of finitary and infinitary nature), where common belief is formalized as a greatest fixed point expression.
To approach the completeness problem we use two different insights, of which we provide theoretical accounts: one coming from abstract algebraic logic, the other from coalgebraic logic. First, to prove the strong canonical completeness of the infinitary versions of the logics, we use an appropriate version of extension lemmata such as the Lindenbaum lemma or Belnap's pair-extension lemma. A general abstract algebraic perspective on both lemmata for infinitary logics, widening the area of their applicability beyond modal extensions of classical logic and pointing at their limits, is given in [2]. Second, understanding the frame semantics of the logics we consider as given by coalgebras, and generalizing insights available for flat fixed point coalgebraic logics based on classical logic, we prove the completeness of the finitary axiomatization of the logics. [1] Bilkova, M., O. Majer and M. Pelis, Epistemic logics for sceptical agents, Journal of Logic and Computation, 26(6), 2016, pp. 1815-1841 (first published online March 21, 2015). [2] Bilkova, M., Cintula, P., Lavicka, T., Lindenbaum and Pair Extension Lemma in Infinitary Logics, Logic, Language, Information, and Computation. WoLLIC 2018. Springer, 2018. [3] Sedlar, I., Epistemic extensions of modal distributive substructural logics, Journal of Logic and Computation, 26(6), 2016, pp. 1787-1813.
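The greatest-fixed-point formalisation of common belief mentioned in this abstract is standardly written as follows, where $E_G$ reads "everyone in the group $G$ believes" (the standard classical form; the substructural variants studied in the talk may differ in detail):

```latex
\[
C_G\,\varphi \;:=\; \nu X.\, E_G(\varphi \wedge X)
\]
% Unfolding the fixed point gives, in the classical Kripke setting,
% the familiar iteration of mutual belief:
\[
C_G\,\varphi \;\leftrightarrow\; E_G(\varphi \wedge C_G\,\varphi)
\;\leftrightarrow\; E_G\varphi \wedge E_G E_G\varphi \wedge \cdots
\]
```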

09:30 | Non-classical probabilities over Dunn-Belnap logic PRESENTER: Ondrej Majer ABSTRACT. Belnap and Dunn [1] introduced a four-valued propositional logic allowing, in addition to the classical truth values True and False, the attribution of the non-classical truth values Neither and Both, accounting for possibly incomplete or contradictory information concerning a particular proposition. The Belnap-Dunn four-valued logic has been extensively studied since its introduction and has proved fruitful in the study of rational agency and of rational agents' attitudes towards the truth or falsity of propositions in more realistic contexts. More recently there have been attempts to study probabilistic extensions of this logic, by Dunn [2] and by Childers, Majer and Milne [3]. In particular, Dunn investigates this probabilistic extension by introducing non-classical probability functions that assign to each proposition in the language a normalised four-valued vector that encodes a probability mass function on the four possible truth values. This is in contrast to the classical case, where the probability function on the language assigns to each proposition two values expressing a mass function on the proposition and its negation. Dunn [2] studies the logical structure of this probabilistic setting. However, to define the logical connectives he makes some very strong independence assumptions that end up having undesirable consequences: in that setting, every proposition ends up probabilistically independent of every other proposition, even of its logical consequences, and even of itself. Our work picks up the non-classical probability functions defined by Dunn but redefines the logical connectives in a way that avoids such undesirable independence consequences. In this new setting we introduce the necessary ingredients for defining conditional probabilities and show that the standard properties hold for them.
Furthermore, we propose strategies for aggregating these four-valued probability assignments and show the standard properties for the proposed aggregation procedures. We also study the connection with the approach given in [3] and show that the two settings are inter-translatable. [1] Belnap, N. D., Jr, A useful four-valued logic: How a computer should think, §81 of Alan R. Anderson, Nuel D. Belnap, Jr, and J. Michael Dunn, Entailment: The Logic of Relevance and Necessity, Vol. II, Princeton NJ and Oxford: Princeton University Press, 1992. [2] Dunn, J. M., Contradictory information: Too much of a good thing, Journal of Philosophical Logic 39 (2010): 425-452. [3] Childers, T., Majer, O., Milne, P., The (Relevant) Logic of Scientific Discovery (under review).
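The four-valued probability vectors can be illustrated with a small sketch. Here the vector for a sentence is computed from a probability mass over Belnap-Dunn states, using the standard Belnap-Dunn truth tables; this state-based construction and the encoding are my illustrative reconstruction, not the connectives the authors define:

```python
# A truth value is a pair (told_true, told_false):
#   T = (1, 0), F = (0, 1), N = (0, 0), B = (1, 1).

T, F, N, B = (1, 0), (0, 1), (0, 0), (1, 1)

def neg(v):
    told_true, told_false = v
    return (told_false, told_true)      # negation swaps the two components

def conj(v, w):
    # told true iff both conjuncts are told true;
    # told false iff either conjunct is told false
    return (v[0] & w[0], v[1] | w[1])

def value(sentence, state):
    """Evaluate a sentence, given as nested tuples, at a state (atom -> value)."""
    op = sentence[0]
    if op == "atom":
        return state[sentence[1]]
    if op == "neg":
        return neg(value(sentence[1], state))
    if op == "and":
        return conj(value(sentence[1], state), value(sentence[2], state))
    raise ValueError(op)

def prob_vector(sentence, mass):
    """The normalised four-valued vector: mass carried by each truth value."""
    vec = {T: 0.0, F: 0.0, N: 0.0, B: 0.0}
    for state, p in mass:
        vec[value(sentence, state)] += p
    return vec

# Half the evidence says p is Both (contradictory), half says p is True.
mass = [({"p": B}, 0.5), ({"p": T}, 0.5)]
p = ("atom", "p")
print(prob_vector(("and", p, ("neg", p)), mass))  # p AND not-p carries mass 0.5 on B
```

Because the vector for p ∧ ¬p is read off state-wise, p is not treated as probabilistically independent of its own negation, which is the kind of artefact the abstract criticises in Dunn's connective definitions.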

10:00 | Priority Merge and Intersection Modalities PRESENTER: Zoé Christoff ABSTRACT. Distributed knowledge, defined by taking the intersection of the individuals’ indistinguishability relations or information cells, is the standard notion of pooled information in epistemic logic. It is well-suited to represent the pooling of true information. In contrast, intersection of “softer” types of information, for instance plausibility orderings, yields counter-intuitive results. Indeed, the corresponding notion of “distributed belief”, defined as the intersection of the individual underlying plausibility orderings on possible worlds, becomes inconsistent as soon as individual opinions diverge too radically. To avoid such inconsistencies, we focus on an alternative way to pool beliefs, the so-called priority (or lexicographic) merge. Priority merge aggregates plausibility pre-orders as follows. Assume that the group members are somehow ordered in terms of expertise or influence in the group. Then priority merge proceeds by lexicographically considering the strict preferences of each agent in order of priority. Every pair of states strictly ordered by the agent on top of the order is also strictly ordered for the group. For the pairs about which the topmost agent is indifferent, we move to the second agent and order them strictly if she does so, and so on until all pairs are strictly ordered, or we have gone through all agents. Priority merge has been generalized by [2] to arbitrary priority operators which pool any number of relations using priority graphs for agents, i.e., relations that need not be linear or transitive. Most importantly, [2] shows that any such priority operator can be represented as a combination of two simple operations. The logical framework proposed here relies directly on this result.
We start with a systematic comparison between the logical behavior of priority merge and the more standard notion of pooling through intersection, for different notions of belief, on multi-agent plausibility models. We then provide a sound and complete axiomatization of the logic of priority merge, as well as a proof theory in labeled sequents that admits cut. To the best of our knowledge, the only previous logical study of lexicographic merge has been done in extended modal languages [3]. One of the contributions of the present paper is therefore to show that priority merge can also be captured by a modal logic without any hybrid machinery. Finally, we study Moorean phenomena and define a dynamic resolution operator for priority merge, in the same way as [1] does for distributed knowledge, and we provide a complete set of reduction axioms. References: [1] Thomas Ågotnes and Yì N. Wáng. Resolving distributed knowledge. Artificial Intelligence, 252:1-21, 2017. [2] Hajnal Andréka, Mark Ryan, and Pierre-Yves Schobbens. Operators and laws for combining preference relations. Journal of Logic and Computation, 12(1):13-53, 2002. [3] Patrick Girard and Jeremy Seligman. An analytic logic of aggregation. In Proceedings of the 3rd Indian Conference on Logic and Its Applications - Volume 5378, ICLA 2009, pages 146-161, Berlin, Heidelberg, 2009. Springer-Verlag.
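The lexicographic procedure described in this abstract can be sketched pairwise as follows. The states, agents, and set-of-pairs encoding of a pre-order are illustrative assumptions, not the authors' formalism:

```python
# A pre-order is encoded as a set of pairs:
# (u, v) in the set reads "u is at least as plausible as v".

def strictly_prefers(order, x, y):
    """x is strictly above y: x-at-least-as-plausible holds, the converse fails."""
    return (x, y) in order and (y, x) not in order

def priority_merge(orders, x, y):
    """Walk the agents' orders in descending priority; the first agent
    with a strict preference over {x, y} decides for the group. If no
    agent has a strict preference, the pair stays tied."""
    for order in orders:
        if strictly_prefers(order, x, y):
            return "x"
        if strictly_prefers(order, y, x):
            return "y"
    return "tie"

# Two agents over states {a, b, c}.
refl = {(s, s) for s in "abc"}
agent1 = refl | {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}  # a ~ b, both above c
agent2 = refl | {("b", "a"), ("b", "c"), ("a", "c")}              # b above a, a above c

print(priority_merge([agent1, agent2], "a", "b"))  # agent1 indifferent, agent2 breaks the tie: "y"
print(priority_merge([agent1, agent2], "a", "c"))  # agent1 already strict: "x"
```

Unlike intersection, the merged relation stays consistent even when agents' strict preferences conflict: the higher-priority agent simply wins the pair.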

Books and journals have a significant role in the scholarly disciplines, as means of disseminating work, as professional forums for debate, and as criteria for advancement in most research fields. Scholarly publishing is undergoing profound changes, which makes it all the more critical that researchers, especially junior researchers, stay abreast of the current state of scholarly publishing.

To this end, the Editors of prominent journals in the history and philosophy of science will convene a panel on issues facing scholarly publishing. The forum will have a strong focus on providing advice and mentorship to junior scholars about selecting journals, placing their work in journals, best practices for navigating the review process, and obtaining a broad and engaged audience for scholarly work. These recommendations will be of interest to more senior researchers as well, including discussion of the role of referees and of the review process, and of recent changes to the landscape of journal publishing.

As part of the panel, some speakers will discuss current developments in, and prospects for, scholarly publishing. These may include the increasing role of open access publishing, including Plan S in Europe, and the changing relationships between book and journal publishing.

The following Editors have agreed to take part:

• Rachel Ankeny, Studies in History and Philosophy of Biological and Biomedical Sciences

• Otávio Bueno, Synthese

• Sabina Leonelli, History and Philosophy of the Life Sciences

• Thomas Reydon, Journal for General Philosophy of Science

• K. Brad Wray, Metascience

A 90 minute time slot will allow ample time for questions.

09:00 | Modeling in neuroscience: Can complete and accurate understanding of nerve impulse propagation be achieved? PRESENTER: Henk De Regt ABSTRACT. Explanatory understanding of phenomena in neuroscience is typically achieved via the construction of models. A prime example is the celebrated Hodgkin-Huxley model (HH-model) of the action potential (Hodgkin and Huxley 1952, Craver 2007). The HH-model is central to the electricity-centered conception of the nerve impulse that dominates contemporary neuroscience. In recent years, however, the HH-model has been challenged because it is unable to account for non-electrical aspects of the nerve impulse, some of which have been known for decades. Consequently, alternative theories and models of nerve impulse propagation have appeared upon the scene, using a thermodynamic or mechanical framework instead of an electrical one. One of these models is the Heimburg-Jackson model (HJ-model) (Heimburg and Jackson 2005), according to which the nerve impulse is an electromechanical density pulse in the neural membrane. Its proponents assume that this model is potentially able to replace the HH-model. Alternatively, one might think that these models of nerve impulse propagation should not be regarded as rivals but may be integrated in a general unifying model. An example of such a proposal is the model of Engelbrecht et al. (2018), which has been developed to unify all relevant manifestations of the nerve impulse and their interactions. The attempt of Engelbrecht et al. aligns with the widespread neuroscientific conviction that the ultimate goal of neuroscience is to develop models that represent neuroscientific phenomena accurately and completely. In this paper, however, we argue that the Engelbrecht model does not provide an accurate and complete representation of the nerve impulse. 
One reason for this conclusion is that the HH-model and the HJ-model, which the Engelbrecht model attempts to integrate, contain inconsistent assumptions. We submit that the above-sketched approaches to modeling nerve impulse propagation are motivated by a misguided assumption, namely that accurate and complete representation is a unique, objective criterion for evaluating neuroscientific models. By contrast, we propose, in line with Giere (2004), to take into account the purpose for which a model is used when evaluating the value of models; models are tools that represent the nerve impulse accurately and completely enough to achieve a specified goal. Considering models as tools for specific purposes, and acknowledging that different purposes often require incompatible assumptions, suggests that it will be impossible to develop a consistent general unifying model of nerve impulse propagation (cf. Hochstein 2016, Craver and Kaplan 2018). Instead of aiming at explaining such a complex phenomenon in a single model, neuroscientists would do better to employ a ‘mosaic’ (cf. Craver 2007) framework of models. From this collection of models the explanation of nerve impulse propagation can be inferred based on the piecemeal and sometimes contradictory representation of it in distinct models. References Heimburg and Jackson (2005). PNAS 102: 9790-9795. Craver (2007). Explaining the Brain. Oxford University Press. Craver and Kaplan (2018). BJPS: https://doi.org/10.1093/bjps/axy015. Giere (2004). Philosophy of Science 71: 742-752. Engelbrecht et al. (2018). Proc. Estonian Acad. Sci. 67: 28-38. Hochstein (2016). Synthese 193: 1387-1407. Hodgkin and Huxley (1952). J. Physiol. 117: 500-544.
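For reference, the electrical core of the HH-model is the membrane current equation, in its standard textbook form (not specific to this paper):

```latex
\[
C_m \frac{dV}{dt} =
- \bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
- \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
- \bar{g}_{L}\,(V - E_{L})
+ I_{\mathrm{ext}},
\]
% with each gating variable x \in \{m, h, n\} obeying a voltage-dependent
% kinetic equation  dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\, x .
```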

09:30 | Understanding Causal Reasoning in Neurophysiology ABSTRACT. Neuroscientists value research that provides some causal understanding of the targeted system. In order to achieve that, they perform causal reasoning, a type of reasoning activity that aims at producing and/or evaluating causal claims about their targeted system. When they perform their causal reasoning within a specific context, they need to employ some standards to guide and justify it. But what are these standards? How do we as philosophers analyze and evaluate them? The questions get more complicated when one takes the evolution and heterogeneity of neuroscientific practice into consideration. First, structures and standards for good experimental paradigms are co-evolving with technological innovation. For example, in neurophysiology, after the invention of the technique that allows geneticists to genetically modify neurons to express light-sensitive ion channels, a good experimental paradigm in neurophysiology often involves this genetic technique. Second, it is common in current neuroscience that a given set of experiments combines various types of techniques from different areas. Each set of techniques brings in different methodological standards that may or may not be relevant to the success of causal reasoning. These evolving and heterogeneous aspects of neuroscientific practice pose a particular challenge to philosophers who aim to provide a normative framework for understanding how causal reasoning in neuroscience works. We need a normative framework that accommodates these evolving and heterogeneous aspects. One way to meet the challenge is to reduce or subsume the heterogeneous practices under a single category/concept of mechanistic causal explanation that is flexible enough to accommodate the heterogeneity (Craver, 2007; Craver and Darden, 2013).
Another way to meet the challenge is to adopt the framework of modest scientific pluralism on causal pattern explanations (Potochnik, 2017). In this paper, I will first present a case study from neurophysiology. I will use the following methodology to analyze the case study: (1) delve into the details of the heterogeneous practices, (2) identify instances of causal reasoning, (3) analyze what the relevant standards in use are, (4) perform some literature analysis to assess how the community of neurophysiologists in fact evaluates the relevant standards. I will then apply both Craver’s framework and Potochnik’s normative framework to the case study. I aim to adjudicate which framework provides better conceptual tools for evaluating the success of the identified instances of causal reasoning. I will conclude that Potochnik’s framework does a better job than Craver’s with respect to the case study from neurophysiology. To that end, the paper will proceed as follows. In Section 2, I will present a case study to identify three instances of causal reasoning in neurophysiology and the relevant standards in use. In Section 3, I will argue that the conceptual tools from Craver’s framework are insufficient to complete the evaluative task. In Section 4, I will show that the conceptual tools from Potochnik’s framework adequately assist the evaluative task and help generate a better philosophical understanding of the evolving and heterogeneous aspects of causal reasoning in neurophysiology.

Organizers: Nina Atanasova, Karine Chemla, Vitaly Pronskikh and Peeter Müürsepp

This symposium is predicated upon the assumption that one can distinguish between different scientific cultures. This is the founding hypothesis of the IASCUD commission. The distinction between these scientific cultures can be made on the basis of the bodies of knowledge actors uphold (which present differences depending on the culture) and the scientific practices they adopt; the distinct kinds of material environment that actors shaped to operate in these contexts and how they operate with them; and also on the basis of epistemological facets. Among the facets that appear useful for differentiating cultures, we include: epistemic and epistemological values; types of questions and answers that are expected; types of explanation and understanding that actors hope for. This approach to scientific cultures has the potential to let us understand cultures as temporary formations rather than as fixed entities.

The aim of this symposium is to focus on the types of circulation that can be identified between cultures conceived along these lines and also on how these various phenomena of circulation can help us approach the historicity of scientific cultures and of their relationship with one another. The issues we would like to address include the following:

• What can circulate between scientific cultures? We are interested in cases when knowledge and practices migrate from one culture to another. We are also interested in the borrowing of material elements and practices, as well as the adoption of epistemological choices and practices from one context into another. Events of this latter type have perhaps been studied to a lesser extent, but they seem to us to deserve specific attention.

• How does the circulation affect what is circulated? If we all agree that the adoption of an element of knowledge or practice in a different environment transforms this element, we lack a systematic approach to these phenomena of “recycling”.

• How does the circulation affect the adopting culture, and its relationship with the culture of origin? How can it elicit a reconfiguration of the scientific cultures involved? The study of how actors revise their knowledge in the light of new elements falls, for us, under this broader category of questions. However, if we consider circulation in the wider perspective that we advocate, the issue of revision presents itself in a new light. In the symposium, we aim to promote the study of revision more broadly.

09:00 | Situated Counting PRESENTER: Paula Quinon ABSTRACT. We present a model of how counting is learned based on three knowledge components: number as a property of collections; the ordered sequence of numerals; and one-to-one mappings between collections. In the literature, the so-called cardinality principle has been the focus when studying the development of counting. We argue that identifying the ability to count with the ability to fulfil the cardinality principle is not sufficient, and that counting should be analyzed as the ability to perform a series of tasks. The tasks require knowledge of the three components. Some of these tasks may be supported by the external organization of the counting situation. Using the methods of situated cognition, we analyze how the balance between external and internal representations implies different loads on the working memory and attention of the counting individual. This analysis shows that even if the counter can competently use the cardinality principle, counting will vary in difficulty depending on which kind of collection is to be counted and on its physical properties. The upshot is that a number of situated factors will influence counting performance and determine the difficulty of the different tasks of counting. |

09:30 | Numerical cognition in the perspective of the Kantian program in modern neuroscience PRESENTER: Valentin Bazhanov ABSTRACT. Hundreds, if not thousands, of works are devoted to the nature of number. Meanwhile, no generally accepted, and therefore acceptable, understanding of the phenomenon of number and numerical cognition has so far been achieved. For instance, in current Russian philosophy and psychology the concept of “numerical cognition” is virtually absent, as are studies directly related to this kind of cognitive activity. However, the intensive development of neuroscience opens up prospects for analyzing the nature of number and the mechanisms of “numerical cognition” from the point of view of the Kantian program in neuroscience. This program stimulates such analysis by combining the principles of naturalism and sociocentrism, and allows us to look at number as a cultural phenomenon rooted in the ontogenetic features of the human brain. What are the most important features of the modern Kantian program in neuroscience? What are the (neuro)biological prerequisites for the implementation of this program? What is the “sense of number” (or numerosity) and what is the role of this “feeling” in the genesis of ideas about number and numerical cognition? What are the features of the representation of digital information in symbolic and non-symbolic form, and what is the role of language here? When and under what circumstances did the ordering of a set of numbers along a horizontal number axis occur, and what was the role of culture in this process? What are “conceptual metaphors” and what is their role in numerical cognition? Finally, how do the ontogenetic foundations of the “sense of number” correlate with children's successes (or failures) in education and their future careers? The presentation will offer some answers to these questions. Research was supported by RFBR grant 19-011-00007a. |

09:00 | Combining truth values with provability values: a non-deterministic logic of informal provability PRESENTER: Pawel Pawlowski ABSTRACT. Mathematicians prove theorems without being committed to any particular formal system. They reason in a semi-formal setting using informal proofs. According to the proponents of the standard view (Enderton, 1977; Antonutti Marfori, 2010) informal proofs are just incomplete formal derivations — in principle, an informal proof can be associated with a formal proof in a fully formal system, usually some version of set theory. There are quite a few reasons not to reduce informal provability to formal provability within some appropriate axiomatic theory (Antonutti Marfori, 2010; Leitgeb, 2009). The main worry about identifying informal provability with formal provability starts with the intuition that whatever is informally provable is true. This means that when we do informal proofs, we are committed to all instances of the reflection schema: if φ is informally provable, then φ is true. However, this principle is not provable in any decent axiomatic theory for its own formal provability predicate. Moreover, no such theory can even be extended with the schema, provided some other non-controversial principles for formal provability are retained. One approach to regaining reflection is to treat informal provability as a partial notion. Pawlowski and Urbaniak (2018) developed a framework in which informal provability is modeled by a non-deterministic three-valued logic, CABAT. Semantics in CABAT is based on an intuitive partition of mathematical claims into provable (value 1), refutable (value 0), and neither (value n). The main reason to use non-deterministic semantics is that in deterministic logics the value of a complex formula is always a function of the values of its components. 
This fails to capture the fact that, for instance, some informally provable disjunctions of mathematical claims have informally provable disjuncts, while others don't. However, two main problems with the proposed system appeared. First, the CABAT reading of the reflection schema does not connect informal provability with truth, since CABAT, strictly speaking, does not have truth values. The other problem is related to the natural asymmetry between truth and informal provability: the latter implies the former but not the other way around. So, ideally, we want a difference both between the reflection schema and provabilitation (φ → Bφ), and between strong NEC and CONEC. Unfortunately, CABAT cannot make all these distinctions. In this talk, we propose a logic that follows the same motivations as CABAT, but whose semantics incorporates truth values along with provability values. We develop a four-valued non-deterministic logic T-BAT (with values: provable and true, refutable and false, provable and neither, and refutable and neither), which does a better job as a logic of informal provability than CABAT, since it can prove the reflection schema, can distinguish between truth values and provability values, and preserves the intuitive asymmetries between truth and provability. References Antonutti Marfori, M. (2010). Informal proofs and mathematical rigour. Studia Logica, 96:261–272. Enderton, H. (1977). Elements of Set Theory. Academic Press, New York. Leitgeb, H. (2009). On formal and informal provability. In Bueno, O. and Linnebo, Ø., editors, New Waves in Philosophy of Mathematics, pages 263–299. New York: Palgrave Macmillan. Pawlowski, P. and Urbaniak, R. (2018). Many-valued logic of informal provability: A non-deterministic strategy. The Review of Symbolic Logic, 11(2):207–223. |
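The core idea of a non-deterministic matrix, where a connective maps its arguments to a set of admissible values rather than a single value, can be sketched in a few lines of Python. The disjunction table below is a hypothetical illustration of that idea only, not the actual CABAT or T-BAT tables of Pawlowski and Urbaniak (2018).

```python
# Illustrative non-deterministic matrix in the spirit of CABAT
# (values: 1 = provable, 0 = refutable, 'n' = neither).
# The table is a hypothetical example, NOT the published tables:
# its point is only that a connective may map its arguments to a
# *set* of admissible values rather than a single one.

DISJ = {
    (1, 1): {1}, (1, 0): {1}, (1, 'n'): {1},
    (0, 1): {1}, (0, 0): {0}, (0, 'n'): {'n'},
    ('n', 1): {1}, ('n', 0): {'n'},
    # Two 'neither' disjuncts: the disjunction may still be
    # provable (e.g. an instance of excluded middle) or not.
    ('n', 'n'): {1, 'n'},
}

def possible_values(left, right):
    """Set of admissible values of a disjunction under the table."""
    return DISJ[(left, right)]

# A disjunction of two 'neither' claims is not determined by its parts:
assert possible_values('n', 'n') == {1, 'n'}
# ...while the classical cases stay deterministic:
assert possible_values(1, 'n') == {1}
```

The non-determinism shows up exactly where the abstract locates it: the value of a disjunction of two "neither" claims is not a function of the values of its disjuncts.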

09:30 | Comparative infinite lottery logic ABSTRACT. Norton (2018) proposes an inductive logic for infinite lotteries and inflationary multiverse cosmologies that is radically different from Kolmogorov probability. In particular, both finite and countable additivity fail. Norton calls the support values of this logic “chances” in order to distinguish them from Kolmogorovian probabilities. The events to which he applies it are lottery outcomes (the ticket drawn is x, the ticket drawn is in set S, etc.) and random choices of pocket worlds (the chosen world is x, the chosen world is in set S, etc.). The cornerstone of his logic is the principle of Label Independence: every fact about the chances of events remains true when the elementary outcomes (lottery tickets or pocket worlds) are relabelled by any permutation. From this Norton derives the failure of additivity. The intuitively attractive Containment Principle, which says that if a set of outcomes A is properly contained in a set of outcomes B then the chance of A is strictly less than the chance of B, is also shown to fail. Norton argues that this logic is too weak to help us confirm or disconfirm particular inflationary models. The Principle of Mediocrity says that we should assume that we live in a randomly chosen civilisation in the multiverse. If one model makes it more likely that a randomly chosen world is like ours than another model does, then the former model is better confirmed in that respect. However, given Label Independence, all infinite, co-infinite sets of worlds are equally likely. Any reasonable eternal inflation model predicts infinitely many worlds like ours and infinitely many unlike ours, so on Norton's logic each such model is equally confirmed. However, these results depend on a reification of chance, consisting in the postulation of a chance function Ch from events to things called chances. 
Thus we can say, for example, Ch(ticket 7 is chosen) = C, so by Label Independence, this must remain true no matter which ticket is labelled ‘7’. If instead chances are purely comparative, consisting in relations of the form ‘A is less likely than B’, etc., then we can have an infinite lottery logic that satisfies comparative versions of Label Independence, additivity, Containment, and also regularity. (Regularity here is the property, which some find appealing, that any strictly possible event is more likely than a strictly impossible event.) Unfortunately, even this comparative infinite lottery logic will not help us to confirm or disconfirm inflationary models. Given one inflationary model, our comparative logic may tell us that our world is more likely to be of one kind than another based on that model. However, this gives us no basis on which to say that our world is more likely to be as it is on one model than it is on another model, and thus no basis to say which model is better confirmed. Norton, John D. (2018). Eternal Inflation: When Probabilities Fail. Synthese. https://doi.org/10.1007/s11229-018-1734-7. |

10:00 | On conditions of inference in many-valued logic semantics of CL$_{2}$ ABSTRACT. The main aim of this paper is to find conditions of inference in many-valued logic semantics of two-valued CL$_{2}$. \emph{J}-operators due to J. Rosser and A. Turquette will be used. Let L$_{n}$ be an n-valued logic with one designated truth-value 1 and one anti-designated truth-value 0. The $J_{1}$-operator corresponds to 1; the $J_{0}$-operator corresponds to 0. The set of L$_{n}$-formulae is L$_{n}$-\emph{For}. If \emph{S} is an L$_{n}$-formula, then $J_{1}(S)$ and $J_{0}(S)$ are TF-formulae. If $P_{1}, P_{2}$ are TF-formulae, then $J_{1}P_{1}$, $J_{0}P_{1}$, $=\!\mid P_{1}$ and $(P_{1} \Rightarrow P_{2})$ are TF-formulae, where $=\!\mid$ reads "it is false that" and $\Rightarrow$ reads "if ... then". $P, P_{1}, \ldots$ denote meta-variables for TF-formulae; the set of TF-formulae is TF-\emph{For}. It is asserted that CL$_{2}$(TF-\emph{For}, $=\!\mid$, $\Rightarrow$) holds. If \emph{A} is an L$_{n}$-formula or a TF-formula, then \emph{A} is a formula; $A, A_{1}, \ldots$ denote meta-variables for formulae, and the set of formulae is \emph{For}. Axioms for \emph{J}-operators: $(J_{1}P \Leftrightarrow P)$; $(J_{0}P \Leftrightarrow\ =\!\mid P)$; $((J_{k}A \wedge J_{m}A) \Rightarrow (k=m))$. Added rules of inference: from $A$ infer $J_{1}A$, and from $J_{1}A$ infer $A$. Definition 1. $(A_{1} \supset A_{2}) =_{df} (J_{1}A_{1} \Rightarrow J_{1}A_{2})$; $\neg A =_{df}\ =\!\mid J_{1}A$. 
Then formation rules, axioms and modus ponens of the logic CL$_{n}$(\emph{For}, $=\!\mid$, $\Rightarrow$) with an n-valued (non-main, in A. Church's sense) interpretation are inferred. Theorem 1. 1.1. $(J_{1}A \vee J_{0}A) \Rightarrow (J_{1}A \Rightarrow J_{0}\neg A)$; 1.2. $(J_{1}A \vee J_{0}A) \Rightarrow (J_{0}A \Rightarrow J_{1}\neg A)$; 1.3. $((J_{1}A_{1} \vee J_{0}A_{1}) \wedge (J_{1}A_{2} \vee J_{0}A_{2})) \Rightarrow ((J_{0}A_{1} \vee J_{1}A_{2}) \Rightarrow J_{1}(A_{1} \supset A_{2}))$; 1.4. $((J_{1}A_{1} \vee J_{0}A_{1}) \wedge (J_{1}A_{2} \vee J_{0}A_{2})) \Rightarrow ((J_{1}A_{1} \wedge J_{0}A_{2}) \Rightarrow J_{0}(A_{1} \supset A_{2}))$. Note that the right parts of Theorem 1 correspond to the semantic rules of the two-valued (main) interpretation of CL$_{2}$. Let Q2 be a set such that Q2 $\subset$ \emph{For} and, for all \emph{Q}, if $Q \in$ Q2, then $(J_{1}Q \vee J_{0}Q)$. Theorem 2. \emph{If} Q2 \emph{exists, then} CL$_{2}$(Q2, $\neg, \supset$) \emph{is inferred}. Therefore, the conditions of inference in many-valued logic semantics of two-valued CL$_{2}$(Q2, $\neg, \supset$) are the existence of a Q2 such that Q2 $\subset$ \emph{For} and, for all \emph{Q}, if $Q \in$ Q2, then $(J_{1}Q \vee J_{0}Q)$. Reference: Rosser, J.B., Turquette, A.R. Many-valued Logics. Amsterdam, 1952. |
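The two-valued behaviour of Rosser–Turquette J-operators can be illustrated with a small Python sketch; the choice of n = 3 and of 0.5 as the intermediate value is an assumption for illustration only, not part of the paper's system.

```python
# Sketch of Rosser-Turquette J-operators over the truth-values of
# an n-valued logic L_n (here n = 3, values {0, 0.5, 1}; the
# intermediate value is an illustrative assumption).
# J_k(S) is classically true iff S takes exactly the value k.

VALUES = (0, 0.5, 1)  # 1 designated, 0 anti-designated

def J(k, value):
    """Two-valued projection: True iff the formula has value k."""
    return value == k

# For every value v, exactly one J_k(v) holds -- this is what makes
# the axiom (J_k A & J_m A) => (k = m) sound:
for v in VALUES:
    assert sum(J(k, v) for k in VALUES) == 1

# J_1 and J_0 single out the designated / anti-designated values:
assert J(1, 1) and not J(1, 0.5) and not J(1, 0)
assert J(0, 0) and not J(0, 0.5)
```

The sketch makes visible why $J_{1}$ mediates between the many-valued and the two-valued reading: $J_{1}(S)$ behaves classically even when $S$ itself does not.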

11:00 | Second order properties of copied computational artefacts PRESENTER: Nicola Angius ABSTRACT. This paper provides a logical analysis of second order properties of computational artefacts preserved under copies as defined in (Angius and Primiero 2018). Properties like safety or liveness assume special significance in computer security and for the notion of malfunctioning. While definition and model checking of these properties are extensively investigated (Kupferman and Vardi 2001, Padon et al. 2018), the study of their preservation under copies is less considered in the literature. Another context of application is in the computer ethics debate on software property rights (Johnson 1998), translated to the question of which formal characteristics are or should be preserved by copies. Copies for computational artefacts are defined in (Angius and Primiero 2018) as set-theoretic relations holding between abstract machines x and y. For exact copies, behaviours prescribed by y are all and only the behaviours prescribed by x; for inexact copies, the behaviours of y are all, but not only, the behaviours of x; for approximate copies, the behaviours of y are some, but not only, the behaviours of x. Bisimulation and simulation are used to provide formal definitions of the three copy relations (Fokkink 2013). In this paper, we introduce CTL* temporal logic (Kröger and Merz 2008) for formulas of the form EX, EG, and EU (respectively: existential next, existential global and existential until) to which any other CTL* formula can be reduced using equivalence rules. We then analyse whether EX, EG, and EU formulas are preserved by exact, inexact, and approximate copies. We prove that any second order property is preserved by exact copies, since bisimulation implies CTL* equivalence. EX, EG, and EU formulas in positive form are satisfied by inexact copies, including liveness EU, provided that they admit infinite paths. 
They may or may not satisfy negations thereof, including safety ¬E¬U, so the copying machines need to be model-checked against those formulas to determine whether they satisfy the properties of interest of the copied machines. If y is an approximate copy of x, we prove that a definable subset of the behaviours prescribed by y preserves EX, EG, and EU properties expressed in negative form, provided that x and y only allow for finite paths. And a definable subset of EX, EG, and EU formulas in positive form satisfied by x is also satisfied by y, provided that both machines admit infinite paths. References Angius, N., & Primiero, G. (2018). The logic of identity and copy for computational artefacts. Journal of Logic and Computation, 28(6), 1293-1322. Fokkink, W. (2013). Introduction to Process Algebra. Springer Science & Business Media. Johnson, D. G. (1998). Computer Ethics, 4th edition. Pearson. Kröger, F., & Merz, S. (2008). Temporal Logic and State Systems. Springer. Kupferman, O., & Vardi, M. Y. (2001). Model checking of safety properties. Formal Methods in System Design, 19(3), 291-314. Padon, O., Hoenicke, J., Losa, G., Podelski, A., Sagiv, M., & Shoham, S. (2017). Reducing liveness to safety in first-order logic. Proceedings of the ACM on Programming Languages, 2(POPL), 26. |
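The containment pattern behind the three copy relations can be caricatured in Python with a machine's behaviours flattened to a set of traces. The real definitions in Angius and Primiero (2018) use simulation and bisimulation on labelled transition systems, so this is only an illustrative approximation of the set-theoretic shape of each relation.

```python
# Set-theoretic caricature of the copy relations of Angius &
# Primiero (2018), with "behaviours" flattened to trace sets.
# Illustrative only: the actual definitions use (bi)simulation.

def exact_copy(x, y):
    # y prescribes all and only the behaviours of x
    return y == x

def inexact_copy(x, y):
    # y prescribes all, but not only, the behaviours of x
    return x < y  # proper subset

def approximate_copy(x, y):
    # y prescribes some, but not only, the behaviours of x
    # (reading "some" as: a shared part, a missing part of x,
    #  and extra behaviours in y -- an interpretive assumption)
    return bool(x & y) and not (x <= y) and bool(y - x)

x = {'a', 'b'}
assert exact_copy(x, {'a', 'b'})
assert inexact_copy(x, {'a', 'b', 'c'})
assert approximate_copy(x, {'a', 'c'})
```

Even this crude picture shows why preservation results differ by relation: an exact copy has exactly x's traces, an inexact copy only gains traces, and an approximate copy both gains and loses them.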

11:30 | Organisations and variable embodiments ABSTRACT. Organisations are peculiar entities: during their lifetime, they can undergo many changes, like acquiring or losing members, changing their organisational chart, changing the rules that regulate their activities and even changing the goal at which such activities aim. Nevertheless, there are many situations in which we refer to these apparently different entities as “the same organisation”. What are the criteria that allow us to re-identify organisations through time and changes? Can such criteria be considered as characterising the identity of organisations? These are the questions at the basis of an ontological analysis of the identity and persistence of organisations. In this contribution, we first analyse organisations and their elements and components at a given time, and then we examine some of the possible organisational changes. We leverage Kit Fine’s theory of rigid and variable embodiment and show how these notions may be used to represent organisations, at a time and through change respectively. In attempting to specialise Fine’s theory to the case of organisations, we propose the (history of the) decisions taken by its members as what “glues together” successive states of an organisation (an organisation at different times) to form the organisation as an evolving whole, thus making this the element that drives its re-identification through time. Finally, to exemplify our approach, we sketch a simple model in the situation calculus. |

11:00 | A formalization of logic and proofs in Euclid’s geometry ABSTRACT. According to Paul Bernays and other scholars (e.g. Ian Mueller), Euclid's geometry is a theory of constructions, in the sense that the geometrical figures are considered to be constructed entities. Euclid's geometry thus stands opposed to contemporary axiomatic theories (e.g. Hilbert's geometry), which proceed from a system of objects fixed from the outset and just describe the relationships holding between these objects. The aim of this talk is to provide a formal and logical analysis of Euclid's constructive practice as it emerges from Book I of the Elements. In the first part of the talk we argue that this practice cannot be captured by the standard methods of intuitionistic logic, like witness extraction for existential formulas, since in Euclid's Elements there is nothing like a fixed domain of quantification. On the contrary, it is the constructive activity itself that allows one to generate, step by step, the domain of the theory. In order to give a formal and precise analysis of this point, in the second part of the talk we study the proof methods used in Euclid's Elements. We propose a reconstruction of these methods, according to which postulates correspond to production rules (acting on terms and) allowing one to introduce new objects starting from previously given ones. This is done by means of primitive functions corresponding to the actions of fixing a point, drawing a straight line, and drawing a circle, respectively. We argue that a combination of these production rules corresponds to a proof allowing one to solve a problem, i.e. to show that a certain construction is admissible from other primitive ones. The constructed objects are considered to be representable by diagrams, which in turn can be used to extract some information concerning the constructions (e.g. diagrams give evidence for the incidence between the two circles used to prove Proposition I.1). 
Moreover, in order to demonstrate that the constructed objects possess certain specific properties, a method for keeping track of the relationships between the entities used during the construction process is proposed. This method consists in labelling proofs by means of relational atoms and combinations of such relational atoms. The language that we use for our formalization of Euclid's constructive practice is kept as minimal as possible. This marks some crucial differences from other existing formal reconstructions of Euclid's geometry (e.g. those by Michael Beeson or by Jeremy Avigad, Edward Dean and John Mumma). On the one hand, we claim that no identity relation is needed for points, lines, and circles; identity is used only for abstract objects, like angles. On the other hand, we claim that no negation operator (as a propositional operator) is needed when reasoning about the properties of the constructed objects. The use of dual (incompatible) predicates is already sufficient (e.g. "being strictly smaller in length" and "having the same length"). The logic of the Elements is thus taken to be weaker than the usual standard logics, like intuitionistic or classical logic. |
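The idea that postulates act as production rules generating the domain step by step can be sketched in Python; the names and the term representation below are illustrative assumptions, not the formal system proposed in the talk.

```python
# Toy rendering of Euclid's postulates as production rules: each
# step introduces a new term (object) from previously generated
# ones, so the domain grows with the construction itself.
# Names and representation are illustrative assumptions only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    name: str

@dataclass(frozen=True)
class Line:       # Postulate 1: draw a line between two points
    a: Point
    b: Point

@dataclass(frozen=True)
class Circle:     # Postulate 3: draw a circle given centre and a point on it
    centre: Point
    through: Point

# A Proposition I.1-style construction: starting from two fixed
# points, build an equilateral triangle step by step.
domain = []
a, b = Point('a'), Point('b')
domain += [a, b]
domain.append(Line(a, b))            # the base segment
domain.append(Circle(a, b))          # circle centred at a through b
domain.append(Circle(b, a))          # circle centred at b through a
c = Point('c')                       # fixing an intersection point
domain.append(c)
domain += [Line(a, c), Line(b, c)]   # completing the triangle

assert len(domain) == 8              # the domain was generated, not given
```

Nothing here is quantified over in advance: the "domain" exists only as the trace of applied production rules, which is the point the talk presses against a fixed domain of quantification.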

11:30 | Gödel's and Post's Proofs of Incompleteness PRESENTER: Mate Szabo ABSTRACT. In the 1920s, Emil Post worked on the questions of mathematical logic that would come to dominate the discussions of the 1930s: incompleteness and undecidability. To a remarkable degree, Post anticipated Gödel’s incompleteness theorem for 'Principia Mathematica', but did not attempt to publish his work at the time, for various reasons. Instead, he submitted it for publication in 1941, adding an introduction and footnotes discussing how his results relate to those of Gödel, Turing and Church. In the Introduction, written in 1941, Post declared that “[t]here would be little point in publicizing the writer's anticipation [...] merely as a claim to unofficial priority.” Although he saw his own work as “fragmentary by comparison”, he emphasized that “with the 'Principia Mathematica' as a common starting point, the roads followed towards our common conclusions are so different that much may be gained from a comparison of these parallel evolutions.” However, his submission was declined for publication. In the rejection of March 2, 1942 Weyl wrote: “I have little doubt that twenty years ago your work, partly because of its then revolutionary character, did not find its due recognition. However, we cannot turn the clock back; in the meantime Gödel, Church and others have done what they have done, and the American Journal is no place for historical accounts”. Post's paper, 'Absolutely Unsolvable Problems and Relatively Undecidable Propositions - Account of an Anticipation', was published only posthumously in Davis (1965). Although it might be claimed that both Gödel and Post formalize the same informal idea, a diagonal argument, we agree with Post that there is a lot to learn from a careful comparison of the two, quite different, formal proofs. 
Examining their proofs, presented in strikingly different formal frameworks, we distill and emphasize two key dissimilarities between Gödel's and Post's proofs of incompleteness. The first concerns the scope and generality of their proofs. Gödel was dissatisfied with the specificity of his (1931) result, i.e. its being tied to 'Principia Mathematica' and related systems. Post, on the other hand, took a purely syntactic approach which allowed him to characterize a much more general notion of formal system, which he showed to be affected by the incompleteness phenomena. The second dissimilarity arises from the fact that Post was first and foremost interested in the decidability of symbolic logics, and of 'Principia Mathematica' in particular. As a consequence he arrived at incompleteness as a corollary to undecidability. This “detour”, compared to Gödel's more direct proof of incompleteness, convinced Post that his characterization of formal systems is not only very general, but is the correct characterization. The argument that this characterization is the correct one mirrors how Kleene convinced himself that lambda-definability was the right characterization of ‘effective calculability’. |

12:00 | Gödel and Carnap on the impact of incompleteness on formalization and understanding ABSTRACT. On the basis of the proof of the incompleteness theorem in 1931, Gödel and Carnap both drew conclusions about the open character of mathematics, noting either that “whichever system you chose (...) there is one more comprehensive” (Gödel, On the present situation of the foundations of mathematics, 1933) or that “mathematics requires an infinite series of always richer systems” (Carnap, Logische Syntax der Sprache, 1934, §60). However, such similar formulations concerning the immediate consequences of the incompleteness theorem are misleading, because Gödel’s and Carnap’s respective reactions to incompleteness were in fact fundamentally different. In this paper, it will be argued 1) that incompleteness had a deep impact not only on the general issue of the limitations of formalisms but, more precisely, on both Gödel’s and Carnap’s conceptions of formalization and understanding — though in completely different directions; 2) that Gödel’s and Carnap’s foregoing remarks on the open character of mathematics do not provide a sufficient account of their views on the impact of incompleteness on formalization and understanding (and it will be shown why this is so); 3) that a full account of their views on this point requires a distinction between kinds of incompleteness, and that this illuminates their divergent conceptions of mathematics and philosophy in a new way. On the basis of this analysis, it will be shown how a new interpretation of Carnap’s reading of incompleteness can be provided. On Gödel’s side, a satisfactory interpretation of his views on the incompleteness of theories and its impact on formalization and understanding requires a distinction between two kinds of incompleteness, which depend on the character of the independent sentences. 
On the one hand, consider for example the Gödel sentence G, which Gödel’s incompleteness theorem proves to be neither provable nor refutable in T (for any consistent and axiomatizable extension T of Robinson’s arithmetic); as Gödel himself argues, the arithmetical sentence G may be shown to be true (true in the standard interpretation of the language of T), but this sentence is completely uninteresting from the viewpoint of ordinary mathematics (although it has a tremendous logical significance). On the other hand, consider a sentence such as Cantor’s Continuum Hypothesis (CH), which is independent of a classical theory of sets such as ZFC; CH is mathematically highly significant although it is not known to be true by mathematical standards. The fact that Gödel’s reactions to these different kinds of incompleteness were different is highly significant for his views on mathematics, on the impact of incompleteness on formalization and understanding, and on the connection between formalization and mathematical practice. As for Carnap, it is well known that, like other logicians, he tried to circumvent incompleteness by devising a method for formalization to which Gödel’s theorem would not apply. His idea was to have recourse to infinitary methods for the definition of such concepts as “consequence” and “analyticity”, which enabled him to prove a kind of Carnapian completeness theorem, to the effect that every logical (including mathematical) sentence in some language L for the reconstruction of science is either analytic or contradictory. This is possible because “analytic” does not reduce to “provable” and “contradictory” does not reduce to “refutable”. 
With this result in mind, what is not easy to understand is the impact Gödel’s incompleteness theorem really had on Carnap’s philosophy of mathematics in the Logical Syntax of Language and what the actual significance of Carnap’s remark to the effect that “mathematics requires an infinite series of always richer systems” is. A new interpretation will be proposed which will show that Carnap’s reaction to Gödel’s incompleteness theorem and its impact on formalization and understanding has to take into account the connection between formalization and the actual development not only of mathematics but of science in general (and why it is so). |

11:00 | Algebraic Semantics for Inquisitive Logics ABSTRACT. Inquisitive semantics is a currently intensively developing approach to the logical analysis of questions (see Ciardelli, Groenendijk, Roelofsen 2018). Its basic idea is that, to be able to provide a semantic analysis of questions, one has to go beyond the standard approach that identifies sentential meaning with truth conditions. That approach is applicable only to declarative sentences, not to questions, which have no truth conditions. The concept of ``truth in a world'' is replaced by a concept of ``support in an information state''. Support is suitable for a uniform logical analysis of both statements and questions. Inquisitive semantics, as just described, is a relational semantics -- it is based on a relation of support between information states and sentences. Although this semantics has been explored thoroughly in the last decade, so far not much attention has been paid to the algebraic aspects of inquisitive logic (with the exceptions of Frittella et al. 2016 and Roelofsen 2013). But it is evident that the models of inquisitive semantics generate interesting algebras of propositions, and it would be desirable to understand better the nature of these algebraic structures. This paper is intended as a contribution to the algebraic study of inquisitive propositions. We will not focus solely on the algebras related to basic inquisitive logic, i.e. the logic generated by standard inquisitive semantics. Our perspective will be more general: we will define a class of algebras that are suitable for a broad (in fact, uncountably large) class of propositional superintuitionistic inquisitive logics introduced in (Author, 2016). (Standard inquisitive logic is the strongest logic in this class, having a role similar to that of classical logic among superintuitionistic logics.) We will call such algebras ``inquisitive Heyting algebras''. 
We will explain how questions are represented in these structures (prime elements represent declarative propositions, non-prime elements represent questions, join is a question-forming operation) and provide several alternative characterizations of inquisitive algebras. For example: a Heyting algebra is inquisitive iff (i) it is isomorphic to the algebra of finite antichains of a bounded implicative meet semi-lattice; iff (ii) it is join-generated by its prime elements and the set of prime elements is closed under meet and relative pseudo-complement; iff (iii) its prime filters and filters generated by prime elements coincide and prime elements are closed under relative pseudo-complement. We will also explain how exactly inquisitive algebras are related to inquisitive logics. References: Author (2016) Ciardelli, I., Groenendijk, J., Roelofsen, F. (2018). Inquisitive Semantics. Oxford University Press. Frittella, S., Greco, G., Palmigiano, A., Yang, F. (2016). A Multi-type Calculus for Inquisitive Logic. In: Väänänen, J., Hirvonen, Å., de Queiroz, R. (eds.), Proceedings of the 23rd International Workshop on Logic, Language, Information, and Computation, Springer, pp. 215-233. Roelofsen, F. (2013). Algebraic foundations for the semantic treatment of inquisitive content, Synthese, 190, Supplement 1, 79-102. |
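To make the support semantics described above concrete, here is a small illustrative sketch of our own (not taken from the abstract): a brute-force implementation of basic inquisitive support over a two-atom model. The function names, the tuple encoding of formulas, and the trick of encoding negation via a designated absurd atom that holds at no world are all our own conventions.

```python
from itertools import combinations

# Worlds are frozensets of the atoms true at them; an information state is a
# frozenset of worlds.  Support clauses (Ciardelli, Groenendijk, Roelofsen 2018):
#   s |= p       iff p is true at every world in s
#   s |= A & B   iff s supports both conjuncts
#   s |= A v B   iff s supports A or s supports B   (inquisitive disjunction)
#   s |= A -> B  iff every substate of s supporting A also supports B

def substates(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def supports(s, phi):
    op = phi[0]
    if op == "atom":
        return all(phi[1] in w for w in s)
    if op == "and":
        return supports(s, phi[1]) and supports(s, phi[2])
    if op == "or":
        return supports(s, phi[1]) or supports(s, phi[2])
    if op == "->":
        return all(supports(t, phi[2]) for t in substates(s) if supports(t, phi[1]))
    raise ValueError(op)

worlds = [frozenset(x) for x in [set(), {"p"}, {"q"}, {"p", "q"}]]
p = ("atom", "p")
# Negation encoded as p -> absurd, where "absurd" holds at no world.
notp = ("->", p, ("atom", "absurd"))
question_p = ("or", p, notp)      # the polar question ?p

assert not supports(frozenset(worlds), question_p)               # ignorance: ?p unsettled
assert supports(frozenset([worlds[1], worlds[3]]), question_p)   # "p is true" settles ?p
assert supports(frozenset([worlds[0], worlds[2]]), question_p)   # "p is false" settles ?p
```

The asserts show the intended behaviour: a state supports the polar question ?p exactly when all of its worlds agree on p, i.e. when the state settles the question.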

11:30 | Substructural propositional dynamic logic ABSTRACT. Propositional dynamic logic, PDL, is a well known modal logic with applications in formal verification of programs, dynamic epistemic logic and deontic logic, for example [2]. More generally, PDL can be seen as a logic for reasoning about structured actions modifying various types of objects; examples of such actions include programs modifying states of the computer, information state updates or actions of agents changing the world around them. In this contribution we study a version of PDL where the underlying propositional logic is a weak substructural logic in the vicinity of the full distributive non-associative Lambek calculus with a weak non-involutive negation. Our main technical result is a completeness proof for the logic with respect to a class of modal Routley-Meyer frames. The motivation for this endeavor is to provide a logic for reasoning about structured actions that modify situations in the sense of Barwise and Perry [1]; the link being the informal interpretation of the Routley-Meyer semantics for substructural logics in terms of situations [3]. In the contribution we report on our partial progress in this area (the version of the Lambek calculus used in our paper is weaker than the logic related to situation semantics in [3]) and comment on the problems that remain to be solved. References [1] Barwise, J. and Perry, J.: Situations and Attitudes. MIT Press, 1983. [2] Harel, D., Kozen, D. and Tiuryn, J.: Dynamic Logic. MIT Press, 2000. [3] Mares, E.: Relevant Logic: A Philosophical Interpretation. Cambridge University Press, 2004. |
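For readers unfamiliar with classical PDL, the standard relational semantics that the abstract starts from can be sketched as a brute-force model checker over a finite Kripke model. This is our illustration of ordinary PDL only; the substructural variant with Routley-Meyer frames discussed in the talk is not modeled here, and all names are our own.

```python
# Programs: atomic names, sequence ("seq"), choice ("cup"), iteration ("star").
# Formulas: atoms, "not", "and", and ("box", program, formula).

def run(prog, R, states):
    """Return the accessibility relation (set of state pairs) denoted by prog."""
    op = prog[0]
    if op == "atom":
        return R[prog[1]]
    if op == "seq":       # relational composition
        r1, r2 = run(prog[1], R, states), run(prog[2], R, states)
        return {(x, z) for (x, y1) in r1 for (y2, z) in r2 if y1 == y2}
    if op == "cup":       # nondeterministic choice: union
        return run(prog[1], R, states) | run(prog[2], R, states)
    if op == "star":      # reflexive-transitive closure, computed as a fixpoint
        r = {(s, s) for s in states} | run(prog[1], R, states)
        while True:
            r2 = r | {(x, z) for (x, y1) in r for (y2, z) in r if y1 == y2}
            if r2 == r:
                return r
            r = r2
    raise ValueError(op)

def holds(s, phi, R, V, states):
    op = phi[0]
    if op == "atom":
        return s in V[phi[1]]
    if op == "not":
        return not holds(s, phi[1], R, V, states)
    if op == "and":
        return holds(s, phi[1], R, V, states) and holds(s, phi[2], R, V, states)
    if op == "box":       # [a]phi: phi holds after every execution of a
        return all(holds(t, phi[2], R, V, states)
                   for (x, t) in run(phi[1], R, states) if x == s)
    raise ValueError(op)

# A three-state model: program "a" steps 0 -> 1 -> 2; "halted" holds at 2 only.
states = {0, 1, 2}
R = {"a": {(0, 1), (1, 2)}}
V = {"halted": {2}}

assert holds(1, ("box", ("atom", "a"), ("atom", "halted")), R, V, states)
# [a*]halted fails at 0: zero iterations of a leave us at 0, where halted is false.
assert not holds(0, ("box", ("star", ("atom", "a")), ("atom", "halted")), R, V, states)
```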

12:00 | Residuals and Conjugates in Positive Substructural Logic PRESENTER: Andrew Tedder ABSTRACT. In substructural, fuzzy, and many other logics, relations of residuation between conjunction-like and conditional-like connectives are central (Ono et al. 2007). For instance, in the context of frame semantics, relations of residuation between connectives allow those connectives to be interpreted by a single relation, as has been studied in the context of Gaggle Theory (Bimbó & Dunn 2008). In Boolean algebras with operators (BAOs), residuation has a second face in the form of relations of conjugation (Jónsson & Tsinakis 1993; Mikulás 1996) – the residuals of an operator are definable in terms of its conjugates, and vice versa, by means of Boolean negation. An immediate result of this is that in BAOs, a collection of operations all conjugated with respect to each other may be interpreted by a single relation in the frame semantics. This talk concerns relations of residuation and conjugation in a positive context – in particular, in the context of logics extending the positive non-associative Lambek calculus with distributive lattice operations. This logic is the extension of distributive lattice logic by means of a binary operator – sometimes called fusion – with left and right residuals, where fusion is assumed only to be monotone (or, equivalently when the residuals are present, to distribute over (and into) the lattice join). The extension of this logic with which we’re concerned is that resulting from the addition of two additional fusion-like connectives, where each fusion is conjugated with respect to the others, and left and right residuals for each additional fusion. 
Our concern with this logic is motivated by the ternary relation frame semantics for the Lambek calculus – since the residuals and conjugates of an operator can be interpreted by means of one accessibility relation in BAOs, it is an interesting question whether the same is true in a positive context. Of particular interest here is that adding conjugates, and their residuals, to the language would allow for more expressive power in characterising ternary relation frames – in the language including the conjugates, simple frame correspondents can be found for some classes of frames which have otherwise only been characterised by means of negation. Furthermore, there are interesting connections to the semantics for other substructural and relevant logics. This talk presents the logic in question as characterised by a class of ternary relation models – then we go on to consider the question of completeness. References Bimbó & Dunn, Generalised Galois Logics: Relational Semantics of Nonclassical Logical Calculi, CSLI Publications, 2008. Jónsson & Tsinakis, Relation Algebras as Residuated Boolean Algebras, Algebra Universalis, Volume 30, 1993, pp. 469–478. Mikulás, Complete Calculus for Conjugated Arrow Logic, in Arrow Logic and Multi-Modal Logic, ed. Marx, Polos, and Masuch, CSLI, 1996. Ono, Galatos, Jipsen, and Kowalski, Residuated Lattices: An Algebraic Glimpse at Substructural Logics, Elsevier, Studies in Logic Series, Volume 151, 2007 |
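The residuation laws at issue can be seen concretely in the algebra of binary relations on a small set, where fusion is relational composition. This is a standard textbook example, not the abstract algebraic setting of the talk, and the function names below are our own:

```python
import random
from itertools import product

U = range(3)
PAIRS = list(product(U, U))

def comp(R, S):
    """Relational composition R;S, playing the role of fusion."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

def rres(R, S):
    """Right residual R \\ S: the largest X with R;X <= S."""
    return {(b, c) for (b, c) in PAIRS
            if all((a, c) in S for (a, b2) in R if b2 == b)}

def lres(S, R):
    """Left residual S / R: the largest X with X;R <= S."""
    return {(a, b) for (a, b) in PAIRS
            if all((a, c) in S for (b2, c) in R if b2 == b)}

# Brute-force check of the residuation laws on random relations:
#   R;X <= S  iff  X <= R\S      and      X;R <= S  iff  X <= S/R
random.seed(0)
for _ in range(200):
    R = {q for q in PAIRS if random.random() < 0.4}
    S = {q for q in PAIRS if random.random() < 0.4}
    X = {q for q in PAIRS if random.random() < 0.4}
    assert (comp(R, X) <= S) == (X <= rres(R, S))
    assert (comp(X, R) <= S) == (X <= lres(S, R))
```

In the Boolean setting the abstract describes, the conjugates of composition are likewise definable from these residuals via complementation; the positive fragment, where that detour is unavailable, is exactly what the talk investigates.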

Organizer: Joeri Witteveen

This is a symposium proposal with a PANEL DISCUSSION format. No individual symposium papers will be submitted.

Prospective participants: Mieke Boon, Hasok Chang, Hans Halvorson, Mikkel Johansen, Alan Love, Roy Wagner

Abstract:

The teaching of history and philosophy of science occupies a somewhat unusual position in many university curricula. It is typically offered to philosophy students as part of their program, but is sometimes also part of the science curriculum. Often philosophy of science courses are electives, but at some (European) universities, history and philosophy of science is a required course for science students and forms part of the core curriculum. The aim of this panel discussion is to reflect on the practice of teaching HPS to science students. This is a particularly fitting topic for discussion at CLMPST 2019 as it takes the conference theme, “Bridging across academic cultures,” beyond research, into teaching.

The symposiasts will share and reflect on their experience with teaching HPS to science students in the formal, physical, life, and engineering sciences. The session format will be open-ended and allow for a broad variety of inputs and contributions. It will provide room for sharing personal experiences, for reflecting on the institutional and organizational embedding of teaching science students, and for presenting sample teaching materials. The audience of the session is welcome to join the discussion, which will touch on the following questions, among others.

(1) What makes teaching science students different from teaching philosophy students and how should we (historians and philosophers) adapt to an audience of practitioners of a field of study that we are reflecting on? The goals of teaching science students will often be somewhat different from teaching philosophy students, which could affect the selection of topics, the teaching format and styles, and the modes of examination.

(2) How can the teaching of philosophy of science to science students benefit from recent developments in integrated HPS, practice-oriented philosophy of science, and socially relevant philosophy of science? The increasing attention to case studies and scientific practice in contemporary HPS research is a rich source of teaching materials. Based on particular examples, panel members will discuss how these can be packaged and processed to make them suitable for teaching.

(3) What kind of teaching materials are useful for teaching HPS to science students? Many history and philosophy of science textbooks are written without an audience of scientists in mind, but some newer textbooks are written specifically for training scientists. If used, what role should a textbook occupy? What is the proper role of other teaching materials (articles, dedicated webpages, podcasts, vlogs) for exploring specific topics and examples? We discuss advantages and disadvantages of working with different kinds of textbooks and with collections of articles.

(4) What is the added value of having someone trained in HPS teach a course on the history and philosophy of a scientific subject? Does HPS teaching occupy a special niche, which HPS teachers fill better than specialists in the field, and if this is claimed, what is the evidence for it? Reflection on these questions will be crucial to explain the importance of educational expertise in HPS to students and program managers.

(5) What are the best practices for co-teaching a philosophy of science course with a scientist? We consider best practices for developing co-taught courses and discuss how different academic backgrounds and teaching styles can be complementary and in conflict.

(6) What, if any, are the essential ingredients for a course in HPS for scientists? Should a brief twentieth-century history of philosophy of science from (say) logical empiricism to Feyerabend be part of any philosophy of science course, or should developments in the particular science under discussion guide the selection of topics? And what about teaching students about their own role as scientists: should an HPS course make space for discussion of responsible conduct of research, integrity, and social responsibility?

The outcomes of the panel discussion will be used in a project led by the University of Copenhagen to inventory, organize, and disseminate teaching materials and information about best practices on teaching philosophy of science to science students. To this end, we aim to open a web portal for philosophy of science teachers in the near future.

11:00 | A Scientific-Understanding Approach to Evo-Devo Models PRESENTER: Rodrigo Lopez-Orellana ABSTRACT. The aim of this paper is to characterize Evolutionary Developmental Biology (evo-devo) models in order to show the role they fulfill in biology, especially how they can provide functional elements which would enrich the theoretical explanation of large-scale evolution. For this purpose, we analyze two evo-devo models: (i) the Polypterus model, which explains anatomical and behavioral changes in the evolution of ancient stem tetrapods (Standen et al. 2014), and (ii) the Lepidobatrachus model, which accounts for the processes of embryogenesis and morphogenesis of early vertebrates (Amin et al., 2015). In the last two decades, evo-devo has represented an interesting shift in the way we understand evolution, mainly driven by experimental research in Causal Embryology and Genetics of Development. At the same time, evo-devo has also inspired new challenges in the study of scientific explanation, modeling, and experimentation, as well as in the ontological commitments that scientists assume when they undertake theoretical generalizations. Specifically, explanatory models in evo-devo attempt to represent emergent phenomena, such as phenotypic plasticity. These complex phenomenal relationships, which are of a causal-functional kind, prevent the analysis of scientific explanation from being restricted to the syntactic structure or the semantic content of the theories and models. Thus, we assert that the notion of understanding is required in order to account for the salient role that models play in evo-devo (Diéguez 2013; de Regt et al. 2009). Understanding belongs to a cognitive but also pragmatic domain: the analysis of models must include issues such as the scientist's intentions (to explain/understand a phenomenon) (Knuuttila and Merz 2009), and the material and abstract resources she uses to achieve her goals. 
Thereby, from a pluralist and pragmatic approach to the meaning and use of models in science, our ultimate goal is to provide some minimum criteria that evo-devo models must fulfill to provide genuine or effective understanding, conceived as a state of a cognitive subject that nevertheless refers to the utility and manipulability of theories and models as evaluated by the scientific community. References: Amin, N. et al. (2015). Budgett’s frog (Lepidobatrachus laevis): a new amphibian embryo for developmental biology. Developmental Biology, 405 (2), 291-303. Diéguez, A. (2013). La función explicativa de los modelos en biología. Contrastes. Revista Internacional de Filosofía, (18), 41-54. de Regt, H. W., Leonelli, S., & Eigner, K. (2009). Focusing on scientific understanding. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific Understanding: Philosophical Perspectives (pp. 1-17). Pittsburgh: University of Pittsburgh Press. Knuuttila, T., & Merz, M. (2009). Understanding by Modeling: An Objectual Approach. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific Understanding: Philosophical Perspectives (pp. 146-168). Pittsburgh: University of Pittsburgh Press. Elgin, C. Z. (2009). Is Understanding Factive? In A. Haddock, A. Millar, & D. Pritchard (Eds.), Epistemic Value (pp. 322-330). Oxford University Press. Standen, E. M., Du, T. Y., & Larsson, H. C. E. (2014). Developmental Plasticity and the Origin of Tetrapods. Nature, 513, 54-58. |

11:30 | How do evolutionary explanations explain? ABSTRACT. In this talk I address the explanatory scope of evolutionary theory and the structure of evolutionary explanations. These issues are in need of clarification because of (at least) two factors. First, evolutionary theory has been undergoing continuous, profound change from Darwin’s formulation via late nineteenth-century Neo-Darwinism to the mid-twentieth century Modern Synthesis, to a possible Extended Synthesis that is being considered today. These changes, which are likely to continue after the current debate on a possible Extended Synthesis has been settled, affect both what evolutionary theory is thought to explain and the ways in which it can perform explanatory work. Second, investigators in areas outside biology, such as evolutionary economics and the various evolutionary programs in the social sciences (e.g., Aldrich et al., 2008; Hodgson & Knudsen, 2010), are increasingly attempting to apply evolutionary theory to construct scientific explanations of non-biological phenomena, using different kinds of evolutionary models in different domains of research. This raises a number of questions: Exactly how much can be explained by applying an evolutionary framework to non-biological systems that differ widely from biological ones? Can applications of evolutionary theory outside biology achieve explanatory force similar to applications within biology? What – if any – basic explanatory structure unifies the different evolutionary models used in biology and the different areas of the social sciences? And so on. I will try to achieve more clarity on these questions by treating them as a set of questions about the ontology of evolutionary phenomena. My claim is that practices of applying evolutionary thinking in non-biological areas of work can be understood as what I call “ontology-fitting” practices. 
For an explanation of a particular phenomenon to be a genuinely evolutionary explanation, the explanandum’s ontology must match the fundamental ontology of evolutionary phenomena in the biological realm. This raises the question what elements this latter ontology consists of. However, there is no unequivocal answer to this question. There is ongoing discussion about the question what the basic elements in the ontology of biological evolutionary phenomena (such as the units of selection, the units of replication, etc.) are and how these are to be conceived of. Therefore, practitioners from non-biological areas of work cannot simply take a ready-for-use ontological framework from the biological sciences to fit their phenomena into. Rather, they pick elements from the biological evolutionary framework that seem to fit their phenomena, disregard other elements, and construct a framework that is specific to the phenomena under study. By examining cases of such “ontology fitting” we can achieve more clarity about the requirements for using evolutionary thinking to explain non-biological phenomena, as well as about the question how evolutionary explanations explain. References Aldrich, H.A., Hodgson, G.M., Hull, D.L., Knudsen, T., Mokyr, J. & Vanberg, V.J. (2008): ‘In defence of generalized darwinism’, Journal of Evolutionary Economics 18: 577-596. Hodgson, G.M. & Knudsen, T. (2010): Darwin’s Conjecture: The Search for General Principles of Social and Economic Evolution, Chicago & London: University of Chicago Press. |

11:00 | Theoretical equivalence and special relativity PRESENTER: Joshua Babic ABSTRACT. Quine [1975] proposes an attractive criterion - later refined by Barrett and Halvorson [2016] - for when two first order systems count as formulations of the same [scientific] theory: a specific sort of translation function must exist between the two languages so as to map one set of axioms into a logical equivalent of the other. Barrett and Halvorson [2016] also ask for a reverse function that returns for each formula a logical equivalent of the original. Elementary philosophical considerations - about the way in which the reference of theoretical terms gets fixed - suggest that equivalent theories are simply notational variants of each other. No rational preference can be had and no distinction about ontology can be made between pairs of intertranslatable theories. Unfortunately, few of the interesting cases of theories believed to be equivalent - such as Lagrangian and Hamiltonian mechanics, matrix and wave quantum mechanics, or the manifold and the algebraic formulations of general relativity - have been examined under this strict notion of equivalence. A lack of axiomatisations is mainly to blame. In the present work we will begin by liberalizing the class of translation functions admitted by Quine [1975] to include mappings that send a predicate of a fixed arity to predicates of a fixed larger arity. We illustrate the need for this extension with some mathematical examples that we will employ later: the interpretation of the theories of matrices and of polynomials of a fixed degree in the theory of their [field of] coefficients, and also that of the rational numbers in that of the integers. We will then consider two systems of axioms for light rays moving in an empty Minkowski spacetime. One is a revision of the system of [Andréka, Németi et al. 2011], in which we correct what appear to us to be several mistakes in the formulation. 
This first system does not appear, at least at first, to assume the existence of spacetime points; it merely assigns coordinate values to physical objects. It deals mainly with frames of reference and the transformations between frames. The second set of axioms is our own and attempts to formalize the geometric account of spacetime given in [Maudlin 2012] and [Malament, unpublished]. This second system assumes the existence of spacetime points, but its vocabulary is purely geometric and it makes no reference to coordinates or to numbers. We will present the axioms of both systems and then proceed to prove their equivalence by constructing an appropriate translation manual. References Andréka, H., Madarász, J. X., Németi, I. and Székely, G., On logical analysis of relativity theories. Hungarian Philosophical Review 54,4 (2011): 204-222. Thomas W. Barrett and Hans Halvorson. Glymour and Quine on Theoretical Equivalence. J Philos Logic (2016) 45: 467. David Malament, Geometry and spacetime, unpublished notes. Tim Maudlin. Philosophy of physics: space and time. Princeton University Press. 2012. Willard V. Quine, On Empirically Equivalent Systems of the World. Erkenntnis (1975): 313-28. |
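The interpretation of the rationals in the integers mentioned in the abstract can be sketched in a few lines: a rational becomes a pair of integers with a definable equivalence relation standing in for genuine identity, which also illustrates why a translation may need to send predicates to predicates of larger arity. This is the standard textbook construction, not the authors' own formalism:

```python
# A rational is represented by a pair (p, q) of integers with q != 0.
# Identity of rationals is translated into a definable equivalence on pairs,
# so a unary predicate over rationals becomes a binary predicate over integers.

def eq(a, b):
    """(p1, q1) ~ (p2, q2)  iff  p1*q2 == p2*q1."""
    return a[0] * b[1] == b[0] * a[1]

def add(a, b):
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

def mul(a, b):
    return (a[0] * b[0], a[1] * b[1])

half, third = (1, 2), (1, 3)
assert eq(add(half, third), (5, 6))   # 1/2 + 1/3 = 5/6
assert eq(mul(half, third), (1, 6))   # 1/2 * 1/3 = 1/6
assert eq((2, 4), half)               # distinct pairs, one and the same rational
```

The last assert is the philosophically relevant point: the target theory's identity relation is not the identity of the interpreting structure, which is exactly the kind of mismatch a liberalized notion of translation must accommodate.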

11:30 | Dynamics and Chronogeometry in Spacetime Theories ABSTRACT. A recent debate on the foundations of special relativity (SR) concerns the direction of an alleged arrow of explanation between Minkowski structure and Lorentz covariant dynamics. Harvey Brown (2005) argues that the latter explains the former, whereas Michel Janssen (2009) argues that the arrow points in the opposite direction. I propose a different view concerning the relation between dynamics and spacetime structure, drawing a lesson from Helmholtz’s (1977) work on the epistemology of geometry. Helmholtz’s insight was that for the question of the geometric structure of physical space to make sense at all, dynamical considerations must be involved from the outset. If the notions of congruence and rigidity are not previously defined and operationalized in terms of dynamical principles, the measurements that can tell about the geometric structure of physical space are neither defined nor possible. Geometric structure cannot refer to the physical world unless dynamical principles define congruence and rigidity. The converse is also true: dynamics makes definite sense only against a geometric structure background. This is why measurements with rigid bodies constitute empirical evidence for a certain geometric structure in the first place. If the dynamics of measuring rods were geometrically neutral, measurements would be idle with respect to the geometric structure of physical space. I illustrate this point by comparing SR and Lorentz’s ether theory. The mathematical form of the dynamical laws in both theories is the same, but they have a different meaning. In Lorentz’s theory, ∆x' = ∆x/γ refers to the longitudinal contraction of an object that moves with respect to the ether with velocity v. In SR the formula refers to different measurements of the length of the same object in two frames in relative motion. This difference is grounded in the fact that ∆x' = ∆x/γ is set up on different chronogeometric structures. 
For the ether theory to be able to pick a privileged ether-rest frame, Galilean spacetime must be the chronogeometric background. On the other hand, in SR the formula is about kinematics in different frames, since the background chronogeometric structure is Minkowski spacetime. If the law were chronogeometrically neutral we could not assign it either of the two meanings, or any physical meaning at all. In conclusion, for a chronogeometric structure to have a physical meaning, dynamical principles that operationalize it are necessary, and if dynamical laws are to have a definite physical meaning, they must be set up on a chronogeometric structure background. Thus, the connection between them cannot be explanatory, and the debate mentioned above is dissolved. This thesis is a development of the argument in (Acuña 2016): there it is argued that in SR Minkowski spacetime and Lorentz covariance are inextricably connected. Here I argue that the same relation holds between spacetime structure and dynamics in spacetime theories in general. REFERENCES Acuña, P. (2016). ‘Minkowski Spacetime and Lorentz Invariance: the cart and the horse or two sides of a single coin?’ SHPMP 55: 1-12. Brown, H. (2005). Physical Relativity. OUP. Helmholtz, H. (1977). Hermann von Helmholtz’s Epistemological Writings. Reidel. Janssen, M. (2009). ‘Drawing the Line between Kinematics and Dynamics in Special Relativity’. SHPMP, 40: 26-52. |
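A quick numerical illustration of our own for the formula discussed above, ∆x' = ∆x/γ with γ = 1/√(1 − v²/c²); the computation is the same under either interpretation, which is precisely the abstract's point that the two theories share their mathematical form:

```python
from math import sqrt

C = 299_792_458.0                # speed of light in m/s

def gamma(v):
    """Lorentz factor for relative speed v."""
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

rest_length = 1.0                # metres, in the rod's rest frame
v = 0.8 * C                      # relative speed of the two frames
contracted = rest_length / gamma(v)

assert abs(gamma(v) - 5.0 / 3.0) < 1e-9
assert abs(contracted - 0.6) < 1e-9   # a 1 m rod measures 0.6 m at 0.8c
```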

12:00 | Symmetry, General Relativity, and the Laws of Nature ABSTRACT. In modern physics, the laws of nature are derived from the fundamental symmetries of physical theory. However, recent attempts to ground a philosophical account of natural law on the symmetries and invariances of physical theory have been largely unsuccessful, as they have failed to address the framework-dependence of symmetry and invariance, the disparate nature of physical theory, and the recalcitrance of the natural world (e.g. see van Fraassen, 1989; Earman, 2003; and Brading and Castellani, 2003). In response, this paper provides a detailed study of the mathematical foundation of symmetries and invariances in modern physics in order to present a novel philosophical account of the constitutive role that mathematics plays in grounding the laws of nature. In the first section of the paper, I provide a discussion of the geometrical connection between symmetries, invariances, and natural law within modern physical theory. In this context, I present an account of Noether's theorem, which states that every continuous symmetry in a set of dynamical equations corresponds to a conserved quantity. In fact, Noether’s theorem is often taken to establish the connection between symmetries and conservation laws within modern physics (Butterfield, 2006). However, what is often left out of this discussion is the fact that the symmetries in a set of dynamical equations are not sufficient, on their own, to establish the relevant invariant quantities, as the symmetries must also be present in the underlying spacetime structure (Schutz, 1999). This leads me to a discussion of the spacetime structures of modern physical theory. Through analyzing these structures, I show that in the case of classical mechanics, quantum mechanics, and special relativity, the spacetime structures are all maximally symmetric and Noether's theorem is sufficient to characterize natural law. 
But the situation drastically changes when we allow for the possibility of a dynamical non-Euclidean geometry. In general relativity, the symmetries in a set of dynamical equations are not sufficient to establish a conserved quantity, as these symmetries are not typically present in the underlying spacetime structure. In fact, the relevant symmetries often have to be imposed on the spacetime in contradiction to the causality constraint of general relativity. The grounding of natural law in the symmetry structure of physical theory appears to be undermined by general relativity. In the second section of the paper, I consider one possible solution to this concern that emerges from the fact that the mathematical structures of classical, quantum, and relativity theory are all formulated within a more general mathematical conception of nature characterized by the Lie calculus. I suggest that this mathematical framework may be able to provide a viable foundation to ground a philosophical account of natural law. Here I take my motivation from Feynman (1964, p. 59) who notes that mathematics may act as ``a great sweeping principle'' from which all laws of modern physics are derived. Despite the fact that each mathematical formalism for physical theory will entail a slightly different conception of symmetry and invariance, they all share a common understanding of natural law grounded in a geometrical-mathematical representation of reality. In this sense, each theory may provide a representation of reality from a particular mathematical vantage point. The common features of the geometrical-mathematical formalism of physical theory may offer the possibility of grounding a viable perspectival account of natural law. To conclude, I consider whether this perspectival account of natural law is able to ground a viable scientific realism, and discuss its broader implications for the philosophy of physics. |
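In its simplest mechanical form, the symmetry-to-conservation link that the abstract invokes reads as follows (a standard textbook statement, not quoted from the talk):

```latex
% Translation symmetry q \mapsto q + \epsilon of a Lagrangian L(q, \dot q)
% means \partial L / \partial q = 0.  The Euler--Lagrange equation
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q} = 0
% then immediately gives conservation of the conjugate momentum:
\quad\Longrightarrow\quad
\frac{\mathrm{d}p}{\mathrm{d}t} = 0,
\qquad p := \frac{\partial L}{\partial \dot q}.
```

The abstract's point is that even this elementary derivation presupposes a background structure in which the translation is a symmetry, which is what fails in the generic general-relativistic case.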

Organizer: John Baldwin

This book serves both as a contribution to the general philosophy of mathematical practice and as a specific case study of one area of mathematics: model theory. It deals with the role of formal methods in mathematics, arguing that the introduction of formal logic around the turn of the last century is important, not merely for the foundations of mathematics, but for its direct impact on such standard areas of traditional mathematics as number theory, algebraic geometry, and even differential equations. This impact is driven by finding informative axiomatizations of specific areas of mathematics, rather than a foundation which is impervious to the needs of particular areas. Some of the many uses of the tools of modern model theory are described for non-specialists.

The book outlines the history of 20th century model theory, stressing a paradigm shift from the study of logic as abstract reasoning to its use as a tool for investigating issues in mathematics and the philosophy of mathematics. The book supports the following four theses that elaborate on this shift.

Theses

1. Contemporary model theory makes formalization of specific mathematical areas a powerful tool to investigate both mathematical problems and issues in the philosophy of mathematics (e.g. methodology, axiomatization, purity, categoricity and completeness).

2. Contemporary model theory enables systematic comparison of local formalizations for distinct mathematical areas in order to organize and do mathematics, and to analyze mathematical practice.

3. The choice of vocabulary and logic appropriate to the particular topic is central to the success of a formalization. The technical developments of first order logic have been more important in other areas of modern mathematics than such developments for other logics.

4. The study of geometry is not only the source of the idea of axiomatization and many of the fundamental concepts of model theory, but geometry itself (through the medium of geometric stability theory) plays a fundamental role in analyzing the models of tame theories and solving problems in other areas of mathematics.

The book emphasizes the importance in formalization of the choice of both the basic notions of a topic and the appropriate logic, be it first order, second order, or infinitary logic. Geometry is studied in two ways: through the analysis of the formalization of geometry from Euclid to Hilbert to Tarski, and by describing the role of combinatorial geometries (matroids) in classifying models. The analysis of Shelah’s method of dividing lines for classifying first order theories provides a new look into the methodology of mathematics. A discussion of the connections between model theory and axiomatic and combinatorial set theory fills out the historical study.

14:00 | The Epistemic Basing Relation in Mathematics ABSTRACT. In my talk, I will discuss different ways in which mathematicians base a mathematical belief on a proof and highlight some conditions on the basing relation in mathematics. The (proper) basing relation is a relation between a reason---in this case a proof---and a doxastically justified belief. I will argue that in mathematics if a subject bases a belief on a proof then she recognizes the proof as a good reason for that belief. Ceasing to recognize the argument as a proof (that is, as a sound argument), she would often be disposed to weaken her confidence in the belief or even to abandon it. Moreover, the basing relation for theorems in mathematical practice (as opposed to other domains) is put in place by a conscious rational activity: grasping how a proof supports a claim. This constraint will lead me to explore, in the case of mathematics, Leite's (2004) general proposal of how justification is tied to the practice of justifying. As has been pointed out (see for example Azzouni (2013)), there are different ways of grasping how a proof supports its conclusion and therefore the basing relation can assume different forms. It is possible to identify at least two broad types of grasping, leading to different types of basing: 1) a local, step-by-step grasping and 2) a holistic grasping. These are not mutually exclusive, and often basing in practice will be a combination of the two. In some cases, the strength of informal proofs lies in providing us with a holistic grasping, whereas formal proofs often underwrite the possibility of checking the validity of all the inferential steps, since these are generally decomposed into basic steps and thus allow us to gain a local grasp of how the conclusion is supported. 
At one end of the spectrum, there is a formal derivation of a complex result in an interpreted formal system: we can grasp how the derivation supports its conclusion, but at the same time fail to be aware of the overall structure of the argument. At the other end, we gain a holistic grasp of how an informal proof supports its conclusion just by appreciating its large-scale structure, without going into all the details, which we accept on testimony or authority. We often grasp how a proof supports its conclusion through a perceptible instantiation of one of its presentations. I will argue that in certain cases a proof presentation (or even a proof) facilitates a type of grasping, while making the other type of grasping difficult. For example, a purely formal proof will tend to be presented in ways which support only the local grasping. This in turn implies that basing the belief in the theorem on the proof involves such grasping. BIBLIOGRAPHY • Jody Azzouni (2013), “The Relationship of Derivations in Artificial Languages to Ordinary Rigorous Mathematical Proof,” Philosophia Mathematica (III) 21, 247–254. • Adam Leite (2004), “On Justifying and Being Justified,” Philosophical Issues, 14, Epistemology, 219–253. |

14:30 | Epistemic aspects of reverse mathematics PRESENTER: Benedict Eastaugh ABSTRACT. Reverse mathematics is a research programme in mathematical logic that determines the axioms that are necessary, as opposed to merely sufficient, to prove a given mathematical theorem. It does so by formalising the theorem in question in the language of second order arithmetic, and then proving first that the theorem follows from the axioms of a particular subsystem of second order arithmetic, and then “reversing” the implication by proving that the theorem implies the axioms of the subsystem (over a weak base theory corresponding to computable mathematics). The standard view of reverse mathematics holds that a “reversal” from a theorem to an axiom system shows that the set existence principle formalised by that axiom is necessary to prove the theorem in question. Most of the hundreds of ordinary mathematical theorems studied to date have been found to be equivalent to just five main systems, known as the “Big Five”. The five systems have all been linked to philosophically motivated foundational programmes such as finitism and predicativism. This view of reverse mathematics leads to an understanding of the import of reverse mathematical results in ontological terms, showing what mathematical ontology (in terms of definable sets of natural numbers) must be accepted by anyone who accepts the truth of a particular mathematical theorem, or who wants to recover that theorem in their foundational framework. This is most easily seen in connection with “definabilist” foundations such as predicativism, which hold that only those sets of natural numbers exist which are definable “from below”, without reference to the totality of sets of which they are a member. In this talk, we will argue that this view neglects important epistemic aspects of reverse mathematics. In particular, we will argue for two theses.
First, the connections between subsystems of second order arithmetic and foundational programmes are also, if not primarily, motivated by epistemic concerns regarding what sets can be shown to exist by methods of proof that are sanctioned by a certain foundational approach. In this context, reverse mathematics can help to determine whether commitment to a certain foundation warrants the acceptance of a given mathematical result. Second, reversals to specific subsystems of second order arithmetic track the conditions under which theorems involving existential quantification can be proved only in a “merely existential” way, or can provide criteria of identity for the object whose existence is being proved in the theorem in question. These epistemic aspects of reverse mathematics are closely related to computability-theoretic and model-theoretic properties of the Big Five systems. We will conclude by arguing that in virtue of these features, the reverse mathematical study of ordinary mathematical theorems can advance our understanding of mathematical practice by making explicit patterns of reasoning that play key roles in proofs (compactness style arguments, reliance on transfinite induction, etc.) and highlighting the different virtues embodied by different proofs of the same result (such as epistemic economy, the ability to provide identity criteria for the object proved to exist, etc.). |
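[Editorial note: the following is standard background not contained in the abstract. The “Big Five” subsystems of second order arithmetic are conventionally listed (e.g. in Simpson's Subsystems of Second Order Arithmetic), in strictly increasing logical strength, as: RCA0 (recursive comprehension, the base theory of computable mathematics), WKL0 (weak König's lemma), ACA0 (arithmetical comprehension), ATR0 (arithmetical transfinite recursion), and Π11-CA0 (Π11 comprehension):]

```latex
\mathsf{RCA}_0 \;\subsetneq\; \mathsf{WKL}_0 \;\subsetneq\; \mathsf{ACA}_0
\;\subsetneq\; \mathsf{ATR}_0 \;\subsetneq\; \Pi^1_1\text{-}\mathsf{CA}_0
```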

Organizer: David Casacuberta

About 90% of the biomedical data accessible to researchers was created in the last two years. This certainly poses complex technical problems of how to store, analyze and distribute data, but it also raises relevant epistemological issues. In this symposium we will present some of these problems and discuss how epistemic innovation is key to tackling them.

Databases involved in biomedical research are so huge that they raise relevant questions about how the scientific method is applied, such as what counts as evidence for a hypothesis when the data cannot be directly apprehended by humans, how to distinguish correlation from causation, or in which cases the provider of a database can be considered co-author of a research paper. To analyze such issues, current characterizations of hypothesis formation, causal links, or authorship do not suffice, and we need innovation in the methodological and epistemic fields in order to revise these and other relevant concepts.

At the same time, because a significant portion of such biomedical data is linked to individual people, and because knowledge from the biomedical sciences can be used to predict and transform human behavior, there are ethical questions that are difficult to solve, as they pose new challenges. Some of them lie in the field of awareness, so that patients and citizens understand these new ethical problems that did not arise before the development of big data; others relate to the ways in which scientists can and cannot store, analyze and distribute information; and still others relate to the limits determining which technologies are ethically safe and which erode basic human rights.

During the symposium we will present a coherent understanding of what epistemic innovation is and some of the logical tools necessary for its development, and then discuss several cases of how epistemic innovation applies to different aspects of the biomedical sciences, also commenting on its relevance for tackling ethical problems that arise in contemporary biomedical science.

14:00 | Proof systems for various FDE-based modal logics PRESENTER: Heinrich Wansing ABSTRACT. We present novel proof systems for various FDE-based modal logics. Among the systems considered are a number of Belnapian modal logics introduced in [2] and [3], as well as the modal logic KN4 with strong implication introduced in [1]. In particular, we provide a Hilbert-style axiom system for the logic BK$^{\Box-}$ and characterize the logic BK as an axiomatic extension of the system BK$^{FS}$. For KN4 we provide both an FDE-style axiom system and a decidable sequent calculus for which a contraction elimination and a cut elimination result are shown. [1] L. Goble, Paraconsistent modal logic, Logique et Analyse 49 (2006), 3-29. [2] S.P. Odintsov and H. Wansing, Modal logics with Belnapian truth values, Journal of Applied Non-Classical Logics 20 (2010), 279-301. [3] S.P. Odintsov and H. Wansing, Disentangling FDE-based paraconsistent modal logics, Studia Logica 105 (2017), 1221-1254. |

Organizer: Masahiro Matsuo

In philosophy of science, Bayesianism has long been tied to the subjective interpretation of probability, i.e. probability as a degree of belief. Although several attempts have been made to construct an objective kind of Bayesianism, most of the core issues and controversies concerning Bayesianism have centred on this subjectivity, particularly on subjective priors. Along this line of argument, philosophers currently seem to assume implicitly that Bayesian statistics, which is becoming increasingly popular in many fields of science, can legitimately be treated as a branch of subjective Bayesianism.

Despite this widely held view, which can partly be traced back to the interpretation of Savage’s ‘Likelihood Principle’, it is not so obvious how subjectivity is involved in Bayesian statistics. On the contrary, scientists who use Bayesian statistics are inclined to regard it as based on an objective methodology, or else merely as a mathematical technique, often without knowing much about the arguments of philosophical Bayesianism. This suggests that there is a considerable gap between Bayesianism as typically discussed in philosophy and the Bayesian statistical methods used in science. The problem is no longer simply the distinction between subjective and objective but, more importantly, the present situation in which this linkage is almost entirely neglected by both philosophers and statisticians despite their common use of the term “Bayesian”. Bayesian philosophy without statistics and Bayesian statistics without philosophy are both epistemically unsound, and philosophers of science undoubtedly bear responsibility for restoring this linkage.

In this symposium, we present some perspectives which could help this restoration. Although an approach that minutely examines the history of Bayesianism is certainly necessary for part of this analysis, there is a risk of losing our way if we focus too much attention on it, because that history, particularly of the rise of Bayesian statistics, is tremendously complicated to unravel. In order to grasp appropriately the relation between current Bayesian philosophy and statistics, it seems more plausible to start from the current situation we are placed in and to investigate it from the multiple philosophical and statistical perspectives available, with help from historical ones when needed. This is the basic strategy of this symposium. Accordingly, our focus is not just on restoration but rather, in a positive sense, on reconstruction of the linkage between the two Bayesian camps. The perspectives we present are: a parallelism between Bayesianism and inductive logic; a complementary relation between Bayesian philosophy and statistics; a solution to the conflict between Bayesian philosophy and frequentism through Bayesian statistics; and a linkage between Bayesian philosophy and statistics through statistical theories based on both Bayesianism and frequentism. In this symposium, there will be time for discussion after each speaker’s presentation.

14:00 | The alienated/subjective character of scientific communication PRESENTER: Galina Sorina ABSTRACT. In this presentation, we will address some features of scientific communication which, on the one hand, we claim are, from a methodological viewpoint, universally valid and which, on the other, characterise exchanges between scientific cultures. To do so, we will focus on Peter Galison’s [Galison 1999] notable concept of trading zones and aim at clarifying their definition. We will argue that these zones could be considered ‘alienated forms of scientific communication’ that depend on the characteristics of the actor’s actions. This definition can be understood in the context of Hegel’s concept of alienation [Hegel 2018]. For Hegel, the problem of alienation is that of the spirit that alienates the results of its actions. An analysis of [Hegel 2018] shows that such results may include those of the activities of self-consciousness that aim at obtaining new knowledge, in particular, in the course of research. We wish to stress that the history of scientific activity testifies to how such processes have continuously taken place. Our analysis accounts equally, for example, for Kuhn’s notion of a paradigm, in which the results of knowledge are alienated from the concrete individual. Within the latter framework, the representation of scientific activity is that a researcher has only to solve puzzles. Kuhn’s paradigm accounts for forms of stability in science. This stability is embodied in the concept of ‘normal science’. In accepting this paradigm, Kuhn suggests, the scientific community embraces the idea that the basic and fundamental problems have been solved. The results are alienated from concrete studies. Within a paradigm, the world of scientific and theoretical knowledge loses its agility and ability to develop. It freezes within the established terminology and the forms that were once alienated and thus can be transferred between cultures.
As a result, the paradigm prevents the researcher from acting as an independent force that regulates and controls his or her investigations. Communication regarding the alienated results that are enshrined in scientific texts can be reduced to the researcher’s individual work with the text. Each individual asks the text her or his own questions. These questions are affected by the cultural tradition, the individual’s knowledge and competencies, and her or his attitudes and abilities. Thus, communication in science can be considered a continuous process of interaction between alienation and subjectivity, which betrays the assumption of the unity of science. Hence, we define trading zones as alienated (and, therefore, intercultural) forms of communication. Acknowledgements The study is supported by the Russian Foundation for Basic Research (RFBR) within ‘the Informal Text Analytics: A Philosophical and Methodological Approach’ project (№ 17-03-00772). References Hegel, G.W.F. (2018), The Phenomenology of Spirit, Cambridge: Cambridge University Press. Galison, P. (1999), ‘Trading Zone: Coordinating Action and Belief’, in The Science Studies Reader, edited by M. Biagioli, London and New York: Routledge. |

14:30 | High-energy physics cultures during the Cold War: between exchanges and translations PRESENTER: Polina Petrukhina ABSTRACT. We conduct a sociological and historical analysis of the collaboration between American and Soviet scientists during the chain of experiments in high energy physics that took place in the 1970s (Pronskikh 2016). We consider each of the laboratories of the 1970s as a culture in and of itself, and we investigate processes of exchange of material objects and translations of values and interests between different types of individual actors (scientists, politicians, managers) as well as between cultures (Soviet and US high-energy physics laboratories). Drawing on a case study of a collaboration between the Joint Institute for Nuclear Research (JINR, Dubna) and the Fermi National Accelerator Laboratory (Fermilab, USA) in 1970–1980 (the period of the Cold War often referred to as “détente”), we examine how the supersonic gas jet target manufactured at JINR and delivered to Fermilab for joint use influenced the scientific culture at Fermilab, contributed to the birth of long-standing traditions, and contributed to changing scientific policy toward an alleviation of the bi-laterality requirement in scientific exchanges between the Eastern and the Western blocs. We also focus on how processes in scientific cultures that at the time arose from their interactions influenced some of the epistemic goals of the endeavor. We examine the experiment, which is premised on an international collaboration, within three frameworks: as a trading zone (Galison 1997), where various types of objects and values (not only epistemological) are exchanged through the formation of intermediate languages; as a network composed of heterogeneous actors and their interactions accompanied by translations of interests (Latour 1987); and as a locus of exchanges, sharing, and circulations which make cultures temporary formations (Chemla 2017).
The developments that took place within the experiment can be described as the translation of different types of interests, including political, business, private and public aims. Moreover, the same actors could pursue different types of interests at different times. That prevents us from drawing rigid boundaries between “content” and “context” in science. As a consequence, one more problem arises: if one does not acknowledge such a distinction a priori, how can one eventually identify scientific cultures that act in the course of these translations and distinguish them from other cultures? One possible answer lies in the investigation of the values that circulate in a particular culture or between two or more interacting cultures and shape them, which reveals their historical variability. The reported study was funded by RFBR according to the research project № 18-011-00046 References Chemla Karine (2017) Cultures without culturalism: the making of scientific knowledge / Karine Chemla and Evelyn Fox Keller, editors, Duke University Press, Durham. Galison Peter (1997) Image & logic: A material culture of microphysics, The University of Chicago Press, Chicago. Latour Bruno (1987) Science in Action: How to Follow Scientists and Engineers Through Society, Harvard University Press, Cambridge, Massachusetts. Pronskikh Vitaly S. (2016) ‘E-36: The First Proto-Megascience Experiment at NAL’, Physics in Perspective, Vol. 18, №. 4, pp. 357-378. |

14:00 | A logic for an agentive naïve proto-physics PRESENTER: Nicolas Troquard ABSTRACT. We discuss steps towards a formalisation of the principles of an agentive naïve proto-physics, designed to match a level of abstraction that reflects the pre-linguistic conceptualisations and elementary notions of agency, as they develop during early human cognitive development. To this end, we present an agentive extension of the multi-dimensional image schema logic ISL based on variants of STIT theory, which is defined over the combined languages of the Region Connection Calculus (RCC8), the Qualitative Trajectory Calculus (QTC), Ligozat's cardinal directions (CD), and linear temporal logic over the reals (RTL), with 3D Euclidean space assumed for the spatial domain. To begin to formally capture the notion of `animate agent', we apply the newly defined logic to model the image schematic notion of `self movement' as a means to distinguish the agentive capabilities of moving objects, e.g. study how to identify the agentive differences between a mouse (an animate animal) and a ball (an inanimate yet causally relevant object). Finally, we outline the prospects for employing the theory in cognitive robotics and more generally in cognitive artificial intelligence and questions related to explainable AI. |
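[Editorial illustration, not part of the abstract: the spatial layer of the combined language, RCC8, classifies pairs of regions into eight base relations. The sketch below is a deliberately crude one-dimensional simplification of my own, encoding regions as closed intervals; real RCC8 is defined over topological regions via a connection predicate, and nothing here comes from the paper.]

```python
# Crude 1-D interval approximation of the eight RCC8 base relations
# (illustrative simplification only; regions are closed intervals (lo, hi)).

def rcc8(a, b):
    (al, ah), (bl, bh) = a, b
    if (al, ah) == (bl, bh):
        return 'EQ'                  # identical regions
    if ah < bl or bh < al:
        return 'DC'                  # disconnected
    if ah == bl or bh == al:
        return 'EC'                  # externally connected (share a boundary)
    if bl <= al and ah <= bh:        # a is a proper part of b
        return 'TPP' if al == bl or ah == bh else 'NTPP'
    if al <= bl and bh <= ah:        # b is a proper part of a
        return 'TPPi' if al == bl or ah == bh else 'NTPPi'
    return 'PO'                      # partial overlap

assert rcc8((0, 2), (3, 5)) == 'DC'
assert rcc8((0, 2), (2, 5)) == 'EC'
assert rcc8((0, 3), (2, 5)) == 'PO'
assert rcc8((1, 2), (0, 5)) == 'NTPP'
assert rcc8((0, 2), (0, 5)) == 'TPP'
```

[The eight relation names (DC, EC, PO, TPP, NTPP, TPPi, NTPPi, EQ) are the standard RCC8 base relations; the interval encoding only mimics their intended meaning.]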

14:30 | Splicing Logics: How to Combine Hybrid and Epistemic Logic to Formalize Human Reasoning ABSTRACT. We advocate in this paper for splicing Hybrid and Epistemic Logic to properly model human reasoning. Suppose Wittgenstein wants to meet Russell to discuss some (possibly weird) philosophical issues at 2 a.m. on April 26th. Wittgenstein knows p (“we meet to discuss at 2 a.m. on April 26th”) (symbolically: Kwp, where K is the knowledge operator) on the morning of April 25th, whereas Russell does not: ¬Krp. And at the same time Wittgenstein knows that Russell does not know it: Kw¬Krp. On the afternoon of April 25th Wittgenstein writes to Russell to communicate p to him. But Russell does not reply to his message. Thus, at that time we have Kwp ∧ ¬KwKrp ∧ ¬Kw¬Krp ∧ KwKr¬KwKrp, for Wittgenstein does not know whether Russell has read his message but knows that, if he has indeed read it, Russell knows Wittgenstein does not know it. Being considerate, Russell would resolve this situation by answering Wittgenstein’s message. Let us suppose that he does so at night. Then at that moment we have Krp, but it cannot be assumed that KrKwKrp, for Wittgenstein might not have read the reply, so ¬KrKwKrp ∧ ¬KrKwKrKwp. Tired of so much ambiguity, Russell phones Wittgenstein to set up the meeting, and then Kwp ∧ Krp ∧ KwKrp ∧ KrKwp ∧… By means of Epistemic Logic we have been able to formalize both Wittgenstein’s and Russell’s knowledge and their communication. And if we spliced it with Temporal Logic (see [2] and [4]), we could reflect how their knowledge changes over time. Nevertheless, even splicing those two systems, it is not possible to properly model the whole situation. Neither of them is able to name points (inside a model). They cannot formalize and evaluate, for instance, that a formula such as Kwp is true at exactly the morning of April 25th, i.e., at the point which stands for that moment.
So, in fact, by means of Temporal-Epistemic Logic alone, none of the formulae that come into play in the Wittgenstein–Russell communication can be interpreted. To accomplish this, it is necessary to turn to Hybrid Logic. This paper aims at introducing a Hybrid-Epistemic Logic (resulting from splicing both logical systems) in order to improve the expressivity of Temporal-Epistemic Logic. A proper semantics will be provided, and it will be shown that the only way of accurately modeling how we talk about our knowledge and beliefs, and how they change over time, is via Hybrid-Epistemic Logic. References [1] Blackburn, Patrick (2006), “Arthur Prior and Hybrid Logic”, Synthese, 150(3), pp. 329-372. [2] Engelfriet, Joeri (1996), “Minimal Temporal Epistemic Logic”, Notre Dame Journal of Formal Logic, 37(2), pp. 233-259. [3] Gabbay, Dov; Kurucz, Agi; Wolter, Frank and Zakharyaschev, Michael (eds.) (2003), Many-Dimensional Modal Logics: Theory and Applications, Amsterdam, Elsevier. [4] Surowik, Dariusz (2010), “Temporal-Epistemic Logic”, Studies in Logic, Grammar and Rhetoric, 22(35), pp. 23-28. [5] van Ditmarsch, Hans; Halpern, Joseph; van der Hoek, Wiebe and Kooi, Barteld (eds.) (2015), Handbook of Epistemic Logic, Milton Keynes, College Publications. |
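[Editorial illustration, not from the paper: the Kripke-style evaluation behind the K operator used in the abstract can be sketched computationally. The two-world model below is my own reconstruction of the "morning of April 25th" stage (Kwp ∧ ¬Krp ∧ Kw¬Krp); world and agent names are hypothetical.]

```python
# Minimal Kripke-model checker for multi-agent epistemic logic (sketch).

def holds(model, world, formula):
    """Evaluate a formula at a world.
    Formulas: ('atom', p), ('not', f), ('and', f, g), ('K', agent, f)."""
    kind = formula[0]
    if kind == 'atom':
        return formula[1] in model['val'][world]
    if kind == 'not':
        return not holds(model, world, formula[1])
    if kind == 'and':
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if kind == 'K':  # K_a f: f holds in every world agent a considers possible
        agent, sub = formula[1], formula[2]
        return all(holds(model, v, sub)
                   for (u, v) in model['rel'][agent] if u == world)
    raise ValueError(f'unknown connective: {kind}')

# w1: the actual world, where p is true; w2: an alternative where p is false.
# Wittgenstein ('w') distinguishes the two worlds, Russell ('r') does not.
model = {
    'val': {'w1': {'p'}, 'w2': set()},
    'rel': {
        'w': {('w1', 'w1'), ('w2', 'w2')},
        'r': {('w1', 'w1'), ('w1', 'w2'), ('w2', 'w1'), ('w2', 'w2')},
    },
}

p = ('atom', 'p')
assert holds(model, 'w1', ('K', 'w', p))                       # Kw p
assert not holds(model, 'w1', ('K', 'r', p))                   # ¬Kr p
assert holds(model, 'w1', ('K', 'w', ('not', ('K', 'r', p))))  # Kw ¬Kr p
```

[What hybrid logic adds, and what this checker lacks, is precisely the ability to name w1 inside the object language via a nominal and assert that a formula holds at that named point.]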

14:00 | About the world described by Quantum Chemistry PRESENTER: Sebastian Fortin ABSTRACT. One of the oldest problems associated with the interpretation of Quantum Mechanics is the role of the wave function in the ontology of the theory. Although Schrödinger himself posed the problem from the beginning of the theory, even today the meaning of the wave function remains the subject of debate. In this context, the problem of the 3N dimensions of the wave function is of particular interest in the philosophy of physics. As the wave function associated with a system of N particles is written in a space of 3N dimensions, it is necessary to ask about the meaning of this space. The debates around the issue have an important impact on the way in which we conceive the world around us. This is clearly manifested by the intense discussions that have taken place in recent years (Albert 2013, Allori 2013, Monton 2006). In this work we will introduce a new perspective, coming from the way in which the wave function is used in scientific practice. Our objective is to investigate the ontology of quantum chemistry emerging from the analysis of the use of the wave function when quantum mechanics is applied to specific cases in the chemical domain. In the field of quantum chemistry there has not been much discussion about the meaning of the wave function, and for this reason we consider that it can offer a fruitful context for a philosophical analysis. The typical many-body system in this context is a molecule, and the typical problem is to find the energy spectrum of an electron that is in interaction with many other particles. To find such a spectrum, the so-called orbital approximation is usually appealed to, which makes it possible to write the total wave function of the system as a product of mono-electron wave functions (Atkins & de Paula 2006).
Under this approximation, the wave function of a given electron depends only on the variables of this electron; therefore, it evolves in the space of three dimensions (Lowe & Peterson 2006). In this presentation we will show that the procedure performed by chemists when they use the orbital approximation can be formalized as the result of the application of two mathematical operations: first, a projection onto a subspace of the Hilbert space, and second, a change of variables. With the help of this formalization, we will go beyond the approximation itself and propose an ontology specific to quantum chemistry. Albert, D. (2013). “Wave Function Realism”, in Ney, A. & Albert, D. Z. (eds.), The Wave Function. New York: Oxford University Press. Allori, V. (2013). “Primitive Ontology and the Structure of Fundamental Physical Theories”, in Ney, A. & Albert, D. Z. (eds.), The Wave Function. New York: Oxford University Press. Atkins & de Paula (2006). Physical Chemistry. New York: Oxford University Press. Lowe, J. P. & Peterson, K. A. (2006). Quantum Chemistry. Burlington, San Diego and London: Elsevier Academic Press Monton, B. (2006). “Quantum Mechanics and 3N-Dimensional Space”, Philosophy of Science, 73: 778-789. |

14:30 | Does the reality of the wave function follow from the possibility of its manipulation? ABSTRACT. There are various approaches to the issue of the reality of such an unobservable object as the wave function. I examine whether the issue can be resolved by adopting the manipulative criterion proposed by I. Hacking in the framework of his experimental realism (Hacking, 1983, 262). According to Hacking, if we can influence some observable objects by manipulating an unobservable object, then the latter is the cause, which means it is real. Accordingly, if we can influence some existing objects by manipulating the wave function, then the wave function is a real entity. I examine the strengths and weaknesses of the manipulative criterion as it concerns the wave function. I consider one of the experiments with a ‘quantum eraser’ and causally disconnected delayed choice. Experimenters ‘label’ the wave function of a system photon with ‘which-way information’ with the help of an auxiliary entangled photon. After the measurement of the system photon (no interference), this information is erased, in a way that appears to be free or random. Thanks to such manipulations, the experimenters restore the wave function of the system photon and again observe interference in the measurement. It is known that almost all quantum technologies within the second quantum revolution are based on manipulating the wave function of either a single quantum object or entangled ones. For instance, in a quantum computer, by manipulating entangled qubits, you can force them to perform computations. If the wave functions of qubits do not exist, where does the result of the calculation come from? Another example is quantum cryptography. It would seem that these cases confirm the existence of wave functions. However, I argue that Hacking’s criterion is not a sufficient argument in favor of the reality of the wave function. First, any laboratory manipulation presupposes the theoretical loading of unobservable objects.
It is possible that in some years modern quantum theory will be found to be a limiting case of some new theory. Then the wave function would be a manifestation of some other fundamental theoretical object. Second, Hacking’s experimental realism is based on a causal relationship between events. However, at the quantum level, causality is something unusual. The uncertainty principle, quantum non-locality, the measurement problem – all of these lead to a new notion of causality. Sometimes we cannot accurately identify which of two correlating quantum events is the cause and which is the effect. This means that the concept of causality also depends on theory. I suppose that manipulating the wave function can only confirm that it represents either a certain real fundamental entity or some real internal structure of the quantum system. This can look like the picture described by ontic or constructive versions of structural realism for quantum field theory (Cao, 2003; French & Ladyman, 2003). References: Hacking, I. (1983). Representing and intervening. Cambridge: Cambridge University Press. Cao, T. Y. (2003). Structural realism and the interpretation of quantum field theory. Synthese, 136(1), 3-24. French, S., & Ladyman, J. (2003). Remodelling structural realism: Quantum physics and the metaphysics of structure. Synthese, 136(1), 31-56. |

15:15 | An algebraic model for Frege's Basic Law V ABSTRACT. 1. As is well known, the system of Frege’s Grundgesetze is formally inconsistent. Indeed, BLV, εF(ε) = εG(ε) ↔ ∀x(Fx ↔ Gx), plus full CA, ∃X∀x(Xx ↔ φ(x)), leads to inconsistency: Russell’s Paradox is derivable via ∃X∀x(Xx ↔ ∃Y[x = ŷ(Yy) ∧ ¬Yx]). In recent years, Heck, Ferreira and Wehmeier have pointed out that BLV is consistent under some predicative restriction. However, they have succeeded in recovering at most Robinson’s Q. Contrary to any predicative setting, I shall employ a fully impredicative approach, using some crucial algebraic intuition in order to give a model for the theory TK. My aim is a model-theoretically consistent representation of Frege’s Grundgesetze. Neither BLV nor CA will be syntactically restricted: I shall only impose a semantic restriction on BLV. 2. The above-mentioned characterisation proceeds in two stages. Firstly, I shall fix a domain of interpretation M = ⟨D, ⊆⟩, where D = ℘(ω) is a poset and ⊆ is a relation, reflexive, antisymmetric and transitive over D. Subsequently, I shall define over M a monotone, order-preserving function φ. According to Moschovakis, φ has the least-fixed-point property. Thus, my purpose shall be to apply φ to TK-predicates: only φ-monotone predicates with the least-fixed-point property deliver concepts. An interpretation for the syntax of TK shall be given in agreement with the former structure: the pair (E, A), extension and anti-extension, interprets any second-order variable Fi, where E(Fi) ⊆ M2 (the second-order domain), A(Fi) ⊆ M2 and E(Fi) ∩ A(Fi) = ∅; the function ν : π → M1 (the first-order domain) interprets ε, where π ⊆ M2 is the set of all φ-monotone predicates with extension or anti-extension fixed – it is thereby also clear how BLV is restricted; the interpretation of the quantifiers is given by the standard SOL definitions.
Secondly, in order to fix the denotation of any ε-term (VR-term), I shall generalise M to the triple ⟨M, ⊆, F⟩, where ⟨M, ⊆⟩ is a poset and ⟨M, F⟩ is a field of sets, with F ⊆ ℘(ω) and F = π. According to this representation of the Boolean algebra, to any point in M there corresponds an M1 individual of TK, and to any complex in F there corresponds a φ-monotone predicate. Thus, such a structure is a model of TK. Finally, TK turns out to be both consistent and strong enough to recover FA. The Russell paradox is blocked for the following reason: let R = ∃F[y = xˆ(Fx) ∧ ¬Fx]. R is not φ-monotone, it does not deliver any concept, and there is no corresponding VR-term: R ∉ Eσ ∩ Aσ. Furthermore, TK manages to recover FA: I may form the concept N(x) =def Pred+(0, x), because already within a predicative fragment there are at least Dedekind-infinitely many M1 individuals that fall under it. If Pred+(0, x) = ∃F∃u(Fu ∧ y = #F ∧ x = #[λz. Fz ∧ z ≠ u]), TK proves that F is φ-monotone, namely, that there is a corresponding VR-term. References - Ferreira, F. and Wehmeier, K. F., On the consistency of the ∆1-CA fragment of Frege’s Grundgesetze, Journal of Philosophical Logic, 31 (2002) 4, pp. 301-311. - Goldblatt, R., Varieties of Complex Algebras, Annals of Pure and Applied Logic, 44 (1989) 3, pp. 173-242. - Heck, R. G., The consistency of predicative fragments of Frege’s Grundgesetze der Arithmetik, History and Philosophy of Logic, 17 (1996) 4, pp. 209-220. - Moschovakis, Y., Notes on Set Theory, New York: Springer, 2006 (2nd edition). |

15:45 | On calculi and ranks for definable families of theories PRESENTER: Sergey Sudoplatov ABSTRACT. Let $\mathcal{T}$ be a family of first-order complete theories in a language $L$, $\mathcal{T}_L$ be the family of all first-order complete theories in a language $L$. For a set $\Phi$ of $L$-sentences we put $\mathcal{T}_\Phi=\{T\in\mathcal{T}\mid T\models\Phi\}$. A family of the form $\mathcal{T}_\Phi$ is called {\em $d$-definable} (in $\mathcal{T}$). If $\Phi$ is a singleton $\{\varphi\}$ then $\mathcal{T}_\varphi=\mathcal{T}_\Phi$ is called {\em $s$-definable}. We consider properties of calculi for families $\mathcal{T}$ with respect to the relations $\vdash_\mathcal{T}$, where $\Phi\vdash_\mathcal{T}\Psi\Leftrightarrow\mathcal{T}_\Phi\subseteq \mathcal{T}_\Psi$. We use terminology from \cite{csMS, rsMS18} including $E$-closure ${\rm Cl}_E(\mathcal{T})$, rank ${\rm RS}(\mathcal{T})$, and degree ${\rm ds}(\mathcal{T})$. \begin{theorem}\label{th1_MS} For any sets $\Phi$ and $\Psi$ of sentences and a family $\mathcal{T}$ of theories the following conditions are equivalent: $(1)$ $\Phi\vdash_\mathcal{T}\Psi$; $(2)$ $\Phi\vdash_{\mathcal{T}_0}\Psi$ for any finite $\mathcal{T}_0\subseteq\mathcal{T}$; $(3)$ $\Phi\vdash_{\{T\}}\Psi$ for any singleton $\{T\}\subseteq\mathcal{T}$; $(4)$ $\Phi\vdash_{{\rm Cl}_E(\mathcal{T})}\Psi$. \end{theorem} \begin{theorem}\label{th2_MS} For any sets $\Phi$ and $\Psi$ of sentences in a language $\Sigma$ the following conditions are equivalent: $(1)$ $\Phi\vdash\Psi$, i.e., each sentence in $\Psi$ is forced by some conjunction of sentences in $\Phi$; $(2)$ $\Phi\vdash_{\mathcal{T}_L}\Psi$; $(3)$ $\Phi\vdash_{\mathcal{T}}\Psi$ for any {\rm (}finite{\rm )} family {\rm (}singleton{\rm )} $\mathcal{T}\subseteq \mathcal{T}_L$; $(4)$ $\Phi\vdash_{\mathcal{T}}\Psi$ for any {\rm (}finite{\rm )} family {\rm (}singleton{\rm )} $\mathcal{T}$. 
\end{theorem} \begin{theorem}\label{th3_MS} A subfamily $\mathcal{T}'\subseteq\mathcal{T}$ is $d$-definable in $\mathcal{T}$ if and only if $\mathcal{T}'$ is $E$-closed in $\mathcal{T}$, i.e., $\mathcal{T}'={\rm Cl}_E(\mathcal{T}')\cap \mathcal{T}$. \end{theorem} \begin{theorem}\label{th4_MS} For any ordinals $\alpha\leq\beta$, if ${\rm RS}(\mathcal{T})=\beta$ then ${\rm RS}(\mathcal{T}_\varphi)=\alpha$ for some {\rm (}$\alpha$-ranking{\rm )} sentence $\varphi$. Moreover, there are ${\rm ds}(\mathcal{T})$ pairwise $\mathcal{T}$-inconsistent $\beta$-ranking sentences for $\mathcal{T}$, and if $\alpha<\beta$ then there are infinitely many pairwise $\mathcal{T}$-inconsistent $\alpha$-ranking sentences for $\mathcal{T}$. \end{theorem} \begin{theorem}\label{th5_MS} Let $\mathcal{T}$ be a family of theories in a countable language $\Sigma$ with ${\rm RS}(\mathcal{T})=\infty$, and let $\alpha\in\{0,1\}$, $n\in\omega\setminus\{0\}$. Then there is a $d$-definable subfamily $\mathcal{T}_\Phi$ such that ${\rm RS}(\mathcal{T}_\Phi)=\alpha$ and ${\rm ds}(\mathcal{T}_\Phi)=n$. \end{theorem} This research was partially supported by the Committee of Science in Education and Science Ministry of the Republic of Kazakhstan (Grants No. AP05132349, AP05132546) and the Russian Foundation for Basic Research (Project No. 17-01-00531-a). \begin{thebibliography}{10} \bibitem{csMS} {\scshape S.V.~Sudoplatov}, {\itshape Closures and generating sets related to combinations of structures}, {\bfseries\itshape The Bulletin of Irkutsk State University. Series ``Mathematics''}, vol.~16 (2016), pp.~131--144. \bibitem{rsMS18} {\scshape S.V.~Sudoplatov}, {\itshape On ranks for families of theories and their spectra}, {\bfseries\itshape International Conference ``Mal'tsev Meeting'', November 19--22, 2018, Collection of Abstracts}, Novosibirsk: Sobolev Institute of Mathematics, Novosibirsk State University, 2018, p.~216. \end{thebibliography} |
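The central relation of the abstract, $\Phi\vdash_\mathcal{T}\Psi \Leftrightarrow \mathcal{T}_\Phi\subseteq\mathcal{T}_\Psi$, can be mimicked in a toy finite setting where "theories" are simply the sets of atomic sentences they make true. This is a drastic propositional simplification of the first-order framework, and all names below are illustrative, but it lets one check small instances of the equivalence (1) ⇔ (3) of Theorem 1 mechanically:

```python
# Toy model of Phi |-_T Psi  <=>  T_Phi is a subset of T_Psi, with each
# "theory" represented by the set of atomic sentences it satisfies.

def models(theory, sentences):
    """T |= Phi: the theory makes every sentence of Phi true."""
    return set(sentences) <= theory

def definable_subfamily(family, phi):
    """T_Phi = {T in family : T |= Phi} (a d-definable subfamily)."""
    return [t for t in family if models(t, phi)]

def forces(family, phi, psi):
    """Phi |-_family Psi  iff  every theory in T_Phi also models Psi."""
    return all(models(t, psi) for t in definable_subfamily(family, phi))

family = [{"p"}, {"p", "q"}, {"q"}, {"p", "q", "r"}]

# Every theory in the family satisfying {p, q} also satisfies {q}:
assert forces(family, {"p", "q"}, {"q"})

# Theorem 1, (1) <=> (3): forcing over the whole family agrees with
# forcing over every singleton subfamily {T}.
phi, psi = {"p"}, {"q"}
assert forces(family, phi, psi) == all(forces([t], phi, psi) for t in family)
```

Since `forces` is a pointwise subset check, the agreement between the full family and its singleton subfamilies holds by construction, which is exactly the finite flavour of conditions (1)–(3).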

16:15 | Semilattices of numberings PRESENTER: Nikolay Bazhenov ABSTRACT. Uniform computations for families of mathematical objects constitute a classical line of research in computability theory. Formal methods for studying such computations are provided by the theory of numberings. The theory goes back to the seminal article of G{\"o}del, where an effective numbering of first-order formulae was used in the proof of the incompleteness theorems. To name only a few, computable numberings were studied by Badaev, Ershov, Friedberg, Goncharov, Kleene, Kolmogorov, Lachlan, Mal'tsev, Rogers, Uspenskii, and many other researchers. Let $S$ be a countable set. A numbering of $S$ is a surjective map $\nu$ from the set of natural numbers $\omega$ onto $S$. A standard tool for measuring the algorithmic complexity of numberings is provided by the notion of reducibility between numberings: a numbering $\nu$ is reducible to another numbering $\mu$ if there is a total computable function $f(x)$ such that $\nu(x) = \mu( f(x) )$ for all $x\in\omega$. In other words, there is an effective procedure which, given a $\nu$-index of an object from $S$, computes a $\mu$-index for the same object. In a standard recursion-theoretic way, the notion of reducibility between numberings gives rise to an upper semilattice, which is usually called the Rogers semilattice. Rogers semilattices allow one to compare different computations of a given family of sets, and they also provide a tool for classifying properties of computable numberings for different families. Following this approach, one can formulate most problems on numberings in terms of Rogers semilattices. Goncharov and Sorbi [Algebra Logic, 36:6 (1997), 359--369] started developing the theory of generalized computable numberings. 
We follow their approach and work in the following setting: given a complexity class, say $\Sigma^0_n$, we consider the upper semilattice $R_{\Sigma^0_n}$ of all $\Sigma^0_{n}$-computable numberings of all $\Sigma^0_n$-computable families of subsets of $\omega$. We prove that the theory of the semilattice of all computable numberings is computably isomorphic to first-order arithmetic. We show that the theory of the semilattice of all numberings is computably isomorphic to second-order arithmetic. Furthermore, it is shown that for each of the theories $T$ mentioned above, the $\Pi_5$-fragment of $T$ is hereditarily undecidable. We also discuss related results on various algorithmic reducibilities. |
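The reducibility $\nu \leq \mu$ defined in the abstract can be illustrated on a tiny finite family, with ordinary Python functions standing in for total computable functions. This is only a sketch under that simplifying assumption; the family, the numberings, and the reduction `f` below are all invented for illustration:

```python
# Toy illustration of reducibility between numberings: nu is reducible to mu
# if some total computable f satisfies nu(x) = mu(f(x)) for all x in omega.
# Ordinary Python functions play the role of total computable functions here.

FAMILY = ["even", "odd"]  # the countable set S being numbered (finite toy case)

def nu(x):
    """One numbering of S: even indices name "even", odd indices name "odd"."""
    return FAMILY[x % 2]

def mu(x):
    """Another numbering of S, with the roles of the indices swapped."""
    return FAMILY[(x + 1) % 2]

def f(x):
    """A reduction witnessing nu <= mu: shift every index by one."""
    return x + 1

# The defining equation nu(x) = mu(f(x)) holds on a sample of indices:
assert all(nu(x) == mu(f(x)) for x in range(100))
```

The same `f` also witnesses $\mu \leq \nu$, so these two numberings are equivalent and determine a single element of the Rogers semilattice; inequivalent numberings of a family are what give the semilattice its nontrivial structure.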

Place: Institute of Philosophy of the Czech Academy of Sciences

Beer, food, fun.