CLMPST2019: 16TH INTERNATIONAL CONGRESS OF LOGIC, METHODOLOGY AND PHILOSOPHY OF SCIENCE AND TECHNOLOGY
PROGRAM FOR SATURDAY, AUGUST 10TH

09:00-10:00 Session 28A: C6 SYMP Identity in computational formal and applied systems 1. HaPoC symposium (IdCFAS-1)

Organizers: Giuseppe Primiero and Nicola Angius

Defining identity between two objects is a fundamental problem in several philosophical disciplines, from logic to language and formal ontology. Since Frege, identity has been addressed in terms of formal constraints on definitional criteria, which vary depending on context, application and aims. This symposium collects and compares current approaches to identity for computational systems in formal and applied contexts. Problems of interest include: definitional identity in arithmetic, intensional identity for proofs, the definition of replicas and the study of the preservation of second-order properties for copied computational artefacts, and the identity over time of formally defined social institutions. All these contexts offer problematic interpretations and interesting questions for the notion of identity.

Arithmetic offers a precise formal interpretation of logical identity, but higher types display a tension between extensionality and equivalent term evaluation of identical functions: if the latter is accepted, then functions are co-definable but irreducible.

In proof-theoretical semantics a sentence is identified by the set of all its proofs with a common inferential structure. Accounting for intensional aspects of these objects means upholding their identity while investigating common meta-theoretical properties like harmony and stability.

From formal to implemented objects, the problem of identity resurfaces for computational artefacts. For these objects, by definition subject to replication, the notion of copy has started receiving formal treatment in the literature, while the notion of replica can be further analysed with respect to existing approaches for technical artefacts. Moreover, the problem of preservation of behavioural properties like safety and reliability is crucial.

Finally, these problems extend to applications in social ontology. In particular, identity criteria are at the basis of an ontological analysis of the persistence of organisations through time and changes, a problem which can be formulated both theoretically and formally.

The problem of defining formal identity criteria for natural and technical objects traces back to ancient philosophy and it characterises modern and contemporary analytic ontology from Leibniz to Frege. This symposium collects contemporary analyses of the logical accounts of identity in formal and applied contexts.

This symposium is submitted on behalf of the Commission for the History and Philosophy of Computing, Member of the DLMPST.

Chair:
Raymond Turner (University of Essex, UK)
Location: Room 402
09:00
Ansten Klev (Czech Academy of Sciences, Czechia)
Definitional identity in arithmetic

ABSTRACT. Definitional identity is the relation that holds between two expressions that are identical by definition. It is natural to expect that the principles governing this relation will depend on the underlying language. In this talk I wish to consider the formalization of definitional identity in arithmetic and draw attention to a striking contrast between the language admitting only first-order terms and the language admitting terms also of higher order.

In the first-order case we may rely on Kleene (Introduction to Metamathematics, §54) and Curry (e.g. Combinatory Logic, ch. 2E). They formalize definitional identity as the equivalence relation inductively generated by all axioms of the form

definiens = definiendum

and the two rules of inference:

1. from a = b infer a[t/x] = b[t/x]
2. from a = b and c = d infer a = b[d/c]*

Here b[t/x] is ordinary substitution, and b[d/c]* is substitution of d for any number of occurrences of c in b.

There are two admissible forms of definition: explicit definition of an individual constant or a function constant and recursive definition of a function constant. A definitional equation for a function constant takes the form

f(x)=t[x]

where x is a sequence of variables. An explicit definition consists of one such equation, a recursive definition of two.
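For illustration (a standard example of our own, not taken from the abstract), a recursive definition of addition and an explicit definition of a doubling function would be given by definitional equations of the required form, with s the successor function:

a + 0 = a
a + s(b) = s(a + b)   (recursive: two equations)

d(x) = x + x   (explicit: one equation)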

This relation can be shown to be the reflexive, symmetric, and transitive closure of a reduction relation that formalizes the process of unfolding defined terms in an expression. This relation may, in turn, be thought of as an evaluation relation. Indeed, one can show that it is confluent and strongly normalizing. The definitional identity a=b can in this case therefore be interpreted as "a and b have the same value".
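As a simple illustration of this reduction reading (again our own example, using the recursive equations for addition above), unfolding the defined symbol + step by step gives

s(s(0)) + s(s(0)) → s(s(s(0)) + s(0)) → s(s(s(s(0)) + 0)) → s(s(s(s(0))))

so that the definitional identity 2 + 2 = 4 holds because both sides evaluate to the same normal form.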

Now let us admit variables x and constants f of arbitrary type. Definitional equations for such f's take the same form as before. In particular, the definiens is f(x) of type N. However, f may now occur isolated, for instance as argument to another function. Various considerations lead one here to postulate a further rule:

3. from f(x)=g(x) infer f=g, and vice versa.

One consideration in support of this rule is the following. Suppose I have defined f and g by

f(x) = t
g(x) = t'

and it turns out that

t=t'

namely that t and t' are definitionally identical. It is then natural to say that f and g, too, are definitionally identical,

f=g

But this example also shows that when rule 3 is admitted, then definitional identity can no longer be interpreted as "a and b have the same value". Namely, in the example we have f=g, but both f and g are irreducible (only when they are supplied with arguments can a reduction take place).

One conclusion to draw from this is that although a reduction relation can be defined for higher-order expressions f, this relation cannot be thought of as evaluation. Evaluation makes good sense only after f has been supplied with arguments so that a ground-level expression results.
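Schematically (our reconstruction of the example just given): from the definitional equations f(x) = t and g(x) = t', together with t = t', one obtains

f(x) = t = t' = g(x), and hence f = g by rule 3,

although f and g are themselves normal forms: the reduction relation only unfolds them when they are applied to arguments, so their definitional identity cannot be read as reduction to a common value.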

09:30
Luca Tranchini (Eberhard Karls Universität Tübingen, Germany)
Alberto Naibo (IHPST (UMR 8590), Université Paris 1 Panthéon-Sorbonne, France)
Harmony, stability, and the intensional account of proof-theoretic semantics
PRESENTER: Luca Tranchini

ABSTRACT. Proof-theoretic semantics (PTS) is based on the idea that the meaning of a proposition A should not be explained in terms of the conditions under which A is true, but rather in terms of the conditions under which the truth of A can be recognized. For mathematical propositions, this amounts to equating knowledge of the meaning of A with knowledge of what is to count as a proof of A, i.e. of its proof-conditions.

The explanation of the proof-conditions comes together with criteria of identity of proof, that is, with criteria for telling, given two proofs of a proposition, whether they are the same proof or not. The talk focuses on the problem of whether, for certain classes of propositions, such criteria deliver a trivial notion of identity of proof. By this we mean that for a proposition belonging to these classes there can be at most one proof of it, or, equivalently, that any two proofs of such a proposition are identical.

If identity of proof is not trivial, PTS delivers an intensional account of meaning, that is, it can give substance to the idea that there may be essentially different ways in which a proposition can be true, corresponding to the different proofs of the proposition. On the contrary, if identity of proof is trivial, the set of proofs of a proposition A - what in PTS can be seen as the semantic value of A - is either the empty set, or the singleton containing the (only) proof of A. In this case, PTS would come very close to an extensional semantics in which the semantic value of a proposition is simply identified with its truth-value.

Central to PTS is thus the understanding of proofs as abstract objects, as opposed to the syntactic representations of proofs in formal calculi. Formal derivations should be seen as merely "representing", or "denoting", proofs in the abstract sense. Given an appropriate equivalence relation on formal derivations, one may say that equivalent derivations denote the same proof, and proofs could really be thought of as the abstracta obtained by quotienting derivations modulo equivalence.

One of the most promising accounts of the two central concepts of PTS, called "harmony" and "stability", has been given in terms of certain transformations on derivations in the natural deduction format: reductions and expansions. Reductions and expansions determine an equivalence relation on derivations, and in turn a notion of identity of proofs, which, as we show, is trivial for two significant classes of propositions: identity statements and negated propositions. In order to recover an intensional account of the meaning of these propositions, we consider the possibility of weakening either the notion of harmony, or that of stability, or both.
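As an illustration of the kind of transformations at issue (a standard textbook example for conjunction, not taken from the abstract): a derivation that introduces A ∧ B from derivations of A and of B and then immediately eliminates it to recover A reduces to the original derivation of A alone; conversely, any derivation of A ∧ B expands to the derivation that first extracts A and B by the two elimination rules and then reintroduces A ∧ B. Derivations related by such reductions and expansions are counted as denoting the same proof.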

We conclude by discussing the extent to which the proposed framework is coherent with the interpretation of proofs as programs stemming from the Curry-Howard correspondence.

09:00-10:30 Session 28B: B1/B5 SYMP Science as a profession and vocation. On STS's interdisciplinary crossroads 1 (WEBPROVOC-1)

Organizers: Ilya Kasavin, Alexandre Antonovskiy, Liana Tukhvatulina, Anton Dolmatov, Eugenia Samostienko, Svetlana Shibarshina, Elena Chebotareva and Lada Shipovalova

The topic of the special symposium is inspired by Max Weber’s lecture on “Science as a Vocation” [Wissenschaft als Beruf], which will be celebrating the 100th anniversary of its publication in 2019. The ambivalence of the German term Beruf [occupation, job, vocation] plays a crucial role in Weber’s text, making it possible, on the one hand, to view science as a highly specialized activity, and on the other hand, to uncover its openness, its communicative nature, and its ethical dimension. In particular, the essay’s focus on the communicative dimension of science, and its relation to ideas of social progress, brings to light the significance of human meaning and practice in the conduct of science, but also the reliability of scientific knowledge and its perceived status in society. Weber’s lecture clearly remains relevant today, since it interrogates the possibility for the history and philosophy of science to be both a specialized and an open project, designed to bridge the disciplinary gaps between various approaches to the study of science. More broadly, his essay thus presents a timely attempt to address the problem of integrating different academic cultures: philosophy and the sciences; ethics and methodology.


The call for epistemic openness should be complemented by a renewed methodological focus, including an emphasis on detailed historical and sociological research, and the development of educational practices that foster the creation of new “trading zones” (Peter Galison), in which cross-disciplinary discussions of science, technology and human values can take place. With our call, we thus invite scholars to re-engage Weber’s text, from the perspective of 21st century Science and Technology Studies (STS), to help forge new forms of interdisciplinary interaction and expertise.
 

Chair:
Ilya Kasavin (RAS Institute of Philosophy, Moscow, Russia)
Location: Room 401
09:00
Ilya Kasavin (RAS Institute of Philosophy, Moscow, Russia)
The Scientist’s Dilemma: After Weber

ABSTRACT. Max Weber's lecture “Science as a Vocation” [Wissenschaft als Beruf], which will be celebrating the 100th anniversary of its publication in 2019, raises a number of ethical issues in science whose topicality has only grown over the last century. Among them is the problem of the underdetermined disciplinary status of the ethics of science: its philosophical and specifically scientific subject matters and methods, on the one hand, and, on the other, its goal of determining methods and approaches for the normative management of science. Along this way, one has to compare philosophical and other ethical and regulatory programmes as they relate to science; to respond to questions about resolving Hume's guillotine and the Merton-Popper paradox in the ethics of science; and to demonstrate the ability of the ethics of science to justify science as a form of public good.

The ethics of science has been partly formed by transferring the methods of its justification from philosophical ethics: as descriptive, normative and applied ethics, and as metaethics. Here Aristotle and Kant dispute with Bentham and Quine over the autonomy or the naturalization of ethics. It turns out that a clear choice among these programmes is hardly possible for the ethics of science: on the one hand, it cannot do without borrowing empirical evidence from the cognitive sciences and cultural studies, and on the other, it cannot be limited to the actual state of affairs. Rather, the specifically scientific and philosophical aspects of the ethics of science (descriptive, applied, professional, normative) focus on the various types of formal and informal regulation of science. The ethical impetus generated by Max Weber's lecture “Wissenschaft als Beruf” helps bridge the gap between these two dimensions: “profession” as a feature of science's social institution and “vocation” as an existential propensity of a person. The Merton-Popper paradox is likewise an attempt to elucidate and reconcile the factual autonomy of profession and vocation pointed out by Weber.

I propose to project the ethos of science, assimilating some approaches from virtue epistemology (J. Kvanvig, J. Turri), in order to resolve the paradox. In the ethos, particular norms and values operate at definite structural levels of the epistemic community and at the stages of its development; they also occasionally transcend the core of the scientific community and influence social change. Namely, epistemic virtues and vices can be viewed as distributed among three dimensions of the scientific community, seen in terms of a revised grid-group analysis (M. Douglas): the inner structure, the group boundary, and the pass that transmits the content of the inner structure over the group boundary to the external surroundings of science, society as a whole. Granted, inductive inference can hardly bridge the gap between moral facts and moral norms, or between epistemic facts and epistemic norms. Still, one can use the gap positively, falsifying norms with facts and criticizing facts in the face of norms. Thus the communication aimed at resolving Hume's guillotine and the Merton-Popper paradox creates a rational discourse transcending science toward society and establishing an external “trading zone”. Yet it is hardly trade that shapes the game; it is rather a gift (an analogy borrowed from M. Mauss). As a result, the special epistemological status of science is ethically justified as providing integrity, solidarity and the common good.

09:30
Svetlana Shibarshina (National Research Lobachevsky State University of Nizhni Novgorod, Russia)
Scientists’ social responsibilities in the context of science communication

ABSTRACT. This study considers the issue of scientists' social responsibilities in the context of science communication. I accentuate this question by rethinking scientists' tasks as they were outlined by Max Weber in his “Science as a vocation and profession”. In the twenty-first century, researchers are increasingly focusing on the evolving mission of the science communicator engaged in interactions with the public. First of all, this includes science popularization, the image of which is being re-actualized in contemporary mediatized society and culture. I briefly depict the picture of science popularization in Germany in the 1920s, illustrated in particular by popular science magazines and radio programs, the emergence of which originally had to do with the concept of Kulturnation (German: “cultural nation”). In those times, however, science popularization was not promoted as a scientist's mission and generally proceeded within the 'deficit' model. What is also important is that the necessity of disseminating scientific knowledge to lay audiences was recognized by scientists earlier than by the state authorities, and state support of science communication started mainly in the 20th century. In this respect, I focus primarily on countries with accelerated modernization and an acute need for rapid development of science and technology, with the Soviet Union as an illustration. The Soviet state supported science studies and science dissemination projects: science popularization was rooted in the overall political and social ideology. Here, my point is that, if somewhat reconsidered, the idea of science communication as a vocation and profession finds a partial application in the Soviet context (the image of science popularization as both invoking younger generations' interest in scientific endeavour and supporting future career guidance).

In his lecture, Weber differentiates between professional scientific activities, related to solving specific research objectives on the basis of certain methods and techniques, and science as an inner calling for the truth, inspired by the desire to know the unknowable. These two characteristics of scientists seem to contradict one another: professional activity limits the inward calling for science, and the commitment to implement the “calling” runs contrary to professionalism. In the Soviet context, however, science communication tended to combine these aspects. It was the idea of “vocation” that was propagated when involving and recruiting younger generations into research activities. This is well illustrated by the practices of the Society “Znanie” (Knowledge), established in 1947, which effectively contributed to evoking society's interest in science and to implementing Soviet science and technology policy. In this respect, the Society fulfilled three interrelated objectives: disseminating scientific knowledge; building and propagating the heroic image of the scientist, and thereby facilitating the fulfilment of a large number of technical and socio-political tasks (the Society aimed to form a scientist's image in terms of the “vocation”: scientists were meant to make the world better); and shaping a value attitude to scientific knowledge as a significant constituent of people's common worldview and their guide in everyday practices, e.g. in solving various problems. The Society's system included open people's universities, museums and libraries, “Houses of Knowledge”, publishing and printing houses, etc. Every year it gave 25 million public lectures for 280 million people throughout the USSR. It participated in radio and television programs, including the creation of popular science films. Science popularization made intense use of popular science magazines, whose target audience included children and young adults. The recruitment of younger generations also proceeded through scientific events for schoolchildren, school competitions, and technical groups engaging in radio electronics, automatics, biochemistry, cosmonautics, etc. All of this was carried out within a model aimed at letting the idea of vocation grow, in a seemingly natural way, into an appraisal of the idea of profession.

The current tendency for scientists to engage in dialogue with the public is justified by the actualization of such phenomena as the mediatization of social institutions and processes, the need to demystify the unequivocally negative perception of the consequences of scientific and technological progress in public consciousness, and the importance of public awareness of scientific developments. At the same time, I consider and problematize some debatable aspects of external science communication. A significant question here is who should act as a communicator. Nowadays this involves not only scientists, but also science journalists, press secretaries, and others, right up to amateurs. Should scientists still play the major role? If in the Soviet Union their involvement in many ways occurred under institutional coercion, today it is often up to their inner calling to interact with the public.

10:00
Liana Tukhvatulina (Institue of Philosophy, Russian Academy of Sciences, Russia)
Scientist as an Expert: Breaking the Ivory Tower

ABSTRACT. Weber considered participation in the "battle of gods" an unacceptable risk for science. This risk occurs when a scientist (or a lecturer) plays the role of arbitrator in value debates. Weber argued that science turns into ideology when one uses the authority of science ex cathedra to persuade the audience to follow one's ideas. According to Weber, science has nothing to do with politics, since scientific inquiry does not require political legitimization: it is governed by the principle that any knowledge is worthy "for its own sake". This idea is closely connected with Weber's key principle of "the disenchantment of the world". Weber considered deep specialization and the functional autonomy of social institutions to be the key features of the rationalization process, and for him the value incompatibility of politics and science is a result of this process as well. That is why Weber insisted that a scientist should not assist politicians in decision making unless he or she wants to be called a "prophet". Nevertheless, the present situation in social science shows that the collaboration of science and politics in the form of expertise is in great demand in the "risk society". Political decisions require deep expert analysis, which becomes one of the crucial aspects of their legitimacy. Hence, expert knowledge blurs the borders between science and politics once again. This process has become a matter of epistemological discussion. The relevant question here is whether expert knowledge makes scientific research dependent on the political agenda (i.e., destroys the autonomy of science). I argue that a correct understanding of expert knowledge allows us to consider expertise as a purely scientific phenomenon (in Weber's sense). It requires a differentiation of the scientific and political contexts of expertise, which helps reduce the moral pressure on an expert's work. This differentiation can be analyzed by means of the concepts of "operative closure" and "cognitive openness" provided by Niklas Luhmann. We can advocate the idea of the autonomy of science (even in the age of expert knowledge) if we consider "operative" (methodological) closure and "cognitive openness" (the ability of science as a system to respond to challenges coming from other systems) as two interrelated features. I intend to show how Luhmann's analysis of science can help clarify the epistemological status of expertise as a kind of scientific knowledge.

09:00-10:30 Session 28C: C1 SYMP Formalism, formalization, intuition and understanding in mathematics: From informal practice to formal systems and back again 1 (FFIUM-1)

Organizer: Mate Szabo

This project investigates the interplay between informal mathematical theories and their formalization, and argues that this dynamism generates three different forms of understanding: 

1. Different kinds of formalizations fix the boundaries and conceptual dependences between concepts in different ways, thus contributing to our understanding of the content of an informal mathematical theory. We argue that this form of understanding of an informal theory is achieved by recasting it as a formal theory, i.e. by transforming its expressive means. 

2. Once a formal theory is available, it becomes an object of understanding. An essential contribution to this understanding is made by our recognition of the theory in question as a formalization of a particular corpus of informal mathematics. This form of understanding will be clarified by studying both singular intended models, and classes of models that reveal the underlying conceptual commonalities between objects in different areas of mathematics. 

3. The third level concerns how the study of different formalizations of the same area of mathematics can lead to a transformation of the content of those areas, and a change in the geography of informal mathematics itself. 
In investigating these forms of mathematical understanding, the project will draw on philosophical and logical analyses of case studies from the history of mathematical practice, in order to construct a compelling new picture of the relationship of formalization to informal mathematical practice. One of the main consequences of this investigation will be to show that the process of acquiring mathematical understanding is far more complex than current philosophical views allow us to account for. 

While formalization is often thought to be negligible in terms of its impact on mathematical practice, we will defend the view that formalization is an epistemic tool, which not only enforces limits on the problems studied in the practice, but also produces new modes of reasoning that can augment the standard methods of proof in different areas of mathematics. 

Reflecting on the interplay between informal mathematical theories and their formalization means reflecting on mathematical practice and on what makes it rigorous, and how this dynamism generates different forms of understanding. We therefore also aim to investigate the connection between the three levels of understanding described above, and the notion of rigor in mathematics. The notion of formal rigor (in the proof theoretic sense) has been extensively investigated in philosophy and logic, though an account of the epistemic role of the process of formalization is currently missing. We argue that formal rigor is best understood as a dynamic abstraction from informally rigorous mathematical arguments. Such informally rigorous arguments will be studied by critically analyzing case studies from different subfields of mathematics, in order to identify patterns of rigorous reasoning.
 

Chair:
Gerhard Heinzmann (Université de Lorraine, France)
Location: Room 112+113
09:00
Marco Panza (CNRS, France)
Formalisation and Understanding in Mathematics

ABSTRACT. The aim of my talk is to introduce the FFIUM symposium. It therefore concerns the main topic of the symposium: how formalisation (of an informal, or differently formalised, mathematical theory) can contribute to (mathematical) understanding. I will distinguish three forms of understanding in mathematics.

i) Both the boundaries and the internal relations of conceptual dependence of informal mathematical theories are often blurred. An informal definition of natural numbers does not make it clear, for example, whether defining addition on these numbers is part of defining them, or whether the latter is independent of the former, just as informally proving a theorem within number theory may not make it clear whether the proof relies on the admission of a form of the Axiom of Choice or on predicative comprehension. Different kinds of formalizations can then fix both the boundaries and the conceptual dependences among concepts in different ways, thus contributing to our understanding of the content of the informal theory itself. We might call this form of understanding 'understanding by recasting'. The basic idea here is originally due to Ken Manders: mathematics provides understanding of other mathematics by conceptually recasting it. Recasting is intended here as a transformation of expressive means, and it is taken to provide understanding insofar as it enhances the grasp of the relevant content, without loss of precision or rigor. In other words, one way of providing mathematical understanding is by offering a new way to express a certain piece of mathematical content, one more focused on that content itself and less open to errors caused by conceptual contamination.

ii) Once a formal theory is available, it too must be understood. An essential contribution to our understanding of a formal theory is made by our identification of it as a formalization of a particular corpus of informal mathematics. Formal (consistent) theories have models, and often these will not be isomorphic to one another. Investigating these models can be valuable for a variety of mathematical and philosophical purposes, for example proving independence theorems. But quite often such theories are conceived to account for a previously intended model, or better a previously intended system of objects, relations and operations, intuitively fixed in a way that, in many cases, depends on prior history. Establishing how the formal theory (be it categorical or not) accounts for this model or system, what is lost and gained in formalisation, what is related and what is separated, is a second form of understanding. While understanding by recasting is understanding of an antecedent piece of mathematics (often an informal one), this new form of understanding is understanding of the formal theory itself. One could call it 'understanding by interpretation'.

iii) In both understanding by recasting and understanding by interpretation, understanding could be seen as a process that allows one to fix a content considered as invariant: a formal theory recasts the same, invariant content of a piece of antecedent mathematics; interpretation sheds light on the way it does so, or can be taken to do so. Reflecting on the relations between different pieces of antecedent mathematics can, however, suggest ways of radically transforming a received content, and thus change the geography of mathematics itself. This provides a new form of understanding. It might be called 'understanding by innovation'. Formalisation can help this process by fixing a new content and also suggesting new informal ways to think about it.

After having distinguished these different forms of understanding, the talk will try to identify their possible interactions, and suggest how they might contribute to the pursuit of epistemic economy: making as much mathematical content as possible depend on as few basic conceptual resources as possible.

09:30
Marco Buzzoni (University of Macerata, Italy)
Mathematical Vs. Empirical Thought Experiments: between Informal Mathematics and Formalization

ABSTRACT. In spite of the special importance that Ernst Mach attached to mathematical thought experiments (henceforth MTEs), the literature has paid too little attention to the relationship between this kind of TE and the kind found in the experimental sciences (cf. Witt-Hansen 1976, Müller 1969, Brown 1999, 2007b and 2007c, Van Bendegem 2003, Sherry 2006, Starikova 2007, Cohnitz 2008, Starikova and Giaquinto 2018; an indirect, though fundamental, treatment of TEs in mathematics, based on the distinction between 'formal' and 'informal' (inhaltliche) mathematics, is to be found in Lakatos 1963-4: on Lakatos, see especially Yuxin 1990, Glas 2001, Kühne 2005, pp. 356-366, and Sherry 2006). To clarify the relationship between empirical and mathematical TEs, in the first part of my talk I shall give a very brief outline both of what I take to be the essential characteristics of thought experiments in the experimental sciences and of what makes up the irreducibility of (formal) mathematics to empirical knowledge. After this preliminary preparation of the way, I shall approach the main question of my talk. It has usually been taken as obvious, from Mach onwards, that there are MTEs. Against this opinion I shall maintain that, in an important epistemological sense, and unlike in the natural sciences and applied mathematics, TEs cannot be regarded as a particular method within formal mathematics. In this sense it would be misleading, and therefore not advisable, to speak of TEs in (pure) mathematics, since these cannot, in principle, come into direct conflict with experience, which is a function of the interactions between our bodies and the surrounding reality. Even though visualisation plays an important role in many TEs of pure mathematics, MTEs are, as far as an epistemologically fundamental aspect is concerned, more similar to formal proofs than to TEs in the natural sciences. But, in another sense, one can recognize the existence of MTEs without collapsing the distinction between formal analytic knowledge, which cannot directly conflict with experience, and experiential knowledge, which is always connected with the way in which we interact, through our body, with the world of reality around us. Indeed, the notion of a formal system does not express a state but an ideal type, which is exemplified, but not exhausted, by the relatively well-insulated systems to which the formalizing activity of thought gives rise. Every time mathematics temporarily reopens itself to the interactions between our actions and the environment around us, it becomes liable to change in accordance with some aspects of our empirical knowledge; and the deeper this reopening runs, the more fruitful TEs are, playing a role that is not fundamentally different from the one they play in the experimental sciences. Indeed, TEs have a role to play not only in cases where new branches or even new mathematical disciplines appear in human culture, but also in the effort to secure the complete coherence of the network of connections between the various formal theories that make up mathematics at a certain period of its historical development. It is no accident that Lakatos was at the same time fully aware of the historicity of mathematics and gave one of the most important impulses to the development of a theory of MTEs, although his philosophy is in many ways unsatisfactory, especially because it does not provide any criterion to distinguish clearly, at least in principle, mathematics from the experimental sciences.

10:00
Michael Andrew Moshier (Chapman University, United States)
The Independence of Excluded Middle from Double Negation via Topological Duality

ABSTRACT. In standard approaches to logic, the Law of the Excluded Middle (LEM) is equivalent to the Law of Double Negation (LDN). A careful look at the proof that one of the laws renders the other admissible shows that, in fact, obtaining LDN from LEM hinges on a third law: the Law of Non-contradiction (LNC). Moreover, the proof in the other direction can be adapted to show that LDN renders both LNC and LEM admissible. In very rough summary, LEM+LNC = LDN.
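One routine way to see the dependence (a standard reconstruction, not necessarily the authors' formulation): to derive LDN from LEM, assume ¬¬A; by LEM, A ∨ ¬A; in the first case A follows immediately, while in the second case ¬A together with ¬¬A yields a contradiction, and it is precisely at this point that a non-contradiction/explosion step is needed to conclude A. The passage from LEM to LDN thus leans on LNC.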

We review topological semantics for propositional logic (essentially, Stone duality) and locate the actual source of the coincidence LEM+LNC=LDN in the zero-dimensionality of topological semantics for propositional logic. By a suitable generalization of Stone duality, moving to general compact Hausdorff spaces, we obtain a category of logical systems in which LDN holds, but neither LEM nor LNC do.

The well-known formalization of intuitionistic logic (in Heyting algebra) already shows that it is possible to have LNC without LEM; the dual of this also shows that it is possible to have LEM without LDN.

09:00-10:30 Session 28D: A2 SYMP Substructural epistemology 1 (SubStrE-1)

Organizers: Dominik Klein, Soroush Rafiee Rad and Ondrej Majer

Epistemic and doxastic logic, on one hand, and probabilistic logics, on the other, are the two main formal apparatuses used in the representation of knowledge and (graded) belief. Both are thriving fields that have allowed for many fruitful applications in philosophy and AI. In representing knowledge and belief, classical epistemic and doxastic logic rely on a number of minimal assumptions. Agents are, for instance, usually taken to be logically omniscient, and their informational states are assumed to be closed under logical equivalence. Within dynamic logic, informational updates are also often assumed to be correct, i.e. truthful. In the same manner, in the probabilistic approach the agents' beliefs are assumed to satisfy the Kolmogorov axioms for probabilities, which in turn impose strong rationality and consistency conditions on these beliefs.

These assumptions are, of course, idealizations. Newly learned information can often be false or misleading and rarely satisfies classic strong consistency criteria. What is more, real agents frequently behave in ways that are incompatible with orthodox assumptions of logical and probability theory. To establish more comprehensive positive or normative theories about agents, there is hence a need for theories that are able to deal with weaker contexts where some standard assumptions are violated. 

This workshop brings together a number of approaches to weak, substructural epistemic logics. The approaches discussed apply to the merging of possibly contradictory information, probabilistic assignments based on contradictory or inconclusive information, intensional and hyper-intensional beliefs (i.e. belief states that are not closed under logical equivalence), and collective epistemic states such as group knowledge among groups of weaker-than-classical agents.

Chair:
Ondrej Majer (Institute of Philosophy, Academy of Sciences of the Czech Republic, Czechia)
Location: Room 364
09:00
Marta Bilkova (Academy of Sciences of the Czech Republic, Czechia)
Common belief logics based on information

ABSTRACT. Substructural epistemic logics are an example of formal models of the beliefs of rational agents in which the perspective switches from the traditional semantic approach, based on epistemic alternatives, to an information-based approach. In particular, we can interpret beliefs as based on available information or reasonable expectations, and capture them via diamond-like modalities interpreted over information states or probability distributions. In the former case, the corresponding notion of belief is that of confirmed-by-evidence belief. A logical account of belief along these lines needs to take into account inconsistencies and incompleteness of information, or uncertainty about how likely an event is, based on the evidence locally available to agents. This naturally leads us to use and study modal extensions of non-classical logics such as substructural, paraconsistent or many-valued logics (Belnap-Dunn four-valued logic and Lukasiewicz logic especially). Particular examples of such epistemic logics have been investigated as modal extensions of distributive substructural logics [1,3].

As we think that understanding the notion of common belief is crucial to any formalization of group beliefs and their dynamics, the aim of this contribution is to present common belief extensions of some epistemic logics based on information-state semantics, and to prove their completeness. We will consider Hilbert-style axiomatizations of these logics (both finitary and infinitary in nature), in which common belief is formalized as a greatest fixed point expression. To approach the completeness problem we use two different insights, of which we provide theoretical accounts, one coming from abstract algebraic logic, the other from coalgebraic logic. First, to prove the strong canonical completeness of the infinitary versions of the logics we use a proper version of extension lemmata such as the Lindenbaum lemma or Belnap's pair-extension lemma. A general abstract algebraic perspective on both lemmata for infinitary logics, widening the area of their applicability beyond modal extensions of classical logic and pointing at their limits, is given in [2]. Second, understanding the frame semantics of the logics we consider as given by coalgebras, and generalizing insights available for flat fixed point coalgebraic logics based on classical logic, we prove the completeness of the finitary axiomatization of the logics.
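For orientation, the usual fixed-point formalization of common belief (a standard rendering, stated here only as background; the authors' information-based version may differ in detail) takes CB φ to be the greatest fixed point of the map X ↦ E(φ ∧ X), where E ψ abbreviates "everybody in the group believes ψ". It is axiomatized by the fixpoint axiom CB φ → E(φ ∧ CB φ) together with the induction rule: from ψ → E(φ ∧ ψ) infer ψ → CB φ.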

[1] Bilkova, M., O. Majer and M. Pelis, Epistemic logics for sceptical agents, Journal of Logic and Computation, 26(6), 2016, pp. 1815-1841, (first published online March 21, 2015). [2] Bilkova, M., Cintula P., Lavicka T., Lindenbaum and Pair Extension Lemma in Infinitary Logics,Logic, Language, Information, and Computation. WoLLIC 2018. Springer, 2018. [3] Sedlar, I., Epistemic extensions of modal distributive substructural logics, Journal of Logic and Computation, 26(6), 2016, pp. 1787–1813.

09:30
Ondrej Majer (Institute of Philosophy, Academy of Sciences of the Czech Republic, Czechia)
Dominik Klein (Bayreuth University, Germany)
Soroush Rafiee Rad (Bayreuth University, Germany)
Non-classical probabilities over Dunn Belnap logic
PRESENTER: Ondrej Majer

ABSTRACT. Belnap and Dunn [1] introduced a four-valued propositional logic allowing, in addition to the classical truth values True and False, the attribution of the non-classical truth values Neither and Both, accounting for possibly incomplete or contradictory information concerning a particular proposition.

The Belnap-Dunn four-valued logic has been extensively studied since its introduction and has proved fruitful in the study of rational agency and of rational agents' attitudes towards the truth or falsity of propositions in more realistic contexts. More recently there have been attempts to study probabilistic extensions of this logic, by Dunn [2] and by Childers, Majer and Milne [3]. In particular, Dunn investigates this probabilistic extension by introducing non-classical probability functions that assign to each proposition in the language a normalised four-valued vector encoding a probability mass function on the four possible truth values. This is in contrast to the classical case, where the probability function on the language assigns to each proposition two values expressing a mass function on the proposition and its negation. Dunn [2] studies the logical structure of this probabilistic setting. However, to define the logical connectives he makes some very strong independence assumptions that end up having undesirable consequences: in that setting every proposition ends up probabilistically independent of every other proposition, even of its logical consequences, and even of itself. Our work picks up on the non-classical probability functions defined by Dunn but redefines the logical connectives in a way that avoids such undesirable independence consequences. In this new setting we introduce the necessary ingredients for defining conditional probabilities and show that the standard properties hold for them. Furthermore, we propose strategies for aggregating these four-valued probability assignments and show that the standard properties hold for the proposed aggregation procedures. We also study the connection with the approach given in [3] and show that the two settings are inter-translatable.
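As a purely illustrative sketch of the kind of object involved (our own minimal Python representation of a Dunn-style four-valued probability assignment; it does not implement the authors' redefined connectives, and the 'belief' reading below is an assumption, not a definition from [2]):

# A four-valued probability assignment: a mass function over the
# Belnap-Dunn truth values True (T), False (F), Neither (N), Both (B).
VALUES = ("T", "F", "N", "B")

def is_assignment(p, tol=1e-9):
    """Check that p assigns non-negative masses to the four values summing to 1."""
    return (set(p) == set(VALUES)
            and all(v >= 0 for v in p.values())
            and abs(sum(p.values()) - 1.0) < tol)

def belief(p):
    """Total support for truth: mass on T plus mass on B
    (one common reading in this literature; an assumption here)."""
    return p["T"] + p["B"]

# Example: a proposition backed by partly conflicting evidence.
p_A = {"T": 0.5, "F": 0.2, "N": 0.1, "B": 0.2}
assert is_assignment(p_A)
print(belief(p_A))  # 0.7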

[1] Belnap, N. D., Jr, A useful four-valued logic: How a computer should think, §81 of Alan R. Anderson, Nuel D. Belnap, Jr, and J. Michael Dunn, Entailment: The Logic of Relevance and Necessity, Vol. II, Princeton NJ and Oxford: Princeton University Press, 1992.

[2] Dunn, J. M. ,Contradictory information: Too much of a good thing, Journal of Philosophical Logic 39 (2010): 425–452

[3] Childers, T., Majer, O., Milne, P., The (Relevant) Logic of Scientific Discovery, (under review)

10:00
Zoé Christoff (University of Bayreuth, Germany)
Olivier Roy (University of Bayreuth, Germany)
Norbert Gratzl (Ludwig Maximilian University of Munich, Germany)
Priority Merge and Intersection Modalities
PRESENTER: Zoé Christoff

ABSTRACT. Distributed knowledge, defined by taking the intersection of the individuals’ indistinguishability relations or information cells, is the standard notion of pooled information in epistemic logic. It is well-suited to represent the pooling of true information. In contrast, intersection of “softer” types of information, for instance plausibility orderings, yields counter-intuitive results. Indeed, the corresponding notion of “distributed belief”, defined as the intersection of the individual underlying plausibility orderings on possible worlds, becomes inconsistent as soon as individual opinions diverge too radically. To avoid such inconsistencies, we focus on an alternative way to pool beliefs, the so-called priority (or lexicographic) merge.

Priority merge aggregates plausibility pre-orders as follows. Assume that the group members are somehow ordered in terms of expertise or influence in the group. Then priority merge proceeds by lexicographically considering the strict preferences of each agent in order of priority. Every pair of states strictly ordered by the agent on top of the order is also strictly ordered for the group. For the pairs about which the topmost agent is indifferent, we move to the second agent and order them strictly if she does so, and so on until all pairs are strictly ordered, or we have gone through all agents. Priority merge has been generalized by [2] to arbitrary priority operators which pool any number of relations using priority graphs for agents, i.e., relations that need not be linear nor transitive. Most importantly, [2] shows that any such priority operator can be represented as a combination of two simple operations. The logical framework proposed here relies directly on this result.
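The lexicographic rule just described can be sketched as follows (a minimal illustration of our own, under the simplifying assumption that each agent's plausibility pre-order is total and represented by a rank function, lower rank meaning more plausible; the priority-graph setting of [2] is more general):

def priority_merge_compare(ranks, x, y):
    """Compare states x and y for the group.

    ranks: the agents' rank functions (dicts from states to ints), listed in
    priority order, ranks[0] being the most authoritative agent.
    Returns -1 if the group ranks x strictly above y (more plausible),
    1 if strictly below, and 0 if every agent is indifferent."""
    for r in ranks:
        if r[x] < r[y]:   # the first agent with a strict preference decides
            return -1
        if r[x] > r[y]:
            return 1
    return 0              # all agents are indifferent between x and y

# Example: the top agent is indifferent between w and v, the second agent
# strictly prefers v, so the group strictly prefers v to w.
agent1 = {"w": 0, "v": 0}
agent2 = {"w": 1, "v": 0}
print(priority_merge_compare([agent1, agent2], "w", "v"))  # 1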

We start with a systematic comparison between the logical behavior of priority merge and the more standard notion of pooling through intersection, for different notions of belief, on multi-agent plausibility models. We then provide a sound and complete axiomatization of the logic of priority merge, as well as a proof theory in labeled sequents that admits cut. To the best of our knowledge, the only previous logical study of lexicographic merge has been done in extended modal languages [3]. One of the contributions of the present paper is therefore to show that priority merge can also be captured by a modal logic without any hybrid machinery. Finally, we study Moorean phenomena and define a dynamic resolution operator for priority merge, in the same way as [1] does for distributed knowledge, and we provide a complete set of reduction axioms.

References: [1] Thomas Ågotnes and Yì N. Wáng. Resolving distributed knowledge. Artificial Intelligence, 252:1 – 21, 2017.

[2] Hajnal Andréka, Mark Ryan, and Pierre-Yves Schobbens. Operators and laws for combining preference relations. Journal of Logic and Computation, 12(1):13–53, 2002.

[3] Patrick Girard and Jeremy Seligman. An analytic logic of aggregation. In Proceedings of the 3rd Indian Conference on Logic and Its Applications - Volume 5378, ICLA 2009, pages 146–161, Berlin, Heidelberg, 2009. Springer-Verlag.

09:00-10:00 Session 28E: IS A1 Malliaris
Chair:
Christina Brech (University of São Paulo, Brazil)
Location: Room Krejcar
09:00
Maryanthe Malliaris (University of Chicago, United States)
Complexity and model theory

ABSTRACT. The ultraproduct construction gives a way of averaging an infinite sequence of mathematical structures, such as fields, graphs, or linear orders. The talk will be about the strength of such a construction.

09:00-10:30 Session 28F: Journal panel

Books and journals have a significant role in the scholarly disciplines, as means of disseminating work, as professional forums for debate, and as criteria for advancement in most research fields. Scholarly publishing is undergoing profound changes, which makes it all the more critical that researchers, especially junior researchers, stay abreast of the current state of scholarly publishing. 

To this end, the Editors of prominent journals in the history and philosophy of science will convene a panel on issues facing scholarly publishing. The forum will have a strong focus on providing advice and mentorship to junior scholars about selecting journals, placing their work in journals, best practices for navigating the review process, and obtaining a broad and engaged audience for scholarly work. These recommendations will be of interest to more senior researchers as well, including discussion of the role of referees and of the review process, and of recent changes to the landscape of journal publishing.

As part of the panel, some speakers will discuss current developments in, and prospects for, scholarly publishing. These may include the increasing role of open access publishing, including Plan S in Europe, and the changing relationships between book and journal publishing. 

The following Editors have agreed to take part: 

• Rachel Ankeny, Studies in History and Philosophy of Biological and Biomedical Sciences
• Otávio Bueno, Synthese
• Sabina Leonelli, History and Philosophy of the Life Sciences
• Thomas Reydon, Journal for General Philosophy of Science
• K. Brad Wray, Metascience

A 90 minute time slot will allow ample time for questions. 

Chair:
Hanne Andersen (University of Copenhagen, Denmark)
Location: Room Gočár
09:00-10:00 Session 28G: C3 Philosophy of neuroscience
Chair:
Konrad Rudnicki (University of Antwerp, Belgium)
Location: Room 250
09:00
Henk De Regt (Radboud University, Netherlands)
Linda Holland (Vrije Universiteit Amsterdam, Netherlands)
Benjamin Drukarch (Amsterdam UMC, Netherlands)
Modeling in neuroscience: Can complete and accurate understanding of nerve impulse propagation be achieved?
PRESENTER: Henk De Regt

ABSTRACT. Explanatory understanding of phenomena in neuroscience is typically achieved via the construction of models. A prime example is the celebrated Hodgkin-Huxley model (HH-model) of the action potential (Hodgkin and Huxley 1952, Craver 2007). The HH-model is central to the electricity-centered conception of the nerve impulse that dominates contemporary neuroscience. In recent years, however, the HH-model has been challenged because it is unable to account for non-electrical aspects of the nerve impulse, some of which have been known for decades. Consequently, alternative theories and models of nerve impulse propagation have appeared upon the scene, using a thermodynamic or mechanical framework instead of an electrical one. One of these models is the Heimburg-Jackson model (HJ-model) (Heimburg and Jackson 2005), according to which the nerve impulse is an electromechanical density pulse in the neural membrane. Its proponents assume that this model is potentially able to replace the HH-model.

Alternatively, one might think that these models of nerve impulse propagation should not be regarded as rivals but may be integrated in a general unifying model. An example of such a proposal is the model of Engelbrecht et al. (2018), which has been developed to unify all relevant manifestations of the nerve impulse and their interactions. The attempt of Engelbrecht et al. aligns with the widespread neuroscientific conviction that the ultimate goal of neuroscience is to develop models that represent neuroscientific phenomena accurately and completely. In this paper, however, we argue that the Engelbrecht model does not provide an accurate and complete representation of the nerve impulse. One reason for this conclusion is that the HH-model and the HJ-model, which the Engelbrecht model attempts to integrate, contain inconsistent assumptions.

We submit that the above-sketched approaches to modeling nerve impulse propagation are motivated by a misguided assumption, namely that accurate and complete representation is a unique, objective criterion for evaluating neuroscientific models. By contrast, we propose, in line with Giere (2004), to take into account the purpose for which a model is used when evaluating the value of models; models are tools that represent the nerve impulse accurately and completely enough to achieve a specified goal. Considering models as tools for specific purposes, and acknowledging that different purposes often require incompatible assumptions, suggests that it will be impossible to develop a consistent general unifying model of nerve impulse propagation (cf. Hochstein 2016, Craver and Kaplan 2018). Instead of aiming at explaining such a complex phenomenon in a single model, neuroscientists would do better to employ a ‘mosaic’ (cf. Craver 2007) framework of models. From this collection of models the explanation of nerve impulse propagation can be inferred based on the piecemeal and sometimes contradictory representation of it in distinct models.

References Heimburg and Jackson (2005). PNAS 102: 9790-9795. Craver (2007). Explaining the Brain. Oxford University Press. Craver and Kaplan (2018). BJPS: https://doi.org/10.1093/bjps/axy015. Giere (2004). Philosophy of Science 71: 742-752. Engelbrecht et al. (2018). Proc. Estonian Acad. Sci. 67: 28-38. Hochstein (2016). Synthese 193: 1387-1407. Hodgkin and Huxley (1952). J. Physiol. 117: 500-544.

09:30
Karen Yan (Institute of Philosophy of Mind and Cognition, National Yang-Ming University, Taiwan)
Understanding Causal Reasoning in Neurophysiology

ABSTRACT. Neuroscientists value research that provides some causal understanding of the targeted system. In order to achieve this, they perform causal reasoning, a type of reasoning activity that aims at producing and/or evaluating causal claims about their targeted system. When they perform causal reasoning within a specific context, they need to employ standards to guide and justify it. But what are these standards? How should we as philosophers analyze and evaluate them?

The questions get more complicated when one takes the evolution and heterogeneity of neuroscientific practice into consideration. First, structures and standards for good experimental paradigms are co-evolving with technological innovation. For example, in neurophysiology, after the invention of the technique that allows geneticists to genetically modify neurons to express light-sensitive ion channels, a good experimental paradigm often involves the use of this genetic technique. Second, it is common in current neuroscience for a given set of experiments to combine various types of techniques from different areas. Each set of techniques brings in different methodological standards that may or may not be relevant to the success of causal reasoning.

These evolving and heterogeneous aspects of neuroscientific practice pose a particular challenge to the philosophers who aim to provide a normative framework in order to understand how causal reasoning in neuroscience works. We need a normative framework that accommodates these evolving and heterogeneous aspects. One way to meet the challenge is to reduce or subsume the heterogeneous practices under a single category/concept of mechanistic causal explanation that is flexible enough to accommodate the heterogeneity (Craver, 2007; Craver and Darden, 2013). Another way to meet the challenge is to adopt the framework of modest scientific pluralism on causal pattern explanations (Potochnik, 2017).

In this paper, I will first present a case study from neurophysiology. I will use the following methodology to analyze the case study: (1) delve into the details of the heterogeneous practices, (2) identify instances of causal reasoning, (3) analyze what the relevant standards are in use, (4) perform some literature analysis to assess how the community of neurophysiologists, in fact, evaluate the relevant standards. I will then apply both Craver’s framework and Potochnik’s normative framework to the case study. I aim to adjudicate which framework provides better conceptual tools for evaluating the success of the identified instances of causal reasoning. I will conclude that Potochnik’s framework does a better job than Craver’s one with respect to the case study from neurophysiology.

To that end, the paper will proceed as follows. In Section 2, I will present a case study to identify three instances of causal reasoning in neurophysiology and the relevant standards in use. In Section 3, I will argue that the conceptual tools from Craver’s framework are insufficient to complete the evaluative task. In Section 4, I will show that the conceptual tools from Potochnik’s framework adequately assist the evaluative task and help generate a better philosophical understanding of the evolving and heterogeneous aspects of causal reasoning in neurophysiology.

09:00-10:30 Session 28H: A4 History and philosophy of traditional logic 1
Chair:
Gabriela Besler (University of Silesia in Katowice, Poland)
Location: Room 301
09:00
Farrukh Khudoydodov (Tajik National University, Tajikistan)
Similarities and Differences in the Logic of Aristotle and Avicenna

ABSTRACT. The article analyzes some similarities and differences in the logic of Aristotle and Avicenna. A brief analysis of the logical teachings of Aristotle and Avicenna shows that the main differences in their views are found only in determining the nature of logic and the range of problems that constitute the subject of this science. In his solution to this question, Avicenna sided with the Neo-Platonists and presented irrefutable evidence in support of their view that logic is both a part of philosophy and an instrument of science. According to Avicenna, while the significance of logic as a part of philosophy lies in its study of the forms of thinking, its value as an organon is that, permeating all the sciences, it connects and unites them into a single system. Avicenna also developed, more thoroughly than Aristotle, a theory of propositions, which includes the doctrines of both categorical and conditional propositions and of conditional syllogisms.

09:30
Karel Šebela (Palacký University Olomouc, Czechia)
Sortal Interpretation of Aristotelian Logic

ABSTRACT. Though Aristotelian logic is interpreted as just a minor part of the canonical modern extensional logic, namely first-order logic, one point is deliberately left aside and unmentioned: the assumption of non-empty terms. Without this assumption, Aristotelian logic as a part of modern extensional first-order logic loses its essential features: the square of opposition collapses and several hitherto valid modes of syllogism yield invalid patterns of inference. The reason, as it is usually claimed, is that first-order logic does not require the non-emptiness of terms and in this sense is much broader than Aristotelian logic. However, this can be called into doubt with some success, for any possible grammatical subject of a first-order logical formula either denotes an object in the domain (a constant) or ranges over a non-empty domain of individuals; in a certain sense, therefore, the latter logic also requires the non-emptiness of its terms. I will follow the idea that the key lies in the “nature” of the domain of individuals. The point is to divide the domain into so-called “sorts”. The logic which studies the domain of quantification conceived in this way is called sortal quantification theory. The key concept of sortal quantification theory is, obviously, the concept of a sortal. The simplest and widely accepted interpretation of sortals is that they provide a criterion for counting items of a kind, as in Cocchiarella’s definition of a sortal concept – “a socio-genetically developed cognitive ability or capacity to distinguish, count and collect or classify things” (Cocchiarella 1977). Sortal quantification theory then introduces a so-called sortal quantification. The key idea can be stated as follows: for a sentence such as “all men are mortal”, its canonical interpretation in first-order logic tells us that for every object it is the case that if the object is S, then the object is also P. This sounds somewhat strange, because the original sentence does not seem to be “about” all objects. In sortal quantification theory, the original sentence is reformulated so that the universal quantifier does not quantify over all individuals, but over the individuals which fall under S; put briefly, it quantifies over all Ss. Moreover, in some versions of sortal quantification theory, sortals are subordinated to other sortals, so we can build a hierarchy of sortals. Every sortal is subordinated to some ultimate sortal, i.e., a sortal which is subordinate to no other sortal. To sum up, with the help of the apparatus of sortal quantification theory, it seems possible to reconstruct Aristotelian logic without losing its essential features. So a sorted universe of discourse and sortals can be successfully implemented into modern logic. What remains a problem, however, is the philosophical motivation and justification. In conclusion, I would therefore like to show that Aristotelian ontology, especially his theory of categories, could provide a satisfactory philosophical background for the questions of the definition and relevance of sortals and, more generally, for the belief that there are metaphysically distinct kinds of entities.
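
A minimal notational sketch of the contrast at issue (standard notation, not taken from the abstract): the canonical first-order rendering of “all men are mortal” is $\forall x\,(Sx \rightarrow Px)$, where the quantifier ranges over the entire domain of individuals, whereas the sortal reading restricts the quantifier to the Ss from the outset, $(\forall x\!:\!S)\,Px$, so that the sentence is “about” the Ss only rather than about every object in the domain.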

10:00
Ioannis Vandoulakis (Hellenic Open University, Greece)
Pythagorean arithmetic as a model for Parmenidean semantics

ABSTRACT. The question of the relation of mathematics and logic in ancient Greece has puzzled many historians, who see no connection between Greek geometrical demonstration and logical reasoning as conducted within Aristotle’s syllogistics and Stoic propositional logic [Mueller 1974]. However, Pythagorean number theory, as it survives in later sources, has the following distinctive features: i) arithmetical reasoning is conducted over a 3-dimensional “domain” that extends indefinitely in the direction of increase. ii) The monas, denoted by an alpha, is taken to be a designated object, over which an iterative procedure of attaching an alpha is admitted. Thus, numbers are defined as finite sequences. Various kinds of numbers can then be defined as sequences constructed according to certain rules. iii) Arithmetic is then developed by genetic (recursive) constructions of various finite (plane or spatial) schematic patterns. Thus, Pythagorean arithmetic is advanced as a visual theory of counting over a combinatorial “domain.” iv) Arithmetical reasoning is conducted in the form of mental experiments over concrete objects of combinatorial character (e.g. pebbles). Any assertion about numbers utters a law, which can be confirmed in each case by combinatorial means. v) Arithmetic concerns affirmative sentences stating something ‘positive’ that can be confirmed by means of the construction of the corresponding configuration (deixis). No kind of ‘negative’ sentence is found [Vandoulakis 2010]. On the other hand, in Parmenides’ poem On Nature certain semantic views make their appearance for the first time: a) only what is true is expressible (meaningful); b) a false statement names or expresses nothing (is meaningless). This leads to a peculiar semantic picture in which truth is identified with expressibility (meaningfulness) while falsity is identified with inexpressibility (meaninglessness). Parmenides’ ontological universe therefore turns out to be a positive (negationless) true Being; there are no negative facts in it [Vandoulakis 2015]. Now, the relation between Parmenides’ semantic viewpoint and Neo-Pythagorean arithmetic can be expressed as follows: Parmenides’ theory of truth could be obtained by reflexion upon Pythagorean arithmetic, if truth is identified with genetic constructability (deixis). In this case, Being is identified with the universe of all arithmetically constructible (‘experimental’) truths. In other words, Pythagorean arithmetic might have served as a model for Parmenidean semantics. References Mueller, I. 1974. “Greek mathematics and Greek logic,” in J. Corcoran (ed.), Ancient Logic and Its Modern Interpretations, Dordrecht, 35-70. Vandoulakis I.M. 2010. “A Genetic Interpretation of Neo-Pythagorean Arithmetic,” Oriens - Occidens Cahiers du Centre d’histoire des Sciences et des philosophies arabes et Médiévales, 7, 113-154. Vandoulakis I.M. 2015. “Negation and truth in Greek mathematics and philosophy”, 15th Congress of Logic, Methodology and Philosophy of Science CLMPS 2015 Book of Abstracts, 288.
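
A standard illustration of such a genetic, configurational mode of confirmation (a textbook example of figurate arithmetic, not taken from the abstract): square numbers arise by successively attaching L-shaped gnomons of units around a single unit, so that $1 + 3 + 5 + \dots + (2n-1) = n^{2}$; the law is confirmed in each particular case by actually constructing the corresponding square configuration of pebbles (deixis), rather than by deduction from axioms.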

09:00-10:00 Session 28I: SESSION CANCELLED, talk "Fitness incommensurability and evolutionary transitions in individuality" MOVED TO 17D
Chair:
Sophie Juliane Veigl (University of Vienna, Austria)
Location: Room 201
09:00
David Villena Saldaña (Department of Philosophy, Lingnan University, Hong Kong)
CANCELLED: Theoretical and methodological differences in the evolutionary analysis of human behavior

ABSTRACT. I am interested in the theoretical and methodological differences displayed by two arguably disconnected programs in the context of the evolutionary and Darwinian analysis of human behavior. Specifically, the contribution I propose is concerned with the philosophical aspects of the debate between evolutionary psychology and human behavioral ecology. This topic pertains to the fields of philosophy of science (models for constructing and testing hypotheses) and philosophy of biology (applications of evolutionary biology, problems of adaptationism, and fitness maximization). It also has elements of philosophy of psychology (how to explain human behavior and emotions), philosophy of cognitive science (functionalism and information-processing systems), and philosophy of mind (mental modules).

Evolutionary psychologists make strong claims about the origin, architecture, and functioning of the human mind (see Tooby & Cosmides 1992; Tooby & Cosmides 2016; Buss 1999). According to them, our behavior is produced by hundreds of psychological mechanisms (the massive modularity thesis) that evolved during a period of time that is coextensive with the Pleistocene epoch (1.8 million years to 10,000 years ago). These mechanisms or modules (they are even called “apps” in the literature (e.g. Pinker 2016)) are said to be computational in nature and to respond to those statistically recurrent problems faced by our hunter-gatherer ancestors. The method used by evolutionary psychologists to discover the psychological mechanisms that underlie human behavior (and consequently to find out what they take to be the pan-human nature) is reverse engineering, or functional analysis. Since the modules are adaptations designed by natural selection, the behavior they produce is supposed to be functional with respect to the problems of the ancestral world and not necessarily to our current modern problems. So we should investigate which specific problems the still-operating modules that compose the human mind were designed to solve. This analysis is based on the idea that present behavior is not necessarily adaptive. That is why evolutionary psychologists reject fitness maximization as an explanation for behavior.

I contrast the above-described theoretical and methodological tenets with the view defended by human behavioral ecology. This latter discipline is not committed to any hypotheses about psychological mechanisms or modules. Its attempt to explain human behavior does not take such mechanisms into account. It is concerned only with manifest, present, and measurable behavior (see Downes 2001). Furthermore, human behavioral ecologists criticize the massive modularity thesis for lacking relevant empirical evidence (see Smith et al. 2001). The goal they pursue is to explain how behavior changes in predictable patterns in relation to the environment (see Nettle et al. 2013; Cronk 1991). Unlike evolutionary psychology, human behavioral ecology works with the assumption of fitness maximization. This means its practitioners see present behavior as adaptive. That is why they try to construct complex formal models for interpreting present behavior as the optimal response to changes in the environment.

References

Buss, D. M. (1999). Evolutionary psychology: The new science of the mind. Boston: Allyn and Bacon.

Cronk L. (1991). Human behavioral ecology. Annual Review of Anthropology, 20, 25-53.

Downes, S. M. (2001). Some recent developments in evolutionary approaches to the study of human cognition and behavior. Biology and Philosophy, 16, 575-595.

Nettle, D., Gibson, M. A., Lawson, D. W. & Sear, R. (2013). Human behavioral ecology: current research and future prospects. Behavioral Ecology, 24(5), 1031- 1040.

Pinker, S. (2016). Foreword. In D. M. Buss (Ed.), The handbook of evolutionary psychology, Vol. 1, 2nd ed. (pp. ix-xiii). Hoboken, New Jersey: Wiley.

Smith, E. A., Borgerhoff Mulder, M. & Hill, K. (2001). Controversies in the evolutionary social sciences: A guide for the perplexed. Trends in Ecology & Evolution, 16(3), 128-135.

Tooby, J. & Cosmides, L. (1992). The psychological foundations of culture. In J. H. Barkow, L. Cosmides & J. Tooby, The adapted mind: Evolutionary psychology and the generation of culture (pp. 19-136). New York: Oxford University Press.

Tooby, J. & Cosmides, L. (2016). The theoretical foundations of evolutionary psychology. In D. M. Buss (Ed.), The handbook of evolutionary psychology, Vol. 1, 2nd ed. (pp. 3-87). Hoboken, New Jersey: Wiley.

09:00-10:30 Session 28J: B3 SYMP Communication and exchanges among scientific cultures 1 (CESC-1). Sharing, recycling, trading, and other forms of circulation

Organizers: Nina Atanasova, Karine Chemla, Vitaly Pronskikh and Peeter Müürsepp

This symposium is predicated upon the assumption that one can distinguish between different scientific cultures. This is the founding hypothesis of the IASCUD commission. The distinction between these scientific cultures can be made on the basis of the bodies of knowledge actors uphold (which present differences depending on the culture) and the scientific practices they adopt; the distinct kinds of material environment that actors shaped to operate in these contexts and how they operate with them; and also on the basis of epistemological facets. Among the facets that appear to be useful to differentiate cultures, we include: epistemic and epistemological values; types of questions and answers that are expected; types of explanation and understanding that actors hope for. This approach to scientific cultures has the potential of allowing us to understand cultures as temporary formations and not as fixed entities.
The aim of this symposium is to focus on the types of circulation that can be identified between cultures conceived along these lines and also on how these various phenomena of circulation can help us approach the historicity of scientific cultures and of their relationship with one another. The issues we would like to address include the following: 
• What can circulate between scientific cultures? We are interested in cases when knowledge and practices migrate from one culture to another. We are also interested in the borrowing of material elements and practices, as well as the adoption of epistemological choices and practices from one context into another. Events of this latter type have perhaps been studied to a lesser extent, but they seem to us to deserve specific attention.
• How does the circulation affect what is circulated? Even if we all agree that the adoption of an element of knowledge or practice in a different environment transforms this element, we lack a systematic approach to these phenomena of “recycling”.
• How does the circulation affect the adopting culture, and its relationship with the culture of origin? How can it elicit a reconfiguration of the scientific cultures involved? The study of how actors revise their knowledge in the light of new elements falls for us under this broader category of questions. However, if we consider circulation in the wider perspective that we advocate, the issue of revision presents itself in a new light. In the symposium, we aim at promoting the study of revision more broadly.

Chair:
Nina Atanasova (The University of Toledo, United States)
Location: Room 152+153
09:00
Madeline Muntersbjorn (University of Toledo, United States)
Notations & translations as catalysts of conceptual change

ABSTRACT. In this talk I consider the role played by novel systems of notation, both as obstacles to direct translations as well as catalysts for conceptual change. Before we can answer the question, “How does knowledge travel?,” we must consider whether knowledge is a peripatetic thing. The idea that knowledge travels may mislead us into thinking that knowledge divides into so many seeds that may be carried away from one place only to be planted elsewhere. The image may be extended to absurdity as we imagine Euclid’s Elements going into Adelard of Bath’s baggage only to emerge, much the same as it went in, except in Latin rather than Arabic, in England rather than Spain. However, the more we study the history of mathematics, the more we realize that knowledge is not so much transmitted as transmuted and but rarely preserved unaltered over time. Ideas are not so much shared as appropriated; credit may not be given where due. Priority disputes erupt often but are only possible when each disputant claims to be the first to have gotten hold of the same piece of knowledge: “I entertained a somewhat similar notion at roughly the same time” is not how these arguments go. Taken too far, the distinction between the knower and the known misleads in other ways. For example, after Newton and Leibniz both discover the calculus, some might think that their followers had naught but nationalism to motivate their choice of notation as the mathematics itself was otherwise the same; those on the continent were lucky their guy had the more elegant and nimble symbolism. But as more recent scholars have shown, mathematical differences between Newton’s method of fluxions and Leibniz’s infinitesimal calculus support distinct conceptual approaches to the subject. Looking back further, to Adelard’s introduction of Arabic numerals to his European colleagues, we should question whether "widespread reluctance to adopt the new symbolism" should survive as an example of how cultural biases inhibit mathematical progress. This simple story seems unlikely now that scholars have shown how cross-cultural social networks had different degrees of enthusiasm for the newly imported system depending on how they put these numerals to use. At first my remarks may seem to undermine the history of rational inquiry by binding knowledge so tightly to particular communities of knowers such that only insiders ever know anything. However, I conclude there’s a case to be made for the growth of mathematics as a complex whole, if not its safe passage as easily packaged and readily transported knowledge seeds.

09:30
Xiaohan Zhou (The Institute for the History of Natural Sciences, Chinese Academy of Sciences, China)
Elements of Continuity in the Circulation of Mathematical Knowledge and Practices in Chapter "Measures in Square" in Mathematical Writings in China

ABSTRACT. My aim in this talk is to analyze the process of circulation of a mathematical method, as evidenced by its occurrence in Chinese mathematical works from different periods which rely on one another. The earliest occurrence of this method is found in The Nine Chapters on Mathematical Procedures (hereafter, The Nine Chapters), which is one of the most important mathematical works from China. We place the completion of the book, in the form handed down to us, somewhere between the first century B.C.E. and the first century C.E. One chapter of this book is entitled “measures in square (fangcheng),” and it deals with what, in modern terms, are systems of linear equations, even though important differences between the two exist. The 13th-century scholar Yang Hui (fl. 1261 CE) wrote, under the title Mathematical Methods Explaining in Detail The Nine Chapters (hereafter, Mathematical Methods), a commentary on The Nine Chapters relying on both the book and former commentaries on it. His commentary testifies to continuities as well as breaks with respect to his base text. Yang Hui commented on different mathematical procedures from the base text, including the “procedure of the positive and negative”. The second source I focus on is Great Compendium of Mathematical Methods of The Nine Chapters with Analogies (hereafter, Great Compendium), a book that the 15th-century scholar Wu Jing (fl. 1450 CE) composed relying on the text of Mathematical Methods. Wu Jing inherited the concepts and practices related to the “method of the positive and the negative”, which Yang Hui had introduced in Mathematical Methods. However, in relation to the culture in the context of which he wrote, and in particular the emphasis he placed on the value of uniformity of procedures, Wu Jing modified the way of solving problems in this chapter of The Nine Chapters. My case study, based on one mathematical procedure in Mathematical Methods and Great Compendium, shows one of the differences between the scientific cultures to which the two mathematical writings respectively belong. Taking advantage of this difference, I can trace the circulation of mathematical knowledge and practices across different periods.

10:00
Nicolas Michel (Université Paris-Diderot, France)
Avatars of generality: on the circulation and transformation of list-making practices in the context of enumerative geometry

ABSTRACT. The talk will examine how a cultural practice, namely that of list-making, circulated from one scientific culture to another, and the factors in relation to which it underwent transformations. In the 1860s, French geometer Michel Chasles created a ‘theory of characteristics’ that, after German mathematician Hermann Schubert’s 1879 "Kalkul der abzählenden Geometrie", soon became the bedrock of a branch of geometry known as ‘enumerative geometry’. In the last years of his life, Chasles expanded on the methods at the heart of this theory and published dozens of papers, filled with thousands of propositions, each expressing a property of a geometrical curve. Following a practice that was closely related to his ideal of mathematical knowledge, Chasles merely listed these propositions one after the other, and sorted them into various categories, with very little in the way of commentaries, proofs, or examples. These lists were hardly read or referenced by anyone, save for Schubert himself, who expanded on Chasles’ results and created a symbolic calculus that would enable him to enumerate the curves satisfying various geometrical conditions. Schubert’s book inherits from Chasles’ list-making practice, but also alters it, as the lists it displays consist of huge tables of numbers and symbolic formulae, given without verbal descriptions or explanations.

Chasles’ and Schubert’s lists aim to address the same geometrical problems, but they differ both in their textuality and in the epistemic tasks they fulfil. Indeed, Chasles’ lists must be read against the backdrop of a specific epistemic culture, which one could trace back to the education he received at the Ecole Polytechnique. In Chasles’ epistemology of geometry, the generality of a method is demonstrated by the fact that large numbers of propositions can be derived, almost without proof, from its systematic and uniform application. In this case, generality was expressed through a specific list-making practice. Schubert, instead, viewed Algebra as a free human creation, bounded only by the requirement that certain symbolic forms, drawn from the realm of concrete, natural numbers, be regarded as valid when extended to more abstract entities. Consequently, Schubert’s lists of formulae answer to a different calling than Chasles’: they express the formal rules of a geometrical calculus, and become meaningful and relevant only once viewed through the lens of a different set of epistemic virtues.

Therefore, as Chasles’ lists reached and informed Schubert’s mathematical practice, the transfer between epistemic cultures resulted in their rewriting. This transformation operates at the levels of both the textuality itself and the status of computations that these lists enable, extending to the values of generality they embody and even the images of Algebra they reflect. By untangling the complexities of this transformation, this case-study illuminates how the literary devices used to structure and convey mathematical knowledge change according to the concerns and values of different scientific cultures, and how such an investigation into changes in mathematical styles can shed new light on the transformations of the form and structure of mathematical knowledge.

09:00-10:30 Session 28K: B4 Time
Chair:
Delia Belleri (University of Vienna, Austria)
Location: Room 302
09:00
Cristian López (CONICET-University of Buenos Aires, Argentina)
What time symmetry can (and cannot) tell us about time’s structure

ABSTRACT. The relevance of symmetries for knowing what the world’s structure is like has grown greatly in recent years (Baker 2010). We are told, for instance, that by knowing the space-time symmetries of a theory’s dynamics we are allowed to draw metaphysical conclusions about space-time’s structure (North 2009). Shamik Dasgupta (2016) has called this sort of inference “the symmetry-to-reality inference”. Time symmetry is a special case: if a theory’s dynamics is invariant under time reversal, then the direction of time is superfluous. Therefore, the temporal structure of the world, according to the theory, is actually directionless.

In analyzing the inference for time symmetry thoroughly, we find a mix of premises. Given an equation of motion L, we first find formal premises claiming that a symmetry holds, that is, that a variable in L may freely vary while preserving L’s structure. In this case, by freely varying the sign of time (replacing t by –t), we also get physically equivalent solutions of L. The sign of t (standing for the direction of time) is hence said to be variant. Second, to the extent that we adhere to the principle that symmetries in the laws must go hand in hand with the symmetries of the world’s structure (Earman 1989, North 2009), we interpret a variant feature occurring in L as surplus structure. As we are advised to go with the least structure (by Ockham’s razor), we infer that the direction of time does not belong to the world’s structure. In conclusion, the direction of time is not part of fundamental reality.
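
A worked textbook illustration of this formal premise (standard material, not part of the abstract): Newton’s second law for a velocity-independent force, $m\,d^{2}x/dt^{2} = F(x)$, keeps exactly the same form under the substitution $t \mapsto -t$, since the two sign changes introduced in the first derivative cancel in the second; hence if $x(t)$ is a solution, so is $x(-t)$, and solutions related by reversing the sign of t count as physically equivalent. In this sense the sign of t is a variant feature of the equation of motion.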

In this presentation, I shall analyze the symmetry-to-reality inference for the case of time symmetry, focusing on the formal premises. In particular, I shall first argue that there are actually two divergent ways to conceive of symmetries in physics: either as contingent properties of the dynamics or as principles guiding theory construction (Brading and Castellani 2007). Whereas the inference works well when symmetries are considered in the first way, being thus a powerful tool for metaphysicians of science, it might rather be viciously circular when they are understood as guiding principles, and metaphysicians of science should be very careful in drawing metaphysical conclusions from them. In the second place, I shall show that time symmetry is typically understood as a principle guiding theory construction in fundamental theories (Arntzenius and Greaves 2009). Therefore, the inference would not work for drawing metaphysical conclusions about time’s structure.

References

Baker, D. (2010). “Symmetry and the metaphysics of physics”. Philosophy Compass, 5: 1157-1166

Brading, K. and E. Castellani. (2007). “Symmetries and Invariances in Classical Physics”. Handbook of the Philosophy of Science, Philosophy of Physics, Part B (Butterfield and Earman). Netherlands: Elsevier, 331–367.

Dasgupta, S. (2016). “Symmetry as an epistemic notion (twice over)”. British Journal for Philosophy of Science, 67: 837-878.

Earman, J. (1989). World Enough and Space-Time. Cambridge, MA: MIT Press.

North, J. (2009). “The ‘structure’ of Physics: A case study”. Journal of Philosophy, 106: 57-88.

Arntzenius, F. and Greaves, H. (2009). “Time reversal in classical electromagnetism”. The British Journal for the Philosophy of Science, 60: 557-584.

09:30
Tatiana Denisova (Surgut State University, Russia)
Metaphysical issues in modern philosophy of time: V. I. Vernadsky’s idea of "cause" of time ("source" of time)

ABSTRACT. In studies on the nature of time, its forms and instances, it is common to distinguish between scientific (instrumental) and philosophical (metaphysical) approaches. The former concerns mathematical and scientific knowledge of how to measure and use time, whereas the latter concerns philosophical reflexion upon what time is and what its nature might be. Aristotle considered these approaches not opposite but mutually complementary. In the 20th century, a synthesis of these approaches was undertaken by V.I. Vernadsky (1863-1945). Although Vernadsky is commonly associated with the concept of biological time as an instance of time, his major contribution is more profound. Since his early works, he poses the metaphysical question of the nature of time and its origin. We can identify the following theses in his works [Denisova 2018]: 1. A complex object is a dynamic system, characterized by processes occurring in time, which has its own internal, autonomous time. The specific features of the internal time of the system are determined by these processes, are generated by them, and do not exist outside of them. 2. Since a living object does not only exist (is present, existent), but also lives (is born, grows and perishes), it generates time by its whole existence. 3. Consequently, the source and cause of time is both an individual living organism and the biosphere as a whole. 4. Since inert matter is not capable of generating living substance, the biosphere has no beginning in time; it is eternal, like the Cosmos. 5. If the biosphere, as the source and cause of time, is eternal, then the question of the “beginning of time” is meaningless and incorrect. 6. The only correct question is then that of the cause of the existence of time and the possibility of absolute time as a universal point of reference for all instances of time. Biological, not astronomical, time is such a kind of universal time, because all life processes (growth, becoming, aging of the organism, reproduction, succession of generations) have a stable duration. Its absoluteness and universality consist not in the fact that it directly concerns any material object or is associated with any motion, but in its suitability for measuring any process, since it always advances at the same tempo, regardless of external conditions, and follows objective laws. The intuitive idea of the unity and integrity of the world, pervading Vernadsky’s scientific and philosophical work, was thus consolidated by his scientific theory of the biosphere as producing time [Vernadsky 1988]. References Denisova T.Y. 2018. Avatars of Time: Images and Concepts of Time in the History of Human Thought. Moscow. URSS. LENAND. (in Russian). Vernadsky V.I. 1988. Philosophical Thoughts of a Naturalist. Moscow. Nauka. (in Russian)

10:00
Lars-Göran Johansson (Uppsala University, Sweden)
The direction of time

ABSTRACT. A common belief is that the direction of time is based on a global directedness, a universal time with direction, and this is often thought of as based upon the second law of thermodynamics plus the assumption that the entropy immediately after the Big Bang had a very low value. There are several problems with this account; the most severe is perhaps Price's observation that it presupposes a distinction between earlier and later events, thus in fact presupposing time directedness.

I will instead explain the directedness of time by considering how we measure time with clocks.

A clock is a system consisting of two parts, one that oscillates (a certain number of oscillations define the time unit) and one that counts the oscillations. Any counter which keeps track of the number of oscillations will do. This counter must by necessity undergo irreversible state changes when counting.

Before we introduce the notion of time, we can determine which of two observed states of a counter (one that counts the number of a clock's oscillations) is the earlier one, even if we don't know which observation was made first. Hence the irreversibility of state changes of partially isolated physical systems can be used to define the direction of time for limited periods. A globally directed time can then be constructed out of these shorter time periods, since the working periods of different clocks in many cases partially overlap.

A clock is by necessity a macroscopic object. This can be inferred from the result of Misra et al. (1979), viz., that no operator with a monotonically increasing expectation value can be defined on Hilbert spaces. In other words, pure quantum systems cannot function as counters.

This tells us that the directedness of time is not based on properties of individual quantum systems. Hence, a clock must be a macroscopic system in order to be able to register and record a number of oscillations.

Sooner or later any particular clock mechanism will stop functioning. But we can use many such clock mechanisms, partially overlapping in time, when constructing a universal time, thus giving us a universal direction of time.

References Misra, B., Prigogine, I. and Courbage, M.: Lyapounov Variable: Entropy and Measurement in Quantum Mechanics. Proceedings of the National Academy of Sciences of the United States of America, 76, 4768-72, 1979.

Price, Huw: Time's Arrow & Archimedes' Point. New Directions for the Physics of Time. Oxford Univ. Press, New York,1996.

09:00-10:00 Session 28L: C5 Philosophy of the cognitive and behavioral sciences
Chair:
Juraj Hvorecky (Institute of Philosophy, Czech Academy of Sciences, Czechia)
Location: Room 202
09:00
Paula Quinon (Department of Philosophy, Lund University, Sweden)
Peter Gardenfors (Department of Philosophy, Lund University, Sweden)
Situated Counting
PRESENTER: Paula Quinon

ABSTRACT. We present a model of how counting is learned based on three knowledge components: number as a property of collections; the ordered sequence of numerals; and one-to-one mappings between collections. In the literature, the so-called cardinality principle has been the focus when studying the development of counting. We argue that identifying the ability to count with the ability to fulfil the cardinality principle is not sufficient, and that counting should be analyzed as the ability to perform a series of tasks. These tasks require knowledge of the three components. Some of them may be supported by the external organization of the counting situation. Using the methods of situated cognition, we analyze how the balance between external and internal representations implies different loads on the working memory and attention of the counting individual. This analysis shows that even if the counter can competently use the cardinality principle, counting will vary in difficulty depending on which kind of collection is to be counted and on its physical properties. The upshot is that a number of situated factors influence counting performance and determine the difficulty of the different tasks of counting.

09:30
Valentin Bazhanov (Ulyanovsk State University, Russia)
Tatyana Shevchenko (Ulyanovsk State University, Russia)
Numerical cognition in the perspective of the Kantian program in modern neuroscience

ABSTRACT. Hundreds - if not thousands - of works are devoted to the nature of number. Meanwhile, no generally accepted, and therefore acceptable, understanding of the phenomenon of number and numerical cognition has so far been achieved. For instance, in current Russian philosophy and psychology the concept of “numerical cognition” is virtually absent, as are studies directly related to this kind of cognitive activity. However, the intensive development of neuroscience opens up prospects for analyzing the nature of number and the mechanisms of “numerical cognition” from the point of view of the Kantian program in neuroscience. This program stimulates such analysis by combining the principles of naturalism and sociocentrism, and allows us to look at number as a cultural phenomenon grounded in the ontogenetic features of the human brain. What are the most important features of the modern Kantian program in neuroscience? What are the (neuro)biological prerequisites for the implementation of this program? What is the “sense of number” (or numerosity) and what is the role of this “feeling” in the genesis of ideas about number and numerical cognition? What are the features of the representation of numerical information in symbolic and non-symbolic form, and what is the role of language here? When and under what circumstances did the ordering of a set of numbers with the help of a horizontal number axis occur, and what role did culture play in this process? What are “conceptual metaphors” and what is their role in numerical cognition? Finally, how do the ontogenetic foundations of the “sense of number” correlate with successes (or failures) in the education of children and their future careers? The presentation aims to give some answers to these questions. Research was supported by RFBR grant 19-011-00007a.

09:00-10:30 Session 28M: A2 Many-valued and probability logics 1
Chair:
Enrico Brugnami (University of La Laguna, Italy)
Location: Room 346
09:00
Pawel Pawlowski (University of Gdansk, Poland)
Rafal Urbaniak (University of Gdansk, Poland)
Combining truth values with provability values: a non-deterministic logic of informal provability
PRESENTER: Pawel Pawlowski

ABSTRACT. Mathematicians prove theorems without being committed to any particular formal system. They reason in a semi-formal setting using informal proofs. According to the proponents of the standard view (Enderton, 1977; Antonutti Marfori, 2010), informal proofs are just incomplete formal derivations — in principle, an informal proof can be associated with a formal proof in a fully formal system, usually some version of set theory. There are quite a few reasons not to reduce informal provability to formal provability within some appropriate axiomatic theory (Antonutti Marfori, 2010; Leitgeb, 2009). The main worry about identifying informal provability with formal provability starts with the intuition that whatever is informally provable is true. This means that when we do informal proofs, we are committed to all instances of the reflection schema: if φ is informally provable, then φ is true. However, this principle is not provable in any decent axiomatic theory for its own formal provability predicate. Moreover, no such theory can even be extended with the schema, provided some other non-controversial principles for formal provability are retained. One approach to regaining reflection is to treat informal provability as a partial notion. Pawlowski and Urbaniak (2018) developed a framework in which informal provability is modeled by a non-deterministic three-valued logic, CABAT. Semantics in CABAT is based on an intuitive partition of mathematical claims into provable (value 1), refutable (value 0), and neither (value n). The main reason to use non-deterministic semantics is that the value of a complex formula in deterministic logics is always a function of the values of its components. This fails to capture the fact that, for instance, some informally provable disjunctions of mathematical claims have informally provable disjuncts, while others don’t. However, two main problems appeared with the system they proposed. First, the CABAT reading of the reflection schema does not connect informal provability with truth, since CABAT, strictly speaking, does not have truth values. The other problem is related to the natural asymmetry between truth and informal provability. The latter implies the former but not the other way around. So, ideally, we want to have a difference between the reflection schema and provabilitation (φ → Bφ), and between strong NEC and CONEC. Unfortunately, CABAT cannot make all these distinctions. In this talk, we propose a logic that follows the same motivations as CABAT, but whose semantics incorporates truth values along with provability values. We develop a four-valued non-deterministic logic T-BAT (provable and true, refutable and false, provable and neither, and refutable and neither), which does a better job as the logic of informal provability than CABAT, since it can prove the reflection schema, can distinguish between truth values and provability values, and preserves the intuitive asymmetries between truth and provability. References Antonutti Marfori, M. (2010). Informal proofs and mathematical rigour. Studia Logica, 96:261–272. Enderton, H. (1977). Elements of set theory. Academic Press, New York. Leitgeb, H. (2009). On formal and informal provability. In Bueno, O. and Linnebo, Ø., editors, New Waves in Philosophy of Mathematics, pages 263–299. New York: Palgrave Macmillan. Pawlowski, P. and Urbaniak, R. (2018). Many-valued logic of informal provability: A non-deterministic strategy. The Review of Symbolic Logic, 11(2):207–223.
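
A schematic example of the kind of non-deterministic semantic clause this motivates (an illustration only, not the official CABAT or T-BAT tables): with values 1 (provable), 0 (refutable) and n (neither), a deterministic matrix would have to fix a single output for a disjunction of two value-n claims, yet informal mathematics contains both kinds of case. For an independent statement $\varphi$, the disjunction $\varphi \vee \neg\varphi$ is informally provable by excluded middle although neither disjunct is, while a disjunction of two unrelated independent claims may itself be neither provable nor refutable. A non-deterministic matrix registers this by assigning a set of admissible outputs, e.g. if $v(\varphi) = v(\psi) = n$ then $v(\varphi \vee \psi) \in \{1, n\}$, with each legal valuation choosing one of the admissible values.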

09:30
Matthew Parker (CPNSS, London School of Economics and Political Science, UK)
Comparative infinite lottery logic

ABSTRACT. Norton (2018) proposes an inductive logic for infinite lotteries and inflationary multiverse cosmologies that is radically different from Kolmogorov probability. In particular, both finite and countable additivity fail.

Norton calls the support values of this logic “chances” in order to distinguish from Kolmogorovian probabilities. The events to which he applies it are lottery outcomes (the ticket drawn is x, the ticket drawn is in set S, etc.), and random choices of pocket worlds (the chosen world is x, the chosen world is in set S, etc.). The cornerstone of his logic is the principle of Label Independence: Every fact about the chances of events remains true when the elementary outcomes (lottery tickets or pocket worlds) are relabelled by any permutation. From this Norton derives the failure of additivity. The intuitively attractive Containment Principle, which says that if a set of outcomes A is properly contained in a set of outcomes B then the chance of A is strictly less than the chance of B, is also shown to fail.
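
A worked illustration of the style of argument (reconstructed here; details may differ from Norton's own presentation): label the tickets by the natural numbers and let E be the set of even labels and O the set of odd labels. The permutation swapping 2k and 2k+1 maps E onto O, so by Label Independence Ch(E) = Ch(O). More generally, any two infinite, co-infinite label sets are related by some permutation of the labels, so they all receive the same chance; in particular Ch(E) = Ch(E ∪ {1}) even though E is properly contained in E ∪ {1}, which is a failure of Containment. Likewise every singleton receives the same chance, while their countable union is the sure outcome, which is incompatible with representing these chances by probabilities obeying countable additivity.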

Norton argues that this logic is too weak to help us confirm or disconfirm particular inflationary models. The Principle of Mediocrity says that we should assume that we live in a randomly chosen civilisation in the multiverse. If one model makes it more likely that a randomly chosen world is like ours than another model does, then the former model is better confirmed in that respect. However, given Label Independence, all infinite, co-infinite sets of worlds are equally likely. Any reasonable eternal inflation model predicts infinitely many worlds like ours and infinitely many unlike ours, so on Norton's logic, each such model is equally confirmed.

However, these results depend on a reification of chance, consisting in the postulation of a chance function Ch from events to things called chances. Thus we can say, for example, Ch(ticket 7 is chosen) = C, so by Label Independence, this must remain true no matter which ticket is labelled ‘7’. If instead chances are purely comparative, consisting in relations of the form ‘A is less likely than B’, etc., then we can have an infinite lottery logic that satisfies comparative versions of Label Independence, additivity, Containment, and also regularity. (Regularity here is the property, which some find appealing, that any strictly possible event is more likely than a strictly impossible event.)

Unfortunately, even this comparative infinite lottery logic will not help us to confirm or disconfirm inflationary models. Given one inflationary model, our comparative logic may tell us that our world is more likely to be of one kind than another based on that model. However, this gives us no basis on which to say that our world is more likely to be as it is on one model than it is on another model, and thus no basis to say which model is better confirmed.

Norton, John D. (2018). Eternal Inflation: When Probabilities Fail. Synthese. https://doi.org/10.1007/s11229-018-1734-7.

10:00
Sergey Pavlov (Institute of philosophy, Russia)
On conditions of inference in many-valued logic semantics of CL$_{2}$

ABSTRACT. \documentclass[10pt]{article}

\usepackage{amssymb,amsmath,latexsym,amsfonts}

\begin{document}

\noindent The main aim of this paper is to find conditions of inference in many-valued logic semantics of two-valued CL$_{2}$.

\emph{J}-operators will be used due to J. Rosser and A. Turquette.

Let us have an n-valued logic L$_{n}$ with one designated truth-value 1 and one anti-designated truth-value 0. The $J_{1}$-operator corresponds to 1; the $J_{0}$-operator corresponds to 0. The set of L$_{n}$-formulae is L$_{n}$-\emph{For}.

If \emph{S} is L$_{n}$-formula, then $J_{1}(S), J_{0}(S)$ are TF-formulae.

If $\emph{P}_{1}, \emph{P}_{2}$ are TF-formulae, then $J_{1}P_{1}, J_{0}P_{1}$, \textbf{=}\!\raisebox{-1pt}{$\shortmid$}\emph{P}$_{1}$ and $(P_{1} \Rightarrow P_{2})$ are TF-formulae, where \textbf{=}\!\raisebox{-1pt}{$\shortmid$} -- It is false that, $\Rightarrow$ -- if...then .

$\emph{P}, \emph{P}_{1}, $ denote meta-variables for TF-formulae. Set of TF-formulae is TF-\emph{For}.

\noindent It is asserted CL$_{2}$(TF-\emph{For}, \textbf{=}\!\raisebox{-1pt}{$\shortmid$}, $\Rightarrow$).

If \emph{A} is L$_{n}$-formula or \emph{A} is TF-formula, then \emph{A} is formula. $A, A_{1}$, denote meta-variables for formulae. Set of formulae is \emph{For}.

\noindent Axioms for \emph{J}-operators: \\ \indent $(J_{1}P \Leftrightarrow P), \\ \indent (J_{0}P \Leftrightarrow$ \textbf{=}\!\raisebox{-1pt}{$\shortmid$}\emph{P}), \\ \indent $((J_{k}A \wedge J_{m}A) \Rightarrow (k=m))$.

Adding rules of inference: \begin{tabular}{c} $ \emph{A} $ \\ \hline $J_{1}A$ \\ \end{tabular} \hspace{1cm } , \begin{tabular}{c} $J_{1}A$ \\ \hline $\emph{A}$ \\ \end{tabular} \hspace{1cm }

\noindent Definition 1. $(A_{1} \supset A_{2}) =_{df} (J_{1}A_{1} \Rightarrow J_{1}A_{2})$, $\neg A =_{df} $ \textbf{=}\!\raisebox{-1pt}{$\shortmid$}$J_{1}A$.

Then the formation rules, axioms and modus ponens of the logic CL$_{n}$(\emph{For}, \textbf{=}\!\raisebox{-1pt}{$\shortmid$}, $\Rightarrow$) with the n-valued (non-main, in the sense of A. Church) interpretation are inferred.

\noindent Theorem 1.

1.1. $(J_{1}A \vee J_{0}A) \Rightarrow (J_{1}A \Rightarrow J_{0}\neg A)$

1.2. $(J_{1}A \vee J_{0}A) \Rightarrow (J_{0}A \Rightarrow J_{1}\neg A)$

1.3. $((J_{1}A_{1} \vee J_{0}A_{1}) \wedge (J_{1}A_{2} \vee J_{0}A_{2})) \Rightarrow ((J_{0}A_{1} \vee J_{1}A_{2}) \Rightarrow J_{1}(A_{1} \supset A_{2}))$

1.4. $((J_{1}A_{1} \vee J_{0}A_{1}) \wedge (J_{1}A_{2} \vee J_{0}A_{2})) \Rightarrow ((J_{1}A_{1} \wedge J_{0}A_{2}) \Rightarrow J_{0}(A_{1} \supset A_{2}))$

Note that the right parts of Theorem 1 correspond to the semantic rules of the two-valued (main) interpretation of CL$_{2}$.

Let us have a set Q2 such that Q2 $\subset \emph{For}$ and, for all \emph{Q}, if ($Q \in$ Q2), then $(J_{1}Q \vee J_{0}Q)$.

\noindent Theorem 2. \emph{If exist} Q2, \emph{then} CL$_{2}$(Q2, $\neg, \supset$) \emph{is inferred}.

Therefore, the condition of inference in many-valued logic semantics of the two-valued CL$_{2}$(Q2, $\neg , \supset$) is the existence of a set Q2 such that Q2 $\subset \emph{For}$ and, for all \emph{Q}, if ($Q \in$ Q2), then $(J_{1}Q \vee J_{0}Q)$.

Reference Rosser J.B., Turquette A.R. Many-valued logics. — Amsterdam, 1952.

\end{document}

10:30-11:00Coffee Break
11:00-12:00 Session 29A: C6 SYMP Identity in computational formal and applied systems 2. HaPoC symposium (IdCFAS-2)
Chair:
Raymond Turner (University of Essex, UK)
Location: Room 402
11:00
Nicola Angius (University of Sassari, Italy)
Giuseppe Primiero (University of Milan, Italy)
Second order properties of copied computational artefacts
PRESENTER: Nicola Angius

ABSTRACT. This paper provides a logical analysis of second order properties of computational artefacts preserved under copies as defined in (Angius and Primiero 2018). Properties like safety or liveness assume special significance in computer security and for the notion of malfunctioning. While definition and model checking of these properties are extensively investigated (Kupferman and Vardi 2001, Padon et al. 2018), the study of their preservation under copies is less considered in the literature. Another context of application is in the computer ethics debate on software property rights (Johnson 1998), translated to the question of which formal characteristics are or should be preserved by copies.

Copies for computational artefacts are defined in (Angius and Primiero 2018) as set-theoretic relations holding between abstract machines x and y. For exact copies, behaviours prescribed by y are all and only the behaviours prescribed by x; for inexact copies, the behaviours of y are all, but not only, the behaviours of x; for approximate copies, the behaviours of y are some, but not only, the behaviours of x. Bisimulation and simulation are used to provide formal definitions of the three copy relations (Fokkink 2013). In this paper, we introduce CTL* temporal logic (Kröger and Merz 2008) for formulas of the form EX, EG, and EU (respectively: existential next, existential global and existential until), to which any other CTL* formula can be reduced using equivalence rules. We then analyse whether EX, EG, and EU formulas are preserved by exact, inexact, and approximate copies. We prove that any second order property is preserved by exact copies, since bisimulation implies CTL* equivalence. EX, EG, and EU formulas in positive form are satisfied by inexact copies, including liveness EU, provided that they admit infinite paths. They may or may not satisfy negations thereof, including safety ¬E¬U, so the copying machines need to be model checked against those formulas to determine whether they satisfy properties of interest of the copied machines. If y is an approximate copy of x, we prove that a definable subset of the behaviours prescribed by y preserves EX, EG, and EU properties expressed in negative form, provided that x and y only allow for finite paths. And a definable subset of EX, EG, and EU formulas in positive form satisfied by x is also satisfied by y, provided that both machines admit infinite paths.
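
A small illustrative sketch of the machinery behind these definitions (a hypothetical Python example, not the authors' formalism or code; the state names, labels and toy machines are invented): the standard greatest-fixpoint computation of the simulation preorder between two finite Kripke structures, the kind of relation the copy definitions build on. Bisimulation, used for exact copies, is obtained by the analogous symmetric refinement.

# Minimal sketch: largest simulation between two finite Kripke structures.
# A structure is given by a transition dict (state -> list of successors)
# and a labelling dict (state -> set of atomic propositions).

def largest_simulation(trans_x, labels_x, trans_y, labels_y):
    """Return all pairs (s, t) such that state t of Y simulates state s of X."""
    # Start from all label-compatible pairs and refine until stable.
    rel = {(s, t) for s in trans_x for t in trans_y
           if labels_x[s] == labels_y[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in set(rel):
            # t must be able to match every move of s within the current relation.
            ok = all(any((s2, t2) in rel for t2 in trans_y[t])
                     for s2 in trans_x[s])
            if not ok:
                rel.discard((s, t))
                changed = True
    return rel

# Toy machines (invented): y exhibits all the behaviours of x and more,
# so the initial state of y simulates the initial state of x.
trans_x = {'a': ['b'], 'b': ['a']}
labels_x = {'a': {'p'}, 'b': {'q'}}
trans_y = {'0': ['1', '2'], '1': ['0'], '2': ['0']}
labels_y = {'0': {'p'}, '1': {'q'}, '2': {'r'}}

sim = largest_simulation(trans_x, labels_x, trans_y, labels_y)
print(('a', '0') in sim)  # True: state 0 of y simulates state a of x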

References Angius, N., & Primiero, G. (2018). The logic of identity and copy for computational artefacts. Journal of Logic and Computation, 28(6), 1293-322.

Fokkink, W. (2013). Introduction to process algebra. Springer Science & Business Media.

Johnson, D. G. (1998). Computer ethics, 4th edition. Pearson.

Kröger, F., & Merz, S. (2008). Temporal Logic and State Systems. Springer.

Kupferman, O., & Vardi, M. V. (2001). Model Checking of Safety Properties. Formal Methods. System Design 19(3): 291-314.

Padon, O., Hoenicke, J., Losa, G., Podelski, A., Sagiv, M., & Shoham, S. (2017). Reducing liveness to safety in first-order logic. Proceedings of the ACM on Programming Languages, 2(POPL), 26.

11:30
Roberta Ferrario (Institute for Cognitive Sciences and Technologies - CNR, Italy)
Organisations and variable embodiments

ABSTRACT. Organisations are peculiar entities: during their lifetime they can undergo many changes, such as acquiring or losing members, changing their organisational chart, changing the rules that regulate their activities, and even changing the goal at which such activities aim. Nevertheless, there are many situations in which we refer to these apparently different entities as “the same organisation”. What are the criteria that allow us to re-identify organisations through time and change? Can such criteria be considered as characterising the identity of organisations? These are the questions at the basis of an ontological analysis of the identity and persistence of organisations. In this contribution, we first analyse organisations and their elements and components at a given time, and then we inquire into some of the possible organisational changes.

We leverage Kit Fine’s theory of rigid and variable embodiment and see how such notions may be used to represent organisations, at a time and through change respectively.

In attempting to specialise Fine's theory so as to characterise the case of organisations, we propose the (history of the) decisions taken by its members as what “glues together” successive states of an organisation (an organisation at different times) to form the organisation as an evolving whole, thus making this the element that drives its re-identification through time. Finally, to exemplify our approach, we sketch a simple model in the situation calculus.

11:00-12:30 Session 29B: B1/B5 SYMP Science as a profession and vocation. On STS's interdisciplinary crossroads 2 (WEBPROVOC-2)
Chair:
Ilya Kasavin (RAS Institute of Philosophy, Moscow, Russia)
Location: Room 401
11:00
Alexander Antonovskiy (Russian Academy of Sciense, Russia)
Max Weber’s distinction truth/value and "Old-European" semantics

ABSTRACT. In his paper Alexander Antonovskiy considers the communicative and social conditions of modern science as interpreted by Max Weber. Weber associates the modernity of science (unlike art) with the fundamental inaccessibility of “true being” and, as a consequence, with the transitory nature of any scientific achievement. As a result, Weber – partly explicitly and partly implicitly – formulated the meaning of modern science: Why does a scientist need science under (1) external alienation and (2) the inaccessibility of the scientific object? By analyzing the ideal-typical conditions specified by Weber, a concept of the “invariant modernity” of science is formulated. It is substantiated that the concept of scientific cognition, in Weber’s interpretation, conceptually bundles together all the other basic concepts of social philosophy, primarily time, the (scientific) object, sociality, truth, and values. In this work, Weber proposed a concept of modern science resting on a certain temporal logic of human life, as formulated by Leo Tolstoy. He explicitly refers to the ideas of the Russian thinker, and this is rewarding because it complements well a general multicultural perspective characterizing the regional cultural specifics of the science of his time: American science as an unpleasant but inevitable prospect of transforming the university into a state-run capitalist corporation, capable of producing only temporarily relevant products; French science with its unjustified claim to “eternal” truths and with its “immortal body” (the French Academy of Sciences). In addition, within the context of scientific cognition, the very concept of modernity obtains a definition and loses the perspectival character associated with the standpoint of a speaker, relative to whom the past, the future, and modernity, as their boundary, obtain their situational definiteness. Now it is possible to speak objectively about modernity. Weber’s article has often been used as a methodological program for the purification and demarcation of science, which, as is well known, began to be discussed in parallel by the Vienna Circle. However, there is a second plane, and maybe it can be considered the main one: Weber’s article is valuable as a diagnosis of the modern German and, in part, American science of his time (P. Duhem makes a parallel diagnosis, strangely not even mentioning Weber in his work [Duhem 1991]). It is “American science,” controlled by bureaucracy and business (recall Habermas here), that is closest, in Weber’s opinion, to the ideal type of modern science. This, if you prefer, is his empirical illustration. On the contrary, French science is farthest from this ideal type of modernity, which Weber develops with the help of Leo Tolstoy. This regionally differentiated diagnosis, and the hypothetical prospects for science in various countries, allow us to see from the vantage point of our time how and to what extent the trends that Weber noticed in German and partially in global science have been realized. It seems to me that this work has not been done yet. In his paper Antonovskiy considers the components of this diagnosis–prognosis in detail.

11:30
Anton Dolmatov (RAS Institute of Philosophy, Higher School of Economics, Russia)
Moral achievement of a scientist

ABSTRACT. Weber’s sociology, particularly its fact/value dichotomy and its concept of instrumental rationality, has been criticised by authors such as A. MacIntyre or M. Horkheimer for its narrow understanding of rationality, which is reduced to the choice of the most efficient means for an end. The validity of this end cannot, in turn, be established in a rational way. If the choice of values and ultimate ends is the result of an irrational and subjective decision, then ethics, as a discipline that presupposes the possibility of a rational study of values concerning right and wrong, should be impossible. In particular, the ethics of science, as a discipline which presupposes the importance of moral evaluations in scientific activity, should also be impossible. Weber’s claim that science should only deal with facts and abstain from value judgments would then seem to support a nihilist and emotivist understanding of his works. In my presentation, arguments challenging this interpretation are proposed. Weber’s speech “Science as a Vocation” suggests the idea of the “moral achievement” of a scientist. A good teacher’s primary task is to teach students to recognise facts, even if those facts are inconvenient to their political position. Success in this task is qualified by Weber as a “moral achievement”. My thesis is that the ability to recognise inconvenient facts makes it possible to reconcile Weber’s concept of the scientist with the ethics of science. The scientist’s position is similar to the political position in that both require certain presuppositions which depend on one’s “ultimate attitude toward life”. For the scientific position, these presuppositions include the significance of the results of scientific research. If the recognition of inconvenient facts implies the sensitivity of one’s position to such facts (i.e. that one’s position can be modified by these recognised facts), then there is a connection between facts and values. If the recognition of inconvenient facts does not imply that, then it is not clear what the difference is between recognising those facts and neglecting them. Inconvenient facts for scientists are those facts which challenge their presuppositions, such as the importance of the results of their investigations. For example, it is likely to be an inconvenient fact for a scientist that the results of their research can be used in weapons development. Although recognition of this fact does not imply that the presupposition of the importance of advances in a particular area of nuclear physics should be abolished, it does imply, however, that, when this fact is recognised, this presupposition should clearly be reevaluated with regard to this fact. It should be at least possible that this position will be changed as a result of the recognition of those facts. Weber suggests that clarification of the ultimate meaning of one’s actions is the task of philosophical disciplines. This is precisely what the ethics of science is concerned with. However, although this connection between values and facts, centred on the recognition of facts inconvenient to one’s position, can provide the basis for an ethics of science (and thereby prove wrong nihilist and emotivist understandings of Weber’s thought), such an ethics would ultimately be limited. It would be impossible, for example, to justify the necessity of subjecting scientific methods and practices (such as experiments on animals) to moral evaluation.

12:00
Lada Shipovalova (Saint Petersburg State University, Russia)
M. Weber’s "inconvenient facts" and contemporary studies of science-society communication

ABSTRACT. In his well-known text “Wissenschaft als Beruf”, M. Weber associates the understanding of science as a vocation, and its value, with the scientist’s ability to present “inconvenient” facts to the audience. The “inconvenience” here means the unwillingness or even inability to recognize such facts from a particular party position held by the audience or by the scientists themselves. Weber argues that this presentation provides a “full understanding of the facts” and overcomes personal value judgment. This overcoming refers to Weber’s understanding of scientific objectivity. Moreover, Weber uses the expression “moral achievement” to describe this presentation as more than an intellectual task. In this description, he combines the epistemological and ethical aspects of scientific normativity and opens the way to speak of the ethics of science and of epistemological virtues. I propose to interpret this double normativity in the context of STS. Firstly, in these studies we can find an appeal to difference, to the multiplicity of facts, as a methodological premise, and the discovery or invention of better ways of living together, “generating common responses to common problems in a common world”, as their objective (Law). Secondly, in contemporary historical epistemology, which is related to STS, we can find the investigation of objectivity not only as a scientific criterion but also as an epistemic virtue (Daston, Galison). Certainly, we should not equate Weber’s understanding of the presentation of “inconvenient” facts with the interpretation of scientific objectivity in contemporary science studies. I will describe the difference between these understandings and emphasize how important it is to insist on the similarities. In my talk, I would also like to ask whether the above-mentioned epistemic and ethical sides of the presentation of “inconvenient” facts are currently compatible and condition the development of science, i.e. whether they serve the production, legitimization and distribution of knowledge. There are almost unambiguously positive answers in two cases: in the analysis of interdisciplinary communication between different academic cultures, and in the study of the relationship between the teacher and the audience in the sphere of education. However, contemporary communication between science and society, between experts and non-experts, is more problematic. Different strategies and objectives of this communication are widely discussed in studies of science, technology and society. In these studies, there are no unequivocal answers to the following questions: Should scientists present “inconvenient” facts to the public in order to generate responses to common problems, or should they forget about openness, honesty and transparency in communication with the public? Should scientists take the position of a “leader” in this communication, or persist in a “teacher’s” position? Which epistemic virtues should determine the science-society relationship? I describe different approaches to this subject in the social studies of science, give examples of this relationship in the contemporary Russian context, and explain the compatibility of the epistemic and ethical sides of the requirement to present “inconvenient” facts in contemporary science communication.

Law, J. STS as Method. In Handbook on Science and Technology Studies, eds. U. Felt, R. Fouché, C.A. Miller, L. Smith-Doerr, MIT Press, 2017, pp. 31-57.

Daston, L., Galison, P. Objectivity. Zone Books, New York, 2010.

11:00-12:30 Session 29C: C1 SYMP Formalism, formalization, intuition and understanding in mathematics: From informal practice to formal systems and back again 2 (FFIUM-2)
Chair:
Gerhard Heinzmann (Université de Lorraine, France)
Location: Room 112+113
11:00
Alberto Naibo (IHPST (UMR 8590) - Université Paris 1 Panthéon-Sorbonne, France)
A formalization of logic and proofs in Euclid’s geometry

ABSTRACT. According to Paul Bernays and other scholars (e.g. Ian Mueller), Euclid's geometry is a theory of constructions, in the sense that the geometrical figures are considered to be constructed entities. Euclid's geometry is thus opposed to contemporary axiomatic theories (e.g. Hilbert's geometry), which proceed from a system of objects fixed from the outset and merely describe the relationships holding between these objects.

The aim of this talk is to provide a formal and logical analysis of Euclid's constructive practice as it emerges from the Book I of the Elements. In the first part of the talk we argue that this practice cannot be captured by the standard methods of intuitionistic logic, like the witness extraction for existential formulas, since in Euclid's Elements there is nothing like a fixed domain of quantification. On the contrary, it is the constructive activity itself that allows one to generate, step by step, the domain of the theory.

In order to give a formal and precise analysis of this point, in the second part of the talk we study the proof methods used in Euclid's Elements. We propose a reconstruction of these methods, according to which postulates correspond to production rules (acting on terms) that allow one to introduce new objects starting from previously given ones. This is done by means of primitive functions corresponding to the actions of fixing a point, drawing a straight line, and drawing a circle, respectively. We argue that a combination of these production rules corresponds to a proof allowing one to solve a problem, i.e. to show that a certain construction is admissible given other primitive ones. The constructed objects are considered to be representable by diagrams, which in turn can be used to extract some information concerning the constructions (e.g. diagrams give evidence for the incidence between the two circles used to prove proposition I.1). Moreover, in order to demonstrate that the constructed objects possess certain specific properties, a method for keeping track of the relationships between the entities used during the construction process is proposed. This method consists in labelling proofs by means of relational atoms and combinations of these relational atoms.
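As a purely illustrative sketch of what such production rules might look like (the notation and the specific signature are hypothetical, introduced here only for the reader's orientation, and are not the authors' actual calculus), the three postulates could be read as term-forming operations

\[ \mathrm{point}() : \mathsf{Point}, \qquad \mathrm{line}(a,b) : \mathsf{Line}, \qquad \mathrm{circle}(a,b) : \mathsf{Circle} \quad (a, b : \mathsf{Point}), \]

and Proposition I.1 (the equilateral triangle on a given segment $ab$) would then correspond to a composition of such rules producing a new point $c$ lying on both $\mathrm{circle}(a,b)$ and $\mathrm{circle}(b,a)$, with relational labels such as $\mathrm{EqLen}(a,b;a,c)$ recording that the constructed segments have the same length.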

The language that we use for our formalization of Euclid's constructive practice is kept as minimal as possible. This marks some crucial differences with other existing formal reconstructions of Euclid's geometry (e.g. the one by Michael Beeson, or the one by Jeremy Avigad, Edward Dean and John Mumma). On the one hand, we claim that no identity relation is needed for points, lines, and circles; identity is used only for abstract objects, like angles. On the other hand, we claim that no negation (as a propositional operator) is needed when reasoning about the properties of the constructed objects: the use of dual (incompatible) predicates is already sufficient (e.g. "being strictly smaller in length" and "having the same length"). The logic of the Elements is thus taken to be weaker than standard logics such as intuitionistic or classical logic.

11:30
Mate Szabo (AHP Univ Lorraine, IHPST Paris 1, France)
Patrick Walsh (Carnegie Mellon University, United States)
Gödel's and Post's Proofs of Incompleteness
PRESENTER: Mate Szabo

ABSTRACT. In the 1920s, Emil Post worked on the questions of mathematical logic that would come to dominate the discussions in the 1930s: incompleteness and undecidability. To a remarkable degree, Post anticipated Gödel’s incompleteness theorem for 'Principia Mathematica', but did not attempt to publish his work at the time for various reasons. Instead, he submitted it for publication in 1941, adding an introduction and footnotes discussing how his results relate to the ones of Gödel, Turing and Church. In the Introduction, written in 1941, Post declared that “[t]here would be a little point in publicizing the writer's anticipation [...] merely as a claim to unofficial priority.” Although he saw his own work as “fragmentary by comparison” he emphasized that

“with the 'Principia Mathematica' as a common starting point, the roads followed towards our common conclusions are so different that much may be gained from a comparison of these parallel evolutions.”

However, his submission was declined for publication. In the rejection of March 2, 1942 Weyl wrote:

“I have little doubt that twenty years ago your work, partly because of its then revolutionary character, did not find its due recognition. However, we cannot turn the clock back; in the meantime Gödel, Church and others have done what they have done, and the American Journal is no place for historical accounts”.

Post's paper, 'Absolutely Unsolvable Problems and Relatively Undecidable Propositions - Account of an Anticipation', was published only posthumously in Davis’ (1965).

Although it might be claimed that both Gödel and Post formalize the same informal idea, a diagonal argument, we agree with Post that there is a lot to learn from a careful comparison of the two, quite different, formal proofs.

Examining their proofs, presented in strikingly different formal frameworks, we distill and emphasize two key dissimilarities between the proofs of incompleteness by Gödel and Post. The first concerns the scope and generality of their proofs. Gödel was dissatisfied with the specificity of his (1931), i.e. its being tied to 'Principia Mathematica' and related systems. On the other hand, Post took a purely syntactic approach which allowed him to characterize a much more general notion of formal systems that are shown to be affected by the incompleteness phenomena. The second dissimilarity arises from the fact that Post was first and foremost interested in the decidability of symbolic logics, and of 'Principia Mathematica' in particular. As a consequence he arrived at incompleteness as a corollary to undecidability. This “detour,” compared to Gödel’s more direct proof of incompleteness, convinced Post that his characterization of formal systems is not only very general, but gives the correct characterization. The argument that this characterization is the correct one mirrors how Kleene convinced himself that lambda-definability was the right characterization of 'effective calculability'.

12:00
Pierre Wagner (Institut d'histoire et de philosophie des sciences et des techniques, France)
Gödel and Carnap on the impact of incompleteness on formalization and understanding

ABSTRACT. On the basis of the proof of the incompleteness theorem in 1931, Gödel and Carnap both drew conclusions on the open character of mathematics, noting either that “whichever system you chose (...) there is one more comprehensive” (Gödel, On the present situation of the foundations of mathematics, 1933) or that “mathematics requires an infinite series of always richer systems” (Carnap, Logische Syntax der Sprache, 1934, §60). However, such similar formulations concerning the immediate consequences of the incompleteness theorem are misleading because Gödel’s and Carnap’s respective reactions to incompleteness were actually fundamentally different. In this paper, it will be argued

1) that incompleteness had a deep impact not only on the general issue of the limitations of formalisms but more precisely on both Gödel’s and Carnap’s conception of formalization and understanding — but in completely different directions;

2) that Gödel’s and Carnap’s foregoing remarks on the open character of mathematics do not provide a sufficient account of their views on the impact of incompleteness on formalization and understanding (and it will be shown why this is so);

3) that a full account of their views concerning this point requires a distinction between kinds of incompleteness, and that this illuminates their divergent conceptions of mathematics and philosophy in a new way.

On the basis of this analysis, it will be shown how a new interpretation of Carnap’s reading of incompleteness can be provided.

On Gödel’s side, a satisfactory interpretation of his views on the incompleteness of theories and its impact on formalization and understanding requires the distinction between two kinds of incompleteness, which depend on the character of the independent sentences. On the one hand, let us consider for example the Gödel sentence G which Gödel’s incompleteness theorem proves is neither provable nor refutable in T (for any consistent and axiomatizable extension T of Robinson’s arithmetic); as Gödel himself argues, the arithmetical sentence G may be shown to be true (true in the standard interpretation of the language of T) but this sentence is completely uninteresting from the viewpoint of ordinary mathematics (although it has a tremendous logical significance). On the other hand, let us consider for example a sentence such as Cantor’s Continuum Hypothesis (CH), which is independent of a classical theory of sets such as ZFC; CH is mathematically highly significant although it is not known to be true by mathematical standards. The fact that Gödel’s reaction to these different kinds of incompleteness was different is highly significant for his views on mathematics, on what he thought about the impact on formalization and understanding, and about the connection between formalization and mathematical practice.
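For orientation, the sentence G discussed here is the standard fixed point given by the diagonal lemma (a textbook fact, recalled for readability and not specific to this talk):

\[ T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G\urcorner), \]

so that, provided T is consistent (and, for non-refutability, ω-consistent, or with Rosser's modification of the construction), G is neither provable nor refutable in T, although it is true in the standard model.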

As for Carnap, it is well known that, like other logicians, he tried to circumvent incompleteness by devising a method of formalization to which Gödel’s theorem would not apply. His idea was to have recourse to infinitary methods for the definition of such concepts as “consequence” and “analyticity”, which enabled him to prove a kind of Carnapian completeness theorem, to the effect that every logical (including mathematical) sentence in some language L for the reconstruction of science is either analytic or contradictory. This is possible because “analytic” does not reduce to “provable” and “contradictory” does not reduce to “refutable”. With this result in mind, what is not easy to understand is the impact Gödel’s incompleteness theorem really had on Carnap’s philosophy of mathematics in the Logical Syntax of Language, and what the actual significance of Carnap’s remark to the effect that “mathematics requires an infinite series of always richer systems” is. A new interpretation will be proposed which will show that an account of Carnap’s reaction to Gödel’s incompleteness theorem and of its impact on formalization and understanding has to take into account the connection between formalization and the actual development not only of mathematics but of science in general (and why this is so).

11:00-12:30 Session 29D: A2 SYMP Substructural epistemology 2 (SubStrE-2)
Chair:
Ondrej Majer (Institute of Philosophy, Academy of Sciences of the Czech Republic, Czechia)
Location: Room 364
11:00
Vit Puncochar (Institute of Philosophy, Czech Academy of Sciences, Czechia)
Algebraic Semantics for Inquisitive Logics

ABSTRACT. Inquisitive semantics is a currently intensively developing approach to the logical analysis of questions (see Ciardelli, Groenendijk, Roelofsen 2018). The basic idea of inquisitive semantics is that, to be able to provide a semantic analysis of questions, one has to go beyond the standard approach that identifies sentential meaning with truth conditions. This approach is applicable only to declarative sentences, not to questions, which have no truth conditions. The concept of “truth in a world” is replaced by a concept of “support in an information state”. Support is suitable for a uniform logical analysis of both statements and questions.

Inquisitive semantics, as we have just described it, is a relational semantics: it is based on a relation of support between information states and sentences. Although this semantics has been explored thoroughly in the last decade, so far not much attention has been paid to the algebraic aspects of inquisitive logic (with the exceptions of Frittella et al. 2016 and Roelofsen 2013). But it is evident that the models of inquisitive semantics generate interesting algebras of propositions, and it would be desirable to understand better the nature of these algebraic structures.

This paper is intended as a contribution to the algebraic study of inquisitive propositions. We will not focus solely on the algebras related to the basic inquisitive logic, i.e. the logic generated by standard inquisitive semantics. Our perspective will be more general, and we will define a class of algebras that are suitable for a broad (in fact, uncountably large) class of propositional superintuitionistic inquisitive logics that were introduced in (Author, 2016). (Standard inquisitive logic is the strongest logic in this class, having a role similar to that of classical logic among superintuitionistic logics.) We will call such algebras “inquisitive Heyting algebras”. We will explain how questions are represented in these structures (prime elements represent declarative propositions, non-prime elements represent questions, join is a question-forming operation) and provide several alternative characterizations of inquisitive algebras. For example, a Heyting algebra is inquisitive iff (i) it is isomorphic to the algebra of finite antichains of a bounded implicative meet semi-lattice; iff (ii) it is join-generated by its prime elements and the set of prime elements is closed under meet and relative pseudo-complement; iff (iii) its prime filters and filters generated by prime elements coincide and prime elements are closed under relative pseudo-complement. We will also explain how exactly inquisitive algebras are related to inquisitive logics.
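A standard illustration of this algebraic reading of questions (a textbook example from basic inquisitive semantics, not specific to the wider class of logics studied in the talk): the polar question whether p is the join of the two declaratives,

\[ {?}p \;=\; [p] \vee [\neg p], \]

and this element is not join-prime, since ${?}p \leq [p] \vee [\neg p]$ while ${?}p \nleq [p]$ and ${?}p \nleq [\neg p]$; declarative propositions, by contrast, correspond to the join-prime elements.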

References:

Author (2016)

Ciardelli, I., Groenendijk, J., Roelofsen, F. (2018). Inquisitive Semantics. Oxford University Press.

Frittella, S., Greco, G., Palmigiano, A., Yang, F. (2016). A Multi-type Calculus for Inquisitive Logic. In: Väänänen, J., Hirvonen, Å., de Queiroz, R. (eds.), Proceedings of the 23rd International Workshop on Logic, Language, Information, and Computation, Springer, pp. 215–233.

Roelofsen, F. (2013). Algebraic foundations for the semantic treatment of inquisitive content, Synthese, 190, Supplement 1, 79--102.

11:30
Igor Sedlar (Czech Academy of Sciences, Czechia)
Substructural propositional dynamic logic

ABSTRACT. Propositional dynamic logic, PDL, is a well known modal logic with applications in formal verification of programs, dynamic epistemic logic and deontic logic, for example [2]. More generally, PDL can be seen as a logic for reasoning about structured actions modifying various types of objects; examples of such actions include programs modifying states of the computer, information state updates or actions of agents changing the world around them.

In this contribution we study a version of PDL where the underlying propositional logic is a weak substructural logic in the vicinity of the full distributive non-associative Lambek calculus with a weak non-involutive negation. Our main technical result is a completeness proof for the logic with respect to a class of modal Routley-Meyer frames. The motivation for this endeavor is to provide a logic for reasoning about structured actions that modify situations in the sense of Barwise and Perry [1], the link being the informal interpretation of the Routley-Meyer semantics for substructural logics in terms of situations [3]. In the contribution we report on our partial progress in this area (the version of the Lambek calculus used in our paper is weaker than the logic related to situation semantics in [3]) and comment on the problems that remain to be solved.
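For orientation, the standard Routley-Meyer clauses for fusion and its residual, on which the situation reading mentioned above is based, are (the familiar textbook formulation, not the specific modal frames of this contribution):

\[ x \Vdash A \circ B \iff \exists y,z\,(Ryzx \text{ and } y \Vdash A \text{ and } z \Vdash B), \qquad x \Vdash A \to B \iff \forall y,z\,(Rxyz \text{ and } y \Vdash A \text{ imply } z \Vdash B), \]

where, on the situation interpretation, $Ryzx$ can be read as saying that the combined information of situations $y$ and $z$ is contained in $x$.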

References [1] Barwise, J. and Perry, J.: Situations and Attitudes. MIT Press, 1983. [2] Harel, D., Kozen, D. and Tiuryn, J.: Dynamic Logic. MIT Press, 2000. [3] Mares, E.: Relevant Logic: A Philosophical Interpretation. Cambridge University Press, 2004.

12:00
Andrew Tedder (Institute for Computer Science, Czech Academy of Sciences, Czechia)
Igor Sedlar (Institute for Computer Science, Czech Academy of Sciences, Czechia)
Residuals and Conjugates in Positive Substructural Logic
PRESENTER: Andrew Tedder

ABSTRACT. In substructural, fuzzy, and many other logics, relations of residuation between conjunction-like and conditional-like connectives are central (Ono et al. 2007). For instance, in the context of frame semantics, relations of residuation between connectives allow those connectives to be interpreted by a single relation, as has been studied in the context of Gaggle Theory (Bimbó & Dunn 2008).

In Boolean algebras with operators (BAOs), residuation has a second face in the form of relations of conjugation (Jónsson & Tsinakis 1993; Mikulás 1996) – the residuals of an operator are definable in terms of its conjugates, and vice versa, by means of Boolean negation. An immediate result of this is that in BAOs, a collection of operations all conjugated with respect to each other may be interpreted by a single relation in the frame semantics.
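To make this interdefinability explicit in the simplest, unary case (a standard fact from the BAO literature, stated here in our own notation): operators $f$ and $g$ are conjugates when

\[ f(a) \wedge b = 0 \iff a \wedge g(b) = 0, \]

and the residual $h$ of $f$ is then recovered via Boolean negation as $h(c) = \neg g(\neg c)$, since $f(a) \leq c \iff f(a) \wedge \neg c = 0 \iff a \wedge g(\neg c) = 0 \iff a \leq \neg g(\neg c)$.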

This talk concerns relations of residuation and conjugation in a positive context – in particular, in the context of logics extending the positive non-associative Lambek calculus with distributive lattice operations. This logic is the extension of distributive lattice logic by means of a binary operator – sometimes called fusion – with left and right residuals, where fusion is assumed only to be monotone (or, equivalently when the residuals are present, to distribute over (and into) the lattice join). The extension of this logic with which we’re concerned is that resulting from the addition of two additional fusion-like connectives, where each fusion is conjugated with respect to the others, and left and right residuals for each additional fusion.

Our concern with this logic is motivated by the ternary relation frame semantics for the Lambek calculus – since the residuals and conjugates of an operator can be interpreted by means of one accessibility relation in BAOs, it is an interesting question whether the same is true in a positive context. Of particular interest here is that adding conjugates, and their residuals, to the language would allow for more expressive power in characterising ternary relation frames – in the language including the conjugates, simple frame correspondents can be found for some classes of frames which have otherwise only been characterised by means of negation. Furthermore, there are interesting connections to the semantics for other substructural and relevant logics.

This talk presents the logic in question as characterised by a class of ternary relation models – then we go on to consider the question of completeness.

References

Bimbó & Dunn Generalised Galois Logics: Relational Semantics of Nonclassical Logical Calculi, CSLI publications, 2008

Jónsson & Tsinakis Relation Algebras as Residuated Boolean Algebras, Algebra Universalis, Volume 30, 1993, pp. 469–478

Mikulás Complete Calculus for Conjugated Arrow Logic, in Arrow Logic and Multi-Modal Logic, ed. Marx, Polos, and Masuch, CSLI, 1996

Ono, Galatos, Jipsen, and Kowalski, Residuated Lattices: An Algebraic Glimpse at Substructural Logics, Elsevier, Studies in Logic Series, Volume 151, 2007

11:00-12:00 Session 29E: IS C5 Sullivan
Chair:
Mitsuhiro Okada (Keio University, Japan)
Location: Room Krejcar
11:00
Jacqueline Sullivan (University of Western Ontario, Canada)
Creating Epistemically Successful Interdisciplinary Research Infrastructures: Translational Cognitive Neuroscience as a Case Study

ABSTRACT. Mental illness and neurodegenerative diseases are widely understood to involve impairments in cognition. During the past two decades, neuroscientists have sought to characterize these impairments and identify the neural mechanisms that give rise to them. A common approach has been to investigate the neural mechanisms of cognitive functions in non-human animals using classic paradigms such as fear-conditioning and the Morris water maze and use findings to ground inferences about the causes of cognitive dysfunction in humans. One obstacle to progress in this research area, however, has been the lack of analogous experimental tools to assess cognitive function and dysfunction in experimental animals and humans. Specifically, it is unclear whether experimental tools used in human and non-human animal studies probe the same cognitive functions.

In this talk, I describe and evaluate a collaborative open-science research initiative, translational cognitive neuroscience (TCN), which began in the 1990s with an eye towards developing a set of complementary experimental tools to investigate cognition in humans and non-human animals. At that time, cognitive neuroscientists were using computer-based touchscreen tasks to assess cognitive impairments in mental illness and neurodegenerative diseases, which were known to correlate with underlying neural dysfunction. With an eye towards facilitating the translation of results from rodent studies to human studies, TCN researchers developed an experimental apparatus for use with rodents, the rodent touchscreen operant chamber and a set of experimental tasks that closely resemble human touchscreen tasks.

In this talk, I argue that the success of these tools in producing translationally relevant data is contingent on their meeting a number of epistemic requirements (e.g., face validity, replicability, construct validity). Moreover, ensuring that these requirements are met involves an unprecedented amount of coordination within individual laboratories and across research groups. I examine the nature of this coordination and address the question of whether it provides any insights with respect to the correct recipe for cumulative science more generally.

11:00-12:30 Session 29F: Teaching panel

Organizer: Joeri Witteveen

This is a symposium proposal with a PANEL DISCUSSION format. No individual symposium papers will be submitted.

Prospective participants: Mieke Boon, Hasok Chang, Hans Halvorson, Mikkel Johansen, Alan Love, Roy Wagner

Abstract:

The teaching of history and philosophy of science occupies a somewhat unusual position in many university curricula. It is typically offered to philosophy students as part of their program, but is sometimes also part of the science curriculum. Often philosophy of science courses are electives, but at some (European) universities, history and philosophy of science is a required course for science students and forms part of the core curriculum. The aim of this panel discussion is to reflect on the practice of teaching HPS to science students. This is a particularly fitting topic for discussion at CLMPST 2019 as it takes the conference theme, “Bridging across academic cultures,” beyond research, into teaching.

The symposiasts will share and reflect on their experience with teaching HPS to science students in the formal, physical, life, and engineering sciences. The session format will be open-ended and allow for a broad variety of inputs and contributions. It will provide room for sharing personal experiences, for reflecting on the institutional and organizational embedding of the teaching of science students, and for the presentation of sample teaching materials. The audience of the session is welcome to join the discussion, which will touch on the following questions, among others.

(1) What makes teaching science students different from teaching philosophy students and how should we (historians and philosophers) adapt to an audience of practitioners of a field of study that we are reflecting on? The goals of teaching science students will often be somewhat different from teaching philosophy students, which could affect the selection of topics, the teaching format and styles, and the modes of examination. 
(2) How can the teaching of philosophy of science to science students benefit from recent developments in integrated HPS, practice-oriented philosophy of science, and socially relevant philosophy of science? The increasing attention to case studies and scientific practice in contemporary HPS research is a rich source of teaching materials. Based on particular examples, panel members will discuss how these can be packaged and processed to make them suitable for teaching.
(3) What kind of teaching materials are useful for teaching HPS to science students? Many history and philosophy of science textbooks are written without an audience of scientists in mind, but some newer textbooks are particularly written for training scientists. If used, what role should a textbook occupy? What is the proper role of other teaching materials (articles, dedicated webpages, podcasts, vlogs) for exploring specific topics and examples? We discuss advantages and disadvantages of working with different kinds of textbooks and with collections of articles.
(4) What is the added value of having someone trained in HPS teach a course on the history and philosophy of a scientific subject? Does HPS teaching occupy a special niche, which HPS teachers fill better than specialists in the field, and if this is claimed, what is the evidence for it? Reflection on these questions will be crucial for explaining the importance of educational expertise in HPS to students and program managers.
(5) What are the best practices for co-teaching a philosophy of science course with a scientist? We consider best practices for developing co-taught courses and discuss how different academic backgrounds and teaching styles can be complementary and in conflict.
(6) What, if any, are the essential ingredients of a course in HPS for scientists? Should a brief twentieth-century history of philosophy of science from (say) logical empiricism to Feyerabend be part of any philosophy of science course, or should developments in the particular science under discussion guide the selection of topics? And what about teaching students about their own role as scientists: should an HPS course make space for discussion of responsible conduct of research, integrity, and social responsibility?

The outcomes of the panel discussion will be used in a project led by the University of Copenhagen to inventory, organize, and disseminate teaching materials and information about best practices on teaching philosophy of science to science students. To this end, we aim to open a web portal for philosophy of science teachers in the near future.

Chair:
Joeri Witteveen (University of Copenhagen / Utrecht University, Denmark)
Location: Room 250
11:00-12:00 Session 29G: B3 SYMP Communication and exchanges among scientific cultures 2 (CESC-2). Circulation of epistemological elements
Chair:
Vitaly Pronskikh (Fermi National Accelerator Laboratory, United States)
Location: Room 152+153
11:00
Xiaofei Wang (Institute for History of Natual Science, Chinese Academy of Sciences, China)
The circulation of epistemic values between mathematical cultures: The epistemic values pursued by J.-L. Lagrange in his teaching of analysis at the Ecole Polytechnique

ABSTRACT. In the context of the scientific and education reforms that took place during the French Revolution, J.-L. Lagrange (1736-1813) participated in many activities, such as the reform of the system of measurement units and the teaching of analysis at the newly founded institutions, notably the Ecole Normale of the year III and the Ecole Polytechnique. His teaching at the Ecole Polytechnique, which lasted from 1795 until 1799, gave rise to two important publications on analysis, Théorie des fonctions analytiques (first published in 1797) and Leçons sur le calcul des fonctions (first published in 1801). In these two works, Lagrange advocates grounding the calculus on his theory, or method, of “derived functions”. In the introductions to his books, Lagrange first examines the methods employed by past practitioners, including the methods of Newton, Leibniz and their followers. By taking a close look at Lagrange’s historical treatment in these two publications, this talk aims at clarifying the purpose that Lagrange assigned to this historical narrative. I will argue that in his books on analysis Lagrange pursues four epistemic values, namely generality, rigor, clarity and simplicity, and that his pursuit of these values can be correlated with the reasons he gives for rejecting all the conceptualizations of his predecessors. More importantly, my claim is that these four values brought together epistemic values that were prized in the 18th-century practice of analysis (generality, simplicity, clarity) and a value for which Lagrange found inspiration in his reading of ancient Greek texts, namely rigor. This can be related to Lagrange’s special interest in Greek mathematical writings. The key point is that, as I will further argue, these four epistemic values played an essential role in Lagrange’s determination of which method should be used to deal with the calculus. How Lagrange put forward rigor and practiced this epistemic value in the context of his own work will be at the core of my presentation. This circulation of a key epistemic value from one context to another would have a key impact on future research in analysis. Indeed, I will emphasize that Lagrange’s view had an impact on his successors who taught analysis at the Ecole Polytechnique, and in particular initiated the rigorization of analysis before Augustin-Louis Cauchy (1789-1857). For example, Joseph Fourier (1768-1830) took special care to proceed in a rigorous way in the elementary course of analysis that he too delivered at the Ecole Polytechnique between 1795 and 1798.
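As a reminder of what Lagrange's method of derived functions amounts to (a standard historical gloss, added here for the reader's convenience and not part of the speaker's argument): Lagrange defines the derivative not via limits or infinitesimals but as a coefficient in the algebraic power-series expansion

\[ f(x+i) = f(x) + f'(x)\,i + \frac{f''(x)}{2}\,i^2 + \frac{f'''(x)}{2\cdot 3}\,i^3 + \cdots, \]

so that the first derived function $f'(x)$ is read off from the expansion itself.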

11:30
François Lê (Institut Camille Jordan - Université Claude Bernard Lyon 1, France)
Characterizing as a cultural system the organization of mathematical knowledge: a case study from the history of mathematics

ABSTRACT. This talk aims at challenging the use of the notion of “culture” to describe a particular organization of mathematical knowledge, a knowledge shared by a few mathematicians over a short period of time in the second half of the nineteenth century. This knowledge relates to “geometrical equations,” objects that proved crucial for the modalities of encounters between a part of algebra and a part of geometry at that time. The description of the mathematical collective activities linked to geometrical equations, and especially the technical aspects of these activities, will be made on the basis of a sociological definition of culture. More precisely, after an examination of the social organization of the mathematicians considered, I will argue that these activities form an intricate system of patterns, symbols, and values, for which I suggest a characterization as a cultural system. I will finally suggest that the cultural traits of this cultural system may be seen as cases of geometrical reinterpretations of algebraic patterns made by geometers trying to cope with a part of algebra, which they found difficult to understand. This will eventually raise the question of interpreting the cultural system as the outcome of a process of acculturation of geometers into an algebraic culture.

11:00-12:00 Session 29I: C3 Evolution and explanation
Chair:
Brice Bantegnie (Czech Academy of Sciences, Czechia)
Location: Room 201
11:00
Rodrigo Lopez-Orellana (Universidad, Spain)
David Cortés-García (Universidad, Spain)
A Scientific-Understanding Approach to Evo-Devo Models

ABSTRACT. The aim of this paper is to characterize Evolutionary Developmental Biology (evo-devo) models in order to show the role they fulfill in biology, especially how they can provide functional elements which would enrich the theoretical explanation of large-scale evolution. For this purpose, we analyze two evo-devo models: (i) the Polypterus model, which explains anatomical and behavioral changes in the evolution of ancient stem tetrapods (Standen et al. 2014), and (ii) the Lepidobatrachus model, which accounts for the processes of embryogenesis and morphogenesis of early vertebrates (Amin et al., 2015).

In the last two decades, evo-devo has represented an interesting shift in the way we understand evolution, mainly driven by experimental research in causal embryology and the genetics of development. At the same time, evo-devo has also inspired new challenges in the study of scientific explanation, modeling, and experimentation, as well as in the ontological commitments that scientists assume when they undertake theoretical generalizations. Specifically, explanatory models in evo-devo attempt to represent emergent phenomena, such as phenotypic plasticity. These complex phenomenal relationships, which are of a causal-functional kind, prevent the analysis of scientific explanation from being restricted to the syntactic structure or the semantic content of the theories and models. Thus, we assert that the notion of understanding needs to be included in order to account for the salient role that models play in evo-devo (Diéguez 2013; de Regt et al. 2009). Understanding belongs to a cognitive but also pragmatic domain: the analysis of models must include issues such as the scientist's intentions (to explain/understand a phenomenon) (Knuuttila and Merz 2009), and the material and abstract resources she uses to achieve her goals.

Thus, from a pluralist and pragmatic approach to the meaning and use of models in science, our ultimate goal is to provide some minimal criteria that evo-devo models must fulfill in order to provide genuine or effective understanding. Such understanding must be conceived as the state of a cognitive subject, but it also refers to the utility and manipulability of theories and models as evaluated by the scientific community.

References:

Amin, N. et al. (2015). Budgett’s frog (lepidobatrachus laevis): a new amphibian embryo for developmental biology. Developmental Biology, 405 (2), 291-303.

Diéguez, A. (2013). La función explicativa de los modelos en biología. Contrastes. Revista Internacional de Filosofía, (18), 41–54.

de Regt, H. W., Leonelli, S., & Eigner, K. (2009). Focusing on scientific understanding. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific Understanding. Philosophical Perspectives (p. 1-17). Pittsburgh: University of Pittsburgh Press.

Knuuttila, T., & Merz, M. (2009). Understanding by Modeling. An Objectual Approach. In H. W. de Regt, S. Leonelli, & K. Eigner, eds., Scientific Understanding. Philosophical Perspectives (pp. 146–168). Pittsburgh: University of Pittsburgh Press.

Elgin, C. Z. (2009). Is Understanding Factive? In A. Haddock, A. Millar, & D. Pritchard, eds., Epistemic Value (pp. 322–330). Oxford University Press.

Standen, E. M., Du, T. Y., & Larsson, H. C. E. (2014). Developmental Plasticity and the Origin of Tetrapods. Nature, 513, 54–58.

11:30
Thomas Reydon (Leibniz University of Hannover, Germany)
How do evolutionary explanations explain?

ABSTRACT. In this talk I address the explanatory scope of evolutionary theory and the structure of evolutionary explanations. These issues are in need of clarification because of (at least) two factors. First, evolutionary theory has been undergoing continuous, profound change from Darwin’s formulation via late nineteenth-century Neo-Darwinism to the mid-twentieth-century Modern Synthesis, to a possible Extended Synthesis that is being considered today. These changes, which are likely to continue after the current debate on a possible Extended Synthesis has been settled, affect both what evolutionary theory is thought to explain and the ways in which it can perform explanatory work. Second, investigators in areas outside biology, such as evolutionary economics and the various evolutionary programs in the social sciences (e.g., Aldrich et al., 2008; Hodgson & Knudsen, 2010), are increasingly attempting to apply evolutionary theory to construct scientific explanations of non-biological phenomena, using different kinds of evolutionary models in different domains of research. This raises a number of questions: Exactly how much can be explained by applying an evolutionary framework to non-biological systems that differ widely from biological ones? Can applications of evolutionary theory outside biology achieve a similar explanatory force as when applied to cases in biology? What – if any – basic explanatory structure unifies the different evolutionary models used in biology and the different areas of the social sciences? And so on.

I will try to achieve more clarity on these questions by treating them as a set of questions about the ontology of evolutionary phenomena. My claim is that practices of applying evolutionary thinking in non-biological areas of work can be understood as what I call “ontology-fitting” practices. For an explanation of a particular phenomenon to be a genuinely evolutionary explanation, the explanandum’s ontology must match the fundamental ontology of evolutionary phenomena in the biological realm. This raises the question of what elements this latter ontology consists of. However, there is no unequivocal answer to this question. There is ongoing discussion about what the basic elements in the ontology of biological evolutionary phenomena (such as the units of selection, the units of replication, etc.) are and how these are to be conceived of. Therefore, practitioners from non-biological areas of work cannot simply take a ready-for-use ontological framework from the biological sciences to fit their phenomena into. Rather, they pick elements from the biological evolutionary framework that seem to fit their phenomena, disregard other elements, and construct a framework that is specific to the phenomena under study. By examining cases of such “ontology fitting” we can achieve more clarity about the requirements for using evolutionary thinking to explain non-biological phenomena, as well as about the question of how evolutionary explanations explain.

References

Aldrich, H.A., Hodgson, G.M., Hull, D.L., Knudsen, T., Mokyr, J. & Vanberg, V.J. (2008): ‘In defence of generalized Darwinism’, Journal of Evolutionary Economics 18: 577-596.

Hodgson, G.M. & Knudsen, T. (2010): Darwin’s Conjecture: The Search for General Principles of Social and Economic Evolution, Chicago & London: University of Chicago Press.

11:00-12:00 Session 29J: A1 Model theory
Chair:
Elisángela Ramírez-Cámara (Instituto de Investigaciones Filosóficas, Universidad Nacional Autónoma de México, Mexico)
Location: Room 347
11:00
Ziv Shami (Ariel University, Israel)
On the forking topology of a reduct of a simple theory

ABSTRACT. Let $T$ be simple and $T^-$ a reduct of $T$. For variables $x$, we call an $\emptyset$-invariant set $\Gamma(x)$ of the monster model $\mathfrak{C}$ a universal transducer if for every formula $\phi^-(x,y)\in L^-$ and every $a$: $\phi^-(x,a)$ $L^-$-forks over $\emptyset$ iff $\Gamma(x)\wedge \phi^-(x,a)$ $L$-forks over $\emptyset$. We show that there is a greatest universal transducer $\tilde\Gamma_x$ (for any $x$) and that it is type-definable. In particular, the forking topology on $S_y(T)$ refines the forking topology on $S_y(T^-)$. Moreover, we describe the set of universal transducers in terms of a certain topology on the Stone space and show that $\tilde\Gamma_x$ is the unique universal transducer that is $L^-$-type-definable with parameters. In the case where $T^-$ is a theory with the wnfcp (the weak nfcp) and $T$ is the theory of its lovely pairs, we show that $\tilde\Gamma_x=(x=x)$ and give a more precise description of all the universal transducers in the case where $T^-$ has the nfcp.

11:30
Yiannis Kiouvrekis (National Technical University of Athens, Greece)
Remarks on Abstract Logical Topologies: An Institutional Approach.

ABSTRACT. In “Logical Topologies and Semantic Completeness” [10], V. Goranko established a key connection between logic and topology. He proposed a topological approach to proving the semantic completeness of a logical system with respect to a class of “standard models”, provided that a weaker completeness result with respect to a larger class of “general models” is available. The author pointed out that there is no general method for solving the problem described above, but that usually some specific model-theoretic constructions are applied which transform general models into standard models while preserving satisfiability, and that topological methods and results have so far been under-utilized for solving purely logical problems. Our goal is to establish an appropriate framework for all of the above within an axiomatic setting; for this reason, we appeal to institution theory. Institution theory [2] is an important field in so-called universal logic. It is a categorical abstract model theory which formalizes the notion of a logical system, including syntax, semantics and the satisfaction relation between them. One of the many achievements of institution theory has been to provide a conceptually elegant and unifying definition of the nature of logical systems. It provides a complete form of abstract model theory, including signature morphisms, model reducts and mappings between logics, known as institution-independent model theory. We propose the concept of topological semantics at the level of abstract model theory provided by an institution-independent framework. Our abstract topological logic framework provides a method for systematic topological-semantics extensions of logical systems from computer science and logic. It also provides several appropriate model-theoretic tools for proving semantic completeness for arbitrary logics. Furthermore, we show how to extend the institution-independent method of ultraproducts [5] to topological semantics and topological satisfaction, and we develop a fundamental ultraproduct theorem for modal satisfaction. Finally, it is easy to show that this framework accommodates fundamental logical notions in a topological setting, such as Craig interpolation, completeness, bisimulation, and filtration.
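For readers unfamiliar with the formalism, the heart of the definition of an institution invoked above is the satisfaction condition (standard form, as in [2]): for every signature morphism $\varphi : \Sigma \to \Sigma'$, every $\Sigma'$-model $M'$ and every $\Sigma$-sentence $e$,

\[ M' \models_{\Sigma'} \mathrm{Sen}(\varphi)(e) \iff \mathrm{Mod}(\varphi)(M') \models_{\Sigma} e, \]

that is, truth is invariant under change of notation.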

Bibliography [1] Paul M. Cohn. Universal Algebra. Harper and Row, 1965. Revised edition 1991. [2] Joseph Goguen and Rod Burstall. Institutions: Abstract model theory for specification and programming. Journal of the Association for Computing Machinery 39(1):95–146, 1992. [3] Rod Burstall and Joseph Goguen. The semantics of Clear, a specification language. In 1979 Copenhagen Winter School on Abstract Software Specification, volume 86 of Lecture Notes in Computer Science, pages 292–332. Springer, 1980. [4] Razvan Diaconescu. Institution-independent ultraproducts. Fundamenta Informaticae, 55(3–4):321–348. [5] Petros Stefaneas and Razvan Diaconescu. Ultraproducts and possible worlds semantics in institutions. Theoretical Computer Science, 379(1):210–230, 2007. [6] Razvan Diaconescu. Institution-independent Model Theory. Studies in Universal Logic, Birkhäuser, 2008. [7] Tarski, A. (1938). Der Aussagenkalkül und die Topologie. Fund. Math., 31:103–134. [8] McKinsey, J. and Tarski, A. (1944). The algebra of topology. Annals of Mathematics, 45:141–191. [9] McKinsey, J. C. C. (1941). A solution of the decision problem for the Lewis systems S2 and S4, with an application to topology. J. Symbolic Logic, 6:117–134. [10] V. Goranko. Logical Topologies and Semantic Completeness. In J. van Eijck, V. van Oostrom, and A. Visser, editors, Logic Colloquium '99, pages 68–79. Lecture Notes in Logic 17, AK Peters, 2004.

11:00-12:00 Session 29K: A4 History and philosophy of traditional logic 2
Chair:
Gabriela Besler (University of Silesia in Katowice, Poland)
Location: Room 301
11:00
Jari Palomäki (Tampere University of Technology, Finland)
The Intensional and Conceptual Content of Concepts

ABSTRACT. A famous logic text, the Port Royal Logic, composed by two leaders of the Port Royal movement, Antoine Arnauld and Pierre Nicole, in 1662, made a distinction between the comprehension [compréhension] and the extension [étendue or extension] of an idea. The comprehension of an idea consists of “the attributes which it includes in itself, and which cannot be taken away from it without destroying it” (Arnauld and Nicole, 1996, I, 6; II, 17). The extension of an idea consists of “the subjects with which that idea agrees,” or which contain it. Both the comprehensions of ideas and the extensions of ideas are used in the Port Royal Logic to justify the basic rules of traditional logic (ibid., II, 17-20). However, nowadays there are at least two different ways to interpret “the comprehension of an idea”, i.e. either as “the intension of a concept” or as “the conceptual content of a concept”. These two things are to be distinguished as well, as will be shown in this paper below.

1. Limits of the Traditional Conceptual Content of a Concept

In traditional approach the conceptual content and the extension of a concept can be defined as follows:

I. The conceptual content of a concept consists of all those attributes, i.e. concepts, which are contained in it.

II. The extension of a concept consists of all those objects which fall under it.

From these two definitions the rule of inverse relation between the extension and the conceptual content of a concept follows:

# The lesser the extension of a concept, the greater is its conceptual content, and vice versa.

However, Bernard Bolzano in his Wissenschaftslehre (1837, §120) gives the following examples in order to show that the rule (#) is not always the case:

1. ‘A man, who understands every European language’,

and

2. ‘a man, who understands every living European language’.

The conceptual content of the concept (1) is smaller than the conceptual content of the concept (2), for the concept (2) has in addition the concept of ‘living’ in its conceptual content. The extension of the concept (1) is also smaller than the extension of the concept (2), for there are fewer people who understand every European language (including e.g. Latin) than people who understand every living European language. Thus, according to Bolzano, the concepts (1) and (2) contradict the rule (#).

3. The Intensional and Conceptual Content of a Concept

Given Bolzano’s ‘counter-examples’ (1) and (2), it is now possible to distinguish between the intensional and the conceptual content of concepts, as well as the extension of concepts. These differences are illustrated by means of Bolzano’s ‘counter-examples’ (1) and (2) as follows. Firstly, the intensional content of the concept (1) is greater than the intensional content of the concept (2), for a man who understands every European language also understands every living European language, whereas a man who understands every living European language does not necessarily understand every European language. Secondly, the conceptual content of the concept (1) is smaller than the conceptual content of the concept (2), for the concept (2) has in addition the concept of ‘living’ in its conceptual content. Thirdly, the extension of the concept (1) is smaller than the extension of the concept (2), for there are fewer people who understand every European language than people who understand every living European language. Hence, it is possible to distinguish between the intensional and the conceptual content of concepts, as well as the extension of concepts.
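The comparison can be summarized schematically (the symbols $I$, $C$ and $E$ for intensional content, conceptual content and extension are introduced here only for readability and are not the author's notation):

\[ I(1) \supsetneq I(2), \qquad C(1) \subsetneq C(2), \qquad E(1) \subsetneq E(2), \]

whereas the rule (#) would require that the smaller conceptual content go with the larger extension, i.e. that $C(1) \subsetneq C(2)$ imply $E(1) \supsetneq E(2)$.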

References

Arnauld, A. and Nicole, P., 1996: Logic or the Art of Thinking. Trans. J. V. Buroker. Cambridge: Cambridge University Press. Bolzano, B., 1837: Wissenschaftslehre I. Sulzbach.

11:30
Yusuf Dasdemir (University of Jyvaskyla, Finland)
Reception of Absolute Propositions in the Avicennian Tradition: Ibn Sahlān al-Sāwī on the Discussions of the Contradiction and Conversion of Absolute Propositions

ABSTRACT. There is no doubt that Avicenna (Ibn Sina, d. 1037) is one of the most important logicians of the Middle Ages, and one of the most interesting parts of his logic is his theory of absolute propositions, which roughly correspond to Aristotle’s categorical sentences. While Avicenna takes a sharp departure from the First Master, Aristotle, in his views on the definition, contradiction, and conversion of his absolutes, some of his disciples and commentators felt the need to express objections to, or reservations about, these views. Ibn Sahlān al-Sāwī (d. 1145), one of the mediate disciples of Avicenna, is among those who were not content with his accounts of absolute propositions. This paper will deal with Sāwī’s theory of absolute propositions, and particularly with their contradiction and conversion. It will also discuss his objections, or qualifications, against the Avicennian theory, upon which he builds his own logical ideas. This paper is important because it aims to shed light on Sāwī as a logician who seems to have been so far overlooked in the literature, despite the fact that he is known to have influenced such prominent figures as Shahāb ad-Dīn al-Suhrawardī (d. 1191), the founder of Ishrāqī philosophy, and the philosopher-theologian Fakhr al-Dīn al-Rāzī (d. 1209). The paper is based on a treatise by Sāwī exclusively devoted to the discussion of the contradiction of absolute propositions and to clarifying some of his ideas on the issue put forward in his magnum opus, al-Basāir al-Nasīriyya, the fame of which reached much further than its author’s, so that Sāwī has often been referred to as ‘the author of al-Basāir’ in the philosophical sources of the Islamic world.

11:00-12:30 Session 29L: C2 Relativity and spacetime
Chair:
Ladislav Kvasz (Czech Academy of Sciences, Czechia)
Location: Room 302
11:00
Lorenzo Cocco (Université, Switzerland)
Joshua Babic (University of Geneva/Université de Genève, Switzerland)
Theoretical equivalence and special relativity
PRESENTER: Joshua Babic

ABSTRACT. Quine [1975] proposes an attractive criterion - later refined by Barrett and Halvorson [2016] - for when two first-order systems count as formulations of the same [scientific] theory: a specific sort of translation function must exist between the two languages so as to map one set of axioms into a logical equivalent of the other. Barrett and Halvorson [2016] also ask for a reverse function that returns for each formula a logical equivalent of the original. Elementary philosophical considerations - about the way in which the reference of theoretical terms gets fixed - suggest that equivalent theories are simply notational variants of each other. No rational preference can be had, and no distinction about ontology can be made, between pairs of intertranslatable theories. Unfortunately, few of the interesting cases of theories believed to be equivalent - such as Lagrangian and Hamiltonian mechanics, matrix and wave quantum mechanics, or the manifold and the algebraic formulations of general relativity - have been examined under this strict notion of equivalence. A lack of axiomatisations is mainly to blame. In the present work we begin by liberalizing the class of translation functions admitted by Quine [1975] to include mappings that send a predicate of a fixed arity to predicates of a fixed larger arity. We illustrate the need for this extension with some mathematical examples that we will employ later: the interpretation of the theories of matrices and of polynomials of a fixed degree in the theory of their [field of] coefficients, and also that of the rational numbers in that of the integers. We then consider two systems of axioms for light rays moving in an empty Minkowski spacetime. One is a revision of the system of [Andréka, Németi et al. 2011], in which what appear to us to be several mistakes in the formulation are corrected. This first system does not appear, at least at first, to assume the existence of spacetime points and merely assigns coordinate values to physical objects. It treats mainly of frames of reference and of the transformations between frames. The second set of axioms is our own and attempts to formalize the geometric account of spacetime given in [Maudlin 2012] and [Malament, unpublished]. This second system assumes the existence of spacetime points, but its vocabulary is purely geometric and it makes no reference to coordinates or to numbers. We will present the axioms of both systems and then proceed to prove their equivalence by constructing an appropriate translation manual.
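To give a schematic idea of the arity-increasing translations appealed to here (a toy rendering of the familiar coding of rationals by pairs of integers, not the authors' construction): a rational is coded by a pair of integers,

\[ (p,r) \in \mathbb{Z}^2, \; r \neq 0, \qquad (p,r) \sim (p',r') \iff p \cdot r' = p' \cdot r, \]

so that an $n$-ary predicate over the rationals is translated by a $2n$-ary predicate over the integers; for instance, the order relation becomes

\[ q_1 < q_2 \;\rightsquigarrow\; (p_1 r_2 - p_2 r_1)\cdot r_1 r_2 < 0. \]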

References Andréka, H., Madarász, J. X., Németi, I. and Székely, G., On logical analysis of relativity theories. Hungarian Philosophical Review 54,4 (2011): 204-222. Thomas W. Barrett and Hans Halvorson. Glymour and Quine on Theoretical Equivalence. J Philos Logic (2016) 45: 467. David Malament, Geometry and spacetime, unpublished notes. Tim Maudlin. Philosophy of physics: space and time. Princeton University Press, 2012. Willard V. Quine, On Empirically Equivalent Systems of the World. Erkenntnis (1975): 313-28.

11:30
Pablo Acuña (Universidad, Chile)
Dynamics and Chronogeometry in Spacetime Theories

ABSTRACT. A recent debate on the foundations of special relativity (SR) concerns the direction of an alleged arrow of explanation between Minkowski structure and Lorentz covariant dynamics. Harvey Brown (2005) argues that the latter explains the former, whereas Michel Janssen (2009) argues that the arrow points in the opposite direction. I propose a different view concerning the relation between dynamics and spacetime structure, drawing a lesson from Helmholtz's (1977) work on the epistemology of geometry. Helmholtz's insight was that for the question of the geometric structure of physical space to make sense at all, dynamical considerations must be involved from the outset. If the notions of congruence and rigidity are not previously defined and operationalized in terms of dynamical principles, the measurements that can tell us about the geometric structure of physical space are neither defined nor possible. Geometric structure cannot refer to the physical world unless dynamical principles define congruence and rigidity. The converse is also true: dynamics makes definite sense only against a geometric structure background. This is why measurements with rigid bodies constitute empirical evidence for a certain geometric structure in the first place. If the dynamics of measuring rods were geometrically neutral, measurements would be idle with respect to the geometric structure of physical space. I illustrate this point by comparing SR and Lorentz's ether theory. The mathematical form of the dynamical laws in both theories is the same, but they have a different meaning. In Lorentz's theory, ∆x'=∆x⁄γ refers to the longitudinal contraction of an object that moves with respect to the ether with velocity v. In SR the formula refers to different measurements of the length of the same object in two frames in relative motion. This difference is grounded in the fact that ∆x'=∆x⁄γ is set up on different chronogeometric structures. For the ether theory to be able to pick a privileged ether-rest frame, Galilean spacetime must be the chronogeometric background. On the other hand, in SR the formula is about kinematics in different frames, since the background chronogeometric structure is Minkowski spacetime. If the law were chronogeometrically neutral we could not assign it either of the two meanings—or any physical meaning at all. In conclusion, for a chronogeometric structure to have a physical meaning, dynamical principles that operationalize it are necessary, and if dynamical laws are to have a definite physical meaning, they must be set up on a chronogeometric structure background. Thus, the connection between them cannot be explanatory, and the debate mentioned above is thereby dissolved. This thesis is a development of the argument in (Acuña 2016): there it is argued that in SR Minkowski spacetime and Lorentz covariance are inextricably connected. Here I argue that the same relation holds between spacetime structure and dynamics in spacetime theories in general.
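For reference, the contraction factor in the formula discussed above is the usual Lorentz factor (standard notation, not part of the abstract):

\[
\Delta x' = \frac{\Delta x}{\gamma}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\]

where v is the relative velocity (of the object with respect to the ether on Lorentz's reading, of the two frames on the special-relativistic reading) and c is the speed of light.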

REFERENCES

Acuña, P. (2016). ‘Minkowski Spacetime and Lorentz Invariance: the cart and the horse or two sides of a single coin?’ SHPMP 55: 1-12.

Brown, H. (2005). Physical Relativity. OUP.

Helmholtz, H. (1977). Hermann von Helmholtz's Epistemological Writings. Reidel.

Janssen, M. (2009). ‘Drawing the Line between Kinematics and Dynamics in Special Relativity’. SHPMP, 40: 26-52.

12:00
Noah Stemeroff (Tel Aviv University, Israel)
Symmetry, General Relativity, and the Laws of Nature

ABSTRACT. In modern physics, the laws of nature are derived from the fundamental symmetries of physical theory. However, recent attempts to ground a philosophical account of natural law on the symmetries and invariances of physical theory have been largely unsuccessful, as they have failed to address the framework-dependence of symmetry and invariance, the disparate nature of physical theory, and the recalcitrance of the natural world (e.g. see van Fraassen, 1989; Earman, 2003; and Brading and Castellani, 2003). In response, this paper provides a detailed study of the mathematical foundation of symmetries and invariances in modern physics in order to present a novel philosophical account of the constitutive role that mathematics plays in grounding the laws of nature.

In the first section of the paper, I provide a discussion of the geometrical connection between symmetries, invariances, and natural law within modern physical theory. In this context, I present an account of Noether's theorem, which states that every continuous symmetry in a set of dynamical equations corresponds to a conserved quantity. In fact, Noether's theorem is often taken to establish the connection between symmetries and conservation laws within modern physics (Butterfield, 2006). However, what is often left out of this discussion is the fact that the symmetries in a set of dynamical equations are not sufficient, on their own, to establish the relevant invariant quantities, since the symmetries must also be present in the underlying spacetime structure (Schutz, 1999). This leads me to a discussion of the spacetime structures of modern physical theory. Through analyzing these structures, I show that in the case of classical mechanics, quantum mechanics, and special relativity, the spacetime structures are all maximally symmetric and Noether's theorem is sufficient to characterize natural law. But the situation drastically changes when we allow for the possibility of a dynamical non-Euclidean geometry. In general relativity, the symmetries in a set of dynamical equations are not sufficient to establish a conserved quantity, as these symmetries are not typically present in the underlying spacetime structure. In fact, the relevant symmetries often have to be imposed on the spacetime in contradiction to the causality constraint of general relativity. The grounding of natural law in the symmetry structure of physical theory thus appears to be undermined by general relativity.
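A minimal textbook instance of the theorem invoked here, stated only to fix ideas (not drawn from the talk): for a Lagrangian L(q, q̇, t), the energy function E = Σ_i q̇_i ∂L/∂q̇_i − L satisfies, along solutions of the Euler-Lagrange equations,

\[
\frac{dE}{dt} = -\frac{\partial L}{\partial t},
\]

so invariance of L under time translation (∂L/∂t = 0) yields conservation of E. The worry raised above is that in general relativity the analogous spacetime symmetry (for instance, a timelike Killing field) need not exist in a generic solution.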

In the second section of the paper, I consider one possible solution to this concern that emerges from the fact that the mathematical structures of classical, quantum, and relativity theory are all formulated within a more general mathematical conception of nature characterized by the Lie calculus. I suggest that this mathematical framework may be able to provide a viable foundation to ground a philosophical account of natural law. Here I take my motivation from Feynman (1964, p. 59), who notes that mathematics may act as "a great sweeping principle" from which all laws of modern physics are derived. Despite the fact that each mathematical formalism for physical theory will entail a slightly different conception of symmetry and invariance, they all share a common understanding of natural law grounded in a geometrical-mathematical representation of reality. In this sense, each theory may provide a representation of reality from a particular mathematical vantage point. The common features of the geometrical-mathematical formalism of physical theory may offer the possibility of grounding a viable perspectival account of natural law. To conclude, I consider whether this perspectival account of natural law is able to ground a viable scientific realism, and discuss its broader implications for the philosophy of physics.

11:00-12:00 Session 29M: A2 Many-valued and probability logics 2
Chair:
Zuzana Rybaříková (University of Hradec Králové, Czechia)
Location: Room 346
11:00
Adam Edwards (University of Illinois at Urbana-Champaign, United States)
Seeing and Doing, or, why we should all be only half-Bayesian

ABSTRACT. In his 2001 paper, "Bayesianism and causality, or, why I am only a half-Bayesian" Judea Pearl claims that the language of probability is "not suitable for the task" of representing scientific knowledge. In doing so, Pearl not only implicitly eulogizes probabilistic accounts of causation (e.g. Reichenbach 1956, Suppes 1970) but also motivates his own account of the do-calculus. If Pearl is correct, then philosophers and practitioners of science who care about causation face a choice: supplement your formal language or abandon any formal representation of causation. The flourishing literature on causal modeling and Pearl's do-calculus (see Hitchcock 2018 for an overview) is a testament to the promise of the former. However, one question remains unanswered: exactly what is missing from the language of probability that renders it unsuitable for capturing causal relationships?

In this paper we argue that the answer to this question is that the language of probability is fundamentally propositional, while causal structure is fundamentally non-propositional. Following Cartwright (1979), we argue that the role causal structure plays in effective action suggests that the formal language for causation is not descriptive, but prescriptive. In support of this position we develop a non-propositional imperative logic which takes the axioms of probability theory as a fragment and supplements them with an imperative logic operator. We introduce the operator and consider its metalogical properties. Finally, we compare our approach to Pearl's do-calculus.
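For readers unfamiliar with the contrast Pearl draws, the standard illustration (not the imperative operator proposed in this talk) is the gap between observational and interventional conditioning. With a confounder Z satisfying the back-door criterion relative to (X, Y),

\[
P(y \mid x) = \frac{P(x, y)}{P(x)}, \qquad
P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z),
\]

and the second quantity cannot be read off from the joint distribution alone: whether this (or any) adjustment formula is correct depends on the causal graph, which is the sense in which the language of probability is said to be unsuitable for representing causal structure.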

11:30
Luca San Mauro (Vienna University of Technology, Austria)
Inductive Inference and Structures: How to Learn Equality in the Limit

ABSTRACT. (Joint work with Ekaterina Fokina and Timo Kötzing)

Algorithmic learning theory (ALT) is a vast research program, initiated by Gold and Putnam in the 1960s, that comprises different models of learning in the limit. It deals with the question of how a learner, provided with more and more data about some environment, is eventually able to achieve systematic knowledge about it. Classical paradigms of learning concern either learning of formal languages or learning of total functions.

In this work, we want to make sense of the following question: what does it mean to learn a structure? To do so, we combine the technology of ALT with notions coming from computable structure theory, and develop a formal framework for learning structures in the limit. We focus on the following case-study.

Consider a learner observing (a countably infinite number of) different items to be equivalent or not equivalent. The learner would like to arrive at a conjecture about the structure of this equivalence relation; that is, the learner would like to determine the isomorphism type of the equivalence structure embodied by the items. If the first guess has to be correct, we call the setting finite learning (denoted Fin); if the conjecture may be changed an arbitrary (but finite) number of times before stabilizing on a correct conjecture, we call the setting explanatory learning (denoted Ex). In each case, the data available to the learner is a complete, accurate list of which elements of the structure are equivalent and which are not. Following standard convention in ALT, we call this learning from informant (Inf), where both positive and negative information is available.
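As a toy illustration of the Ex paradigm just described (our sketch, not the authors' framework, and restricted for simplicity to equivalence structures with finitely many elements rather than the countably infinite structures studied in the talk), a learner can maintain a union-find structure over the elements mentioned so far and, after each datum from the informant, conjecture the multiset of class sizes observed; on a finite structure the conjectures change only finitely often and stabilize on the correct isomorphism type. The names ex_learner and UnionFind below are ours.

class UnionFind:
    """Disjoint-set structure over the elements seen so far."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry

def ex_learner(informant):
    """Yield a conjecture (sorted list of class sizes) after each labelled pair."""
    uf = UnionFind()
    for i, j, equivalent in informant:
        uf.find(i), uf.find(j)            # register both elements
        if equivalent:
            uf.union(i, j)
        sizes = {}
        for x in uf.parent:
            root = uf.find(x)
            sizes[root] = sizes.get(root, 0) + 1
        yield sorted(sizes.values())      # current conjecture of the isomorphism type

# Informant for the structure with classes {0, 1, 2} and {3, 4}:
data = [(0, 1, True), (3, 4, True), (1, 2, True), (0, 3, False), (2, 4, False)]
for conjecture in ex_learner(data):
    print(conjecture)                     # [2], [2, 2], [2, 3], [2, 3], [2, 3]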

We carefully distinguish many learning criteria and completely characterize which equivalence structures are learnable according to the (arguably) most natural one. We also discuss the philosophical significance of this approach and its relation with induction.

12:30-14:00 Lunch Break
14:00-15:00 Session 30A: A1/A4/B1/C1 SYMP Symposium on John Baldwin's model theory and the philosophy of mathematical practice 1 (Baldwin-1)

Organizer: John Baldwin

This book serves both as a contribution to the general philosophy of mathematical practice and as a specific case study of one area of mathematics: model theory. It deals with the role of formal methods in mathematics, arguing that the introduction of formal logic around the turn of the last century is important, not merely for the foundations of mathematics, but for its direct impact on such standard areas of traditional mathematics as number theory, algebraic geometry, and even differential equations. What drives this impact is the finding of informative axiomatizations of specific areas of mathematics, rather than a foundation which is impervious to the needs of particular areas. Some of the many uses of the tools of modern model theory are described for non-specialists.


The book outlines the history of 20th century model theory, stressing a paradigm shift from the study of logic as abstract reasoning to a useful tool for investigating issues in mathematics and the philosophy of mathematics. The book supports the following four theses that elaborate on this shift.


Theses
1. Contemporary model theory makes formalization of specific mathematical areas a powerful tool to investigate both mathematical problems and issues in the philosophy of mathematics (e.g. methodology, axiomatization, purity, categoricity and completeness).
2. Contemporary model theory enables systematic comparison of local formalizations for distinct mathematical areas in order to organize and do mathematics, and to analyze mathematical practice.
3. The choice of vocabulary and logic appropriate to the particular topic is central to the success of a formalization. The technical developments of first order logic have been more important in other areas of modern mathematics than such developments for other logics.
4. The study of geometry is not only the source of the idea of axiomatization and many of the fundamental concepts of model theory, but geometry itself (through the medium of geometric stability theory) plays a fundamental role in analyzing the models of tame theories and solving problems in other areas of mathematics.

The book emphasizes the importance in formalization of the choice of both the basic notions of a topic and the appropriate logic, be it first order, second order, or infinitary logic. Geometry is studied in two ways: through an analysis of the formalization of geometry from Euclid to Hilbert to Tarski, and by describing the role of combinatorial geometries (matroids) in classifying models. The analysis of Shelah's method of dividing lines for classifying first order theories provides a new look into the methodology of mathematics. A discussion of the connections between model theory and axiomatic and combinatorial set theory fills out the historical study.

Chair:
Juliette Kennedy (University of Helsinki, Finland)
Location: Room 364
14:00
Juliette Kennedy (University of Helsinki, Finland)
Symposium on John Baldwin’s Model Theory and the Philosophy of Mathematical Practice

ABSTRACT. John Baldwin’s monograph Model Theory and the Philosophy of Mathematical Practice (Cambridge) ([Bal18]) provides a novel setting within which various philosophical questions can be raised.

A metaphilosophical question comes immediately to mind: As a work in the emerging area of the philosophy of model theory, how does Baldwin’s approach define the central philosophical concerns of this new field? Also, what light does the Baldwin approach to the philosophy of mathematical practice shed on that practice, that is not available in other approaches?

The general philosophical point of view on which Model Theory and the Philosophy of Mathematical Practice relies is built on the idea of localization; of seeking targeted, local foundations for mathematics, as opposed to a global foundation. What is the nature of this seemingly anti-foundationalist view, and how deep does this anti-foundationalist stance reach? Does localizing in this case mean rejecting any kind of global framework? If so, how does [Bal18] shape this seeming pluralism into a coherent philosophical view? Or are questions of this kind bracketed in favor of the focus on methodology?

Shelah’s dividing lines, established by the Main Gap Theorem, play a central role in the book, the author expounding Shelah’s dividing line strategy as a general methodology in [Bal18, Chapter 13]. How do we know that this or indeed any classificatory scheme in mathematics tracks the actual contours of the subject? For example, theories on the structure side of Shelah’s Main Gap Theorem admit dimension-like geometric invariants. Do these theories track our geometric or spatial intuition more closely than theories on the non-structure side, if they can be said to track these intuitions at all?

How does [Bal18] weigh in on the question whether the Main Gap Theorem is a foundational theorem in the sense that Hilbert imagined, demarcating the tractable from the intractable in mathematics? Or are the theories on the non-structure side tractable from some other point of view?

References [Bal18] John T. Baldwin. Model Theory and the Philosophy of Mathematical Practice: Formalization without Foundationalism. Cambridge University Press, 2018.

14:30
Andrew Arana (Université Paris 1 Panthéon-Sorbonne, France)
Symposium on John Baldwin’s Model Theory and the Philosophy of Mathematical Practice

ABSTRACT. I will be commenting on Baldwin's book from my point of view as a philosopher of mathematics. I will concentrate on the question of how model theory sheds light on what geometry is. While geometry was once identified with the study of visualizable figures, since the nineteenth century it has been generalized to a study of space and its properties, where these properties are not necessarily visualizable. One thinks for example of complex projective space, where both points at infinity and imaginary points seem to elude visual representation. Model theorists, starting from the brilliant insights of Zilber, have suggested that logical features of our study of space may be connected to---indeed, may account for---what geometry is and has become. Our logical aim of studying uncountably categorical theories seems to lead inexorably to classically geometrical structures. Can one then say that, at its heart, geometry as we study it is constrained by the logics we prefer? If so, then one here finds a kind of neo-Kantianism, where one replaces forms of intuition by logical forms. Baldwin's book frames this logical work in a fruitful way, and in my discussion I will make these tantalizing themes more explicit.

14:00-15:00 Session 30B: B1/B5 SYMP Science as a profession and vocation. On STS's interdisciplinary crossroads 3 (WEBPROVOC-3)
Chair:
Ilya Kasavin (RAS Institute of Philosophy, Moscow, Russia)
Location: Room 401
14:00
Elena Chebotareva (Saint Petersburg State University, Russia)
An Engineer: Bridging the gap between mechanisms and values

ABSTRACT. The emerging discourse in the philosophy of engineering within the modern field of the philosophy of science and technology is obviously focused on the task of defining the concept of the engineer. The author argues that the philosophy of engineering, unlike the philosophy of technology, pays much attention to the engineering community, which declares and documents its goals and values. In this context, M. Weber's understanding of the term Beruf in his "Wissenschaft als Beruf" contributes to the concept of the modern engineer. On the one hand, the engineer possesses specialized skills; on the other hand, these skills have a broad ethical dimension, since they are associated with the values of freedom and responsibility and with the relationship between the state and the individual. (For example, the American philosopher of engineering M. Davis, answering the question of who engineers are, notes that the working functions they perform cannot help with the answer. First, to equate designing, building, or the like with engineering makes distinguishing engineers from other technologists impossible. Second, to equate engineering with designing, building, or the like gives a misleading picture of what engineers in fact do.) Different national cultures use their own methods to work with this concept of the engineer; one of the most convincing, in my opinion, is the American one, in which the philosophy of engineering is based on the study of engineering associations and their statements and codes. Among such approaches, I would like to note E. Layton's famous work "The Revolt of the Engineers" (1971). M. Davis, whom I mentioned above, argues that the concept of an engineer cannot be understood through the functions that engineers routinely perform; the philosophical sense is important, so he, for example, seeks it in the comparative structure of engineering education, focusing on the role of humanitarian subjects there. Davis is also engaged in philosophical interpretations of engineering codes, asking which tasks are of higher priority for engineers (borrowing as well from the approaches of the philosophy of law). In my report, I use the approaches of M. Davis to analyze the declarations of the Association for Engineering Education of Russia (AEER) (founded in 1992), which deals with the accreditation procedure for educational programs for future engineers. The emphasis on analyzing the structure of engineering education seems more relevant than work with engineering codes, due to the historical specifics of Russia. The obtained results can be compared with those presented by Davis, which helps to answer the key questions of the philosophy of engineering: a) what is the most adequate research methodology for the problem of the engineer concept, and b) does this "concept" have a national cultural "face", despite the globalization of scientific and technological progress?

Layton E. The Revolt of the Engineers: Social Responsibility and the American Engineering Profession. The Johns Hopkins University Press (February 1, 1986)

Davis M. (2009) Distinguishing Architects from Engineers: A Pilot Study in Differences Between Engineers and Other Technologists. In: Poel I., Goldberg D. (eds) Philosophy and Engineering. Philosophy of Engineering and Technology, vol 2. Springer, Dordrecht

Davis M. Three myths about codes of engineering ethics // IEEE Technology and Society Magazine. Volume 20, Issue 3, Fall 2001.

14:30
Tatiana Sokolova (RAS Institute of Philosophy, Russia)
What’s in a Name? To the History of a ‘scientist’

ABSTRACT. In his famous lecture "Science as a Vocation" (Weber, 1989), given in 1919, Max Weber attempted to capture such an elusive and changing entity as the very nature of the scientific profession. According to Weber, the ambivalent position of the scientist stems from an ambivalent feature of science itself. On the one hand, a scientist has to have a very special state of mind to proceed with scientific activity. On the other hand, science has a specific academic structure, which, in its turn, depends on its links to political and economic institutions. By comparing German and American academic careers, Weber demonstrates the failures and benefits of both academic foundations for scientists and scholars. However, although this analysis became one of the most fruitful tools for explicating the Academy in its links to (a) the special pursuit of the scientist, (b) the struggle for academic achievements, (c) the perpetual progress of academic work, and (d) the role of politics, it was not the first case of a continuous debate on the nature of the scientist and their role in society. In this talk, I propose to dig deeper into the debates on the definitions of a scientist in the period when science found itself in the process of becoming a complex institutional structure, i.e. the XIXth century. In this talk, I would like to focus on three cases from the debates on (1) the social and political role of a scientist, and (2) the definition of a scientist within this role.

Case 1. The Demand for Governmental Paternalism: Charles Babbage and David Brewster's criticism of the conditions of scientific development in England (Brewster 1830). Case 2. The Controversy over a Name: in 1834 William Whewell coined the term 'scientist' with the purpose of creating a 'general term by which these gentlemen could describe themselves with reference to their pursuits' (Whewell 1834, 61) and 'a name to describe a cultivator of science in general' (Whewell 1840, cxiii). Case 3. The Nurture of a Scientist: Alphonse de Candolle's analysis of the European Academies of Sciences, in which he defined scientists as 'those who have a noble desire to make a discovery or to publish something new, [and] must concentrate their efforts on only one scientific discipline, and sometimes even on only one section of this discipline' (de Candolle 1873, 74).

These debates on the role and pursuit of a scientist led to the XXth-century debates on the phenomenon of public intellectuals and to more recent debates on the demands society places on the scientific community, as well as on the requirements for scientists to fulfill these demands. By reconstructing the formation of the contemporary image of a scientist via the XIXth century, I will follow the path of the creation of the contemporary 'scientist'.

Bibliography

1. Brewster D. (1830) Review on: Reflexions on the Decline of Science in England, and on some of its Causes. By Charles Babbage. London, 1830 // Quarterly Review. 1830. Vol. XLIII. № LXXXVI. Pp. 304-342.
2. de Candolle A. (1873) Histoire des sciences et des savants depuis deux siècles. Genève, Bale, Lyon: H. Georg.
3. Weber M. (1989) Science as Vocation, in Lassman P., Velody I., Martins H. (eds.) "Max Weber's 'Science as a Vocation'", London: Unwin Hyman. Pp. 3-31.
4. Whewell W. (1834) Review on The Connexion of the Physical Sciences by Ms. Somerville, Quarterly Review, Vol. LI. Pp. 54-66.
5. Whewell W. (1840) The Philosophy of the Inductive Sciences. London: J. & J.J. Deighton. Vol. I.

14:00-15:00 Session 30C: C1 SYMP Formalism, formalization, intuition and understanding in mathematics: From informal practice to formal systems and back again 3 (FFIUM-3)
Chair:
Gerhard Heinzmann (Université de Lorraine, France)
Location: Room 112+113
14:00
Silvia De Toffoli (Princeton University, United States)
The Epistemic Basing Relation in Mathematics

ABSTRACT. In my talk, I will discuss different ways in which mathematicians base a mathematical belief on a proof and highlight some conditions on the basing relation in mathematics. The (proper) basing relation is a relation between a reason---in this case a proof---and a doxastically justified belief. I will argue that in mathematics, if a subject bases a belief on a proof, then she recognizes the proof as a good reason for that belief. Ceasing to recognize the argument as a proof (that is, as a sound argument), she would often be disposed to weaken her confidence in the belief or even to abandon it. Moreover, the basing relation for theorems in mathematical practice (as opposed to other domains) is put in place by a conscious rational activity: grasping how a proof supports a claim. This constraint will lead me to explore, in the case of mathematics, Leite's (2004) general proposal of how justification is tied to the practice of justifying.

As has been pointed out (see for example Azzouni (2013)), there are different ways of grasping how a proof supports its conclusion and therefore the basing relation can assume different forms. It is possible to identify at least two broad types of grasping, leading to different types of basing: 1) a local, step-by-step grasping and 2) a holistic grasping. These are not mutually exclusive, and often basing in practice will be a combination of the two. In some cases, the strength of informal proofs lies in providing us with a holistic grasping, whereas formal proofs often underwrite the possibility of checking the validity of all the inferential steps, since these are generally decomposed into basic steps and thus allow us to gain a local grasp of how the conclusion is supported. At one end of the spectrum, there is a formal derivation of a complex result in an interpreted formal system: we can grasp how the derivation supports its conclusion, but at the same time fail to be aware of the over-all structure of the argument. At the other end, we gain a holistic grasp of how an informal proof supports its conclusion just by appreciating its large-scale structure, without going into all the details, which we accept on testimony or authority.

We often grasp how a proof supports its conclusion through a perceptible instantiation of one of its presentations. I will argue that in certain cases a proof presentation (or even a proof) facilitates a type of grasping, while making the other type of grasping difficult. For example, a purely formal proof will tend to be presented in ways which support only the local grasping. This in turn implies that basing the belief in the theorem on the proof involves such grasping.

BIBLIOGRAPHY

• Jody Azzouni (2013), “The Relationship of Derivations in Artificial Languages to Ordinary Rigorous Mathematical Proof,” Philosophia Mathematica (III) 21, 247–254.

• Adam Leite (2004), “On Justifying and Being Justified,” Philosophical Issues, 14, Epistemology, 219-253.

14:30
Benedict Eastaugh (Munich Center for Mathematical Philosophy, LMU Munich, Germany)
Marianna Antonutti Marfori (Munich Center for Mathematical Philosophy, LMU Munich, Germany)
Epistemic aspects of reverse mathematics

ABSTRACT. Reverse mathematics is a research programme in mathematical logic that determines the axioms that are necessary, as opposed to merely sufficient, to prove a given mathematical theorem. It does so by formalising the theorem in question in the language of second order arithmetic, and then proving first that the theorem follows from the axioms of a particular subsystem of second order arithmetic, and then “reversing” the implication by proving that the theorem implies the axioms of the subsystem (over a weak base theory corresponding to computable mathematics). The standard view of reverse mathematics holds that a “reversal" from a theorem to an axiom system shows that the set existence principle formalised by that axiom is necessary to prove the theorem in question. Most of the hundreds of ordinary mathematical theorems studied to date have been found to be equivalent to just five main systems, known as the “Big Five”. The five systems have all been linked to philosophically-motivated foundational programmes such as finitism and predicativism.
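For orientation, the five systems referred to above, in increasing strength, together with a classical example of a reversal (standard facts of the programme, cited only as background to this talk):

\[
\mathsf{RCA}_0 \subsetneq \mathsf{WKL}_0 \subsetneq \mathsf{ACA}_0 \subsetneq \mathsf{ATR}_0 \subsetneq \Pi^1_1\text{-}\mathsf{CA}_0,
\qquad
\mathsf{RCA}_0 \vdash \mathsf{ACA}_0 \leftrightarrow \text{Bolzano-Weierstrass theorem},
\]

so the statement that every bounded sequence of reals has a convergent subsequence requires exactly arithmetical comprehension over the computable base theory.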

This view of reverse mathematics leads to an understanding of the import of reverse mathematical results in ontological terms, showing what mathematical ontology (in terms of definable sets of natural numbers) must be accepted by anyone who accepts the truth of a particular mathematical theorem, or who wants to recover that theorem in their foundational framework. This is most easily seen in connection with “definabilist” foundations such as predicativism, which hold that only those sets of natural numbers exist which are definable “from below”, without reference to the totality of sets of which they are a member.

In this talk, we will argue that this view neglects important epistemic aspects of reverse mathematics. In particular, we will argue for two theses. First, the connections between subsystems of second order arithmetic and foundational programmes are also, if not primarily, motivated by epistemic concerns regarding what sets can be shown to exist by methods of proof that are sanctioned by a certain foundational approach. In this context, reverse mathematics can help to determine whether commitment to a certain foundation warrants the acceptance of a given mathematical result. Second, reversals to specific subsystems of second order arithmetic track the conditions under which theorems involving existential quantification can be proved only in a “merely existential” way, or can provide criteria of identity for the object whose existence is being proved in the theorem in question. These epistemic aspects of reverse mathematics are closely related to computability-theoretic and model-theoretic properties of the Big Five systems.

We will conclude by arguing that in virtue of these features, the reverse mathematical study of ordinary mathematical theorems can advance our understanding of mathematical practice by making explicit patterns of reasoning that play key roles in proofs (compactness style arguments, reliance on transfinite induction, etc.) and highlighting the different virtues embodied by different proofs of the same result (such as epistemic economy, the ability to provide identity criteria for the object proved to exist, etc.).

14:00-15:00 Session 30D: C4 SYMP Epistemic and ethical innovations in biomedical sciences 1 (EAIBS-1)

Organizer: David Casacuberta

About 90% of the biomedical data accessible to researchers was created in the last two years. This certainly implies complex technical problems on how to store, analyze and distribute data, but it also brings relevant epistemological issues. In this symposium we will present some of such problems and discuss how epistemic innovation is key in order to tackle such issues.


Databases involved in biomedical research are so huge that they raise relevant questions about how the scientific method is applied, such as what counts as evidence for a hypothesis when data cannot be directly apprehended by humans, how to distinguish correlation from causation, or in which cases the provider of a database can be considered co-author of a research paper. To analyze such issues, current characterizations of hypothesis formation, causal links, or authorship do not suffice, and we need some innovation in the methodological and epistemic fields in order to revise these and other relevant concepts.


At the same time, because a considerable amount of such biomedical data is linked to individual people, and because some knowledge from the biomedical sciences can be used to predict and transform human behavior, there are ethical questions that are difficult to solve as they imply new challenges. Some of them lie in the field of awareness, so that patients and citizens understand these new ethical problems that did not arise before the development of big data; others relate to the way in which scientists can and cannot store, analyze and distribute information; and still others relate to the limits on which technologies are ethically safe and which bring an erosion of basic human rights.
During the symposium we will present a coherent understanding of what epistemic innovation is and some of the logical tools necessary for its development, and then we will discuss several cases of how epistemic innovation applies to different aspects of the biomedical sciences, also commenting on its relevance when tackling ethical problems that arise in contemporary biomedical sciences.

Chair:
David Casacuberta (Universitat Autonoma de Barcelona, Spain)
Location: Room 346
14:00
Alger Sans (Universitat Autònoma de Barcelona, Spain)
The Incompleteness of Explanatory Models of Abduction in Diagnosis: The Case of Mental Disorders

ABSTRACT. Abduction is known as a 'procedure in which something that lacks classical explanatory epistemic virtue can be accepted because it has virtue of another kind' (Gabbay and Woods 2005, Magnani 2017). In classical explanations this lack implies that the specific explanatory model of abduction should be considered a special case of abduction. That is because being an explanation implies something more than being an abduction: a conclusion, in the sense that the burden of proof falls on the abductive product. To have a conclusive form means that the explanation and the theory that needed it are already attuned, and, of course, this eliminates the possibility of accepting something because it has virtue of another kind. It is interesting to note that this causal transformation is the cause of the confusion between the explanatory model of abduction and inference to the best available explanation, also known as IB(A)E. On the other hand, the difference between them lies in the role of the conclusion. This last point is important because the special case of explanatory abduction is also suitable for conceptualizing medical diagnosis, while IB(A)E is not. The reason is that medical diagnosis is only possible if the relation with the medical knowledge of the doctor is tentative, that is, only if there is the lack that abduction implies. In other words, the causal form of abduction is substantially different from IB(A)E because a diagnosis needs a virtue of another kind in order to be accepted (Aristotle, Rh, I, 1355b10-22). However, the other face of this situation is that the specific and causal form of explanatory abduction is only useful in specific kinds of medical diagnosis: in the cases where it is possible to draw a causal course of facts, as in Neurology (Rodríguez, Aliseda and Arauz 2008). I want to use this medical area as an example because in it one can see one mechanism for making diagnoses about brain problems. I want to contrast it with another medical area which also studies the brain, but from a different point of view: Psychiatry. When trying to explain psychiatric diagnosis through classical explanatory abduction, it is possible to see that there is something wrong. On the one hand, generalization from enumeration is more difficult than in other kinds of medical areas and, on the other hand, here it is more visible that a difference between simple diagnosis and diagnosis plus prescription is needed in the characterization of abduction. The reason is that abduction is one form of human reasoning, and if there is an area in which diagnosis does not have causal dependency, then it is possible that the classical explanatory model of abduction (a) is a more specific kind of diagnosis (some part of general abduction) or (b) that diagnosis needs something more for its good conceptualization. I will try to defend (b) through an analysis of the EC-Model of abduction, in which I defend the necessity of involving moral values.

14:30
Angel Puyol (Universitat Autònoma de Barcelona, Spain)
Solidarity and regulatory frameworks in (medical) Big Data

ABSTRACT. The use we make of digital technologies produces a huge amount of data, known as big data, whose management is becoming increasingly difficult. One of the problems in this regard is the possibility of data controllers abusing their position of power and using the available information against the data subject. This abuse can have several faces. Prainsack (2015) identifies at least three types of abuse: hypercollection, harm, and humiliation. Hypercollection means that just because institutions can collect information about customers or citizens for purposes other than the ones for which it was collected in the first place, they do so. Harm occurs when the information obtained is used against the interests or rights of the data subject. This damage is accompanied by humiliating effects when it makes people partake in their own surveillance. In the face of this new reality, the question of how to govern data use has become more important than ever. The traditional way of governing the use of data is through data protection. Recently, the European Union has published a new General Data Protection Regulation (GDPR) that follows this approach. However, authors such as Prainsack and Buyx (2016) rightly point out that the strict regulation approach is insufficient for dealing with all abuses related to the use of big data. On the one hand, excessive control can curb the opportunities and benefits of digital technologies for users and society as a whole. And on the other, control and regulation may be insufficient for controlling all risks associated with the use of big data. In opposition to the strict regulation approach, Prainsack and Buyx propose a new one, based on solidarity. The solidarity approach entails the acceptance of the impossibility of eliminating the risks of modern data usage. The authors base their proposal on a solidarity formula whose objective is to compensate those affected by possible abuses: harm mitigation funds. Such funds would help to ensure that people who accept those risks and are harmed as a result have appropriate support. The paper does not question the adequacy of harm mitigation funds, but rather the conception of solidarity that Prainsack and Buyx choose to justify them. I argue that this conception of solidarity, based on psychology and moral sociology, has less normative force than the strict regulation approach, which is based on the defence of fundamental rights. If we want the policy of harm mitigation funds to have a normative force similar to that of the strict regulation approach, then we must choose a conception of solidarity based on respect for fundamental rights. In the paper, I first present the context in which it makes sense to oppose a solidarity-based perspective to a strict regulation one. Then I review what I believe are the weak points in Prainsack and Buyx's ideas regarding solidarity. And finally, I introduce an alternative conception of solidarity that normatively better justifies any public solidarity policy addressing the risks of big data, including harm mitigation funds.

14:00-15:00 Session 30E: IS A2 Wansing
Chair:
Yaroslav Shramko (Kryvyi Rih State Pedagogical University, Ukraine)
Location: Room Krejcar
14:00
Heinrich Wansing (Ruhr-Universität Bochum, Germany)
Sergey Drobyshevich (Sobolev Institute of Mathematics, Russia)
Proof systems for various FDE-based modal logics
PRESENTER: Heinrich Wansing

ABSTRACT. We present novel proof systems for various FDE-based modal logics. Among the systems considered are a number of Belnapian modal logics introduced in [2] and [3], as well as the modal logic KN4 with strong implication introduced in [1]. In particular, we provide a Hilbert-style axiom system for the logic BK$^{\Box-}$ and characterize the logic BK as an axiomatic extension of the system BK$^{FS}$. For KN4 we provide both an FDE-style axiom system and a decidable sequent calculus for which a contraction elimination and a cut elimination result are shown.
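As background, the four truth values of first-degree entailment and its matrices for negation and conjunction (the standard FDE semantics, not anything specific to the systems above; t and b are designated, and disjunction is dual to conjunction):

\[
\begin{array}{c|cccc}
\wedge & t & b & n & f\\ \hline
t & t & b & n & f\\
b & b & b & f & f\\
n & n & f & n & f\\
f & f & f & f & f
\end{array}
\qquad
\neg t = f, \quad \neg b = b, \quad \neg n = n, \quad \neg f = t.
\]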

[1] L. Goble, Paraconsistent modal logic, Logique et Analyse 49 (2006), 3-29. [2] S.P. Odintsov and H. Wansing, Modal logics with Belnapian truth values, Journal of Applied Non-Classical Logics 20 (2010), 279-301. [3] S.P. Odintsov and H. Wansing, Disentangling FDE-based paraconsistent modal logics, Studia Logica 105 (2017), 1221-1254.

14:00-15:00 Session 30F: C1 SYMP Toward the reconstruction of linkage between Bayesian philosophy and statistics 1 (TRLBPS-1)

Organizer: Masahiro Matsuo

In philosophy of science, Bayesianism has long been tied to the subjective interpretation of probability, or probability as a degree of belief. Although several attempts have been made to construct an objective kind of Bayesianism, most of the core issues and controversies concerning Bayesianism have been biased toward this subjectivity, particularly toward subjective priors. Along this line of argument, philosophers currently seem to assume implicitly that Bayesian statistics, which is becoming increasingly popular in many fields of science, can legitimately be treated as a branch of subjective Bayesianism.


Despite this comprehensive view, which could be partly traced back to the interpretation of Savage’s ‘Likelihood Principle’, how subjectivity is involved in Bayesian statistics is not so obvious. On the contrary, scientists who use Bayesian statistics are inclined to think of it rather as based on an objective methodology, or else merely as a mathematical technique, without even knowing much of the arguments of philosophical Bayesianism. This suggests that there is a considerable gap between the Bayesianism typically discussed in philosophy and the Bayesian statistical methods used in science. The problem is no longer simply the distinction between subjective and objective but, more importantly, the present situation in which this linkage is almost neglected by both philosophers and statisticians despite the common use of the term “Bayesian”. Bayesian philosophy without statistics and Bayesian statistics without philosophy are both epistemically unsound, and undoubtedly philosophers of science should take responsibility for the restoration of this linkage.


In this symposium, we present some perspectives which could presumably help this restoration. Although an approach trying to examine the history of Bayesianism minutely is certainly necessary for some part of the analysis needed to achieve this goal, there is a risk of losing our way if we focus too much attention on it, because this history, particularly that of the rise of Bayesian statistics, is tremendously complicated to unravel. In order to grasp appropriately the relation between current Bayesian philosophy and statistics, it seems more plausible to start from the current situation we are placed in and to investigate it from the multiple philosophical and statistical perspectives available, with some help from historical ones when needed. This is the basic strategy we adopt in this symposium. Accordingly, our focus is not just upon restoration, but rather on (in a positive sense) reconstruction of the linkage between the two Bayesian camps. The perspectives we present are: a parallelism found between Bayesianism and inductive logic; a complementary relation between Bayesian philosophy and statistics; a solution to the conflict between Bayesian philosophy and frequentism through Bayesian statistics; and a linkage between Bayesian philosophy and statistics through statistical theories based on both Bayesianism and frequentism. In the symposium, we will have time for discussion after each speaker’s presentation.

Chair:
Masahiro Matsuo (Hokkaido University, Japan)
Location: Room 250
14:00
Kazutaka Takahashi (Hokkaido University, Japan)
TRLBPS: Examination of the linkage between Bayesian philosophy and statistics from a logical point of view

ABSTRACT. Recent Bayesian statisticians emphasize that classifying prior specification as subjective or objective is meaningless (Gelman, A. and Hennig, C. 2017). According to their suggestion, the concepts of subjectivity and objectivity should be replaced with other concepts such as transparency, consensus, and so on. Their suggestion seems feasible, but there still remains a problem. What is the linkage between Bayesian philosophy and statistics? To answer this question, I think one of the promising ways is to focus on the logical relations between them. Although Bayesian philosophy and statistics are often associated with inductive inference, its meaning is in many cases not so clear. In order to reconstruct a good linkage between the two Bayesianisms, we need to think about on what logical basis, specifically on what inductive logic, Bayesianism can be seen as established. Historically, this kind of attempt was made for the first time by Carnap (though it might be traced further back to Laplace and his famous ‘rule of succession’). As is well known, Carnap’s system of inductive logic consists in the idea that deductive inference can be defined as a special case of inductive inference (Carnap, R. 1971, 1980). And, what is important, his argument not only indicates the logical or empirical assumptions which seem to be implicitly used in our inductive inferences, but also hints at a close relation which could be found between inductive logic and Bayesianism, through his confirmation measure c, or λ-continuum. Recently, following this line of argument, Festa explicated the logical and empirical conditions necessary for choosing a prior distribution in Bayesian statistics, and tried to show a parallel relation (or rather, a reductive relation) between Bayesian statistics and Carnap’s system of inductive logic, which holds in the case of the multinomial distribution (Festa, R. 1993). In this talk, I examine how this attempt can be extended further. There are two stages of the extension, if it is possible. First, we can try to extend Festa’s argument to other cases of likelihood than the multinomial distribution (in this latter case, the prior distribution is fixed to the Dirichlet distribution). And second, which would be more difficult, we can try to extend this to the more general Bayesian philosophy, in which priors are our degrees of belief. A complete reduction of Bayesianism to a system of inductive logic is very hard to achieve, but if we can show some parallelism which holds between Bayesianism as a whole and Carnapian logic, then that would be a great help to the reconstruction of the linkage between the two Bayesianisms.
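For concreteness, the standard form of the λ-continuum mentioned above (a textbook formulation, not a claim of the talk): with k possible categories and n observations of which n_i fall in category i, the degree of confirmation that the next case falls in category i is

\[
c \;=\; \frac{n_i + \lambda/k}{n + \lambda},
\]

so λ = k gives a Laplace-style rule of succession (n_i + 1)/(n + k), λ → ∞ the purely logical value 1/k, and λ → 0 the straight rule n_i/n. The same expression is the predictive probability of a multinomial model with a symmetric Dirichlet(λ/k, ..., λ/k) prior, which is the Bayesian-statistical side of the parallelism Festa exploits.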

References Carnap, R. 1971. A Basic System of Inductive Logic, Part I. In Carnap, R. and Jeffrey, R.C. (eds.) Studies in Inductive Logic and Probability, Vol I. University of California Press. Carnap, R. 1980. A Basic System of Inductive Logic, Part II. In Jeffrey, R.C. (ed.) Studies in Inductive Logic and Probability, Vol II. University of California Press. Festa, R. 1993. Optimum Inductive Methods. Kluwer Academic Publishers. Gelman, A. Hennig, C. 2017. ‘Beyond subjective and objective in statistics’, Statistics in Society Series A, pp. 1-67.

14:30
Masahiro Matsuo (Hokkaido University, Japan)
TRLBPS: Constructing a complementary relation between Bayesian philosophy and statistics

ABSTRACT. Historically speaking, despite their having ‘Bayes’ rule’ in common, Bayesian philosophy and statistics should probably be taken as having developed independently rather than as having diverged from a common ancestor. This is particularly conspicuous when we see how empirical Bayesian methods have flourished ever since the 1960s, after the Neo-Bayesian Revival was achieved (Fienberg). Although we might be able to trace back to other key persons involved in both the philosophical and statistical foundations of Bayesianism, there is no guarantee that we can find directly in their thought a truly meaningful linkage between them. It seems more promising to search for the linkage based on how they are actually used in, or could be applied to, science. Seeing Bayesianism in this way, we should recognize that Bayesian philosophy, concerning the updating of degrees of belief, is not confined to philosophical arguments or historical reviews of past theories, but is also concerned with actual inferences made by scientists, particularly when the degree of uncertainty matters. For example, in the guidance paper for the IPCC’s third report, they officially endorsed a subjective Bayesian method in the evaluation and prediction of global climate change. What is interesting about this example is that in it a distinction of analysis is made, depending on whether a probability distribution is available or not. This means that they distinguish the qualitatively summarized part of their analysis, which roughly corresponds to a philosophical version of Bayesianism, from the quantitative one, which is statistical. In their view, the relation between the two is neither independent nor inclusive, but rather complementary, in that each will be required as a reference point for the other. Such a practical attitude in science might well be refined philosophically, but this can be a good model for reconstructing the linkage from scratch. If we argue strictly along this line, some constraints need to be put on our traditional views of Bayesianism. Bayesian philosophy has long tried to establish a formal inductive inference in a complete manner based on subjective prior probabilities. But on the complementarity view, this inference should be given as a good reference for making a statistical model, not just for obtaining subjective posteriors. Then perhaps we need some constraints on the way we give priors. Likewise, Bayesian statistics should shift its goal to a broader one in which it contributes to the evaluation of higher levels of hypotheses. Stricter conditions should probably be put on determining or reexamining prior distributions, for one thing (such an attempt is partly being made by philosophical statisticians like Gelman). In this talk, I examine how this complementarity could be achieved, by focusing on the conditions newly required for priors and also by reexamining the ‘Likelihood Principle’ from perspectives other than those previously taken.

References Fienberg, S. E. (2006) ‘When Did Bayesian Inference Become “Bayesian”?’, Bayesian Analysis, 1(1), pp. 1-40. Gelman, A. and Hennig, C. (2017) ‘Beyond subjective and objective in statistics’, Statistics in Society Series A, pp. 1-67. Pachauri, R. and Tanaka, K. (eds.) (2000) ‘Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC’.

14:00-15:00 Session 30G: B3 SYMP Communication and exchanges among scientific cultures 3 (CESC-3). Practices of communication
Chair:
Peeter Müürsepp (Tallinn University of Technology, Estonia)
Location: Room 152+153
14:00
Galina Sorina (Lomonosov Moscow State University, Russia)
Irina Griftsova (Moscow Pedagogical State University, Russia)
The alienated/subjective character of scientific communication
PRESENTER: Galina Sorina

ABSTRACT. In this presentation, we will address some features of scientific communication which, on the one hand, we claim are, from a methodological viewpoint, universally valid and, on the other, characterise exchanges between scientific cultures. To do so, we will focus on Peter Galison’s [Galison 1999] notable concept of trading zones and aim at clarifying their definition. We will argue that these zones could be considered ‘alienated forms of scientific communication’ that depend on the characteristics of the actor’s actions. This definition can be understood in the context of Hegel’s concept of alienation [Hegel 2018]. For Hegel, the problem of alienation is that of the spirit that alienates the results of its actions. An analysis of [Hegel 2018] shows that such results may include those of the activities of self-consciousness that aim at obtaining new knowledge, in particular, in the course of research. We wish to stress that the history of scientific activity testifies to how such processes have continuously taken place. Our analysis accounts, for example, equally for Kuhn’s paradigm, where the results of knowledge are alienated from a concrete individual. Within the latter framework, scientific activity is represented as one in which a researcher has to solve only puzzles. Kuhn’s paradigm accounts for forms of stability in science. This stability is embodied in the concept of ‘normal science’. In accepting this paradigm, Kuhn suggests, the scientific community embraces the idea that the basic and fundamental problems have been solved. The results are alienated from concrete studies. Within a paradigm, the world of scientific and theoretical knowledge loses its agility and ability to develop. It freezes within the established terminology and the forms that were once alienated and thus can be transferred between cultures. As a result, the paradigm prevents the researcher from acting as an independent force that regulates and controls his or her investigations. Communication regarding the alienated results that are enshrined in scientific texts can be reduced to the researcher’s individual work with the text. Each individual asks the text her or his own questions. These questions are affected by the cultural tradition, the individual’s knowledge and competencies, and her or his attitudes and abilities. Thus, communication in science can be considered as a continuous process of interactions between alienation and subjectivity, which betrays the assumption of the unity of science. Hence, we define trading zones as alienated (and, therefore, intercultural) forms of communication. Acknowledgements The study is supported by the Russian Foundation for Basic Research (RFBR) within the ‘Informal Text Analytics: A Philosophical and Methodological Approach’ project (№ 17-03-00772). References Hegel G.W. The Phenomenology of Spirit. Cambridge University Press, 2018. Galison, P. 1999. Trading Zone: Coordinating Action and Belief. In The Science Studies Reader. Edited by M. Biagioli. London and New York: Routledge.

14:30
Polina Petrukhina (Lomonosov Moscow State University, Russia)
Vitaly Pronskikh (Fermi National Accelerator Laboratory, United States)
High-energy physics cultures during the Cold War: between exchanges and translations

ABSTRACT. We conduct a sociological and historical analysis of the collaboration between American and Soviet scientists during a chain of experiments in high energy physics that took place in the 1970s (Pronskikh 2016). We consider each of the laboratories of the 1970s as a culture in and of itself, and we investigate processes of exchange of material objects and translations of values and interests between different types of individual actors (scientists, politicians, managers) as well as between cultures (Soviet and US high-energy physics laboratories). Drawing upon a case study of a collaboration between the Joint Institute for Nuclear Research (JINR, Dubna) and Fermi National Accelerator Laboratory (Fermilab, USA) in 1970-1980 (the period of the Cold War often referred to as “détente”), we examine how the supersonic gas jet target manufactured at JINR and delivered to Fermilab for joint use influenced the scientific culture at Fermilab, contributed to the birth of long-standing traditions, and contributed to changing scientific policy toward an alleviation of the bi-laterality requirement in scientific exchanges between the Eastern and the Western blocs. We also focus on how processes in scientific cultures that at the time arose from their interactions influenced some of the epistemic goals of the endeavor.

We examine the experiment, which is premised on an international collaboration, within three frameworks: as a trading zone (Galison 1997), where various types of objects and values (not only epistemological ones) are exchanged through the formation of intermediate languages; as a network composed of heterogeneous actors and their interactions, accompanied by translations of interests (Latour 1987); and as a locus of exchanges, sharing, and circulations which make cultures temporary formations (Chemla 2017). The developments that took place within the experiment can be described as the translation of different types of interests, including political, business, private and public aims. Moreover, the same actors could pursue different types of interests at different times. This prevents us from drawing rigid boundaries between “content” and “context” in science. As a consequence, one more problem arises: if one does not acknowledge such a distinction a priori, how can one eventually identify the scientific cultures that act in the course of these translations and distinguish them from other cultures? One possible answer lies in the investigation of the values that circulate in a particular culture or between two or more interacting cultures and shape them, which reveals their historical variability.

The reported study was funded by RFBR, research project № 18-011-00046.

References

Chemla, K. and Fox Keller, E. (eds.) (2017). Cultures without Culturalism: The Making of Scientific Knowledge. Durham: Duke University Press. Galison, P. (1997). Image and Logic: A Material Culture of Microphysics. Chicago: The University of Chicago Press. Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press. Pronskikh, V. S. (2016). ‘E-36: The First Proto-Megascience Experiment at NAL’, Physics in Perspective, 18(4), pp. 357-378.

14:00-14:30 Session 30H: B4 Metaphysical issues in the philosophy of science
Chair:
Pablo Vera Vega (University of La Laguna, Spain)
Location: Room 302
14:00
Nikolay Milkov (University of Paderborn, Germany)
Towards an Analytic Scientific Metaphysics

ABSTRACT. In the last decades, philosophy of science has often concentrated its attention on abortive epistemological problems: scientific explanations as different from scientific descriptions, scientific truth, the relation between phenomena and the data of science, the relation between scientific theories and scientific models, etc. As a result, interest in philosophy of science waned. (Don Howard) Analytic metaphysics, which follows a priori methods, was advanced as an alternative to this development. Unfortunately, it fails to keep in touch with the latest achievements of science. As a result, it does not really contribute to human knowledge. (Ladyman & Ross) The present study tentatively tracks down an alternative direction for this sub-discipline. Against these developments, we suggest a program for scientific metaphysics that closely follows the new results of science but at the same time also uses the method of conceptual analysis. To be more exact, we see its objective as advancing new ontologies of the phenomena investigated by science that are elaborated with logical means. In support of what we mean by analytic scientific metaphysics we shall draw on the idea of “connective” conceptual analysis. Modified this way, scientific metaphysics holds, among other things, that every new significant discovery in science can be conceptually connected with discoveries of other sciences. There are clear historical precedents of this approach. An example: inspired by Ernst Cassirer, in the 1920s Kurt Lewin developed the concept of “genidentity”, adopted by both Carnap and Reichenbach, this “evangelist of science” (van Fraassen). It was applied across disciplines, both in physics and in biology. Further historical remarks are in order at this point. After Hegel, philosophers were, correctly, shy about intervening in scientific theories. In fact, they effectively stopped exploring scientific problems head-on. (Hempel 1966) Unfortunately, the result is that nowadays “philosophy is dead”. (S. Hawking) Our project zeroes in on exactly this problem, without, however, trying to develop an autonomous “philosophical” science. In support of our project we refer to the pioneer of the “scientific method in philosophy”, Bertrand Russell. Among other things, he maintained that “philosophy consists of speculations about matters where exact knowledge is not yet possible” (Bertrand Russell Speaks His Mind, p. 11); and that philosophy “should be bold in suggesting hypotheses as to the universe which science is not yet in a position to confirm or confute” (“Logical Atomism”, p. 341). Scientists develop some of these speculations and hypotheses further, examining their application to reality.

14:00-14:30 Session 30I: C3 Philosophy of the life sciences
Chair:
Grzegorz Trela (Jordan University College, Tanzania)
Location: Room 201
14:00
Michal Hladky (Université de Genève, Switzerland)
Mapping vs. representational accounts of models and simulations

ABSTRACT. Philosophers of science often analyse scientific models and simulations as representational tools (Frigg and Nguyen 2017, 2016; Weisberg 2013; Giere 2010; Suárez 2010). These accounts clarify how scientists use and think about models. This, however, does not mean that scientific models have to be defined in terms of representation. In this paper, I present arguments against representational accounts of models and illustrate them with case studies from the Blue Brain Project (BBP) (Markram et al. 2015).

1. Non-represented entities The BBP simulations aimed at reconstructing a small portion of the connectome in the neocortical tissue of a rat and its dynamical properties. It was observed that the general behaviour of the biological circuit was dependent on the concentrations of extracellular calcium. The justification of the results of this project was based on the assumption that the tissues were simulated correctly. However, the representation of calcium concentrations was not necessary in order to conduct these simulations.

2. Directionality Even if one accepts Suárez's (2010) directionality argument against isomorphisms in the analysis of scientific representation, it is not clear whether it should be applied to models. Subsequently, it is not certain that models should be understood as representational entities. There are two different reasons, based on scientific practice, to reject the directionality argument. HBP has several sub-projects in which brains are either modelled and simulated or serve as models for developing alternative computational methods and architectures. If these techniques are to succeed, the model relation should support the switch in directionality. The second observation comes from the building of computer systems that are intended to perform simulations of brains. In the construction phase, biological tissues serve as models for the construction of these systems. In the exploration phase, the relation is inverted.

3. Epistemic justification There is a strong contrast between the levels of justification provided by the unrestricted notion of representation and the notion of isomorphism. Without qualification, anything can represent anything else. If model and simulation relations are analysed in terms of representation, the level of epistemic justification they provide is very low.

Bibliography Frigg, Roman, and James Nguyen. 2016. “Scientific Representation.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2016. Metaphysics Research Lab, Stanford University. ———. 2017. “Models and Representation.” In Springer Handbook of Model-Based Science, edited by Lorenzo Magnani and Tommaso Bertolotti. Dordrecht Heidelberg London New York: Springer. Giere, Ronald. 2010. “An Agent-Based Conception of Models and Scientific Representation.” Synthese 172 (2): 269–281. Markram, Henry, Eilif Muller, Srikanth Ramaswamy, Michael W. Reimann, Marwan Abdellah, Carlos Aguado Sanchez, Anastasia Ailamaki, et al. 2015. “Reconstruction and Simulation of Neocortical Microcircuitry.” Cell 163 (2): 456–92. Suárez, Mauricio. 2010. “Scientific Representation.” Philosophy Compass 5 (1): 91–101. Weisberg, Michael. 2013. Simulation and Similarity: Using Models to Understand the World (Oxford Studies in Philosophy of Science). Oxford University Press.

14:00-15:00 Session 30J: A3 Epistemic and philosophical logic
Chair:
Nadiia Kozachenko (Kryvyi Rih State Pedagogical University, Ukraine)
Location: Room 301
14:00
Oliver Kutz (KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano, Italy)
Nicolas Troquard (Free University of Bozen, Italy)
A logic for an agentive naïve proto-physics
PRESENTER: Nicolas Troquard

ABSTRACT. We discuss steps towards a formalisation of the principles of an agentive naïve proto-physics, designed to match a level of abstraction that reflects the pre-linguistic conceptualisations and elementary notions of agency, as they develop during early human cognitive development. To this end, we present an agentive extension of the multi-dimensional image schema logic ISL based on variants of STIT theory, which is defined over the combined languages of the Region Connection Calculus (RCC8), the Qualitative Trajectory Calculus (QTC), Ligozat's cardinal directions (CD), and linear temporal logic over the reals (RTL), with 3D Euclidean space assumed for the spatial domain. To begin to formally capture the notion of ‘animate agent’, we apply the newly defined logic to model the image-schematic notion of ‘self movement’ as a means to distinguish the agentive capabilities of moving objects, e.g. studying how to identify the agentive differences between a mouse (an animate animal) and a ball (an inanimate yet causally relevant object). Finally, we outline the prospects for employing the theory in cognitive robotics and, more generally, in cognitive artificial intelligence and questions related to explainable AI.

14:30
Daniel Álvarez Domínguez (Universidad, Spain)
Splicing Logics: How to Combine Hybrid and Epistemic Logic to Formalize Human Reasoning

ABSTRACT. We advocate in this paper for splicing Hybrid and Epistemic Logic to properly model human reasoning.

Suppose Wittgenstein wants to meet Russell to discuss some (possibly weird) philosophical issues at 2 a.m. on April 26th. Wittgenstein knows p (“we meet to discuss at 2 a.m. on April 26th”) (symbolically: Kwp, where K is the knowledge operator) on the morning of April 25th, whereas Russell does not: ¬Krp. And at the same time Wittgenstein knows that Russell does not know it: Kw¬Krp.

On the afternoon of April 25th Wittgenstein texts Russell to communicate p to him. But Russell does not reply to his message. Thus, at that time we have Kwp ∧ ¬KwKrp ∧ ¬Kw¬Krp ∧ KwKr¬KwKrp, for Wittgenstein does not know whether Russell has read his message, but knows that, if he has indeed read it, Russell knows that Wittgenstein does not know it.

Being considerate, Russell would resolve this situation by answering Wittgenstein's message. Let us suppose that he does so at night. Then at that moment we have Krp, but it cannot be assumed that KrKwKrp, for Wittgenstein may not have read the reply; so ¬KrKwKrp ∧ ¬KrKwKrKwp. Tired of so much ambiguity, Russell phones Wittgenstein to set up the meeting, and then Kwp ∧ Krp ∧ KwKrp ∧ KrKwp ∧…
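
The semantics assumed throughout is the standard Kripke semantics for the knowledge operator. As a minimal illustrative sketch (not part of the system proposed in this paper; the worlds, accessibility relations and valuation below are purely hypothetical), formulae such as those above can be evaluated as follows:

    from itertools import product

    worlds = {"u1", "u2"}                   # u1: p true; u2: p false (hypothetical)
    val = {"p": {"u1"}}                     # valuation: worlds at which p holds
    R = {                                   # epistemic accessibility per agent
        "w": {("u1", "u1"), ("u2", "u2")},  # Wittgenstein can tell u1 from u2
        "r": set(product(worlds, worlds)),  # Russell cannot
    }

    def holds(world, formula):
        # formulae as nested tuples: ("p",), ("not", f), ("K", agent, f)
        op = formula[0]
        if op == "not":
            return not holds(world, formula[1])
        if op == "K":                       # K_a f: f holds at every a-accessible world
            agent, sub = formula[1], formula[2]
            return all(holds(v, sub) for (u, v) in R[agent] if u == world)
        return world in val[op]             # atomic proposition

    p = ("p",)
    print(holds("u1", ("K", "w", p)))                       # Kwp    -> True
    print(holds("u1", ("K", "r", p)))                       # Krp    -> False
    print(holds("u1", ("K", "w", ("not", ("K", "r", p)))))  # Kw¬Krp -> True

The sketch has no way to name the evaluation point itself, which is exactly the expressive gap that the nominals of Hybrid Logic are meant to fill.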

By means of Epistemic Logic we have been able to formalize both Wittgenstein's and Russell's knowledge and their communication. And if we spliced it with Temporal Logic (see [2] and [4]), we could reflect how their knowledge changes over time.

Nevertheless, even splicing those two systems, it is not possible to properly model the whole situation. Neither of them is able to name points (inside a model). They cannot formalize and evaluate, for instance, the claim that a formula such as Kwp is true exactly on the morning of April 25th, i.e., at the point which stands for that moment. So actually, just by means of Temporal-Epistemic Logic, none of the formulae which come into play in the Wittgenstein-Russell communication can be interpreted. To accomplish this, it is necessary to turn to Hybrid Logic.

This paper aims at introducing a Hybrid-Epistemic Logic (resulting from splicing both logical systems) in order to improve the expressivity of Temporal-Epistemic Logic. A proper semantics will be provided, and it will be shown that the only way of accurately modeling how we talk about our knowledge and beliefs, and how they change over time, is via Hybrid-Epistemic Logic.

References [1] Blackburn, Patrick (2006), “Arthur Prior and Hybrid Logic”, Synthese, 150(3), pp. 329-372. [2] Engelfriet, Joeri (1996), “Minimal Temporal Epistemic Logic”, Notre Dame Journal of Formal Logic, 37(2), pp. 233-259. [3] Gabbay, Dov; Kurucz, Agi; Wolter, Frank and Zakharyaschev, Michael (eds.) (2003), Many-Dimensional Modal Logics: Theory and Applications, Amsterdam, Elsevier. [4] Surowik, Dariusz (2010), “Temporal-Epistemic Logic”, Studies in Logic, Grammar and Rhetoric, 22(35), pp. 23-28. [5] van Ditmarsch, Hans; Halpern, Joseph; van der Hoek, Wiebe and Kooi, Barteld (eds.) (2015), Handbook of Epistemic Logic, Milton Keynes, College Publications.

14:00-15:00 Session 30K: C2 Interpretation of quantum physics 1
Chair:
Giulia Battilotti (Department of Mathematics University of Padua, Italy)
Location: Room 402
14:00
Sebastian Fortin (CONICET-UBA, Argentina)
Jesús A. Jaimes Arriaga (CONICET-UBA, Argentina)
Hernán Accorinti (Universidad, Argentina)
About the world described by Quantum Chemistry
PRESENTER: Sebastian Fortin

ABSTRACT. One of the oldest problems associated with the interpretation of Quantum Mechanics is the role of the wave function in the ontology of the theory. Although Schrödinger himself posed the problem at the very beginning of the theory, even today the meaning of the wave function remains the subject of debate. In this context, the problem of the 3N dimensions of the wave function is of particular interest in the philosophy of physics. Since the wave function associated with a system of N particles is written in a space of 3N dimensions, it is necessary to ask about the meaning of this space. The debates around the issue have an important impact on the way in which we conceive the world around us. This is clearly manifested in the intense discussions that have taken place in recent years (Albert 2013, Allori 2013, Monton 2006). In this work we will introduce a new perspective, coming from the way in which the wave function is used in scientific practice. Our objective is to investigate the ontology of quantum chemistry emerging from the analysis of the use of the wave function when quantum mechanics is applied to specific cases in the chemical domain. In the field of quantum chemistry there has not been much discussion about the meaning of the wave function, and for this reason we consider that it can offer a fruitful context for a philosophical analysis. The typical many-body system in this context is a molecule, and the typical problem is to find the energy spectrum of an electron that is in interaction with many other particles. To find such a spectrum, the so-called orbital approximation is usually appealed to, which makes it possible to write the total wave function of the system as a product of mono-electron wave functions (Atkins & de Paula 2006). Under this approximation, the wave function of a given electron depends only on the variables of this electron; therefore, it evolves in the space of three dimensions (Lowe & Peterson 2006). In this presentation we will show that the procedure performed by chemists when they use the orbital approximation can be formalized as the result of the application of two mathematical operations: first, a projection onto a subspace of the Hilbert space, and second, a change of variables. With the help of this formalization, we will go beyond the approximation itself and propose an ontology specific to quantum chemistry.
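
In standard notation, the orbital approximation referred to here writes the total wave function as a Hartree-style product of mono-electron wave functions:

\[
\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N) \;\approx\; \psi_1(\mathbf{r}_1)\,\psi_2(\mathbf{r}_2)\cdots\psi_N(\mathbf{r}_N),
\]

where each factor \psi_i depends only on the three coordinates of a single electron and therefore lives in ordinary three-dimensional space.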

Albert, D. (2013). “Wave Function Realism”, in Ney, A. & Albert, D. Z. (eds.), The Wave Function. New York: Oxford University Press. Allori, V. (2013). “Primitive Ontology and the Structure of Fundamental Physical Theories”, in Ney, A. & Albert, D. Z. (eds.), The Wave Function. New York: Oxford University Press. Atkins, P. & de Paula, J. (2006). Physical Chemistry. New York: Oxford University Press. Lowe, J. P. & Peterson, K. A. (2006). Quantum Chemistry. Burlington, San Diego and London: Elsevier Academic Press. Monton, B. (2006). “Quantum Mechanics and 3N-Dimensional Space”, Philosophy of Science, 73: 778-789.

14:30
Vladislav Terekhovich (Institute of Philosophy, Saint Petersburg University, Saint Petersburg, Russia)
Does the reality of the wave function follow from the possibility of its manipulation?

ABSTRACT. There are various approaches to the issue of the reality of such an unobservable object as the wave function. I examine whether the issue can be resolved by adopting the manipulative criterion proposed by I. Hacking in the framework of his experimental realism (Hacking, 1983, 262). According to Hacking, if we can influence some observable objects by manipulating an unobservable object, then the latter is the cause, which means it is real. Accordingly, if we can influence some existing objects by manipulating the wave function, then the wave function is a real entity. I examine the strengths and weaknesses of the manipulative criterion concerning the wave function. I consider one of the experiments with a ‘quantum eraser’ and causally disconnected delayed choice. Experimenters ‘label’ the wave function of a system photon with ‘which-way information’ with the help of an auxiliary entangled photon. After the measurement of the system photon (which shows no interference), this information is erased in a way that appears to be free or random. Thanks to such manipulations, the experimenters restore the wave function of the system photon and observe the interference in the measurement again. It is known that almost all quantum technologies within the second quantum revolution are based on the manipulation of the wave function of either single quantum objects or entangled ones. For instance, in a quantum computer, by manipulating entangled qubits, you can force them to perform computations. If the wave functions of qubits do not exist, where does the result of the calculation come from? Another example is quantum cryptography. It would seem that these cases confirm the existence of wave functions. However, I argue that Hacking's criterion is not a sufficient argument in favor of the reality of wave functions. First, any laboratory manipulations presuppose the theoretical loading of unobservable objects. It is possible that in some years the modern quantum theory will be found to be a limiting case of some new theory. Then the wave function would be a manifestation of some other fundamental theoretical object. Second, Hacking's experimental realism is based on a causal relationship between events. However, at the quantum level, causality is something unusual. The uncertainty principle, quantum non-locality, the measurement problem – all of these lead to a new notion of causality. Sometimes we cannot accurately identify which of two correlated quantum events is the cause and which is the effect. This means that the concept of causality also depends on theory. I suppose that manipulation of the wave function can only confirm that it represents either a certain real fundamental entity or some real internal structure of the quantum system. This can look like a picture described by the ontic or constructive versions of structural realism for quantum field theory (Cao, 2003; French & Ladyman, 2003).

References: Hacking, I. (1983). Representing and intervening. Cambridge: Cambridge University Press. Cao, T. Y. (2003). Structural realism and the interpretation of quantum field theory. Synthese, 136(1), 3-24. French, S., & Ladyman, J. (2003). Remodelling structural realism: Quantum physics and the metaphysics of structure. Synthese, 136(1), 31-56.

14:00-15:00 Session 30L: C5 Philosophy of the cognitive and behavioral sciences
Chair:
Olha Simoroz (Taras Shevchenko National University of Kyiv, Ukraine)
Location: Room 202
14:00
Nina Atanasova (The University of Toledo, United States)
Eliminating Pain

ABSTRACT. I defend pain eliminativism against two recent challenges, Corns (2016) and van Rysewyk (2017). Both challenge Dennett’s (1978) and Hardcastle’s (1999) critiques of the common-sense notion of pain and its inadequacy for scientific study of pain. The two converge in their interpretation of eliminativism as a prediction about the replacement of folk-psychological vocabulary by the vocabulary of a mature neuroscience (of pain in particular). They differ in that Corns admits that eliminativism for pain has had success in science but not in everyday contexts, whereas van Rysewyk shows that contemporary pain research still makes use of the folk-psychological notion of pain for the purposes of research and treatment of pain. Both conclude that this falsifies the radical claims of pain eliminativism. I will show that both Corns’ and van Rysewyk’s positions are in fact compatible with eliminative materialism as originally articulated by Churchland (1981, 1985). I argue that Churchland’s version of eliminativism makes the case for the necessity of mature neuroscience vocabulary, but not its sufficiency, for eliminating folk psychology. Corns’ and van Rysewyk’s arguments only go against the sufficiency of mature neuroscience for the replacement of folk psychology. Following Machery (2009), Corns distinguishes between “scientific eliminativism” and “traditional eliminativism” for pain. Her main claim is that Dennett’s and Hardcastle’s theories provide, at best, sufficient reasons to accept scientific eliminativism. Therefore, it is justified to expect scientists to abandon the folk-psychological notion of pain. However, Corns argues, scientific eliminativism does not entail traditional eliminativism. In other words, mature neuroscience of pain may not eliminate “pain” from everyday discourse. I will not take issue with Corns’ account of the relative success of scientific eliminativism. I will argue, however, that her account of the relevance of scientific eliminativism, or rather lack thereof, to everyday use of “pain” is oversimplified. Van Rysewyk, on the other hand, argues that the folk-psychological notion of pain is compatible with mature neuroscience of pain. He shows that it has been and is still used in psychological and clinical research. His example is of the success of educational neurophysiological and pain-management programs that provide accurate neurophysiological information to trained patients by means of metaphors (p. 79). In my interpretation, however, the success of these programs is due to the presentation of accurate neuroscientific information and this is consistent with eliminativism. The usefulness of the folk-psychological vocabulary about pain does not undermine the conceptual replacement of an inadequate theory with a better theory about pain. Traditional eliminativism for pain is a viable option. We have good reasons to maintain that the folk-psychological notion of pain as a subjective and private experience is in fact eliminated from neuroscientific vocabulary and we have good reasons to expect that the folk-psychological notion of pain will be replaced with a more refined notion. Even if this prediction is not fulfilled, pragmatic considerations for preserving folk-psychological vocabulary about pain don’t prove that the purported referents of the terms in that vocabulary exist. Folk talk about ghosts and demons. This does not make them real.

References Churchland, P. M. (1981). “Eliminative Materialism and the Propositional Attitudes”. The Journal of Philosophy, Vol. 78, No. 2: pp. 67-90. Churchland, P. M. (1985). “Reduction, Qualia, and the Direct Introspection of Brain States”. The Journal of Philosophy, Vol. 82(1): 8-28. Corns, J. (2016). “Pain eliminativism: scientific and traditional”. Synthese, Vol. 193: 2949-2971. Dennett, D. (1978). “Why You Can't Make a Computer That Feels Pain”. Synthese, Vol. 38, No. 3, Automaton-Theoretical Foundations of Psychology and Biology, Part I (Jul., 1978), pp. 415-456. Hardcastle, V. (1997). “When Pain is Not”. The Journal of Philosophy, Vol. 94, No. 8 (Aug., 1997), pp. 381-409. Machery, E. (2009). Doing without Concepts. Oxford University Press. van Rysewyk, S. (2017). “Is Pain Unreal?”. In S. van Rysewyk (ed.), Meanings of Pain, pp. 71-86.

14:30
Andrea Guardo (University of Milan, Italy)
The Privilege Problem for Semantic Dispositionalism

ABSTRACT. Semantic dispositionalism is the view that meaning can be analyzed in dispositional terms. Classic instances of the view are due to Dretske (1981), Fodor (1990), and Heil and Martin (1998). Following Kripke (1981), its enemies focus on three arguments: the finitude argument, the mistake argument, and the normativity argument. I describe a fourth anti-dispositionalist argument, which I dub "the privilege problem". The best way to introduce the privilege problem is against the background of the finitude and mistake arguments, since while the finitude argument says that dispositionalism fails because we don't have enough dispositions and the mistake argument says that it fails because we have the wrong dispositions, the privilege problem claims that dispositionalism fails because we have too many dispositions. In particular, my strategy will be to first describe a careful implementation, due to Warren (forthcoming), of the standard way to deal with the mistake argument and then show how the privilege problem naturally arises from a gap in that answer. For simplicity's sake, I'll focus on the case of "+". The dispositionalists' standard approach to the mistake argument has two steps. Dispositionalists start by arguing that there's a kind of dispositions k such that our dispositions_k track addition; then, they put forward a dispositional analysis according to which the meaning of "+" depends only on our dispositions_k. Warren develops this strategy by arguing that the right dispositions_k are our "general dispositions to stably give a certain response in normal conditions". However, dispositionalists themselves grant that there are kinds of dispositions j such that our dispositions_j don't track addition. Of course, they also assure us that this doesn't matter, since the meaning of "+" depends only on our dispositions_k. But why should we privilege our dispositions_k over all the other dispositions of ours? The privilege problem is precisely the problem of answering this question. I discuss four strategies to deal with the privilege problem: the answer I'm disposed_k to give is privileged because of something we do, the answer I'm disposed_k to give is privileged because it's the right answer, the answer I'm disposed_k to give is privileged because it's the answer I'd give in ideal conditions, and the answer I'm disposed_k to give is privileged because it's the answer I'd give in standard conditions.

References Dretske, Fred I. (1981). Knowledge and the Flow of Information. Oxford: Blackwell. Fodor, Jerry A. (1990). A Theory of Content, II: the Theory. In Jerry A. Fodor, A Theory of Content and Other Essays. Cambridge-London: MIT Press. Heil, John; Martin, C. B. (1998). Rules and Powers. Philosophical Perspectives 12 (1), 283-312. Kripke, Saul (1981) [1982]. Wittgenstein on Rules and Private Language – An Elementary Exposition. Oxford: Blackwell. Warren, Jared (forthcoming). Killing Kripkenstein’s Monster. Noûs.

14:00-15:00 Session 30M: C1 Formal philosophy of science and formal epistemology
Chair:
Marta Bilkova (Academy of Sciences of the Czech Republic, Czechia)
Location: Room 347
14:00
Leander Vignero (Katholieke Universiteit Leuven, Belgium)
A Computational Pragmatics for Weaseling

ABSTRACT. Probabilistic expressions (PEs) have long been studied in linguistics and related fields, and are noted for their vagueness (Kent 1964). Such expressions include words like 'possibly' and 'probably'. They express degrees of belief in propositions. Put differently, they lexicalize judgments about uncertainty. Their semantics proves highly elusive; the central problem being that, even though PEs lexicalize uncertainty, they rarely correspond to precise probabilities. To make things more complicated, there is hard evidence that the scopes of possible interpretation of PEs are themselves governed by probability distributions (Mosteller and Youtz 1990). This gives rise to higher-order probability distributions. The interpretation of vague language is the purview of pragmatics, which can be studied at the intersection of a plethora of fields, including cognitive science, computer science, linguistics, and philosophy of language. In this paper, the usage and understanding of probabilistic expressions is viewed through the lens of recent developments in computational pragmatics. Specifically, several enriched Rational Speech Act (RSA) frameworks are developed (Frank and Goodman 2012). The RSA framework is a Bayesian, computational model of communication and should be understood as a low-level implementation of the key premise of Gricean pragmatics (Grice 1975) and Relevance Theory (Sperber and Wilson 1996). Put simply, this framework operationalizes the Gricean idea of optimal communication probabilistically. In spite of its relative novelty, the RSA framework has been used to model a vast array of linguistic phenomena: metaphor (Kao et al. 2014), politeness (Yoon et al. 2016, 2017, 2018), scalar implicature (Goodman and Stuhlmüller 2013), and adjectival interpretation (Lassiter and Goodman 2017). Moreover, the framework has also been used to offer solutions to philosophical problems, like the sorites paradox (Lassiter and Goodman 2017). The setup of this paper is to develop several RSA-style frameworks in order to model the pragmatics of PEs. Such a model is not just of interest to linguists; I also provide a formal treatment of phenomena like plausible deniability and avoiding accountability, which are studied in argumentation theory (e.g., Walton 1996) and cognitive science (e.g., Lerner and Tetlock 1999), respectively. This kind of deceitful use of vague verbiage is colloquially known as weaseling. I therefore argue that an RSA-style model of PEs can easily provide a theory of how rational agents can and should engage in weaseling. From a high-level perspective, rational weasels should exploit Grice's principle of cooperation to deceive their interlocutors. It is worth noting that Yoon et al. (2016, 2017, 2018) have already applied an RSA model to disingenuous speech, specifically to politeness. In total, I propose six models to think about PEs and the situations in which they naturally occur. Moreover, I also provide data from simulations to back up the claims I make about the power of my framework for understanding the pragmatics of PEs.
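
To make the RSA machinery concrete, here is a minimal sketch of the vanilla Frank-Goodman recursion applied to probabilistic expressions; it is not one of the six models proposed in the paper, and the states, threshold semantics, prior, rationality parameter and costs are hypothetical choices made purely for illustration:

    import numpy as np

    states = np.linspace(0.1, 0.9, 9)                 # speaker's credence levels
    utterances = ["possibly", "probably", "certainly"]
    thresholds = {"possibly": 0.1, "probably": 0.6, "certainly": 0.9}
    cost = {u: 0.0 for u in utterances}               # uniform utterance costs
    alpha = 4.0                                       # speaker rationality
    prior = np.full(len(states), 1.0 / len(states))   # flat prior over states

    def literal(u):                                   # [[u]](s): s clears the threshold
        return (states >= thresholds[u]).astype(float)

    def L0(u):                                        # literal listener: P(s|u) prop. to [[u]](s) * P(s)
        p = literal(u) * prior
        return p / p.sum()

    def S1(s_idx):                                    # pragmatic speaker: P(u|s) prop. to exp(alpha(log L0 - cost))
        util = np.array([np.log(L0(u)[s_idx] + 1e-12) - cost[u] for u in utterances])
        p = np.exp(alpha * util)
        return p / p.sum()

    def L1(u):                                        # pragmatic listener: P(s|u) prop. to S1(u|s) * P(s)
        j = utterances.index(u)
        p = np.array([S1(i)[j] for i in range(len(states))]) * prior
        return p / p.sum()

    print(dict(zip(np.round(states, 1), np.round(L1("probably"), 3))))

With these illustrative settings, the pragmatic listener for "probably" concentrates its probability mass on intermediate credences (roughly 0.6-0.8), since a speaker holding the highest credence would have preferred "certainly".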

References: Frank, M. C. and Goodman, N. D. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084):998–998.

Goodman, N. D. and Stuhlmüller, A. (2013). Knowledge and implicature: Modeling language understanding as social cognition. Topics in Cognitive Science, 5(1):173–184.

Grice, H. P. (1975). Logic and conversation. In Cole, P. and Morgan, J. L., editors, Syntax and Semantics 3: Speech Acts, pages 41–58. New York: Academic Press.

Kao, J., Bergen, L., and Goodman, N. (2014). Formalizing the pragmatics of metaphor understanding. In Proceedings of the annual meeting of the Cognitive Science Society, volume 36.

Kent, S. (1964). Words of estimated probability. Intelligence Studies, (8):49–65.

Lassiter, D. and Goodman, N. D. (2017). Adjectival vagueness in a Bayesian model of interpretation. Synthese, 194(10):3801–3836.

Lerner, J. S. and Tetlock, P. E. (1999). Accounting for the effects of accountability. Psychological bulletin, 125(2):255.

Mosteller, F. and Youtz, C. (1990). Quantifying probabilistic expressions. Statistical Science, pages 2–12.

Sperber, D. and Wilson, D. (1996). Relevance: Communication and Cognition. Oxford: Blackwell, 2nd edition.

Walton, D. (1996). Plausible deniability and evasion of burden of proof. Argumentation, 10(1):47–58.

Yoon, E. J., Tessler, M. H., Goodman, N. D., and Frank, M. C. (2016). Talking with tact: Polite language as a balance between kindness and informativity. In Proceedings of the 38th annual conference of the cognitive science society, pages 2771–2776. Cognitive Science Society.

Yoon, E. J., Tessler, M. H., Goodman, N. D., and Frank, M. C. (2017). I won't lie, it wasn't amazing: Modeling polite indirect speech. In Proceedings of the thirty-ninth annual conference of the Cognitive Science Society.

Yoon, E. J., Tessler, M. H., Goodman, N. D., and Frank, M. C. (2018). Polite speech emerges from competing social goals. PsyArXiv. November, 20.

14:30
Yuki Ozaki (Faculty of Science, Hokkaido University, Japan)
Sensory perception constructed in terms of Carnap's inductive logic: developing philosophy of computational modeling of perception

ABSTRACT. A central issue in contemporary analytic epistemology is whether there is a species of belief that is basic and whether there is a species of justification that is immediate. Pryor (2000) argues that (a) sensory perception has propositional content, that (b) a perceptual experience with propositional content P immediately justifies the content P, and that (c) this justification can be used to deny skepticism about the external world. Pryor subsequently also defends these claims against the argument that perceptual experience is non-propositional and therefore cannot be a justifier. In this paper, I critically discuss Pryor's argument and defend the following two claims: (1) there is a species of infallible perceptual mental state that is basic, and this perceptual mental state has propositional content; and (2) the relation between a sensory perception of a particular (or of demonstratives such as "this" and "that") and a belief is logical, and the perceptual mental state immediately justifies its content. First, I will briefly introduce the problem and point out that Pryor's argument has some problems which cannot be overlooked. Second, I defend claim (1) by arguing that particulars are a certain kind of higher-order propositional items constructed from the basic mental state and that they have conceptual content. Third, I defend claim (2) by arguing that both the object of veridical perception and the object of illusion are a certain kind of higher-order propositional items constructed from an identical infallible basic mental state. I attempt to give the construction of perception in terms of Carnap's inductive logic. The whole argument is expected to defend foundationalism against the argument in Sellars (1997) and that in Davidson (1986). In recent years, computational modeling of human perception has been pursued in natural-scientific disciplines such as brain science and cognitive psychology. In such naturalistic pursuits, the problem of how different sense modalities, such as the visual and the haptic, are combined is widely tackled using statistical methods. Lastly, in contrast to such naturalistic approaches, I attempt to show how the conclusion of the paper can be helpful to those who are concerned with philosophical computational modeling of perception, utilizing traditional armchair methods of analysis.

Pryor, J. 2000. The Skeptic and the Dogmatist. Noûs, 34(4): 517-549. Pryor, J. 2005. There is Immediate Justification. In M. Steup & E. Sosa (eds.), Contemporary Debates in Epistemology, Blackwell, 181-202. Sellars, W. 1997. Empiricism and the Philosophy of Mind, Robert Brandom (ed.), Harvard University Press.

15:15-16:15 Session 31A: A1/A4/B1/C1 SYMP Symposium on John Baldwin's model theory and the philosophy of mathematical practice 2 (Baldwin-2)
Chair:
Juliette Kennedy (University of Helsinki, Finland)
Location: Room 364
15:15
M. Malliaris (University of Chicago, United States)
Should a mathematician read this book?

ABSTRACT. [Please note: This abstract is submitted as part of the proposed "author and critics" symposium "JBaldmtpmp" with J. Baldwin, J. Kennedy, A. Arana and M. Malliaris]

ABSTRACT:

I myself am a mathematician, specifically a model theorist. In this presentation, I want to address the question: Should a mathematician read this book, a book which, at first glance, appears to be a book explaining model theory to philosophers?

Here are some answers I am not going to defend.

• the book is of general intellectual interest and we should read it for culture.

• it is interesting to see how our field interacts with another field.

• it is a good opportunity for mathematicians to get a feel for how philosophers think because the examples analyzed are ones we know well.

• it can call our attention to philosophical aspects of our own work.

These are not unreasonable points but when we weigh them against the work we have in front of us on any given day they may have little urgency. There are many things each of us `should’ do for culture.

To me, the answer which does have urgency is the most interesting answer: for its mathematics.

This is a book written by a mathematician who has been doing core work in the subject for almost fifty years and who, under the umbrella of illustrating various philosophical ideas, gives detailed mathematical information to illustrate his impressions of how various breakthroughs arose. I will discuss (with commentary) some of the interesting examples covered in the book, likely including:

• some long-term influences of Hilbert’s work in the field

• some reasons the mathematics around first-order logic is so developed

• some key moves in the work of Robinson and Shelah

• how we might regard appearance of set theory in uncountable models and in contexts like AECs

and suggest some consequences for future work.

15:45
John Baldwin (University of Illinois at Chicago, United States)
Mathematical and Philosophical Problems arising in the context of the book

ABSTRACT. Here are some examples of the kinds of philosophical/mathematical questions that arise in the framework of the book. 1) Zilber wrote in 2000: ‘The initial hope of this author that any uncountably categorical structure comes from a classical context (the trichotomy conjecture), was based on the belief that logically perfect structures could not be overlooked in the natural progression of mathematics.’ In more concrete terms Zilber proposed that all ‘logically perfect’ (categorical in all uncountable powers) structures will naturally arise as ‘canonical structures’ in mathematics. The natural examples were the integers under successor, vector spaces, and algebraically closed fields. Hrushovski's construction destroyed his precise conjecture. But the conjecture raises philosophical issues. While ‘classical’ might be construed as a descriptive historical term, logically perfect and canonical are normative philosophical terms. What can they mean? Here is a related mathematical issue. Shelah's dividing line methodology led to a striking solution of the problem of classifying first order theories into those which admit countable trees of invariants and those which have the maximal number of models in all uncountable cardinals. But the counterexample to Zilber's conjecture shows that the basic building blocks of this classification, the strongly minimal sets, themselves admit immense diversity. Can one find finer dividing lines to better understand these atoms of the classification? 2) The ordinary mathematician considers a structure as a unique object defined in set theory (if he considers the issue at all). But then he treats all structures isomorphic to the given one as equals and regards the isomorphism type by neglecting, as Dedekind says, ‘the special character of the elements, simply retaining their distinguishability and taking into account only the relations to one another in which they are placed by the order-setting mapping’. The mathematician's work continues by making clear when a representative is chosen from the isomorphism type. Structuralism insists on reifying in one way or another the isomorphism type as the object of study. Can a philosopher explain to mathematicians why he is not content with the usual approach, which can be carried out in (a weak subsystem of) ZFC? 3) Angus Macintyre pointed out an historical fact with philosophical implications. By the time Hilbert wrote his Geometry, it had been seventy years since the independence of the parallel postulate was proved, and in that time the meanings of ‘geometry’ had multiplied. A single description of space that was seen as a foundation for all mathematics had been replaced by not only Riemannian, hyperbolic, and elliptic variants of the description but such orthogonal notions as projective, algebraic, and differential geometry. Despite a few stirrings, there is not a similar situation in set theory, a hundred years after Zermelo's formulation of the axioms. How does this fact impact the argument for Hamkins's multiverse? It appears that the varieties of geometry I have recounted are different in kind from the various extensions of ZFC that have been proved consistent. Is there a cogent way to express this distinction?

15:15-16:15 Session 31C: C1 Philosophy of the formal sciences
Chair:
Priyedarshi Jetli (University of Mumbai, India)
Location: Room 112+113
15:15
Sakiko Yamasaki (Tokyo Metropolitan University, Japan)
What is the Common Conceptual Basis of Gödel Embedding and Girard Embedding?

ABSTRACT. It is well-known that via Gödel translation the theorems of intuitionistic logic (IL) can be soundly and faithfully embedded into classical modal logic CS4: the formulae of IL are provable iff their translations are provable in CS4. Recently, several authors have elucidated much more specific details of the embedding by adopting various new systems of sequent calculus. Employing a system of G3-style multi-succedent sequent calculus for IL and Troelstra and Schwichtenberg's G3-style sequent calculus for CS4 [1], I constructed another proof-theoretic proof of the embedding.
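
For orientation, one common formulation of the Gödel translation (variants differ in exactly where the box is inserted) reads:

\[
p^{\Box} = \Box p \ (p\ \text{atomic}), \qquad
(A \wedge B)^{\Box} = A^{\Box} \wedge B^{\Box}, \qquad
(A \vee B)^{\Box} = A^{\Box} \vee B^{\Box}, \qquad
(A \rightarrow B)^{\Box} = \Box\,(A^{\Box} \rightarrow B^{\Box}),
\]

with negation handled via A → ⊥, so that a formula is provable in IL iff its translation is provable in the target modal logic.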

On the other hand, it is also well-known that there is another very interesting embedding of IL, i.e. Girard's embedding of IL into classical linear logic (CLL). I invented a new translation (different from that of Girard) of the IL formulae into CLL formulae and, adopting the same multi-succedent sequent calculus for IL as mentioned above, proved the soundness and faithfulness of this translation. Girard's translation interprets Conjunction and Disjunction as additive and Implication as multiplicative, whereas our translation interprets all of them as multiplicative. In interpreting sequents, Girard attached the modal operator ! to all the formulae occurring in the antecedent of a sequent, while in our translation, although we similarly attach ! to all the formulae occurring in the antecedent of a sequent, we also attach ? to those in the succedent of a sequent (note that we employ the same multi-succedent sequent calculus for IL). Our translation, one might say, is more coherent in that the connectives employed in the translation are all multiplicative. It shows that the additive aspects of the IL connectives can be represented by multiplicative connectives together with ! and ? (i.e. Weakening and Contraction).
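
For comparison, the clauses of Girard's translation, in one common formulation, are:

\[
p^{\circ} = p, \qquad
(A \wedge B)^{\circ} = A^{\circ} \,\&\, B^{\circ}, \qquad
(A \vee B)^{\circ} = \,!A^{\circ} \oplus \,!B^{\circ}, \qquad
(A \rightarrow B)^{\circ} = \,!A^{\circ} \multimap B^{\circ},
\]

so that a sequent Γ ⊢ A of IL corresponds to !Γ° ⊢ A° in linear logic.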

Let us compare Gödel embedding and Girard embedding (and also our embedding) more closely. In the former, the Box dominated by the S4 axioms plays a central role, while in the latter (and also in our embedding) ! plays that role (note that in our embedding ? also plays an important role). From a proof-theoretical viewpoint, ! behaves completely similarly to the S4 Box (except that ! is also dominated by Weakening and Contraction). Besides Weakening and Contraction, why does ! obey the same inference rules as the S4 Box?

In this presentation, we will take up this question and attempt to clarify the common conceptual basis shared by Gödel Embedding and Girard Embedding. Among philosophers of logic, it seems to be generally regarded that the content (i.e. meaning) of a formula occurring in a sequent differs between LL and IL. While in the latter it is nothing but an ordinary constructivist proposition (e.g. what a mathematician attempts to prove constructively), in the former it is a certain data token of some data type represented by the formula. By clarifying the common basis mentioned above, we hope that we can make their relationship more intelligible.

[1] A. S. Troelstra and H. Schwichtenberg. Basic Proof Theory. Cambridge University Press. Second edition. 2000.

15:15-16:15 Session 31D: C4 SYMP Epistemic and ethical innovations in biomedical sciences 2 (EAIBS-2)
Chair:
David Casacuberta (Universitat Autonoma de Barcelona, Spain)
Location: Room 346
15:15
David Casacuberta (Universitat Autonoma de Barcelona, Spain)
Innovative tools for reaching agreements in ethical and epistemic problems in biosciences

ABSTRACT. This talk will present several innovative methodological tools that are being used in the biomedical sciences when epistemic and/or ethical problems arise and different stakeholders with different values, priorities, and aims need to reach an agreement. Biomedical sciences may involve scientists and technologists from very different fields, who therefore have different languages, aims, methodologies, and techniques. Reaching an agreement when there are so many differences between them can become very complex. Besides, biomedical sciences, either when applied or when gathering information about human subjects, can generate complex ethical problems which require reaching agreements among very different agents, such as scientists, doctors, nurses, politicians, citizens or animal rights activists. After a brief presentation of the state of the art in the subject, we will discuss two main methodological tools. The Ethical Matrix: first developed to discuss when it is ethically admissible to introduce GMO foods in a specific environment, this is a very powerful tool for finding agreements on many different ethical problems in the biomedical sciences, and it can also be helpful when analyzing epistemically complex situations where agreements among very different disciplines have to be reached.

Value maps: built in a collaborative manner, these maps can help researchers to realize ethical implications of their work of which they were not previously aware, and also to discover non-epistemic values that, nonetheless, can be helpful for improving innovation processes in scientific research.

15:45
Anna Estany (Universitat Autonoma de Barcelona, Spain)
Design epistemology as innovation in biomedical research

ABSTRACT. The idea of design has reached our theories of epistemology: a field that, at first glance, seems to be quite far removed from the analysis of practical situations. However, we should bear in mind that epistemology has shifted from an a priori perspective to a naturalized one, in the sense that we cannot engage in epistemology without taking into account the empirical results of science when it comes to configuring methodological models. In addition, the philosophy of science has expanded its field of analysis beyond pure science, and this has made it necessary to consider the epistemology of applied science. At this point design epistemology becomes relevant, as an alternative to analytic epistemology. The objective is to explore just how far design epistemology (DE) can be adopted as a methodological framework for research in the biomedical sciences and in some of their applications, such as public health. To this end, we will analyze different approaches to DE and related terms and expressions such as “design thinking”, “design theory” and “designerly ways of knowing”. One of the issues that we need to address is precisely the polysemy that exists in the field of design, related to many different concepts. Thus, it seems impossible not to engage in a certain amount of conceptual analysis before we can embark on the study of the role of DE in public health research. Another of the questions that we will consider here is where to place the biomedical sciences within the field of academic knowledge and research. The disciplines involved in this field of research range from biology to medicine and also to the applications of this knowledge, as in the case of public health. We will examine some of the definitions provided by international organizations and we will locate public health within the framework of healthcare services and their organization. Finally, we will see how DE can offer proposals and solutions to the challenges that a phenomenon as complex as public health currently faces. That is, we will measure DE proposals against public health research needs. Design epistemology asks a whole series of questions which, at one and the same time, constitute different perspectives and proposals concerning how to understand the subject of DE itself. On the one hand, we have DE as an alternative to classic epistemology, which is often described as “analytic” and juxtaposed with “synthetic,” which is how DE would be described, as it would also cover the applied sciences. On the other hand, DE is said to have a series of defining characteristics, among which we can highlight interdisciplinarity as a means of addressing dynamic and complex problems, and a prominent element of social concern expressed through "design thinking" that revolves around human-scale design. Around these principal axes, we will examine a series of proposals and considerations relevant to the biomedical sciences.

15:15-16:15 Session 31F: C1 SYMP Toward the reconstruction of linkage between Bayesian philosophy and statistics 2 (TRLBPS-2)
Chair:
Masahiro Matsuo (Hokkaido University, Japan)
Location: Room 250
15:15
Ohkubo Yusaku (Department of Environmental Science, Hokkaido University, Sapporo, Japan)
Revisiting the two major statistical problems, stopping-rule and the catch-all hypothesis, from the viewpoint of neo-Bayesian statistics.

ABSTRACT. Statistics has been not only an indispensable tool for scientific inquiry but also a field of intense controversy among researchers and philosophers (Sober 2008). In the histories of statistics and of the philosophy of statistics, a fundamental conflict between Frequentist (or error-statistical) and Bayesian theory has long concerned the interpretation of probability. These days, however, some Frequentists (or error statisticians) do not cling to the frequentist interpretation of probability, nor do some Bayesian statisticians cling to the subjective interpretation, instead adopting frequentist properties to justify procedures (Mayo 2018; Gelman et al. 2013, 2018). This shifts attention from the interpretation of probability to the role and/or objective of statistics. In this presentation, I review the relationship between the two schools of statistics, Frequentist and Bayesian, from the viewpoint of the nature of assessment.

First, I revisit the stopping-rule problem and the catch-all problem, well-known criticisms of Frequentism and Bayesianism, respectively. In general, any assessment procedure needs both a target of assessment and information for making the assessment. While we can update or change the latter, we cannot change the former, because doing so would spoil the objective. On this view, the two problems are parallel. For Frequentism, the target of assessment is the data: while we may evaluate the given data under any hypothesis, increasing the data size post hoc is not permitted. This is the source of the stopping-rule problem. For Bayesianism, the target is the hypothesis: while we may evaluate the given hypothesis under any data, changing the set of hypotheses post hoc is not permitted. This is the source of the catch-all problem. Both the increase in data size and the emergence of novel hypotheses do happen in scientific practice and did happen in its history. This means, I argue, that neither Frequentism nor Bayesianism on its own is enough for scientific practice.

Second, I consider an apparent solution to the catch-all problem from the perspective of Bayesian statistics, where Pr(D) is replaced by the marginal likelihood of the model, i.e. the likelihood averaged over the parameter space with respect to the prior distribution. However, another problem, I argue, emerges, since this requires the overly strong assumption that we have the true model at hand, which is rarely the case in scientific practice. Again, the problem is that there is no room for changing the set of hypotheses or models.
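
In standard notation, the two quantities at issue are the posterior probability of a hypothesis, whose denominator presupposes an exhaustive set of alternatives, and the marginal likelihood of a single model M:

\[
\Pr(H_i \mid D) = \frac{\Pr(D \mid H_i)\,\Pr(H_i)}{\sum_{j} \Pr(D \mid H_j)\,\Pr(H_j)},
\qquad
\Pr(D \mid M) = \int \Pr(D \mid \theta, M)\,\pi(\theta \mid M)\,\mathrm{d}\theta .
\]

The catch-all problem concerns the denominator of the first expression, which presupposes the full set of alternative hypotheses; replacing Pr(D) by the marginal likelihood instead presupposes that the model family containing the true model is already in hand.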

Third, and finally, I examine the role of Bayesian statistics in practice and point out that it involves the assessment of both data and hypotheses as targets of the probe. I show that Bayesian statistics can offer not just a pragmatic eclecticism but a positive reason for adoption, since it avoids the two fundamentally parallel problems.

Reference: Gelman, A., & Shalizi, C. R. (2013). Philosophy and the practice of Bayesian statistics. British Journal of Mathematical and Statistical Psychology, 66(1), 8-38. Mayo, D. G. (2018). Statistical inference as severe testing: How to get beyond the statistics wars. Cambridge University Press. Sober, E. (2008). Evidence and evolution: The logic behind the science. Cambridge University Press.

15:45
Kenichiro Shimatani (The Institute of Statistical Mathematics, Japan)
The linkage between Bayesian and frequentist statistics is easier than that between Bayesian statistics and philosophy

ABSTRACT. Bayesian statistics and Bayesian philosophy (Bayesianism) originated from the same Bayes rule: a prior and a statistical model are transformed into a posterior when data are given. Despite its increasing popularity, current Bayesian statistics is biased toward computation, numerical estimation and classification, and most recent Bayesian statistics studies do not intend to test a hypothesis such as H0: unknown parameter = 0, H1: > 0. In fact, examples of applied studies in ecology include estimation of a population size, modeling population dynamics (unifying mortality, reproduction and growth), identification of animal behaviors, hierarchical classification of ecological communities, and so on. On the other hand, the primary targets of Bayesianism are, in many cases, logic and inference. During recent decades, the gap between Bayesianism and Bayesian statistics has been expanding. Conventionally, Bayesian statistics has suffered from conflicts with frequentism because the two have a couple of essential differences in fundamental concepts. (1) In Bayesian statistics, a dataset is given and a posterior is obtained from that one dataset, whereas in frequentism, data are assumed to be repeatedly obtainable and expectations over datasets are compared with the data. (2) Unknown parameters should be optimized for frequentism, while random samples from a posterior play crucial roles in Bayesian statistics. (3) For model evaluation, the expectation over hypothetically repeatedly obtainable datasets plays a central role when deriving the equation of the Akaike information criterion (AIC), while the marginal likelihood of the given dataset is the key concept in Bayesian statistics (e.g. BIC, Bayes factor). (4) A true model is assumed to exist and to produce the data in frequentism, whereas Bayesian statistics does not require the existence of a true model. Even so, some recent developments in Bayesian statistics use concepts from both Bayesian statistics and frequentism. For example, WAIC (the widely applicable information criterion, Watanabe (2010)) extends the idea of AIC to posterior predictive models in Bayesian statistics. The Stein paradox was proposed in the context of frequentism, and is now reformulated and intensively investigated within the Bayesian framework. On the other hand, very few Bayesian statisticians take efforts to import a concept from Bayesian philosophy. Presumably, nearly none is interested in the posterior probability of a hypothesis, simply because the catch-all problem prevents us from computing that probability, except in very special cases (e.g. cancer or not, spam mail or not). In this study, I show examples of statistical theories that use concepts from both Bayesianism and frequentism, and make some proposals for reconstructing a linkage between Bayesian statistics and philosophy.
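
For reference, the two criteria contrasted in point (3) can be written, in one common formulation, as:

\[
\mathrm{AIC} = -2 \log p(y \mid \hat{\theta}_{\mathrm{MLE}}) + 2k,
\qquad
\mathrm{WAIC} = -2\left(\sum_{i=1}^{n} \log \mathbb{E}_{\mathrm{post}}\!\left[p(y_i \mid \theta)\right] - \sum_{i=1}^{n} \mathrm{Var}_{\mathrm{post}}\!\left[\log p(y_i \mid \theta)\right]\right),
\]

where k is the number of parameters and the posterior expectation and variance are taken over draws from the posterior; the first criterion is justified by expectations over hypothetically replicated datasets, the second by the posterior predictive distribution for the given dataset.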

References Watanabe, S. (2010). Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory. Journal of Machine Learning Research 11: 3571-3591.

15:15-16:15 Session 31G: B3 SYMP Communication and exchanges among scientific cultures 4 (CESC-4). Comparing modes of recycling
Chair:
Karine Chemla (SPHERE-CNRS & University Paris Diderot, France)
Location: Room 152+153
15:15
Fanglei Zheng (Tsinghua University, China)
When Arabic Algebraic Problems met Euclidean Norms in the 13th Century – A Case Study on the Scientific Innovation by the Transformation in Cross-cultural Transmission

ABSTRACT. This presentation belongs to the symposium "Communication and exchanges among scientific cultures" (CESC). The transmission of the sciences from the Arabic world to medieval Europe is an established historical fact. This paper does not merely present one example of that fact; it discusses a way in which the cross-cultural transmission of knowledge resulted in the production of new knowledge. The case considered is the reaction of a 13th-century mathematician, Jordanus de Nemore, to Arabic algebraic problems. In short, he transformed them into such a form that we would not recognize their Arabic relatives unless we summarized his propositions by means of symbolic equations. What was retained and what was lost from the sources in the transmission? My general answer is that only the problems, and some of the solutions, were retained, and only if understood abstractly. The form in which the problems were presented, as well as the demonstrations given for them, together with their practical roots and epistemological foundations, were lost. What Jordanus substituted was the Euclidean way of doing mathematics: mathematical knowledge was organized as demonstrated propositions within a deductive system. What is more, Jordanus not only followed this principle but also modeled his work on Euclid's forms (terms, sentence patterns, the structure of propositions, demonstrations). I suggest that these modifications were an effect of the cultural environment in which Jordanus worked. Although research on his identity has proved fruitless, by comparing his works with what was taught and circulated in the universities of the time we can reasonably assume that he worked in a university environment, where Euclidean mathematics and the like dominated the contents and norms of mathematics. This is thus a case in which an actor in the adopting culture tried to incorporate alien knowledge by transforming it according to the dominant norms. Jordanus's work did result in a scientific innovation. Firstly, the convention of representing numbers by letters was one of the key steps in the development of algebra. More importantly, the analytic method of Greek mathematics enhanced the analytical character of algebra. However, it seems that this innovative work did not play its role until several hundred years later, which, in my opinion, can be explained by the multiplicity of mathematical traditions (and cultures) in medieval Europe. That hypothesis, however, will be discussed in another article.

15:45
Michael Barany (The University of Edinburgh, UK)
Experts and Expertise in North-South Circulation in Mid-Twentieth Century Mathematics

ABSTRACT. Mid-twentieth century mathematics was decisively shaped by a variety of high-profile efforts to move mathematicians and their textual productions between global hemispheres, driven and funded by philanthropies such as the Rockefeller and Guggenheim Foundations, government efforts such as the United States' Office of Inter-American Affairs and Fulbright program, and international organizations such as UNESCO. I shall examine the implicit and explicit cultural logics and assumptions of fellowship, exchange, and technical assistance programs linking South America to North America and Europe. In their different formulations and contexts, these programs each relied on ideas about how circulation--including both departure from and, crucially, return to one's home country--enabled virtuous developments in local research cultures and beneficial ties between such cultures. These assumptions inflected both the high ideals and the routine practices of the various programs, evident in programmatic statements, pamphlets and advertisements, routines of intelligencing and administration, and supporting bureaucratic infrastructures. At stake were the inward- and outward-facing roles of experts and the cultural functions of expertise in the context of emerging institutional formations. Visiting experts were both producers and observers of scientific cultures in the venues they visited, both North and South, and the conditions of their circulation relied upon consequential models of convergence and exchange on the basis of both scientific and institutional expertise. This admixture of scientific and institutional knowledge reinforced funding and administrative agents' interventionist and diffusionist understandings of scientific culture, reinforcing elite-driven models of national and international science in both old and emerging institutional establishments.

15:15-16:15 Session 31H: C2 Philosophy of the physical and chemical sciences
Chair:
Ave Mets (University of Tartu, Estonia)
Location: Room 302
15:15
Michele Ginammi (University of Amsterdam, Italy)
Applicability Problems Generalized

ABSTRACT. The effectiveness of mathematics in physics has been a topic of debate in the philosophy of science in recent decades (see for example Steiner 1998; Pincock 2012; Bangu 2012). In their attempts to clarify the applicability of mathematics to physics, philosophers usually focus only on cases of application of mathematics to physics and ignore other kinds of application of (or to) mathematics. However, since the application of mathematics to physics is just one part of the more complex interrelation between physics and mathematics, such an approach may be too narrow. If we better understood how this kind of application (from mathematics to physics) compares to other kinds of application, we might be able to better understand the applicability of mathematics to physics as well. A kind of applicability that is usually not taken into account when dealing with the problem of math-to-physics application is the application of physics to mathematics. This subject has been largely neglected in the philosophical debate on the applicability of mathematics (to the best of my knowledge, Urquhart 2008a and Urquhart 2008b are the only relevant exceptions). In fact, in contemporary physics and mathematics there is a fruitful circulation of methods and representational strategies, in which not only can mathematics be effectively employed to model physics, but physics can also be fruitfully "applied" to mathematics to generate new strategies of mathematical analysis. This (unreasonable?) effectiveness of physics in mathematics is still unheeded by the philosophical community and awaits exploration. The presupposition that these kinds of applicability are completely different from (and therefore not relevant to) the understanding of the applicability of mathematics to physics might well be wrong. If there are analogies among these three kinds of application, then we might exploit them in order to offer a generalized account of mathematical application and to better understand the complex relationship between physics and mathematics. In this talk I develop this suggestion. I will present some examples of math-to-physics, math-to-math, and physics-to-math application. I will then make some observations about the possible analogies that can be traced among them, and analyze whether these analogies can help clarify the applicability problems and the relationship between physics and mathematics.

References: Bangu, S. (2012). The Applicability of Mathematics in Science: Indispensability and Ontology. Basingstoke (UK): Palgrave Macmillan. Pincock, C. (2012). Mathematics and Scientific Representation. Oxford: Oxford University Press. Urquhart, A. (2008a). The Boundary Between Mathematics and Physics. In Mancosu, P. (2008), The Philosophy of Mathematical Practice. Oxford: Oxford University Press, pp. 407-416. Urquhart, A. (2008b). Mathematics and Physics: Strategies of Assimilation. In Mancosu, P. (2008), pp. 417-440. Steiner, M. (1998). The Applicability of Mathematics as a Philosophical Problem. Cambridge, Mass.: Harvard University Press.

15:45
Martin King (University of Bonn, Germany)
Towards the Reconciliation of Confirmation Assessments

ABSTRACT. The philosophy of scientific confirmation has as its task the formal and informal representations of scientific reasoning, in particular, how evidence relates to the confidence that scientists have in theories. This paper examines the recent discovery of the Higgs boson at the LHC, and in particular the reasoning that has led to the high confidence in a particular hypothesis about the Higgs boson, the Standard Model (SM) Higgs hypothesis. While the confidence in the SM Higgs is very high, a logical model of confirmation, Bayesian or otherwise, falls short of establishing this. What has been confirmed by the LHC data, as Dawid (2015) and others correctly point out, is the Higgs mechanism of spontaneous electroweak symmetry breaking, but not any particular implementation of this mechanism. The paper aims to address the question of how it is that a particular formulation of the Higgs hypothesis is taken to be so highly confirmed. That is, how can one account for, and possibly bridge, the gap between the informal and the logical assessments of confirmation.

This paper consists of two broad parts. The first shows to what extent the SM Higgs hypothesis is logically confirmed and elaborates on the limitations of this kind of confirmation; this is done in Section 2. Here, I review what I call the `direct confirmation' of the SM Higgs hypothesis in isolation, which captures the precision, novelty, and riskiness of the predictions with respect to the data. In Section 3, I examine this hypothesis in the context of other competing hypotheses and show that the lack of unique predictions that can currently be determined undermines claims about the high degree of confirmation of the SM Higgs hypothesis.

The second part explores two avenues that might enrich a simple model of confirmation to reconcile it with the judgements of physicists. The first is to consider what I call the `indirect confirmation' of the hypothesis, which stems from the degree to which alternative hypotheses are being ruled out by data. This approach is shown to be problematic given that the competing models are in very close agreement at achievable energy levels. The second is to factor in that not all predictions are equally valuable for confirmation. To capture this, I introduce a hierarchical Bayesian framework for confirmation, making use of a structured hypothesis space and hyperparameters to capture the varying levels of importance of a model's hypotheses. This avenue is also problematic given that the probabilities must be combined by products, so a single unconfirmed hypothesis gives the entire model a zero degree of confirmation. Thus, finally, an approach is taken where the individual contributions are summed and suitably normalised. This is not a strictly Bayesian approach, but its machinery can bring us towards the reconciliation of confirmation assessments.
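The aggregation point made above can be illustrated with a minimal numerical sketch (the scores and weights are purely hypothetical, not values from the paper): combining per-hypothesis confirmation scores by a product lets a single unconfirmed hypothesis drive the whole model's confirmation to zero, whereas a normalised weighted sum merely lowers it.

# Illustrative numbers only: product vs. normalised-sum aggregation of
# per-hypothesis confirmation scores for a model with five sub-hypotheses,
# one of which is (so far) entirely unconfirmed.
import numpy as np

scores = np.array([0.9, 0.8, 0.95, 0.7, 0.0])   # hypothetical confirmation scores
weights = np.array([0.3, 0.3, 0.2, 0.1, 0.1])   # hypothetical importance weights

product_score = np.prod(scores)                           # 0.0: one zero kills it
summed_score = np.sum(weights * scores) / weights.sum()   # stays positive

print(f"product-based confirmation:  {product_score:.3f}")
print(f"normalised-sum confirmation: {summed_score:.3f}")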

15:15-16:45 Session 31I: C3 Historical issues in evolutionary thinking
Chair:
Olha Simoroz (Taras Shevchenko National University of Kyiv, Ukraine)
Location: Room 201
15:15
Roman Otto Jordan (Institute Vienna Circle (University of Vienna), Austria)
The evolutionary epistemology of Rupert Riedl – a consistent realization of the program of naturalizing epistemology?

ABSTRACT. The program of evolutionary epistemology, of which Rupert Riedl is one of the main representatives alongside Konrad Lorenz and Gerhard Vollmer, is to describe the development of consciousness and human reason as a natural consequence of the principles of evolution. As Konrad Lorenz pointed out, all humans, including natural scientists, are living beings that owe their properties, including the capacity for cognition, to evolution, in the course of which all organisms have dealt with reality and adapted to it.

According to Rupert Riedl, evolutionary epistemology is understood as a theory derived from biology in order to gain understanding of cognitive processes. Evolutionary epistemology seeks to study how knowledge is acquired in organic life, how living beings attain knowledge about the world, and how living beings form a picture of the world.

Riedl is concerned with a study of the human cognitive apparatus. He regards human intuition as innate and prior with respect to our individual reason. At the same time, these forms of intuition are also posterior, inherited products of the adaptation and learning of a biological lineage. They are thus products of phylogenetic learning, which determine our intuition. They evoke in us a system of fundamental hypotheses, which functions as a ratiomorphic teacher and predetermines our consciousness.

For Riedl and Lorenz, life in itself is a process of knowledge acquisition, through which biological structures reflect and manage their surrounding environment. Riedl's idea is that the patterns of the order of human thought processes derive from the systematic order of organic nature. The patterns of thinking are thus to be seen as a consequence of a mapping of the patterns of nature and ultimately are also a product of natural selection. After all, human culture as a whole could only be understood as a consequence of evolution. From this point of view, all basic patterns of human thought have emerged only on the basis of patterns of order on the more fundamental material and organic levels.

The evolutionary perspective of Rupert Riedl and of others, such as Konrad Lorenz and Gerhard Vollmer, leads to a naturalization of epistemology. This naturalistic point of view implies that we can objectively grasp nature and that an objective understanding of nature is a prerequisite for our understanding of human culture as well. According to this view, study of the "culturomorphic" superstructure is insufficient for gaining an understanding of human reason; it is also necessary to consider the "theriomorphic", viz. animal-organic, dispositions which form the basis for the formation of the culturomorphic superstructure and for the formation of human reason as well.

Evolutionary epistemology, in the sense of Riedl, seeks to elucidate the genetic makeup of man's psychic equipment as well as his cognitive and social capabilities. The causes of these hereditary fixations are to be found in the conditions of adaptation to extrasubjective reality and in the natural history of the human species. Riedl emphasizes the role of hereditary instructions in our human equipment which determine our vision of the world. Our world-view apparatus has thus developed, so to speak, in real life; it was initially adapted for the purpose of mere survival rather than for knowledge of the world.

This paper reexamines the views of Riedl and Lorenz in connection with related views to be found, for example, in Gerhard Vollmer, Erhard Oeser or Franz Wuketits. On this basis, I will try to develop preliminaries toward a comparison with better-known varieties of evolutionary or naturalized epistemology, such as those found in the philosophies of Karl Popper or W. V. O. Quine.

15:45
Christopher Donohue (National Institutes of Health, United States)
CANCELLED: The Monogenesis Controversy: A Historical and Philosophical Investigation

ABSTRACT. I give a general overview of the "unity of species" and "unity of origin" of the human species controversy, known in the 20th century as the discussion of mono- and polygenesis, from Samuel Stanhope Smith's "Essay on the Causes of Variety of Complexion and Figure in the Human Species" (1810) to geneticist Francisco Ayala's biological "disproof" of "monogenesis," the idea that human beings are one species with a common origin, and of the biblical Eve of Genesis (c. 1995). It covers much ground, much of it having to do with describing one of the key biological and racial theories of the 19th century. This was "polygenesis", the argument that human beings were distinct "racial types" which evolved in distinct locations, as discussed in the writings of Charles Darwin and, in more detail, of Paul Broca, Louis Agassiz, Josiah Nott and the "American school" in the United States. Arguments for polygenesis among biologists, sociologists and anthropologists were present through the "evolutionary synthesis." However, there is no real discussion of genetics until the last part of the paper, as I consider most of the distortions to lie in scholars' discussions of 19th-century accounts of the "unity" or "plurality" of human origins and species. This is thus a historical account of the species concept as it pertains to humanity, and a philosophical paper in that it attempts (particularly in its conclusion) to address why arguments surrounding the unity and plurality of species, as well as the unity and plurality of origin, have persisted, and the impact of differing kinds of evidence (genetic and theological) on theory change and conceptual discussions during the 19th and 20th centuries.

16:15
Hayley Clatterbuck (University of Rochester, United States)
Darwin's causal argument against intelligent design

ABSTRACT. In the Origin of Species, Darwin presents his “one long argument” against the thesis of special creation (SC) and for his alternative hypothesis of evolution by natural selection (ENS). His objections to SC come in two, perhaps contradictory, forms. First, Darwin sometimes argues that some observations, such as imperfect traits, seem to provide evidence against an omnipotent and benevolent creator. More specifically, using the Law of Likelihood, the claim is that for some observations, Pr(O|SC) < Pr(O|ENS), so O is evidence that favors ENS over SC. However, in other places, Darwin argues that SC makes no predictions at all. Because the goals and intentions of an all-perfect God are unknowable, we can’t assign precise likelihoods to various outcomes on the hypothesis of design. Worse, if we make favorable assumptions about the creator, we can generate a high probability for any outcome whatsoever. Hence, Darwin complains that SC makes no genuine predictions or explanations: “On the ordinary view of the independent creation of each being, we can only say that so it is;—that it has so pleased the Creator to construct each animal and plant” (Darwin 1859, Ch. 13). This latter possibility has come to be known as the Preference Problem (Sober 2008). I explicate a way out of the Preference Problem that Darwin himself found compelling, using the modern tools of causal modeling frameworks. In frameworks obeying the Causal Markov Condition, probabilistic dependencies between two variables are indicative of a causal relationship between them (or between them and some common cause). The Preference Problem states that the design hypothesis can accommodate any probabilistic dependency between observed traits and the designer and hence preserve a causal connection between the two, come what may. However, Darwin himself held that the most persuasive evidence against the design hypothesis was a particular probabilistic independence, namely, that the variations that occur in a population are probabilistically independent of what would be good for those organisms to possess (in modern terminology, mutation is random with respect to fitness) (Lennox 2010, Beatty 2006). Given certain assumptions, such as faithfulness, probabilistic independencies indicate the absence of a causal connection between two variables. Darwin’s insight is that if there were a designer that is a common cause of both variation and natural selection, then we would predict a probabilistic dependence. The design proponent can only capture the data by assuming that a designer would guarantee that faithfulness was violated. While this gives Darwin a kind of evidence against the SC hypothesis that does not fall victim to the Preference Problem, there are several complications inherent in the causal modeling framework that provide an “out” to the design proponent. I argue that the probabilistic independence Darwin identifies doesn’t merely put theological pressure on our conception of a designer, but in fact, standard causal reasoning justifies the inference that there is no such designer.
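The probabilistic point can be illustrated with a minimal simulation sketch (the variables and coefficients are illustrative assumptions, not Darwin's or the paper's model): if a common cause coordinated variation with what would benefit the organism, the two variables would be correlated; if mutation is random with respect to fitness, they are statistically independent.

# Illustrative simulation: correlation between "what would be beneficial" and
# "variation" under a hypothetical common-cause (design-like) structure versus
# random mutation. All parameters are made up for the example.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

benefit = rng.normal(size=n)                       # what would benefit the organism

# Common-cause structure: variation tracks benefit plus noise.
variation_design = 0.8 * benefit + rng.normal(scale=0.6, size=n)

# Random mutation: variation generated independently of benefit.
variation_random = rng.normal(size=n)

print("corr(benefit, variation | design-like model):",
      round(np.corrcoef(benefit, variation_design)[0, 1], 3))
print("corr(benefit, variation | random mutation):  ",
      round(np.corrcoef(benefit, variation_random)[0, 1], 3))

Given faithfulness, the observed independence of variation from fitness is what (on this sketch) speaks against a common-cause designer structure rather than merely against particular assumptions about the designer's preferences.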

Beatty, John (2006) ‘Chance variation: Darwin on orchids.’ Philosophy of Science 73(5): 629-641. Darwin, Charles (1859) The origin of species by means of natural selection, 1st edition. Project Gutenberg. Lennox, James G. (2010) ‘The Darwin/Gray correspondence 1857-1869: an intelligent discussion about chance and design.’ Perspectives in Science 18(4): 456-479. Sober, Elliott. (2008) Evidence and evolution: The logic behind the science. Cambridge: Cambridge University Press.

15:15-16:15 Session 31K: C2 Interpretation of quantum physics 2
Chair:
Matthew Parker (CPNSS, London School of Economics and Political Science, UK)
Location: Room 402
15:15
Maria Panagiotatou (National and Kapodistrian University of Athens, Greece)
The quantum measurement as a physical interaction

ABSTRACT. The aim of the talk is to argue that there is ground for treating quantum measurement simply as a physical, mind-independent interaction. Electrons, for example, manifest definite spin along certain axes due to physical facts or physical procedures that occur in nature; and it is these physical facts or procedures that guarantee definite results when we perform measurements. The fact that the dynamical equations of standard quantum mechanics cannot describe what exactly goes on during the measurement process does not justify appeal to mind-dependence, especially when this does not solve or explain anything. The measurement problem is something that working physicists have to face up to: they can either retain standard quantum mechanics, acknowledging that the world at the quantum level behaves as the theory describes, or modify it, or replace it with another theory. However, what if the electron is in the |y+⟩_e state and we want to measure its spin component along the x-axis? Which physical interaction forces it to adopt a value along the x-axis? Can we tell a coherent story about that kind of experiment when taking the measurement to be only a physical interaction? I think we can, and I will articulate one using the well-known Stern-Gerlach experiment.

15:45
Foad Dizadji-Bahmani (California State University Los Angeles, United States)
In Defence of Branch Counting in Everettian Quantum Mechanics

ABSTRACT. The main challenge for the Everett interpretation of quantum mechanics (EQM) is the ‘probability problem’: If every possible outcome is actualized in some branch, how can EQM make sense of the probability of a single outcome as given by the Born rule, which, after all, makes for the empirical adequacy of the original theory?

Advocates of EQM have sought to make conceptual room for epistemic probabilities. There are two prominent approaches: the decision-theoretic approach (Deutsch (1999), Greaves (2004), Wallace (2012)) and the self-location uncertainty approach (Vaidman (1998; 2011), Sebens/Carroll (2016)). In the decision-theoretic approach, one conceives of rational agents facing hypothetical bets pertaining to the outcomes of quantum measurements. Whilst some such agent knows that all the possible outcomes are instantiated, she may nonetheless have betting preferences from which her credences can be operationalized. In the self-location approach one conceives of a rational agent who has undergone branching but is yet to see the outcome of the quantum measurement and is therefore uncertain about which branch she is in, and probabilities here are her credences about her self-location.

Both approaches aim to show that a rational agent is required to set her credences as per the Born rule. In the first, the result is variously proved from a set of decision-theoretic axioms. In the second, Sebens/Carroll prove the result from a single principle, their "Epistemic Separability Principle" (ESP).

Prima facie, the right way to set one’s credences is by “Branch Counting” (BC): the credence a rational agent ought to have in a particular quantum measurement outcome is equal to the ratio of the number of branches in which that (kind of) outcome is actualized to the total number of branches. After all, each branch is equally real.

BC is demonstrably at odds with the Born rule and thus advocates of EQM have sought to argue against BC in various ways. The aim of this paper is to show that these arguments are not persuasive, and that, therefore, the probability problem in EQM has not been solved, neither in the decision- theoretic approach nor the self-location uncertainty approach. Specifically, I consider three different ways in which BC has been attacked: 1) that BC is not rational because an agent using it can be Dutch-booked; 2) that BC is not rational because there is no such thing as the number of branches in EQM; and 3) that BC is not rational because it conflicts with a more fundamental principle of rationality, namely ESP.
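The conflict between BC and the Born rule can be seen in a minimal sketch (the amplitudes are illustrative): for an unequal-weight superposition, the Born rule weights the outcomes by squared amplitudes, while naive branch counting assigns them equal credence.

# Illustrative amplitudes only: Born weights vs. naive branch counting
# for a two-outcome measurement.
import numpy as np

amplitudes = np.array([np.sqrt(0.9), np.sqrt(0.1)])

born = np.abs(amplitudes) ** 2                                    # 0.9, 0.1
branch_counting = np.full(len(amplitudes), 1 / len(amplitudes))   # 0.5, 0.5

print("Born rule:      ", born)
print("branch counting:", branch_counting)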

Apropos 1: Wallace (2012) argues that BC is irrational because an agent accepting it can be subjected to a diachronic Dutch book. However, I show that in the Everettian multiverse such a diachronic Dutch book cannot even be constructed. That it seems to be constructible owes to certain underspecifications of when bets are placed and paid out. Once these are fully specified, there is no Dutch book.

Apropos 2: Wallace (2003, 2007, 2012) has argued that BC is irrational because there is no such thing as the number of branches. Following Dizadji-Bahmani (2015), a distinction is drawn between two possible claims: that the number of branches is indeterminate (a metaphysical claim) and that the number of branches is indeterminable (an epistemological claim). It is argued that the former claim is not defensible and that the second claim would only show that BC is irrational given a further strong assumption which is not justifiable. Here it is shown that this analysis extends to the Sebens/Carroll (2016) approach.

Apropos 3: The Sebens/Carroll (2016) self-location uncertainty approach turns on ESP, which, in short, requires that the "credence one should assign to being any one of several observers having identical experiences is independent of the state of the environment." Adopting this principle, they argue that BC is (in some cases) irrational. This is shown by their thought experiment 'Once-Or-Twice', wherein two agents face a bifurcation at time t2, and then a trifurcation at time t3, of their initial branch. BC is inconsistent with ESP in this case, and they advocate adopting the latter. I argue contra this that A) BC is a far more intuitive principle than ESP in the given context, and that B) whilst tracing out the environment - a crucial move in the Sebens/Carroll argument - leaves the mathematical representation of Alice's state from t2 to t3 unchanged, there is an important physical difference between these two states, to which Alice is in principle privy, and this warrants changing her credences in violation of the Born rule.

15:15-15:45 Session 31L: C5 Philosophy of the cognitive and behavioral sciences
Chair:
Valeria Giardino (Archives Henri Poincaré, Nancy, France)
Location: Room 202
15:15
Sajjad Karmaly (Universty Paris-Sorbonne, France)
Why cognitive kinds can't be the kind of kinds that are natural kinds? A new hypothesis for natural kinds

ABSTRACT. It is generally assumed by philosophers that natural kinds mirror the causal structure of the world or correspond to its real divisions. Ultimately, natural kinds are to be found in the fixed order of nature, which we humans endeavour to pick out through different operations.

Bird and Tobin (2012) offer a very clear-cut characterization of natural kinds, put in the following terms: "To say that a kind is natural is to say that it corresponds to a grouping or ordering that does not depend on humans". They are quite right to underline that natural kinds are independent of human whims and desires, and that what makes a natural kind natural is its naturalness; in other words, the main criterion warranting 'naturalness' is human mind-independence. Nonetheless, such notions as 'natural' or 'mind' require thorough characterization.

Four main views of natural kinds have been put forward since the late 1970s: essentialism, conventionalism, the homeostatic property cluster view (Boyd, 1991), and natural binding practices. Each of these views is unsatisfactory with respect to natural kinds and cognitive kinds; for example, Sullivan (2016) contends that "scientific practice in the neurosciences of cognition is not conducive to the discovery of natural kinds of cognitive capacities", and Irvine (2012) makes the eliminative statement that "the lack of well-demarcated and suitably targeted mechanisms of consciousness means that there are no scientific kinds that 'consciousness' refers to".

The main purpose of this presentation is to hypothesize a new view of natural kinds embedded in common sense realism and active realism (Chang, 2012; Sankey, 2014). To that end, I will first lay down a critique of the assumption that natural kinds are also those events, phenomena, mechanisms, processes, relationships, properties, functions, capacities, laws and concepts that play the role of a demarcating tool in the structure of nature, claiming that their relative complexity and enmeshed layers do not support their candidacy as natural kinds. I will then set out reasons why cognitive kinds cannot be natural kinds, maintaining that at best they are no more than brain functions. Finally, on the basis of the evidence delineated above, I will devise arguments for a new hypothesis about what counts as a natural kind.

Bird, A. and Tobin, E. (2012) ‘Natural Kinds’, in: Zalta, E. (ed.) The Stanford Encyclopedia of Philosophy (Winter 2012 edition) Boyd, R. (1991). Realism, anti-foundationalism and the enthusiasm for natural kinds. Philosophical Studies, 61, 127–148. Chang, Hasok (2012). Is Water H2O? Evidence, Realism and Pluralism. Boston Studies in the Philosophy and History of Science. Irvine, Elizabeth (2012). Consciousness as a scientific concept: a philosophy of science perspective. Springer. Kendig, Catherine (ed.) (2016). Natural Kinds and Classification in Scientific Practice. Routledge. Kornblith, Hilary (1993). Inductive Inference and its Natural Ground. MIT Press. Sankey, Howard (2014). Scientific Realism and Basic Common Sense. Kairos 10:11-24. Sullivan, Jacqueline Anne (2016). Neuroscientific Kinds Through the Lens of Scientific Practice. In Catherine Kendig (ed.), Natural Kinds and Classification in Scientific Practice. Routledge. pp. 47-56. Wilkerson, T. E. (1988). Natural Kinds: T. E. Wilkerson. Philosophy 63 (243):29-42.

15:15-16:45 Session 31M: A1 Mathematical logic 5
Chair:
Joachim Hertel (H-Star, Inc, United States)
Location: Room 347
15:15
Giovanni Marco Martino (San Raffaele University, Italy)
An algebraic model for Frege's Basic Law V

ABSTRACT. 1. As is well known, the system of Frege's Grundgesetze is formally inconsistent. Indeed, BLV, (εF(ε) = εG(ε)) ↔ ∀x(Fx ↔ Gx), together with full CA, ∃X∀x(Xx ↔ φ(x)), leads to inconsistency: Russell's Paradox is derivable, ∃X∀x(Xx ↔ ∃Y[x = ŷ(Yy) ∧ ¬Yx]). In recent years, Heck, Ferreira and Wehmeier have pointed out that BLV is consistent under suitable predicative restrictions. However, they have succeeded in recovering at most Robinson's Q. In contrast to any predicative setting, I shall employ a fully impredicative approach, using some crucial algebraic intuitions, in order to give a model for the theory TK. My aim is a model-theoretically consistent representation of Frege's Grundgesetze. Neither BLV nor CA will be syntactically restricted: I shall only impose a semantic restriction on BLV.

2. The above-mentioned characterisation proceeds in two stages. First, I fix a domain of interpretation M = ⟨D, ⊆⟩, where D = ℘(ω) is a poset and ⊆ is a reflexive, antisymmetric and transitive relation over D. I then define over M an order-preserving monotone function φ. By Moschovakis, φ has the least-fixed-point property. My purpose is to apply φ to TK-predicates: only φ-monotone predicates with the least-fixed-point property deliver concepts. An interpretation of the syntax of TK is given in agreement with this structure: the pair (E, A), extension and anti-extension, interprets each second-order variable Fi, where E(Fi) ⊆ M2 (the second-order domain), A(Fi) ⊆ M2, and E(Fi) ∩ A(Fi) = ∅; the function ν : π → M1 (the first-order domain) interprets ε, where π ⊆ M2 is the set of all φ-monotone predicates with fixed extension or anti-extension (it is thus also clear how BLV is restricted); the quantifiers are interpreted by the standard second-order definitions. Second, in order to fix the denotation of every ε-term (VR-term), I generalise M to the triple ⟨M, ⊆, F⟩, where ⟨M, ⊆⟩ is a poset and ⟨M, F⟩ is a field of sets, with F ⊆ ℘(ω) and F = π. Under this Boolean-algebraic representation, to every point in M there corresponds an M1 individual of TK, and to every complex in F there corresponds a φ-monotone predicate. Such a structure is thus a model of TK. Finally, TK turns out to be both consistent and strong enough to recover FA. Russell's paradox is blocked for the following reason: let R(y) = ∃F[y = x̂(Fx) ∧ ¬Fy]. R is not φ-monotone, it does not deliver any concept, and there is no corresponding VR-term: R ∉ Eσ ∩ Aσ. Furthermore, TK manages to recover FA: I may form the concept N(x) =def Pred+(0, x) because, unlike in a merely predicative fragment, there are at least Dedekind-infinitely many M1 individuals that fall under it. If Pred+(0, x) = ∃F∃u(Fu ∧ y = #F ∧ x = #[λz.Fz ∧ z ≠ u]), TK proves that F is φ-monotone, i.e. there is a corresponding VR-term.

References - Ferreira F., and Wehmeier K. F., On the consistency of the Δ¹₁-CA fragment of Frege's Grundgesetze, Journal of Philosophical Logic, 31 (2002) 4, pp. 301-311.

- Goldblatt, R., Varieties of Complex Algebra, Annals of Pure and Applied Logic, 44 (1989) 3, pp. 173-242.

- Heck, R. G., The consistency of predicative fragments of Frege's Grundgesetze der Arithmetik, History and Philosophy of Logic, 17 (1996) 4, pp. 209-220.

- Moschovakis, Y., Notes on Set Theory, New York: Springer, 2006 (2nd edition).

15:45
Nurlan Markhabatov (Novosibirsk State Technical University, Russia)
Sergey Sudoplatov (Sobolev Institute of Mathematics, Novosibirsk State Technical University, Novosibirsk State University, Russia)
On calculi and ranks for definable families of theories

ABSTRACT. Let $\mathcal{T}$ be a family of first-order complete theories in a language $L$, $\mathcal{T}_L$ be the family of all first-order complete theories in a language $L$. For a set $\Phi$ of $L$-sentences we put $\mathcal{T}_\Phi=\{T\in\mathcal{T}\mid T\models\Phi\}$. A family of the form $\mathcal{T}_\Phi$ is called {\em $d$-definable} (in $\mathcal{T}$). If $\Phi$ is a singleton $\{\varphi\}$ then $\mathcal{T}_\varphi=\mathcal{T}_\Phi$ is called {\em $s$-definable}.

We consider properties of calculi for families $\mathcal{T}$ with respect to the relations $\vdash_\mathcal{T}$, where $\Phi\vdash_\mathcal{T}\Psi\Leftrightarrow\mathcal{T}_\Phi\subseteq \mathcal{T}_\Psi$. We use terminology from \cite{csMS, rsMS18} including $E$-closure ${\rm Cl}_E(\mathcal{T})$, rank ${\rm RS}(\mathcal{T})$, and degree ${\rm ds}(\mathcal{T})$.

\begin{theorem}\label{th1_MS} For any sets $\Phi$ and $\Psi$ of sentences and a family $\mathcal{T}$ of theories the following conditions are equivalent:

$(1)$ $\Phi\vdash_\mathcal{T}\Psi$;

$(2)$ $\Phi\vdash_{\mathcal{T}_0}\Psi$ for any finite $\mathcal{T}_0\subseteq\mathcal{T}$;

$(3)$ $\Phi\vdash_{\{T\}}\Psi$ for any singleton $\{T\}\subseteq\mathcal{T}$;

$(4)$ $\Phi\vdash_{{\rm Cl}_E(\mathcal{T})}\Psi$. \end{theorem}

\begin{theorem}\label{th2_MS} For any sets $\Phi$ and $\Psi$ of sentences in a language $\Sigma$ the following conditions are equivalent:

$(1)$ $\Phi\vdash\Psi$, i.e., each sentence in $\Psi$ is forced by some conjunction of sentences in $\Phi$;

$(2)$ $\Phi\vdash_{\mathcal{T}_L}\Psi$;

$(3)$ $\Phi\vdash_{\mathcal{T}}\Psi$ for any {\rm (}finite{\rm )} family {\rm (}singleton{\rm )} $\mathcal{T}\subseteq \mathcal{T}_L$;

$(4)$ $\Phi\vdash_{\mathcal{T}}\Psi$ for any {\rm (}finite{\rm )} family {\rm (}singleton{\rm )} $\mathcal{T}$. \end{theorem}

\begin{theorem}\label{th3_MS} A subfamily $\mathcal{T}'\subseteq\mathcal{T}$ is $d$-definable in $\mathcal{T}$ if and only if $\mathcal{T}'$ is $E$-closed in $\mathcal{T}$, i.e., $\mathcal{T}'={\rm Cl}_E(\mathcal{T}')\cap \mathcal{T}$. \end{theorem}

\begin{theorem}\label{th4_MS} For any ordinals $\alpha\leq\beta$, if ${\rm RS}(\mathcal{T})=\beta$ then ${\rm RS}(\mathcal{T}_\varphi)=\alpha$ for some {\rm (}$\alpha$-ranking{\rm )} sentence $\varphi$. Moreover, there are ${\rm ds}(\mathcal{T})$ pairwise $\mathcal{T}$-inconsistent $\beta$-ranking sentences for $\mathcal{T}$, and if $\alpha<\beta$ then there are infinitely many pairwise $\mathcal{T}$-inconsistent $\alpha$-ranking sentences for $\mathcal{T}$. \end{theorem}

\begin{theorem}\label{th5_MS} Let $\mathcal{T}$ be a family of a countable language $\Sigma$ and with ${\rm RS}(\mathcal{T})=\infty$, $\alpha\in\{0,1\}$, $n\in\omega\setminus\{0\}$. Then there is a $d$-definable subfamily $\mathcal{T}_\Phi$ such that ${\rm RS}(\mathcal{T}_\Phi)=1$ and ${\rm ds}(\mathcal{T}_\Phi)=n$. \end{theorem}

This research was partially supported by the Committee of Science of the Ministry of Education and Science of the Republic of Kazakhstan (Grants No. AP05132349, AP05132546) and the Russian Foundation for Basic Research (Project No. 17-01-00531-a).

\begin{thebibliography}{10} \bibitem{csMS} {\scshape S.V.~Sudoplatov}, {\itshape Closures and generating sets related to combinations of structures}, {\bfseries\itshape The Bulletin of Irkutsk State University. Series ``Mathematics''}, vol.~16 (2016), pp.~131--144. \bibitem{rsMS18} {\scshape S.V.~Sudoplatov}, {\itshape On ranks for families of theories and their spectra}, {\bfseries\itshape International Conference ``Mal'tsev Meeting'', November 19--22, 2018, Collection of Abstracts}, Novosibirsk: Sobolev Institute of Mathematics, Novosibirsk State University, 2018, pp.~216. \end{thebibliography}

16:15
Nikolay Bazhenov (Sobolev Institute of Mathematics, Russia)
Manat Mustafa (Mathematics Department,SST, Nazarbayev University, Kazakhstan)
Mars Yamaleev (Kazan (Volga Region) Federal University, Russia)
Semilattices of numberings
PRESENTER: Nikolay Bazhenov

ABSTRACT. Uniform computations for families of mathematical objects constitute a classical line of research in computability theory. Formal methods for studying such computations are provided by the theory of numberings. The theory goes back to the seminal article of G{\"o}del, where an effective numbering of first-order formulae was used in the proof of the incompleteness theorems. To name only a few, computable numberings were studied by Badaev, Ershov, Friedberg, Goncharov, Kleene, Kolmogorov, Lachlan, Mal'tsev, Rogers, Uspenskii, and many other researchers.

Let $S$ be a countable set. A numbering of $S$ is a surjective map $\nu$ from the set of natural numbers $\omega$ onto $S$. A standard tool for measuring the algorithmic complexity of numberings is provided by the notion of reducibility between numberings: A numbering $\nu$ is reducible to another numbering $\mu$ if there is a total computable function $f(x)$ such that $\nu(x) = \mu( f(x) )$ for all $x\in\omega$. In other words, there is an effective procedure which, given a $\nu$-index of an object from $S$, computes a $\mu$-index for the same object. In a standard recursion-theoretical way, the notion of reducibility between numberings gives rise to an upper semilattice, which is usually called the Rogers semilattice. Rogers semilattices allow one to compare different computations of a given family of sets, and they also provide a tool for classifying properties of computable numberings for different families. Following this approach, one can formulate most of the problems on numberings in terms of Rogers semilattices.
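As a toy illustration of this reducibility definition (the family, the numberings and the reduction function below are invented for the example, not taken from the abstract), consider a two-element family with two numberings and a computable function witnessing the reduction.

# Toy example: nu and mu are two numberings (surjections from the naturals)
# of the family S = {EVENS, ODDS}; f is a total computable function
# witnessing that nu is reducible to mu, i.e. nu(x) == mu(f(x)) for all x.
EVENS, ODDS = "evens", "odds"

def nu(n: int) -> str:
    return EVENS if n % 2 == 0 else ODDS

def mu(n: int) -> str:
    return EVENS if n % 3 == 0 else ODDS

def f(n: int) -> int:
    # send nu-indices of EVENS to a mu-index of EVENS, and likewise for ODDS
    return 0 if n % 2 == 0 else 1

assert all(nu(x) == mu(f(x)) for x in range(1000))
print("nu is reducible to mu via f")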

Goncharov and Sorbi [Algebra Logic, 36:6 (1997), 359--369] started developing the theory of generalized computable numberings. We follow their approach and work in the following setting: Given a complexity class, say $\Sigma^0_n$, we consider the upper semilattice $R_{\Sigma^0_n}$ of all $\Sigma^0_{n}$-computable numberings of all $\Sigma^0_n$-computable families of subsets of $\omega$.

We prove that the theory of the semilattice of all computable numberings is computably isomorphic to first-order arithmetic. We show that the theory of the semilattice of all numberings is computably isomorphic to second-order arithmetic. Furthermore, it is shown that for each of the theories $T$ mentioned above, the $\Pi_5$-fragment of $T$ is hereditarily undecidable. We also discuss related results on various algorithmic reducibilities.

16:15-16:45 Coffee Break