
Organizer: Aviezer Tucker

Philosophers have attempted to distinguish the Historical Sciences at least since the Neo-Kantians. The Historical Sciences attempt to rigorously infer descriptions of past events, processes, and their relations from their information-preserving effects. Historical sciences infer token common causes or origins: phylogeny and evolutionary biology infer the origins of species from information-preserving similarities between species, DNA, and fossils; comparative historical linguistics infers the origins of languages from information-preserving aspects of existing languages and theories about the mutation and preservation of languages over time; archaeology infers the common causes of present material remains; Critical Historiography infers the human past from testimonies from the past and material remains; and Cosmology infers the origins of the universe. By contrast, the Theoretical Sciences are not interested in any particular token event, but in types of events: Physics is interested in the atom, not in this or that atom at a particular place and time; Biology is interested in the cell, or in types of cells, not in this or that token cell; Economics is interested in modeling recessions, not in this recession; and Generative Linguistics studies “Language,” not any particular language that existed at a particular time and was spoken by a particular group of people. The distinctions between realms of nature and academic disciplines may be epistemically and methodologically arbitrary. If, from an epistemic and methodological perspective, historiography, the study of the human past, has more in common with Geology than with the Social Sciences, which in turn have more in common with Agronomy than with historiography, then we need to redraw the boundaries of the philosophies of the special disciplines. This is of course highly controversial and runs counter to attempts to distinguish the historical sciences by their use of narrative explanations, reenactment, or empathic understanding.

The Historical Sciences may be distinguished from the Theoretical Sciences according to their objects of study, tokens vs. types; from the Experimental Sciences according to their methodologies, inference from evidence vs. experimenting with it; and from the natural sciences according to the realm of nature they occupy. The criteria philosophers proposed for these distinctions were related to larger issues in epistemology: Do the Historical and Theoretical Sciences offer different kinds of knowledge? Do the Historical and Theoretical Sciences support each other’s claims to knowledge, and if so, how?; in metaphysics and ontology: Do the types of objects the Historical and Theoretical Sciences attempt to study, represent, describe, or explain differ, and if so, how does this affect their methodologies?; and in the philosophy of science: What is science, and how do the Historical and Theoretical Sciences relate to this ideal?

Organizers: Valentin Goranko and Frederik Van De Putte

The concept of rational agency is broadly interdisciplinary, bringing together philosophy, social psychology, sociology, and decision and game theory. The scope and impact of the area of rational agency have been steadily expanding over the past decades, also involving technical disciplines such as computer science and AI, where multi-agent systems of different kinds (e.g. robotic teams, computer and social networks, institutions, etc.) have become a focal point for modelling and analysis.

Rational agency relates to a range of key concepts: knowledge, beliefs, communication, norms, action and interaction, strategic ability, cooperation and competition, social choice, etc. The use of formal models and logic-based methods for analysing these and other aspects of rational agency has become an increasingly popular and successful approach to dealing with their complex diversity and interaction.

This symposium will bring together different perspectives and approaches to the study of rational agency and rational interaction in the context of philosophical logic.

The symposium talks are divided into three thematic clusters, each representing a session and consisting of 4-5 presentations, as follows.

I. Logic, Rationality, and Game-theoretic Semantics. Applying logic-based methods and formal logical systems to reasoning in decision and game theory is a major and increasingly popular approach to agency and rationality. Formal logical languages allow us to specify principles of strategic behaviour and interaction between agents, and essential game-theoretic notions, including solution concepts and rationality principles. Formal logical systems provide precise and unambiguous semantics and enable correct and reliable reasoning about these, while involving the concepts of knowledge, beliefs, intentions, ability, etc.

II. Deontic Logic, Agency, and Action. Logics of agency and interaction such as STIT and deontic logics have been very influential and generally appreciated approaches to normative reasoning and theory of actions. Active directions of research in this area include the normative status of actions vs. propositions, causality and responsibility, collective and group oughts and permissions, and further refinements of the STIT framework stemming from the works of Belnap, Horty and others.

III. Logic, Social Epistemology, and Collective Decision-making. Rational agency and interaction also presuppose an epistemological dimension, while intentional group agency is inextricably linked to social choice theory. In this thematic cluster, various logical and formal models are discussed that allow shedding light on these factors and processes.

09:00 | Varieties of permission for complex actions ABSTRACT. The main problem in deontic logic based on propositional dynamic logic is how to define the normative status of complex actions based on the normative status of atomic actions, transitions and states. There are two main approaches to this problem in the literature: the first defines the normative status of an action in terms of the normative status of the possible outcome states of the action (Broersen, 2004; Meyer, 1988), while the second defines the normative status of an action in terms of the normative status of the transitions occurring in the possible executions of the action (van der Meyden, 1996). In this work, I focus on interpretations of permission concepts. In particular, I address what I take to be two shortcomings in the two main approaches to permission in dynamic logic. First, when assessing an agent's behavior from a normative viewpoint, one must often take into account both the results brought about by the agent, and the means by which those results were brought about. Consequently, when deciding whether a complex action is to be permitted or not, one must, in many cases, take into account both the normative status of the possible outcome states of the action, and the normative status of the atomic actions that occur in the complex action: choosing one of the two is not enough. Second, most existing accounts, with the exception of the work of Kulicki and Trypuz (2015), consider the permissibility of actions only relative to their complete executions, i.e. the possible executions where each step in the complex action is carried out. However, in the presence of non-determinism it may happen that some initial part of a complex sequential action leads to a state where the remaining part of the action cannot be executed. This possibility can lead to counterintuitive consequences when one considers strong forms of permission in combination with non-deterministic choice. 
Such cases show that partial executions of complex actions are also important from a normative viewpoint. Taking both permitted states and permitted atomic actions as primitive allows for a wide variety of permission concepts for complex actions to be defined. Moreover, the distinction between complete and partial executions of complex actions offers further options for defining permission concepts. Based on these points, I define a variety of permission concepts and investigate their formal properties. References: Broersen, J. (2004). Action negation and alternative reductions for dynamic deontic logic. Journal of Applied Logic 2, 153-168. Kulicki, P., and Trypuz, R. (2015). Completely and partially executable sequences of actions in deontic context. Synthese 192, 1117-1138. Meyer, J.-J. Ch. (1988). A different approach to deontic logic: Deontic logic viewed as a variant of dynamic logic. Notre Dame Journal of Formal Logic 29(1), 109-136. van der Meyden, R. (1996). The dynamic logic of permission. Journal of Logic and Computation 6(3), 465-479. |
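The role of partial executions under non-determinism can be made concrete in a small sketch. This is not the author's formalism: the toy transition system, the permitted sets, and all names below are invented for illustration only.

```python
# Toy model: atomic actions as labelled transitions between states.
TRANSITIONS = {
    "a": [(0, 1), (0, 2)],  # non-deterministic: from state 0, "a" may reach 1 or 2
    "b": [(1, 3)],          # "b" is executable only in state 1
}
PERMITTED_ATOMS = {"a", "b"}  # atomic actions judged permitted
PERMITTED_STATES = {1, 3}     # outcome states judged permitted

def runs(seq, state):
    """All maximal executions of a sequence of atomic actions from a state.
    A run shorter than seq is a partial execution that got stuck."""
    if not seq:
        return [[]]
    head, tail = seq[0], seq[1:]
    steps = [(s, t) for (s, t) in TRANSITIONS[head] if s == state]
    if not steps:
        return [[]]  # stuck here: nothing further can be executed
    return [[(head, s, t)] + rest
            for (s, t) in steps
            for rest in runs(tail, t)]

def strongly_permitted(seq, state):
    """A strong permission concept using both primitives: every execution
    must use only permitted atoms, must end in a permitted state, and must
    not get stuck partway through."""
    for run in runs(seq, state):
        if len(run) < len(seq):
            return False  # a partial execution strands the agent
        if any(act not in PERMITTED_ATOMS for (act, _, _) in run):
            return False
        if run and run[-1][2] not in PERMITTED_STATES:
            return False
    return True
```

In this toy model the sequential action a;b is not strongly permitted from state 0: one non-deterministic resolution of a leads to state 2, where b cannot be executed, which is the kind of stuck partial execution the abstract points to.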

09:30 | From Oughts to Goals PRESENTER: Alessandra Marra ABSTRACT. Suppose I believe sincerely and with conviction that today I ought to repay my friend Ann the 10 euro that she lent me. But I do not make any plan for repaying my debt: Instead, I arrange to spend my entire day at the local spa enjoying aromatherapy treatments. This seems wrong. Enkrasia is the principle of rationality that rules out the above situation. More specifically, by (an interpretation of) the Enkratic principle, rationality requires that if an agent sincerely and with conviction believes she ought to X, then X-ing is a goal in her plan. This principle plays a central role within the domain of practical rationality, and has recently been receiving considerable attention in practical philosophy (see Broome 2013, Horty 2015). This presentation pursues two aims. Firstly, we want to analyze the logical structure of Enkrasia in light of the interpretation just described. This is, to the best of our knowledge, a largely novel project within the literature. Much existing work in modal logic deals with various aspects of practical rationality starting from Cohen and Levesque's seminal 1990 paper. The framework presented here aims to complement this literature by explicitly addressing Enkrasia. The principle, in fact, bears some non-trivial conceptual and formal implications. This leads to the second aim of the talk. We want to address the repercussions that Enkrasia has for deontic logic. To this end, we elaborate on the distinction between so-called “basic oughts" and “derived oughts", and show how this distinction is especially meaningful in the context of Enkrasia. Moreover, we address issues related to the filtering of inconsistent oughts, the restricted validity of deontic closure, and the stability of oughts and goals under dynamics. 
In pursuit of these two aims, we provide a multi-modal neighborhood logic with three characteristic operators: A non-normal operator for basic oughts, a non-normal operator for goals in plans, and a normal operator for derived oughts. Based on these operators we build two modal logical languages with different expressive powers. Both languages are evaluated on tree-like models of future courses of events, enriched with additional structure representing basic oughts, goals and derived oughts. We show that the two modal languages are sound and weakly (resp. strongly) complete with respect to the class of models defined. Moreover, we provide a dynamic extension of the logic by means of product updates. |
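As one illustration of the kind of machinery involved (not the authors' system: the model, valuation, and names below are invented), a non-normal operator can be evaluated with a neighborhood function rather than an accessibility relation:

```python
# A tiny neighborhood model: at each world, NBHD lists the propositions
# (as sets of worlds) that count as basic oughts there.
WORLDS = {"w1", "w2", "w3"}
VAL = {"repay": {"w1", "w2"}, "spa": {"w3"}}            # atomic valuations
NBHD = {"w1": [{"w1", "w2"}], "w2": [], "w3": [{"w3"}]}  # neighborhood function

def ought(prop, world):
    """O(prop) holds at world iff prop's truth set is itself one of the
    world's neighborhoods; membership is checked by set identity, not by
    quantifying over accessible worlds as a normal operator would."""
    return VAL[prop] in NBHD[world]
```

Because the operator checks set identity rather than accessibility, O is not automatically closed under logical consequence, which is what makes the restricted validity of deontic closure a live issue for such logics.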

10:00 | Reciprocal Group Oughts PRESENTER: Thijs De Coninck ABSTRACT. In [2], Horty shows that the framework of STIT logic can be used to reason about what agents and groups ought to do in a multi-agent setting. To decide what groups ought to do he relies on a utility function that assigns a unique value to each possible outcome of their group actions. He then makes use of a dominance relation to define the optimal choices of a group. When generalizing the utilitarian models of Horty to cases where each agent has his own utility function, Horty’s approach requires each group to have a utility function as well. There are several ways to do this. In [4], each group is assigned an independent utility function. This has the disadvantage that there is no connection between the preferences of a group and its members. Another option is to define the utility of a given outcome for a group of agents as the mean of the utilities of that outcome for the group’s members, as is done in [3]. However, this requires that utilities of individual agents be comparable. A third option is pursued in [5], where Turrini proposes to generalize Horty’s notion of dominance such that an action of a group X dominates another action X' just in case, relative to the utility function of each group member, X dominates X'. The optimal actions of a group can then be defined using this modified dominance relation. This approach, however, leads to undesirable outcomes in certain types of strategic interaction (e.g. a prisoner’s dilemma). Here, we present a new approach towards evaluating group actions in STIT logic by taking considerations of reciprocity into account. By reciprocity we mean that agents can help each other reach their desired outcomes through choosing actions that are in each other’s interest. We draw upon the work of Grossi and Turrini [1] to identify certain group actions as having different types of reciprocal properties. 
For example, a group action can be such that, for each agent a in the group, there is some other agent a' such that the action of a' is optimal given the utility function of a. We compare these properties and show that by first selecting a certain type of reciprocal action and only then applying dominance reasoning we are left with group actions that have a number of desirable properties. Next, we show in which types of situations agents can expect to benefit by doing their part in these reciprocal group actions. We then define what groups ought to do in terms of the optimal reciprocal group actions. We call the resulting deontic claims reciprocal oughts, in contradistinction to the utilitarian oughts of [2] and strategic oughts of [3]. We end by comparing each of these deontic operators using some examples of strategic interaction. References: [1] Davide Grossi and Paolo Turrini. Dependence in games and dependence games. Autonomous Agents and Multi-Agent Systems, 25(2):284–312, 2012. [2] John F. Horty. Agency and deontic logic. Oxford University Press, 2001. [3] Barteld Kooi and Allard Tamminga. Moral conflicts between groups of agents. Journal of Philosophical Logic, 37(1):1–21, 2008. [4] Allard Tamminga. Deontic logic for strategic games. Erkenntnis, 78(1):183–200, 2013. [5] Paolo Turrini. Agreements as norms. In International Conference on Deontic Logic in Computer Science, pages 31–45. Springer, 2012. |
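One toy rendering of the unanimous-dominance idea, sketched with invented payoffs and names rather than the authors' definitions, on a standard prisoner's dilemma:

```python
# Joint actions of the two-agent group, with each member's utility.
PAYOFF = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def unanimously_dominates(x, y):
    """x dominates y iff it does so relative to every member's utility:
    weakly better for all members, strictly better for at least one."""
    ux, uy = PAYOFF[x], PAYOFF[y]
    return all(a >= b for a, b in zip(ux, uy)) and ux != uy

def optimal_profiles():
    """Group actions not unanimously dominated by any alternative."""
    return [x for x in PAYOFF
            if not any(unanimously_dominates(y, x) for y in PAYOFF)]
```

On this rendering only mutual defection is ruled out, while the exploitative profiles (C,D) and (D,C) survive alongside mutual cooperation, illustrating the kind of problem the abstract attributes to purely dominance-based approaches and motivating a prior selection of reciprocal actions.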

Organizer: Zuzana Parusniková

Of all philosophers of the 20th century, few built more bridges between academic disciplines than did Karl Popper. For most of his life, he made contributions to a wide variety of fields in addition to the epistemology and the theory of scientific method for which he is best known. Problems in quantum mechanics, and in the theory of probability, dominate the second half of Popper's Logik der Forschung (1934), and several of the earliest items recorded in §2 ('Speeches and Articles') of Volume 1 of The Writings of Karl Popper, such as item 2-5 on the quantum-mechanical uncertainty relations, item 2-14 on nebular red-shifts, and item 2-43 (and other articles) on the arrow of time, show his enthusiasm for substantive problems in modern physics and cosmology. Interspersed with these were a number of articles in the 1940s on mathematical logic, and in the 1950s on the axiomatization of the theory of probability (and on other technical problems in this area). Later he made significant contributions to discussions in evolutionary biology and on the problem of consciousness. All these interests (except perhaps his interest in formal logic) continued unabated throughout his life.

The aim of this symposium is to illustrate, and to evaluate, some of the interventions, both substantive and methodological, that Karl Popper made in the natural and mathematical sciences. An attempt will be made to pinpoint the connections between these contributions and his more centrally philosophical concerns, especially his scepticism, his realism, his opposition to subjectivism, and his indeterminism.

The fields that have been chosen for the symposium are quantum mechanics, evolutionary biology, cosmology, mathematical logic, statistics, and the brain-mind liaison.

Kateřina Trlifajová (Czech Technical University, Czechia)

09:00 | Machine learning: a new technoscience ABSTRACT. Jean-Michel Kantor, IMJ, Jussieu, Paris. There has been a tremendous development of computerized systems for artificial intelligence in the last thirty years. In some domains, machines now get better results than humans: playing chess or even Go, winning over the best champions; medical diagnosis (for example in oncology); automatic translation; vision: recognizing faces in one second from millions of photos. These successes rely on: progress in hardware technology, in computational speed, and in the capacity of Big Data; new ideas in the structure of computers, with neural networks inspired originally by the structure of visual processing in the brain; and progress in mathematical algorithms for exploiting statistical data, extensions of Markovian methods. These developments have led the main actors to talk about a new science, or rather a new technoscience, machine learning, defined by the fact that it is able to improve its own capacities by itself (see [L]). Some opponents give various reasons for their scepticism: some follow an old tradition of identifying "Numbers" with modern industrial civilization [W]; some offer theoretical arguments coming from the foundations of information and complexity theory [Ma]; some doubt the Bayesian inferential approach to science, refusing prediction without understanding [Th], which might lead to a radical attack on classical science [H]. In particular, the technique of neural networks has created a new type of knowledge with the very particular mystery of the "black box". We will describe the new kind of "truth without verifiability" that issues from this practice. We will discuss these various topics carefully, in particular: Is it a new science or a new technoscience?
Where is the limit between the science of machine learning and the various conjectural visions leading to the science-fiction ideas of transhumanism? What are the possible consequences of the recent successes of AI for our approach to language, and to the intelligence of human cognitive functioning in general? And finally, what are the limits of this numerical invasion of the world? References: [H] Horgan, J. The End of Science. Broadway Books. [L] LeCun, Y. Le Deep Learning: une révolution en intelligence artificielle. Collège de France, February 2016. [Ma] Manin, Y. Kolmogorov complexity as a hidden factor of scientific discourse: from Newton's law to data mining. [Th] Thom, R. Prédire n'est pas expliquer. Eschel, 1991. [W] Weil, S. Cahiers. |

09:30 | Can machine learning extend bureaucratic decisions? PRESENTER: Maël Pégny ABSTRACT. In the recent literature, there has been much discussion of the explainability of ML algorithms. This property of explainability, or the lack thereof, is critical not only in scientific contexts, but for the potential use of those algorithms in public affairs. In this presentation, we focus on the explainability of bureaucratic procedures to the general public. The use of unexplainable black boxes in administrative decisions would raise fundamental legal and political issues, as the public needs to understand bureaucratic decisions in order to adapt to them, and possibly to exercise its right to contest them. In order to better understand the impact of ML algorithms on this question, we need a finer diagnosis of the problem, and to understand what makes such algorithms particularly hard to explain. To tackle this issue, we turn the tables around and ask: what makes ordinary bureaucratic procedures explainable? A major part of such procedures are decision trees or scoring systems. We make the conjecture, which we test on several case studies, that those procedures typically enjoy two remarkable properties. The first is compositionality: the decision is made of a composition of subdecisions. The second is elementarity: the analysis of the decision ends in easily understandable elementary decisions. The combination of those properties has a key consequence for explainability, which we call "explainability by extracts": it becomes possible to explain the output of a given procedure through a contextual selection of subdecisions, without the need to explain the entire procedure. This allows bureaucratic procedures to grow in size without compromising their explainability to the general public. In the case of ML procedures, we show that the properties of compositionality and elementarity correspond to properties of the segmentation of the data space by the execution of the algorithm.
Compositionality corresponds to the existence of well-defined segmentations, and elementarity corresponds to the definition of those segmentations by explicit, simple variables. But ML algorithms can lose either of those properties. Such is the case of opaque ML, as illustrated by deep learning neural networks, where both properties are actually lost. This entails an enhanced dependence of a given decision on the procedure as a whole, compromising explainability by extracts. If ML algorithms are to be used in bureaucratic decisions, it becomes necessary to find out whether the properties of compositionality and elementarity can be recovered, or whether the current opacity of some ML procedures is due to a fundamental scientific limitation. |
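The two properties, and explanation by reporting only the subdecisions actually taken, can be illustrated with a minimal sketch; the eligibility rule, thresholds, and field names below are invented, not drawn from any actual procedure.

```python
def decide(applicant):
    """A bureaucratic procedure as a decision tree. Compositionality:
    the outcome is a composition of subdecisions. Elementarity: each
    test is a simple, understandable predicate. The returned trace is
    the 'extract': only the subdecisions actually taken."""
    trace = []
    if applicant["income"] <= 20000:
        trace.append("income <= 20000: passes means test")
        if applicant["dependents"] >= 1:
            trace.append("at least one dependent: full rate applies")
            return "grant-full", trace
        trace.append("no dependents: basic rate applies")
        return "grant-basic", trace
    trace.append("income > 20000: fails means test")
    return "reject", trace
```

A rejected applicant can be handed the short extract justifying the outcome without the whole tree being explained; it is precisely this per-decision extract that an opaque model with no well-defined subdecisions cannot supply.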

10:00 | The Historical Basis for Algorithmic Transparency as Central to AI Ethics ABSTRACT. This paper embeds the concern for algorithmic transparency in artificial intelligence within the history of technology and ethics. The value of transparency in AI, according to this history, is not unique to AI. Rather, black box AI is just the latest development in the 200-year history of industrial and post-industrial technology that narrows the scope of practical reason. Studying these historical precedents provides guidance as to the possible directions of AI technology, towards either the narrowing or the expansion of practical reason, and the social consequences to be expected from each. The paper first establishes the connection between technology and practical reason, and the concern among philosophers of ethics and politics about the impact of technology in the ethical and political realms. The first generation of such philosophers, influenced by Weber and Heidegger, traced the connection between changes in means of production and the use of practical reason for ethical and political reasoning, and advocated in turn a protection of practical reasoning – of phronesis – from the instrumental and technical rationality valued most by modern production. More recently, philosophers within the postphenomenological tradition have identified techne within phronesis as its initial step of formation, and thus call for a more empirical investigation of particular technologies and their enablement or hindering of phronetic reasoning. This sets the stage for a subsequent empirical look at the history of industrial technology from the perspective of technology as an enabler or hindrance to the use of practical reasoning and judgment. 
This critical approach to the history of technology reveals numerous precedents of significant relevance to AI that, from a conventional approach to the history of technology focusing on technical description, appear to be very different from AI – such as the division of labor, assembly lines, power machine tools, and computer-aided machinery. What is revealed is the effect of most industrial technology, often quite intentional, of deskilling workers by narrowing the scope of their judgment, whereas other innovations have the potential to expand the scope of workers’ judgment. In particular, this section looks at the use of statistics in industrial production, as it is the site of a nearly century-long tension between approaches explicitly designed to narrow or to expand the judgment of workers. Finally, the paper extends this history to contemporary AI – where statistics is the product, rather than a control on the means of production – and presents the debate on explainable AI as an extension of this history. This final section explores the arguments for and against transparency in AI. Equipped with the guidance of 200 years of precedents, the possible paths forward for AI are much clearer, as are the effects of each path on ethics and political reasoning more broadly. |

09:00 | In Defence of the Evidential Role of Mechanistic Reasoning ABSTRACT. Mechanistic reasoning involves a process of inferring that a medical intervention will have a particular effect on the basis of evidence of underlying biological mechanisms. The principles of evidence-based medicine (EBM) maintain that the best evidence for the effectiveness of medical interventions is obtained from comparative clinical studies. Evidence of the underlying mechanisms typically does not provide evidence for the effectiveness of medical interventions. Miriam Solomon (2015) is a philosopher who supports this view. Her argument appeals to the unreliability of mechanistic reasoning in medicine. There are numerous examples of treatments proposed where we had good evidence of mechanism, which then turned out to be ineffective. Jeremy Howick (2011) is a philosopher who argues that mechanistic reasoning can provide evidence of effectiveness, while still acknowledging that it is rare that this is the case. A problem for mechanistic reasoning raised by both Howick and Solomon is that mechanisms are complex and our knowledge of mechanisms is almost always incomplete. Moreover, it is hard to know when a mechanism is complete as there is always the possibility of counteracting mechanisms. In this paper I argue that mechanistic reasoning is not the only way that evidence of mechanisms might provide evidence of effectiveness. A more reliable type of reasoning may be distinguished by appealing to recent work on evidential pluralism in the epistemology of medicine (Clarke et al. 2014). In an instance of so-called reinforced reasoning, evidence of mechanisms can provide evidence for the effectiveness of a medical intervention. 
This is only the case when mechanistic reasoning is combined with correlational reasoning, where correlational reasoning involves a process of inferring that an intervention will have a particular effect on the basis of evidence of a correlation (typically obtained from clinical studies). A case study from virology provides an example of reinforced reasoning in medicine. This case study involves putative treatments for Middle East respiratory syndrome (MERS). One potential treatment is a combination therapy of ribavirin and interferon. The rationale for this treatment is based on evidence of a mechanism linking combination therapy and survival in MERS patients, and limited clinical evidence. Mechanistic or correlational reasoning alone cannot determine whether combination therapy is effective. Reinforced reasoning, however, can, because the strengths of correlational reasoning complement the weaknesses of mechanistic reasoning - e.g. if there is a correlation then it is less likely that the mechanism is counteracted - and the weaknesses of correlational reasoning are complemented by the strengths of mechanistic reasoning - e.g. correlations can be confounded, and mechanisms can rule out the presence of confounders. I show that in the case study only a combination of clinical and mechanistic evidence can provide evidence of combination therapy's effectiveness. References: Clarke, B., Gillies, D., Illari, P., Russo, F., and Williamson, J. (2014). Mechanisms and the Evidence Hierarchy. Topoi, 33(2):339-360. Howick, J. (2011). The Philosophy of Evidence-Based Medicine. Wiley-Blackwell. Solomon, M. (2015). Making Medical Knowledge. Oxford University Press. |

09:30 | Measurable Epistemological Computational Distances in Medical Guidelines Peer Disagreement PRESENTER: Luciana Garbayo ABSTRACT. The study of medical guidelines disagreement in the context of the epistemology of disagreement (Goldman, 2011; Christensen & Lackey, 2013) may strongly contribute to the clarification of epistemic peer disagreement problems encoded in scientific (medical) guidelines. Nevertheless, the clarification of peer disagreement under multiple guidelines may require further methodological development to improve cognitive grasp, given the great magnitude of data and information in them, as in the case of multi-expert decision-making (Garbayo, 2014; Garbayo et al., 2018). In order to fill this methodological gap, we propose an innovative computational epistemology of disagreement platform for the study of epistemic peer evaluations of medical guidelines. The main epistemic goal of this platform is to analyze and refine models of epistemic peer disagreement with the computational power of natural language processing, to improve the modeling and understanding of peer disagreement under encoded guidelines, regarding causal propositions and action commands (Hamalatian & Zadrozny, 2016). To that effect, we suggest measuring the conceptual distances between guideline terms in their scientific domains with natural language processing tools and topological analysis, to add modeling precision to the characterization of epistemic peer disagreement in its specificity, while simultaneously contrasting multiple guidelines. To develop said platform, we study the breast cancer screening medical guidelines disagreement (CDC) as a test case. We provide a model-theoretic treatment of propositions of conflicting breast cancer guidelines, map terms/predicates in reference to the medical domains in breast cancer screening, and investigate the conceptual distances between them.
The main epistemic hypothesis in this study is that the medical guidelines disagreement over breast cancer screening, when translated into conflicting epistemic peer positions, may represent a Galilean-idealization type of model of disagreement that discounts relevant aspects of peer characterization, which a semantic treatment of contradictions and disagreement may further help to clarify (Zadrozny, Hamatialam, Garbayo, 2017). A new near-peer epistemic agency classification in reference to the medical sub-areas involved may be required as a result, to better explain some disagreements across fields such as oncology, gynecology, mastology, and family medicine. We also generate a topological analysis of contradictions and disagreement in breast cancer screening guidelines with sheaves, while taking into consideration conceptual distance measures, to further explore in geometrical representation the continuities and discontinuities in such disagreements and contradictions (Zadrozny & Garbayo, 2018). Bibliography: CDC, “Breast Cancer Screening Guidelines for Women”, accessed 2017 at http://www.cdc.gov/cancer/breast/pdf/BreastCancerScreeningGuidelines.pdf Christensen, D., Lackey, J. (eds.) The Epistemology of Disagreement: New Essays. Oxford University Press, 2013. Garbayo, L. “Epistemic considerations on expert disagreement, normative justification, and inconsistency regarding multi-criteria decision making.” In Ceberio, M. & Kreinovich, W. (eds.) Constraint Programming and Decision Making, 35-45, Springer, 2014. Garbayo, L., Ceberio, M., Bistarelli, S., Henderson, J. “On Modeling Multi-Experts Multi-Criteria Decision-Making Argumentation and Disagreement: Philosophical and Computational Approaches Reconsidered.” In Ceberio, M. & Kreinovich, W. (eds.) Constraint Programming and Decision-Making: Theory and Applications, Springer, 2018. Goldman, A. & Blanchard, T. “Social Epistemology.” In Oxford Bibliographies Online, OUP, 2011. Hamalatian, H., Zadrozny, W.
“Text Mining of Medical Guidelines.” In Proc. of the Twenty-Ninth Intern. Florida Artificial Intelligence Res. Soc. Conf.: FLAIRS-29. Poster Abstracts. AAAI. Zadrozny, W., Garbayo, L. “A Sheaf Model of Contradictions and Disagreements. Preliminary Report and Discussion.” arXiv:1801.09036. ISAIM 2018, International Symposium on Artificial Intelligence and Mathematics, 2018. Zadrozny, W., Hamatialam, H., Garbayo, L. “Towards Semantic Modeling of Contradictions and Disagreements: A Case Study of Medical Guidelines.” ACL Anthology, A Digital Archive of Research Papers in Computational Linguistics, 2017. |
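As a rough illustration of the kind of distance measurement the platform envisages, the sketch below represents two hypothetical guideline fragments as term-frequency vectors and computes a cosine distance between them. All terms and counts are invented for illustration (they are not drawn from the actual CDC guidelines), and the platform itself uses richer NLP and topological tools:

```python
import math
from collections import Counter

# Hypothetical guideline fragments as term-frequency vectors
# (terms and counts invented; not from the actual CDC guidelines).
guideline_a = Counter({"screening": 4, "mammography": 3, "age_40": 2})
guideline_b = Counter({"screening": 4, "mammography": 1, "age_50": 3})

def cosine_distance(u, v):
    """1 minus the cosine similarity of two sparse term-frequency vectors."""
    vocab = set(u) | set(v)
    dot = sum(u[t] * v[t] for t in vocab)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return 1 - dot / (norm_u * norm_v)

d = cosine_distance(guideline_a, guideline_b)
assert 0 < d < 1  # the guidelines overlap on some terms but not on others
```

A distance of 0 would indicate identical term profiles; larger values flag guideline pairs whose encoded disagreement is worth modeling in detail.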

Organizer: Lilia Gurova

There are several camps in the recent debates on the nature of scientific understanding. There are factivists and quasi-factivists who argue that scientific representations provide understanding insofar as they capture some important aspects of the objects they represent. Representations, the (quasi-)factivists say, yield understanding only if they are at least partially or approximately true. The factivist position has been opposed by the non-factivists who insist that greatly inaccurate representations can provide understanding given that these representations are effective or exemplify the features of interest. Both camps face some serious challenges. The factivists need to say more about how exactly partially or approximately true representations, as well as non-propositional representations, provide understanding. The non-factivists are expected to put more effort into the demonstration of the alleged independence of effectiveness and exemplification from the factivity condition. The aim of the proposed symposium is to discuss in detail some of these challenges and to ultimately defend the factivist camp.

‘The Factivity of Model-Based Explanations’ defends a factive account of model-based explanations (ME). The explananda of MEs are argued to be “relaxed” approximate descriptions of the explanandum-phenomenon. The explanantia of MEs involve correct propositions that are extracted from the model. On this account, the indispensable idealizations, which many successful models contain, can contribute to factive understanding by enabling the extraction of correct explanatory information.

A different argument for the factivity of scientific understanding provided by models containing idealizations is presented in ‘Understanding Metabolic Regulation: A Case for the Factivists’. The central claim of this paper is that such models bring understanding if they correctly capture the causal relationships between the entities which these models represent.

What happens, however, when understanding is provided by explanations which do not refer to any causal facts? This question is addressed in ‘Factivity of Understanding in Non-causal Explanations’. The author argues that the factivity of understanding could be analyzed and evaluated by using some modal concepts that capture “vertical” and “horizontal” counterfactual dependency relations which the explanation describes.

‘Scientific Explanation and Partial Understanding’ focuses on cases where the explanations consist of propositions, which are only partially true (in the sense of da Costa’s notion of partial truth). The author argues that such explanations bring partial understanding insofar as they allow for an inferential transfer of information from the explanans to the explanandum.

One of the biggest challenges to factivism, the existence of non-explanatory representations which do not possess propositional content but nevertheless provide understanding, is addressed in ‘Considering the Factivity of Non-explanatory Understanding’. This paper argues against the opposition between effectiveness and veridicality. Building on some cases of non-explanatory understanding, the author shows that effectiveness and veridicality are compatible and that we need both.

‘Effectiveness, Exemplification, and Factivity’ further explores the relation between the factivity condition and its suggested alternatives – effectiveness and exemplification. The author’s main claim is that the latter are not alternatives to factivity, strictly speaking, insofar as they could not be construed without any reference to truth conditions.

11:00 | Experiments in History and Archaeology: Building a Bridge to the Natural Sciences? ABSTRACT. The epistemic challenges to the historical sciences include the direct inaccessibility of their subject matters and limited empirical data whose scope and variety cannot be easily augmented. The output of historiographic research is rarely in the form of universal or general theory. Nonetheless, these properties do not distinguish the historical sciences from other disciplines. The historical sciences have been successful in generating knowledge of the past. One of the methods common to the natural sciences that historians and archaeologists pursue in order to bridge different academic cultures is the experimental method, most clearly manifest in experimental archaeology. This paper examines the use of experiments in historical and archaeological research and situates them in relation to contemporary philosophies of historiography. Experiments in historiography can take many forms – they can be designed based on textual, pictorial, or other non-textual evidence including fragments of artefacts; they can take place in the laboratory or in the field. Designers of experiments can aim to describe an exact occurrence in the past (e.g. a specific event) or types of production techniques, to interpret technical texts, or to inquire into the daily life of our ancestors. However, can the results of such experiments cohere with other scientific historical methods? Can experiments in archaeology truly verify or falsify historiographic hypotheses? Is the experimental method suitable for historical research, and to what extent? How do we represent the results of experimental archaeology? These questions, accompanied by individual examples of experimental archaeology, are discussed in relation to the constructivist approach to historiography and in relation to historical anti‑realism.
It is argued that despite the fruitfulness of some experiments, their results generally suffer from the same underdetermination as other historiographic methods and theories. |

11:30 | Collingwood, the narrative turn, and the cookie cutter conception of historical knowledge PRESENTER: Giuseppina D'Oro ABSTRACT. The narrative turn in the philosophy of historiography relies on a constructivist epistemology motivated by the rejection of the view that there is any such thing as immediate knowledge of the past. Just as there is no knowledge of things as they are in themselves generally speaking, so there is no knowledge of the past in itself. Some narrativists characterise the temporal distance between the agents and the historian in positive terms and present it as an enabling condition of historical knowledge because, so they argue, the significance of an historical event is better grasped retrospectively, in the light of the chain of events it set in motion. Others see the retrospective nature of historical knowing as a sort of distorting mirror which reflects the historian’s own zeitgeist. Historical knowledge, so the argument goes, requires conceptual mediation, but since the mediating concepts are those of the historian, each generation of historians necessarily re-writes the past from their own perspective, and there can never be anything like “the past as it always was” (Dray). To use a rather old analogy, one might say that as the form of the cookie cutter changes, so does the shape of the cookie cut out of the dough. This paper argues that there is a better way of preserving the central narrativist claim that the past cannot be known in itself, one which does not require biting the bullet that the past needs to be continuously re-written from the standpoint of the present. To do so one needs to rethink the notion of mediacy in historical knowledge. We present this alternative conception of mediacy through an explication and reconstruction of Collingwood’s philosophy of history.
According to Collingwood the past is known historically when it is known through the eyes of historical agents, as mediated by their own zeitgeist. The past is therefore not an ever-changing projection from different future “nows”. While human self-understanding changes over time (the norms which govern how a medieval serf should relate to his lord are not the same as those which govern the relation between landlord and tenant in contemporary London), the norms which governed the Greek, Roman, Egyptian, or Mesopotamian civilizations remain what they always were. It is the task of the historian to understand events as they would have been perceived by the historical agents, not in the light of legal, epistemic or moral norms that are alien to them. For example, understanding Caesar’s crossing of the Rubicon as challenging the authority of the senate (rather than, say, simply taking a walk with horses and men) involves understanding the Roman legal system and what Republican law entailed. This is a kind of conceptual knowledge that is not altered either by the future course of events or by changes in human self-understanding. Although Collingwood’s account of the nature of mediacy in historical knowledge would disallow that later historians should/could retrospectively change the self-understanding of the Romans (or the Egyptians, or the Greeks), the claim that historians can know the past as the Egyptians, the Romans or the Mesopotamians did is not tantamount to claiming that the past can be known in itself. It is rather the assertion that the past is known historically when it is known through the eyes of the historical agent, not those of the historian. This conception of mediacy takes the past to be always-already mediated (by the conceptual framework of the agent) and, unlike the cookie-cutter conception of knowledge, does not lead to the sceptical implications which go hand in hand with the narrativist conception of mediacy. |

11:00 | Stit heuristics and the construction of justification stit logic ABSTRACT. From its early days, stit logic was built around a set of heuristic principles that were typically phrased as recommendations to formalize certain ideas in a certain fashion. We have in mind the set of six stit theses advanced in [1, Ch. 1]. These theses mainly sought to guide the formalization of agentive sentences. However, it is often the case that one is interested in extending stit logic with new notions which are not necessarily confined to agentive phenomena; even in such cases one has to place the new notions in some relation to the existing stit conceptual machinery, which often involves non-trivial formalization decisions that are completely outside the scope of the Belnapian stit theses. The other issue is that the preferred stit operator of [1] is the achievement stit, whereas in the more recent literature the focus is on different variants of either the Chellas stit or the deliberative stit operator. In our talk we try to close these two gaps by (1) reformulating some of the Belnapian theses for the Chellas/deliberative stit operator, (2) developing heuristics for representing non-agentive sentences in stit logic, and (3) compensating for the absence of the achievement stit operator by introducing the so-called `fulfillment perspective' on modalities in stit logic. In doing so, we introduce a new set of heuristics which, we argue, is still in harmony with the philosophy expressed in [1]. We then apply the new heuristic principles to analyze the ideas behind the family of justification stit logics recently introduced in [2] and [3]. References [1] N. Belnap, M. Perloff, and M. Xu. Facing the Future: Agents and Choices in Our Indeterminist World. Oxford University Press, 2001. [2] G. Olkhovikov and H. Wansing. Inference as doxastic agency. Part I: The basics of justification stit logic. Studia Logica. Online first: January 27, 2018, https://doi.org/10.1007/s11225-017-9779z. [3] G.
Olkhovikov and H. Wansing. Inference as doxastic agency. Part II: Ramifications and refinements. Australasian Journal of Logic, 14:408-438, 2017. |

11:30 | Ability and Knowledge ABSTRACT. Imagine that I place all the cards from a deck face down on a table and ask you to turn over the Queen of Hearts. Are you able to do that? In a certain sense, yes – this is referred to as causal ability. Since you are able to pick any of the face-down cards, there are 52 actions available to you, and one of these guarantees that you turn over the Queen of Hearts. However, you do not know which of those 52 actions actually guarantees the result. Therefore, you are not able to turn over the Queen of Hearts in the epistemic sense. I explore this epistemic qualification of ability and three ways of modelling it. I show that both the analyses of knowing how in epistemic transition systems (Naumov and Tao, 2018) and of epistemic ability in labelled STIT models (Horty and Pacuit, 2017) can be simulated using a combination of impersonal possibility, knowledge and agency in standard epistemic STIT models. Moreover, the standard analysis of the epistemic qualification of ability relies on action types – as opposed to action tokens – and states that an agent has the epistemic ability to do something if and only if there is an action type available to her that she knows guarantees it. I argue, however, that these action types are dispensable. This is supported by the fact that both epistemic transition systems and labelled STIT models rely on action types, yet their associated standard epistemic STIT models do not. Thus, no action types, no labels, and no new modalities are needed. Epistemic transition systems as well as labelled STIT models have been noticeably influenced by the semantics of ATL. In line with the ATL tradition, they model imperfect information using an epistemic indistinguishability relation on static states or moments, respectively. In the STIT framework this implies that agents cannot know more about the current moment/history pair than what is historically settled. 
In particular, they cannot know anything about the action they perform at that moment/history pair. This is at odds with the standard epistemic extension of STIT theory which models epistemic indistinguishability on moment/history pairs instead. The main benefit of using the standard epistemic STIT models instead of epistemic transition systems or labelled STIT models is that they are more general and therefore provide a more general analysis of knowing how and of epistemic ability in terms of the notion of knowingly doing. References Horty, J. F. and E. Pacuit (2017). Action types in stit semantics. The Review of Symbolic Logic 10(4), 617–637. Naumov, P. and J. Tao (2018). Together we know how to achieve: An epistemic logic of know-how. Artificial Intelligence 262(September), 279–300. |
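The card-deck contrast between causal and epistemic ability can be stated in a few lines of code (a toy model for intuition only, not the formalism of any of the cited frameworks): causal ability quantifies over the actions available in the actual state, while epistemic ability quantifies over every state in the agent's information set.

```python
STATES = list(range(52))  # which face-down position hides the Queen of Hearts

def achieves(action, state):
    """Turning over position `action` reveals the Queen iff that is where she lies."""
    return action == state

def causal_ability(state):
    """Some available action in fact guarantees the outcome."""
    return any(achieves(a, state) for a in STATES)

def epistemic_ability(info_set):
    """A single action guarantees the outcome in *every* state the agent
    considers possible -- i.e. she knows which action works."""
    return any(all(achieves(a, s) for s in info_set) for a in STATES)

# With no information the agent is causally but not epistemically able;
# once she knows the card's position, the two senses of ability coincide.
assert causal_ability(17)
assert not epistemic_ability(set(STATES))
assert epistemic_ability({17})
```

The epistemic reading fails precisely because the universal quantifier over the information set cannot be satisfied by any one of the 52 action tokens, which is the gap the talk's analysis of knowingly doing addresses.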

12:00 | Introducing Causality in Stit Logic PRESENTER: Ilaria Canavotto ABSTRACT. In stit logic, every agent is endowed at every moment with a set of available choices. Agency is then characterized by two fundamental features: (i) independence of agency: agents can select any available choice and something will happen, no matter what the other agents choose; (ii) dependence of outcomes: the outcomes of agents' choices depend on the choices of the other agents. In this framework, an agent sees to it that F when her choice ensures that F, no matter what the other agents do. This characterization (or variants thereof) is taken to capture the fact that an agent brings about F. However, the notion of bringing about thus modelled is too demanding to represent situations in which an individual, interacting with others, brings about a certain fact: in most of the cases in which someone brings something about, what the other agents do matters. In light of this, we aim at refining stit semantics in order to make it suitable to represent the causal connection between actions and their consequences. The key idea is, first, to supplement stit semantics with action types (following Broersen, 2011; Herzig & Lorini, 2010; Horty & Pacuit, 2017; Ming Xu, 2010); then, to introduce a new relation of opposition between action types. We proceed as follows. Step 1. Let (Mom, <) be a tree-like and discrete ordered set of moments and call “transition” any pair (m,m') such that m' is a successor of m. Given a finite set Ag of agents, we have, for each i in Ag, a set A_i of action types available to i and a labelling function Act_i assigning to each transition an action type available to i, so that Act_i((m,m')) is the action that i performs along transition (m,m'). The joint action performed by a group I of agents along (m,m') is then the conjunction of the actions performed by the agents in I along (m,m').
The joint actions performed by Ag are called global actions, or strategy profiles. In this framework, the next-stit operator [i xstit]F can be given a straightforward interpretation. Step 2. Intuitively, an individual or joint action B opposes another individual or joint action when B blocks or hinders it (e.g. my action of running to catch a train is opposed by the crowd's standing in the way). In order to represent this relation, we introduce a function O associating with every action B the set O(B) of actions opposing B. We then say that B is unopposed in a global action G just in case B occurs in G and no action constituting G opposes B. The global actions in which B is unopposed represent counterfactual scenarios allowing us to determine the expected causal consequences of B. Specifically, we can say that F is an expected effect of B only if B leads to an F-state whenever it is done unopposed. Besides presenting an axiomatization of the logic induced by the semantics just sketched, we show that the next-stit operator [i xstit]F is a special case of a novel operator [i pxstit]F, defined in terms of expected effects, and that, by using this operator, we are able to fruitfully analyse interesting case studies. We then assess various refinements of the [i pxstit] operator already available in this basic setting. Finally, we indicate how this setting can be further elaborated by including the goals with which actions are performed. |
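A minimal sketch of the opposition machinery, on an invented two-agent scenario (the action names, the opposition function O, and the outcomes are all illustrative; the paper's semantics over branching time is far richer):

```python
# Each strategy profile (one action type per agent) leads to an outcome state.
transitions = {
    ("run", "stand_clear"): "caught_train",
    ("run", "block_way"): "missed_train",
    ("wait", "stand_clear"): "missed_train",
    ("wait", "block_way"): "missed_train",
}

# O maps each action type to the set of action types opposing it.
O = {"run": {"block_way"}, "wait": set(), "stand_clear": set(), "block_way": set()}

def unopposed(action, profile):
    """`action` occurs in the profile and no co-occurring action opposes it."""
    return action in profile and not any(
        other in O[action] for other in profile if other != action
    )

def expected_effect(action, outcome):
    """Necessary condition from the abstract: F is an expected effect of B
    only if B leads to an F-state whenever B is done unopposed."""
    relevant = [g for g in transitions if unopposed(action, g)]
    return bool(relevant) and all(transitions[g] == outcome for g in relevant)

# Catching the train is an expected effect of running: in the counterfactual
# scenarios where nothing opposes the run, it always succeeds -- even though
# the crowd can in fact block the runner.
assert expected_effect("run", "caught_train")
assert not expected_effect("wait", "caught_train")
```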

11:00 | Modal notions and the counterfactual epistemology of modality PRESENTER: Mihai Rusu ABSTRACT. The paper discusses a conceptual tension that arises in Williamson's counterfactual epistemology of modality: that between accepting minimal requirements for understanding, on the one hand, and providing a substantial account of modal notions, on the other. While Williamson's theory may have the resources to respond to this criticism, at least prima facie or according to a charitable interpretation, we submit that this difficulty is an instance of a deeper problem that should be addressed by various types of realist theories of metaphysical modality. That is, how much of the content of metaphysical modal notions can be informed through everyday/naturalistic cognitive and linguistic practices? If there is a gap between these practices and the content of our metaphysical modal assertions, as we believe there is, it appears that the (counterfactual) account needs to be supplemented by various principles, rules, tenets, etc. This reflects on the nature and content of philosophical notions, as it seems that one may not be able to endorse an extreme externalist account of philosophical expressions and concepts, of the kind Williamson favours, and at the same time draw out a substantial epistemology of these notions, as a robust interpretation of metaphysical modal truth seems to require. |

11:30 | Epistemology of Modality Without Metaphysics PRESENTER: Ilmari Hirvonen ABSTRACT. The epistemological status of modalities is one of the central issues of contemporary philosophy of science: by observing the actual world, how can scientists obtain knowledge about what is possible, necessary, contingent, or impossible? It is often thought that a satisfactory answer to this puzzle requires making non-trivial metaphysical commitments, such as grounding modal knowledge on essences or being committed to forms of modal realism. But this seems to put the cart before the horse, for it assumes that in order to know such ordinary modal facts as “it is possible to break a teacup” or such scientific modal facts as “superluminal signaling is impossible”, we should first have a clear metaphysical account of the relevant aspects of the world. It seems clear to us that we do have such everyday and scientific knowledge, but less clear that we have any kind of metaphysical knowledge. So, rather than starting with metaphysical questions, we offer a metaphysically neutral account of how modal knowledge is gained that nevertheless gives a satisfactory description of the way modal beliefs are formulated in science and everyday life. We begin by explicating two metaphysically neutral means for achieving modal knowledge. The first, a priori way is founded on the idea of relative modality. In relative modality, modal claims are defined and evaluated relative to a system. Claims contradicting what is accepted, fixed, or implied in a system are impossible within that system. Respectively, claims that can be accepted within the system without contradiction are possible. Necessary claims in a system are such that their negation would cause a contradiction, and so on. The second, a posteriori way is based on the virtually universally accepted Actuality-to-Possibility Principle.
Here, what is observed to be or not to be the case in actuality or under manipulations gives us modal knowledge. Often this also requires making ampliative inferences. The knowledge thus gained is fallible, but the same holds for practically all empirical knowledge. Based on prevalent scientific practice, we then show that there is an important bridge between these two routes to modal knowledge: Usually, what is kept fixed in a given system, especially in scientific investigation, is informed by what is discovered earlier through manipulations. Embedded in scientific modelling, relative modalities in turn suggest places for future manipulations in the world, leading to an iterative process of modal reasoning and the refinement of modal knowledge. Finally, as a conclusion, we propose that everything there is to know about modalities in science and in everyday life can be accessed through these two ways (or their combination). No additional metaphysical story is needed for the epistemology of modalities – or if such a story is required, then the onus of proof lies on the metaphysician. Ultimately, relative modality can accommodate even metaphysical modal claims. However, they will be seen as claims simply about systems and thus not inevitably about reality. While some metaphysicians might bite the bullet, few have been ready to do so explicitly in the existing literature. |
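The a priori route via relative modality admits a deliberately simple sketch, restricted here to atomic claims and their negations (the authors' notion of a system is of course more general, covering whatever is accepted, fixed, or implied):

```python
# A "system" is a set of accepted literals; a literal is (claim, truth_value).
# A contradiction arises when a literal and its negation are both asserted.

def consistent_with(system, claim):
    name, value = claim
    return (name, not value) not in system

def possible(system, claim):
    """The claim can be added to the system without contradiction."""
    return consistent_with(system, claim)

def impossible(system, claim):
    """The claim contradicts what the system fixes."""
    return not consistent_with(system, claim)

def necessary(system, claim):
    """The claim's negation would contradict the system."""
    name, value = claim
    return impossible(system, (name, not value))

# A system fixing (say, on earlier experimental grounds) that superluminal
# signalling does not occur:
physics = {("superluminal_signalling", False)}
assert impossible(physics, ("superluminal_signalling", True))
assert necessary(physics, ("superluminal_signalling", False))
assert possible(physics, ("teacup_breaks", True))  # the system is silent on teacups
```

On the abstract's iterative picture, what goes into the fixed set `physics` is itself informed by earlier manipulations, and the relative modalities it licenses suggest where to manipulate next.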

11:00 | Duality and interaction: a common dynamics behind logic and natural language PRESENTER: Luc Pellissier ABSTRACT. The fact that some objects interact well together – say, a function with an argument in its domain of definition, whose interaction produces a result – defines a notion of duality that has been central in the mathematics of the last century. Not only does it provide a general framework for considering at the same time objects of interest and tests (or measures) on them, but it also provides a way to both enrich and restrict the objects considered, by studying a relaxed or strengthened notion of interaction. A reconstruction of logic around the notion of interaction has been underway since the pioneering works of Krivine and Girard, where (para-)proofs are seen as interacting by exchanging logical arguments, the interaction stopping successfully only if one of the two gives up, recognising that it lacks arguments. All the proofs interacting in a certain way – for instance, interacting correctly with the same proof – can then be seen as embodying a certain formula; and the possible operations on proofs translate into operations on formulæ. In this work, we intend to show that, somewhat surprisingly, the same approach in terms of duality and interaction succeeds in grasping structural aspects of natural language as purely emergent properties. Starting from the unsupervised segmentation of an unannotated linguistic corpus, we observe that co-occurrence of linguistic segments at any level (character, word, phrase) can be considered as a successful interaction, defining a notion of duality between terms. We then proceed to represent those terms by the distribution of their duals within the corpus and define the type of the former through a relation of bi-duality with respect to all the other terms of the corpus.
The notion of type can then be refined by considering the interaction of a type with other types, thus creating the starting point of a variant of Lambek calculus. This approach has several precursors, for instance Hjelmslev's glossematic algebra and, more generally, the structuralist theory of natural language (Saussure, Harris). The formal version we propose in this work reveals an original relation between those perspectives and one of the most promising trends in contemporary logic. We also include an implementation of the described algorithm for the analysis of natural language. Accordingly, our approach appears as a way of analyzing many efficient mechanized natural language processing methods. More generally, this approach opens new perspectives to reassess the relation between logic and natural language. Bibliography. Gastaldi, Juan-Luis. “Why can computers understand natural language?” Under review in Philosophy and Technology. Girard, Jean-Yves. “Locus solum: from the rules of logic to the logic of rules”. In: Mathematical Structures in Computer Science 11.3 (2001), pp. 301–506. Hjelmslev, Louis and Hans Jørgen Uldall. Outline of Glossematics. Nordisk Sprog- og Kulturforlag. Copenhague, 1957. Krivine, Jean-Louis. “Realizability in classical logic”. In: Panoramas et synthèses 27 (2009), pp. 197–229. Lambek, Joachim. “The Mathematics of Sentence Structure”. In: The American Mathematical Monthly 65.3 (Mar. 1958), pp. 154–170. |
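The emergence of types from co-occurrence duality can be seen on a four-sentence toy corpus (the proposal itself starts from unsupervised segmentation of an unannotated corpus at every level; this sketch fixes words and adjacency for brevity):

```python
from collections import defaultdict

corpus = ["the cat sleeps", "the dog sleeps", "the cat runs", "the dog runs"]

# Duality: two segments interact successfully when they occur adjacently.
duals = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for left, right in zip(words, words[1:]):
        duals[left].add(right)
        duals[right].add(left)

# Represent each term by the set of its duals; terms sharing that
# representation fall under the same emergent type (bi-duality, roughly).
types = defaultdict(set)
for word, dual_set in duals.items():
    types[frozenset(dual_set)].add(word)

assert duals["cat"] == duals["dog"]  # identical distributional behaviour...
assert types[frozenset({"the", "sleeps", "runs"})] == {"cat", "dog"}  # ...same type
```

Nothing in the corpus is annotated as a noun, yet "cat" and "dog" end up typed together purely because they interact with the same duals, which is the emergent-structure point of the talk.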

11:30 | The practice of proving a theorem: from conversations to demonstrations ABSTRACT. In this talk, I will focus on mathematical proofs “in practice” and introduce as an illustration a proof of the equivalence of two presentations of the Poincaré homology sphere, which is taken from a popular graduate textbook (Rolfsen, 1976) and discussed in De Toffoli and Giardino (2015). This proof is interesting because it is given by showing a sequence of pictures and explaining in the text the actions that ought to be performed on them to move from one picture to the other and reach the conclusion. By relying on this example, I will propose to take into account Stone and Stojnic (2015)’s view of demonstrations as practical actions to communicate precise ideas; my objective is to evaluate whether such a suggestion can be of help to define what the mathematical “practice” of giving a proof is. Stone and Stojnic consider as a case study an “origami proof” of the Pythagorean theorem and base their analysis on certain aspects of the philosophy of language of David Lewis. According to Lewis (1979), communication naturally involves coordination; in principle, any action could be a signal of any meaning, as long as the agent and her audience expect the signal to be used that way; a conversation happens only when a coordination problem is solved. Formal reasoning is a particular form of coordination that happens on a conversational scoreboard, that is, an abstract record of the symbolic information that interlocutors need to track in conversation.
Stone and Stojnic conclude that the role of practical action in a conversation is explained in terms of coherence relations: meaning depends on a special sort of knowledge—convention—that serves to associate practical actions with precise contributions to conversation; interpretive reasoning requires us to integrate this conventional knowledge—across modalities—to come up with an overarching consistent pattern of contributions to conversation. On this basis, I will discuss the pros of considering proofs as conversations: if this is the case, then non-linguistic representations like diagrams have content, and mathematics is a distributed cognitive activity, since transformations in the world can be meaningful. However, some general problems arise in Lewis’ framework when applied to mathematical proof: (i) does a convention of truthfulness and trust really exist?; (ii) how can we coordinate and update our conversational scoreboard when we read a written demonstration? The interest of the talk will be to investigate the possibility of a link between philosophy of the mathematical practice and philosophy of language. References De Toffoli, S. and Giardino, V. (2015). An Inquiry into the Practice of Proving in Low-Dimensional Topology. Boston Studies in the Philosophy and History of Science, 308, 315-336. Lewis, D. K. (1979). Scorekeeping in a language game. Journal of Philosophical Logic, 8, 339–359. Rolfsen, D. (1976). Knots and Links. Berkeley: Publish or Perish. Stone, M. and Stojnic, U. (2015). Meaning and demonstration. Review of Philosophy and Psychology (special issue on pictorial and diagrammatic representation), 6(1), 69-97. |

12:00 | An Attempt at Extending the Scope of Meaningfulness in Dummett's Theory of Meaning ABSTRACT. Michael Dummett proposed a radically new approach to the problem of how the philosophical foundations of a meaning theory of a natural language are to be established. His central point is threefold. First, a theory of meaning should give an account of the knowledge (i.e., understanding) that competent speakers of the language have of it. Second, that knowledge consists in certain practical abilities. If someone counts as a competent speaker, it is because, by using the language, she/he is able to do anything that all and only those who understand the language can do. Therefore, what a theory of meaning should account for is those practical abilities that a competent speaker possesses. What, then, do those practical abilities consist in? This question leads us to Dummett’s third point. Ordinarily, one is entitled to possess some ability by exhibiting (making manifest the possession of) the ability: i.e., by having done, often doing, or being likely to do something that can be done by virtue of the ability. Truly, there is an intricate problem of what one should do to be entitled to possess the ability; let us set that problem aside. Dummett tackled another (related but more profound) problem: in almost all natural languages and formalized languages, there are various sentences that are, while well-formed and hence associated with certain precise conditions for them to be true, definitely beyond the scope of possible exhibition of those abilities that (if there were any at all) the understanding of the sentences would consist in.
He objected to the common opinion that the meaning of a sentence could be equated with its truth-conditions and instead claimed that the meaning should be accounted for as consisting in its (constructive) provability condition; that is, according to Dummett, someone knows the meaning of a sentence just in case he knows what has to be done (what construction has to be realized) to justify the sentence (i.e. to establish constructively that the sentence holds). I basically agree with these lines of Dummett’s thought, although I should point out that his view on the scope of meaningfulness (intelligibility) of sentences is too restrictive. Dummett proposes that in giving provability conditions of a sentence we should adopt the intuitionistic meaning condition of the logical connectives. The reason is that the intuitionistic connectives are conservative with respect to constructivity: if a sentence is derived intuitionistically from some assumptions, then it is constructively justifiable provided those assumptions are. However, I think we can point out that there are some sentences that, while beyond this criterion, can be established by virtue of an agent’s behavior that conclusively justifies them. In that case the agent’s behavior could be said to make her understanding of the sentence manifest. Typical examples of such sentences are, one might say, certain kinds of infinitary disjunctions that are treated prominently by the proponents of geometric logic such as S. Vickers. I will investigate the matter more closely in the talk. |

11:00 | Schematism of historical reality ABSTRACT. The philosophy of history and the methodology of historical knowledge are traditional themes within the framework of continental philosophy. A person reasoning about history seeks to clarify his position in history, to define his presence in it. History is not only a reality in which humanity finds itself, understands and interprets itself, but also a professional sphere of acquiring and transmitting knowledge. In the 20th century a kind of «emancipation» of concrete historical knowledge from the conceptual complexes of the "classical" philosophy of history and from metaphysical grounds took place. In the 20th century there was also a rejection of the main ideas of modern philosophy regarding the philosophy of history: the idea of a rational world order, the idea of the progressive development of mankind, the idea of a transcendental power responsible for what is happening in history, etc. Anthropologists, sociologists, historians and ethnographers played an important role in this process of «emancipation» of concrete historical knowledge. However, many questions did not receive any answer: «What is history?», «What is the historical meaning (and is there any at all)?», «What are the problems of interpretation of history and how can they be overcome?», «What are the general and special features of different types of history?». One of the ways of contemporary understanding of history is to coordinate the schematism of historical knowledge and the structure of historical being. According to the type of co-presence described in the event communication, three schematic dimensions of historical reality are possible: spatial, situational and temporal. The spatial schematic is presented in M. Foucault's «Words and Things».
According to it, the historical is found there, and only there, where the spatial structure and a description of the typical mode of connection of the elements of this structure are deployed. The situational schematic of the historical takes place where a specific (moral, political, legal) nature of the connection between historical events is realized. The most important element of the situational schematic is the generation that has received an education and left behind the fruits and results of its labor. What is attractive in history so described is the representation of historical reality as a process: historical formations, historical types, historical characters. The temporal schematic of the historical, exemplified by M. Heidegger's phenomenological construction in «Being and Time», is found where the temporal measure of the existence of historical being is explicated, that is, where historicity is understood as the temporality of the existence of the real, and as the temporality of historical understanding itself. |

11:30 | Philosophy (and methodology) of the Humanities: towards constructing a glossary PRESENTER: Konstantin Skripnik ABSTRACT. It is hard to challenge the point of view according to which our century is a century of the Humanities. Indisputable evidence in favor of this point is the lists of thematic sections of the 14th, 15th and our 16th Congresses of Logic, Methodology and Philosophy of Science (and Technology). The list of the 14th Congress did not include any section with the term “Humanities” in its title; the programme of the 15th Congress included a section devoted to the philosophy of the Humanities, as does the present Congress, although this section has – if it is possible to say so – a “palliative” title, “Philosophy of the Humanities and the Social Sciences”, and among the topic areas of the 16th Congress one can also see “Philosophical Traditions, Miscellaneous”. There is now an intricate spectrum of different approaches to the philosophical and methodological problems of the Humanities, each of which is connected with its own “philosophy”, ideology and visions. The fact is that the attempt to form the philosophy of the Humanities along the lines of the philosophy of science has definitely – and perhaps irrevocably – failed. It is time to scrutinize this spectrum with the aim of finding a sustainable set of terms and notions to serve as a basis for a philosophy (and methodology) of the Humanities. We propose not a dictionary, not an encyclopedia (in Umberto Eco’s sense), but precisely a glossary: each entry will contain clear, straightforward definitions with practice and examples of use on the one side, while, on the other side, each entry will not be closed – it may be supplemented and advanced. The order of entries will not be alphabetical; it will rather be determined by the functional features of the terms and notions, by their relationships to each other.
These relations can be historical, methodological, ontological, lexico-terminological, socially oriented, etc. The terms and notions included in the glossary give us the opportunity to form a certain kind of frame or, better to say, a kind of net for further research. The net (frame) may be expanded by including new notions, terms, phrases and collocations; the frame may be deepened by forming new connections between “old” notions or between “old” and “new” notions and terms. For example, if we include the notion “text” in the glossary, this inclusion forces us to include such notions as “author”, “reader”, “language”, “style”, “(outer) world”, “history” and “value”. We suppose that the initial list of basic notions must include the following set: representation, intention, sign and sign system, code, semiosis and retrograde semiosis (as a procedure of sense analysis), sense, meaning, dialogue, translation, text (and notions connected with text), interpretation and understanding. It is easy to see that these basic notions are used in different realms of the Humanities (semiotics, hermeneutics, significs, the history of notions and the history of ideas, the theory of literature, philosophy, logic and linguistics); this fact underlines their basic character. |

Organizers: Carolin Antos, Deborah Kant and Deniz Sarikaya

Text is a crucial medium for transferring mathematical ideas, agendas and results within the scientific community and in educational contexts. This makes the focus on mathematical texts a natural and important part of the philosophical study of mathematics. Moreover, it opens up the possibility of applying the huge corpus of knowledge available from the study of texts in other disciplines to problems in the philosophy of mathematics.

This symposium aims to bring together and build bridges between researchers from different methodological backgrounds to tackle questions concerning the philosophy of mathematics. This includes approaches from philosophical analysis, linguistics (e.g., corpus studies) and literature studies, but also methods from computer science (e.g., big data approaches and natural language processing), artificial intelligence, cognitive sciences and mathematics education. (cf. Fisseni et al. to appear; Giaquinto 2007; Mancosu et al. 2005; Schlimm 2008; Pease et al. 2013).

The right understanding of mathematical texts may also become crucial due to the fast successes in natural language processing on the one side and automated theorem proving on the other. Conversely, mathematics – as a technical jargon, or as a natural language with a quite rich structure and semantic labeling (via LaTeX) – is an important test case for the practical and theoretical study of language.

Hereby we understand text in a broad sense, including informal communication, textbooks and research articles.

14:00 | Bridging the Gap Between Proof Texts and Formal Proofs Using Frames and PRSs PRESENTER: Marcos Cramer ABSTRACT. We will discuss how different layers of interpretation of a mathematical text are useful at different stages of analysis and in different contexts. To achieve this goal we will rely on tools from formal linguistics and artificial intelligence which, among other things, allow us to make explicit in the formal representation information that is implicit in the textual form. In this way, we wish to contribute to an understanding of the relationship between the formalist and the textualist position in the investigation of mathematical proofs. Proofs are generally communicated in texts (as strings of symbols) and are modelled logically as deductions, e.g. as sequences of first-order formulas fulfilling specified syntactic rules. We propose to bridge the gap between these two representations by combining two methods: first, Proof Representation Structures (PRSs), an extension of Discourse Representation Structures (see Geurts, Beaver, & Maier, 2016); secondly, frames, as developed in Artificial Intelligence and linguistics. PRSs (Cramer, 2013) were designed in the Naproche project to formally represent the structure and meaning of mathematical proof texts, capturing typical structural building blocks like definitions, lemmas, theorems and proofs, but also the hierarchical relations between propositions in a proof. PRSs distinguish proof steps, whose logical validity needs to be checked, from sentences with other functions, e.g. definitions, assumptions and notational comments. On the (syntacto-)semantic level, PRSs extend the dynamic quantification of DRSs to more complex symbolic expressions; they also represent how definitions introduce new symbols and expressions. Minsky (1974) introduces frames as a general “data-structure for representing a stereotyped situation”.
‘Situation’ should not be understood too narrowly, as frames can be used to model concepts in the widest sense. The FrameNet project prominently applies frames to represent the semantics of verbs. For example, “John sold his car. The price was € 200.” is interpreted as meaning that the second sentence anaphorically refers to the `price` slot of `sell`, which is not explicitly mentioned in the first sentence. In the context of mathematical texts, we use frames to model what is expected of proofs in general and specific types of proofs. In this talk, we will focus on frames for inductive proofs and their interaction with other frames. An example of the interaction of different proof frames is the dependence of the form of an induction on the underlying inductive type, so that different features of the type (the base element and the recursive construction[s]) constitute natural candidates for the elements of the induction (base case and induction steps). The talk will show how to relate the two levels (PRSs and frames), and will sketch how getting from the text to a fully formal representation (and back) is facilitated by using both levels. Cramer, M. (2013). Proof-checking mathematical texts in controlled natural language (PhD thesis). Rheinische Friedrich-Wilhelms-Universität Bonn. Geurts, B., Beaver, D. I., & Maier, E. (2016). Discourse Representation Theory. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2016). Minsky, M. (1974). A Framework for Representing Knowledge. Cambridge, MA, USA: Massachusetts Institute of Technology. |
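The slot-filler idea behind frames can be pictured with a toy sketch (our own illustration in Python; the frame names and slot inventories below are hypothetical, not Naproche's or FrameNet's actual data structures):

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A frame as a named bundle of slots; None marks an open slot."""
    name: str
    slots: dict = field(default_factory=dict)

    def open_slots(self):
        """Slots evoked by the frame but not yet filled by the text."""
        return [s for s, v in self.slots.items() if v is None]

# "John sold his car." evokes a SELL frame; the price slot stays open.
sell = Frame("SELL", {"seller": "John", "goods": "his car", "price": None})
assert "price" in sell.open_slots()

# "The price was 200." anaphorically fills the open slot.
sell.slots["price"] = 200

# An induction frame: its slots mirror the constructors of the
# underlying inductive type (here, the natural numbers).
nat_induction = Frame("INDUCTION_PROOF",
                      {"base_case": "P(0)",
                       "induction_step": "P(n) -> P(n+1)"})

print(sell.slots["price"])         # 200
print(nat_induction.open_slots())  # []
```

The point of the sketch is only that anaphora resolution ("The price …") and proof-checking expectations ("where is the base case?") can both be phrased as queries about open slots.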

14:30 | Perspectives on Proofs ABSTRACT. In this talk, we want to illustrate how to apply a general concept of perspectives to mathematical proofs, considering the dichotomy of formal proofs and textual presentation as two perspectives on the same proof. We take *perspective* to be a very general notion that applies to spatial representation, but also to phenomena in natural language syntax known as *perspectivation* and related to diathesis (grammatical voice) or to semantically partially overlapping verbs such as *sell*, *buy*, *trade*; to phenomena in natural language semantics (e.g., prototype effects); and to narrative texts (Schmid, 2010 distinguishes the perspective of characters or narrators in six dimensions, from perception to language). In most applications of the concept of perspective, a central question is how to construct a superordinate ‘meta’perspective that accommodates the given perspectives while maintaining complementary information. Perspectival phenomena intuitively have in common that different perspectives share some information and are partially ‘intertranslatable’, or can be seen as projections from a more complete and more fine-grained metaperspective to less informative or more coarse perspectives. In our approach, modelling is done bottom-up, starting from specific instances. We advocate a formal framework that represents perspectives as frames using feature structures, a data structure well known in linguistics. With feature structures, it becomes easier to model the interaction of frames and to approach compositionality, and the framework connects to formal models of (unification-based) linguistic grammar like Berkeley Construction Grammar (cf., e.g., Boas & Sag, 2012), but also to recent work on frame semantics (see, e.g., Gamerschlag, Gerland, Osswald, & Petersen, 2015).
Metaperspectives are constructed using decomposition of features and types into finer structures (see Fisseni, forthcoming), organized in the inheritance hierarchies typical of feature structure models (see, e.g., Carpenter, 1992; Pollard & Sag, 1994). Using this formal model of perspectives, it can be shown that occasionally, e.g. in the case of metaphors, *partial* perspectives are used, i.e. that perspectives contain semantic material that is to be disregarded, for instance by splitting notions such as semantic verb classes into different properties like *involving an agent* or *most prominent participant*. Similar to syntactic perspectivation (active – passive, *buy* – *sell*), where the same event can be conceptualized differently (e.g., as an action or as a process), mathematical texts and formal proofs can be seen as describing ‘the same proof’ as a process and as a state of affairs, respectively. The talk will show how to elaborate this analogy, and will discuss the construction of a metaperspective, i.e. merging both perspectives in such a way that their common core is distilled. References ---------- Boas, H. C., & Sag, I. A. (Eds.). (2012). *Sign-based construction grammar*. Stanford: CSLI. Carpenter, B. (1992). *The logic of typed feature structures*. Cambridge University Press. Fisseni, B. (forthcoming). Zwischen Perspektiven. In *Akten des 52. Linguistischen Kolloquiums, Erlangen*. Gamerschlag, T., Gerland, D., Osswald, R., & Petersen, W. (Eds.). (2015). *Meaning, frames, and conceptual representation. Studies in language and cognition*. Düsseldorf: Düsseldorf University Press. Pollard, C., & Sag, I. (1994). *Head driven phrase structure grammar*. University of Chicago Press. Schmid, W. (2010). *Narratology. An introduction.* Berlin: de Gruyter. |
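A very crude picture of the unification-based machinery the abstract invokes (our own minimal sketch, not the authors' formalism: real typed feature structures add type hierarchies and reentrancy) is merging two perspectives into a structure that keeps their shared core plus their complementary information:

```python
FAIL = object()  # sentinel for unification failure

def unify(a, b):
    """Unify two feature structures given as nested dicts.
    Atomic values must match exactly; dicts merge recursively."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for k, v in b.items():
            if k in out:
                r = unify(out[k], v)
                if r is FAIL:
                    return FAIL
                out[k] = r
            else:
                out[k] = v
        return out
    return a if a == b else FAIL

# Two 'perspectives' on the same commercial event: they share the
# event type and each contributes its own complementary feature.
buy_view  = {"event": "transfer", "agent": "buyer"}
sell_view = {"event": "transfer", "theme": "goods"}
meta = unify(buy_view, sell_view)   # a crude 'metaperspective'
print(meta)

# Incompatible perspectives do not unify.
assert unify({"voice": "active"}, {"voice": "passive"}) is FAIL
```

Under this toy reading, each original perspective is recoverable as a projection (a sub-dict) of the unified structure, which is the intuition behind the metaperspective construction.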

14:00 | A Roundabout Ticket to Pluralism PRESENTER: Luca Zanetti ABSTRACT. A thriving literature has developed over logical and mathematical pluralism (LP and MP, respectively) – i.e. the views that several rival logical and mathematical theories can be correct. However, these have unfortunately grown separate; we submit that, instead, they both can greatly gain by a closer interaction. To show this, we present some new kinds of MP modeled on parallel ways of substantiating LP, and vice versa. We will use as a reference abstractionism in the philosophy of mathematics (Wright 1983). Abstractionists seek to recover as much mathematics as possible from abstraction principles (APs), viz. quantified biconditionals stating that two items have the same abstract just in case they belong to the same equivalence class; e.g. Hume’s Principle (HP), which states that two concepts have the same cardinal number iff they can be put into one-to-one correspondence (Frege 1884, §64). The proposed new forms of pluralism we will advance can fruitfully be clustered as follows: 1. CONCEPTUAL PLURALISM – From LP to MP: Just as LPs argue that different relations of logical consequence are equally legitimate by claiming that the notion of validity is underspecified (Beall & Restall 2006) or polysemous (Shapiro 2014), abstractionists might deem more than one version of HP acceptable by stating that the notion of “just as many” – and, consequently, of cardinal number – admits of different precisifications. 2. DOMAIN PLURALISM – From MP to LP: Just as MPs claim that rival mathematical theories can be true of different domains (Balaguer 1998), it could be argued that each version of HP introduces its own domain of cardinal numbers, and that the results these APs yield might differ with respect to some domains, and match with respect to some others (e.g., of finite and infinite cardinals). 
The proposal, in turn, prompts some reflections on the sense of “rivalry” between the logics accepted by LPs, which often agree on some laws, while diverging on others. Is the weaker logic genuinely disagreeing or just silent on the disputed rule? Do rival logicians employ the same notion of consequence in those rules about which they agree or, given some inferentialist view, always talk past each other? 3. CRITERIA PLURALISM – From LP to MP, and back: Another form of pluralism about abstractions could be based on the fact that more than one AP is acceptable with respect to different criteria (e.g. irenicity, conservativity, simplicity); accordingly, LP has so far been conceived as the claim that more than one logic satisfies a single set of requirements, but a new form of LP could arise from the acceptance of several legitimacy criteria themselves (e.g. compliance with our intuitions on validity, accordance with mathematical practice). These views – besides, we will argue, being in and of themselves attractive – help expand and clarify the spectrum of possibilities available to pluralists in the philosophy of both logic and mathematics; as a bonus, this novel take can be shown to shed light on long-standing issues regarding LP and MP – in particular, respectively, the “collapse problem” (Priest 1999) and the Bad Company Objections (Linnebo 2009). References Balaguer, M. (1998). Platonism and Anti-Platonism in the Philosophy of Mathematics. OUP. Beall, JC and Restall, G. (2006). Logical Pluralism. OUP. Frege, G. (1884). The Foundations of Arithmetic, tr. by J. Austin, Northwestern University Press, 1950. Linnebo, Ø. (2009). Introduction to Synthese Special Issue on the Bad Company Problem, 170(3): 321-9. Priest, G. (1999). “Logic: One or Many?”, typescript. Shapiro, S. (2014). Varieties of Logic. OUP. Wright, C. (1983). Frege’s Conception of Numbers as Objects, Aberdeen UP. |
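For reference, Hume's Principle can be written out in standard second-order notation, with $\approx$ abbreviating equinumerosity (the existence of a one-to-one correspondence); this is the textbook formulation, added here for orientation:

```latex
% Hume's Principle (HP)
\forall F\,\forall G\,\bigl(\#F = \#G \;\leftrightarrow\; F \approx G\bigr)

% where equinumerosity is the second-order condition
F \approx G \;:\equiv\; \exists R\,\Bigl(
  \forall x\bigl(Fx \rightarrow \exists!\,y\,(Gy \wedge Rxy)\bigr)
  \;\wedge\;
  \forall y\bigl(Gy \rightarrow \exists!\,x\,(Fx \wedge Rxy)\bigr)\Bigr)
```

The pluralist proposals above turn on which precisification of $\approx$ (and hence of "just as many") one adopts on the right-hand side.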

14:30 | A Practice-Oriented Logical Pluralism ABSTRACT. I conceive logic as a formal presentation of a guide to undertaking a rational practice, a guide which itself is constituted by epistemic norms and their consequences. The norms themselves may be conceived in a non-circular manner with a naturalistic account, and we use Hilary Kornblith's: epistemic norms are "hypothetical imperatives" informed by instrumental desires "in a cognitive system that is effective at getting at the truth" ([1]). What I mean by "formal" is primarily what John MacFarlane refers to in his PhD thesis [2] as the view that logic "is indifferent to the particular identities of objects", taken together with MacFarlane's intrinsic structure principle and my own principle that logic is provided by the norms that constitute a rational practice. The view that logic is provided by constitutive norms for a rational practice helps us respond to a popular objection to logical pluralism, the collapse argument ([3], chapter 12). Logic here has been misconceived as starting with a given situation and then reasoning about it. Instead we start with our best known practice to suit an epistemic goal, and ask how to formalise this practice. This view of logic provides a starting point for an account of the normativity of logic: assuming we ought to follow the guide, we ought to accept the logic's consequences. If we cannot, we must either revise either the means of formalisation or some of the epistemic norms that constitute the guide. Revision might be performed either individually or on a social basis, comparable to Novaes' conception in [4]. Mutual understanding of differences emerges from the practice-based principle of interpretive charity: we make the best sense of others when we suppose they are following epistemic norms with maximal epistemic utility with respect to our possible interpretations of what their instrumental desires could be. 
One might ask what the use is of logic as a formalisation of good practice rather than good practice in itself. Indeed Teresa Kouri Kissel in [5] takes as a motto that "we ought not to legislate to a proper, functioning, science". Contrary to this, my response is that logic provides evidence for or against our conception of good practice, and can thus outrun our own intuitions of what good practice is. Implementations of intuitionistic logic manifested in proof assistants such as Coq have proved themselves capable of outrunning intuitions of good mathematical practice in the cases of particularly long proofs (see for instance [6]). [1] Kornblith, Hilary, "Epistemic Normativity", Synthese, Vol. 94, pp. 357-376, 1993. [2] MacFarlane, John, What Does It Mean That Logic Is Formal, PhD thesis University of Pittsburgh, 2000. [3] Priest, Graham, Doubt Truth to Be a Liar, 2009. [4] Dutilh Novaes, Catarina, "A Dialogical, Multi-Agent Account of the Normativity of Logic", Dialectica, Vol. 69, Issue 4, pp. 587-609, 2015. [5] Kouri Kissel, Teresa, Logical Instrumentalism, PhD thesis Ohio State University, 2016. [6] Gonthier, Georges, "Formal Proof—The Four Color Theorem", Notices of the American Mathematical Society, Vol. 55, No. 11, pp. 1382-1393, 2008. |

15:15 | A structuralist framework for the automatic analysis of mathematical texts PRESENTER: Juan Luis Gastaldi ABSTRACT. As a result of the “practical turn” in the philosophy of mathematics, a significant part of the research activity of the field consists in the analysis of all sorts of mathematical corpora. The problem of mathematical textuality (inscriptions, symbols, marks, diagrams, etc.) has thus gained an increasing importance as decisive aspects of mathematical knowledge have been shown to be related to regularities and emergent patterns identifiable at the level of mathematical signs in texts. However, despite the fruitfulness of text-driven approaches in the field, the concrete tools available for the analysis of actual mathematical texts are rather poor and difficult to employ objectively. Moreover, analytical techniques borrowed from other fields, such as computational linguistics, NLP, logic or computer science, often present problems of adaptability and legitimacy. Those difficulties reveal a lack of clear foundations for a theory of textuality that can provide concrete instruments of analysis, general enough to deal with mathematical texts. In this work, we intend to tackle this problem by proposing a novel conceptual and methodological framework for the automatic treatment of texts, based on a computational implementation of an analytical procedure inspired by the classic structuralist theory of signs. Guided by the goal of treating mathematical texts, our approach assumes a series of conditions for the elaboration of the intended analytical model. In particular, the latter should rely on a bottom-up approach; be unsupervised; be able to handle multiple sign regimes (e.g. alphabetical, formulaic, diagrammatical, etc.); be oriented towards the identification of syntactic structures; capture highly stable regularities; and provide an explicit account of those regularities. 
A major obstacle that the vast majority of existing NLP models present to meeting those requirements resides in the primacy accorded to words as fundamental units of language. The main methodological hypothesis of our perspective is that basic semiological units should not be assumed (e.g. as words in a given dictionary) but discovered as the result of a segmentation procedure. The latter not only allows us to capture generic units of different levels (graphical, morphological, lexical, syntactical, etc.) in an unsupervised way, but also provides a more complex semiological context for those units (i.e. units co-occurring with a given unit within a certain neighborhood). The task of finding structural features can thus be envisaged as that of identifying plausible ways of typing those units, based on a duality relation between units and contexts within the segmented corpus. More precisely, two terms are considered of the same type if they are bi-dual with respect to contexts. The types thus defined can then be refined by considering their interaction, providing an emergent complex type structure that can be taken as the abstract grammar of the text under analysis. In addition to providing a conceptual framework and concrete automated tools for textual analysis, our approach puts forward a novel philosophical perspective in which logic appears as a necessary intermediary between textual properties and mathematical contents. Bibliography Juan Luis Gastaldi. Why can computers understand natural language. Philosophy & Technology. Under review. Jean-Yves Girard et al. Proofs and types. Cambridge University Press, New York, 1989. Zellig Harris. Structural linguistics. University of Chicago Press, Chicago, 1960. Louis Hjelmslev. Résumé of a Theory of Language. Number 16 in Travaux du Cercle linguistique de Copenhague. Nordisk Sprog-og Kulturforlag, Copenhagen, 1975. Tomas Mikolov et al. Distributed representations of words and phrases and their compositionality.
CoRR, abs/1310.4546, 2013. Peter D. Turney et al. From frequency to meaning: Vector space models of semantics. CoRR, abs/1003.1141, 2010. |
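The duality between units and contexts described above can be pictured with a toy grouping procedure (our own illustrative sketch, far simpler than the actual model): two units receive the same type exactly when they occur in the same set of contexts.

```python
from collections import defaultdict

def types_by_context(tokens, window=1):
    """Group units by their context sets: a toy version of typing by
    unit/context duality. Two units get the same type iff they occur
    in exactly the same (left, right) contexts in the corpus."""
    contexts = defaultdict(set)
    for i, t in enumerate(tokens):
        left = tuple(tokens[max(0, i - window):i])
        right = tuple(tokens[i + 1:i + 1 + window])
        contexts[t].add((left, right))
    groups = defaultdict(list)
    for unit, ctxs in contexts.items():
        groups[frozenset(ctxs)].append(unit)
    return [sorted(g) for g in groups.values()]

# In this tiny 'corpus', x and y occur in identical contexts
# "( _ )", so they are assigned the same emergent type.
corpus = "f ( x ) f ( y )".split()
print(types_by_context(corpus))
```

Here `x` and `y` land in one group while `f`, `(` and `)` each get their own; refining such groups by their mutual interaction is what the abstract calls the emergent type structure.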

15:45 | Entering the valley of formalism: Results from a large-scale quantitative investigation of mathematical publications ABSTRACT. As pointed out by Reuben Hersh (1991) there is a huge difference between the way mathematicians work and the way they present their results. In a previous qualitative study on mathematical practice we confirmed this result by showing that although mathematicians frequently use diagrams and figures in their work process, they tend to downplay these representations in their published manuscripts, in part because they feel subjected to genre norms and values when they prepare their work for publication (AUTHOR and ANONYMIZED 2016; Accepted). This result calls for a better understanding of these genre norms and of the development the norms may undergo over time. From a casual point of view, it may seem that the norms are currently in a process of change. The formalistic claim that figures and diagrams are superfluous has been contested by philosophers of mathematics (e.g. Brown 1999, Giaquinto 2007), and looking at mathematics journals and textbooks, one gets the impression that diagrams and figures are being used more frequently. That, however, is merely an impression, as we do not have solid empirical data tracking the representational style used in mathematics texts. In order to fill this gap, ANONYMIZED, ANONYMIZED and AUTHOR developed a classification scheme that makes it possible to distinguish between the different types of diagrams used in mathematics based on the cognitive support they offer (AUTHOR et al 2018). The classification scheme is designed to facilitate large-scale quantitative investigations of the norms and values expressed in the publication style of mathematics, as well as trends in the kinds of cognitive support used in mathematics. We presented the classification scheme at conferences and workshops during the summer of 2018 to get feedback from other researchers in the field.
After minor adjustments we applied the scheme to track the changes in publication style in the period 1885 to 2015 in the three mathematics journals Annals of Mathematics, Acta Mathematica and Bulletin of the AMS. In this talk I will present the main results of our investigation, and I will discuss the advantages and disadvantages of our method as well as the possible philosophical implications of our main results. Literature • Hersh, R. (1991): Mathematics has a front and a back. Synthese 80(2), 127-133. • AUTHOR and ANONYMIZED (2016): [Suppressed for review] • AUTHOR and ANONYMIZED (Accepted): [Suppressed for review] • Brown, J. R. (1999): Philosophy of mathematics, an introduction to a world of proofs and pictures. Philosophical Issues in Science. London: Routledge. • Giaquinto, M. (2007): Visual Thinking in mathematics, an epistemological study. New York: Oxford University Press. • AUTHOR, ANONYMIZED and ANONYMIZED (2018): [Suppressed for review] |

15:15 | Deliberation, Single-Peakedness and Voting Cycles PRESENTER: Olivier Roy ABSTRACT. A persistent theme in defense of deliberation as a process of collective decision making is the claim that voting cycles, and more generally Arrowian impossibility results, can be avoided by public deliberation prior to aggregation [2,4]. The argument is based on two observations. First is the mathematical fact that pairwise majority voting always outputs a Condorcet winner when the input preference profile is single-peaked. With its domain restricted to single-peaked profiles, pairwise majority voting satisfies, alongside the other Arrowian conditions, rationality when the number of voters is odd [1]. In particular, it does not generate voting cycles. Second are the conceptual arguments [4, 2] and the empirical evidence that deliberation fosters the creation of single-peaked preferences [3], which is often explained through the claim that group deliberation helps create meta-agreements [2]. These are agreements regarding the relevant dimensions along which the problem at hand should be conceptualized, as opposed to a full consensus on how to rank the alternatives, i.e. a substantive agreement. However, as List [2] observes, single-peakedness is only a formal structural condition on individual preferences. Although single-peaked preferences do entail the existence of a structuring dimension, this does not mean that the participants explicitly agree on what that dimension is. As such, single-peakedness does not reflect any joint conceptualization, which is necessary for meta-agreement. Achieving meta-agreement usually requires the participants to agree on the relevant normative or evaluative dimension for the problem at hand. This dimension will typically reflect a thick concept intertwining factual with normative and evaluative questions, for instance health, well-being, sustainability, freedom or autonomy, to name a few.
It seems rather unlikely that deliberation will lead the participants to agree on the meaning of such contested notions. Of course, deliberative democrats have long observed that public deliberation puts rational pressure on the participants to argue in terms of the common good [4], which might be conducive to agreement on a shared dimension. But when it comes to such thick concepts this agreement might only be a superficial one, involving political catchwords, leaving the participants using their own, possibly mutually incompatible understandings of them [5]. All of this does not exclude the fact that deliberation might make it more likely, in comparison with other democratic procedures, to generate single-peaked preferences from meta-agreements. The point is rather that by starting from the latter one puts the bar very high, especially if there appear to be other ways to reach single-peaked preferences or to avoid cycles altogether. In view of this, two questions arise regarding the claim that deliberation helps avoid cycles: Q1: Can cycles be avoided by pre-voting deliberation in cases where they are comparatively more likely to arise, namely in impartial cultures, i.e. where a voter picked at random is equally likely to have any of the possible strict preference orderings on the alternatives? Q2: If yes, are meta-agreements or the creation of single-peaked preferences necessary or even helpful for that? In this work we investigate these questions more closely. We show that, except in cases where the participants are extremely biased towards their own opinion, deliberation indeed helps to avoid cycles. It does so even in rather unfavourable conditions, i.e. starting from an impartial culture and with participants rather strongly biased towards themselves. Deliberation also creates single-peaked preferences. Interestingly enough, however, this does not appear particularly important for avoiding cycles.
Most if not all voting cycles are eliminated, but not by reaching single-peaked preferences. We show this in a minimalistic model of group deliberation in which the participants repeatedly exchange, and rationally update, their opinions. Since this model completely abstracts away from the notion of meta-agreement, it provides an alternative, less demanding explanation of how pre-voting deliberation can avoid cyclic social preferences, one that shifts the focus from the creation of single-peaked preferences to rational preference change and openness to changing one's mind upon learning the opinions of others. References [1] K. J. Arrow. Social Choice and Individual Values. Number 12. Yale University Press, 1963. [2] C. List. Two concepts of agreement. The Good Society, 11(1):72–79, 2002. [3] C. List, R. C. Luskin, J. S. Fishkin, and I. McLean. Deliberation, single-peakedness, and the possibility of meaningful democracy: evidence from deliberative polls. The Journal of Politics, 75(1):80–95, 2012. [4] D. Miller. Deliberative democracy and social choice. Political Studies, 40(1 suppl):54–67, 1992. [5] V. Ottonelli and D. Porello. On the elusive notion of meta-agreement. Politics, Philosophy & Economics, 12(1):68–92, 2013. |
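The structural point at issue (single-peakedness blocks majority cycles, while impartial-culture-style profiles permit them) can be illustrated with a small sketch. The profiles below are invented toy examples, not the paper's simulation model:

```python
from itertools import permutations

def majority_prefers(profile, a, b):
    """True if a strict majority of voters ranks a above b."""
    wins = sum(1 for r in profile if r.index(a) < r.index(b))
    return wins > len(profile) - wins

def has_majority_cycle(profile, alternatives):
    """Detect a cycle in the strict pairwise-majority relation by
    checking every ordered triple of alternatives."""
    for x, y, z in permutations(alternatives, 3):
        if (majority_prefers(profile, x, y) and
                majority_prefers(profile, y, z) and
                majority_prefers(profile, z, x)):
            return True
    return False

# Classic Condorcet paradox: three voters, cyclic majority relation.
cyclic = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

# A single-peaked profile on the axis a < b < c (each voter's ranking
# falls off monotonically on both sides of their peak): no cycle arises,
# and the median peak "b" is the Condorcet winner.
single_peaked = [("a", "b", "c"), ("b", "a", "c"), ("b", "c", "a"),
                 ("c", "b", "a"), ("b", "c", "a")]

print(has_majority_cycle(cyclic, "abc"))         # True
print(has_majority_cycle(single_peaked, "abc"))  # False
```

With the single-peaked (odd-sized) profile the pairwise majority relation is the linear order b > c > a, matching the fact cited from [1].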

15:45 | Learning Probabilities: A Logic of Statistical Learning PRESENTER: Soroush Rafiee Rad ABSTRACT. We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating a plausibility number to each measure, as a way to go beyond what is known with certainty and represent the agent's beliefs about probability. There are a number of standard examples of such maps: Shannon entropy, centre of mass, etc. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule), but leaves the given set of measures unchanged; the second shrinks the set of measures, without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning. |
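The two update types described above can be sketched in a few lines. The finite candidate set, the flat initial plausibility map, and the particular sample sequence below are illustrative assumptions, not the authors' formal model:

```python
# Candidate probabilities that a drawn marble is red: the agent's
# (finite, for illustration) set of measures, each with a plausibility.
measures = {p / 10: 1.0 for p in range(1, 10)}  # initially flat plausibility

def sample_update(plaus, outcome):
    """Type (1): repeated sampling. A 'plausibilistic' Bayes-style rule:
    reweight each measure by the likelihood it assigns to the observed
    outcome; the set of measures itself is unchanged."""
    lik = lambda p: p if outcome == "red" else 1 - p
    total = sum(pl * lik(p) for p, pl in plaus.items())
    return {p: pl * lik(p) / total for p, pl in plaus.items()}

def constraint_update(plaus, pred):
    """Type (2): higher-order information (e.g. 'more red than green').
    Shrinks the set of measures; surviving plausibilities are untouched."""
    return {p: pl for p, pl in plaus.items() if pred(p)}

def belief(plaus):
    """Belief as truth in the most plausible 'worlds' (here, measures)."""
    top = max(plaus.values())
    return [p for p, pl in plaus.items() if pl == top]

plaus = measures
for outcome in ["red", "red", "green", "red"]:       # repeated sampling
    plaus = sample_update(plaus, outcome)
plaus = constraint_update(plaus, lambda p: p > 0.5)  # "more red than green"
print(belief(plaus))  # [0.7]
```

With enough samples from the true distribution, the likelihood reweighting concentrates plausibility on the true measure, which is the convergence phenomenon the abstract proves in general.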

15:15 | Does Scientific Literacy Require a Theory of Truth? ABSTRACT. From "flat earthers" to "anti-vaxxers", to the hoax cures and diets on social media, the importance of scientific literacy cannot be emphasized enough. On the one hand, this points to one of the challenges facing science education: teaching approaches may be changed and deficiencies in the curriculum addressed. On the other hand, it opens the discussion to epistemological questions of truth and knowledge. One can easily turn to claims such as "'The earth is flat' is false" or "Baking soda is not a treatment for cancer" for discussions that involve both scientific literacy and epistemology. This paper aims to show that while scientific literacy may benefit from discussions of epistemological issues, it does not require a theory of truth. This appears counterintuitive, since there is a view that epistemology needs to account for the success of science in terms of its being truth-conducive. This is the view that Elgin (2017) calls veritism. Following Elgin, some of the problems with veritism in science will be discussed in terms of their relevance to scientific literacy. Popularizers of science would probably also object to the position that a theory of truth is not required for scientific literacy, especially since this paper also looks back at Rorty's (1991) views on science to further buttress its position. Indeed, Rorty's views on science may prove more relevant to issues in scientific literacy than to science itself. REFERENCES Elgin, Catherine. 2017. True Enough. MIT Press. Rorty, Richard. 1991. Objectivity, Relativism and Truth. Cambridge University Press. |

15:45 | Impact of Teaching on Acceptance of Pseudo-Scientific Claims PRESENTER: Jan Štěpánek ABSTRACT. Can teaching have any impact on students' willingness to embrace pseudo-scientific claims? And if so, will this impact be significant? This paper presents ongoing research, conducted in two countries and at four universities, which aims to answer these questions. The research builds on previous work by McLaughlin & McGill (2017). They conducted a study among university students which seems to show that teaching critical thinking can have a statistically significant impact on students' acceptance of pseudo-scientific claims. They compared a group of students who attended a course on critical thinking and pseudo-scientific theories with a control group of students who attended a course on general philosophy of science, using the same questionnaire containing the pseudo-scientific claims. The questionnaire was administered at the onset of the semester (along with a Pew Research Center Science Knowledge Quiz), and then again at the end of the semester. While there was no significant change in the degree of belief in pseudo-scientific claims in the control group, the experimental group showed a statistically significant decrease in belief in pseudo-scientific claims. In the first phase of our research, we conducted a study similar to that of McLaughlin & McGill, though we were not able to replicate their results: there was no significant change in belief in pseudo-scientific claims among the study's participants. This, in our opinion, is due to imperfections and flaws in both our study and McLaughlin & McGill's. In this paper, we would like to present our research along with the results obtained during its first phase. We will also discuss the shortcomings and limitations of our research and of the research it is based on. 
Finally, we would like to present and discuss future plans for the next phase of our research into the teaching of critical thinking and its transfer across cases focusing on the humanities and on science. McLaughlin, A.C. & McGill, A.E. (2017): Explicitly Teaching Critical Thinking Skills in a History Course. Science & Education 26(1–2), 93–105. Adam, A. & Manson, T. (2014): Using a Pseudoscience Activity to Teach Critical Thinking. Teaching of Psychology 41(2), 130–134. Tobacyk, J. (2004): A revised paranormal belief scale. International Journal of Transpersonal Studies 23, 94–98. |

15:15 | Liability Without Consciousness? The Case of a Robot ABSTRACT. It is well known that the law punishes those who cause harm to someone else. However, the criteria for punishment become complicated when applied to non-human agents. When talking about non-human agency, we primarily have in mind robot agents. Robot agency could reasonably be defended in terms of liability, the mental state of being liable. The roots of the problem should be sought in defining robots' ability to have mental states, but even when we put this particular problem aside, the question of liability seems to be of crucial importance when discussing a harm-causing technology. Since the question of liability requires special attention to the domain of mental states, we argue that it is crucial for the legal domain to define the legal personhood of a robot. We should try to answer the question: what constitutes a legal person in the case of non-human agency? If legal personhood is the ability to have legal rights and obligations, how can we ascribe these human qualities to a non-human agent? Are computing paradigms able to limit robots' ability to cause harm? If so, can legal personhood still be ascribed (bearing in mind that computing could limit free will)? These questions are of the highest importance when considering whether we should punish a robot, and how such punishment could function given non-human personhood. |

15:45 | Automated Reasoning with Complex Ethical Theories -- A Case Study Towards Responsible AI PRESENTER: Christoph Benzmüller ABSTRACT. The design of explicit ethical agents [7] is faced with tough philosophical and practical challenges. We address in this work one of the biggest ones: How to explicitly represent ethical knowledge and use it to carry out complex reasoning with incomplete and inconsistent information in a scrutable and auditable fashion, i.e. interpretable for both humans and machines. We present a case study illustrating the utilization of higher-order automated reasoning for the representation and evaluation of a complex ethical argument, using a Dyadic Deontic Logic (DDL) [3] enhanced with a 2D-Semantics [5]. This logic (DDL) is immune to known paradoxes in deontic logic, in particular "contrary-to-duty" scenarios. Moreover, conditional obligations in DDL are of a defeasible and paraconsistent nature and thus lend themselves to reasoning with incomplete and inconsistent data. Our case study consists of a rational argument originally presented by the philosopher Alan Gewirth [4], which aims at justifying an upper moral principle: the "Principle of Generic Consistency" (PGC). It states that any agent (by virtue of its self-understanding as an agent) is rationally committed to asserting that (i) it has rights to freedom and well-being; and that (ii) all other agents have those same rights. The argument used to derive the PGC is by no means trivial and has stirred much controversy in legal and moral philosophy during the last decades and has also been discussed as an argument for the a priori necessity of human rights. Most interestingly, the PGC has lately been proposed as a means to bound the impact of artificial general intelligence (AGI) by András Kornai [6]. 
Kornai's proposal draws on the PGC as the upper ethical principle which, assuming it can be reliably represented in a machine, will guarantee that an AGI respects basic human rights (in particular to freedom and well-being), on the assumption that it is able to recognize itself, as well as humans, as agents capable of acting voluntarily on self-chosen purposes. We will show an extract of our work on the formal reconstruction of Gewirth's argument for the PGC using the proof assistant Isabelle/HOL (a formally-verified, unabridged version is available in the Archive of Formal Proofs [8]). Independent of Kornai's claim, our work demonstrates by example that reasoning with ambitious ethical theories can by now be successfully automated. In particular, we illustrate how it is possible to exploit the high expressiveness of classical higher-order logic as a metalanguage in order to embed the syntax and semantics of some object logic (e.g. DDL enhanced with quantification and contextual information), thus turning a higher-order prover into a universal reasoning engine [1] and allowing for seamlessly combining and reasoning about and within different logics (modal, deontic, epistemic, etc.). In this sense, our work provides evidence for the flexible deontic logic reasoning infrastructure proposed in [2]. References 1. C. Benzmüller. Universal (meta-)logical reasoning: Recent successes. Science of Computer Programming, 172:48-62, March 2019. 2. C. Benzmüller, X. Parent, and L. van der Torre. A deontic logic reasoning infrastructure. In F. Manea, R. G. Miller, and D. Nowotka, editors, 14th Conference on Computability in Europe, CiE 2018, Proceedings, volume 10936 of LNCS, pages 60-69. Springer, 2018. 3. J. Carmo and A. J. Jones. Deontic logic and contrary-to-duties. In Handbook of Philosophical Logic, pages 265-343. Springer, 2002. 4. A. Gewirth. Reason and morality. University of Chicago Press, 1981. 5. D. Kaplan. On the logic of demonstratives. 
Journal of Philosophical Logic, 8(1):81-98, 1979. 6. A. Kornai. Bounding the impact of AGI. Journal of Experimental & Theoretical Artificial Intelligence, 26(3):417-438, 2014. 7. J. Moor. Four kinds of ethical robots. Philosophy Now, 72:12-14, 2009. 8. XXXXXXX. Formalisation and evaluation of Alan Gewirth's proof for the principle of generic consistency in Isabelle/HOL. Archive of Formal Proofs, 2018. |
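The dyadic obligation operator O(B|A) at the heart of the abstract can be given a rough flavour with a simplified best-worlds (preference) semantics. This is a common textbook stand-in, not the Carmo and Jones DDL of [3] nor the Isabelle/HOL embedding; the fence scenario and rankings are invented:

```python
# Worlds as frozensets of atomic facts, with a betterness ranking
# (lower number = morally better world). Invented toy scenario.
worlds = [
    frozenset(),                    # w0: no fence (ideal)
    frozenset({"fence"}),           # w1: fence, unpainted (worst)
    frozenset({"fence", "white"}),  # w2: fence painted white (sub-ideal)
]
rank = {worlds[0]: 0, worlds[2]: 1, worlds[1]: 2}

def obligatory(b, a):
    """O(b | a): b holds at all best a-worlds.  A simplified
    preference semantics standing in for the DDL of the abstract."""
    a_worlds = [w for w in worlds if a(w)]
    best = min(rank[w] for w in a_worlds)
    return all(b(w) for w in a_worlds if rank[w] == best)

fence = lambda w: "fence" in w
white = lambda w: "white" in w
top   = lambda w: True  # the trivial condition (unconditional obligation)

# A contrary-to-duty pair, representable without paradox:
print(obligatory(lambda w: not fence(w), top))  # True: there ought to be no fence
print(obligatory(white, fence))                 # True: if there is one, it ought to be white
```

Because the secondary obligation is evaluated only at the best fence-worlds, the pair is jointly satisfiable, which is the "contrary-to-duty" immunity the abstract attributes to DDL.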

15:15 | How virtue signalling makes us better: moral preferences in the selection of types of autonomous vehicles PRESENTER: Robin Kopecký ABSTRACT. In this paper, we present a study of moral judgements about autonomous vehicles (AVs). We employ a hypothetical choice between three types of “moral” software in a collision situation (“selfish”, “altruistic”, and “averse to harm”) in order to investigate moral judgement about this social dilemma in the Czech population. We aim to answer two research questions: whether public circumstances (i.e. the software choice being visible at first glance) make the personal choice more “altruistic”, and which type of situation is most problematic for the “altruistic” choice (namely, the public one, the personal one, or the one for a person’s offspring). We devised a web-based study running between May and December of 2017 and gathered 2769 respondents (1799 women, 970 men; age IQR: 25-32). The study was part of research preregistered at OSF before the start of data gathering. The AV-focused block of the questionnaire opened with brief information on AVs and three proposed program solutions for the previously introduced “trolley-problem-like” collisions: “selfish” (with preference for the passengers in the car), “altruistic” (with preference for the highest number of saved lives), and “aversion to harm” (which will not actively change direction in a way that kills a pedestrian or a passenger, even though doing so would save more lives in total). Participants were asked the following four questions: 1. What type of software would you choose for your own car if nobody was able to find out about your choice? (“secret/self”) 2. What type of software would you choose for your own car if your choice was visible at first glance? (“public/self”) 3. What type of software would you choose for the car of your beloved child if nobody was able to find out? (“child”) 4. 
What type of software would you vote for, in a secret ballot in parliament, if it were to become the only legal type of AV? (“parliament”) The results are as follows (independence was tested with a chi-square test): “Secret/self”: “selfish” (45.2 %), “altruistic” (45.2 %), “aversion to harm” (9.6 %). “Public/self”: “selfish” (30 %), “altruistic” (58.1 %), “aversion to harm” (11.8 %). In the public choice, people were less likely to choose selfish software for their own car. “Child”: “selfish” (66.6 %), “altruistic” (27.9 %), “aversion to harm” (5.6 %). A vote in parliament for legalizing a single type: “selfish” (20.6 %), “altruistic” (66.9 %), “aversion to harm” (12.5 %). In the choice of a car for one’s own child, people were more likely to choose selfish software than in the choice for themselves. Based on the results, we can conclude that public choice is more likely to pressure consumers to accept the altruistic solution, making it a reasonable and relatively cheap way to shift them towards more moral choices. Less favourably, the general public tends toward heightened sensitivity and selfishness in the case of one’s own offspring, and a careful approach is needed to prevent moral panic. |
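The chi-square test of independence used above can be sketched as follows. Since the abstract reports only percentages, the cell counts below are approximate reconstructions for illustration, not the study's raw data:

```python
# Approximate cell counts rebuilt from the reported percentages (N = 2769);
# order of cells: selfish, altruistic, aversion to harm.
secret = [round(2769 * p) for p in (0.452, 0.452, 0.096)]  # "secret/self"
public = [round(2769 * p) for p in (0.300, 0.581, 0.118)]  # "public/self"

def chi_square(table):
    """Pearson chi-square statistic for a test of independence between
    row condition (choice context) and column category (software type)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i, row in enumerate(table) for j, obs in enumerate(row))

stat = chi_square([secret, public])
# With df = (2-1)*(3-1) = 2, the alpha = 0.05 critical value is 5.99:
print(stat > 5.99)  # True: context and choice are not independent
```

On these reconstructed counts the statistic far exceeds the critical value, consistent with the reported shift away from "selfish" software in the public condition.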

15:45 | Prospect of NBICS Development and Application ABSTRACT. The report considers the basic principles of a philosophical approach to NBICS convergence. As a method of obtaining fundamental knowledge, NBICS technologies are turning into an independent force influencing nature, society and man. One of the basic ideas of the nanotechnology concept is the opportunity to treat man as a constructor of the real world, e.g. by shaping human perception by means of nanochips or by programming virtual reality in the human brain. This might lead to new forms of consciousness and the emergence of a modified objective reality. Developing and introducing nanotechnologies raises new scientific issues closely connected with the realization of possible projects such as, for instance, the complete description of thinking processes and of the brain's perception of reality, the slowdown of aging processes, the possibility of rejuvenating the human organism, the development of brain/brain or brain/computer interfaces, and the creation of robots and other devices possessing at least partial individuality. The penetration of technologies into human perception inevitably results in a hybrid reality, which eliminates any border between a person's virtual personality and his physical embodiment. Spatial ideas about the physical limits of communication and identification also change, since human presence in the communication medium is experienced as virtual and real simultaneously. This turns out to be an absolutely new phenomenon of human existence, founded in many ways on the principles of constructivism. The active role of cognition is the most important aspect of the paradigm analyzed in the report as the methodology of this new type of technologies. Such an opportunity opens unlimited perspectives for individual and collective creative work. The author examines the dialogue between man and nature by means of these technologies. 
He demonstrates that they are directed at the solution of scientific issues of a mostly constructive nature, under the influence of the virtualization of human consciousness and social relations. Using the example of the 'instrumental rationality' paradigm, the report illustrates that, since NBICS technologies include the Internet, they cannot be used in a vacuum: they are interconnected and carry with them a number of political, economic and social aspects. As a result, they are becoming characteristic of the public style of thinking. The emphasis is placed on the socio-cultural prospects of the new kind of technologies and their constructivist nature. Any cognition process turns into a social act, as norms and standards arise among representatives of different knowledge spheres during communication; these norms are not tied to a single originator, but are recognized by the whole community involved in the process. From the scientific point of view, the consequences of NBICS application include both unprecedented progress in medicine, molecular biology, genetics and proteomics and the newest achievements in electronics, robotics and software. They will make it possible to create artificial intelligence, to prolong life expectancy to an unprecedented degree, and to create new public forms and new social and psychical processes. At the same time, man does not cease to be rational under the influence of technologies. His cognition is accompanied by creative and constructive activity, leading to effects that can reveal themselves, for instance, in the modification of human sensitivity through significant transformation of physical capabilities. In turn, this may lead to irreversible consequences, because man himself, his body and his consciousness, turns into an integral part of complex ecological, socio-cultural and socio-technical systems. 
That is why philosophical reflection on the ecological, social and cultural results of the introduction and application of NBICS technologies is becoming more and more topical. The report concludes that NBICS surpasses all previous technological achievements in its potential and socio-cultural effects. |

15:15 | A generalized omitting type theorem in mathematical fuzzy logic PRESENTER: Carles Noguera ABSTRACT. Mathematical fuzzy logic (MFL) studies graded logics as particular kinds of many-valued inference systems in several formalisms, including first-order predicate languages. Models of such first-order graded logics are variations of classical structures in which predicates are evaluated over wide classes of algebras of truth degrees, beyond the classical two-valued Boolean algebra. Such models are relevant for recent computer science developments in which they are studied as weighted structures. The study of such models is based on the corresponding strong completeness theorems [CN,HN] and has already addressed several crucial topics such as: characterization of completeness properties w.r.t. models based on particular classes of algebras [CEGGMN], models of logics with evaluated syntax [NPM,MN], study of mappings and diagrams [D1], ultraproduct constructions [D2], characterization of elementary equivalence in terms of elementary mappings [DE], characterization of elementary classes as those closed under elementary equivalence and ultraproducts [D3], Löwenheim-Skolem theorems [DGN1], and back-and-forth systems for elementary equivalence [DGN2]. A related stream of research is that of continuous model theory [CK,C]. Another important item in the classical agenda is that of omitting types, that is, the construction of models (of a given theory) where certain properties of elements are never satisfied. In continuous model theory the construction of models omitting many types is well known [YBWU], but in MFL it has only been addressed in particular settings [CD,MN]. The goal of the talk is to establish a new omitting types theorem, generalizing the previous results to the wider notion of tableaux (pairs of sets of formulas, which codify the properties that are meant to be preserved and those that will be falsified). References: [YBWU] I. Ben Yaacov, A. Berenstein, C. 
Ward Henson, and A. Usvyatsov. Model theory for metric structures, (2007), URL:https://faculty.math.illinois.edu/~henson/cfo/mtfms.pdf [C] X. Caicedo. Maximality of continuous logic, in Beyond first order model theory, Chapman & Hall/CRC Monographs and Research Notes in Mathematics (2017). [CK] C.C. Chang and H. J. Keisler. Continuous Model Theory, Annals of Mathematical Studies, vol. 58, Princeton University Press, Princeton (1966). [CD] P. Cintula and D. Diaconescu. Omitting Types Theorem for Fuzzy Logics. To appear in IEEE Transactions on Fuzzy Systems. [CEGGMN] P. Cintula, F. Esteva, J. Gispert, L. Godo, F. Montagna, and C. Noguera. Distinguished Algebraic Semantics For T-Norm Based Fuzzy Logics: Methods and Algebraic Equivalencies, Annals of Pure and Applied Logic 160(1):53-81 (2009). [CN] P. Cintula and C. Noguera. A Henkin-style proof of completeness for first-order algebraizable logics. Journal of Symbolic Logic 80:341-358 (2015). [D1] P. Dellunde. Preserving mappings in fuzzy predicate logics. Journal of Logic and Computation 22(6):1367-1389 (2011). [D2] P. Dellunde. Revisiting ultraproducts in fuzzy predicate logics, Journal of Multiple-Valued Logic and Soft Computing 19(1):95-108 (2012). [D3] P. Dellunde. Applications of ultraproducts: from compactness to fuzzy elementary classes. Logic Journal of the IGPL 22(1):166-180 (2014). [DE] P. Dellunde and F. Esteva. On elementary equivalence in fuzzy predicate logics. Archive for Mathematical Logic 52:1-17 (2013). [DGN1] P. Dellunde, À. García-Cerdaña, and C. Noguera. Löwenheim-Skolem theorems for non-classical first-order algebraizable logics. Logic Journal of the IGPL 24(3):321-345 (2016). [DGN2] P. Dellunde, À. García-Cerdaña, and C. Noguera. Back-and-forth systems for first-order fuzzy logics. Fuzzy Sets and Systems 345:83-98 (2018). [HN] P. Hájek and P. Cintula. On theories and models in fuzzy predicate logics. Journal of Symbolic Logic 71(3):863-880 (2006). [MN] P. Murinová and V. Novák. 
Omitting Types in Fuzzy Logic with Evaluated Syntax, Mathematical Logic Quarterly 52 (3): 259-268 (2006). [NPM] V. Novák, I. Perfilieva, and J. Močkoř. Mathematical Principles of Fuzzy Logic, Kluwer Dordrecht (2000). |
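To give a concrete flavour of the weighted structures mentioned above, here is a minimal illustration of a graded predicate evaluated in the standard Łukasiewicz algebra on [0, 1] instead of the two-valued Boolean algebra. The domain and truth degrees are invented, and this sketches only the semantics, not the omitting-types construction itself:

```python
# A tiny weighted structure: a domain of individuals with a graded
# predicate "tall", taking truth degrees in [0, 1].
tall = {"ann": 0.9, "bob": 0.4, "cat": 0.7}

def luk_and(x, y):
    """Lukasiewicz strong conjunction (t-norm): max(0, x + y - 1)."""
    return max(0.0, x + y - 1)

def luk_implies(x, y):
    """Lukasiewicz implication (residuum): min(1, 1 - x + y)."""
    return min(1.0, 1 - x + y)

def forall(degrees):
    """Universal quantifier as the infimum of instance degrees."""
    return min(degrees)

# "Everyone who is tall is tall" is fully true (degree 1.0):
print(forall(luk_implies(tall[p], tall[p]) for p in tall))  # 1.0

# "Ann is tall AND Bob is tall" holds only to degree ~0.3:
print(luk_and(tall["ann"], tall["bob"]))
```

Over the Boolean algebra {0, 1} these clauses collapse to classical conjunction, implication and quantification, which is the sense in which graded models generalize classical structures.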

15:45 | On ranks for families of theories of abelian groups PRESENTER: Inessa Pavlyuk ABSTRACT. We continue to study families of theories of abelian groups \cite{PS18} characterizing $e$-minimal subfamilies \cite{rsPS18} by Szmielew invariants $\alpha_{p,n}$, $\beta_p$, $\gamma_p$, $\varepsilon$ \cite{ErPa, EkFi}, where $p\in P$, $P$ is the set of all prime numbers, $n\in\omega\setminus\{0\}$, as well as describing possibilities for the rank ${\rm RS}$ \cite{rsPS18}. We denote by $\mathcal{T}_A$ the family of all theories of abelian groups. \begin{theorem}\label{th1_PS} An infinite family $\mathcal{T}\subseteq\mathcal{T}_A$
is $e$-minimal if and only if for any upper bound $\xi\geq m$ or
lower bound $\xi\leq m$, for $m\in\omega$, of a Szmielew invariant
$$\xi\in\{\alpha_{p,n}\mid p\in
P,n\in\omega\setminus\{0\}\}\cup\{\beta_p\mid p\in
P\}\cup\{\gamma_p\mid p\in P\},$$ there are finitely many theories
in $\mathcal{T}$ satisfying this bound. Having finitely many
theories with $\xi\geq m$, there are infinitely many theories in
$\mathcal{T}$ with a fixed value of $\alpha_{p,s}$. \end{theorem} \begin{theorem}\label{th2_PS} For any theory $T$ of an abelian group $A$ the following conditions are equivalent: $(1)$ $T$ is approximated by some family of theories; $(2)$ $T$ is approximated by some $e$-minimal family; $(3)$ $A$ is infinite. \end{theorem} Let $\mathcal{T}$ be a family of first-order complete theories in a language $\Sigma$. For a set $\Phi$ of $\Sigma$-sentences we put $\mathcal{T}_\Phi=\{T\in\mathcal{T}\mid T\models\Phi\}$. A family of the form $\mathcal{T}_\Phi$ is called {\em $d$-definable} (in $\mathcal{T}$). If $\Phi$ is a singleton $\{\varphi\}$ then $\mathcal{T}_\varphi=\mathcal{T}_\Phi$ is called {\em $s$-definable}. \begin{theorem}\label{th3_PS} Let $\alpha$ be a countable ordinal, $n\in\omega\setminus\{0\}$. Then there is a $d$-definable subfamily $(\mathcal{T}_A)_\Phi$ such that ${\rm RS}((\mathcal{T}_A)_\Phi)=\alpha$ and ${\rm ds}((\mathcal{T}_A)_\Phi)=n$. \end{theorem} This research was partially supported by Committee of Science in Education and Science Ministry of the Republic of Kazakhstan (Grant No. AP05132546) and Russian Foundation for Basic Researches (Project No. 17-01-00531-a). \begin{thebibliography}{10} \bibitem{PS18} {\scshape In.I.~Pavlyuk, S.V.~Sudoplatov}, {\itshape Families of theories of abelian groups and their closures}, {\bfseries\itshape Bulletin of Karaganda University. Series ``Mathematics''}, vol.~90 (2018). \bibitem{rsPS18} {\scshape S.V.~Sudoplatov}, {\itshape On ranks for families of theories and their spectra}, {\bfseries\itshape International Conference ``Mal'tsev Meeting'', November 19--22, 2018, Collection of Abstracts}, Novosibirsk: Sobolev Institute of Mathematics, Novosibirsk State University, 2018, p.~216. \bibitem{ErPa} {\scshape Yu.L.~Ershov, E.A.~Palyutin}, {\bfseries\itshape Mathematical logic}, FIZMATLIT, Moscow, 2011. 
\bibitem{EkFi} {\scshape P.C.~Eklof, E.R.~Fischer}, {\itshape The elementary theory of abelian groups}, {\bfseries\itshape Annals of Mathematical Logic}, vol.~4 (1972), pp.~115--171. \end{thebibliography} |

16:45 | Using linguistic corpora to understand mathematical explanation PRESENTER: Juan Pablo Mejía Ramos ABSTRACT. The notion of explanation in mathematics has received a lot of attention in philosophy. Some philosophers have suggested that accounts of scientific explanation can be successfully applied to mathematics (e.g. Steiner 1978). Others have disagreed, and questioned the extent to which explanation is relevant to the actual practice of mathematicians. In particular, the extent to which mathematicians use the notion of explanatoriness explicitly in their research is a matter of sharp disagreement. Resnik and Kushner (1987, p.151) claimed that mathematicians “rarely describe themselves as explaining”. But others disagree, claiming that mathematical explanation is widespread, citing individual mathematicians’ views (e.g., Steiner 1978), or discussing detailed cases in which mathematicians explicitly describe themselves or some piece of mathematics as explaining mathematical phenomena (e.g. Hafner & Mancosu 2005). However, this kind of evidence is not sufficient to settle the disagreement. Recently, Zelcer (2013) pointed out that a systematic analysis of standard mathematical text was needed to address this issue, but that such analysis did not exist. In this talk we illustrate the use of corpus linguistics methods (McEnery & Wilson 2001) to perform such an analysis. We describe the creation of large-scale corpora of written research-level mathematics (obtained from the arXiv e-prints repository), and a mechanism to convert LaTeX source files to a form suitable for use with corpus linguistic software packages. We then report on a study in which we used these corpora to assess the ways in which mathematicians describe their work as explanatory in their research papers. In particular, we analysed the use of ‘explain words’ (explain, explanation, and various related words and expressions) in this large corpus of mathematics research papers. 
In order to contextualise mathematicians’ use of these words/expressions, we conducted the same analysis on (i) a corpus of research-level physics articles (constructed using the same method) and (ii) representative corpora of modern English. We found that mathematicians do use this family of words, but relatively infrequently. In particular, the use of ‘explain words’ is considerably more prevalent in research-level physics and representative English than in research-level mathematics. In order to further understand these differences, we then analysed the collocates of ‘explain words’ (words which regularly appear near ‘explain words’) in the two academic corpora. We found differences in the types of explanations discussed by physicists and mathematicians: physicists talk about explaining why disproportionately more often than mathematicians, who more often focus on explaining how. We discuss some possible accounts of these differences. References Hafner, J., & Mancosu, P. (2005). The varieties of mathematical explanation. In P. Mancosu et al. (Eds.), Visualization, Explanation and Reasoning Styles in Mathematics (pp. 215–250). Berlin: Springer. McEnery, T. & Wilson, A. (2001). Corpus linguistics: An introduction (2nd edn). Edinburgh: Edinburgh University Press. Steiner, M. (1978) Mathematical explanation. Philosophical Studies, 34(2), 135–151. Resnik, M., & Kushner, D. (1987). Explanation, independence, and realism in mathematics. British Journal for the Philosophy of Science, 38, 141–158. Zelcer, M. (2013). Against mathematical explanation. Journal for General Philosophy of Science, 44(1), 173-192. |
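The core corpus measurement described above (normalised frequency of 'explain words' across corpora of different sizes) can be sketched as follows. The word list and the two miniature "corpora" are invented placeholders, not the arXiv data or the study's actual query:

```python
import re

# An illustrative family of 'explain words' (the study's actual list
# includes further related words and expressions).
EXPLAIN_WORDS = {"explain", "explains", "explained", "explaining",
                 "explanation", "explanations", "explanatory"}

def per_million(text):
    """Occurrences of 'explain words' per million tokens: the standard
    normalisation for comparing corpora of different sizes."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for t in tokens if t in EXPLAIN_WORDS)
    return 1_000_000 * hits / len(tokens)

# Invented two-sentence stand-ins for the maths and physics corpora:
maths   = "we prove the lemma and the theorem follows by induction"
physics = "this model explains the observed spectrum and the explanation is robust"

print(per_million(maths), per_million(physics))
```

Run over real corpora, comparisons of exactly this normalised figure underlie the finding that 'explain words' are rarer in research-level mathematics than in physics or general English.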

17:15 | Studying Actions and Imperatives in Mathematical Texts PRESENTER: Fenner Tanswell ABSTRACT. In this paper, we examine words relating to mathematical actions and imperatives in mathematical texts, and within proofs. The main hypothesis is that mathematical texts, and proofs especially, contain frequent uses of instructions to the reader, issued by using imperatives and other action-focused linguistic constructions. We take common verbs in mathematics, such as “let”, “suppose”, “denote”, “consider”, “assume”, “solve”, “find”, “prove”, etc., and compare their relative frequencies within proofs, in mathematical texts generally, and in spoken and written British and American English, using a corpus of mathematical papers taken from the arXiv. Furthermore, we conduct ‘keyword’ analyses to identify those words which disproportionately occur in proofs compared to other parts of mathematics research papers. Previous analyses of mathematical language, such as those conducted by de Bruijn (1987) and Ganesalingam (2013), have largely been carried out without empirical investigations of actual mathematical texts. As a result, some of the claims they make are at odds with the reality of written mathematics. For example, both authors claim that there is no room for imperatives in rigorous mathematics. Whether this is meant to be a descriptive or normative claim, we demonstrate that analysing the actual writings of mathematicians, particularly written proofs, shows something quite different. Mathematicians use certain imperatives far more frequently than in natural language, and within proofs we find an even higher prevalence of certain verbs. The implication is that mathematical writing and argumentation may be harder to formalise than these linguistic accounts suggest. Furthermore, this supports the idea that proofs are not merely sequences of declarative sentences, but instead provide instructions for mathematical activities to be carried out. 
References De Bruijn, N.G. (1987) “The mathematical vernacular, a language for mathematics with typed sets”, in Dybjer, P., et al. (eds.) Proceedings of the Workshop on Programming Logic, Report 37, Programming Methodology Group, University of Göteborg and Chalmers University of Technology. Ganesalingam, M. (2013) The Language of Mathematics, Lecture Notes in Computer Science Vol 7805, Springer, Berlin. |
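The ‘keyword’ analysis mentioned in the abstract above can be sketched roughly as follows; the two mini-corpora and the smoothed frequency-ratio score are made-up stand-ins for the real corpora and keyness statistics (such as log-likelihood), chosen only to make the idea concrete.

```python
from collections import Counter
import re

# Hypothetical mini-corpora: text inside proofs vs. the rest of a paper.
proof_text = ("Let G be a graph. Suppose G is connected. "
              "Consider a spanning tree T. Let e be an edge.")
other_text = ("We study connectivity of graphs. The main theorem was "
              "conjectured in 1990. Our results improve earlier bounds.")

def rel_freq(text):
    """Relative frequency of each word in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    n = len(tokens)
    return {w: c / n for w, c in Counter(tokens).items()}

proof_freq = rel_freq(proof_text)
other_freq = rel_freq(other_text)

def keyness(word, eps=1e-6):
    """Ratio of relative frequencies, smoothed to avoid division by zero.
    Scores far above 1 mark words disproportionately frequent in proofs."""
    return (proof_freq.get(word, 0) + eps) / (other_freq.get(word, 0) + eps)

for verb in ["let", "suppose", "consider"]:
    print(verb, keyness(verb))
```

On real data one would use proper keyness statistics over large token counts, but the shape of the computation is the same: compare each word’s relative frequency in proofs against a reference corpus.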

16:45 | Numbers as Properties: Dissolving Benacerraf’s Tension ABSTRACT. Generations of mathematicians and philosophers have been intrigued by the question, What are arithmetic propositions about? I defend a Platonist answer: they're about numbers, and numbers are plural properties. I start with the seminal "Mathematical Truth" (1973), where Benacerraf showed that if numbers exist, there is a tension between their metaphysical and epistemological statuses. Even as Benacerraf's particular assumptions have been challenged, this tension has reappeared. I bring it out with two Benacerrafian requirements: Epistemic requirement. Any account of mathematics must explain how we can have mathematical knowledge. Semantic requirement. Any semantics for mathematical language must be homogeneous with a plausible semantics for natural language. Each of the prominent views of mathematical truth fails one of these requirements. If numbers are abstract objects, as the standard Platonist says, how is mathematical knowledge possible? Not by one common source of knowledge: causal contact. Field (1989) argues that the epistemological problem extends further: if numbers are abstract objects, we cannot verify the reliability of our mathematical belief-forming processes, even in principle. If mathematical truth amounts to provability in a system, as the combinatorialist says, the semantics for mathematical language is unlike those semantics normally given for natural language ('snow is white' is true iff snow is white, vs. '2 + 2 = 4' is true iff certain syntactic facts hold). I argue that numbers are properties. Epistemic requirement. We're in causal contact with properties, so we're in causal contact with numbers. More generally, because a good epistemology must account for knowledge of properties, any such theory should account for mathematical knowledge. Semantic requirement. Just as 'dog' refers to the property doghood, '2' refers to the property being two. 
Just as 'dogs are mammals' is true iff a certain relation holds between doghood and mammalhood, '2 + 2 = 4' is true iff a certain relation holds between being two and being four. Specifically, I say that numbers are what I call pure plural properties. A plural property is instantiated by a plurality of things. Take the fact that Thelma and Louise cooperate. The property cooperate doesn't have two argument places: one for Thelma, and one for Louise. Rather, it has a single argument place: here it takes the plurality, Thelma and Louise. Consider another property instantiated by this plurality: being two women. This plural property is impure because it does not concern only numbers, but we can construct it out of two other properties, womanhood and being two. This latter plural property is pure. It is the number two. Famously, number terms are used in two ways: referentially ('two is the smallest prime') and attributively ('I have two apples'). If numbers are objects, the attributive use is confounding (Hofweber 2005). If they're properties, there is no problem: other property terms share this dual use ('red is my favorite color' vs. 'the apple is red'). The standard Platonist posits objects that are notoriously mysterious. While the nature of properties may be contentious, my numbers-as-properties view is not committed to anything so strange. |

17:15 | The development of epistemic objects in mathematical practice: Shaping the infinite realm driven by analogies from finite mathematics in the area of Combinatorics. PRESENTER: Deniz Sarikaya ABSTRACT. We offer a case study of mathematical theory building via analogous reasoning. We analyse the conceptualisation of basic notions of (topological) infinite graph theory, mostly exemplified by the notion of infinite cycles. We show to what extent different definitions of “infinite cycle” were evaluated against results from finite graph theory. There were (at least) three competing formalisations of “infinite cycles”, focusing on different aspects of finite ones. For instance, we might observe that in a finite cycle every vertex has degree two. If we take this as the essential feature of cycles, we can get to a theory of infinite cycles. A key reason for the rejection of this approach is that some results from finite graph theory do not extend (when we syntactically change “finite graph”, “finite cycle”, etc. to “infinite graph”, “infinite cycle”, etc.). The activity of axiomatising a field is not a purely mathematical one: it cannot be settled by proof, but by philosophical reflection. This might sound trivial but is often neglected due to an oversimplified aprioristic picture of mathematical research. We must craft our formal counterparts in mathematics from our intuitions of the abstract concepts/objects. While we normally think of a mathematical argument as the prototype of deductive reasoning, there are inductive elements in at least three senses: 1. In the heuristics of development. 2. In the process of axiomatisation, while 2a. we test the adequacy of an axiomatisation. 2b. we are looking for new axioms to extend a current axiomatic system. We want to focus on 2a and especially on the role of analogies. Nash-Williams (1992, p. 1) observed that “the majority of combinatorialists seem to have concentrated on finite combinatorics, to the extent that it has almost seemed an eccentricity to think that graphs and other combinatorial structures can be either finite or infinite”. This observation is still true, but more recently a growing group of combinatorialists has been working on infinite structures. We want to analyse the heuristics of theory development in this growing area. There are vocabularies from finite graph theory for which it is not clear which infinite counterpart might be the best concept to work with. And for theorems making use of them it is also unclear whether they can or should have an extension to infinite graphs. This area is very suitable for philosophical discourse, since the concepts used are quite intuitive and involve only a little background from topology and graph theory. We argue that mathematical concepts are less fixed and eternal than they might seem. We shift the focus from the sole discussion of theorems, which is overrepresented in the reflections of philosophers of mathematics, towards the interplay of definitions and theorems. While theorems are a very important (and probably even constitutive) part of the practice of mathematicians, we should not forget that mathematical concepts, in the sense of concepts used in mathematical discourses, develop over time. We can only prove / state / comprehend with fitting vocabulary, which we develop in a process of iterative refinement. |

16:45 | Frames – A New Model for Analyzing Theories ABSTRACT. The frame model was developed in cognitive psychology (Barsalou 1992) and imported into the philosophy of science in order to provide representations of scientific concepts and conceptual change (Andersen and Nersessian 2000; Andersen et al. 2006; Chen and Barker 2000; Chen 2003; Barker et al. 2003; Votsis and Schurz 2012; Votsis and Schurz 2014). The aim of my talk is to show that, besides representing scientific concepts, the frame model is an efficient instrument for representing and analyzing scientific theories. That is, I aim to establish the frame model as a representation tool for the structure of theories within the philosophy of science. In order to do so, in the first section of my talk, I will briefly introduce the frame model and develop the notion of theory frames as an extension of it. Further, I will distinguish between theory frames for qualitative theories, in which scientific measurement is based on nominal scales, and theory frames for quantitative theories, in which measurement is based on ratio scales. In two case studies, I will apply the notion of theory frames to a linguistic and a physical theory. Section 2 contains a diachronic analysis of a qualitative theory by applying the notion of a theory frame to the pro-drop theory of generative linguistics. In section 3, I will provide a frame-based representation of electrostatics, the laws of which contain quantitative theoretical concepts. Based on the two case studies, I will argue that the frame model is a powerful instrument for analyzing the laws of scientific theories, the determination of theoretical concepts, the explanatory role of theoretical concepts, the abductive introduction of a new theoretical concept, the diachronic development of a theory, and the distinction between qualitative and quantitative scientific concepts. 
I will show that due to its graphical character the frame model provides a clear and intuitive representation of the structure of a theory as opposed to other models of theory representation like, for instance, the structuralist view of theories. Literature Andersen, Hanne, and Nancy J. Nersessian. 2000. “Nomic Concepts, Frames, and Conceptual Change.” Philosophy of Science 67 (Proceedings): S224-S241. Andersen, Hanne, Peter Barker, and Xiang Chen. 2006. The Cognitive Structure of Scientific Revolutions. Cambridge: University Press. Barker, Peter, Xiang Chen, and Hanne Andersen. 2003. “Kuhn on Concepts and Categorization.” In Thomas Kuhn, ed. Thomas Nickles, 212-245. Cambridge: University Press. Barsalou, Lawrence W. 1992. “Frames, concepts, and conceptual fields.” In Frames, fields, and contrasts, ed. Adrienne Lehrer, and Eva F. Kittay, 21–74. Hillsdale: Lawrence Erlbaum Associates. Chen, Xiang. 2003. “Object and Event Concepts. A Cognitive Mechanism of Incommensurability.” Philosophy of Science 70: 962-974. Chen, Xiang, and Peter Barker. 2000. “Continuity through Revolutions: A Frame-Based Account of Conceptual Change During Scientific Revolutions.” Philosophy of Science 67:208-223. Votsis, I., and Schurz, G. 2012. “A Frame-Theoretic Analysis of Two Rival Conceptions of Heat.” Studies in History and Philosophy of Science, 43(1): 105-114. Votsis, I., and Schurz, G. 2014. “Reconstructing Scientific Theory Change by Means of Frames.” In Concept Types and Frames. Application in Language, Cognition, and Science, ed. T. Gamerschlag, D. Gerland, R. Osswald, W. Petersen, 93-110. New York: Springer. |

17:15 | Abstraction in Scientific Modeling PRESENTER: Demetris Portides ABSTRACT. Abstraction is ubiquitous in scientific model construction. It is generally understood to be synonymous with omission of features of target systems, which means that something is left out from a description and something else is retained. Such an operation could be interpreted so as to involve the act of subtracting something and keeping what is left, but it could also be interpreted so as to involve the act of extracting something and discarding the remainder. The first interpretation entails that modelers act as if they possess a list containing all the features of a particular physical system and begin to subtract in the sense of scratching off items from the list. Let us call this the omission-as-subtraction view. According to the second interpretation, a particular set of features of a physical system is chosen and conceptually removed from the totality of features the actual physical system may have. Let us call the latter the omission-as-extraction view. If abstraction consists in the cognitive act of omission-as-subtraction, this would entail that scientists know what has been subtracted from the model description and thus would know what should be added back into the model in order to turn it into a more realistic description of its target. This idea, most of the time, conflicts with actual scientific modeling, where a significant amount of labor and inventiveness is put into discovering what should be added back into a model. In other words, the practice of science provides evidence that scientists, more often than not, operate without any such knowledge. One is thus justified in questioning whether scientists actually know what they are subtracting in the first case. 
Since it is hard to visualize how modelers can abstract, in the sense of omission-as-subtraction, without knowing what they are subtracting, one is justified in questioning whether a process of omission-as-subtraction is at work. In this paper we focus in particular on theory-driven models and phenomenological models in order to show that, across different modeling practices, what is involved in the model-building process is the act of extracting certain features of physical systems, conceptually isolating and focusing on them. This is the sense of omission-as-extraction, which we argue is more suitable for understanding how scientific model-building takes place before the scientist moves on to the question of how to make the required adjustments to the model in order to meet the representational goals of the task at hand. Furthermore, we show that abstraction-as-extraction can be understood as a form of selective attention and as such can be distinguished from idealization. |

Organizer: Cristina Corredor

The Spanish Society of Logic, Methodology and Philosophy of Science (SLMFCE in its Spanish acronym) is a scientific association formed by specialists working in these and other closely related fields. Its aims and scope also cover those of analytic philosophy in a broad sense and of argumentation theory. It is worth mentioning that among its priorities is the support and promotion of young researchers. To this aim, the Society has developed a policy of grants and awards for its younger members.

The objectives of the SLMFCE are to encourage, promote and disseminate study and research in the fields mentioned above, as well as to foster contacts and interrelations among specialists and with other similar societies and institutions. The symposium is intended to present the work carried out by prominent researchers and research groups linked to the Society. It will include four contributions in different subfields of specialization, allowing the audience at the CLMPST 2019 to form an idea of the plural research interests and relevant outcomes of our members.

18:00 | Semiotic analysis of Dedekind’s arithmetical strategies ABSTRACT. In this talk, I will present a case study, which uses close reading of [Dedekind 1871] to study semiotic processes of a mathematical text. I focus on analyzing from a semiotic perspective what Haffner [2017] describes as ‘strategical uses of arithmetic’ employed by Richard Dedekind (1831-1916) and Heinrich Weber (1842-1913) in their joint work on function theory [Dedekind and Weber 1882]. My analysis of Dedekind’s representations shows that neither word-for-word correspondences with other texts (e.g. texts on number theory) nor correspondences between words and stable referents fully capture Dedekind’s “strategic use of arithmetic” in [Dedekind 1871]. This use is thus the product of a textual practice, not a structural correspondence to which the text simply refers. An important line of argument in [Haffner 2017] is that a mathematical theory (be it function theory as in [Dedekind and Weber 1882] or ideal theory as in [Dedekind 1871]) becomes arithmetical by introducing concepts that are ‘similar’ to number-theoretical ones, and by transferring formulations from number theory. Haffner’s claim only emphasizes why we need a better understanding of the production of analogy as a semiotic process. Since the definitions and theorems of [Dedekind 1871] do not correspond word-for-word to number-theoretical definitions and theorems, simply saying that two concepts or formulations are ’similar’ neglects to describe the signs that make us see the similarities. Thus, appealing to similarity cannot account for the semiotic processes of the practice that produces analogy of ideal theory to number theory. The case study aims to unfold Haffner’s appeals to similarity through detailed descriptions of representations that Dedekind constructs and uses in [1871]. 
Dedekind is often pointed to as a key influence in shaping present mathematical textual practices and a considerable part of this influence stems from his development of ideal theory, of which [Dedekind 1871] is the first published version. Therefore, apart from being interesting in their own right, a better understanding of the semiotic processes of this text could contribute to our views on both present mathematical textual practices and late-modern history of mathematics. References: Dedekind, R. (1871). Über die Komposition der binären quadratischen Formen. In [Dedekind 1969 [1932]], vol. III. 223-262. Dedekind, R. (1969) [1932]. Gesammelte mathematische Werke. Vol I-III. Fricke R., Noether E. & Ore O. (Eds.). Bronx, N.Y: Chelsea. Dedekind, R. and Weber, H. (1882). Theorie der algebraischen Funktionen einer Veränderlichen. J. Reine Angew. Math. 92, 181–290. In [Dedekind 1969 [1932]], vol. I. 238–351. Haffner, E. (2017). Strategical Use(s) of Arithmetic in Richard Dedekind and Heinrich Weber's Theorie Der Algebraischen Funktionen Einer Veränderlichen. Historia Mathematica, vol. 44, no. 1, 31–69. |

18:30 | Text-driven variation as a vehicle for generalisation, abstraction, proofs and refutations: an example about tilings and Escher within mathematical education. PRESENTER: Karl Heuer ABSTRACT. In this talk we want to investigate to what extent mathematical theory building can be understood (or rationally reconstructed) through a text-level approach. This is of course only a first approximation of the heuristics actually used in mathematical practice, but it already delivers useful insights. As a first model we show how simple syntactical variation of statements can yield new propositions to study. We shall show how this mechanism can be used in mathematical education to develop a more open, i.e. research-oriented, experience for participating students. Of course, not all such variations lead to fruitful fields of study, and several of them are most likely not even meaningful. We develop a quasi-evolutionary account to explain why this variational approach can help to develop an understanding of how new definitions replace older ones and how mathematicians choose axiomatisations and theories to study. We shall give a case study within the subject of ‘tilings’. There we begin with the basic question of which regular (convex) polygons can be used to construct a tiling of the plane; a question in principle accessible with high school mathematics. Small variations of this problem quickly lead to new sensible fields of study. For example, allowing combinations of different regular (convex) polygons leads to Archimedean tilings of the plane, and introducing the notion of ‘periodicity’ paves the way for questions related to so-called Penrose tilings. It is easy to get from a high school problem to open mathematical research by introducing only a few notions and syntactical variations of proposed questions. 
Additionally, we shall offer a toy model of the heuristics used in actual mathematical practice, in which a mathematical question is structured and its parts are varied on a syntactical level. This first step is accompanied by a semantic check to avoid category mistakes. By a quasi-evolutionary account, the most fruitful questions get studied, which leads to the development of new mathematical concepts. If time permits, we show that this model can also be applied to newly emerging fields of mathematical research. This talk is based on work used for enrichment programs for mathematically gifted children and on observations from working mathematicians. |
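The case study's starting question, which regular convex polygons tile the plane, really is accessible with high school mathematics: at each vertex, some whole number k of copies must fit together, so k times the interior angle must equal 360 degrees. A small sketch (our own illustration, not part of the talk):

```python
# A regular n-gon tiles the plane edge-to-edge iff an integer number k of
# copies fits around a vertex: k * interior_angle == 360 degrees.
# Interior angle of a regular n-gon: (n - 2) * 180 / n. Since the interior
# angle is below 180, an integer ratio is automatically at least 3.

def tiles_plane(n):
    interior = (n - 2) * 180 / n
    return (360 / interior).is_integer()

print([n for n in range(3, 13) if tiles_plane(n)])  # [3, 4, 6]
```

Only the triangle, square, and hexagon survive, and syntactic variations of the question (mixing polygon types, dropping regularity) open exactly the richer fields the abstract mentions.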

18:00 | Interactive Turing-complete logic via game-theoretic semantics ABSTRACT. We define a simple extension of first-order logic via introducing self-referentiality operators and domain extension quantifiers. The new extension quantifiers allow us to insert new points into model domains and also to modify relations by adding individual tuples. The self-referentiality operators are variables ranging over subformulas of the same formula in which they are used, and they can be given a simple interpretation via game-theoretic semantics. We analyse the conceptual properties of this logic, especially the way it links games and computation in a one-to-one fashion. We prove that this simple extension of first-order logic is Turing-complete in the sense of descriptive complexity: for every Turing machine there exists an equivalent formula, and vice versa, so the logic captures exactly the expressive power of Turing machines. We also discuss how this logic can describe classical compass and straightedge constructions of geometry in a natural way. In classical geometry, the mechanisms of modifying constructions are analogous to the model modification steps realizable in the Turing-complete logic. Also the self-referentiality operators lead to recursive processes omnipresent in everyday mathematics. The logic has a very simple translation into natural language, which we also discuss. |
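For readers unfamiliar with game-theoretic semantics, the classical verifier/falsifier game for plain first-order logic over a finite model can be sketched as below. This toy evaluator covers only standard connectives and quantifiers, not the self-referentiality operators or domain extension quantifiers introduced in the talk, and the tuple encoding of formulas is our own assumption.

```python
# Formulas: ('atom', pred, var), ('not', f), ('and', f, g), ('or', f, g),
# ('ex', x, f), ('all', x, f). Truth = the verifier (Eloise) has a winning
# strategy in the evaluation game; negation swaps the players' roles.

def verifier_wins(formula, model, assignment, verifying=True):
    op = formula[0]
    if op == "atom":
        _, pred, var = formula
        holds = assignment[var] in model[pred]
        return holds if verifying else not holds
    if op == "not":  # the verifier and falsifier swap roles
        return verifier_wins(formula[1], model, assignment, not verifying)
    if op in ("and", "or"):
        # On 'or' the verifier picks a disjunct; on 'and' the falsifier picks.
        proponent_picks = (op == "or") == verifying
        subs = (verifier_wins(f, model, assignment, verifying) for f in formula[1:])
        return any(subs) if proponent_picks else all(subs)
    _, var, body = formula
    # On 'ex' the verifier picks a witness; on 'all' the falsifier picks.
    picks = (verifier_wins(body, model, {**assignment, var: d}, verifying)
             for d in model["domain"])
    return any(picks) if (op == "ex") == verifying else all(picks)

model = {"domain": {1, 2, 3}, "Even": {2}}
# 'exists x. Even(x)' is true: the verifier can pick the witness 2.
print(verifier_wins(("ex", "x", ("atom", "Even", "x")), model, {}))  # True
```

The paper's extension adds moves to this game that create new domain points and new tuples, which is what makes the link to computation possible.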

18:30 | Rationality principles in pure coordination games PRESENTER: Raine Rönnholm ABSTRACT. We analyse so-called pure win-lose coordination games (WLC games) in which all players receive the same payoff, either 1 ("win") or 0 ("lose"), after every round. We assume that the players cannot communicate with each other and thus, in order to reach their common goal, they must make their choices based on rational reasoning only. We study various principles of rationality that can be applied in these games. We say that a WLC game G is solvable with a principle P if winning G is guaranteed when all players follow P. We observe that there are many natural WLC games which are not solvable in a single round by any principle of rationality, but which become solvable in the repeated setting, when the game can be played several times until coordination succeeds. Based on our analysis of WLC games, we argue that it is very hard to characterize which principles are "purely rational" - in the sense that all rational players should follow such principles in every WLC game. |
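The solvability notion used above can be made concrete with a toy sketch: a two-player WLC game is a set of options together with a set of winning choice profiles, and a principle is a rule selecting which options a rational player may pick. The "pick the unique maximal option" principle below is a made-up stand-in for the principles analysed in the talk.

```python
from itertools import product

def solvable_with(options, winning, principle):
    """A WLC game is solvable with `principle` if every profile in which
    both players follow the principle is a winning profile."""
    allowed = principle(options)
    return all(profile in winning for profile in product(allowed, repeat=2))

def pick_unique_max(options):
    # A focal-point style principle: choose the uniquely distinguished
    # (here: maximal) option, so both players are forced onto the same pick.
    return {max(options)}

options = {1, 2, 3}
# Matching choices win: the players must pick the same option.
winning = {(a, b) for a, b in product(options, repeat=2) if a == b}

print(solvable_with(options, winning, pick_unique_max))  # True: both pick 3
```

With a weaker principle that allows every option, the same game is not solvable in one round, since mismatched profiles like (1, 2) lose; this is the kind of game that only becomes solvable in the repeated setting.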

18:00 | How Pragmatism Can Prevent the Abuses of Post-truth Champions PRESENTER: Daniel Labrador-Montero ABSTRACT. How to establish whether a sentence, a statement, or a theory is true has become a problem of public relevance. The claim that scientific knowledge is not a privileged way of understanding reality and that, therefore, there are no good reasons for using science as the basis for certain decisions has become widespread. Even prevalent relativistic conceptions of science, like Fuller’s, defend the acceptance of post-truth: “a post-truth world is the inevitable outcome of greater epistemic democracy. (…) once the instruments of knowledge production are made generally available—and they have been shown to work—they will end up working for anyone with access to them. This in turn will remove the relatively esoteric and hierarchical basis on which knowledge has traditionally acted as a force for stability and often domination.” (Fuller, 2016: 2-3). However, epistemic democracy does not necessarily lead to the acceptance of that position. As the editor of Social Studies of Science has pointed out: “Embracing epistemic democratization does not mean a wholesale cheapening of technoscientific knowledge in the process. (...) the construction of knowledge (...) requires infrastructure, effort, ingenuity and validation structures.” (Sismondo, 2016: 3). Post-truth, defined as “what I want to be true is true” in a post-truth culture (Wilber, 2017, p. 25), defends a voluntaristic notion of truth, and there is nothing democratic in individualistic and whimsical decisions about the truthfulness of a statement. For radical relativism, scientific consensus is reached by the same kind of mechanisms as in other social institutions, i.e. by networks that manufacture the “facts” using political negotiation, or by other forms of domination. 
However, the notion of truth that relativists (for instance Zackariasson, 2018: 3) are attacking is actually a straw man: the “God’s eye point of view” that very few philosophers or scientists defend any more. We suggest that an alternative to post-truth arguments, one that at the same time suggests mechanisms for developing a real epistemic democracy, is the pragmatist notion of truth. This could seem controversial if we have in mind the debunked and popularized version of pragmatism (the usefulness of an assertion is the only thing that counts in favour of its being true). Nevertheless, both for classical pragmatists such as Dewey and for neo-pragmatists (e.g. Kitcher, 2001), the construction of scientific knowledge, with all its limitations, is the best way of reaching, if not the Truth, at least partial and rectifiable, yet reliable and well-built, knowledge. REFERENCES Fuller, S. (2016). Embrace the inner fox: Post-truth as the STS symmetry principle universalized. Social Epistemology Review and Reply Collective. Kitcher, P. (2001). Science, Truth, and Democracy. Oxford U.P. Sismondo, S. (2016). Post-truth? Social Studies of Science, 47(1), 3–6. Wilber, K. (2017). Trump and a Post-Truth World. Shambhala. Zackariasson, U. (2018). Introduction: Engaging Relativism and Post-Truth. In M. Stenmark et al. (eds.), Relativism and Post-Truth in Contemporary Society. Springer. |

18:30 | Deference as Analytic Technique and Pragmatic Process ABSTRACT. The goal of the paper is to consider what determines deference in cases of inaccurate and untested knowledge. A deferential concept is a sort of concept in play when people use a public word without fully understanding what it typically conveys (Recanati, F., "Modes of Presentation: Perceptual vs. Deferential", 2001). What happens on the level of the everyday use of language? There is a link between social stimulation towards certain uses of words, social learning, and various "encouragements for objectivity" that correct everything inconsistent with the generally accepted use of words and meanings (Quine, Word and Object, 1960), on the one hand, and deference, which reveals the chain of these uses, distortions and refinements leading back to some problematic first use of the term, on the other. When a philosopher performing a conceptual analysis affirms a causal relationship but does not care to analyse the cause, relying instead on specialists, we say such a philosopher applies the analytic technique of 'Gricean deference' (Cooper, W. E., "Gricean Deference", 1976). This technique allows the philosopher to be free from any responsibility for explaining the nature of the causes. From this point of view, the philosopher at a certain point in her analysis defers to a specialist in the relevant science, competent to talk about the causal relationships. 'Deferentially' means relying on someone's thought, opinion, or knowledge. The well-known post-truth phenomenon is interpreted as a result of a deferential attitude to information, knowledge and various data concerning reality. Along with linguistic and epistemic deference and their forms of default and intentional deference (Woodfield, A., Reference and Deference, 2000; Stojanovic, De Brabanter, Fernandez, Nicolas, Deferential Utterances, 2005), the so-called "backfire effect" will be considered. 
The "backfire effect" names the phenomenon in which "misinformed people, when confronted with mistakes, cling even more fiercely to their incorrect beliefs" (Tristan Bridges, Why People Are So Averse to Facts, "The Society Pages", http://thesocietypages.org). There is a problem of determining within what approach one could correlate instances of falsity-by-misunderstanding with cases in which a speaker openly prefers to use expressions the way someone else in the nearest linguistic community does, following custom or authority. Being a pragmatic process, deference is responsible for the lack of transparency in meta-representations (Recanati, op. cit.). So, what determines deference lies in the basic concepts of the theory of opacity, in meta-representations, and in the mechanism of deference in connection with opacity and meta-representations. Last, but not least, in this sequence is the application of the mechanism of deference to problems of imperfect mastery and unconsciously deferential thoughts (Fernandez, N.V., Deferential Concepts and Opacity). |

18:00 | Incompleteness-based formal models for the epistemology of complex systems ABSTRACT. The thesis I intend to argue is that formal approaches to epistemology deriving from Goedel's incompleteness theorem, as developed for instance by Chaitin, Doria and da Costa (see [3]), even if originally conceived to solve decision problems in the physical and social sciences (e.g. the decision problem for chaotic systems), could also be used to address problems regarding consistency and incompleteness of sets of beliefs, and to define formal models for the epistemology of complex systems and for the “classical” systemic-relational epistemology of psychology, such as Gregory Bateson’s epistemology (see [2]) and Piaget’s Genetic Epistemology (see for instance [4]). More specifically, following the systemic epistemology of psychology, there are two different classes of learning and change processes for cognitive systems: a “quantitative” learning (the cognitive system acquires information without changing the rules of reasoning) and a “qualitative” learning (an adaptation process which leads the system to a re-organization). Therefore, just as in the incompleteness theorems the emergence of an undecidable sentence in a logical formal system leads to the definition of a chain of formal systems, obtained by adjoining as axioms propositions that are undecidable at previous levels, in the same way the emergence of an undecidable situation for a cognitive system could lead to the emergence of “new ways of thinking”. Thus, a (systemic) process of change (a process of “deutero-learning”) could be interpreted as a process that leads the cognitive organization of the subject to a different level of complexity by the creation of a hierarchy of abstract relations between concepts, or by the creation of new sets of rules of reasoning and behaving (where the process of learning is represented by a sequence of learning stages, e.g. 
by sequences of type-theoretically ordered sets, representing information/propositions and rules of reasoning/rules of inference). I will propose two formal models for qualitative change processes in cognitive systems and complex systems: • The first, from set theory, is based on Barwise’s notion of partial model and model of Liar-like sentences (see [1]). • The second, from proof theory and algebraic logic, is based on the idea that a psychological change process (the development of new epistemic strategies) is a process starting from a cognitive state s_0 and arriving at a cognitive state s_n, possibly assuming intermediate cognitive states s_1 , . . . , s_(n−1): developing some of the research contained in [5] and [6], I will propose a model of these processes based on the notion of paraconsistent consequence operator. I will show that these two different formal models are deeply connected and mutually translatable. References [1] Barwise, J. and Moss, L., Vicious circles: on the mathematics of non-well-founded phenomena, CSLI Lecture Notes, 60, Stanford, 1993. [2] Bateson, G., Steps to an ecology of mind, Paladin Book, New York, 1972. [3] Chaitin, G., Doria, F.A., da Costa, N.C., Goedel’s Way: Exploits into an undecidable world, CRC Press, Boca Raton, 2011. [4] Piaget, J. and Garcia, R., Toward a logic of meaning, Lawrence Erlbaum Associates, Hillsdale, 1991. [5] Van Lambalgen, M. and Hamm, F., The Proper Treatment of Events, Blackwell, London, 2004. [6] Van Lambalgen, M. and Stenning, K., Human reasoning and Cognitive Science, MIT Press, Cambridge, 2008. |

18:30 | A Meta-Logical Framework for Philosophy of Science PRESENTER: Maria Dimarogkona ABSTRACT. In the meta-theoretic study of science we can observe today a tendency towards logical abstraction based on the use of abstract model theory [1], where logical abstraction is understood as independence from any specific logical system. David Pearce’s idea of an abstract semantic system in 1985 [2] was characterised by this tendency, and so was the idea of translation between semantic systems, which is directly linked to reduction between theories [3]. A further step towards logical abstraction was the categorical approach to scientific theories suggested by Halvorson and Tsementzis [4]. Following the same direction we argue for the use of institution theory - a categorical variant of abstract model theory developed in computer science [5] - as a logico-mathematical modeling tool in formal philosophy of science. Institutions offer the highest level of abstraction currently available: a powerful meta-theory formalising a logical system relative to a whole category of signatures, or vocabularies, while subscribing to an abstract Tarskian understanding of truth (truth is invariant under change of notation). In this way it achieves maximum language-independence. A theory is always defined over some institution in this setting, and we also define the category of all theories over any institution I. Appropriate functors allow for the translation of a theory over I to a corresponding theory over J. Thus we get maximum logic-independence, while the theory remains at all times yoked to some particular logic and vocabulary. To clarify our point we present an institutional approach to the resurgent debate between supporters of the syntactic and the semantic view of scientific theory structure, which currently focuses on theoretical equivalence. 
If the two views are formalized using institutions, it can be proven that the syntactic and the (liberal) semantic categories of theories are equivalent [6][7]. This formal proof supports the philosophical claim that the liberal semantic view of theories is no real alternative to the syntactic view; a claim which is commonly made - or assumed to be true. But it can also - as a meta-logical equivalence - support another view, namely that there is no real tension between the two approaches, provided there is an indispensable semantic component in the syntactic account. [1] Boddy Vos (2017). Abstract Model Theory for Logical Metascience. Master's Thesis. Utrecht University. [2] Pearce David (1985). Translation, Reduction and Equivalence: Some Topics in Inter-theory Relations. Frankfurt: Verlag Peter Lang GmbH. [3] Pearce David, and Veikko Rantala (1983a). New Foundations for Metascience. Synthese 56(1): pp. 1–26. [4] Halvorson Hans, and Tsementzis Dimitris (2016). Categories of scientific theories. Preprint. PhilSci Archive: http://philsciarchive.pitt.edu/11923/2/Cats.Sci.Theo.pdf [5] Goguen Joseph, and Burstall Rod (1992). Institutions: abstract model theory for specification and programming. J. ACM 39(1), pp. 95-146. [6] Angius Nicola, Dimarogkona Maria, and Stefaneas Petros (2015). Building and Integrating Semantic Theories over Institutions. Workshop Thales: Algebraic Modeling of Topological and Computational Structures and Applications, pp. 363-374. Springer. [7] Dimarogkona Maria, Stefaneas Petros, and Angius Nicola. Syntactic and Semantic Theories in Abstract Model Theory. In progress. |
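The "Tarskian invariance of truth under change of notation" mentioned above is, concretely, the satisfaction condition of institution theory. The following is a toy propositional sketch of ours, far from the full categorical definition in Goguen and Burstall: signatures are sets of atoms, a signature morphism renames atoms, sentences are formulas, models are valuations, and the satisfaction condition says a model satisfies a translated sentence iff its reduct satisfies the original sentence.

```python
def translate(phi, sigma):
    """Apply a signature morphism (a dict on atoms) to a formula.
    Formulas: atom (str), ('not', f), ('and', f, g), ('or', f, g)."""
    if isinstance(phi, str):
        return sigma[phi]
    return (phi[0],) + tuple(translate(p, sigma) for p in phi[1:])

def reduct(model, sigma):
    """Reduce a Sigma'-model (a valuation) to a Sigma-model along
    sigma: Sigma -> Sigma' (model reduction goes the opposite way)."""
    return {p: model[q] for p, q in sigma.items()}

def satisfies(model, phi):
    """Tarskian satisfaction for the toy formula language."""
    if isinstance(phi, str):
        return model[phi]
    if phi[0] == "not":
        return not satisfies(model, phi[1])
    if phi[0] == "and":
        return satisfies(model, phi[1]) and satisfies(model, phi[2])
    return satisfies(model, phi[1]) or satisfies(model, phi[2])

# Satisfaction condition on an example: M' |= sigma(phi)  iff  M'|sigma |= phi
sigma = {"p": "x", "q": "y"}            # morphism {p, q} -> {x, y}
phi = ("and", "p", ("not", "q"))        # a sentence over {p, q}
model_prime = {"x": True, "y": False}   # a model over {x, y}
lhs = satisfies(model_prime, translate(phi, sigma))
rhs = satisfies(reduct(model_prime, sigma), phi)
```

In the real definition the signatures form a category and translation/reduction are functors; the sketch only shows why truth is invariant under change of notation.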

19:00 | A formal axiomatic epistemology theory and the controversy between Otto Neurath and Karl Popper about philosophy of science ABSTRACT. The controversy mentioned in the title related exclusively to science understood as empirical cognition of the world as the totality of facts. Obviously, verifiability of knowledge implies that it is scientific. Popper developed an alternative to verificationism, namely falsificationism, emphasizing that falsifiability of knowledge implies that it is scientific. Neurath criticized Popper for being fixed exclusively on falsifiability of knowledge as the criterion of its being scientific. Neurath insisted that there is a variety of qualitatively different forms of empirical knowledge, and that this variety is not reducible to falsifiable knowledge. In my opinion the discrepancy between Popper's and Neurath's philosophies of science is well modeled by the axiomatic epistemology theory, as according to this theory it is possible that knowledge is neither verifiable nor falsifiable but nevertheless empirical. The notion "empirical knowledge" is precisely defined by the axiom system considered, for instance, in [XXXXXXXXX XXXX]. The symbols Kq, Aq, Eq stand, respectively, for: "agent knows that q"; "agent a-priori knows that q"; "agent has experiential knowledge that q". The epistemic modality "agent empirically knows that q" is defined by axiom 4 given below. In that axiom, Sq represents the verifiability principle; ¬□q represents the falsifiability one; and ¬(q ↔ Pq) represents an alternative meant by Neurath but missed by Popper. The symbol Pq stands for "it is provable that q". Thus, according to Gödel's theorems, arithmetic-as-a-whole is empirical knowledge. The theory is consistent. A proof of its consistency is the following. Let the meta-symbols α and β of the axiom schemes be substituted by the object-formula q. In this case the axiom schemes are represented by the following axioms, respectively. 1: Aq → (□q → q). 2: Aq → (□(q → q) → (□q → □q)). 3: Aq ↔ (Kq & (□q & ¬Sq & (q ↔ Pq))). 4: Eq ↔ (Kq & (¬□q ∨ Sq ∨ ¬(q ↔ Pq))). The interpretation * is defined as follows: (¬ω)* = ¬(ω*) for any formula ω; (ω ∘ π)* = (ω* ∘ π*) for any formulae ω and π and any classical binary connective ∘; q* = false; (Aq)* = false; (Kq)* = true; (Eq)* = true; (□q)* = true; (Sq)* = true; (q → q)* = true; (Pq)* = true; (q ↔ Pq)* = false (in accordance with Gödel's theorems). Under * all the axioms are true. Hence * yields a model of the theory, and the theory is consistent. References XXXXXXXXX XXXX |
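The consistency argument in this abstract can be checked mechanically. The sketch below is our reconstruction and should be read with care: the abstract's logical connectives did not survive transcription, so the exact shapes assumed here for the defining axioms (a-priori knowledge Aq as known, necessary, non-sensory and provability-equivalent; empirical knowledge Eq as known while failing at least one of those conditions) are an assumption, as are the truth values assigned to □q and Pq.

```python
# Evaluate the assumed axioms 3 and 4 under the interpretation described
# in the abstract: q false; Aq false; Kq, Eq, Sq, Pq true; box_q true;
# (q <-> Pq) false, reflecting Goedel's theorems (truth and provability
# come apart in arithmetic).

def iff(a, b):
    """Classical material equivalence on booleans."""
    return a == b

q, Aq, Kq, Eq = False, False, True, True
box_q, Sq, Pq = True, True, True
q_iff_Pq = iff(q, Pq)   # false, as the abstract requires

# Axiom 3 (assumed shape): Aq <-> (Kq & (box_q & not Sq & (q <-> Pq)))
ax3 = iff(Aq, Kq and (box_q and (not Sq) and q_iff_Pq))

# Axiom 4 (assumed shape): Eq <-> (Kq & (not box_q or Sq or not (q <-> Pq)))
ax4 = iff(Eq, Kq and ((not box_q) or Sq or (not q_iff_Pq)))
```

Under this valuation both equivalences evaluate to true, which is exactly the model-existence step of the consistency proof: a single valuation making every axiom true suffices.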

18:00 | Rethinking the transformation of classical science in technoscience: ontological, epistemological and institutional shifts PRESENTER: Igor Frolov ABSTRACT. The key tendency in the development of science in present-day society is that scientific knowledge produced in academia by scientists and the academic community is losing its privileged position; moreover, science as an institution is losing its monopoly on the production of knowledge that is considered powerful, valuable and effective. This process of deep transformation has been partially reflected in such concepts as technoscience, post-academic science and transdisciplinarity, and can be found in such practices as the deprofessionalization of scientific knowledge, citizen science (expertise), and informal science exchange in social media. In our presentation we aim to put forward for consideration some ideas discussing not so much the causes as the purposes and effects of this transformation - epistemological, institutional and social. In particular we will focus on the new subject (entity, person) of knowledge and its position in society, and on the crucial change in the mechanisms of scientific knowledge production that may lead to the replacement of scientific knowledge by technologies (complex machines, techniques, skills, tools, methods) and innovations. The key theses we will develop in our presentation: 1. Basically, the concepts of technoscience, post-academic science and transdisciplinarity register and show various aspects of the transformation of science into something new, which we continue to call "science" only for institutional and cultural reasons. Indeed, science is a project of the modern era, artificially extended by the historization of scientific rationality; and apparently it has come to its end, as any historical formation does.
It seems that "technoscience" is probably the best general term (though subject to a number of restrictions) to denote what we still call "science" but what, in fact, is not science anymore, even as it consistently takes over the place and position of science in society. 2. The term "technoscience" emphasizes the entanglement of science and technology, and it was mainly introduced to distinguish a "new" type of scientific activity from "traditional" ones, with a different epistemic interest, producing different objects with a different ontological status. Yet for us it is important that the concept enables us to capture the drastic changes in the means of production of knowledge and in its organization. We claim that scientific knowledge is gradually turning into a component of innovative development, and this means that scientific knowledge and academic science as an institution are becoming conformed to the principles and rules of functioning of other social spheres - economics, finance, and industry. Government, business and society require science to produce not true, veritable knowledge that understands and explains the (real) world, but information and efficient "knowledge" for creating a world, a specific environment, and artefacts. Indeed, we can see that the key functions of the natural sciences are crucially changed as they become part of the circular flow of capital: the main task is the production of potentially commercializable findings which can be constantly reinvested with the purpose of generating innovation. At the same time, "innovation" has shifted from a new idea in the form of a device, to the provision of more effective products (technologies) available to markets, to a cycle of extended capital development which uses new technology as a permanent resource for its growth. 3.
Apparently, the development of scientific knowledge will go in the direction of further synthesis with technologies, the strengthening of the latter component, and, in part, the substitution of scientists by machines and of scientific knowledge by technologies in two forms: producing artefacts (artificial beings and substances) and machines, as well as producing skills (techniques, algorithms) for working with information or for presenting information. We can already see this clearly in the explosion of emerging technosciences (e.g. artificial intelligence, nanotechnology, biomedicine, systems biology and synthetic biology), or in the intervention of neuroscience, based on the wide use of fMRI brain scans, into various areas of human practice and cognition, which results in the formation of the so-called "brain culture". In general, the transformation of science into "technoscience" implies that the production of information, technologies and innovations is its key goal. Thus, we can claim that the task of science is considerably narrowed, as this implies the loss of scientific knowledge's key function of providing the dominant world-view (Weltanschauung). This loss may provoke other significant transformations. |

Ends 19:00.