
Organizer: Hanne Andersen

Former ERC panel member Atocha Aliseda will present her experiences from serving on the panel and, on this basis, provide advice to future applicants.

Grant recipients Tarja Knuuttila and Barbara Osimani will participate in the session and will answer questions from the audience on their experiences as applicants and recipients.

Organizers: Lorenzo Galeotti and Philipp Lücke

The study of sets of real numbers and their structural properties is one of the central topics of contemporary set theory and the focus of the set-theoretic disciplines of descriptive set theory and set theory of the reals. The Baire space consists of all functions from the set of natural numbers to itself. Since this space is Borel-isomorphic to the real line and has a very accessible structure, it is one of the main tools of descriptive set theory. Because a great variety of mathematical objects can be canonically represented as subsets of Baire space, techniques from descriptive set theory and set theory of the reals can be applied throughout mathematics. These applications are limited to the study of objects of cardinality at most the size of the continuum. Therefore, the question whether similar methods can be applied in the analysis of larger objects arose naturally in several areas of mathematics and led to a strongly increasing interest in the study of higher Baire spaces, i.e., higher analogues of Baire space which consist of all functions from a given uncountable cardinal to itself.
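The two spaces at issue can be stated compactly (standard set-theoretic definitions; the basic-open-set convention shown for the higher case is one common choice):

```latex
% Classical Baire space: all functions from the natural numbers to
% themselves, with basic open sets determined by finite initial segments s.
\[
  \omega^{\omega} = \{\, f \mid f \colon \omega \to \omega \,\}, \qquad
  [s] = \{\, f \in \omega^{\omega} \mid s \subseteq f \,\}, \quad
  s \in \omega^{<\omega}.
\]
% Higher Baire space at an uncountable cardinal kappa: all functions from
% kappa to itself, with basic open sets given by partial functions of
% length less than kappa.
\[
  \kappa^{\kappa} = \{\, f \mid f \colon \kappa \to \kappa \,\}, \qquad
  [s] = \{\, f \in \kappa^{\kappa} \mid s \subseteq f \,\}, \quad
  s \in \kappa^{<\kappa}.
\]
```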

In recent years, an active and steadily growing community of researchers has initiated the development of higher analogues of descriptive set theory and set theory of the reals for higher Baire spaces, turning this area of research into one of the hot topics of set theory. Results in this area provide a rich and independent theory that differs significantly from the classical setting and gives new insight into the nature of higher cardinals. The proofs of these results combine concepts and techniques from different areas of set theory: combinatorics, forcing, large cardinals, inner models and classical descriptive set theory. Moreover, they also use methods from other branches of mathematical logic, like model theory and the study of strong logics. In the other direction, these results have been applied to problems in other fields of mathematical logic and pure mathematics, like the classification of non-separable topological spaces, the study of large cardinals and Shelah's classification theory in model theory.

These developments have been strongly supported by regular meetings of the research community. The community met first at the Amsterdam Set Theory Workshop in 2014, then at a satellite workshop to the German mathematics congress in Hamburg in 2015, at a workshop at the Hausdorff Center for Mathematics in Bonn in 2016, and at the KNAW Academy Colloquium in Amsterdam in 2018.

The increased significance of the study of higher Baire spaces has been reflected in these meetings by both strongly growing numbers of attendees and a steadily increasing percentage of participants from other fields of set theory. The Symposium on higher Baire spaces will provide the opportunity to reunite this community a year after the last meeting.

Organizers: Sara Negri and Peter Schuster

Glivenko’s theorem from 1929 says that if a propositional formula is provable in classical logic, then its double negation is provable in intuitionistic logic. Soon after, Gödel extended this to predicate logic, which requires the double negation shift. As is well known, with the Gödel-Gentzen negative translation in place of double negation one can even get by with minimal logic. Several related proof translations saw the light of day, such as Kolmogorov’s and Kuroda’s.
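The translations mentioned above are entirely mechanical, so they can be sketched as code. The following is a minimal illustration, not taken from the symposium material: formulas are encoded as nested tuples (an assumption made here purely for the example), and the Gödel-Gentzen translation and the propositional Glivenko translation are applied recursively.

```python
# Sketch of the Goedel-Gentzen negative translation on a small formula AST.
# Formulas are nested tuples: ('atom', name), ('not', A), ('and', A, B),
# ('or', A, B), ('imp', A, B), ('forall', x, A), ('exists', x, A).
# This AST encoding is an illustrative assumption, not from the abstracts.

def neg(a):
    return ('not', a)

def goedel_gentzen(f):
    """Negative translation N: classically provable f has N(f) provable
    already in minimal logic."""
    tag = f[0]
    if tag == 'atom':
        return neg(neg(f))                      # atoms get double-negated
    if tag == 'not':
        return neg(goedel_gentzen(f[1]))
    if tag in ('and', 'imp'):                   # conjunction and implication
        return (tag, goedel_gentzen(f[1]), goedel_gentzen(f[2]))
    if tag == 'or':                             # A or B ~> not(not A and not B)
        return neg(('and', neg(goedel_gentzen(f[1])),
                           neg(goedel_gentzen(f[2]))))
    if tag == 'forall':
        return ('forall', f[1], goedel_gentzen(f[2]))
    if tag == 'exists':                         # exists x A ~> not forall x not A
        return neg(('forall', f[1], neg(goedel_gentzen(f[2]))))
    raise ValueError(f'unknown connective: {tag}')

def glivenko(f):
    """Glivenko (propositional): classical provability of f is equivalent
    to intuitionistic provability of its double negation."""
    return neg(neg(f))

# Example: the law of excluded middle, P or not P.
lem = ('or', ('atom', 'P'), ('not', ('atom', 'P')))
print(goedel_gentzen(lem))
```

Kolmogorov's translation (double-negating every subformula) and Kuroda's variant differ only in where the double negations are inserted, so they fit the same recursive pattern.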

Glivenko’s theorem thus stood right at the beginning of a fundamental change of perspective: that classical logic can be embedded into intuitionistic or minimal logic, rather than the latter being a diluted version of the former. Together with the revision of Hilbert’s Programme ascribed to Kreisel and Feferman, this has led to the quest for the computational content of classical proofs, today culminating in agile areas such as proof analysis, dynamical algebra, program extraction from proofs and proof mining. The considerable success of these approaches suggests that classical mathematics will eventually prove much more constructive than is still widely thought today.

Important threads of current research include the following:

1. Exploring the limits of Barr’s theorem about geometric logic

2. Program extraction in abstract structures characterised by axioms

3. Constructive content of classical proofs with Zorn’s Lemma

4. The algorithmic meaning of programs extracted from proofs

11:00 | On the historical relevance of Glivenko's translation from classical into intuitionistic logic: is it conservative and contextual? PRESENTER: Itala Maria Loffredo D'Ottaviano ABSTRACT. For several years we have studied interrelations between logics by analysing translations between them. The first known ‘translations’ concerning classical logic, intuitionistic logic and modal logic were presented by Kolmogorov (1925), Glivenko (1929), Lewis and Langford (1932), Gödel (1933) and Gentzen (1933). In 1999, da Silva, D’Ottaviano and Sette proposed a very general definition for the concept of translation between logics, logics being characterized as pairs constituted by a set and a consequence operator, and translations between logics being defined as maps that preserve consequence relations. In 2001, Feitosa and D’Ottaviano introduced the concept of conservative translation, and in 2009 Carnielli, Coniglio and D’Ottaviano proposed the concept of contextual translation. In this paper, providing some brief historical background, we will discuss the historical relevance of the ‘translation’ from classical logic into intuitionistic logic introduced by Glivenko in 1929, and will show that his interpretation is a conservative and contextual translation. References CARNIELLI, W. A., CONIGLIO, M. E., D’OTTAVIANO, I. M. L. (2009) New dimensions on translations between logics. Logica Universalis, v. 3, p. 1-19. da SILVA, J. J., D’OTTAVIANO, I. M. L., SETTE, A. M. (1999) Translations between logics. In: CAICEDO, X., MONTENEGRO, C. H. (Eds.) Models, algebras and proofs. New York: Marcel Dekker, p. 435-448. (Lecture Notes in Pure and Applied Mathematics, v. 203) FEITOSA, H. A., D’OTTAVIANO, I. M. L. (2001) Conservative translations. Annals of Pure and Applied Logic. Amsterdam, v. 108, p. 205-227. GENTZEN, G. (1936) Die Widerspruchsfreiheit der reinen Zahlentheorie. Mathematische Annalen, v. 112, p. 493-565. Translation into English in Gentzen (1969, Szabo, M. E. (Ed.)).
GENTZEN, G. (1969) On the relation between intuitionist and classical arithmetic (1933). In: Szabo, M. E. (ed.) The Collected Papers of Gerhard Gentzen, p. 53-67. Amsterdam: North-Holland. GLIVENKO, V. (1929) Sur quelques points de la logique de M. Brouwer. Académie Royale de Belgique, Bulletins de la Classe de Sciences, s. 5, v. 15, p. 183-188. GÖDEL, K. (1986) On intuitionistic arithmetic and number theory (1933). In: FEFERMAN, S. et alii (Ed.) Collected works. Oxford: Oxford University Press, p. 287-295. GÖDEL, K. (1986) An interpretation of the intuitionistic propositional calculus (1933). In: FEFERMAN, S. et alii (Ed.) Collected works. Oxford: Oxford University Press, p. 301-303. KOLMOGOROV, A. N. (1977) On the principle of excluded middle (1925). In: van HEIJENOORT, J. (Ed.) From Frege to Gödel: a source book in mathematical logic 1879-1931. Cambridge: Harvard University Press, p. 414-437. LEWIS, C. I., LANGFORD, C. H. (1932) Symbolic Logic, New York (Reprinted in 1959).

11:30 | A simple proof of Barr’s theorem for infinitary geometric logic ABSTRACT. Geometric logic has gained considerable interest in recent years: contributions and application areas include structural proof theory, category theory, constructive mathematics, modal and non-classical logics, and automated deduction. Geometric logic is readily defined by stating the structure of its axioms. A coherent implication (also known in the literature as a “geometric axiom”, a “geometric sentence”, a “coherent axiom”, a “basic geometric sequent”, or a “coherent formula”) is a first-order sentence that is the universal closure of an implication of formulas built up from atoms using conjunction, disjunction and existential quantification. The proper geometric theories are expressed in the language of infinitary logic and are defined in the same way as coherent theories except for allowing infinitary disjunctions in the antecedent and consequent. Gentzen’s systems of deduction, sequent calculus and natural deduction, have been considered an answer to Hilbert’s 24th problem in providing the basis for a general theory of proof methods in mathematics that overcomes the limitations of axiomatic systems. They provide a transparent analysis of the structure of proofs that works to perfection for pure logic. When such systems of deduction are augmented with axioms for mathematical theories, many of these strong properties are lost. However, these properties can be regained through a transformation of axioms into rules of inference of a suitable form. Coherent theories are very well placed in this programme; in fact, they can be translated into inference rules in a natural fashion: in the context of a sequent calculus such as G3c [4, 8], special coherent implications taken as axioms can be converted directly [2] into inference rules without affecting the admissibility of the structural rules. This is essential in the quest to apply the methods of structural proof theory to geometric logic.
Coherent implications form sequents that give a Glivenko class [5, 3]. In this case, the result [2], known as the first-order Barr’s Theorem (the general form of Barr’s theorem [1, 9, 6] is higher-order and includes the axiom of choice), states that if each I_i : 0 ≤ i ≤ n is a coherent implication and the sequent I_1, . . . , I_n ⇒ I_0 is classically provable, then it is intuitionistically provable. By these results, the proof-theoretic study of coherent theories gives a general twist to the problem of extracting the constructive content of mathematical proofs. In this talk, proof analysis is extended to all such theories by augmenting an infinitary classical sequent calculus with a rule scheme for infinitary geometric implications. The calculus is designed in such a way as to have all the rules invertible and all the structural rules admissible. An intuitionistic infinitary multisuccedent sequent calculus is also introduced and it is shown to enjoy the same structural properties as the classical calculus. Finally, it is shown that by bringing the classical and intuitionistic calculi close together, the infinitary Barr theorem becomes an immediate result. References [1] Barr, M. Toposes without points, J. Pure and Applied Algebra 5, 265–280, 1974. [2] Negri, S. Contraction-free sequent calculi for geometric theories, with an application to Barr’s theorem, Archive for Mathematical Logic 42, pp 389–401, 2003. [3] Negri, S. Glivenko sequent classes in the light of structural proof theory, Archive for Mathematical Logic 55, pp 461–473, 2016. [4] Negri, S. and von Plato, J. Structural Proof Theory, Cambridge University Press, 2001. [5] Orevkov, V. P. Glivenko's sequence classes, Logical and logico-mathematical calculi 1, Proc. Steklov Inst. of Mathematics 98, pp 147–173 (pp 131–154 in Russian original), 1968. [6] Rathjen, M. Remarks on Barr’s Theorem: Proofs in Geometric Theories, In P. Schuster and D.
Probst (eds.), Concepts of Proof in Mathematics, Philosophy, and Computer Science. De Gruyter, pp 347–374, 2016. [7] Skolem, T. Selected Works in Logic, J. E. Fenstad (ed), Universitetsforlaget, Oslo, 1970. [8] Troelstra, A. S. and Schwichtenberg, H. Basic proof theory (2nd edn.). Cambridge Univ. Press, 2001. [9] Wraith, G., Intuitionistic algebra: some recent developments in topos theory, Proceedings of the International Congress of Mathematicians, Helsinki, pp 331–337, 1978.
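The syntactic shape of a coherent implication described in the abstract above, the universal closure of an implication between formulas built from atoms using conjunction, disjunction and existential quantification, is easy to check mechanically. The following recogniser is an illustrative sketch; the tuple-based formula encoding is an assumption made for the example, not part of the abstract.

```python
# Recogniser for coherent implications: the universal closure of an
# implication whose antecedent and consequent are positive formulas,
# i.e. built from atoms using and / or / exists only.
# The tuple AST ('atom', n), ('and', A, B), ('or', A, B), ('imp', A, B),
# ('exists', x, A), ('forall', x, A) is an illustrative assumption.

def is_positive(f):
    """Positive (geometric) formula: atoms closed under and / or / exists."""
    tag = f[0]
    if tag in ('atom', 'true', 'false'):
        return True
    if tag in ('and', 'or'):
        return is_positive(f[1]) and is_positive(f[2])
    if tag == 'exists':
        return is_positive(f[2])
    return False                       # no ->, not, or forall inside

def is_coherent_implication(f):
    """Strip leading universal quantifiers, then demand positive -> positive."""
    while f[0] == 'forall':
        f = f[2]
    return f[0] == 'imp' and is_positive(f[1]) and is_positive(f[2])

# forall x (P(x) and Q(x) -> exists y R(x, y))  -- a coherent implication
ok = ('forall', 'x', ('imp', ('and', ('atom', 'P'), ('atom', 'Q')),
                      ('exists', 'y', ('atom', 'R'))))
print(is_coherent_implication(ok))   # True
```

The proper geometric case differs only in allowing infinitary disjunctions, which a finite AST can of course only approximate.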

12:00 | Proof theory of infinite geometric theories ABSTRACT. A famous theorem of Barr's yields that geometric implications deduced in classical (infinitary) geometric theories also have intuitionistic proofs. Barr's theorem is of a category-theoretic (or topos-theoretic) nature. In the literature one finds mysterious comments about the involvement of the axiom of choice. In the talk I'd like to speak about the proof-theoretic side of Barr's theorem and aim to shed some light on the AC part.

Organizers: María Del Rosario Martínez Ordaz and Otávio Bueno

In their day-to-day practice, scientists make constant use of defective (false, imprecise, conflicting, incomplete, inconsistent, etc.) information. Philosophical explanations of the toleration of defective information in the sciences are extremely varied, leaving philosophers struggling to identify a single correct approach to this phenomenon. Given that, we adopt a pluralist perspective on this issue in order to achieve a broader understanding of the different roles that defective information plays (and could play) in the sciences.

This symposium is devoted to exploring the connections between scientific pluralism and the handling of inconsistent as well as other types of defective information in the sciences. The main objectives of this symposium are (a) to discuss the different ways in which defective information could be tolerated (or handled) in the different sciences (formal, empirical, social, health sciences, etc.), as well as (b) to analyze the different methodological tools that could be used to explain and handle such types of information.

The symposium is divided into two parts: the first tackles the issue of inconsistency and scientific pluralism. This part includes discussions of the possible connections between the different ways in which scientists tolerate contradictions in the sciences and particular kinds of scientific pluralism. This analysis is extremely interesting in itself, as the phenomenon of inconsistency toleration in the sciences has often been linked to the development of a plurality of formal approaches, but not necessarily to logical or scientific pluralism. In fact, scientific pluralism is independent of inconsistency toleration.

The second part of the symposium is concerned with a pluralistic view on contradictions and other defects. This part is devoted to exploring under which circumstances (if any) it is possible to use the same mechanisms for tolerating inconsistencies and for dealing with other types of defective information. It includes reflections on the scope of different formal methodologies for handling defectiveness in the sciences, considerations on scientific communicative practices and their connections with the use of defective information, and reflections on the different epistemic commitments that scientists have towards defective information.

Reasoning in social context has many important aspects, one of which is reasoning about the strategic abilities of individuals (agents) and groups (coalitions) of individuals to guarantee the achievement of their desired objectives while acting within the entire society. Various logical systems have been proposed for formalising and capturing such reasoning, starting with Coalition Logic (CL) and some extensions of it, introduced in the early 2000s.

Coalition Logic provides a natural but rather restricted perspective: the agents in the proponent coalition are viewed as acting in full cooperation with each other but in complete opposition to all agents outside of the coalition, which are treated as adversaries.

The strategic interaction in real societies is much more complex, usually involving various patterns combining cooperation and competition. To capture these, more expressive and refined logical frameworks are needed.

In this talk I will first briefly present Coalition Logic and then introduce and discuss some more expressive and versatile logical systems, including:

i. the Socially Friendly Coalition Logic (SFCL), enabling formal reasoning about strategic abilities of individuals and groups to ensure achievement of their private goals while allowing for cooperation with the entire society;

ii. the complementary Group Protecting Coalition Logic (GPCL), capturing reasoning about the strategic abilities of the entire society to cooperate in order to ensure achievement of the societal goals, while simultaneously protecting the abilities of individuals and groups within the society to achieve their individual and group goals.

Finally, I will take a more general perspective leading towards a unifying logic-based framework for strategic reasoning in social context, and will associate it with the related concepts of mechanism design (in game theory) and rational synthesis (in computer science).
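The core modality of Coalition Logic described above, that a coalition C can guarantee an outcome whatever the remaining agents do, has a direct brute-force reading that can be sketched in code. The concurrent game model below (agents, action sets, outcome function) is an illustrative assumption, not taken from the talk description.

```python
# One-step coalition ability, in the spirit of Coalition Logic's [C] modality:
# coalition C can enforce `goal` at `state` iff C has a joint action such
# that, for every joint action of the remaining agents, the outcome state
# lies in `goal`. The tiny coordination game below is an illustrative
# assumption made for this sketch.
from itertools import product

def can_enforce(agents, actions, outcome, state, coalition, goal):
    """actions[a] lists agent a's moves; outcome maps (state, joint) -> state."""
    others = [a for a in agents if a not in coalition]
    for c_joint in product(*(actions[a] for a in coalition)):
        choice = dict(zip(coalition, c_joint))

        def leads_to_goal(o_joint):
            move = {**choice, **dict(zip(others, o_joint))}
            return outcome[(state, tuple(move[a] for a in agents))] in goal

        # some coalition strategy must work against ALL opponent responses
        if all(leads_to_goal(o) for o in product(*(actions[a] for a in others))):
            return True
    return False

# Two agents with two moves each; the outcome is 'win' iff they coordinate.
agents = ['1', '2']
actions = {'1': ['a', 'b'], '2': ['a', 'b']}
outcome = {('s', m): ('win' if m[0] == m[1] else 'lose')
           for m in product('ab', repeat=2)}
print(can_enforce(agents, actions, outcome, 's', ['1'], {'win'}))       # False
print(can_enforce(agents, actions, outcome, 's', ['1', '2'], {'win'}))  # True
```

The example shows the adversarial reading the talk contrasts with SFCL and GPCL: agent 1 alone cannot force a win against an adversarial agent 2, while the grand coalition can.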

11:00 | Is There a Hard Problem for the Integrated Information Theory of Consciousness? PRESENTER: Robert Chis-Ciure ABSTRACT. Consciousness seems particularly hard to fit into our scientific worldview when we consider its subjective aspect. Neurobiological theories that account for consciousness starting from its physical substrate seem unable to explain the problem posed by experience. Why are certain neural processes accompanied by certain experiential features, while others are not? This is the Hard Problem (HP) of consciousness, and any theory that attempts to explain this phenomenal feature of reality needs to address it. In this contribution, we discuss how HP affects the Integrated Information Theory (IIT), which today is regarded as one of the most prominent neurobiological theories of consciousness. By adopting a top-down approach from phenomenology to the mechanism of consciousness, IIT starts with five axioms that characterize the essential properties of every experience (i.e. intrinsic existence, composition, information, integration, and exclusion). Then, it infers for each axiom a corresponding postulate that specifies the properties that physical systems must satisfy in order to generate consciousness. Finally, IIT holds that experience is a maximally irreducible conceptual structure (MICS), which means that there is an identity between the phenomenological properties of experience and causal properties of a physical system (Oizumi et al. 2014, Tononi 2015). We propose our own analysis of Chalmers’ Hard Problem (Chalmers 2010, 2018), the Layered View of the Hard Problem, according to which there is a phenomenal layer and a further conceptual layer that together constitute HP. The first makes subjective experience an explanandum, generating the Monolayered Hard Problem (MHP).
The second adds epistemic claims about how an explanation of experience should proceed, given the conceivability of zombie scenarios, thus creating the Double-Layered Hard Problem (DHP). We take DHP to be the standard Hard Problem as it is discussed in the literature (HP=DHP). If our analysis is correct, then the relation between HP and IIT depends on the theory’s stance on conceivability scenarios, and it presents four possible different outcomes. Firstly, regarding MHP, IIT takes the road of nonreductive fundamental explanation and thus can be said to indirectly attempt to solve it. Secondly, the theory directly denies that there is a DHP for it to answer, owing to its methodological choice of a top-down approach. Thirdly, IIT indirectly denies that there is in general a DHP, either by allowing only for functionally, but not physically, identical zombies (no conceivability), or by holding the necessary identity between an experience and its MICS (no possibility). If our argument is sound, then IIT and HP in their current state cannot both be true: one of them needs to be revised or rejected. References Chalmers, D. (2010). The Character of Consciousness. New York: Oxford University Press. Chalmers, D. (2018). The Meta-Problem of Consciousness. Journal of Consciousness Studies, 25(9-10), 6–61. Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Comput Biol, 10(5), e1003588. http://dx.doi.org/10.1371/journal.pcbi.1003588. Tononi, G. (2015). Integrated information theory. Scholarpedia, 10(1), 4164. http://dx.doi.org/10.4249/scholarpedia.4164.

11:30 | What is "biological" about biologically-inspired computational models in cognitive science? Implications for the multiple realisation debate ABSTRACT. In this talk, I investigate the use of biologically-inspired computational models in cognitive science and their implications for the multiple realisation debate in philosophy of mind. Multiple realisation occurs when the same state or process can be realised in different ways. For example, flight is a potentially multiply realised process. Birds, planes and helicopters all fly relying on the same aerodynamic principles, but their mechanisms for flying differ substantially: birds have two wings which they flap in order to achieve flight, planes also have two wings, but they are static rather than flapping, and helicopters use rotors on the top to produce enough lift for flight. If these “ways” of flying are considered sufficiently different, then we can conclude that flight is a multiply realised process. Philosophers of mind (such as Putnam (1967) and Fodor (1974), but more recently Polger & Shapiro (2016)) have frequently taken multiple realisation to be significant for metaphysical debates about whether mental processes can be reduced to neural processes. The idea is that if mental processes such as pain are multiply realised, then pain does not reduce to a neural process since it can be instantiated in other ways. The current literature on multiple realisation (for example, Polger and Shapiro (2016) and Aizawa (2018a; 2018b)) doesn’t consider how artificial and engineered systems such as biologically-inspired computational models fit into this debate. I argue that the use of these models in cognitive science motivates the need for a new kind of multiple realisation, which I call ‘engineered multiple realisation’ (or EMR). By this, I mean that scientists aim to create multiple realisations of cognitive capacities (such as object recognition) through engineering systems.
I describe various examples of this in cognitive science and explain how these models incorporate biological elements in different ways. Given this, I claim that EMR cannot bear on debates about the nature of mental processes. Instead, I argue that, when building computational models as EMRs, there are different payoffs for incorporating biology into the models. For example, researchers are often motivated to incorporate biological elements into their models in the hope that doing so will lead to better performance of their models (Baldassarre et al. (2017); George (2017); Laszlo & Armstrong (2013)). Other researchers incorporate biological elements into models as a way to test hypotheses about the mechanisms underlying human vision (Tarr & Aminoff, 2016). I emphasise that these payoffs depend on the goals of different modelling approaches and what the approaches take to be biologically relevant for these goals. By sketching out different approaches and their notions of biological relevance, I conclude that there are many important roles that EMR can play instead of informing traditional metaphysical debates about the reduction of mental to neural processes. References: Aizawa, K. (2018a). Multiple Realization, Autonomy, and Integration. In D. M. Kaplan (Ed.), Explanation and Integration in Mind and Brain Science (pp. 215–235). Oxford: Oxford University Press. Aizawa, K. (2018b). Multiple realization and multiple “ways” of realization: A progress report. Studies in History and Philosophy of Science Part A, 68, 3–9. https://doi.org/10.1016/j.shpsa.2017.11.005 Baldassarre, G., Santucci, V. G., Cartoni, E., & Caligiore, D. (2017). The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction. Behavioral and Brain Sciences, 40, 25–26. Fodor, J. A. (1974). Special Sciences (Or: The Disunity of Science as a Working Hypothesis). Synthese, 28(2), 97–115. George, D.
(2017). What can the brain teach us about building artificial intelligence? Behavioral and Brain Sciences, 40, 36–37. Laszlo, S., & Armstrong, B. C. (2013). Applying the dynamics of post-synaptic potentials to individual units in simulation of temporally extended ERP reading data. Proceedings of the Annual Meeting of the Cognitive Science Society, 35(35). Polger, T. W., & Shapiro, L. A. (2016). The multiple realization book. Oxford: Oxford University Press. Putnam, H. (1967). Psychological Predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, mind and religion. Pittsburgh: University of Pittsburgh Press. Tarr, M. J., & Aminoff, E. M. (2016). Can big data help us understand human vision? In M. N. Jones (Ed.), Big Data in Cognitive Science (pp. 343–363). New York: Routledge.

12:00 | Turing Redux: An Enculturation Account of Calculation and Computation ABSTRACT. Calculation and other mathematical practices play an important role in our cognitive lives by enabling us to organise and navigate our socio-cultural environment. Combined with other mathematical practices, calculation has led to powerful scientific methods that contribute to an in-depth understanding of our world. In this talk, I will argue that calculation is a practice that relies on the embodied manipulation of numerical symbols. It is distributed across the brain, the rest of the body, and the socio-cultural environment. Phylogenetically, calculation is the result of concerted interactions between human organisms and their socio-cultural environment. Across multiple generations, these interactions have led from approximate quantity estimations and object-tracking to the cumulative cultural evolution of discrete, symbol-based operations. Ontogenetically, the acquisition of competence in calculation is the result of enculturation. Enculturation is a temporally extended process that usually leads to the acquisition of culturally, rather than biologically, evolved cognitive practices (Fabry, 2017; Menary, 2015). It is associated with plastic changes to neural circuitry, action schemata, and motor programs. With these considerations in place, I will describe the recent cognitive history of computation. Based on Turing’s (1936) seminal work on computable numbers, computation can be characterised as a specific type of calculation. Computational systems, I will show, are hybrid systems, because they are realised by the swift integration of embodied human organisms and cognitive artefacts in different configurations (Brey, 2005). Classically, computations are realised by enculturated human organisms that bodily manipulate numerical symbols using pen and paper. Turing’s (1936) work built on this observation and paved the way towards the design of digital computers. 
The advent of digital computers has led to an innovative way to carry out hybrid computations: enculturated human organisms can now complete complex computational tasks by being coupled to computational artefacts, i.e., digital computers. Some of these tasks would be very difficult or even impossible to complete (e.g., in statistical data analysis) if human organisms were denied the use of digital computational artefacts. In sum, in this talk I will argue that computation, understood as a specific kind of calculation, is the result of enculturation. Historically, enculturated computation has enabled the development and refinement of digital computers. These digital computers, in turn, help enculturated human organisms complete computational tasks, because they can be recruited as a reliable component of coupled hybrid human-machine computational systems. These hybrid systems promise to lead to further improvements of digital computational artefacts in the foreseeable future. References Brey, P. (2005). The epistemology and ontology of human-computer interaction. Minds and Machines, 15(3–4), 383–398. Fabry, R. E. (2017). Cognitive innovation, cumulative cultural evolution, and enculturation. Journal of Cognition and Culture, 17(5), 375–395. https://doi.org/10.1163/15685373-12340014 Menary, R. (2015). Mathematical cognition: A case of enculturation. In T. Metzinger & J. M. Windt (Eds.), Open MIND (pp. 1–20). Frankfurt am Main: MIND Group. https://doi.org/10.15502/9783958570818 Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230–265.

11:00 | Dispositions and Causal Bayes Nets PRESENTER: Florian Fischer ABSTRACT. In this talk we develop an analysis of dispositions on the basis of causal Bayes nets (CBNs). Causal modeling techniques such as CBNs have already been applied to various philosophical problems (see, e.g., Beckers, ms; Gebharter, 2017a; Hitchcock, 2016; Meek & Glymour, 1994; Schaffer, 2016). Using the CBN formalism as a framework for analyzing philosophical concepts and issues intimately connected to causation seems promising for several reasons. One advantage of CBNs is that they make causation empirically tangible. The CBN framework provides powerful tools for formulating and testing causal hypotheses, for making predictions, and for the discovery of causal structure (see, e.g., Spirtes, Glymour, & Scheines, 2000). In addition, it can be shown that the theory of CBNs satisfies standards successful empirical theories satisfy as well: It provides the best explanation of certain empirical phenomena and can, as a whole theory, be tested on empirical grounds (Schurz & Gebharter, 2016). In the following we use CBNs to analyze dispositions as causal input-output structures. Such an analysis of dispositions comes with several advantages: It allows one to apply powerful causal discovery methods to find and specify dispositions. It is also flexible enough to account for the fact that dispositions might change their behavior in different circumstances. In other words, one and the same disposition may give rise to different counterfactual conditionals if its causal environment is changed. The CBN framework can be used to study such behavior of dispositions in different causal environments on empirical grounds. Because of this flexibility, our analysis can also provide novel solutions to philosophical problems posed by masks, mimickers, and finks which, one way or another, plague all other accounts of dispositions currently on the market. 
According to Cross (2012), the “recent literature on dispositions can be characterized helpfully, if imperfectly, as a continuing reaction to this family of counterexamples” (Cross, 2012, p. 116). Another advantage of our analysis is that it allows for a uniform representation of probabilistic and non-probabilistic dispositions. Other analyses of dispositions often either have trouble switching from non-probabilistic dispositions to probabilistic dispositions, or exclude probabilistic dispositions altogether. The talk is structured as follows: In part 1 we introduce dispositions and the problems arising for classical dispositional theories due to masks, mimickers, and finks. Then, in part 2, we present the basics of the CBN framework and our proposal for an analysis of dispositions within this particular framework. We highlight several advantages of our analysis. In part 3 we finally show how our analysis of dispositions can avoid the problems with masks, mimickers, and finks classical accounts have to face. We illustrate how these problems can be solved by means of three prominent exemplary scenarios which shall stand proxy for all kinds of masking, mimicking, and finking cases. References Beckers, S. (ms). Causal modelling and Frankfurt cases. Cross, T. (2012). Recent work on dispositions. Analysis, 72(1), 115-124. Gebharter, A. (2017a). Causal exclusion and causal Bayes nets. Philosophy and Phenomenological Research, 95(2), 153-375. Hitchcock, C. (2016). Conditioning, intervening, and decision. Synthese, 193(4), 1157-1176. Meek, C., & Glymour, C. (1994, December). Conditioning and intervening. British Journal for the Philosophy of Science, 45(4), 1001-1021. Schaffer, J. (2016). Grounding in the image of causation. Philosophical Studies, 173(1), 49-100. Schurz, G., & Gebharter, A. (2016). Causality as a theoretical concept: Explanatory warrant and empirical content of the theory of causal nets. Synthese, 193(4), 1073-1103. Spirtes, P., Glymour, C., & Scheines, R. 
(2000). Causation, prediction, and search (2nd ed.). Cambridge: MIT Press. |

11:30 | Modeling Creative Abduction Bayes Net Style PRESENTER: Alexander Gebharter ABSTRACT. In contrast to selective abduction and other kinds of inferences, creative abduction is intended as an inference method for generating hypotheses featuring new theoretical concepts on the basis of empirical phenomena. Most philosophers of science are quite skeptical about whether a general approach toward such a “logic of scientific inquiry” can be fruitful. However, since theoretical concepts are intimately connected to empirical phenomena via dispositions, a restriction of the domain of application of such an approach to empirically correlated dispositions might be more promising. Schurz (2008) takes up this idea and distinguishes different patterns of abduction. He then argues for the view that at least one kind of creative abduction can be theoretically justified. In a nutshell, his approach is based on the idea that inferences to theoretical concepts unifying empirical correlations among dispositions can be justified by Reichenbach’s (1956) principle of the common cause. In this talk we take up Schurz’ (2008) proposal to combine creative abduction and principles of causation. We model cases of successful creative abduction within a Bayes net framework that can, if causally interpreted, be seen as a generalization of Reichenbach’s (1956) ideas. We specify general conditions that have to be satisfied in order to generate hypotheses involving new theoretical concepts and describe their unificatory power in a more fine-grained way. This will allow us to handle cases in which we can only measure non-strict (probabilistic) empirical dependencies among dispositions and to shed new light on several other issues in philosophy of science. We consider our analysis of successful instances of creative abduction by means of Bayes net models as another step toward a unified Bayesian philosophy of science in the sense of Sprenger and Hartmann (in press). 
The talk is structured as follows: We start by introducing Schurz’ (2008) approach to creative abduction. We also explain how it allows for unifying strict empirical correlations among dispositions and how it can be justified by Reichenbach’s (1956) principle of the common cause. We then briefly introduce the Bayes net formalism, present our proposal for how to model successful cases of creative abduction within this particular framework, and identify necessary conditions for such cases. Next we investigate the unificatory power gained by creative abduction in the Bayesian setting and draw a comparison with the unificatory power creative abduction provides in the strict setting. Subsequently, we outline possible applications of our analysis to other topics within philosophy of science. In particular, we discuss the generation of use-novel predictions, new possible ways of applying Bayesian confirmation theory, a possible (partial) solution to the problem of underdetermination, and the connection of modeling successful instances of creative abduction Bayesian style to epistemic challenges tackled in the causal inference literature. References Reichenbach, H. (1956). The direction of time. Berkeley: University of California Press. Schurz, G. (2008). Patterns of Abduction. Synthese, 164(2), 201-234. Sprenger, J., & Hartmann, S. (in press). Bayesian philosophy of science. Oxford: Oxford University Press. |
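The screening-off behaviour at the heart of Reichenbach's principle of the common cause can be shown with a minimal numeric sketch. The probabilities below are made-up illustrative numbers, not taken from the talk: two effects A and B of a common cause C are marginally correlated, yet conditional on C they are independent.

```python
# Illustrative common-cause structure: C -> A, C -> B.
# All numbers are made-up assumptions for the sketch.
from itertools import product

p_c = 0.5                     # P(C = 1)
p_a_given = {1: 0.9, 0: 0.1}  # P(A = 1 | C)
p_b_given = {1: 0.8, 0: 0.2}  # P(B = 1 | C)

def p(c, a, b):
    """Joint probability of (C, A, B), with A and B depending only on C."""
    pc = p_c if c == 1 else 1 - p_c
    pa = p_a_given[c] if a == 1 else 1 - p_a_given[c]
    pb = p_b_given[c] if b == 1 else 1 - p_b_given[c]
    return pc * pa * pb

# Marginally, A and B are correlated: P(A,B) > P(A)P(B).
p_a = sum(p(c, 1, b) for c, b in product((0, 1), repeat=2))
p_b = sum(p(c, a, 1) for c, a in product((0, 1), repeat=2))
p_ab = sum(p(c, 1, 1) for c in (0, 1))
assert p_ab > p_a * p_b

# Conditional on the common cause C, they are independent (screening off).
for c in (0, 1):
    pc = p_c if c == 1 else 1 - p_c
    pa_c = sum(p(c, 1, b) for b in (0, 1)) / pc
    pb_c = sum(p(c, a, 1) for a in (0, 1)) / pc
    pab_c = p(c, 1, 1) / pc
    assert abs(pab_c - pa_c * pb_c) < 1e-12
```

With these numbers P(A,B) = 0.37 while P(A)P(B) = 0.25, so the correlation is entirely produced by, and disappears given, the common cause.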

I use Einstein’s theory of relativity to draw out some lessons about the defining features of an objective description of reality. I argue, in particular, against the idea that an objective description can be a description from the point of view of no-one in particular.

14:00 | Ecumenism: a new perspective on the relation between logics PRESENTER: Luiz Carlos Pereira ABSTRACT. A traditional way to compare and relate logics (and mathematical theories) is through the definition of translations/interpretations/embeddings. In the late twenties and early thirties of the last century, several such results were obtained concerning relations between classical logic (CL), intuitionistic logic (IL) and minimal logic (ML), and between classical arithmetic (PA) and intuitionistic arithmetic (HA). In 1925 Kolmogorov proved that classical propositional logic (CPL) could be translated into intuitionistic propositional logic (IPL). In 1929 Glivenko proved two important results relating CPL to IPL. Glivenko’s first result shows that A is a theorem of CPL iff ¬¬A is a theorem of IPL. His second result establishes that we cannot distinguish CPL from IPL with respect to theorems of the form ¬A. In 1933 Gödel defined an interpretation of PA into HA, and in the same year Gentzen defined a new interpretation of PA into HA. These interpretations/translations/embeddings were defined as functions from the language of PA into some fragment of the language of HA that preserve some important properties, like theoremhood. In 2015 Dag Prawitz (see [3]) proposed an ecumenical system, a codification in which classical logic and intuitionistic logic could coexist “in peace”. The main idea behind this codification is that classical and intuitionistic logic share the constants for conjunction, negation and the universal quantifier, but each has its own disjunction, implication and existential quantifier. Similar ideas are present in Dowek [1] and Krauss [2], but without Prawitz’ philosophical motivations. 
The aims of the present paper are: (1) to investigate the proof theory and the semantics for Prawitz’ Ecumenical system (with a particular emphasis on the role of negation), (2) to compare Prawitz’ system with other ecumenical approaches, and (3) to propose new ecumenical systems. References 1. Dowek, Gilles, On the definitions of the classical connective and quantifiers, Why is this a proof? (Edward Haeusler, Wagner Sanz and Bruno Lopes, editors), College Books, UK, 2015, pp. 228 - 238. 2. Krauss, Peter H., A constructive interpretation of classical mathematics, Mathematische Schriften Kassel, preprint No. 5/92 (1992) 3. Prawitz, Dag, Classical versus intuitionistic logic, Why is this a proof? (Edward Haeusler, Wagner Sanz and Bruno Lopes, editors), College Books, UK, 2015, pp. 15 - 32. |
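The Kolmogorov translation mentioned above can be sketched as a purely syntactic transformation: prefix a double negation to every subformula. The encoding below (nested tuples, ASCII `~` and `->`) is our own illustrative choice, not notation from the paper.

```python
# A toy implementation of the Kolmogorov translation of CPL into IPL:
# double-negate every subformula. Formulas are nested tuples such as
# ('->', ('var', 'A'), ('var', 'B')).

def neg(f):
    return ('not', f)

def kolmogorov(f):
    """Return the Kolmogorov (double-negation) translation of f."""
    op = f[0]
    if op == 'var':
        return neg(neg(f))
    if op == 'not':
        return neg(neg(('not', kolmogorov(f[1]))))
    # binary connectives: 'and', 'or', '->'
    return neg(neg((op, kolmogorov(f[1]), kolmogorov(f[2]))))

def show(f):
    """Render a formula as a string, with '~' for negation."""
    op = f[0]
    if op == 'var':
        return f[1]
    if op == 'not':
        return '~' + show(f[1])
    return '(' + show(f[1]) + ' ' + op + ' ' + show(f[2]) + ')'

# Excluded middle, A or ~A, is a classical theorem; its translation
# is a theorem of intuitionistic propositional logic.
lem = ('or', ('var', 'A'), ('not', ('var', 'A')))
print(show(kolmogorov(lem)))  # ~~(~~A or ~~~~~A)
```

The translation preserves theoremhood in the sense stated in the abstract: A is a theorem of CPL iff its translation is a theorem of IPL.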

14:30 | Modal Negative Translations as a Case Study in The Big Programme ABSTRACT. This talk is about negative translations—Kolmogorov, Gödel-Gentzen, Kuroda, Glivenko and their variants—in propositional logics with a unary normal modality. More specifically, it addresses the question whether negative translations as a rule faithfully embed a classical modal logic into its intuitionistic counterpart. As it turns out, even the Kolmogorov translation can go wrong with rather natural modal principles. Nevertheless, one can isolate sufficient syntactic criteria for axioms (“enveloped implications”) ensuring adequacy of well-behaved (or, in our terminology, “regular”) translations. Furthermore, a large class of computationally relevant modal logics—namely, logics of type inhabitation for applicative functors (a.k.a. “idioms”)—turns out to validate the modal counterpart of the Double Negation Shift, thus ensuring adequacy of even the Glivenko translation. All the positive results mentioned above can be proved purely syntactically, using the minimal natural deduction system of Bellin, de Paiva and Ritter extended with Sobociński-style additional axioms/combinators. Hence, mildly proof-theoretic methods can be used with surprising success in “the Big Programme" (to borrow F. Wolter and M. Zakharyaschev's phrase from the "Handbook of Modal Logic"). Most of this presentation is based on results published with my former students, who provided formalization in the Coq proof assistant. In the final part, however, I will discuss variants of a semantic approach based either on a suitable notion of subframe preservation or on a generalization of Wolter’s “describable operations”. An account of this semantic approach and its comparison with the scope of the syntactic one remain unpublished. |

14:00 | Mutually inconsistent set-theoretic universes: An analysis of universist and multiversist strategies PRESENTER: Carolin Antos ABSTRACT. Modern set theory presents us with the very curious case of a mathematical discipline whose practice has been decidedly pluralistic in flavor during the last decades. The typical work of a set theorist today consists of starting from a model of set theory and building up different models which not only can be, but usually are, incompatible with one another in the sense of fulfilling mutually inconsistent mathematical statements. This practice is so mathematically fruitful that nowadays there is a whole cosmos of set-theoretic models that are very well researched and worked on, but which contradict each other to the point that they seem to represent different “kinds of set theories”. Recent programs in the philosophy of set theory try to resolve this situation in the most diverse ways, from claiming a pluralistic platonism to trying to return to the picture of a single, “true” model of mathematics. In this talk we want to explain how such a pluralistic practice evolved in the 1960s; why and how it has been not only tolerated but pursued until now; and we want to analyze various strategies that have been proposed to deal with the scientific pluralism implicit in this practice. |

14:30 | Informal Rigorous Mathematics and its Logic ABSTRACT. Mathematical practice in all its forms, and despite formidable technicalities, is a natural-language practice. Thus, the logic(s) of that practice is implicit; and, in turn—like claims about the logic(s) of natural language—what logic or logics are operative in mathematics is an empirical question. There is a normative question in the neighborhood. Regardless of what the descriptive situation vis-à-vis the logic(s) of mathematics is discovered to be, we can nevertheless ask the further question: what logic(s) should mathematics be governed by? This further question requires clarity about the function(s) of mathematics. It’s important to realize that although mathematics is a natural-language practice, the answers to both the descriptive and the normative questions about mathematics and natural languages, generally, needn’t be the same. The gold standard for logic, I suggest, is Frege’s. If we compare some form of syllogistic logic, or one-place predicate logic, with the standard predicate logic, we find an enormous advance in terms of formalizing the reasoning of mathematical proof: the project of characterizing classical mathematical reasoning seems solved by the use of this logic. This is because predicate relations can be represented in the standard predicate calculus and they are crucial to mathematical reasoning. In making this claim, it needs to be shown that classical higher-order logics don’t provide characterizations of mathematical theorizing that go beyond what’s available in the first-order predicate calculus—but this can be shown. In particular, categoricity results in a first-order logical system are always available even without those results being true of such systems. The important question is whether a similar advance is possible, taking us beyond the standard predicate calculus. 
A tempting possibility is afforded by the many examples in the history of mathematics where apparently inconsistent principles were employed. (Euler’s free and easy ways with infinite series are often cited; infinitesimal reasoning is another example.) I claim that there isn’t anything here that motivates characterizing mathematical practice according to non-classical logics that allow true contradictions, or the like. Pluralist conceptions of the logic(s) of mathematics, however, can be motivated by considerations independent of the foregoing. This is because of the global reach of mathematics. One can simply study in mathematics any subject area subject to any logic one imagines: set theory in a multivalued logic, intuitionistic real analysis, and so on. Furthermore, although it’s not uncommon to study these subject areas from a “classical” viewpoint, that isn’t required. To speak somewhat picturesquely, one can use an intuitionistic metalanguage to derive results about intuitionistic set theory. Applying mathematics to empirical sciences—I claim—is a different matter. Here the mathematics must function under one logical umbrella: whichever logic is operative in the sciences. I claim that the holism typical of the sciences—that results from any area may be applied to any other—requires the over-arching logic to be one, and (as of now, at least) to be classical. Pluralism in mathematics itself is a truism; pluralism in applied mathematics is forbidden. |

Organizers: Hitoshi Omori and Heinrich Wansing

Modern connexive logic started in the 1960s with seminal papers by Richard B. Angell and Storrs McCall. Connexive logics are orthogonal to classical logic insofar as they validate certain non-theorems of classical logic, namely

Aristotle's Theses: ~(~A-> A), ~(A-> ~A)

Boethius' Theses: (A-> B)-> ~(A-> ~B), (A-> ~B)-> ~(A-> B)
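That these theses are non-theorems of classical logic is easy to verify with a truth table for the material conditional; the short sketch below (an illustration of ours, not part of the symposium material) does exactly that.

```python
# Classical truth-table check that Aristotle's and Boethius' theses,
# read with the material conditional, are not classical tautologies.
from itertools import product

def imp(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

def aristotle1(a):      # ~(~A -> A)
    return not imp(not a, a)

def boethius1(a, b):    # (A -> B) -> ~(A -> ~B)
    return imp(imp(a, b), not imp(a, not b))

# Aristotle's thesis fails classically when A is true ...
assert not all(aristotle1(a) for a in (False, True))
# ... and Boethius' thesis fails when A is false.
assert not all(boethius1(a, b) for a, b in product((False, True), repeat=2))
```

This is the sense in which connexive logics are contra-classical: adopting the theses as valid means going beyond, not merely below, classical logic.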

Systems of connexive logic have been motivated by considerations on a content connection between the antecedent and succedent of valid implications and by applications that range from Aristotle's syllogistic to Categorial Grammar and the study of causal implications. Surveys of connexive logic can be found in:

*Storrs McCall, "A History of Connexivity", in D.M. Gabbay et al. (eds.), Handbook of the History of Logic. Volume 11. Logic: A History of its Central Concepts, Amsterdam, Elsevier, 2012, pp. 415-449.

*Heinrich Wansing, "Connexive Logic", in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 Edition).

There is also a special issue on connexive logics in the IfCoLog Journal of Logics and their Applications. The entire issue is available at: http://collegepublications.co.uk/ifcolog/?00007

As topics related to connexive logic are attracting growing interest from researchers working in different areas of philosophical logic, the symposium aims at discussing directions for future research in connexive logics. More specifically, we will have talks related to modal logic, many-valued logic, probabilistic logic, relevant (or relevance) logic and conditional logic, among others. There will also be some connections to experimental philosophy and philosophy of logic.

14:00 | Are connexive principles coherent? ABSTRACT. While there is a growing interest in studying connexive principles in philosophical logic, little is known about their behaviour under uncertainty. In particular: can connexive principles be validated in probability logic? Among various approaches to probability, I advocate the coherence approach, which traces back to Bruno de Finetti. In my talk, I discuss selected connexive principles from the viewpoints of coherence-based probability logic (CPL) and the theory of conditional random quantities (CRQ). Both CPL and CRQ are characterised by transmitting uncertainty coherently from the premises to the conclusions. Roughly speaking, the theory of CRQ generalises the notion of conditional probability to deal with nested conditionals (i.e., conditionals in antecedents or consequents) and conjoined conditionals (like disjunctions and conjunctions of conditionals) without running into Lewis' triviality results. Within the frameworks of CPL and of CRQ, I investigate which connexive principles follow from the empty premise set. Specifically, I explain why Boethius' theses, Abelard's First Principle, and in particular Aristotle's theses do hold under coherence and why Aristotle’s Second Thesis does not hold under coherence. Although most of these connexive principles have a high intuitive appeal, they are neither valid in classical logic nor in many other logics (including those which were custom-made for conditionals). In the CPL and CRQ frameworks for uncertain conditionals, however, intuitively plausible connexive principles can be validated. The psychological plausibility of the proposed approach to connexivity is further endorsed by experimental psychological studies: an overview of the empirical evidence will conclude my talk. |

14:30 | Variable sharing principles in connexive logic PRESENTER: Luis Estrada-González ABSTRACT. Connexive logics are motivated by certain ways in which the connection between antecedent and consequent in a true conditional, and especially in a logically valid one, is perceived. According to Routley, if the connection that needs to hold between antecedent and consequent in a true conditional is understood as content connection, and if a connection in content is achieved when antecedent and consequent are mutually relevant, “the general classes of connexive and relevant logics are one and the same”. However, it is well known that, in general, connexive and relevant logics are mutually incompatible. For instance, there are results showing that the logic resulting from adding Aristotle’s Thesis to B is contradictory, and that R plus Aristotle’s Thesis is trivial. Thus, even though there is a (probably lenient) sense of ‘connection between antecedent and consequent of a true (or logically valid) conditional’ that allows one to group connexive and relevance logics in the same family, what Sylvan at some point called ‘sociative logics’, the types of connection that each kind of logic demands are very different from one another, to the point of incompatibility. A fruitful way to study logics in the relevant family has been by means of certain principles about the form that theorems should have, especially the purely implicative ones, or the relevant characteristics that an acceptable logically valid proof should have. This paper is part of a broader ongoing project in which we investigate whether we can pin down the similarities and differences between connexivity and relevance through principles of this kind. The general hypothesis of this broader project is that we can indeed do so; in this paper, we will only show that some connexive logics imply principles that could be considered extremely strong versions of the Variable Sharing Property. 
Furthermore, we will show that there are conceptual connections worthy of consideration among principles of this type and certain characteristics of some connexive logics, such as the number of appearances of a variable in an antecedent, in a consequent, or in a theorem in general. This is important because relevance logics and connexive logics share some common goals, such as finding a closer bond between premises and conclusions, but at the same time, as we have said, there are strong differences between the two families of logics. |

14:00 | A bimodal logic of change with Leibnizian hypothetical necessity PRESENTER: Kordula Świętorzecka ABSTRACT. We present a new extension of the LCS4 logic of change and necessity. LCS4 (formulated in [2]) is obtained from the LC sentential logic of change, where changeability is expressed by a primitive operator (C…, to be read: it changes that …). LC was proposed as a formal frame for a description of the so-called substantial changes analysed by Aristotle (the disappearing and becoming of substances) ([1]). Another interesting philosophical motivation for LC (and LCS4) comes from the philosophy of Leibniz. In this case the considered changes concern global states of compossible monads. Both LC and LCS4 are interpreted in the semantics of linear histories of dichotomic changes. In the case of LCS4, in addition to the concept of C change, a second primitive notion is considered: unchangeability, represented by \Box. LC and LCS4 are complete with respect to the intended semantics (proofs in [1], [2]). Semantically speaking, the subsequent changes cause the rise and passage of linear time. This is a well-known conception of both Aristotle and Leibniz. The idea of linking LCS4 with the Leibnizian philosophy of change and time encourages us to introduce into its semantics the concept of a possible world, understood as a global description of compossible atomic states of monads. A possible global state of compossible atomic states j, k, l, … of monad m may be represented by a conjunction of sentential constants: \alpha^m_j \land \alpha^m_k \land \alpha^m_l \dots. For every monad many different global states may be considered, but only one of them is actual. The possible worlds of m which are not in contradiction with the actual world of m are mutually accessible. Those actual atomic states of m which occur in all possible worlds of m accessible from the actual one are necessary in the sense of our new modal operator N\Box. 
If an actual state of m occurs in at least one possible world of m accessible from the actual one, we say that it is possible in the actual world in the sense of N\Diamond. Regardless of simultaneous competing global states of m, each of them may change in the sense of C or may be unchangeable in the sense of \Box. Our new semantics contains many linear possible histories of C changes and \Box durations, which generate the flow of time. At the syntactic level we extend LCS4 by new axioms containing the primitive symbols N\Box, C, \Box. We prove a completeness theorem for our new logic. In this frame we can also explicate the specific concept of Leibnizian hypothetical necessity. Those sentences may be said to be hypothetically necessary which express general laws of possible worlds ‘analogous to the laws of motion; what these laws are, is contingent, but that there are such laws is necessary’ [3, 69]. In our terms, a sentence \alpha is hypothetically necessary in a possible world w iff \alpha is unchangeable in some world accessible from w; this means: N\Diamond\Box\alpha is true in w. [1] Świętorzecka, K., Czermak, J., (2012) “Some Calculus for a Logic of Change”, Journal of Applied Non-Classical Logic, 22(1):1–8; [2] (2015) “A Logic of Change with Modalities”, Logique et Analyse, 232, 511–527; [3] Russell, B., (1900), A critical exposition of the philosophy of Leibniz, Cambridge University Press. |

Organizer: Paula Quinon

The HaPoC symposium "Philosophy of Big Data" is submitted on behalf of the History and Philosophy of Computing division of the DLMPST

The symposium is devoted to a discussion of philosophical problems related to Big Data, an increasingly important topic within the philosophy of computing. Big Data are worth studying from an academic perspective for several reasons. First of all, ontological questions are central: what Big Data are, whether we can speak of them as a separate ontological entity, and what their mereological status is. Second, epistemological ones: what kind of knowledge do they induce, and what methods do they require for accessing valuable information?

These general questions also have very specific counterparts, raising a series of methodological questions. Should data accumulation and analysis follow the same general patterns in all sciences, or should they be relativized to particular domains? For instance, should medical doctors and businessmen focus on the same issues related to the gathering of information? Is the quality of information similarly important in all contexts? Can one community be inspired by the experience of another? To what extent do human factors influence the information that we extract from Big Data?

In addition to these theoretical academic issues, Big Data also represents a social phenomenon. “Big Data” is nowadays a fancy business buzzword which - together with "AI" and "Machine Learning" - shapes business projects and the R&D job market, with data analyst among the most attractive job titles. It is believed that "Big Data" analysis opens up unknown opportunities and generates additional profits. However, it is not clear what counts as Big Data in the industry, and critical reflection about it seems necessary.

The proposed symposium gathers philosophers, scientists and experts in commercial Big Data analysis to reflect on these questions. We believe that the opportunity to exchange ideas, methodologies and experiences gathered from different perspectives and with divergent objectives will not only enrich academic philosophical reflection, but will also prove useful for practical - scientific or business - applications.

15:15 | On the epistemology of data science – the rise of a new inductivism ABSTRACT. Data science, here understood as the application of machine learning methods to large data sets, is an inductivist approach, which starts from the facts to infer predictions and general laws. This basic assessment is illustrated by a case study of successful scientific practice from the field of machine translation and also by a brief analysis of recent developments in statistics, in particular the shift from so-called data modeling to algorithmic modeling as described by the statistician Leo Breiman. The inductivist nature of data science is then explored by discussing a number of interrelated theses. First, data science leads to the increasing predictability of complex phenomena, especially to more reliable short-term predictions. This essentially follows from the improved ways of storing and processing data by means of modern information technology in combination with the inductive methodology provided by machine learning algorithms. Second, the nature of modeling changes from heavily theory-laden approaches with little data to simple models using a lot of data. This change in modeling can be observed in the mentioned shift from data to algorithmic models. The latter are in general not reducible to a relatively small number of theoretical assumptions and must therefore be developed or trained with a lot of data. Third, there are strong analogies between exploratory experimentation, as characterized by Friedrich Steinle and Richard Burian, and data science. Most importantly, a substantial theory-independence characterizes both scientific practices. They also share a common aim, namely to infer causal relationships by a method of variable variation as will be elaborated in more detail in the following theses. 
Fourth, causality is the central concept for understanding why data-intensive approaches can be scientifically relevant, in particular why they can establish reliable predictions or allow for effective interventions. This thesis states the complete opposite of the popular conception that with big data correlation replaces causation. In a nutshell, the argument for the fourth thesis is contained in Nancy Cartwright’s point that causation is needed to ground the distinction between effective strategies and ineffective ones. Because data science aims at effectively manipulating or reliably predicting phenomena, correlations are not sufficient but rather causal connections must be established. Sixth, the conceptual core of causality in data science consists in difference-making rather than constant conjunction. In other words, variations of circumstances are much more important than mere regularities of events. This is corroborated by an analysis of a wide range of machine learning algorithms, from random trees or forests to deep neural networks. Seventh, the fundamental epistemological problem of data science as defined above is the justification of inductivism. This is remarkable, since inductivism is by many considered a failed methodology. However, the epistemological argument against inductivism is in stark contrast to the various success stories of the inductivist practice of data science, so a reevaluation of inductivism may be needed in view of data science. |
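The contrast in the fourth thesis between mere correlation and causal connection can be made concrete with a toy structural model (our own sketch, not material from the talk): two variables produced by a common cause are perfectly correlated, yet intervening on one of them is an ineffective strategy for changing the other.

```python
# Toy structural causal model: C causes both A and B.
# Observationally A and B co-vary; under the intervention do(A = 1),
# the arrow C -> A is cut and B is unaffected.

def model(c, do_a=None):
    """One run of the model; do_a=None means no intervention on A."""
    a = c if do_a is None else do_a   # A := C, unless we set A by intervention
    b = c                             # B := C
    return a, b

# Observation over the two equally likely values of C: A and B always agree.
observed = [model(c) for c in (0, 1)]
assert all(a == b for a, b in observed)   # perfect correlation

# Intervention do(A = 1): the frequency of B = 1 is unchanged.
post = [model(c, do_a=1) for c in (0, 1)]
freq_b_observed = sum(b for _, b in observed) / 2
freq_b_do = sum(b for _, b in post) / 2
assert freq_b_do == freq_b_observed       # manipulating A does not bring about B
```

This is Cartwright's point in miniature: the A-B correlation alone cannot ground an effective strategy; only the causal structure tells us that manipulating C, not A, would change B.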

15:45 | Finding a Way Back: Philosophy of Data Science on Its Practice PRESENTER: Domenico Napoletani ABSTRACT. Because of the bewildering proliferation of data science algorithms, it is difficult to assess the potential of individual techniques, beyond their obvious ability to solve the problems that have been tested on them, or to evaluate their relevance for specific datasets. In response to these difficulties, an effective philosophy of data science should be able not only to describe and synthesize the methodological outline of this field, but also to project back on the practice of data science a discerning frame that can guide, as well as be guided by, the development of algorithmic methods. In this talk we attempt some first steps in this latter direction. In particular, we will explore the appropriateness of data science methods for large classes of phenomena described by processes mirroring those found in developmental biology. Our analysis will rely on our previous work [1,2,3] on the motifs of mathematization in data science: the principle of forcing, that emphasizes how large data sets allow mathematical structures to be used in solving problems, irrespective of any heuristic motivation for their usefulness; and Brandt's principle [3], that synthesizes the way forcing local optimization methods can be used in general to build effective data-driven algorithms. We will then show how this methodological frame can provide useful broad indications on key questions of stability and accuracy for two of the most successful methods in data science, deep learning and boosting. [1] D. Napoletani, M. Panza, and D.C. Struppa, Agnostic science. Towards a philosophy of data analysis, Foundations of Science, 2011, 16, pp. 1--20. [2] D. Napoletani, M. Panza, and D.C. Struppa, Is big data enough? A reflection on the changing role of mathematics in applications. Notices of the American Mathematical Society, 61, 5, pp. 485--490, 2014. [3] D. Napoletani, M. Panza, and D.C. 
Struppa, Forcing Optimality and Brandt's Principle, in J. Lenhard and M. Carrier (ed.), Mathematics as a Tool, Boston Studies in the Philosophy and History of Science 327, Springer, 2017. |

Organizers: Gisela Boeck and Benedikt Loewe

The year 2019 is the International Year of the Periodic Table (IYPT), celebrating the 150th anniversary of its discovery, and the International Union of History and Philosophy of Science and Technology (IUHPST) is one of the supporting institutions of IYPT.

With this event at CLMPST 2019, we aim to offer all participants of the congress, whether or not they are working in the philosophy of chemistry, an insight into the relevance and importance of the Periodic Table. The event consists of talks for a general academic audience, with a non-technical historical introduction by Hasok Chang, two personal reflections by current or recent graduate students in philosophy of chemistry, and a local point of view by an expert from Prague. The session will be chaired by Gisela Boeck.

Organizer: Mateusz Łełyk

The aim of our symposium is twofold. Firstly, we provide a unified approach to a number of contemporary logico-philosophical results and propose to see them as being about the commitments of various prominent foundational theories. Secondly, we give an overview of formal results obtained over the past few years which shed new light on commitments of both arithmetical theories and theories of sets.

The rough intuition is that commitments of a theory are all the restrictions on the ways the world might be, which are imposed on us given that we accept all the basic principles of the theory. For clarification, during the symposium we focus on the following two types of commitments of a given foundational theory Th:

1. Epistemic commitments are all the statements in the language of Th (or possibly, in the language of Th extended with the truth predicate) that we should accept given that we accept Th.

2. Semantic commitments are all the restrictions on the class of possible interpretations of Th generated by the acceptance of a theory of truth over Th.

In the context of epistemic commitments, several authors have claimed that a proper characterisation of a set of commitments of Th should take the form of an appropriate theory of truth built over Th (see, for example, [Feferman 91], [Ketland 2005] and [Nicolai,Piazza 18]). During the symposium we give an overview of the latest results concerning the Tarski Boundary - the line demarcating the truth theories which generate new implicit commitments of Peano Arithmetic (PA) from the ones which do not. Moreover, we investigate the role of a special kind of reducibility, feasible reducibility, in this context and prove some prominent theories of compositional truth to be feasibly reducible to their base theories.

A different approach to characterize the epistemic commitments of a foundational theory Th was given in [Cieśliński 2017]. Its basic philosophical motivation is to determine the scope of implicit commitments via an epistemic notion of believability. One of the symposium talks will be devoted to presenting this framework.

While investigating the epistemic commitments of Th, we look at the consequences of truth theories in the base truth-free language. Within this approach, a truth theory Th_1 is at least as committing as Th_2 if Th_1 proves all the theorems of Th_2 in the base language. In the semantic approach, one tries to understand every possible condition which truth theories impose on the class of models of Th, instead of looking only at the conditions which are expressible in the base language. A theory Th_1 is at least as semantically committing as Th_2 if for every condition which Th_2 can impose on models of PA, the same condition is imposed already by Th_1. During the symposium we present and compare the latest formal results concerning the semantic commitments of various truth theories extending two of the most distinguished foundational theories: PA and Zermelo-Fraenkel set theory (ZF). During the talks we discuss the philosophical meaning of these developments.
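In symbols (a schematic reading of ours, with L_B the base truth-free language and expandability as one natural way of cashing out "imposing a condition on models"):

```latex
% Epistemic comparison: Th_1 is at least as committing as Th_2 iff Th_1
% proves every base-language theorem of Th_2:
\[
  \mathrm{Th}_2 \preceq_{\mathrm{ep}} \mathrm{Th}_1
  \iff
  \forall \varphi \in \mathcal{L}_B\,
  \bigl(\mathrm{Th}_2 \vdash \varphi \Rightarrow \mathrm{Th}_1 \vdash \varphi\bigr).
\]
% Semantic comparison: every condition Th_2 imposes on models of PA is
% already imposed by Th_1; on the expandability reading:
\[
  \mathrm{Th}_2 \preceq_{\mathrm{sem}} \mathrm{Th}_1
  \iff
  \forall M \models \mathrm{PA}\,
  \bigl(M \text{ expands to a model of } \mathrm{Th}_1 \Rightarrow
        M \text{ expands to a model of } \mathrm{Th}_2\bigr).
\]
```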

References:

[Cieśliński 2017] The Epistemic Lightness of Truth, Cambridge University Press.

[Feferman 1991] Reflecting on Incompleteness, Journal of Symbolic Logic, 56(1), 1-49.

[Ketland 2005] Deflationism and the Gödel Phenomena: reply to Tennant, Mind, 114(453), 75-88.

[Nicolai, Piazza 2018] The Implicit Commitments of Arithmetical Theories and its Semantic Core, Erkenntnis.

15:15 | A Notion of Semantic Uniqueness for Logical Constants ABSTRACT. The demarcation problem for the logical constants is the problem of deciding which expressions of a language to count as part of the logical lexicon, and for what reason. Inferentialist approaches to meaning hold that the inferential behaviour of an expression is meaning-constitutive, and that logical expressions are special in that (among other things) the rules that govern their behaviour uniquely determine their meaning. Things become more complicated when properly semantic (model-theoretic) considerations enter the picture, yet the notion of a consequence relation or a set of rules uniquely determining the meaning of a constant is gaining currency among semantic approaches to the question of logical constanthood as well. In this talk we would like to explore the issues a proponent of a semantic criterion of logicality will encounter when adopting the inferentialist conception of uniqueness, and what a properly semantic notion of uniqueness for logicality could look like. The notion of uniqueness gained importance in the inferentialist approach to the meaning of the logical constants as it constituted a natural complement to the constraint of conservativity -- ruling out Prior's defective connective tonk (Prior 1960, Belnap 1962) -- and cohered with the 'rules-as-definitions' approach pioneered by Gentzen (Gentzen 1934). Identifying the meaning of a logical constant with its inferential role, the demand of uniqueness, in its simplest form, amounted to the requirement that, for constants c, c' obeying identical collections of rules, only synonymous compounds can be formed, i.e. 
that in a language containing both c and c' we have (UC) c(A_{1}, ..., A_{n}) \dashv \vdash c'(A_{1}, ..., A_{n}). Shifting perspective to a truth-theoretic framework, in which the meaning of a logical constant is given by a model-theoretic object, gives rise to interesting issues in the implementation of this kind of notion of uniqueness. For not only are semantic values underdetermined by (single-conclusion) proof-rules (a set of results collectively termed Carnap's Problem; cf. (Carnap 1943)), it is moreover not immediately clear in what way (UC) is to be 'semanticized'. Different ways of conceiving of such a semantic analogue to (UC) can be found in the literature (cf. (Bonnay & Westerstahl 2016), (Feferman 2015), (Zucker 1978)), but a comprehensive comparison and assessment of their relative merits is still outstanding. This is somewhat surprising given the central role the notion of unique determinability by rules plays in approaches to the nature of the logical constants (cf. (Hacking 1979), (Hodes 2004), (Peacocke 1987)). This talk aims to investigate and compare some of the different ways in which (UC) could be adapted to a model-theoretic framework, and to assess the adequacy of these different implementations for the demarcation problem of the logical constants. Belnap, N., "Tonk, Plonk and Plink." Analysis 22 (1962): 130-134. Bonnay, D. and D. Westerstahl. "Compositionality Solves Carnap's Problem." Erkenntnis 81 (2016): 721-739. Carnap, R., Formalization of Logic. Harvard University Press, 1943. Feferman, S. "Which Quantifiers Are Logical? A Combined Semantical and Inferential Criterion." Quantifiers, Quantifiers, and Quantifiers: Themes in Logic, Metaphysics and Language. Ed. Alessandro Torza. Springer, 2015. 19-31. Gentzen, G., "Untersuchungen über das logische Schließen." Math. Zeitschrift 39 (1934): 405-431. Hacking, I., "What is Logic?" Journal of Philosophy 76 (1979): 285-319. Hodes, H.T., "On The Sense and Reference of A Logical Constant." 
Philosophical Quarterly 54 (2004): 134-165. Humberstone, L., The Connectives. MIT Press, 2011. Peacocke, C., "Understanding Logical Constants: A Realist's Account." Studies in the Philosophy of Logic and Knowledge. Ed. T. J. Smiley and T. Baldwin. Oxford University Press, 2004. 163-209. Prior, A.N., "The Runabout Inference-Ticket." Analysis 21 (1960): 38-39. Zucker, J.I., "The Adequacy Problem for Classical Logic." Journal of Philosophical Logic 7 (1978): 517-535. |
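The inferentialist uniqueness requirement (UC) discussed in the abstract above can be displayed cleanly, together with one naive candidate for a "semanticized" version (our illustrative guess, not the author's own proposal):

```latex
\[
  \text{(UC)}\qquad
  c(A_1, \ldots, A_n) \;\dashv\vdash\; c'(A_1, \ldots, A_n)
\]
% One naive semantic analogue: constants obeying the same rules receive
% the same model-theoretic value in every model M:
\[
  \text{(UC}^{\mathrm{sem}}\text{)}\qquad
  \llbracket c \rrbracket_M = \llbracket c' \rrbracket_M
  \quad \text{for every model } M.
\]
```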

15:45 | Classical Logic and Schizophrenia: For a Neutral Game Semantics PRESENTER: Juan Redmond ABSTRACT. In this paper we outline a proposal to develop a logic of fictions in the game-theoretical approach of dialogical pragmatism. Starting from one of the main criticisms levelled at classical logic, the structural schizophrenia of its semantics (Lambert, 2004: 142-143; 160), we review the ontological commitments of the two main traditions of logic (Aristotle and Frege) to highlight their limits concerning the analysis of fictional discourse, and how a pragmatic game perspective overcomes them. In the specialized literature, we can often find objections against the presumed explanatory power of logic and formal languages in relation to fictional discourse. Generally, the target of such criticisms is the role that the notion of reference plays in the logical analysis of fiction; in our view, a more pragmatic account would be preferable. We respond to this objection by affirming that, if we elaborate an adequate context of analysis, a properly pragmatic treatment of fiction in logic is possible without the restrictions imposed by the notion of reference. Dialogical logic, which treats arguments as an interactive chain of questions and answers, offers an ideal context of analysis for such a pragmatic approach. In this sense we believe in the richness of the dialogical perspective, which captures existence through the interactional concept of choice and whose semantics, based on the concept of use, can be called pragmatic semantics. References: Aristóteles (1982), “Categorías”, Tratados de lógica (Órganon) I. Madrid: Gredos. Aristóteles (1995), “Sobre la interpretación”, Tratados de lógica (Órganon) II. Madrid: Gredos. Frege, Gottlob (1884), Die Grundlagen der Arithmetik. Eine logisch mathematische Untersuchung über den Begriff der Zahl. Breslau: Verlag von Wilhelm Koebner. Frege, Gottlob (1948), “Sense and Reference”, The Philosophical Review, Vol. 
57, No. 3 (May, 1948); pp. 209-230. Frege, Gottlob (1983), Nachgelassene Schriften, edited by H. Hermes, F. Kambartel and F. Kaulbach. Hamburg: Felix Meiner Verlag. Frege, Gottlob (1879), Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, Halle a. S.: Louis Nebert. Lambert, Karel (1960), “The Definition of E! in Free Logic”, Abstracts: The International Congress for Logic, Methodology and Philosophy of Science. Stanford: Stanford University Press. Lambert, Karel (2004), Free Logic. Selected Essays. Cambridge: Cambridge University Press. Lambert, Karel (1997), Free Logics: Their Foundations, Character, and Some Applications Thereof. ProPhil: Projekte zur Philosophie, Bd. 1. Sankt Augustin, Germany: Academia. Priest, Graham (2005), Towards Non-Being. The Logic and Metaphysics of Intentionality. Oxford: Clarendon Press. Rahman, S. (2001), “On Frege’s Nightmare. A Combination of Intuitionistic, Free and Paraconsistent Logics”, in H. Wansing, ed., Essays on Non-Classical Logic, River Edge, New Jersey: World Scientific; pp. 61-85. Rahman, S. & Fontaine, M. (2010), “Fiction, Creation and Fictionality: An Overview”, Revue Methodos (CNRS, UMR 8163, STL). Forthcoming. Rahman, S. & Keiff, L. (2004), “On how to be a dialogician”, in D. Vanderveken, ed., Logic, Thought and Action, Dordrecht: Springer; pp. 359-408. Reicher, M. (2014), “Nonexistent Objects”, Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/nonexistent-objects/. Accessed: December 2016. |

16:45 | Ideals, idealization, and a hybrid concept of entailment relation ABSTRACT. The inescapable necessity of higher-type ideal objects, which more often than not are “brought into being” by one of the infamously elegant combinations of classical logic and maximality (granted by principles such as the ones going back to Kuratowski and Zorn), is, it may justly be argued, a self-fulfilling prophecy. Present-day classical mathematics thus finds itself at times clouded by strong ontological commitments. But what is at stake here is pretense, and techniques as multifarious as the ideal objects they are meant to eliminate have long borne witness to the fact that unveiling computational content is all but a futile endeavor. Abstract entailment relations have come to play an important role, most notably the ones introduced by Scott [6], which subsequently have been brought into action in commutative algebra and lattice theory by Cederquist and Coquand [3]. The utter versatility of entailment relations notwithstanding, some potential applications, e.g., with regard to injectivity criteria like Baer’s, seem to call for yet another concept that allows for arbitrary sets of succedents (rather than the usual finite ones), but maintains the conventional concept’s simplicity. In this talk, we discuss a possible development according to which an entailment relation is understood (within Aczel’s constructive set theory) as a class relation between finite and arbitrary subsets of the underlying set, the governing rules for which, e.g., transitivity, are suitably adjusted. At the heart of our approach we find van den Berg’s finitary non-deterministic inductive definitions [2], on top of which we consider inference steps so as to give an account of the inductive generation procedure and cut elimination [5]. 
Carrying over the strategy of Coquand and Zhang [4] to our setting, we associate set-generated frames [1] to inductively generated entailment relations, and relate completeness of the latter with the former’s having enough points. Once the foundational issues have been cleared, it remains to give evidence why all this might be a road worth taking in the first place, and we will do so by sketching several case studies, thereby revisiting the “extension-as-conservation” maxim, which in the past successfully guided the quest for constructivization in order theory, point-free topology, and algebra. The intended practical purpose will at least be twofold: infinitary entailment relations might complement the approach taken in dynamical algebra, and, sharing aims, may ultimately contribute to the revised Hilbert programme in abstract algebra. [1] P. Aczel. Aspects of general topology in constructive set theory. Ann. Pure Appl. Logic, 137(1–3):3–29, 2006. [2] B. van den Berg. Non-deterministic inductive definitions. Arch. Math. Logic, 52(1–2):113–135, 2013. [3] J. Cederquist and T. Coquand. Entailment relations and distributive lattices. In S. R. Buss, P. Hájek, and P. Pudlák, editors, Logic Colloquium ’98. Proceedings of the Annual European Summer Meeting of the Association for Symbolic Logic, Prague, Czech Republic, August 9–15, 1998, volume 13 of Lect. Notes Logic, pp. 127–139. A. K. Peters, Natick, MA, 2000. [4] T. Coquand and G.-Q. Zhang. Sequents, frames, and completeness. In P. G. Clote and H. Schwichtenberg, editors, Computer Science Logic (Fischbachau, 2000), volume 1862 of Lecture Notes in Comput. Sci., pp. 277–291. Springer, Berlin, 2000. [5] D. Rinaldi and D. Wessel. Cut elimination for entailment relations. Arch. Math. Logic, in press. [6] D. Scott. Completeness and axiomatizability in many-valued logic. In L. Henkin, J. Addison, C.C. Chang, W. Craig, D. Scott, and R. Vaught, editors, Proceedings of the Tarski Symposium (Proc. Sympos. Pure Math., Vol. 
XXV, Univ. California, Berkeley, Calif., 1971), pp. 411–435. Amer. Math. Soc., Providence, RI, 1974. |
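The hybrid concept discussed in the abstract above can be indicated schematically (a sketch under our reading of the abstract, not the authors' official definitions): a Scott-style entailment relation relates finite subsets on both sides, whereas the proposed variant keeps finite antecedents but allows arbitrary succedents.

```latex
% Scott-style (finitary): A \vdash B with A, B finite subsets of S.
% Hybrid variant: finite antecedents, arbitrary succedents:
\[
  A \,\vartriangleright\, B \qquad
  (A \in \mathrm{Fin}(S),\; B \subseteq S),
\]
% with rules such as transitivity adjusted accordingly, e.g.
\[
  \frac{A \vartriangleright B \qquad
        A, b \vartriangleright C \;\;\text{for all } b \in B}
       {A \vartriangleright C}.
\]
```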

17:15 | The Jacobson Radical and Glivenko's Theorem PRESENTER: Peter Schuster ABSTRACT. Alongside the analogy between maximal ideals and complete theories, the Jacobson radical carries over from ideals of commutative rings to theories of propositional calculi. This prompts a variant of Lindenbaum's Lemma that relates classical validity and intuitionistic provability, and the syntactical counterpart of which is Glivenko's Theorem. Apart from perhaps shedding some more light on intermediate logics, this eventually prompts a non-trivial interpretation in logic of Rinaldi, Schuster and Wessel's conservation criterion for Scott-style entailment relations (BSL 2017 & Indag. Math. 2018). |

16:45 | Disturbing Truth ABSTRACT. The title is deliberately ambiguous. I wish to disturb what I take to be common conceptions about the role of truth held by philosophers of science and taxonomically inclined scientists. ‘Common’ is meant in the statistical sense: the majority of institutionally recognised philosophers of science and scientists. The common view is that the various pronouncements of science are usually true, and a few are false. We eventually discover falsehoods and expel them from science. The purpose of scientific research is to discover new truths. The success of science consists in discovering truths, because it is through them that we are able to make true predictions and, through technology, control and manipulate nature. Fundamental science consists in finding fundamental truths in the form of laws. These are important because they allow us to derive and calculate further truths. On this view, there are obvious problems with the use of defective information in science. We can address these problems in a piecemeal way, and the fact that defective information does not prevent scientists from carrying out their scientific enquiries suggests the piecemeal approach. I shall present a highly idiosyncratic view which bypasses the problems of defective knowledge. The view is informed by what we might call “uncommon” scientists. According to the uncommon view, all "information" in science (formulas or purported truths of a theory) is false or defective. The purpose of such scientists' research is to understand the phenomena of science. Provisionally, we can think of ‘understanding’ as truth at a higher level than the level of propositions of a theory. The success of science is measured by opening new questions, and finding new answers, sometimes in the form of deliberately incorrect or counter-scientific information. 
Fundamental science does not stop with laws, but digs deeper into the logic, mathematics, metaphysics, epistemology or language of the science. |

17:15 | Quasi-truth and defective situations in science PRESENTER: Jonas Becker Arenhart ABSTRACT. Quasi-truth is a mathematical approach to the concept of truth from a pragmatic perspective. It is said that quasi-truth is a notion of truth that is more suitable for current science as actually practiced; intuitively, it attempts to provide a mathematically rigorous approach to a pragmatic aspect of truth in science, where there is not always complete information about a domain of research, and where it is not always clear that we operate with consistent information (see da Costa and French 2003 for the classical defense of those claims). In a nutshell, the formalism is similar to the well-known model-theoretic notion of truth, where truth is defined for elementary languages in set-theoretic structures. In the case of quasi-truth, the notion is defined in partial structures, that is, set-theoretic structures whose relations are partial relations. Partial relations, in turn, are relations that are not necessarily defined for every n-tuple of objects of the domain of investigation. Sentences are quasi-true iff there is an extension of those partial relations to complete relations so that we find ourselves dealing again with typical Tarskian structures (and also with Tarskian truth; see Coniglio and Silvestrini (2014) for an alternative approach that dispenses with extensions, but not with partial relations). In this talk, we shall first present some criticisms of the philosophical expectations that were put on quasi-truth and argue that the notion does not deal as expected with defective situations in science: it fails to accommodate both incomplete and inconsistent information. Indeed, it mixes the two kinds of situations in a single approach, so that one ends up not distinguishing properly between cases where there is lack of information and cases where there is conflicting information. 
Secondly, we advance a more pragmatic interpretation of the formalism that fits it better. In our interpretation, however, there are no longer contradictions and no need to hold that the information is incomplete. Quasi-truth becomes much less revolutionary, but much more pragmatic. References Coniglio, M., and Silvestrini, L. H. (2014) An alternative approach for quasi-truth. Logic Journal of the IGPL 22(2), pp. 387-410. da Costa, N. C. A. and French, S. (2003) Science and Partial Truth: A Unitary Approach to Models and Scientific Reasoning. Oxford: Oxford University Press. |
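To make the partial-structures idea in the abstract above concrete, here is a brute-force toy sketch (our own illustration, not the authors' formalism): a binary partial relation is given by a set of tuples known to hold and a set known to fail, and a sentence counts as quasi-true when some total extension of the relation verifies it.

```python
from itertools import combinations, product

def quasi_true(domain, pos, neg, sentence):
    """Brute-force quasi-truth for one binary partial relation: the sentence
    is quasi-true iff SOME total extension R of the partial relation
    (pos subset of R, R disjoint from neg) makes sentence(R) true."""
    all_tuples = set(product(domain, repeat=2))
    undecided = sorted(all_tuples - pos - neg)
    # Try every way of completing the undecided tuples.
    for k in range(len(undecided) + 1):
        for extra in combinations(undecided, k):
            if sentence(pos | set(extra)):
                return True
    return False

# Toy partial structure: domain {0, 1}, partial relation R.
domain = [0, 1]
pos = {(0, 1)}   # tuples known to be in R
neg = {(1, 1)}   # tuples known to be outside R
# "Some x bears R to 0" is undecided, but some extension verifies it:
print(quasi_true(domain, pos, neg, lambda R: any((x, 0) in R for x in domain)))  # True
# "(1, 1) in R" is blocked by the negative part, so no extension helps:
print(quasi_true(domain, pos, neg, lambda R: (1, 1) in R))  # False
```

This exponential search is of course only feasible on tiny domains; it is meant to exhibit the definition, not to be an efficient decision procedure.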

17:45 | Making Sense of Defective Information: Partiality and Big Data in Astrophysics PRESENTER: Maria Del Rosario Martinez Ordaz ABSTRACT. While the presence of defective (inconsistent, conflicting, partial, ambiguous and vague) information in science tends to be naturally seen as part of the dynamics of scientific development, it is a fact that the greater the amount of defective information that scientists have to deal with, the less justified they are in trusting such information. Nowadays scientific practice tends to use datasets whose size is beyond the ability of typical database software tools to capture, analyze, store, and manage (Manyika et al., 2011). Although much current scientific practice makes use of big data and scientists have struggled to explain precisely how big data and machine learning algorithms actually work, they still rationally trust some significant chunks of information that these datasets contain. The main question we address is: in the era of big data, how can we make sense of the continued trust placed by scientists in defective information consistently with ascribing rationality to them? In order to respond to this question, we focus on the particular case of astrophysics as an exemplar of the use of defective information. In astrophysics, information of different types (such as images, redshifts, time series data, and simulation data, among others) is received in real time in order to be captured, cleaned, transferred, stored and analyzed (Garofalo et al. 2016). The variety of the sources and the formats in which such information is received causes the problem of how to compute it efficiently, as well as the problem of high-dimensional data visualization, that is, how to integrate data that have hundreds of different relevant features. Since such datasets tend to increase in volume, velocity and variety (Garofalo et al. 2016), it becomes even harder to achieve any deep and exhaustive understanding of what they contain. 
However, this has not prevented astronomers from trusting important chunks of the information contained in such datasets. We defend that such trust is not irrational. First, we argue that, as astrophysics is an empirical science, empirical adequacy of astronomical chunks of information plays an important role in their rational acceptance. Second, we contend that, despite their defectiveness, the chunks of information that astronomers trust are empirically adequate. In order to defend this, we appeal to a particular formulation of empirical adequacy (first introduced in Bueno, 1997) that relies on resources of the partial structures framework to accommodate inconsistent, partial, ambiguous and vague information in the current scientific practice of astrophysics. References Bueno, O. (1997): “Empirical Adequacy: A Partial Structures Approach”, Studies in History and Philosophy of Science 28, pp. 585-610. Garofalo, M., A. Botta and G. Ventre (2016): “Astrophysics and Big Data: Challenges, Methods, and Tools”, Astroinformatics (AstroInfo16), Proceedings IAU Symposium No. 325, pp. 1-4. Manyika, J., Chui, M., Brown, B., et al. (2011): Big Data: The Next Frontier for Innovation, Competition, and Productivity. McKinsey Global Institute. |

16:45 | On the Significance of Argumentation in Discovery Proof-Events PRESENTER: Ioannis Vandoulakis ABSTRACT. Many researchers claim that the role of argumentation is central in mathematics. Mathematicians do much more than simply prove theorems; most of their proving activity might be understood as kinds of argumentation. Lakatos’s Proofs and Refutations is an enduring classic that highlights the role of dialogue between agents (a teacher and some students) through attempts at proofs and critiques of these attempts. The comparison between the argumentation supporting an assumption or a purported proof and its proof rests on the view that proof can be regarded as a specific kind of argumentation in mathematics. Thus, argumentation theory can be used to explore certain aspects of the development of discovery proof-events in time. The concept of proof-event was introduced by Joseph Goguen [2001], who understood mathematical proof not as a purely syntactic object, but as a social event that takes place at a specific place and time and involves agents or communities of agents. Proof-events are sufficiently general concepts that can be used to study, besides “traditional” formal proofs, other proving activities, such as incomplete proofs, purported proofs or attempts to verify a conjecture. Since argumentation is inseparable from the process of searching for a mathematical proof, we suggest a modified model of the proof-events calculus [Stefaneas and Vandoulakis 2015] that was used to represent discovery proof-events and their sequences, based on the versions of argumentation theories advanced by Pollock [1992], Toulmin [1993] and Kakas and Loizos [2016]. We claim that the exchange of arguments and counterarguments aimed at clarifying possible gaps or implicit assumptions that occur during a proof can be formally represented within this integrated framework. 
We illustrate our approach on the historical case of the sequence of proof-events leading to the proof of Fermat’s Last Theorem. References Goguen, Joseph, (2001), “What is a proof”, http://cseweb.ucsd.edu/~goguen/papers/proof.html. Kakas Antonis, Loizos Michael, (2016), “Cognitive Systems: Argument and Cognition”. IEEE Intelligent Informatics Bulletin, 17(1): 15-16. Pollock J. L. (1992), “How to reason defeasibly”. Artif. Intell., 57(1):1–42. Stefaneas, P. & Vandoulakis, I. (2015), “On Mathematical Proving”. Computational Creativity, Concept Invention, and General Intelligence Issue. Journal of General AI, 6(1): 130-149. Toulmin S. E. (1993). The use of arguments. Cambridge, Cambridge University Press. |

17:15 | PRESENTER: Peter Vojtas ABSTRACT. We introduce a general (epistemic) reasoning method based on problem reduction, show its use, and discuss its justifiability in several disciplines. To introduce our concept we rephrase and extend the question-answer approach of A. Blass [B]. We consider a problem (problem domain) P = (I, S, A) consisting of a set of problem instances I, a set of potential solutions S and an acceptability relation A, where A(i,s) means that a particular solution s in S is acceptable for a problem instance i in I. Problem reduction occurs, e.g., when a problem solution is delegated to an expert, or a client computation asks a server to do a part of a job, or an agent with limited resources asks an agent with richer knowledge. Assume we have two problem domains P1 = (I1, S1, A1) (sometimes called the target domain) and P2 = (I2, S2, A2) (sometimes called the source domain). We consider a typical scenario: assume we are not able (or it is very complex or very expensive) to solve problem instances from I1. Moreover, assume there is a problem domain P2 where we have methods to find acceptable solutions (efficiently, more cheaply). If we can efficiently reduce problem instances from I1 to problem instances of I2 in such a way that acceptable solutions in S2 can be transformed to acceptable solutions of the original problem instance, we are done. There is a wide space for what acceptability can mean: it can be, e.g., correct, reasonable, reliable, etc. Problem reduction (PR). A reduction of a problem P1 to a problem P2 consists of a pair of mappings (r, t): r maps problem instances i1 in I1 of P1 to problem instances r(i1) in I2 of P2, and t maps solutions s2 in S2 to solutions t(s2) in S1, in such a way that an acceptable (in the sense of relation A2) solution s2 to instance r(i1) is transformed to a solution t(s2) which is A1-acceptable for the original problem instance i1. Formally we require: for all i1 in I1 and s2 in S2, A2(r(i1), s2) implies A1(i1, t(s2)) holds true. 
(PRi) Motivated by [Hr] we combine decision and search problems, and assume every set of solutions contains an extra element “nas” = “no_acceptable_solution”; we require the above implication to be valid also for s2 = nas2 and t(nas2) = nas1. This helps us to avoid empty fulfillment of the (PRi) implication and to preserve the category-theoretical character of problem reductions. Our approach generalizes analogical reasoning [A], in that we show that, besides similarity, it works also under some quite complementary situations. Following [He] we can read the following question and answer: SPIEGEL: And what now takes the place of Philosophy? Heidegger: Cybernetics. We will show that our general (epistemic) reasoning method based on problem reduction can be used to understand cybernetics as the modeling of dynamic processes with feedback. This can shed light on Heidegger's answer. Another application comes from modeling and abstraction in the System Sciences. Inspired by Peter Senge [S], we propose correlating the depth of organizational analysis with the different types of possible actions (solutions). First reducing problem instances from event analysis to patterns-of-behavior analysis, we finally reach systemic structure analysis (on the problem instance side). Finding an acceptable generative action and transforming back along the solution side to a responsive action and finally to a reactive action closes the use of our back-and-forth reduction and translation. We also consider its use in the management-by-objectives model in organizational envisioning and narration. These uses were justified empirically by a certain degree of acceptance of the solutions. Our reasoning also works under uncertainty. Problem reduction itself, as a reasoning method, can be quite challenging (similarly to finding mathematical proofs). Nevertheless we believe that the advantages of finding P2, r, t and proving the implication (PRi) for solving P1 are worth these difficulties. 
[A] Paul Bartha, "Analogy and Analogical Reasoning", The Stanford Encyclopedia of Philosophy (winter 2016 Edition), Edward Zalta (ed.) [B] Andreas Blass. Questions and Answers - A Category Arising in Linear Logic, Complexity Theory, and Set Theory. Advances in Linear Logic, eds. J.-Y. Girard etal. London Math. Soc. Lecture Notes 222(1995)61-81 [He] Martin Heidegger - The 1966 interview published in 1976 after Heidegger's death as "Only a God Can Save Us". Translated by William J. Richardson. Der Spiegel. 1976-05-31. pp. 193–219. [Hr] Juraj Hromkovic. Why the Concept of Computational Complexity is Hard for Verifiable Mathematics. Electronic Colloquium on Computational Complexity, TR15-159 [S] Peter Michael Senge. The Fifth Discipline. Doubleday, New York 1990 |
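The (PR) scheme in the abstract above lends itself to a small executable sketch (our illustration; the toy domains, the squaring/multiplication example and all names are invented): on finite domains one can exhaustively verify the (PRi) implication.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Problem:
    """A problem domain P = (I, S, A) in the sense of the abstract."""
    instances: List
    solutions: List
    acceptable: Callable  # acceptable(i, s) -> bool

def is_reduction(p1, p2, r, t):
    """Check (PRi) exhaustively on finite domains:
    for all i1, s2: A2(r(i1), s2) implies A1(i1, t(s2))."""
    return all(
        not p2.acceptable(r(i1), s2) or p1.acceptable(i1, t(s2))
        for i1 in p1.instances
        for s2 in p2.solutions
    )

# Toy case: P1 = "given n, find n squared" reduced to
# P2 = "given (a, b), find a*b" via r(n) = (n, n) and t = identity.
p1 = Problem(instances=[0, 1, 2, 3],
             solutions=[0, 1, 4, 9, 6],
             acceptable=lambda n, s: s == n * n)
p2 = Problem(instances=[(n, n) for n in range(4)],
             solutions=[0, 1, 4, 9, 6],
             acceptable=lambda ab, s: s == ab[0] * ab[1])
r = lambda n: (n, n)
t = lambda s: s
print(is_reduction(p1, p2, r, t))  # True
```

The extra "nas" element and the uncertainty-tolerant readings of acceptability mentioned in the abstract are deliberately omitted here to keep the sketch minimal.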


16:45 | Learning Subjunctive Conditional Information ABSTRACT. If Oswald had not killed Kennedy, someone else would have. What do we learn when we learn such a subjunctive conditional? To evaluate the conditional, Stalnaker (1968) proposes, we move to the most similar possible world from our actual beliefs where Oswald did not kill Kennedy, and check whether someone else killed Kennedy. Based on Stalnaker's semantics, Günther (2018) has developed a method for how a rational agent learns indicative conditional information. Roughly, an agent learns 'if A then B' by imaging on the proposition expressed by the corresponding Stalnaker conditional. He goes on to generalize Lewis's (1976) updating method called imaging to Jeffrey imaging. This makes the method applicable to the learning of uncertain conditional information. For example, the method generates the seemingly correct predictions for Van Fraassen's (1981) Judy Benjamin Problem. To the best of our knowledge, there is no theory for the learning of subjunctive conditional information. Psychologists of reasoning and philosophers alike have almost exclusively tackled the learning of indicative conditionals. (See, for example, Evans and Over (2004), Oaksford and Chater (2007) and Douven (2015).) Here, we aim to extend Günther's method to cover the learning of information as encoded by subjunctive conditionals. At first sight, Günther's method of learning indicative conditional information seems to be applicable to the learning of subjunctive conditionals. From ''If Oswald had not killed Kennedy, someone else would have'' you learn that the most similar world in which Oswald did not kill Kennedy is a world in which someone else did. However, it is widely agreed that the meaning of this subjunctive is different from that of its corresponding indicative conditional ''If Oswald did not kill Kennedy, someone else did''. You can reject the former while you accept the latter. 
The pair of Oswald-Kennedy conditionals suggests that we might learn different propositions. More specifically, the propositional content of a subjunctive might differ from that of its corresponding indicative conditional. To account for the difference, we aim to amend Stalnaker's possible worlds semantics for conditionals. The idea is that the mood of the conditional may influence which world is judged to be the most similar antecedent world. In the case of indicative conditionals, the world we move to is just the most similar antecedent world to the actual world. In the case of subjunctive conditionals, the world we move to is the most similar antecedent world to the actual world as it was immediately before the time the antecedent refers to. The evaluation of subjunctives thus involves mental time travel, while the evaluation of indicatives does not. (The idea to fix the past up until the time to which the antecedent refers can be traced back to Lewis (1973). We will address the accounts of antecedent reference due to Bennett (1974), Davis (1979) and, more recently, Khoo (2017).) As a consequence, when you move to the most similar antecedent world in the subjunctive case, you are not restricted by the facts of the actual world between the reference time of the antecedent and the now. We show how this simple amendment to Stalnaker's semantics allows us to extend Günther's method to the learning of subjunctive conditionals. Bennett, J. (1974). Counterfactuals and Possible Worlds. Canadian Journal of Philosophy 4(December): 381–402. Davis, W. A. (1979). Indicative and Subjunctive Conditionals. Philosophical Review 88(4): 544–564. Douven, I. (2015). The Epistemology of Indicative Conditionals: Formal and Empirical Approaches. Cambridge University Press. Evans, J. S. B. T. and Over, D. E. (2004). If. Oxford: Oxford University Press. Günther, M. (2018). Learning Conditional Information by Jeffrey Imaging on Stalnaker Conditionals. 
Journal of Philosophical Logic 47(5): 851–876. Khoo, J. (2017). Backtracking Counterfactuals Revisited. Mind 126(503): 841– 910. Lewis, D. (1973). Causation. Journal of Philosophy 70(17): 556–567. Lewis, D. (1976). Probabilities of Conditionals and Conditional Probabilities. The Philosophical Review 85(3): 297–315. Oaksford, M. and Chater, N. (2007). Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford Cognitive Science Series, Oxford, New York: Oxford University Press. Stalnaker, R. (1968). A Theory of Conditionals. In: Studies in Logical Theory (American Philosophical Quarterly Monograph Series), edited by N. Rescher, no. 2, Oxford: Blackwell. pp. 98–112. Van Fraassen, B. (1981). A Problem for Relative Information Minimizers in Probability Kinematics. The British Journal for the Philosophy of Science 32(4): 375–379. |
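The imaging update this abstract builds on (Lewis 1976) has a simple operational core: each world transfers its probability mass to its most similar antecedent-world. The toy sketch below illustrates that core on the Oswald-Kennedy example; the world representation, the Hamming similarity measure, and all names are my own illustrative assumptions, not Günther's formalism (and it implements plain imaging, not the Jeffrey generalization).

```python
from itertools import product

def imaging(prob, antecedent, closest):
    """Imaging: each world transfers its probability mass to its most
    similar antecedent-world (to itself, if it already satisfies A)."""
    new_prob = {w: 0.0 for w in prob}
    a_worlds = [w for w in prob if antecedent(w)]
    for w, p in prob.items():
        target = w if antecedent(w) else closest(w, a_worlds)
        new_prob[target] += p
    return new_prob

# Worlds as pairs of atomic facts: (oswald_killed, someone_else_killed).
worlds = list(product([True, False], repeat=2))
prob = {w: 0.25 for w in worlds}                     # uniform prior

antecedent = lambda w: not w[0]                      # "Oswald did not kill Kennedy"
hamming = lambda u, v: sum(a != b for a, b in zip(u, v))
closest = lambda w, cands: min(cands, key=lambda v: hamming(w, v))

updated = imaging(prob, antecedent, closest)
```

Starting from the uniform prior, all probability mass ends up on the two antecedent-worlds, and the mass of each non-antecedent world lands on its nearest antecedent-world.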

17:15 | On the complexity of formulas in semantic programming PRESENTER: Sergey Ospichev ABSTRACT. We study the algorithmic complexity of hereditarily finite list extensions of structures \cite{bib0}. The generalized computability theory based on $\Sigma$-definability, developed by Yuri Ershov \cite{Ershov1996} and Jon Barwise \cite{Barwise1975}, considers hereditarily finite extensions consisting of hereditarily finite sets. In the papers by Yuri Ershov, Sergei Goncharov, and Dmitry Sviridenko \cite{bib1,bib4}, a theory of hereditarily finite extensions has been developed which rests on the concept of Semantic Programming. In the paradigm of Semantic Programming, a program is specified by a $\Sigma$-formula in a suitable superstructure of finite lists. Two different types of implementation of logic programs on the basis of $\Sigma$-definability have been considered \cite{bib7}. The first is based on deciding the truth of the $\Sigma$-formulas corresponding to the program in the constructed model. The second is based on an axiomatic definition of the theory of the list superstructure. Both approaches raise the natural question of how fast one can compute a program represented by $\Sigma$-formulas. In the recent papers \cite{bib7,bib8}, Sergey Goncharov and Dmitry Sviridenko constructed a conservative enrichment of the language of bounded quantifiers with conditional and recursive terms, and put forward the hypothesis that, if the base model $\mathcal{M}$ is polynomially computable, then deciding the truth of a given $\Delta_0$-formula of this enriched language in a hereditarily finite list extension of $\mathcal{M}$ has polynomial complexity. Here we confirm this hypothesis and consider the complexity of this problem for a number of natural restrictions on $\Delta_0$-formulas. \begin{thebibliography}{99} \bibitem{bib0} \textit{Ospichev S. and Ponomarev D.}, On the complexity of formulas in semantic programming, Sib. Electr. Math. 
Reports, vol. 15, 987--995 (2018) \bibitem{Ershov1996} \textit{Ershov Yu. L.}, Definability and computability. Consultants Bureau, New York (1996) \bibitem{Barwise1975} \textit{Barwise, J.}, Admissible sets and structures. Springer, Berlin (1975) \bibitem{bib1} \textit{Goncharov S. S. and Sviridenko D. I.}, $\Sigma$-programming, Transl. II. Ser., Amer. Math. Soc., no. 142, 101--121 (1989). \bibitem{bib4} \textit{Ershov Yu. L., Goncharov S. S., and Sviridenko D. I.}, Semantic Programming, in: Information processing 86: Proc. IFIP 10th World Comput. Congress. Vol. 10, Elsevier Sci., Dublin, 1093--1100 (1986). \bibitem{bib7} \textit{Goncharov S. S.}, Conditional Terms in Semantic Programming, Siberian Mathematical Journal, vol. 58, no. 5, 794--800 (2017). \bibitem{bib8} \textit{Goncharov S. S. and Sviridenko D. I.}, Recursive Terms in Semantic Programming, Siberian Mathematical Journal, vol. 59, no. 6, 1279--1290 (2018). \end{thebibliography} |
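The complexity question turns on bounded quantification: every quantifier in a $\Delta_0$-formula ranges only over the elements of an explicitly given list, so truth can be decided by direct recursion over the structure. The toy evaluator below illustrates this point; the formula encoding (nested tuples) is my own, not the notation of the cited papers.

```python
# Toy evaluator for bounded-quantifier formulas over finite lists.
# Quantifiers range only over elements of a named list in the
# environment, so evaluation is a straightforward recursion.

def evaluate(phi, env):
    op = phi[0]
    if op == 'eq':                       # ('eq', x, y): variable equality
        return env[phi[1]] == env[phi[2]]
    if op == 'not':
        return not evaluate(phi[1], env)
    if op == 'and':
        return evaluate(phi[1], env) and evaluate(phi[2], env)
    if op == 'exists':                   # ('exists', x, L, body): x bounded by list L
        _, x, lst, body = phi
        return any(evaluate(body, {**env, x: e}) for e in env[lst])
    if op == 'forall':                   # ('forall', x, L, body): x bounded by list L
        _, x, lst, body = phi
        return all(evaluate(body, {**env, x: e}) for e in env[lst])
    raise ValueError(f'unknown connective: {op}')

env = {'L': [1, 2, 3], 'M': [3, 1, 2]}
# "every element of L occurs in M"
phi = ('forall', 'x', 'L', ('exists', 'y', 'M', ('eq', 'x', 'y')))
result = evaluate(phi, env)
```

Because each quantifier contributes only a linear factor over the bounding list, the evaluator runs in time polynomial in the size of the lists for a fixed formula.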

17:45 | Hyperintensions as abstract procedures ABSTRACT. Hyperintensions are defined in my background theory, Transparent Intensional Logic (TIL), as abstract procedures that are encoded by natural-language terms. For a rigorous definition of a hyperintensional context it is crucial to distinguish two basic modes in which a given procedure can occur, namely displayed and executed. When a procedure C occurs displayed, C itself figures as an object on which other procedures operate; when C occurs executed, the product of C figures as an object to operate on. Procedures are structured wholes consisting of unambiguously determined parts, which are those sub-procedures that occur in execution mode. They are not mere set-theoretic aggregates of their parts, because the constituents of a molecular procedure interact in the process of producing an object. Furthermore, this part-whole relation is a partial order. On the other hand, the mereology of abstract procedures is non-classical, because the principles of extensionality and idempotence do not hold. A hyperintensional context is the context of a displayed procedure, and it is easy to block various invalid inferences, because different procedures can produce one and the same function-in-extension, be it a mathematical mapping or a PWS-intension. But blocking invalid inferences in hyperintensional contexts is just the starting point. There is the other side of the coin, which is the positive topic of which inferences should be validated and how these valid inferences should be proved. The problem is this. A displayed procedure is a closed object that is not amenable to logical operations. To solve this technical difficulty, we have developed a substitution method that makes it possible to operate on displayed procedures inside a hyperintensional context. Having defined the substitution method, we are in a position to specify beta-conversion 'by value'. 
I am going to prove that, unlike beta-conversion by name, conversion by value is validly applicable, so that the redex and the contractum are logically equivalent. However, though TIL is a typed lambda-calculus, the Church-Rosser theorem is not valid for beta-reduction by name, due to hyperintensional contexts. Hence, I am going to specify a fragment of TIL for which the theorem is valid. On the other hand, the Church-Rosser theorem is valid in TIL for beta-reduction by value. Yet, in this case the problem is merely postponed to the evaluation phase and concerns the choice of a proper evaluation strategy. To this end we have implemented an algorithm of context recognition that makes it possible to evaluate beta-conversion by value properly, so that the Church-Rosser theorem is valid in all kinds of contexts. There are still other open issues concerning the metatheory of TIL; one of them is the problem of completeness. Being a hyperintensional lambda-calculus based on the ramified theory of types, TIL is, of course, an incomplete system in Gödel's sense. Yet, I am going to specify a fragment of TIL that is complete in a Henkin-like way. |
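The by-name/by-value contrast the abstract appeals to is familiar from ordinary lambda-calculus, and a small sketch may help fix intuitions. Note the heavy caveat: this is a plain untyped lambda-calculus toy, not TIL itself, and it ignores variable capture (the examples use distinct variable names throughout); all encodings are my own.

```python
# Plain lambda-calculus sketch of beta-conversion by name vs by value.
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a).
# No alpha-renaming: examples must use distinct variable names.

def subst(term, x, s):
    """Substitute term s for free variable x (capture ignored)."""
    tag = term[0]
    if tag == 'var':
        return s if term[1] == x else term
    if tag == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, s))
    return ('app', subst(term[1], x, s), subst(term[2], x, s))

def reduce(term):
    """Normal-order reduction: beta by name substitutes the raw argument term."""
    if term[0] == 'app':
        f = reduce(term[1])
        if f[0] == 'lam':
            return reduce(subst(f[2], f[1], term[2]))
        return ('app', f, reduce(term[2]))
    if term[0] == 'lam':
        return ('lam', term[1], reduce(term[2]))
    return term

def beta_by_value(f, a):
    """Beta by value: evaluate the argument first, substitute its result."""
    assert f[0] == 'lam'
    return reduce(subst(f[2], f[1], reduce(a)))

ident = ('lam', 'x', ('var', 'x'))
arg = ('app', ('lam', 'y', ('var', 'y')), ('var', 'z'))   # (λy.y) z
```

On this simple example both strategies agree and yield the variable z; the abstract's point is that in a hyperintensional setting, where the argument procedure itself can be displayed rather than executed, the two strategies come apart.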

16:45 | DSTIT modalities through labelled sequent calculus PRESENTER: Edi Pavlovic ABSTRACT. Dstit (deliberately seeing to it that) is an agentive modality usually semantically defined on a tree-like structure of successive moments. Any maximal sequence of moments forms a history, with individual moments parts of different histories, but all histories sharing some moment. The tree has forward branching time (BT), corresponding to indeterminacy of the future, but no backward branching, corresponding to uniqueness of the past, and is enriched by agent choice (AC). Choice is a function mapping an agent/moment pair to a partition of all histories passing through that moment (since an agent's choice determines what history comes about only to an extent). In such (BT+AC) frames, formulas are evaluated at moments in histories. Specifically, an agent a deliberately sees to it that A at the moment m of a history h iff (i) A holds in all histories choice-equivalent to h for the agent a, but (ii) does not hold in at least one history that the moment m is a part of. In simple terms, the agent sees to it that A if their choice brings about those histories where A holds, but nonetheless it could have been otherwise (i.e. an agent cannot bring about something that would have happened anyway). Stit modalities, including Dstit, received an extensive axiomatic treatment in [1], and the proof-theoretic approaches to them have so far proceeded via labelled tableaux [5, 6]. In contrast, in this paper we investigate the Dstit modality by means of a sequent calculus. Following [2, 3], we employ the axioms-as-rules approach, and develop a G3-style labelled sequent calculus. This is shown to possess all the desired structural properties of a good proof system, including being contraction- and cut-free. Moreover, we demonstrate multiple applications of the system. 
We prove the impossibility of delegation of tasks among independent agents, the interdefinability of Dstit with an agent-relative modality, cstit, and an agent-independent modality, settled true, as well as the treatment of refraining from [1] and [4]. Finally, we demonstrate the metatheoretical properties of our system, namely soundness, completeness and decidability via a bounded proof search. References [1] N.D. Belnap, M. Perloff, M. Xu, Facing the Future: Agents and Choices in Our Indeterminist World, Oxford: Oxford University Press, 2001. [2] S. Negri, J. von Plato, Structural Proof Theory, Cambridge: Cambridge University Press, 2001. [3] S. Negri, J. von Plato, Proof Analysis, Cambridge: Cambridge University Press, 2011. [4] G.H. von Wright, Norm and Action: A Logical Enquiry, Routledge & Kegan Paul, 1963. [5] H. Wansing, Tableaux for multi-agent deliberative-stit logic, in G. Governatori, I.M. Hodkinson, Y. Venema (eds.), Advances in Modal Logic 6, pp. 503-520, 2006. [6] G.K. Olkhovikov, H. Wansing, An Axiomatic System and a Tableau Calculus for STIT Imagination Logic, Journal of Philosophical Logic, 47(2), pp. 259-279, 2018. |
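The dstit truth clause described above can be checked directly in a finite model. The sketch below is my own encoding (histories as names, a choice cell as a set), meant only to make clauses (i) and (ii) concrete; it is not the paper's sequent calculus.

```python
# Finite-model check of the dstit clause: at moment m / history h,
# [a dstit: A] holds iff (i) A holds in every history choice-equivalent
# to h for agent a, and (ii) A fails in some history through m.

def dstit(histories_at_m, choice_cell, holds_A):
    positive = choice_cell <= holds_A                  # clause (i): subset check
    negative = bool(histories_at_m - holds_A)          # clause (ii): a counter-history exists
    return positive and negative

histories = {'h1', 'h2', 'h3'}                         # histories through moment m
choice = {'h1': {'h1', 'h2'},                          # agent's choice partition at m:
          'h2': {'h1', 'h2'},                          # cell {h1, h2} and cell {h3}
          'h3': {'h3'}}
A_holds = {'h1', 'h2'}                                 # histories where A is true
```

At (m, h1) the agent's cell {h1, h2} guarantees A while h3 witnesses that A could have failed, so dstit holds; if A were settled true at m (true in all three histories), clause (ii) would fail, matching the remark that an agent cannot bring about what would have happened anyway.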

17:15 | Counterfactuals and Reasoning about Action PRESENTER: Mahfuz Rahman Ansari ABSTRACT. A rational agent creates counterfactual alternatives when reasoning about actions in terms of ''if...then'' and ''what if'', and constantly assesses how past actions could have been different. For instance, we create counterfactual alternatives that change an undesirable action into a desirable one: if she had put in her usual effort, she would have done better in the exam. The capacity to extract causal knowledge from the environment allows us to predict future events and to use those predictions to decide on a future course of action. Counterfactual conditionals are a special kind of subjunctive conditional that lets us explore alternatives in which the antecedent is typically considered to be false. Since counterfactuals are one of the best means of talking about unrealized possibilities, they figure prominently in the understanding of causation, laws, planning, and reasoning about action. The analysis of various kinds of counterfactuals and their role in common-sense reasoning, chiefly the semantics of counterfactuals, has attracted the attention of philosophers since antiquity. In this paper, we restrict ourselves to counterfactuals involving action. We note that the prominent approach to the semantics of counterfactuals due to David Lewis has been challenged on the basis of unlikely, or impossible, events. The present study attempts to link the metaphysical conception of possible-world semantics with the role of action in the evaluation of counterfactuals. We present an extended version of Lewis-Stalnaker semantics that deals with action counterfactuals. References: 1. Ginsberg, M. L. (1986). "Counterfactuals". Artificial Intelligence, 30: 35–79. 2. Lent, J., & Thomason, R. H. (2015). Action models for conditionals. Journal of Logic, Language, and Information, 24(2), 211–231. 3. Lewis, D. 
(1973) Counterfactuals, Cambridge, MA: Harvard University Press. Reissued London: Blackwell, 2001. 4. Nute, Donald. “Counterfactuals and the Similarity of Worlds.” The Journal of Philosophy, vol. 72, no. 21, 1975, pp. 773–778. 5. Pearl, J. (1996), “Causation, Action and Counterfactuals,” in Proceedings of the Sixth Conference on Theoretical Aspects of Rationality and Knowledge, ed. Y. Shoham, San Francisco, CA: Morgan Kaufmann, pp. 57–73. 6. Stalnaker, R., 1968. ‘A Theory of Conditionals’ in Studies in Logical Theory, American Philosophical Quarterly Monograph Series, 2. Oxford: Blackwell, pp. 98–112. 7. Thomason, Richmond, “Conditionals and Action Logics”, in AAAI 2007 Spring Symposium on Commonsense Reasoning, Eyal Amir, Vladimir Lifschitz and Rob Miller, eds., Menlo Park, California: AAAI Press, 156–161. 8. Zhang, J. (2013). A Lewisian logic of causal counterfactuals. Minds and Machines, 23, 77–93. |
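The Lewis-Stalnaker evaluation the abstract extends has a compact operational reading: "if A were the case, C would be" is true at w iff C holds at the most similar A-world. The sketch below uses the exam example above; the worlds, the Hamming similarity measure, and the assumed "law" linking effort to passing are my own illustrative choices, not the authors' extended semantics.

```python
# Toy closest-world evaluation of a counterfactual "A would-> C" at world w.
# Worlds are tuples of atomic facts (effort, passed); similarity is
# Hamming distance -- an illustrative stand-in for Lewis's ordering.

def would(w, worlds, A, C):
    a_worlds = [v for v in worlds if A(v)]
    if not a_worlds:
        return True                                    # vacuously true: A is impossible
    hamming = lambda u, v: sum(x != y for x, y in zip(u, v))
    closest = min(a_worlds, key=lambda v: hamming(w, v))
    return C(closest)

# Worlds respecting the assumed law "effort -> passed";
# the actual world: no effort, failed.
worlds = [(True, True), (False, False), (False, True)]
actual = (False, False)
effort = lambda v: v[0]
passed = lambda v: v[1]
```

Here `would(actual, worlds, effort, passed)` comes out true, capturing "if she had put in her usual effort, she would have done better": the only effort-world available under the law is one where she passes.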

17:45 | Logic of Scales ABSTRACT. By a "scale" we mean a set of predicates such that: 1) All predicates of the same scale are contrary to one another. 2) If g does not belong to the scale S, then there is an f in S such that f and g are compatible. 3) The predicates of a scale S are complementary: for any object a there is an element of S, say f, such that fa holds. Scales, as systems of predicates, exist. For any predicate there is a negative predicate: to the predicate f corresponds the predicate non-f, or f*. The set {f, f*} is a scale, since its elements are contrary and complementary predicates; hence conditions 1-3 are fulfilled. Let F = {f1, f2, …, fn} be a set of contrary predicates. In this case, the set S = {f1, f2, …, fn, O}, where O = (f1 v f2 v … v fn)*, is a scale. Indeed, it is easy to prove that O is contrary to every predicate fi. Moreover, if g is contrary to every predicate fi, then g is compatible with O. On the other hand, the elements of S are complementary: any object either satisfies a predicate from F or does not; in the latter case it satisfies the predicate O, so every object satisfies a predicate from S. We call the predicate O the origin of the scale S, and the predicate O*, namely the predicate F = f1 v f2 v … v fn, the genus predicate of the scale S. We can easily notice that the set containing only the predicates O and F is a scale. In this way, every scale can be represented by its origin and its genus. The genus and the origin of a scale are contradictory, namely F = O*. If {F, O} is a scale and F = (f1 v f2 v … v fn), then for every predicate fi we have (x)(fix → Fx). Therefore, there must be a set of operators xi such that fi = xiF. We will call these operators numbers. It follows that a predicate can be analyzed in terms of a genus and a number. An object satisfies, at a given moment, a single predicate from a scale. 
If t1 and t2 are two different moments and f1 and f2 are predicates of the same scale S, then we have: E = (f1, t1)a & (f2, t2)a. We introduce the notational convention E = (f1, f2)(t1, t2)a. The ordered pair (f1, f2) represents a change relative to the scale S. It follows that the elements of the Cartesian square S x S = S2 are all the possible changes definable on the elements of the scale S. On the other hand, a function h: S → S, which assigns to each element of S exactly one element of S, is a transformation within the scale S. We may now introduce new kinds of propositions, such as propositions of change and propositions of transformation, each with a specific logic. |
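The scale conditions above have a natural finite-model reading: treat predicates as sets of objects, contrariety as disjointness, and complementarity as joint coverage of the domain. The check below, including the construction of the origin O from a set of contrary predicates F, is my own encoding for illustration, not the author's formal system.

```python
from itertools import combinations

def is_scale(predicates, domain):
    """Finite-model reading of a scale: predicates pairwise contrary
    (disjoint as sets) and complementary (jointly covering the domain)."""
    pairwise_contrary = all(p.isdisjoint(q)
                            for p, q in combinations(predicates, 2))
    complementary = set().union(*predicates) == set(domain)
    return pairwise_contrary and complementary

domain = {'a', 'b', 'c', 'd'}
F = [{'a'}, {'b', 'c'}]                       # contrary predicates f1, f2
O = set(domain) - set().union(*F)             # origin: O = (f1 v f2)*
scale = F + [O]
```

As the abstract claims, F alone need not be a scale (it may fail complementarity), but adjoining the origin O always restores it: here `is_scale(F, domain)` is false while `is_scale(scale, domain)` is true.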

Place: Plzenska restaurant – Municipal House, náměstí Republiky 5, 110 00 Prague 1

The congress dinner will be served at the Plzenska restaurant, located in the basement of the Municipal House in the historical centre of Prague. The Art Nouveau Plzenska restaurant mingles traditional Czech cuisine with a unique interior from the early twentieth century (the building opened in 1912). The interior was decorated by the best Czech painters and artists of the time.

The price includes a three-course menu and accompanying wine, beer and soft drinks.